Compare commits


991 Commits

Author | SHA1 | Message | Date
Jason Ertel
33ada95bbc Merge pull request #15167 from Security-Onion-Solutions/2.4/dev
2.4.190
2025-10-24 16:01:05 -04:00
Mike Reeves
de9d3c9726 Merge pull request #15166 from Security-Onion-Solutions/2.4.190
2.4.190
2025-10-23 14:09:13 -04:00
Mike Reeves
39572f36f4 2.4.190 2025-10-23 14:07:05 -04:00
Jason Ertel
0994cd515a Merge pull request #15161 from Security-Onion-Solutions/jertel/wip
add exclusion toggle
2025-10-21 09:36:45 -04:00
Jason Ertel
bdcd1e099d add exclusion toggle 2025-10-21 09:33:41 -04:00
Jorge Reyes
c64760b5f4 Merge pull request #15153 from Security-Onion-Solutions/reyesj2-patch-1 2025-10-17 07:50:36 -05:00
Jorge Reyes
d2aa60b961 log4j2 settings 2025-10-17 07:40:44 -05:00
Jorge Reyes
83d615d236 Merge pull request #15151 from Security-Onion-Solutions/reyesj2-patch-9
update log4j2 policy for ES json output
2025-10-16 16:25:47 -05:00
reyesj2
e910de0a06 update log4j2 policy for ES json output
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-10-16 16:19:55 -05:00
Josh Patterson
26b80aba38 Merge pull request #15148 from Security-Onion-Solutions/m0duspwnens-patch-1
do not log set_timezone in setup
2025-10-15 16:58:34 -04:00
Josh Patterson
ee617eeff4 do not log set_timezone in setup
creates additional sosetup.log file
2025-10-15 16:44:24 -04:00
Josh Patterson
463766782c Merge pull request #15147 from Security-Onion-Solutions/amv
omit new hypervisor state name fp
2025-10-15 15:03:31 -04:00
Josh Patterson
d9f70898dd omit new hypervisor state name fp 2025-10-15 14:59:37 -04:00
Mike Reeves
7e15c89510 Merge pull request #15145 from Security-Onion-Solutions/cogburn/add-multiline
Should be multiline
2025-10-15 13:20:26 -04:00
Corey Ogburn
ed5bd19f0e Should be multiline 2025-10-15 09:00:27 -06:00
Josh Patterson
feba97738f Merge pull request #15144 from Security-Onion-Solutions/amv
implement host os overhead based on role
2025-10-15 10:36:24 -04:00
Josh Patterson
348809bdbb implement host os overhead based on role 2025-10-15 10:30:14 -04:00
Jorge Reyes
ca0edb1cab Merge pull request #15141 from Security-Onion-Solutions/reyesj2-logstash 2025-10-14 16:01:01 -05:00
reyesj2
0172f64f15 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2-logstash 2025-10-14 15:58:38 -05:00
Jorge Reyes
48f8944e3b Merge pull request #15139 from Security-Onion-Solutions/reyesj2-patch-4
event.module elasticsearch
2025-10-14 15:58:00 -05:00
reyesj2
3e22043ea6 es logging retention 2025-10-14 15:08:51 -05:00
coreyogburn
e572b854b9 Merge pull request #15142 from Security-Onion-Solutions/cogburn/append-prompt
New Config Entries
2025-10-14 13:46:15 -06:00
Corey Ogburn
c8aad2b03b New Config Entries 2025-10-14 13:24:43 -06:00
reyesj2
8773ebc3dc logstash wrappers for troubleshooting 2025-10-14 13:34:33 -05:00
reyesj2
2baf2478da add additional elasticsearch log output in json format for elasticsearch log integration to parse 2025-10-14 12:47:03 -05:00
reyesj2
378d37d74e add event.module to elasticsearch server logs 2025-10-14 12:44:51 -05:00
Josh Patterson
f8c8e5d8e5 Merge pull request #15063 from Security-Onion-Solutions/impssu
Update so-saltstack-update
2025-10-14 11:27:29 -04:00
Josh Patterson
dca38c286a Merge pull request #15137 from Security-Onion-Solutions/amv
allow user to create VMs that mount virtual disk for /nsm. new nsm_total grain
2025-10-14 11:25:57 -04:00
Josh Patterson
860710f5f9 remove .log extension 2025-10-14 11:03:00 -04:00
Josh Patterson
d56af4acab remove .log extension 2025-10-14 10:58:57 -04:00
Josh Patterson
793e98f75c update annotation after failed vm removal from VMs file 2025-10-14 10:37:16 -04:00
Josh Patterson
f9c5aa3fef remove PROCESS_STEPS from hypervisor annotation 2025-10-14 09:36:05 -04:00
Josh Patterson
254e782da6 add volume creation and configuration process steps 2025-10-10 22:15:20 -04:00
Josh Patterson
fe3caf66a1 update failure description 2025-10-10 17:21:09 -04:00
Josh Patterson
09d699432a ui notification of nsm volume creation failure and cleanup of vm inventory in soc grid config for hypervisor 2025-10-10 17:07:02 -04:00
Jason Ertel
79b44586ce Merge pull request #15130 from Security-Onion-Solutions/jertel/wip
missed commit
2025-10-09 20:55:20 -04:00
Jason Ertel
feddd90e41 missed commit 2025-10-09 20:50:09 -04:00
Jason Ertel
ca935e4272 Merge pull request #15127 from Security-Onion-Solutions/jertel/wip
csv delimiter and query name
2025-10-09 15:48:37 -04:00
Jason Ertel
8f75bfb0a4 csv delimiter 2025-10-09 13:02:02 -04:00
Josh Patterson
e551c6e037 owner and perms of volumes 2025-10-09 10:19:25 -04:00
Jorge Reyes
1c5a72ee85 Merge pull request #15124 from Security-Onion-Solutions/reyesj2/es-8188
ignore error for elastic-fleet agent
2025-10-08 14:13:46 -05:00
reyesj2
8a8ea04088 ignore error for elastic-fleet agent 2025-10-08 14:01:18 -05:00
Josh Patterson
f730e23e30 Merge remote-tracking branch 'origin/2.4/dev' into amv 2025-10-08 14:06:48 -04:00
Josh Patterson
a3e7649a3c minor hypervisor annotation 2025-10-08 13:52:34 -04:00
Josh Patterson
af42c31740 update yaml for annotation 2025-10-08 13:24:54 -04:00
Jason Ertel
a22c9f6bcf Merge pull request #15118 from Security-Onion-Solutions/jertel/wip
support non-async state apply
2025-10-08 13:15:05 -04:00
Jason Ertel
bad9a16ebb support non-async state apply 2025-10-08 13:02:44 -04:00
Josh Patterson
7827e05c24 handle mounting vdb as nsm when nsm set in soc grid config 2025-10-08 12:18:34 -04:00
Josh Patterson
e45b0bf871 var and comment update 2025-10-08 11:51:35 -04:00
Josh Patterson
659c039ba8 handle nsm volume size and non disk passthrough 2025-10-08 10:51:04 -04:00
Josh Patterson
c7edaac42a nsm volume as vdb, os vda by ordering pci slots 2025-10-07 17:20:11 -04:00
Josh Patterson
a1a8f75409 create and mount volume. being mounted as vda 2025-10-07 16:36:23 -04:00
Jorge Reyes
23e25fa2d7 Merge pull request #15111 from Security-Onion-Solutions/reyesj2/es-8188
UPGRADE: ES 8.18.8
2025-10-07 14:03:45 -05:00
Mike Reeves
f077484121 Merge pull request #15114 from Security-Onion-Solutions/filters
Filters
2025-10-07 14:35:00 -04:00
Mike Reeves
c16bf50493 Update files 2025-10-07 14:20:25 -04:00
reyesj2
564374a8fb generate new elastic agents in post soup 2025-10-07 12:21:26 -05:00
Josh Patterson
4ab4264f77 merge 2025-10-07 12:26:58 -04:00
Josh Patterson
60cccb21b4 create volume 2025-10-07 12:20:42 -04:00
reyesj2
39432198cc Elastic 8.18.8 elastic agent build 2025-10-06 16:25:52 -05:00
reyesj2
7af95317db es upgrade 8.18.8 pipeline updates 2025-10-06 16:23:22 -05:00
reyesj2
8675193d1f elasticsearch upgrade 8.18.8 2025-10-06 12:56:31 -05:00
Josh Patterson
ac0d6c57e1 create common.grains state and nsm_total grain 2025-10-06 11:52:35 -04:00
Jorge Reyes
3db6542398 Merge pull request #15105 from Security-Onion-Solutions/reyesj2/logstashout
update logstash fleet output policy
2025-10-03 12:07:36 -05:00
reyesj2
9fd1b9aec1 make sure to pass in variables to json_string.. 2025-10-02 16:38:47 -05:00
reyesj2
e5563eb9b8 send full new ssl config 2025-10-02 15:29:55 -05:00
Josh Patterson
e8de9e3c26 Merge pull request #15103 from Security-Onion-Solutions/byoh
byoh
2025-10-02 15:50:34 -04:00
reyesj2
c8a3603577 update logstash fleet output policy 2025-10-02 14:47:38 -05:00
Josh Patterson
05321cf1ed add --force-cleanup to nvme raid script 2025-10-02 15:03:11 -04:00
Josh Patterson
7deef44ff6 check defaults or pillar file 2025-10-02 11:55:50 -04:00
Mike Reeves
9752d61699 Add Filters 2025-10-01 19:59:28 -04:00
Mike Reeves
6b8e2e2643 Add Filters 2025-10-01 19:58:07 -04:00
Josh Patterson
e3ac1dd1b4 Merge remote-tracking branch 'origin/2.4/dev' into byoh 2025-10-01 14:57:51 -04:00
Josh Patterson
86eca53d4b support for byodmodel 2025-10-01 14:57:25 -04:00
Jason Ertel
bfd3d822b1 Merge pull request #15092 from Security-Onion-Solutions/jertel/wip
updates for wiretap lib
2025-10-01 12:20:06 -04:00
Jason Ertel
030e4961d7 updates for wiretap lib 2025-10-01 12:13:56 -04:00
Matthew Wright
14bd92067b Merge pull request #15091 from Security-Onion-Solutions/mwright/soc_soc-fix
Made lowBalanceColorAlert global
2025-10-01 11:03:50 -04:00
Matthew Wright
066e227325 made lowBalanceColorAlert global 2025-10-01 11:01:10 -04:00
coreyogburn
f1cfb9cd91 Merge pull request #15087 from Security-Onion-Solutions/cogburn/health-timeout
New field for assistant health check
2025-09-30 15:49:52 -06:00
Corey Ogburn
5a2e704909 New field for assistant health check
The health check has a smaller, configurable timeout.
2025-09-30 15:33:20 -06:00
Jorge Reyes
f04e54d1d5 Merge pull request #15086 from Security-Onion-Solutions/reyesj2/fltpatch
less strict exits for fleet configuration
2025-09-30 15:26:50 -05:00
reyesj2
e9af46a8cb less strict exits for fleet configuration 2025-09-30 14:28:42 -05:00
Josh Patterson
b4b051908b Merge pull request #15082 from Security-Onion-Solutions/vlb2
fix hypervisor bridge setup
2025-09-29 17:19:22 -04:00
Jason Ertel
0148e5638c Merge pull request #15080 from Security-Onion-Solutions/jertel/wip
restart registry after upgrading images (in airgap mode)
2025-09-29 17:02:47 -04:00
Josh Patterson
c8814d0632 removed commented code 2025-09-29 16:58:45 -04:00
Jason Ertel
6c892fed78 restart registry after upgrading images (in airgap mode) 2025-09-29 16:47:05 -04:00
Josh Patterson
e775299480 so-user target minions with pillar elasticsearch:enabled:true 2025-09-26 15:43:49 -04:00
Josh Patterson
c4ca9c62aa Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-26 12:52:37 -04:00
Jorge Reyes
c37aeff364 Merge pull request #15075 from Security-Onion-Solutions/reyesj2/esfleetpatch
update so-elastic-fleet-setup
2025-09-26 11:36:35 -05:00
reyesj2
cdac49052f Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/esfleetpatch 2025-09-26 11:32:44 -05:00
reyesj2
8e5fa9576c create disabled so-manager_elasticsearch output policy first, update it then verify it is the only active output 2025-09-26 11:32:25 -05:00
Josh Patterson
cd04d1e5a7 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-25 16:06:36 -04:00
Josh Patterson
1fb558cc77 managerhype br0 setup 2025-09-25 16:06:25 -04:00
Jason Ertel
7f1b76912c Merge pull request #15072 from Security-Onion-Solutions/jertel/wip
retry kratos pulls since this is the first image to install during setup
2025-09-25 15:45:02 -04:00
Jason Ertel
3a2ceb0b6f retry kratos pulls since this is the first image to install during setup 2025-09-25 15:40:00 -04:00
Matthew Wright
1345756fce Merge pull request #15071 from Security-Onion-Solutions/mwright/temp
Updated default investigation prompt
2025-09-25 15:18:20 -04:00
Matthew Wright
d81d9a0722 small tweak to investigation prompt 2025-09-25 14:45:06 -04:00
Jorge Reyes
55074fda69 Merge pull request #15070 from Security-Onion-Solutions/reyesj2-patch-1
make sure fleet-default-output is not set as either default output p…
2025-09-25 09:55:54 -05:00
Jorge Reyes
23e12811a1 make sure fleet-default-output is not set as either default output policy 2025-09-25 09:51:32 -05:00
Josh Patterson
5d1edf6d86 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-24 17:32:08 -04:00
Josh Patterson
c836dd2acd set interface for network.ip_addrs for hypervisors 2025-09-24 16:50:29 -04:00
Josh Patterson
3a87af805f update service file, use salt.minion state to update mine_functions 2025-09-24 15:19:46 -04:00
Jorge Reyes
328ac329ec Merge pull request #15064 from Security-Onion-Solutions/reyesj2-patch-1
typo
2025-09-24 09:04:14 -05:00
Jorge Reyes
a3401aad11 typo 2025-09-24 08:56:40 -05:00
Josh Patterson
5a67b89a80 Update so-saltstack-update
add -v -vv and test / dry run mode
2025-09-24 09:49:02 -04:00
Jorge Reyes
431f71cc82 Merge pull request #15047 from Security-Onion-Solutions/reyesj2/es-fleet-patch
rework fleet scripts
2025-09-24 07:45:43 -05:00
Josh Patterson
4587301cca only update mine for managerhype during setup 2025-09-23 15:56:00 -04:00
Josh Patterson
14ddbd32ad salt-minion service file changes for hypervisor and managerhype 2025-09-22 16:38:40 -04:00
Josh Patterson
4599b95ae7 separate salt-minion service file 2025-09-22 16:37:16 -04:00
reyesj2
c92dc580a2 centralize MINION_ROLE lookup_role 2025-09-19 13:17:52 -05:00
reyesj2
4666aa9818 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 12:55:08 -05:00
reyesj2
f066baf6ba use only the characters up to the last seen '_' 2025-09-19 12:54:04 -05:00
Jorge Reyes
ba710c9944 import or eval should get updated 2025-09-19 12:26:08 -05:00
reyesj2
198695af03 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 11:56:53 -05:00
Jorge Reyes
fec78f5fb5 Merge pull request #15051 from Security-Onion-Solutions/reyesj2/patch-lgchk
add oom check to so-log-check
2025-09-19 11:41:55 -05:00
reyesj2
d03dd7ac2d check for oom kill only in the last 24 hours
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-09-19 11:32:13 -05:00
reyesj2
d2dd52b42a Merge branch 'reyesj2/patch-lgchk' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-19 11:12:09 -05:00
reyesj2
c9db52433f add oom check to so-log-check
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-09-19 11:08:42 -05:00
reyesj2
138849d258 more typos 2025-09-18 17:33:42 -05:00
reyesj2
a9ec12e402 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-18 16:41:34 -05:00
reyesj2
87281efc24 typo 2025-09-18 16:41:33 -05:00
reyesj2
29ac4f23c6 typo 2025-09-18 16:26:37 -05:00
reyesj2
878a3f8962 flip logic to check there aren't two default policies and fleet-default-output is disabled 2025-09-18 16:05:34 -05:00
reyesj2
21e27bce87 Merge branch 'reyesj2/es-fleet-patch' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-18 15:42:28 -05:00
reyesj2
336ca0dbbd typos 2025-09-18 15:42:25 -05:00
reyesj2
d9eba3cd0e typo 2025-09-18 15:17:22 -05:00
reyesj2
81b7e2b420 Merge remote-tracking branch 'origin' into reyesj2/es-fleet-patch 2025-09-18 14:34:41 -05:00
reyesj2
cd5483623b update import/eval fleet output config -- try to prevent corrupt dual 'default' output policies from having a successful installation 2025-09-18 14:33:34 -05:00
reyesj2
faa112eddf update last so-elastic-fleet-common functions 2025-09-18 12:18:16 -05:00
reyesj2
f663f22628 elastic_fleet_integration_id 2025-09-18 10:27:54 -05:00
reyesj2
8b07ff453d elastic_fleet_integration_policy_package_version 2025-09-18 10:21:07 -05:00
reyesj2
24a0fa3f6d add fleet_api wrapper for curl retries 2025-09-18 10:15:57 -05:00
reyesj2
a5011b398d add err check and retries to elastic_fleet_integration_policy_package_name and associated scripts 2025-09-18 09:39:56 -05:00
reyesj2
5b70398c0a add error check & retries to elastic_fleet_integration_policy_names and associated scripts 2025-09-17 15:35:20 -05:00
reyesj2
f3aaee1e41 update elastic_fleet_agent_policy_ids scripts already check rc 2025-09-17 14:59:41 -05:00
reyesj2
d0e875928d add error checking and retries for elastic_fleet_installed_packages & associated script 2025-09-17 14:59:13 -05:00
reyesj2
3e16bc8335 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/es-fleet-patch 2025-09-17 14:37:43 -05:00
Doug Burks
c1d85493df Merge pull request #15045 from Security-Onion-Solutions/dougburks-patch-1
Update 2-4.yml
2025-09-17 14:23:23 -04:00
Doug Burks
e01d0f81ea Update 2-4.yml 2025-09-17 14:22:40 -04:00
Jason Ertel
376d0f3295 Merge pull request #15044 from Security-Onion-Solutions/jertel/wip
bump version
2025-09-17 14:22:02 -04:00
Jason Ertel
4418623f73 bump version 2025-09-17 14:20:44 -04:00
Doug Burks
d1f4e26e29 Merge pull request #15043 from Security-Onion-Solutions/2.4/dev
2.4.180
2025-09-17 14:15:32 -04:00
Doug Burks
5166db1caa Merge pull request #15042 from Security-Onion-Solutions/2.4/main
Merge pull request #14917 from Security-Onion-Solutions/2.4/dev
2025-09-17 13:13:46 -04:00
Doug Burks
ff5ad586af Merge pull request #15040 from Security-Onion-Solutions/dougburks-patch-1
2.4.180
2025-09-17 13:00:26 -04:00
reyesj2
9e24d21282 remove unused functions from so-elastic-fleet-common 2025-09-17 11:41:27 -05:00
reyesj2
5806999f63 add error check & retries to elastic_fleet_bulk_package_install 2025-09-17 11:39:06 -05:00
Doug Burks
4dae1afe0b Add files via upload 2025-09-17 12:37:29 -04:00
Doug Burks
456cad1ada Update DOWNLOAD_AND_VERIFY_ISO.md for 2.4.180 2025-09-17 12:36:55 -04:00
reyesj2
063a2b3348 update elastic_fleet_package_version_check & elastic_fleet_package_install to add error checking + retries. Update related scripts 2025-09-16 21:56:53 -05:00
reyesj2
bcd2e95fbe add error checking and retries to elastic_fleet_integration_policy_upgrade 2025-09-16 21:22:03 -05:00
reyesj2
94e8cd84e6 because of more aggressive exits use salt to rerun script as needed 2025-09-16 21:07:33 -05:00
reyesj2
948d72c282 add error check and retry to elastic_fleet_integration_update 2025-09-16 21:07:02 -05:00
reyesj2
bdeb92ab05 add err check and retries for elastic_fleet_integration_create 2025-09-16 20:30:45 -05:00
reyesj2
fdb5ad810a add err check and retries around func elastic_fleet_policy_create 2025-09-16 20:10:48 -05:00
reyesj2
f588a80ec7 fix jq error when indices don't exist (seen on fresh installs when fleet hasn't ever been installed) 2025-09-16 10:37:26 -05:00
Jorge Reyes
562b7e54cb Merge pull request #15031 from Security-Onion-Solutions/reyesj2/kfoutput
fix case of broken kafka output policy when new receiver is added and…
2025-09-15 15:33:48 -05:00
Jorge Reyes
3c847bca8b Merge pull request #15034 from Security-Onion-Solutions/reyesj2/patch31
run so-elastic-agent-gen-installers
2025-09-15 15:28:42 -05:00
reyesj2
ce2cc26224 run so-elastic-agent-gen-installers 2025-09-15 15:25:38 -05:00
Jorge Reyes
f3c574679c Merge pull request #15033 from Security-Onion-Solutions/reyesj2/patch31
8.18.6 agent
2025-09-15 15:21:46 -05:00
reyesj2
5da3fed1ce 8.18.6 agent 2025-09-15 15:19:43 -05:00
reyesj2
e6bcf5db6b fix case of broken kafka output policy when new receiver is added and secret storage was overwritten 2025-09-15 13:46:02 -05:00
Jorge Reyes
4d24c57903 Merge pull request #15028 from Security-Onion-Solutions/reyesj2/ea-alerter
agent monitor template & dataset name update
2025-09-12 14:45:20 -05:00
reyesj2
0606c0a454 agent monitor template & dataset name update 2025-09-12 14:26:22 -05:00
Josh Patterson
bb984e05e3 Merge pull request #15026 from Security-Onion-Solutions/vlb2
fix role check
2025-09-12 14:34:18 -04:00
Jorge Reyes
b35b0aaf2c Merge pull request #14941 from Security-Onion-Solutions/reyesj2/lgest
zeek dns.resolved_ip
2025-09-12 13:22:40 -05:00
Josh Patterson
62f04fa5dd fix role check 2025-09-12 14:09:30 -04:00
Josh Brower
d89df5f0dd Merge pull request #15025 from Security-Onion-Solutions/2.4/fixes
Parsing fix
2025-09-12 13:44:03 -04:00
DefensiveDepth
f0c1922600 Support endpoint logs with no host.ip field 2025-09-12 13:31:34 -04:00
DefensiveDepth
ab2cdd18ed Support endpoint logs with no host.ip field 2025-09-12 13:29:43 -04:00
Jorge Reyes
889bb7ddf4 Merge pull request #15024 from Security-Onion-Solutions/reyesj2/pypy
fix analyzers and upgrade deps
2025-09-12 11:11:34 -05:00
reyesj2
a959f90d0b Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/pypy 2025-09-12 11:05:54 -05:00
Jorge Reyes
a54cd004d6 Merge pull request #15013 from Security-Onion-Solutions/reyesj2/kfoutput
update kafka output policy
2025-09-12 07:34:54 -05:00
Jorge Reyes
5100032fbd Merge pull request #15022 from Security-Onion-Solutions/reyesj2/cfqdn-recv
receiver custom fqdn
2025-09-11 16:33:41 -05:00
reyesj2
0f235baa7e receiver custom fqdn 2025-09-11 16:14:43 -05:00
Jorge Reyes
e5660b8c8e Merge pull request #15020 from Security-Onion-Solutions/reyesj2/essuriroll
suricata metadata index rollover 1d -> 30d
2025-09-11 16:03:30 -05:00
reyesj2
588a1b86d1 suricata metadata index rollover 1d -> 30d 2025-09-11 15:46:45 -05:00
Jorge Reyes
46f0afa24b Merge pull request #15019 from Security-Onion-Solutions/reyesj2/ea-alerter
lower filestream fingerprint length
2025-09-11 14:34:46 -05:00
reyesj2
a7651b2734 lower filestream fingerprint length 2025-09-11 14:30:49 -05:00
reyesj2
890f76e45c avoid delay in log ingest after a forced kafka output policy update 2025-09-10 20:21:11 -05:00
Jorge Reyes
e6eecc93c8 Merge pull request #15012 from Security-Onion-Solutions/reyesj2/ea-alerter
add configurable realert threshold per agent
2025-09-10 13:19:21 -05:00
reyesj2
8dc0f8d20e fix elastic agent ssl unpack error 2025-09-10 12:49:30 -05:00
reyesj2
fbdc0c4705 add configurable realert threshold per agent 2025-09-10 10:56:09 -05:00
Josh Patterson
d1a2b57aa2 Merge pull request #15011 from Security-Onion-Solutions/hideroni
don't show sensoroni config changes
2025-09-10 09:15:55 -04:00
Josh Patterson
f5ec1d4b7c don't show sensoroni config changes 2025-09-10 09:09:02 -04:00
Jorge Reyes
0aa556e375 Merge pull request #15009 from Security-Onion-Solutions/reyesj2/ea-alerter
so-elastic-agent-monitor
2025-09-09 17:00:39 -05:00
Josh Patterson
d9e86c15bc Merge pull request #15010 from Security-Onion-Solutions/vlb2
fix repo files to remove
2025-09-09 17:15:52 -04:00
Josh Patterson
4107fa006f fix repo files to remove 2025-09-09 16:51:42 -04:00
reyesj2
29980ea958 offline threshold check 2025-09-09 15:39:55 -05:00
reyesj2
8f36d2ec00 update log file name 2025-09-09 15:38:50 -05:00
coreyogburn
10511b8431 Merge pull request #15008 from Security-Onion-Solutions/cogburn/fix-templates
Fix Index Patterns
2025-09-09 14:03:36 -06:00
Corey Ogburn
2535ae953d Fix Index Patterns
so-assistant-chat and so-assistant-session both had templates with a trailing dash that prevented the pattern from applying to the name of the indices.
2025-09-09 14:00:01 -06:00
coreyogburn
2f68cd7483 Merge pull request #14991 from Security-Onion-Solutions/cogburn/wip-module
Cogburn/wip module
2025-09-09 10:32:06 -06:00
reyesj2
6655276410 force update to kafka-fleet-output-policy 2025-09-08 21:13:29 -05:00
reyesj2
9f7bcb0f7d add --force flag to so-kafka-fleet-output-policy & default to using fleet secret storage for client key 2025-09-08 21:13:11 -05:00
Corey Ogburn
aa43177d8c Fix Setting Name
enabledInSoc => enabled
2025-09-08 09:13:25 -06:00
Matthew Wright
12959d114c added threshold config fields for assistant 2025-09-08 09:13:25 -06:00
reyesj2
855b489c4b datastream 2025-09-08 09:13:24 -06:00
Corey Ogburn
673f9cb544 Responding to Feedback 2025-09-08 09:13:24 -06:00
Corey Ogburn
0a3ff47008 Cleanup Annotations
Removed fields no longer need annotations.
2025-09-08 09:13:24 -06:00
Corey Ogburn
834e34128d Non-dev URL 2025-09-08 09:13:23 -06:00
Corey Ogburn
73776f8d11 Cleaning up New ES Indexes 2025-09-08 09:13:23 -06:00
Corey Ogburn
120e61e45c ClientParams
Removed investigation prompt from module settings and moved to client settings, added enabledInSoc.
2025-09-08 09:13:23 -06:00
Corey Ogburn
fc2d450de0 Update Settings
The apiKey will be built off of the license rather than a new setting. The model is hardcoded for now at the AI Gateway level. We're going to use the investigationPrompt as a trigger for the feature being visible in the UI but by default will be blank for now.
2025-09-08 09:13:22 -06:00
Corey Ogburn
cea4eaf081 Updated Assistant Mapping 2025-09-08 09:13:22 -06:00
Corey Ogburn
b1753f86f9 New Message Structure 2025-09-08 09:13:22 -06:00
Corey Ogburn
6323fbf46b Content Object 2025-09-08 09:13:21 -06:00
Corey Ogburn
ba601c39b3 Rough Go at New Mappings/Settings 2025-09-08 09:13:21 -06:00
Corey Ogburn
ec27517bdd New Config Values
New config values with annotations and defaults.

Updated Nginx config to allow streaming requests to not be buffered on the way to the client.
2025-09-08 09:13:08 -06:00
Josh Brower
624ec3c93e Merge pull request #15003 from Security-Onion-Solutions/fix/wording
Make it clear that Fleet Nodes will need to be reinstalled
2025-09-08 09:10:43 -04:00
Josh Brower
f318a84c18 Update so-elastic-fleet-reset 2025-09-08 09:03:33 -04:00
Josh Patterson
8cca58dba9 Merge pull request #14998 from Security-Onion-Solutions/vlb2
manager do hypervisor things
2025-09-05 17:13:37 -04:00
Jason Ertel
6c196ea61a Merge branch '2.4/dev' into vlb2 2025-09-05 17:11:10 -04:00
Josh Patterson
207572f2f9 remove debug added to fail_setup 2025-09-05 14:16:03 -04:00
Josh Patterson
4afc986f48 firewall and logstash pipeline for managerhype 2025-09-05 13:14:47 -04:00
Jorge Reyes
ba5d140d4b Merge pull request #14996 from Security-Onion-Solutions/reyesj2/ea-alerter
so-elastic-agent-monitor
2025-09-05 10:41:59 -05:00
reyesj2
348f9dcaec prevent multiple script instances using file lock 2025-09-05 10:01:24 -05:00
reyesj2
915b9e7bd7 use logrotate 2025-09-05 09:22:44 -05:00
reyesj2
dfec29d18e custom kquery 2025-09-04 15:37:28 -05:00
Josh Patterson
38ef4a6046 pass pillar properly 2025-09-04 11:02:27 -04:00
Josh Patterson
a007fa6505 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-03 09:52:49 -04:00
reyesj2
1a32a0897c Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/ea-alerter 2025-09-02 17:11:21 -05:00
reyesj2
e26310d172 elastic agent offline alerter
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-09-02 17:00:03 -05:00
coreyogburn
c7cdb0b466 Merge pull request #14986 from Security-Onion-Solutions/cogburn/internal-reverse
Move EnableReverseLookup
2025-09-02 15:25:19 -06:00
Corey Ogburn
df0b484b45 More Descriptive Description
Include instructions for how to add local lookups and a help link.
2025-09-02 15:07:13 -06:00
Corey Ogburn
2181cddf49 Move EnableReverseLookup
Move EnableReverseLookup and its annotation from ClientParams to ServerConfig.
2025-09-02 14:09:55 -06:00
Jorge Reyes
a2b6968cef Merge pull request #14975 from Security-Onion-Solutions/reyesj2/es8186
ES 8.18.6 upgrade
2025-09-02 10:14:33 -05:00
Josh Patterson
285fbc2783 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-09-02 09:23:24 -04:00
Josh Patterson
94c5a1fd98 Merge pull request #14980 from Security-Onion-Solutions/mikebond
Mikebond
2025-08-29 11:08:17 -04:00
Mike Reeves
19362fe5e5 Update so-combine-bond 2025-08-29 11:06:25 -04:00
Josh Patterson
a7a81e9825 always manage script, only run it if bond0 exists 2025-08-29 11:05:42 -04:00
Mike Reeves
31484d1158 Merge pull request #14978 from Security-Onion-Solutions/mikebond
only manage bond script if bond0 exists
2025-08-29 10:07:24 -04:00
Josh Patterson
f51cd008f2 only manage bond script if bond0 exists 2025-08-29 10:04:56 -04:00
reyesj2
a5675a79fe es 8.18.6 pipeline upd 2025-08-28 19:45:17 -05:00
reyesj2
1ea7b3c09f es 8.18.6 2025-08-28 18:27:56 -05:00
Jorge Reyes
d9127a288f Merge pull request #14957 from Security-Onion-Solutions/reyesj2-patch-6
enable additional fleetnode state
2025-08-28 14:19:03 -05:00
Josh Patterson
ebb78bc9bd Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-08-28 09:21:33 -04:00
Josh Patterson
e5920b6465 add managerhype back to whiptail 2025-08-28 09:21:20 -04:00
Mike Reeves
153a99a002 Merge pull request #14971 from Security-Onion-Solutions/mikebond
and nic channel customization
2025-08-27 18:42:18 -04:00
Josh Patterson
69a5e1e2f5 remove md file 2025-08-27 15:14:15 -04:00
Josh Patterson
0858160be2 support for modifying nic channels 2025-08-27 14:51:57 -04:00
Mike Reeves
ccd79c814d Add script for bond0 channels 2025-08-27 09:53:37 -04:00
Josh Patterson
a8a01b8191 Merge branch 'bravo' into vlb2 2025-08-26 14:59:23 -04:00
Josh Patterson
ac2c044a94 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-08-26 14:55:06 -04:00
Josh Patterson
e10d00d114 support for managerhype 2025-08-26 14:54:37 -04:00
Josh Patterson
cbdd369a18 ensure x509 in mine 2025-08-25 08:39:55 -04:00
reyesj2
b2e7f58b3d analyzer test updates 2025-08-22 17:36:48 -05:00
reyesj2
a6600b8762 elasticsearch dep upgrades 2025-08-22 17:11:06 -05:00
reyesj2
5479d49379 greynoise breakup long line for linter 2025-08-22 16:00:05 -05:00
Jason Ertel
304985b61e Merge pull request #14959 from Security-Onion-Solutions/jertel/wip
rpt
2025-08-22 16:55:45 -04:00
coreyogburn
d6c725299b Merge pull request #14956 from Security-Onion-Solutions/cogburn/playbook-repo-name
Ruleset Name UiElement
2025-08-22 14:02:42 -06:00
Corey Ogburn
d99857002d Improved Label
The underlying field is called "rulesetName" but for playbook repos we're not talking about rulesets. Improved the label for user experience.
2025-08-22 13:18:22 -06:00
Corey Ogburn
2a6c74917e Ruleset Name UiElement
Add a missing UiElement so all the repo fields are represented in the UI.
2025-08-22 13:00:17 -06:00
reyesj2
9f0bd4bad3 spamhaus enable multiline annotation on nameservers entries 2025-08-22 13:51:05 -05:00
reyesj2
924b06976c spamhaus config typos 2025-08-22 13:50:40 -05:00
Jason Ertel
1357f19e48 update wording 2025-08-22 13:25:25 -04:00
Jason Ertel
c91e9ea4e0 return to normalcy 2025-08-22 13:23:19 -04:00
reyesj2
c2c96dad6e bump version 2025-08-22 08:43:48 -05:00
reyesj2
1a08833e77 typo 2025-08-22 08:41:03 -05:00
reyesj2
d16dfcf4e8 emailrep dep upgrades 2025-08-21 16:22:48 -05:00
reyesj2
b79c7b0540 sublime dep upgrades 2025-08-21 16:17:44 -05:00
reyesj2
9f45792217 pulsedive dep upgrades 2025-08-21 16:07:08 -05:00
reyesj2
d3108c3549 greynoise dep upgrade + use community version with no auth 2025-08-21 14:30:21 -05:00
reyesj2
7d883cb5e0 echotrail api no longer available 2025-08-21 12:38:00 -05:00
reyesj2
ebd81c1df9 otx dep upgrades 2025-08-21 12:22:47 -05:00
reyesj2
418dbee9fa virustotal dep upgrades 2025-08-21 12:15:13 -05:00
reyesj2
cccc3bf625 urlscan dep upgrades 2025-08-21 12:06:35 -05:00
reyesj2
a3e0072631 update readme threatfox uses auth for api now 2025-08-21 11:48:17 -05:00
reyesj2
220e485312 threatfox dep upgrade + use auth for api access 2025-08-21 11:47:54 -05:00
reyesj2
67f8fca043 spamhaus dep upgrades 2025-08-21 11:32:13 -05:00
reyesj2
0e0ab8384c localfile dep upgrade 2025-08-21 11:26:59 -05:00
reyesj2
58228f70ca malwarehashregistry dep upgrades 2025-08-21 11:16:28 -05:00
reyesj2
7968de06b4 enable access to global stig pillar 2025-08-21 11:06:29 -05:00
Mike Reeves
87fdd90f56 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into 2.4/dev 2025-08-21 10:39:34 -04:00
Josh Patterson
65e7e56fbe Merge pull request #14950 from Security-Onion-Solutions/180soup
180 soup base
2025-08-21 09:50:53 -04:00
Josh Patterson
424fdff934 180 soup base 2025-08-21 09:43:30 -04:00
Jorge Reyes
f72996d9d1 Merge pull request #14949 from Security-Onion-Solutions/reyesj2-patch-7
update pcap permissions when no stenographer user exists
2025-08-21 08:33:30 -05:00
reyesj2
d77556c672 pcap dir 2025-08-21 08:25:48 -05:00
reyesj2
c412e9bad2 malwarebazaar api uses auth 2025-08-20 21:04:05 -05:00
reyesj2
87a28e8ce7 malwarebazaar dep upgrades + use auth 2025-08-20 20:59:23 -05:00
reyesj2
9ca0c7d53a urlhaus dep upgrades + update to use authenticated abusech api 2025-08-20 17:20:10 -05:00
reyesj2
2e94e452ed whoislookup py 3.13 2025-08-20 16:39:13 -05:00
reyesj2
6a0d40ee0d leave requirements.txt as is 2025-08-20 16:20:26 -05:00
reyesj2
0cebcf4432 upgrade whoislookup deps 2025-08-20 16:09:08 -05:00
reyesj2
ed0e24fcaf Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/ol9stg 2025-08-20 12:10:04 -05:00
reyesj2
24be2f869b enable stig on fleet nodes 2025-08-20 12:08:50 -05:00
reyesj2
f8058a4a3a disable showing large stig profile update in salt log 2025-08-20 12:06:54 -05:00
reyesj2
d0ba6df2fc remove any "" from dns.resolved_ip 2025-08-19 13:44:24 -05:00
reyesj2
95bee91b12 zeek dns.resolved_ip 2025-08-19 11:20:59 -05:00
Jason Ertel
751b5bd556 switch version for tests 2025-08-19 10:11:50 -04:00
Jason Ertel
77273449c9 fix typo 2025-08-18 16:58:52 -04:00
Jason Ertel
46e1f1bc5c fix typo 2025-08-18 16:12:34 -04:00
Jason Ertel
884bec7465 fix typo 2025-08-18 15:01:49 -04:00
Jason Ertel
8d3220f94b fix salt issue 2025-08-18 14:31:01 -04:00
Jason Ertel
9cb42911dc Merge branch '2.4/dev' into jertel/wip 2025-08-18 09:54:58 -04:00
Jason Ertel
a3cc6f025e reports 2025-08-18 09:54:40 -04:00
Jorge Reyes
6fae4a9974 Merge pull request #14933 from Security-Onion-Solutions/reyesj2/ol9stg
profile update
2025-08-15 16:26:11 -05:00
reyesj2
f7a1a3a172 gui / nongui profile 2025-08-15 16:07:54 -05:00
reyesj2
292e1ad782 use chrony system default 2025-08-15 15:19:31 -05:00
reyesj2
af1fe86586 update chrony config 2025-08-15 15:16:36 -05:00
Josh Patterson
97100cdfdd Merge pull request #14930 from Security-Onion-Solutions/vlb2
Vlb2
2025-08-14 16:37:15 -04:00
Josh Patterson
5f60ef1541 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-08-14 16:36:37 -04:00
Josh Patterson
c7e7a0a871 add more detail to fail_setup output 2025-08-14 16:36:09 -04:00
reyesj2
f09eff530e profile upd 2025-08-14 15:17:01 -05:00
reyesj2
50b34a116a disable rpm verify hash, salt packages are modified before install for salt bootstrap process 2025-08-14 15:02:59 -05:00
reyesj2
42874fb0d0 Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/ol9stg 2025-08-13 12:50:24 -05:00
Josh Patterson
482847187c Merge pull request #14925 from Security-Onion-Solutions/vlb2
firewall allow hypervisor for managersearch and standalone
2025-08-12 16:45:27 -04:00
reyesj2
a19b99268d don't create unused zeek home directory 2025-08-12 15:44:50 -05:00
reyesj2
3c5a03d7b6 fix /nsm/pcap no group/user ownership 2025-08-12 15:35:30 -05:00
reyesj2
c1a5c2b2d1 set elasticfleet artifact registry artifact file permissions 2025-08-12 14:39:35 -05:00
Josh Patterson
baf0f7ba95 firewall allow hypervisor for managersearch and standalone 2025-08-12 14:08:15 -04:00
Mike Reeves
ee27965314 Merge pull request #14922 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2025-08-12 11:00:20 -04:00
Mike Reeves
d02093295b Update 2-4.yml 2025-08-12 10:59:17 -04:00
Mike Reeves
6381444fdc Update VERSION 2025-08-12 10:58:11 -04:00
Mike Reeves
01b313868d Merge pull request #14917 from Security-Onion-Solutions/2.4/dev
2.4.170
2025-08-12 10:06:07 -04:00
Mike Reeves
3859ebd69c Merge pull request #14919 from Security-Onion-Solutions/2.4.170
2.4.170
2025-08-12 09:47:05 -04:00
Mike Reeves
9753e431e3 Merge remote-tracking branch 'origin/2.4/main' into 2.4.170 2025-08-12 09:45:06 -04:00
Mike Reeves
b307667ae2 Merge remote-tracking branch 'origin/2.4/main' into 2.4/dev 2025-08-12 09:44:02 -04:00
Mike Reeves
5d7dcbbcee Merge pull request #14918 from Security-Onion-Solutions/2.4.170
2.4.170
2025-08-12 09:42:26 -04:00
Mike Reeves
281b395053 2.4.170 2025-08-12 09:40:18 -04:00
Mike Reeves
3518f39d39 Merge pull request #14916 from Security-Onion-Solutions/2.4.170
2.4.170
2025-08-12 09:37:46 -04:00
Mike Reeves
ae0ffc4977 2.4.170 2025-08-12 09:32:42 -04:00
Josh Patterson
bc2f716c99 Merge pull request #14910 from Security-Onion-Solutions/vlb2
remove managerhype from whiptail
2025-08-07 16:19:59 -04:00
Josh Patterson
9617da1791 remove managerhype from whiptail 2025-08-07 16:13:59 -04:00
Josh Patterson
2ba5d7d64b Merge pull request #14909 from Security-Onion-Solutions/vlb2
Vlb2
2025-08-07 15:26:25 -04:00
Josh Patterson
437b9016ca Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-08-07 15:02:57 -04:00
Josh Patterson
c5db0a7195 more ed25519 to ecdsa 2025-08-07 15:02:45 -04:00
Josh Patterson
82894d88b6 ecdsa instead of ed25519 2025-08-07 14:40:58 -04:00
reyesj2
4a4146f515 ol9 profile update 2025-08-05 13:02:44 -05:00
Josh Patterson
59a4d0129f Merge pull request #14899 from Security-Onion-Solutions/vlb2
handle - in hypervisor hostname
2025-08-04 17:50:41 -04:00
Josh Patterson
5cf2149218 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-08-04 15:25:43 -04:00
Josh Patterson
453c32df0d handle - in hypervisor hostname 2025-08-04 15:25:26 -04:00
Josh Patterson
1df10b80b2 Merge pull request #14896 from Security-Onion-Solutions/vlb2
fix hyper bridge setup. simplify cpu/mem regex
2025-08-01 11:04:49 -04:00
Josh Patterson
9d96a11753 update usage 2025-08-01 08:55:38 -04:00
Josh Patterson
e9e3252bb5 nvme script move nsm if mounted 2025-08-01 08:53:45 -04:00
Josh Patterson
930c8147e7 simplify cpu and memory regex 2025-08-01 08:52:21 -04:00
Josh Patterson
378ecad94c Merge pull request #14893 from Security-Onion-Solutions/vlb2
Vlb2
2025-07-30 16:38:47 -04:00
Josh Patterson
02299a6742 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-07-30 16:37:27 -04:00
Josh Patterson
15cbc626c4 resolve for already configured RAID 2025-07-30 16:37:19 -04:00
Josh Patterson
8720a4540a remove extra line 2025-07-30 16:36:40 -04:00
Josh Patterson
7b5980bfe5 setup bridge for hypervisor using $MNIC 2025-07-30 16:04:10 -04:00
Josh Patterson
ebfb670f6a Merge pull request #14892 from Security-Onion-Solutions/vlb2
match user soqemussh, allow user additions to persist, for ssh config.
2025-07-30 09:55:56 -04:00
Josh Patterson
c98042fa80 match user soqemussh for ssh config. allow user edits to ssh config to persist without being overwritten. 2025-07-30 09:44:58 -04:00
Jorge Reyes
70181e3e08 Merge pull request #14890 from Security-Onion-Solutions/reyesj2-backup-script
exclude so_agent_installer dir from config backups
2025-07-29 15:43:12 -05:00
reyesj2
adb1e01c7a exclude so_agent_installer dir from config backups 2025-07-29 15:31:53 -05:00
Jorge Reyes
cdb7f0602c Merge pull request #14889 from Security-Onion-Solutions/reyesj2-es-helper
only show data nodes in disk usage output
2025-07-29 14:45:30 -05:00
Jorge Reyes
d52e817dd5 Merge pull request #14883 from Security-Onion-Solutions/reyesj2-patch-3
increase so-elasticsearch-roles-load timeout
2025-07-29 14:45:14 -05:00
reyesj2
07305d8799 only show data nodes in disk usage output 2025-07-29 14:15:43 -05:00
reyesj2
fbf5bafae7 set 2m timeout 2025-07-28 15:17:04 -05:00
reyesj2
d49cd3cb85 increased timeout for so-elasticsearch-roles-load from default of 30s 2025-07-28 15:14:12 -05:00
Jorge Reyes
b60b9e7743 Merge pull request #14880 from Security-Onion-Solutions/reyesj2-patch-2
update ASN organization name field
2025-07-28 10:51:07 -05:00
reyesj2
26fd8562c5 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2-patch-2 2025-07-25 16:19:12 -05:00
reyesj2
84b38daf62 rename destination_geo & source_geo to destination.as and source.as, better aligning with ECS and linking with other log sources already using .as for ASN geo data.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-07-25 16:17:22 -05:00
Jorge Reyes
a0f9d5dc61 Merge pull request #14871 from Security-Onion-Solutions/reyesj2-patch-2
FIX: opencanary startup logs cause ingest error
2025-07-23 16:05:29 -05:00
reyesj2
e8c25d157f drop empty ip fields when it's an opencanary startup log (1001) to prevent an elasticsearch doc ingest error 2025-07-23 15:52:50 -05:00
Jorge Reyes
214f4f0f0c Merge pull request #14870 from Security-Onion-Solutions/foxtrot
8.18.4
2025-07-23 10:03:14 -05:00
reyesj2
7ae0369a3b VERSION 2025-07-23 09:58:55 -05:00
reyesj2
2e5682f11c 8.18.4 import evtx pipelines 2025-07-23 09:53:04 -05:00
Josh Patterson
2e7cb0e362 Merge pull request #14869 from Security-Onion-Solutions/saltuproc
only hold package if installed. remove redundant hold on salt-master package
2025-07-23 10:22:21 -04:00
Josh Patterson
56748ea6e7 only hold package if installed. remove redundant hold on salt-master package 2025-07-23 10:16:12 -04:00
reyesj2
621f03994c Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into foxtrot 2025-07-23 08:46:42 -05:00
Jorge Reyes
ab8ad72920 Merge pull request #14868 from Security-Onion-Solutions/reyesj2-patch-1
add some retry to so-elastic-fleet-integration-upgrade
2025-07-23 08:25:10 -05:00
reyesj2
3fc244ee85 8.18.4 2025-07-22 16:56:51 -05:00
reyesj2
4728b96c51 add a retry to so-elastic-fleet-integration-upgrade when the response isn't what was expected, so the error message isn't thrown into the sosetup / soup log 2025-07-22 16:16:28 -05:00
Doug Burks
f303363a73 Merge pull request #14867 from Security-Onion-Solutions/dougburks-patch-1
UPGRADE: Zeek Ethercat plugin #14783
2025-07-22 16:14:55 -04:00
Doug Burks
2a166af524 UPGRADE: Zeek Ethercat plugin #14783 2025-07-22 16:10:44 -04:00
Josh Patterson
ab4d055fd1 Merge pull request #14865 from Security-Onion-Solutions/saltuproc
don't allow bootstrap-salt to start daemons. splay non-manager highstates by 120 seconds
2025-07-22 13:37:28 -04:00
Josh Patterson
af49a8e4ef add back comment 2025-07-22 13:22:50 -04:00
Josh Patterson
669d219fdc splay highstate schedule by 2 minutes for non-managers 2025-07-22 11:52:50 -04:00
Josh Patterson
442aecb9f4 bootstrap: don't start daemon, use state to start it 2025-07-22 10:30:59 -04:00
Josh Patterson
beda0bc89c new state name. no longer need to close stdin, stderr, stdout 2025-07-21 15:40:36 -04:00
Josh Patterson
64fd6bf979 Merge remote-tracking branch 'origin/2.4/dev' into saltuproc 2025-07-21 14:42:07 -04:00
Mike Reeves
1955434416 Merge pull request #14860 from Security-Onion-Solutions/ja4
Add JA4 support
2025-07-21 11:54:52 -04:00
Jorge Reyes
ab6a083fa8 Merge pull request #14858 from Security-Onion-Solutions/reyesj2-patch-1
fix incorrect file ownership
2025-07-21 10:42:28 -05:00
Mike Reeves
eabca5df18 Update defaults.yaml 2025-07-21 11:01:33 -04:00
Mike Reeves
5dac3ff2a6 Update enabled.sls 2025-07-21 10:58:25 -04:00
Mike Reeves
93024738d3 Update config.sls 2025-07-21 10:57:45 -04:00
Mike Reeves
05a368681a Create config.zeek.ja4 2025-07-21 10:53:54 -04:00
Josh Patterson
246161018c upgrade and start salt process change 2025-07-18 14:17:38 -04:00
reyesj2
f27714890a update file ownership to socore 2025-07-18 09:35:51 -05:00
Jorge Reyes
47831eb300 Merge pull request #14856 from Security-Onion-Solutions/reyesj2-es-ts
elasticsearch troubleshoot script
2025-07-17 15:56:40 -05:00
reyesj2
0b1f2252ee elasticsearch troubleshoot script 2025-07-17 13:27:54 -05:00
Jorge Reyes
3ce6b555f7 Merge pull request #14854 from Security-Onion-Solutions/reyesj2-zeek-ja4
ja4 ignore empty strings
2025-07-17 11:16:20 -05:00
reyesj2
c29f11863e ja4 ignore empty strings 2025-07-17 10:47:00 -05:00
Jorge Reyes
952403b696 Merge pull request #14850 from Security-Onion-Solutions/reyesj2-zeek-ja4
ja4
2025-07-16 16:08:05 -05:00
reyesj2
b3eb06f53e ja4
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-07-16 15:56:34 -05:00
Josh Patterson
5198d0cdf0 Merge pull request #14848 from Security-Onion-Solutions/vlb2
hosted image. sos hw support
2025-07-16 15:43:14 -04:00
Josh Patterson
e61e2f04b3 handle hw not having sfp, disk, or copper. show none for total if that is the case 2025-07-16 15:24:43 -04:00
Josh Patterson
1aa876f4eb add missing hardware key 2025-07-16 14:20:55 -04:00
Josh Patterson
a3fb2f13be don't show state changes for user-data 2025-07-16 14:14:16 -04:00
Josh Patterson
9e77eae71e Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-07-16 12:16:27 -04:00
Josh Patterson
cd5de5cd05 add sos hw models 2025-07-16 12:14:54 -04:00
Josh Patterson
98a67530f5 update qcow2 hosted location 2025-07-16 12:14:25 -04:00
Josh Patterson
58ffe576d7 add pci mappings for sos hw 2025-07-16 12:09:39 -04:00
Josh Patterson
b0a515f2c3 update base cloud image location 2025-07-16 12:09:01 -04:00
Doug Burks
a037421809 Merge pull request #14845 from Security-Onion-Solutions/dougburks-patch-1
Simplify UniFi dashboards #14838
2025-07-16 07:28:45 -04:00
Doug Burks
6bb6c24641 Simplify UniFi dashboards #14838 2025-07-16 07:20:39 -04:00
Doug Burks
617834a044 Merge pull request #14842 from Security-Onion-Solutions/dougburks-patch-1
Issues #14836 #14837 #14838
2025-07-15 08:22:37 -04:00
Jorge Reyes
2c5c0e7830 Merge pull request #14840 from Security-Onion-Solutions/reyesj2-es-ea
kibana listingLimit
2025-07-14 16:17:32 -05:00
reyesj2
81d2c52867 kibana listingLimit 2025-07-14 16:08:11 -05:00
Doug Burks
4f8bd16910 FEATURE: Add SOC Dashboards for CEF, iptables, and UniFi logs #14838 2025-07-14 15:37:10 -04:00
Doug Burks
ab9d03bc2e FEATURE: Add SOC Dashboards for UniFi logs #14838 2025-07-14 12:21:08 -04:00
Doug Burks
10bf3e8fab FEATURE: Add SOC default fields for CEF logs #14837 2025-07-14 12:07:02 -04:00
Doug Burks
f8108e93d5 FEATURE: Add SOC default fields for iptables logs #14836 2025-07-14 12:04:46 -04:00
Jorge Reyes
3108556495 Merge pull request #14833 from Security-Onion-Solutions/reyesj2-patch-11
templates with error in name
2025-07-12 11:08:12 -05:00
reyesj2
f97b2444e7 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2-patch-11 2025-07-12 08:30:17 -05:00
reyesj2
415f456661 ignore composable templates with error in the name 2025-07-12 08:30:04 -05:00
Jason Ertel
e49b3fc260 Merge pull request #14832 from Security-Onion-Solutions/jertel/wip
fix typo
2025-07-11 11:32:18 -04:00
Jason Ertel
9b125fbe53 fix typo 2025-07-11 11:30:01 -04:00
Jason Ertel
10e3b32fed fix typo 2025-07-11 11:29:16 -04:00
Jorge Reyes
5386c07b66 Merge pull request #14830 from Security-Onion-Solutions/reyesj2-patch-10
split up bulk install of integrations
2025-07-10 19:09:08 -05:00
reyesj2
7149d20b42 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2-patch-10 2025-07-10 15:53:07 -05:00
reyesj2
8a57b79b77 make package installs go in groups of 25 or fewer 2025-07-10 15:52:59 -05:00
reyesj2
a4e8e7ea53 update syslog-tcp-514 policy 2025-07-10 13:12:26 -05:00
reyesj2
95ba327eb3 cribl metrics template rename 2025-07-10 11:08:46 -05:00
Jason Ertel
3056410fd1 Merge pull request #14828 from Security-Onion-Solutions/jertel/wip
exclude component updates indexes with error in the name
2025-07-10 07:51:34 -04:00
Jason Ertel
bf8da60605 exclude component updates indexes with error in the name 2025-07-10 07:47:53 -04:00
Jorge Reyes
226f858866 Merge pull request #14827 from Security-Onion-Solutions/foxtrot
check required files exist before loading map file
2025-07-09 17:31:11 -05:00
reyesj2
317d7dea7d check required files exist before loading map file
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-07-09 17:25:36 -05:00
Jorge Reyes
4e548ceb6e Merge pull request #14825 from Security-Onion-Solutions/foxtrot
ES 8.18.3
2025-07-09 16:15:48 -05:00
reyesj2
d846fe55e1 typos 2025-07-09 15:40:36 -05:00
Jorge Reyes
3b2942651e Update salt/elasticfleet/files/integrations/elastic-defend/elastic-defend-endpoints.json 2025-07-09 15:14:24 -05:00
reyesj2
fa6f4100dd ensure elasticsearch is up 2025-07-09 14:48:15 -05:00
reyesj2
33e2d18aa7 endpoint policy update 2025-07-09 13:59:01 -05:00
reyesj2
a03764d956 additional weird integration 2025-07-09 12:34:53 -05:00
reyesj2
3fb703cd22 check if generic template exists in installed component templates before defaulting to logs-filestream.generic@package 2025-07-09 11:59:25 -05:00
reyesj2
f1cbe23f57 update default kibana space 2025-07-08 21:17:57 -05:00
reyesj2
07a22a0b4b version 2025-07-08 18:32:14 -05:00
reyesj2
b9d813cef2 typo 2025-07-08 18:26:46 -05:00
reyesj2
76ab0eac03 foxtrot
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-07-08 16:45:27 -05:00
Jorge Reyes
08a2ad2c40 Merge pull request #14824 from Security-Onion-Solutions/reyesj2/es8183
es 8.18.3
2025-07-08 16:44:54 -05:00
reyesj2
47bbc9987e elastic agent upgrade prereq 2025-07-08 16:39:48 -05:00
reyesj2
59628ec8b7 revert foxtrot change 2025-07-08 16:15:18 -05:00
reyesj2
bef2fa9e8d 8.18.3 pipeline updates 2025-07-08 16:09:16 -05:00
reyesj2
d4f0cbcb67 changes for 'generic' integrations with no component templates assigned. Default to using the logs-filestream.generic@package component template 2025-07-08 15:23:46 -05:00
Josh Brower
9e96b12e94 Merge pull request #14816 from Security-Onion-Solutions/2.4/socusernames
Add user.name to kratos query
2025-07-08 10:11:40 -04:00
Josh Brower
42552810fb Add user.name to kratos query 2025-07-08 09:50:08 -04:00
reyesj2
4bf2c931e9 make sure required file exists to generate ADDON_INTEGRATION_DEFAULTS 2025-07-08 08:43:24 -05:00
Jorge Reyes
beda6ac20d Merge pull request #14813 from Security-Onion-Solutions/reyesj2/es8183
es 8.18.3
2025-07-07 12:59:23 -05:00
reyesj2
d8be6e42e1 es 8.18.3 2025-07-07 12:58:00 -05:00
Josh Patterson
4fb7fe9e45 Merge pull request #14803 from Security-Onion-Solutions/vlb2
ensure hypervisor is removed from salt cloud profiles when key is deleted
2025-07-02 16:29:48 -04:00
Josh Patterson
6d7066c381 add license 2025-07-02 16:20:30 -04:00
Josh Patterson
d003e1380f ensure hypervisor is removed from salt cloud profiles when key is deleted 2025-07-02 16:14:43 -04:00
Josh Patterson
ef8badaef1 Merge pull request #14800 from Security-Onion-Solutions/vlb2
only run storage state if box has nvme
2025-07-01 16:36:31 -04:00
Josh Patterson
dea9c149d7 only run storage state if box has nvme 2025-06-30 15:30:39 -04:00
coreyogburn
56c9fa3129 Merge pull request #14793 from Security-Onion-Solutions/cogburn/playbooks-import
Refactors playbook repo configuration
2025-06-30 13:02:39 -06:00
Corey Ogburn
a86105294b Playbook Annotations 2025-06-30 12:50:56 -06:00
Corey Ogburn
33c23c30d3 Refactors playbook repo configuration
Replaces individual playbook repo fields with an array of repos to support multiple playbook sources. Refactors Jinja.
2025-06-30 11:43:02 -06:00
Josh Patterson
fe76a79ebd Merge pull request #14792 from Security-Onion-Solutions/vlb2
allow libvirt states
2025-06-30 11:25:41 -04:00
Josh Patterson
5035ec2539 allow libvirt states 2025-06-30 11:21:45 -04:00
Josh Patterson
9f35b20664 Merge pull request #14791 from Security-Onion-Solutions/vlb2
allow standalone and managersearch to run salt.cloud state
2025-06-30 10:29:34 -04:00
Josh Patterson
b93c6c0270 allow standalone and managersearch to run salt.cloud state 2025-06-30 09:51:40 -04:00
Josh Patterson
e5dd403dd1 Merge pull request #14784 from Security-Onion-Solutions/vlb2
hardware virtualization
2025-06-27 12:09:23 -04:00
Josh Patterson
493359e5a2 cleanup 2025-06-27 11:00:35 -04:00
Josh Patterson
b0f5218775 add quotes 2025-06-27 10:58:14 -04:00
Josh Patterson
8fdc7049f9 add missing , 2025-06-27 10:53:03 -04:00
Josh Patterson
d79d7e2ba1 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-26 15:02:00 -04:00
Jorge Reyes
596b3e2614 Merge pull request #14776 from Security-Onion-Solutions/reyesj2/msiflags
soup 2.4.170
2025-06-26 10:01:33 -05:00
Josh Patterson
59f8544324 Merge pull request #14778 from Security-Onion-Solutions/vlb2
hardware virtualization
2025-06-25 17:22:53 -04:00
Josh Patterson
daaad3699c allow wheel files 2025-06-25 17:20:17 -04:00
Josh Patterson
1e9f3a65a4 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-25 15:35:30 -04:00
Josh Patterson
b2acf2f807 change logic for determining if vm was destroyed 2025-06-25 15:05:49 -04:00
reyesj2
34e561f358 soup 2.4.170 2025-06-25 13:47:44 -05:00
reyesj2
e5a07170b3 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/msiflags 2025-06-25 13:44:09 -05:00
Mike Reeves
02dbbc5289 Merge pull request #14775 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2025-06-25 13:59:36 -04:00
Mike Reeves
5e62d3ecb2 Update 2-4.yml 2025-06-25 13:58:57 -04:00
Mike Reeves
373ef9fe91 Update VERSION 2025-06-25 13:58:25 -04:00
Mike Reeves
2f1e6fd625 Merge pull request #14773 from Security-Onion-Solutions/2.4/dev
2.4.160
2025-06-25 13:49:06 -04:00
Mike Reeves
6b8ef43cc1 Merge pull request #14772 from Security-Onion-Solutions/2.4.160
2.4.160
2025-06-25 13:02:06 -04:00
Mike Reeves
7e746b87c5 2.4.160 2025-06-25 13:00:26 -04:00
Josh Patterson
2ad2a3110c Merge pull request #14771 from Security-Onion-Solutions/revert-14770-saltupgradechange
Revert "change salt upgrade process"
2025-06-25 12:21:00 -04:00
Josh Patterson
bc24a6c574 Revert "change salt upgrade process" 2025-06-25 12:19:45 -04:00
Josh Patterson
b25bb0faf0 Merge pull request #14770 from Security-Onion-Solutions/saltupgradechange
change salt upgrade process
2025-06-25 11:31:57 -04:00
Josh Patterson
38c74b46b6 change salt upgrade process 2025-06-25 11:05:28 -04:00
reyesj2
fbb6d8146a regen installers 2025-06-25 00:21:49 -05:00
Jason Ertel
83ecc02589 Merge pull request #14765 from Security-Onion-Solutions/jertel/wip
fix logging
2025-06-24 11:05:19 -04:00
Jason Ertel
21d9964827 fix logging 2025-06-24 11:03:08 -04:00
Jason Ertel
f3b6d9febb Merge pull request #14764 from Security-Onion-Solutions/jertel/wip
refactor airgap playbook to eliminate dupe code and shrink ISO
2025-06-24 09:39:43 -04:00
Jason Ertel
b052a75e64 refactor airgap playbook to eliminate dupe code and shrink ISO 2025-06-24 09:34:57 -04:00
Josh Patterson
0602601655 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-20 16:25:16 -04:00
Josh Patterson
480e248131 ensure bond and interfaces only added once 2025-06-20 16:24:54 -04:00
Josh Brower
6fc7c930a6 Merge pull request #14759 from Security-Onion-Solutions/2.4/fieldmappings
Add support for dns.resolved_ip
2025-06-20 15:08:05 -04:00
Josh Brower
31cd5b1365 Add support for dns.resolved_ip 2025-06-20 15:02:59 -04:00
Josh Patterson
19fb081fa0 additional log info 2025-06-13 15:21:38 -04:00
Josh Patterson
d3b1a4f928 use state file to only send highstate initiated event once 2025-06-13 15:21:23 -04:00
Josh Patterson
4729e194a0 spell ensure 2025-06-12 17:01:23 -04:00
Josh Patterson
ab6060c484 restore VM to VMs file so that it is still seen in soc if vm destroy fails 2025-06-12 16:50:38 -04:00
Josh Patterson
0b65021f75 exit 1 if vm is not destroyed 2025-06-12 16:49:56 -04:00
Josh Patterson
bd4f2093db add vm delete warning for ui element 2025-06-11 09:39:15 -04:00
Josh Patterson
48dfcab9f0 ensure salt-minion is running, and salt-master if manager, before mine update 2025-06-10 13:44:24 -04:00
Josh Patterson
849f8f13bc create virt feature pillars 160 to 170 soup 2025-06-10 13:08:42 -04:00
Josh Patterson
07359ad6ec Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-09 14:48:26 -04:00
Josh Patterson
1e2453eddf debug loglevel 2025-06-09 14:47:53 -04:00
Josh Patterson
4c9773c68d reenable sslverify 2025-06-09 14:37:06 -04:00
Josh Patterson
4666670f4f remove logging prefixes 2025-06-09 13:53:23 -04:00
Josh Patterson
0f71b45e0f CPU model=host is deprecated 2025-06-09 09:55:16 -04:00
Josh Brower
92e9bd43ca Merge pull request #14723 from Security-Onion-Solutions/2.4/airgapfix
Create dir if needed
2025-06-09 07:47:59 -04:00
Josh Brower
a600c64229 Create dir if needed 2025-06-09 07:33:02 -04:00
Josh Brower
121dec0180 Merge pull request #14722 from Security-Onion-Solutions/2.4/airgapfix
Add nsm bind
2025-06-08 12:30:58 -04:00
Josh Brower
b451c4c034 Merge pull request #14721 from Security-Onion-Solutions/2.4/SupExtraction
Suppress alerts
2025-06-08 12:25:35 -04:00
Josh Brower
dbdbffa4b0 Add nsm bind 2025-06-08 08:23:09 -04:00
Josh Brower
f360c6ecbc Suppress alerts 2025-06-07 09:29:59 -04:00
Josh Brower
b9ea151846 Merge pull request #14719 from Security-Onion-Solutions/2.4/playbookairgap
Airgap tweaks
2025-06-06 17:52:08 -04:00
Josh Brower
b428573a0a Airgap tweaks 2025-06-06 17:48:49 -04:00
Josh Brower
350e1c9d91 Merge pull request #14718 from Security-Onion-Solutions/2.4/playbookairgap
Add support for Airgap for Playbooks
2025-06-06 16:55:32 -04:00
Josh Brower
a3b5db5945 Add support for Airgap for Playbooks 2025-06-06 16:17:14 -04:00
Josh Patterson
3efe0eac13 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-06 08:54:23 -04:00
Jason Ertel
aca54b4645 Merge pull request #14714 from Security-Onion-Solutions/jertel/wip
enable STS for browser redirects
2025-06-05 18:48:46 -04:00
Jason Ertel
643afeeae7 enable STS for browser redirects 2025-06-05 16:02:27 -04:00
Josh Patterson
d9fb79403b seems the new openldap / libldap.so.2 doesn't have the EVP_md2 dependency, so check for it before trying to remove it 2025-06-05 15:57:56 -04:00
Josh Patterson
2ef89be67d Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-05 09:40:44 -04:00
Jason Ertel
43e994f2c2 Merge pull request #14711 from Security-Onion-Solutions/jertel/wip
update to new config location
2025-06-04 17:22:13 -04:00
Jason Ertel
ab89858d04 update to new config location 2025-06-04 17:19:53 -04:00
Josh Patterson
395c4e37ba fix issue with predictable names after kernel update 2025-06-04 16:57:59 -04:00
Jason Ertel
3da2c7cabc Merge pull request #14701 from Security-Onion-Solutions/jertel/wip
upgrade registry to 3.0.0
2025-06-04 09:22:03 -04:00
Jason Ertel
832d66052e upgrade registry to 3.0.0 2025-06-04 09:13:54 -04:00
coreyogburn
add538f6dd Merge pull request #14700 from Security-Onion-Solutions/cogburn/new-playbooks-repo
Updated Playbook Repo Config
2025-06-03 14:21:23 -06:00
Corey Ogburn
fc9107f129 Updated Playbook Repo Config
The repo and folder have changed. We're splitting out playbooks into their own repo: github.com/security-onion-solutions/securityonion-resources-playbooks.
2025-06-03 13:33:30 -06:00
Jorge Reyes
d9790b04f6 Merge pull request #14676 from Security-Onion-Solutions/reyesj2/fixsystemtime
fix system integration time overwrite and delete unused ingest pipeline
2025-06-03 14:01:42 -05:00
Jorge Reyes
88fa04b0f6 Merge pull request #14698 from Security-Onion-Solutions/reyesj2/esidxinfo
add so-elasticsearch-index-growth
2025-06-03 09:37:54 -05:00
reyesj2
d240fca721 remove usage of temp file 2025-06-03 08:45:04 -05:00
reyesj2
4d6171bde6 rename script
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-06-03 07:32:12 -05:00
reyesj2
6238a5b3ed tighten up search timeframe 2025-06-02 16:31:26 -05:00
reyesj2
061600fa7a shebang line 2025-06-02 15:55:46 -05:00
reyesj2
1b89cc6818 so-elasticsearch-index-growth script 2025-06-02 15:41:03 -05:00
Josh Patterson
6e1e617124 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-06-02 14:06:00 -04:00
Josh Brower
7f8bf850a2 Merge pull request #14697 from Security-Onion-Solutions/2.4/playbook-updates
Use Stable branch
2025-06-02 13:13:43 -04:00
Josh Brower
0277891392 Use Stable branch 2025-06-02 13:10:13 -04:00
Josh Patterson
08d99a3890 remove unneeded files 2025-05-30 12:50:59 -04:00
Doug Burks
773606d876 Merge pull request #14691 from Security-Onion-Solutions/dougburks-patch-1
add echo to end of so-elasticsearch-ilm-start and so-elasticsearch-ilm-stop
2025-05-30 12:03:32 -04:00
Doug Burks
bf38055a6c add echo to end of so-elasticsearch-ilm-stop 2025-05-30 11:41:50 -04:00
Doug Burks
90b8d6b2f7 add echo to end of so-elasticsearch-ilm-start 2025-05-30 11:41:11 -04:00
Doug Burks
2d78fa1a41 Merge pull request #14689 from Security-Onion-Solutions/dougburks-patch-1
FIX: so-elasticsearch-ilm-start needs shebang #14688
2025-05-30 09:58:18 -04:00
Doug Burks
45d541d4f2 FIX: so-elasticsearch-ilm-start needs shebang #14688 2025-05-30 09:55:53 -04:00
Josh Patterson
b3c48674c5 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-05-30 09:52:14 -04:00
Doug Burks
8d42739030 Merge pull request #14687 from Security-Onion-Solutions/dougburks-patch-1
FIX: so-suricata-testrule should disable pcap logging #14685
2025-05-30 09:26:37 -04:00
Doug Burks
27358137f2 FIX: so-suricata-testrule should disable pcap logging #14685 2025-05-30 09:24:41 -04:00
Doug Burks
a54b9ddbe4 Merge pull request #14683 from Security-Onion-Solutions/dougburks-patch-1
FIX: Improve annotation for Elasticsearch index deletion #14682
2025-05-29 15:26:35 -04:00
Doug Burks
58936b31d5 FIX: Improve annotation for Elasticsearch index deletion #14682 2025-05-29 15:19:21 -04:00
reyesj2
fcdacc3b0d fix system integration time overwrite and delete unused ingest pipeline 2025-05-29 12:21:28 -05:00
Josh Patterson
40531dd919 add LSHOSTNAME option to so-minion. use -L in sominion_setup reactor 2025-05-29 12:22:52 -04:00
Josh Patterson
05dfce62fb corrections to allowed_states 2025-05-28 13:34:17 -04:00
Jorge Reyes
9df9cc2247 Merge pull request #14668 from Security-Onion-Solutions/reyesj2-patch-1
use zeek network.community_id when available
2025-05-28 12:15:18 -05:00
Jorge Reyes
d3ee5ed7b8 use zeek network.community_id when available 2025-05-28 09:20:41 -05:00
Josh Patterson
502e1e1f1b Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-05-23 15:55:21 -04:00
Josh Patterson
e5b12ecdb9 need to allow for pw removal 2025-05-23 12:44:42 -04:00
Josh Patterson
be5e41227f rename step 2025-05-23 11:41:45 -04:00
Josh Patterson
08f208cd38 ensure bootstrap-salt is updated for salt-cloud installs 2025-05-22 15:37:34 -04:00
Jason Ertel
db08ac9022 Merge pull request #14651 from Security-Onion-Solutions/jertel/mhf
Backport Hotfix to dev
2025-05-22 13:44:36 -04:00
Jason Ertel
ad5a27f991 clear out hf 2025-05-22 13:39:59 -04:00
Mike Reeves
07ec302267 Merge pull request #14650 from Security-Onion-Solutions/hotfix/2.4.150
Hotfix 2.4.150
2025-05-22 13:35:33 -04:00
Mike Reeves
112704e340 Merge pull request #14649 from Security-Onion-Solutions/hf24150
2.4.150 Hotfix
2025-05-22 13:25:50 -04:00
Mike Reeves
e6753440f8 2.4.150 Hotfix 2025-05-22 13:18:13 -04:00
Josh Patterson
18d899a7f9 add so-docker-prune from hotfix/2.4.150 2025-05-22 09:29:51 -04:00
Josh Patterson
b2650da057 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-05-22 09:10:20 -04:00
Josh Patterson
31df0b5d7d create vm pillar files 2025-05-22 09:10:09 -04:00
Josh Patterson
a430a47a30 fix allowed_states check 2025-05-21 14:45:34 -04:00
Mike Reeves
00f811ce31 Merge pull request #14646 from Security-Onion-Solutions/hotfix4150
Update HOTFIX
2025-05-21 14:38:00 -04:00
Mike Reeves
ddd023c69a Update so-docker-prune 2025-05-21 13:47:45 -04:00
Mike Reeves
2911025c0c Update HOTFIX 2025-05-21 13:45:32 -04:00
Josh Brower
2e8ab648fd Merge pull request #14643 from Security-Onion-Solutions/2.4/parsingfix
Tighten parsing
2025-05-21 12:08:10 -04:00
Josh Brower
b753d40861 Tighten parsing 2025-05-20 17:06:11 -04:00
Josh Patterson
a32aac7111 apply salt.cloud.config when hypervisor joins 2025-05-20 13:38:24 -04:00
Josh Brower
2fff6232c1 Merge pull request #14638 from Security-Onion-Solutions/2.4/playbooks-parsing
Add parsing for Playbook
2025-05-19 18:06:05 -04:00
coreyogburn
f751c82e1c Merge pull request #14639 from Security-Onion-Solutions/cogburn/ruleset-name
Add RulesetName to Rule Repos
2025-05-19 15:40:02 -06:00
Corey Ogburn
39f74fe547 Use the new JSON object editor for RulesRepos config entries 2025-05-19 15:38:45 -06:00
Corey Ogburn
11fb33fdeb Add RulesetName to Rule Repos
Fill in `rulesetName` in the rules repos of the ElastAlert and Strelka engines. These will act as an example to anybody adding their repos to these lists. The field is not required, but helps avoid collisions when managing repos as the value is used for the folder name. When not present, the final folder of the repo url is used as the rulesetName and as the folder name on disk.

Note that rulesetNames including a `/` will create extra folders in the path but the rulesetName will contain the slash, i.e. `rulesetName="joesecurity/sigma-rules"` will create the nested structure of `reposFolder/joesecurity/sigma-rules` containing the contents of the repo. All rules imported from this repo will have the ruleset of `joesecurity/sigma-rules`.
2025-05-19 14:19:56 -06:00
Josh Brower
58f4db95ea Create playbooks dir 2025-05-19 15:31:50 -04:00
Josh Brower
b55cb257b6 Add parsing for Playbook 2025-05-19 13:25:27 -04:00
Josh Patterson
b0a8191f59 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-05-19 10:02:26 -04:00
Josh Patterson
28aedcf50b remove vm map example 2025-05-19 09:58:43 -04:00
Josh Patterson
6988f03ebc setup bridge and fix salt before first highstate for hypervisors 2025-05-16 14:24:07 -04:00
Jorge Reyes
2948577b0e Merge pull request #14629 from Security-Onion-Solutions/reyesj2-wt2
logstash isn't running on receivers or manager when kafka is the glob…
2025-05-16 10:27:18 -05:00
reyesj2
870a9ff80c dedup 2025-05-16 10:24:09 -05:00
reyesj2
689db57f5f logstash isn't running on receivers or manager when kafka is the global.pipeline 2025-05-16 10:05:38 -05:00
coreyogburn
2768722132 Merge pull request #14623 from Security-Onion-Solutions/cogburn/playbooks
Cogburn/playbooks
2025-05-15 13:27:02 -06:00
Josh Brower
df103b3dca Spacing 2025-05-14 16:36:59 -04:00
Josh Brower
0542c77137 Remove wip config 2025-05-14 16:35:09 -04:00
Josh Brower
9022dc24fb Add Parsing for Playbooks 2025-05-14 13:19:50 -06:00
Corey Ogburn
78b7068638 Playbook Settings
Map a folder from the manager's soc config folder to soc's sensoroni folder for storing the playbook repo.

Added playbook module section with default values.
2025-05-14 13:19:49 -06:00
Mike Reeves
70339b9a94 Merge pull request #14621 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update soup
2025-05-14 13:48:53 -04:00
Mike Reeves
5c8460fd26 Update soup 2025-05-14 13:47:26 -04:00
Mike Reeves
69e90e1e70 Update soup
Souper Duper!
2025-05-14 13:41:08 -04:00
Jason Ertel
8c5ea19d3c Merge pull request #14619 from Security-Onion-Solutions/jertel/wip
improve consistency
2025-05-14 09:31:56 -04:00
Jason Ertel
82562f89f6 improve consistency 2025-05-14 09:23:35 -04:00
Mike Reeves
ede36b5ef8 Merge pull request #14614 from Security-Onion-Solutions/TOoSmOotH-patch-1
Get ready for .160
2025-05-12 10:49:46 -04:00
Mike Reeves
fd00a4db85 Update VERSION 2025-05-12 10:48:52 -04:00
Mike Reeves
510c7a0c19 Update 2-4.yml 2025-05-12 10:48:12 -04:00
Mike Reeves
2a7365c7d7 Merge pull request #14612 from Security-Onion-Solutions/2.4/dev
2.4.150
2025-05-12 10:34:22 -04:00
Mike Reeves
f7ca3e45ac Merge pull request #14611 from Security-Onion-Solutions/2.4.150
2.4.150
2025-05-12 10:24:27 -04:00
Mike Reeves
0172272e1b 2.4.150 2025-05-12 09:58:09 -04:00
Josh Brower
776f574427 Merge pull request #14609 from Security-Onion-Solutions/2.4/jbrower-patch-2
Cleanup
2025-05-09 10:42:05 -04:00
Josh Brower
a0aafb7c51 Cleanup 2025-05-09 10:29:23 -04:00
Jason Ertel
09ec14acd8 Merge pull request #14608 from Security-Onion-Solutions/m0duspwnens-patch-1
fix file permissions for download
2025-05-09 09:29:33 -04:00
Josh Patterson
61f8b251f0 cp to mv 2025-05-09 09:25:46 -04:00
Josh Patterson
75dd04c398 fix file permissions for download 2025-05-09 09:21:30 -04:00
Josh Brower
e2ef544bfc Merge pull request #14607 from Security-Onion-Solutions/2.4/jbpatch
Regen installers
2025-05-09 08:21:46 -04:00
Josh Brower
daad99a0b6 Regen installers 2025-05-09 08:17:46 -04:00
Jason Ertel
fdeee45d3f Merge pull request #14605 from Security-Onion-Solutions/jertel/wip
more analyzer dep updates
2025-05-08 15:57:08 -04:00
Jason Ertel
7fe9e2cbfd more analyzer dep updates 2025-05-08 15:53:16 -04:00
Jorge Reyes
74d557a5e0 Merge pull request #14603 from Security-Onion-Solutions/reyesj2/fix-14602
add null check
2025-05-08 08:34:53 -05:00
Doug Burks
82f9043a14 Merge pull request #14604 from Security-Onion-Solutions/dougburks-patch-1
Update defaults.yaml to replace remaining instances of identity_id with user.name
2025-05-08 09:14:03 -04:00
Doug Burks
a8cb18bb2e Update defaults.yaml to replace remaining instances of identity_id with user.name 2025-05-08 09:09:26 -04:00
reyesj2
e1d31c895e add null check 2025-05-07 21:25:30 -05:00
Josh Brower
e661c73583 Merge pull request #14601 from Security-Onion-Solutions/2.4/upgradeeafix
Only upgrade node agents for local stack version
2025-05-07 16:11:10 -04:00
Josh Brower
42ba778740 Only upgrade node agents for local stack version 2025-05-07 16:08:47 -04:00
Josh Brower
204d53e4a7 Merge pull request #14596 from Security-Onion-Solutions/2.4/kratosuser
Show user.name instead of id
2025-05-07 11:21:18 -04:00
Josh Brower
d47a798645 Show user.name instead of id 2025-05-07 11:17:00 -04:00
Josh Patterson
9e0f13cce5 no longer need to create hypervisor pillar directory 2025-05-07 09:01:22 -04:00
Jason Ertel
68ea229a1c Merge pull request #14595 from Security-Onion-Solutions/jertel/wip
update default actions for subgrid support
2025-05-06 14:35:01 -04:00
Jason Ertel
1ecf2b29fc update default actions for subgrid support 2025-05-06 13:56:16 -04:00
Josh Patterson
8c37a4454c merge and fix conflicts 2025-05-06 11:55:42 -04:00
Josh Patterson
ef436026d5 info to debug. remove old reactors 2025-05-06 11:51:59 -04:00
Josh Patterson
a595bc4b31 info to debug log level 2025-05-06 10:13:02 -04:00
Jorge Reyes
8a321e3f15 Merge pull request #14593 from Security-Onion-Solutions/reyesj2/feat-254
missing globals.is_manager swap
2025-05-06 09:01:58 -05:00
reyesj2
b4214f73f4 typo 2025-05-06 09:01:22 -05:00
reyesj2
b9da7eb35b missing globals.is_manager swap 2025-05-06 08:58:47 -05:00
Jorge Reyes
d6139d0f19 Merge pull request #14580 from Security-Onion-Solutions/reyesj2/feat-254
collect es index sizes
2025-05-06 08:39:16 -05:00
Josh Patterson
d2fe8da082 Merge pull request #14592 from Security-Onion-Solutions/fleetlocal
copy so_agent-installers to nsm for nginx
2025-05-05 13:47:22 -04:00
Josh Patterson
1931de2e52 copy so_agent-installers to nsm for nginx 2025-05-05 12:40:56 -04:00
Josh Patterson
d68a14d789 Merge pull request #14590 from Security-Onion-Solutions/checkmasterstatus
check master status after highstate in case of master service restart
2025-05-02 17:04:03 -04:00
Josh Patterson
f988af52f6 check master status after highstate in case of master service restart 2025-05-02 15:41:21 -04:00
reyesj2
fd02950864 use globals.is_manager 2025-05-02 13:36:28 -05:00
Josh Patterson
a167e5e520 fix whitespace for multiple hypervisors 2025-05-02 11:32:03 -04:00
Josh Patterson
26d7ceebb2 libvirt.images requires scripts from hypervisor state 2025-05-02 11:30:35 -04:00
Mike Reeves
382c3328df Merge pull request #14588 from Security-Onion-Solutions/TOoSmOotH-patch-6
enable the delete on heavynodes
2025-05-02 08:55:55 -04:00
Mike Reeves
92d8985f3c enable the delete on heavynodes 2025-05-02 08:52:57 -04:00
Jason Ertel
c2d9523e09 Merge pull request #14587 from Security-Onion-Solutions/jertel/wip
update deps
2025-05-02 08:26:28 -04:00
Jason Ertel
c34914c8de update deps 2025-05-02 08:19:54 -04:00
Jason Ertel
d020bf5504 Merge pull request #14584 from Security-Onion-Solutions/jertel/wip
update analyser deps for py 3.13
2025-05-01 15:59:04 -04:00
Jason Ertel
95d8e0f318 stop double workflow runs 2025-05-01 15:46:04 -04:00
Jason Ertel
be4df48742 deps update 2025-05-01 15:44:34 -04:00
Jason Ertel
ba4df4c8b6 dep updates 2025-05-01 15:36:20 -04:00
Jason Ertel
86eab6fda2 dep updates 2025-05-01 15:31:26 -04:00
Jason Ertel
5d2bed950e update analyser deps for py 3.13 2025-05-01 11:16:58 -04:00
Josh Patterson
e5c0f8a46c allow for dhcp4 2025-04-30 16:09:57 -04:00
reyesj2
044d230158 get 200 from es before collecting metrics 2025-04-30 13:05:36 -05:00
Josh Patterson
5965459423 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-30 13:11:12 -04:00
Josh Patterson
3a31d80a85 fix regex and label for hypervisor annotation 2025-04-30 13:10:49 -04:00
Josh Patterson
5a8e542f96 create macro for resource regex and fix regex logic for mem and cpu 2025-04-30 13:08:54 -04:00
Josh Patterson
7a60afdd5a remove duplicate logging 2025-04-30 09:11:55 -04:00
Josh Patterson
c3b3e0ab21 manager hostname in pubkey 2025-04-30 08:12:35 -04:00
reyesj2
b918a5e256 old attempt 2025-04-29 16:05:55 -05:00
reyesj2
1ddc653a52 fix input error in agentstatus script 2025-04-29 13:40:39 -05:00
reyesj2
85f5f75c84 use salt location for es curl.config 2025-04-29 12:42:05 -05:00
reyesj2
3cb3281cd5 add metrics for es index sizes 2025-04-29 12:38:41 -05:00
Josh Patterson
6246e25fbe 640 for pubkey and empty pillar 2025-04-29 10:19:01 -04:00
Jason Ertel
b858543a60 Merge pull request #14578 from Security-Onion-Solutions/jertel/wip
excluded harmless log error; suppress so-user grep output
2025-04-29 09:46:48 -04:00
Jason Ertel
5ecb483596 excluded harmless log error; suppress so-user grep output 2025-04-29 09:35:36 -04:00
Josh Patterson
102ddaf262 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-29 08:18:25 -04:00
Josh Patterson
151db2af30 ensure ownership and mode 2025-04-28 15:38:29 -04:00
Mike Reeves
e9a4668c63 Merge pull request #14575 from Security-Onion-Solutions/TOoSmOotH-patch-5
Add url_base to the web certificate
2025-04-28 08:43:13 -04:00
Mike Reeves
5f45327372 Update enabled.sls 2025-04-28 08:39:26 -04:00
Mike Reeves
ac8ac23522 Update enabled.sls 2025-04-28 08:36:43 -04:00
Josh Patterson
b2bd8577b9 only update mine if hypervisor provided 2025-04-24 12:59:43 -04:00
Josh Patterson
4df3070a1d ensure file permissions of libvirt images 2025-04-24 12:59:06 -04:00
Josh Patterson
142609ea67 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-24 09:41:27 -04:00
Jorge Reyes
46779513de Merge pull request #14569 from Security-Onion-Solutions/reyesj2/fix-225
fix storage metrics on stig installs
2025-04-23 15:38:14 -05:00
reyesj2
e27a0d8f7a Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/fix-225 2025-04-23 15:04:08 -05:00
reyesj2
9e4c456eb9 fix nsm influxdb alert 2025-04-23 15:02:57 -05:00
reyesj2
400739736d add monitored mounts, ignores docker overlays 2025-04-23 15:02:23 -05:00
reyesj2
196e0c1486 change root bind so existing references to 'r[\"path\"] == \"/\")' work as expected 2025-04-23 15:01:48 -05:00
reyesj2
76d63bb2ad remove unused HOST_PROC env 2025-04-23 15:00:21 -05:00
Josh Patterson
ed80c4e13b Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-23 15:42:04 -04:00
Jorge Reyes
69c904548c Merge pull request #14561 from Security-Onion-Solutions/reyesj2/fix-14516
Disable auto-upgrading non-default integrations
2025-04-23 13:59:46 -05:00
Josh Patterson
272410ecae Merge pull request #14568 from Security-Onion-Solutions/fixem
Fixem
2025-04-23 13:28:29 -04:00
Josh Patterson
19514a969b use file.directory 2025-04-23 08:41:53 -04:00
Josh Patterson
77f88371b8 manage default and local in separate states 2025-04-23 08:30:37 -04:00
reyesj2
559190aee3 upgrade integrations if they aren't in an agent policy 2025-04-22 09:38:22 -05:00
reyesj2
8c4cf0ba08 keep hard failure 2025-04-22 07:29:12 -05:00
reyesj2
e17fea849a continue loop after encountering error with first 2025-04-21 20:32:42 -05:00
Jorge Reyes
b2c09d6fd9 Merge pull request #14560 from Security-Onion-Solutions/reyesj2-patch-2
make homedirs
2025-04-21 16:39:26 -05:00
reyesj2
30c4acb828 group 2025-04-21 16:38:16 -05:00
reyesj2
4ec185a9c7 make logstash and kratos homedirs 2025-04-21 16:26:20 -05:00
reyesj2
166e4e0ebc make bool 2025-04-21 15:51:36 -05:00
reyesj2
4b7478654f run optional integrations script so packages get installed. Hold updates unless auto_update_integrations is set 2025-04-21 14:29:37 -05:00
Jason Ertel
5bd84c4e30 Merge pull request #14558 from Security-Onion-Solutions/jertel/wip
researching install failures
2025-04-21 14:34:30 -04:00
Jason Ertel
f5a8e917a4 researching install failures 2025-04-21 14:32:33 -04:00
reyesj2
4e6c707067 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/fix-14516 2025-04-21 10:48:25 -05:00
reyesj2
c89adce3a1 default disable automatic upgrades for optional integration packages & policies 2025-04-21 10:48:18 -05:00
Mike Reeves
af1bee4c68 Merge pull request #14556 from Security-Onion-Solutions/TOoSmOotH-patch-4
Disable Elasticsearch delete
2025-04-21 08:57:13 -04:00
Mike Reeves
e3c8d22cac Update enabled.sls 2025-04-18 16:43:17 -04:00
Josh Patterson
285d73d526 enable/disable soqemussh. allow for pw to be set 2025-04-18 14:07:32 -04:00
Josh Patterson
0bcb6040c9 recreate sool9 if user-data or meta-data cloud-init changes 2025-04-18 14:02:17 -04:00
Josh Brower
3f13f8deae Merge pull request #14543 from Security-Onion-Solutions/2.4/kratos_identity
Support Kratos user.name lookup
2025-04-17 16:13:58 -04:00
Jason Ertel
13d96ae5af Merge pull request #14551 from Security-Onion-Solutions/jertel/wip
additional grid support
2025-04-17 12:54:28 -04:00
Jason Ertel
3b447b343f fix typo 2025-04-17 11:51:45 -04:00
Jason Ertel
d0375d3c7e fix typo 2025-04-17 11:51:21 -04:00
Jason Ertel
b607689993 improve regex 2025-04-17 11:47:52 -04:00
Jason Ertel
8f1e528f1c improve regex 2025-04-17 11:09:39 -04:00
Jason Ertel
2f8d8d2d96 Merge branch '2.4/dev' into jertel/wip 2025-04-16 15:55:34 -04:00
Jason Ertel
366e39950a subord annotations; ensure node reboots occur in background 2025-04-16 15:55:16 -04:00
Josh Brower
5fd7bf311d Add fallback 2025-04-15 13:57:55 -04:00
Josh Brower
152fdaa7bb Support Kratos user.name lookup 2025-04-15 11:40:43 -04:00
Josh Patterson
07ef3d632c Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-15 08:08:12 -04:00
Jorge Reyes
7f5cde9a1c Merge pull request #14540 from Security-Onion-Solutions/reyesj2/fix-14417
FIX: Add log.origin.file.line to base templates
2025-04-14 15:46:54 -05:00
reyesj2
58df566c79 add mapping for metadata.kafka.timestamp 2025-04-14 14:30:40 -05:00
reyesj2
395b81ffc6 FIX: Add log.origin.file.line to base templates #14417 2025-04-14 14:30:00 -05:00
Jorge Reyes
e3d5829b89 Merge pull request #14539 from Security-Onion-Solutions/reyesj2-patch-1
fix kafka delayed initial connection with remote clients on multi-broker deployments
2025-04-14 13:06:20 -05:00
reyesj2
df31c349b0 update annotations 2025-04-14 12:32:31 -05:00
reyesj2
759d5f76cd fix kafka external access slow to establish initial connection 2025-04-14 12:32:22 -05:00
Josh Brower
240484deea Merge pull request #14537 from Security-Onion-Solutions/2.4/idstoolsfix
Run so-rule-update when it changes
2025-04-14 11:20:32 -04:00
Josh Brower
ceabb673e0 Refactor for so-rule-update 2025-04-14 11:08:35 -04:00
Jorge Reyes
f1070992a8 Merge pull request #14538 from Security-Onion-Solutions/reyesj2-patch-5 2025-04-14 08:41:35 -05:00
reyesj2
c0f9c344bb set logstash log rollover when log size exceeds 1G
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-04-14 08:13:27 -05:00
Josh Patterson
21bb325157 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-14 08:22:42 -04:00
Josh Brower
00029e6f83 Run so-rule-update when it changes 2025-04-14 08:04:46 -04:00
reyesj2
9459bf8a27 allow larger kafka log files before forcing rollover 2025-04-11 14:41:32 -05:00
Josh Patterson
96e99fc442 Merge pull request #14535 from Security-Onion-Solutions/mineimp
ensure the highstate retry runs only once
2025-04-11 14:43:17 -04:00
Josh Patterson
4b14bf90a3 ensure the highstate retry runs only once 2025-04-11 14:28:18 -04:00
reyesj2
2cb002668f restrict count of kafka log files 2025-04-11 12:32:49 -05:00
Jorge Reyes
c11a10638b Merge pull request #14528 from Security-Onion-Solutions/reyesj2-patch-4
external access to kafka topics via user/pass auth
2025-04-11 10:52:40 -05:00
reyesj2
6fe240de45 remove whitespaces then check for empty string as password 2025-04-11 10:42:45 -05:00
reyesj2
ecd7da540a skip user entries that don't have password configured 2025-04-11 10:21:46 -05:00
Josh Brower
2a43a6f37e Merge pull request #14532 from Security-Onion-Solutions/2.4/saltlogs
Fix comma
2025-04-11 07:51:35 -04:00
Josh Brower
4cdfb6e3eb Fix comma 2025-04-11 07:49:35 -04:00
Josh Brower
1edd13523c Merge pull request #14530 from Security-Onion-Solutions/fix/detections
Change timeout to 1s
2025-04-11 07:47:38 -04:00
Josh Brower
4217e23272 Merge pull request #14531 from Security-Onion-Solutions/2.4/saltlogs
Extract log level and drop INFO level
2025-04-11 07:47:25 -04:00
Josh Brower
f94c81a041 Extract log level and drop INFO level 2025-04-11 07:45:12 -04:00
Josh Brower
4c3518385b Change timeout to 1s 2025-04-11 07:37:09 -04:00
reyesj2
1429226667 nest default value for external_access under kafka:config 2025-04-10 15:55:17 -05:00
Josh Patterson
888ab162bd update mine_functions and mine after mainint switch to br0. ensure br0 has ip before updating mine 2025-04-10 15:04:08 -04:00
reyesj2
5498673fc3 group events in 10s and remove deprecated output configuration option 2025-04-10 09:46:37 -05:00
reyesj2
96c56297ce external access via user/pass 2025-04-09 22:08:13 -05:00
Josh Patterson
8ab38956d1 change from error to warning 2025-04-09 11:19:55 -04:00
Josh Patterson
0f120f7500 ensure manager is in /etc/hosts 2025-04-09 11:19:18 -04:00
Josh Patterson
f6a0e62853 include managerhype in orch. run hypervisor state before libvirt states 2025-04-08 09:50:26 -04:00
Josh Patterson
cc0e91aa96 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-07 08:52:50 -04:00
Josh Patterson
bf9f92b04e remove soc_hypervisor.yaml 2025-04-04 13:47:54 -04:00
Jason Ertel
270958ddfc Merge pull request #14502 from Security-Onion-Solutions/jertel/wip
support background actions via config UI
2025-04-04 11:27:36 -04:00
Jason Ertel
b99bb0b004 support options field on actions 2025-04-04 11:19:30 -04:00
Josh Patterson
8f3664f26c need to sync 2025-04-04 09:00:22 -04:00
Josh Patterson
445afca6ee use vrt 2025-04-03 13:44:13 -04:00
Josh Patterson
3083e3bc63 sync runners and create soqemussh user ssh keypair for manager and managerhype 2025-04-03 13:42:02 -04:00
Jason Ertel
9c455badb9 support background actions via config UI 2025-04-03 13:08:44 -04:00
Josh Patterson
9e16c03d25 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-04-03 08:47:54 -04:00
Josh Patterson
275489b8a3 Merge pull request #14499 from Security-Onion-Solutions/strelkaFix
Add missing scanners and fix forcedType for Strelka SOC UI annotations. Restart Strelka containers on config change.
2025-04-02 11:56:44 -04:00
Josh Patterson
cd6deae0a7 add missing strelka backend scanners to SOC UI annotation file 2025-04-02 11:20:12 -04:00
Josh Patterson
0b8a7f5b67 fix strelka annotations. restart strelka containers on config change 2025-04-02 10:10:34 -04:00
Mike Reeves
3c342bb90d Merge pull request #14486 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update soup
2025-04-01 09:53:32 -04:00
Jason Ertel
ba10228fef Update soup 2025-04-01 09:42:10 -04:00
Mike Reeves
71f146d1d9 Update soup 2025-04-01 09:36:22 -04:00
Josh Patterson
b22fe5bd3d set interface for hypervisor/managerhype 2025-04-01 09:27:50 -04:00
Josh Patterson
a60e55e5cd remove whitespace control 2025-03-31 16:44:48 -04:00
Josh Patterson
e7aa4428de managerhype update mine when switching to br0 2025-03-31 16:03:19 -04:00
Josh Patterson
64f71143dc fix docker fw rules managerhype 2025-03-31 15:51:32 -04:00
Mike Reeves
72fd25dcaf Merge pull request #14482 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update 2-4.yml
2025-03-31 12:03:49 -04:00
Mike Reeves
eef4b82afb Update 2-4.yml 2025-03-31 11:46:03 -04:00
Mike Reeves
1d4d442554 Merge pull request #14481 from Security-Onion-Solutions/patchmerge
Patchmerge
2025-03-31 11:38:29 -04:00
Mike Reeves
02ad08035e Resolve Conflicts 2025-03-31 11:36:55 -04:00
Mike Reeves
335d8851e6 Resolve Conflicts 2025-03-31 11:32:35 -04:00
Mike Reeves
e4d2513609 Merge pull request #14479 from Security-Onion-Solutions/patch/2.4.141
2.4.141
2025-03-31 11:21:30 -04:00
Josh Patterson
7aad298720 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-31 11:14:47 -04:00
Mike Reeves
22fae2e98d Merge pull request #14478 from Security-Onion-Solutions/2.4.141
2.4.141
2025-03-31 10:38:30 -04:00
Mike Reeves
3850558be3 2.4.141 2025-03-31 10:37:04 -04:00
Josh Patterson
5b785d3ef8 Merge pull request #14477 from Security-Onion-Solutions/issue/14431
heavy node exclude so-import-pcap and so-pcap-import
2025-03-31 09:49:09 -04:00
Josh Patterson
8b874e46d0 heavy node exclude so-import-pcap and so-pcap-import 2025-03-31 09:09:15 -04:00
Josh Patterson
4165b33995 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-27 15:34:39 -04:00
Josh Patterson
3e10c95b7b Merge pull request #14463 from Security-Onion-Solutions/mineimp
break out manager from non manager in top
2025-03-27 14:04:19 -04:00
Josh Patterson
1d058729e5 break out manager from non manager 2025-03-27 13:27:34 -04:00
Josh Patterson
f9bf4e4130 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-27 11:26:32 -04:00
Josh Patterson
056a29ea89 Merge pull request #14457 from Security-Onion-Solutions/mineimp
prevent manager node type highstate failure from missing network.ip_addrs in mine
2025-03-26 15:12:23 -04:00
Josh Patterson
667e66bbef rename mine update and highstate state 2025-03-26 13:56:49 -04:00
Josh Patterson
595ff8dce2 Merge remote-tracking branch 'origin/2.4/dev' into mineimp 2025-03-26 13:09:36 -04:00
Jason Ertel
99aa383e01 soup and version updates 2025-03-26 12:11:53 -04:00
Josh Patterson
5f116b3e43 Merge pull request #14453 from Security-Onion-Solutions/x509v2_fix
patch x509_v2 state salt issue 66929
2025-03-26 11:41:50 -04:00
Josh Patterson
bb8f0605e1 patch x509_v2 state salt issue 66929 2025-03-26 10:50:04 -04:00
Josh Patterson
5836bc5bd1 remove require since mine.update may have caused a failure 2025-03-25 21:58:42 -04:00
Josh Patterson
55c815cae8 simplify highstate rerun when node_data pillar empty 2025-03-25 19:44:38 -04:00
Josh Patterson
79388af645 only managers need node_ips 2025-03-25 10:17:43 -04:00
Josh Patterson
d7e831fbeb add mine_update reactor config for master 2025-03-24 20:45:35 -04:00
Josh Patterson
8f40b66e3b update mine instead of failing highstate if no node_data 2025-03-24 19:49:24 -04:00
Josh Patterson
0fe3038802 Merge pull request #14444 from Security-Onion-Solutions/minionService
salt-minion service wait for ip on mainint
2025-03-24 16:27:32 -04:00
Josh Patterson
cd9b04e1bb Merge pull request #14443 from Security-Onion-Solutions/soup150
soup for 2.4.150
2025-03-24 15:55:28 -04:00
Josh Patterson
0fbb6afee1 soup for 2.4.150 2025-03-24 15:51:22 -04:00
Josh Patterson
402e26fc19 Merge remote-tracking branch 'origin/2.4/dev' into minionService 2025-03-24 15:42:07 -04:00
Mike Reeves
b6e10b1de7 Merge pull request #14440 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2025-03-24 15:17:15 -04:00
Mike Reeves
54f3a8cb91 Update 2-4.yml 2025-03-24 15:16:43 -04:00
Mike Reeves
1f98cef816 Update VERSION 2025-03-24 15:15:57 -04:00
Mike Reeves
7a71a5369c Merge pull request #14439 from Security-Onion-Solutions/2.4/dev
2.4.140
2025-03-24 15:08:43 -04:00
Mike Reeves
964b631d58 Merge pull request #14438 from Security-Onion-Solutions/2.4.140
2.4.140
2025-03-24 13:43:49 -04:00
Mike Reeves
dcb667b32d 2.4.140 2025-03-24 13:35:39 -04:00
Josh Patterson
e61d37893a start salt-minion service when mainint has ip 2025-03-24 12:33:10 -04:00
Josh Patterson
60bd960251 Merge pull request #14434 from Security-Onion-Solutions/backto3006.9
roll back to 3006.9 but leave prep in place for future upgrades
2025-03-23 12:09:52 -04:00
Josh Patterson
b974c6e8df roll back to 3006.9 but leave prep in place for future upgrades 2025-03-23 12:07:39 -04:00
Josh Patterson
7484495021 Merge pull request #14433 from Security-Onion-Solutions/soupupdatemine140
update mine
2025-03-22 12:59:22 -04:00
Josh Patterson
0952b7528f update mine
update mine after salt-master restart and before highstate
2025-03-22 12:57:13 -04:00
Josh Brower
14c95a5fe0 Merge pull request #14432 from Security-Onion-Solutions/jbfix
Remove pcapoutdir
2025-03-22 07:13:44 -04:00
Josh Brower
d0bb86a24f Remove pcapoutdir 2025-03-22 07:12:19 -04:00
Jorge Reyes
749825af19 Merge pull request #14429 from Security-Onion-Solutions/reyesj2-patch-3
FIX: elastic fleet package list get more than 300 results per query
2025-03-21 15:07:15 -05:00
reyesj2
844283cc38 get more results 2025-03-21 14:55:52 -05:00
Jason Ertel
ae0bf1ccdf Merge pull request #14428 from Security-Onion-Solutions/jertel/wip
ignore false positives
2025-03-21 14:56:56 -04:00
Jason Ertel
a0637fa25d ignore false positives 2025-03-21 14:54:52 -04:00
Josh Patterson
d2a21c1e4c Merge pull request #14427 from Security-Onion-Solutions/pcapperms
move pcapoutdir
2025-03-21 14:50:33 -04:00
Josh Patterson
ed23340157 move pcapoutdir 2025-03-21 14:48:31 -04:00
Jason Ertel
ef6dbf9e46 Merge pull request #14425 from Security-Onion-Solutions/jertel/wip
support pcap imports for sensors in distributed grids
2025-03-21 13:17:18 -04:00
Jason Ertel
1236c8c1f2 support pcap imports for sensors in distributed grids 2025-03-21 10:34:55 -04:00
Josh Patterson
51625e19ad Merge pull request #14423 from Security-Onion-Solutions/salt3006.10
work with quotes in version
2025-03-21 08:25:55 -04:00
Josh Patterson
760ff1e45b work with quotes in version 2025-03-21 08:20:04 -04:00
Josh Patterson
5b3fa17f81 Merge pull request #14422 from Security-Onion-Solutions/salt3006.10
fix SALTVERSION grep to work with or without quote
2025-03-20 17:01:17 -04:00
Josh Patterson
053eadbb39 fix SALTVERSION grep to work with or without quote 2025-03-20 16:58:16 -04:00
Josh Patterson
540b0de00c Merge pull request #14420 from Security-Onion-Solutions/salt3006.10
Salt3006.10
2025-03-20 15:50:10 -04:00
Josh Patterson
c30cbf9af0 remove salt-cloud 2025-03-20 15:44:56 -04:00
Josh Patterson
41c0a91d77 ensure versions are strings 2025-03-20 15:42:16 -04:00
Josh Patterson
6e1e5a2ee6 Merge pull request #14419 from Security-Onion-Solutions/salt3006.10
make string to not drop 0
2025-03-20 15:31:05 -04:00
Josh Patterson
aa8fd647b6 make string to not drop 0 2025-03-20 15:27:52 -04:00
Mike Reeves
8feae6ba11 Merge pull request #14416 from Security-Onion-Solutions/salt3006.10
add bootstrap-salt to preloaded soup_scripts
2025-03-20 13:48:46 -04:00
Josh Patterson
028297cef8 add bootstrap-salt to preloaded soup_scripts 2025-03-20 13:46:30 -04:00
Mike Reeves
19755d4077 Merge pull request #14413 from Security-Onion-Solutions/bootstrap-salt-2025.02.24
Update bootstrap-salt.sh
2025-03-20 13:38:34 -04:00
Mike Reeves
cd655e6adb Merge pull request #14415 from Security-Onion-Solutions/salt3006.10
upgrade salt 3006.10
2025-03-20 13:37:26 -04:00
Josh Patterson
2be143d902 upgrade salt 3006.10 2025-03-20 13:22:28 -04:00
Josh Patterson
1b98f9f313 Update bootstrap-salt.sh 2025-03-20 10:03:26 -04:00
Jason Ertel
762ccdd222 Merge pull request #14403 from Security-Onion-Solutions/jertel/wip
add no-op soup functions for 2.4.140
2025-03-19 07:24:14 -04:00
Jason Ertel
277504fff6 Merge pull request #14402 from Security-Onion-Solutions/reyesj2-patch-3
ldap_search include observer.name
2025-03-18 10:27:16 -04:00
Jason Ertel
3f3e7ea1e8 add no-op soup functions for 2.4.140 2025-03-18 10:12:23 -04:00
reyesj2
4d7fdd390c ldap_search include observer.name 2025-03-18 08:52:43 -05:00
Josh Patterson
269919b980 run setup_hypervisor.setup_environment for managerhype if needed 2025-03-18 09:39:49 -04:00
Jason Ertel
05c93e3796 Merge pull request #14394 from Security-Onion-Solutions/jertel/wip
use specified role on new user add
2025-03-17 17:10:45 -04:00
Jorge Reyes
fe21a19c5c Merge pull request #14396 from Security-Onion-Solutions/reyesj2-patch-3
add zeek file_extraction forcedType for instances where a single line…
2025-03-17 14:40:40 -05:00
reyesj2
af6245f19d add zeek file_extraction forcedType for instances where a single line is specified 2025-03-17 14:30:17 -05:00
Jason Ertel
ad8f3dfde7 use specified role on new user add 2025-03-17 14:55:40 -04:00
Josh Patterson
2dc977ddd8 managerhype 2025-03-13 14:33:48 -04:00
Josh Patterson
28c7362cfa Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-13 10:56:32 -04:00
Josh Patterson
c93a5de460 additional changes for managerhype 2025-03-13 10:55:49 -04:00
Josh Patterson
44a5b3b1e5 MANAGERHYPE setup is now complete! 2025-03-12 21:05:04 -04:00
Jorge Reyes
d23b6958c1 Merge pull request #14379 from Security-Onion-Solutions/reyesj2-patch-3
update event pipeline annotation
2025-03-12 13:22:40 -05:00
reyesj2
60b1535018 update event pipeline annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2025-03-12 13:15:57 -05:00
Mike Reeves
758c6728f9 Merge pull request #14375 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2025-03-11 13:27:21 -04:00
Mike Reeves
5234b21743 Update 2-4.yml 2025-03-11 13:25:43 -04:00
Mike Reeves
7d73f6cfd7 Update VERSION 2025-03-11 13:25:00 -04:00
Josh Patterson
ae94722eda Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-11 11:20:50 -04:00
Josh Patterson
ae993c47c1 remove minion pillar files when a vm is destroyed 2025-03-11 11:12:45 -04:00
Josh Patterson
c784a6e440 fix setting hypervisor for our custom event tag 2025-03-10 16:55:02 -04:00
Josh Patterson
c66cd3b2f3 ensure image is readded if removed 2025-03-10 11:23:26 -04:00
Josh Patterson
f30938ed59 hypervisor annotation show if base domain is initialized or not 2025-03-06 15:26:08 -05:00
Josh Patterson
6c472dd383 Merge remote-tracking branch 'origin/2.4/dev' into vlb2 2025-03-05 08:58:03 -05:00
Josh Patterson
2c5861a0c2 ensure local hypervisor dir when new hypervisor key accepted. apply soc.dyanno.hypervisor when hypervisor key accepted 2025-03-05 08:51:10 -05:00
Josh Patterson
8047e196fe fix pipeline workers, zeek/suricata lbprocs, CPUCORES and CORECOUNT 2025-02-28 17:21:06 -05:00
Josh Patterson
c6c979dc19 properly set memory and CPUCORES for minion pillars during vm setup 2025-02-28 16:12:28 -05:00
Josh Patterson
c8a1c8377a vm power operations 2025-02-27 16:04:44 -05:00
Josh Patterson
4e954c24f7 handle cpu, copper and sfp as options 2025-02-26 17:58:09 -05:00
Josh Patterson
52839e2a7d implement regex for cpu and mem 2025-02-26 15:22:36 -05:00
Josh Patterson
1a9d5f151f change description formatting. include full vm name in HYPERVISORS 2025-02-26 14:28:31 -05:00
Josh Patterson
d6f527881a allow for destroyed vms to be displayed in ui. VNM cleanup destroyed status files after 48h 2025-02-26 09:06:45 -05:00
Josh Patterson
5811b184be enhance annotations. account for line separation instead of comma for hardware 2025-02-25 11:13:35 -05:00
Josh Patterson
e0a3b51ca2 md in description 2025-02-25 08:54:04 -05:00
Josh Patterson
b5276a6a1d add hypervisor to firewall annotation 2025-02-25 04:41:59 -05:00
Josh Patterson
cc1b030c00 Merge remote-tracking branch 'origin/2.4/dev' into vlb2
2025-02-24 15:32:54 -05:00
Josh Patterson
c896785480 fix vm deletion 2025-02-24 14:20:09 -05:00
Josh Patterson
0006948c29 get hypervisor from dir name 2025-02-24 12:26:28 -05:00
Josh Patterson
6ac14f832e only allow first process step to overwrite last 2025-02-24 12:22:52 -05:00
Josh Patterson
fd9a4966ec move logic from reactor to orchestration 2025-02-23 14:07:51 -05:00
Josh Patterson
3246176c0a comments 2025-02-21 14:34:08 -05:00
Josh Patterson
b68f561e6f progress and hw tracking for soc hypervisor dynamic annotations 2025-02-21 09:50:01 -05:00
Josh Patterson
8ffd4fc664 new examples 2025-02-16 02:31:52 -05:00
Josh Patterson
f46548ed88 remove free hw from description 2025-02-16 02:25:18 -05:00
Josh Patterson
0d335e3056 free and totals in labels 2025-02-16 02:23:11 -05:00
Josh Patterson
6ff701bd5c soc ui improvements for hypervisor layout. show free hardware for a hypervisor in the description 2025-02-16 01:33:50 -05:00
Josh Patterson
c34be5313d hardware logging. vm state file logging 2025-02-15 21:41:01 -05:00
Josh Patterson
ec2fc0a5f2 change locking method 2025-02-15 18:56:04 -05:00
Josh Patterson
ad54afe39a ensure socore:socore ownership 2025-02-15 12:11:23 -05:00
Josh Patterson
eb4cd75218 virtual_node_manager lookup hardware from defaults. allocate hw in vm file 2025-02-15 11:29:47 -05:00
Josh Patterson
a84f5a1e32 updated logging added returns 2025-02-15 11:14:39 -05:00
Josh Patterson
e193347fb4 add hypervisor to host keys first connection. cleaner qcow2 logging. 2025-02-15 10:54:49 -05:00
Josh Patterson
ad27c8674b no longer need add_* nodes 2025-02-15 10:50:09 -05:00
Josh Patterson
5123a86062 start of dynamic annotations for hypervisor 2025-02-12 13:21:39 -05:00
m0duspwnens
010c205eec configure bond and monitor nics 2025-02-07 14:45:06 -05:00
Josh Patterson
160c84ec1a Merge pull request #14200 from Security-Onion-Solutions/2.4/dev
2.4/dev
2025-02-06 17:41:22 -05:00
m0duspwnens
924c0b63bd put vnm engine in place 2025-02-06 16:05:56 -05:00
m0duspwnens
9b8dce0c77 only wait and make predictable when virt-install runs 2025-02-06 15:44:28 -05:00
m0duspwnens
7159678385 create predictable interfaces 2025-02-06 15:30:46 -05:00
m0duspwnens
c8e232c598 cloudinit network config out of user-data. default 220G disk 2025-02-03 12:20:34 -05:00
m0duspwnens
a3013ff85b simplify the LVM deactivation process by removing unnecessary VG removal attempts 2025-01-31 16:36:51 -05:00
m0duspwnens
65c5abfa88 add note regarding possible missing devices 2025-01-31 16:15:46 -05:00
m0duspwnens
0114e36cfa set lvm = system uuid and only sanitize new nvme if it doesn't belong to current vm 2025-01-31 15:17:54 -05:00
m0duspwnens
5c56e0f498 already configured not failure state 2025-01-31 11:18:11 -05:00
m0duspwnens
61992ae787 verify script work with 1 or more nvme 2025-01-30 13:28:08 -05:00
m0duspwnens
08bbeedbd7 add automatic NVMe device mounting for VMs with LVM support 2025-01-30 09:55:26 -05:00
m0duspwnens
a5f2db8c80 add preflight check to ensure repo connectivity prior to installing salt-minion with salt-cloud 2025-01-29 18:17:29 -05:00
m0duspwnens
8d1ce0460f remove possible race condition caused by vm init cron for setup.virt.init. setup.virt and mine updated during salt-cloud call with init_script 2025-01-29 14:23:10 -05:00
m0duspwnens
3c85b48291 manage with contents to simplify salt cloud profile file_map 2025-01-29 08:12:50 -05:00
m0duspwnens
ea2e026c56 only manager nodes or heavynodes should ever be single-node 2025-01-29 08:10:05 -05:00
m0duspwnens
8b3f310212 install python3-dnf-plugin-versionlock on vm before first highstate 2025-01-29 04:08:30 -05:00
m0duspwnens
87136e9e2b restart salt-minion to trigger highstate 2025-01-28 16:38:20 -05:00
m0duspwnens
5a6a9d6ec2 round ES_HEAP_SIZE 2025-01-28 16:01:49 -05:00
m0duspwnens
d3b3a0eb8a wrap salt-cloud -yd. start implementing vm/minion cleanup with ip removal 2025-01-28 14:04:58 -05:00
m0duspwnens
91fc59cffc add removehost option to so-firewall. add logging to console and so-firewall.log 2025-01-28 14:04:02 -05:00
m0duspwnens
e32dbad0d0 fix monitoring for add_ files 2025-01-28 11:22:26 -05:00
m0duspwnens
b66aafd168 fix claiming for cpu/mem 2025-01-27 17:24:04 -05:00
m0duspwnens
2cd0f69069 watch and build 2025-01-27 16:40:10 -05:00
m0duspwnens
0177f641c8 watch for files and create a vm 2025-01-27 15:09:42 -05:00
m0duspwnens
b3969a6ce0 fix hardware passthrough for pci devices 2025-01-24 17:19:41 -05:00
m0duspwnens
ab97d3b8b7 ensure 64962 patch applies to manager for salt-cloud 2025-01-24 11:26:34 -05:00
m0duspwnens
213df68d04 merge with 120 dev and fix conflicts 2025-01-23 10:56:48 -05:00
m0duspwnens
9db3cd901c update documentation of core functionality 2025-01-18 10:45:10 -05:00
m0duspwnens
64c9230423 prevent conflicts with network manager in base vm 2025-01-18 10:44:44 -05:00
m0duspwnens
17943ef0db add hypervisor state to hypervisor node 2025-01-18 08:24:50 -05:00
m0duspwnens
8ed3f0b1cc change base image path for so-salt-cloud 2025-01-18 07:30:36 -05:00
m0duspwnens
7c50a5e17b cloud-init needs to import repo gpg keys so packages can install 2025-01-17 23:16:18 -05:00
m0duspwnens
c13c85bd2d manager needs ssh config. need -r to ignore bootstrap provided repos 2025-01-17 22:54:46 -05:00
m0duspwnens
ae01dc9639 manager needs more packages for salt-cloud. change location of priv key for salt-cloud config 2025-01-17 22:26:39 -05:00
m0duspwnens
a74ed0daf0 fix disabling cloud-init and system shutdown. increase ram/cpu of base vm. shrink disk_size to 6G for testing 2025-01-17 21:25:40 -05:00
m0duspwnens
60387651d2 recreate the base vm if any of the cloud init files change 2025-01-17 20:13:42 -05:00
m0duspwnens
3a78be68d6 ensure cloud-init is removed 2025-01-17 20:05:35 -05:00
m0duspwnens
a896332db3 fix deprecation 2025-01-17 19:49:41 -05:00
m0duspwnens
54eeb0e327 handle refreshing base image and reinstalling the vm if the source qcow2 image changes 2025-01-17 19:27:04 -05:00
m0duspwnens
1f13554bd9 move add virt install and pool creation to images/init. start moving to /nsm/libvirt/ 2025-01-17 09:43:39 -05:00
m0duspwnens
4cc3691489 give all nodes access to soc license pillar file 2025-01-16 17:51:39 -05:00
m0duspwnens
24eadf2507 add libvirt state to highstate for hypervisor. update allowed_states for libvirt 2025-01-16 17:46:20 -05:00
m0duspwnens
a274bfb744 license note 2025-01-16 17:45:07 -05:00
m0duspwnens
2277c792b9 update feature error logging in so-minion 2025-01-16 17:13:36 -05:00
m0duspwnens
61f5614ac9 added logging and error handling so-minion 2025-01-16 16:57:36 -05:00
m0duspwnens
6367aed62a reactor needs to match runner function parameter structure 2025-01-16 14:59:11 -05:00
m0duspwnens
739f592061 remove old line of code 2025-01-16 14:06:01 -05:00
m0duspwnens
116c2b73c1 update gitignore 2025-01-16 11:16:34 -05:00
m0duspwnens
58be7ae5db rename from coreol9 or coreol9Small to sool9 2025-01-16 11:16:20 -05:00
m0duspwnens
0e0fb885d2 hypervisor highstate after image creation, not when key accepted 2025-01-16 11:13:36 -05:00
m0duspwnens
e8546b82f8 default image: sool9. cloud-init add local repo 2025-01-16 08:43:46 -05:00
m0duspwnens
837fbab96d minimize packages installed on manager for hyper 2025-01-15 17:00:06 -05:00
m0duspwnens
cbd2d88000 sync the runners 2025-01-15 16:59:39 -05:00
m0duspwnens
01ac1cdcca check features and allowed/states 2025-01-15 14:13:12 -05:00
m0duspwnens
161e8a6c21 ssh config for manager. dont need to create soqemussh user on manager 2025-01-14 16:21:17 -05:00
m0duspwnens
2e3c1adc63 runner to setup manager for first hypervisor 2025-01-14 16:20:21 -05:00
m0duspwnens
776afa4a36 setup items on manager when hypervisor joins the grid 2025-01-09 16:32:41 -05:00
m0duspwnens
3cac19d498 createvm script without setting network in base domain 2025-01-09 16:31:51 -05:00
m0duspwnens
2ba8a87c9d add directory where qcow2 images will be distributed from 2025-01-09 16:20:56 -05:00
m0duspwnens
d677dc51de add comment about reactors required by salt-master 2025-01-09 16:19:23 -05:00
m0duspwnens
ebbfcd169c add pkg required for so-qcow2-modify-network 2025-01-09 16:17:50 -05:00
m0duspwnens
574d2994d1 use cmd.run instead of cmd.script to resolve issue 64962 2025-01-09 16:16:59 -05:00
m0duspwnens
ecc5d64584 move logger def to global 2025-01-09 16:14:57 -05:00
m0duspwnens
6888682f92 add comments for raid scripts 2025-01-09 16:14:01 -05:00
m0duspwnens
0197cdb33d fix bridge forwarding on hypervisors bridge 2025-01-09 16:12:33 -05:00
m0duspwnens
3c59858f70 improvements to createvm 2024-12-20 11:42:53 -05:00
m0duspwnens
6f0161e9da script to create base domain 2024-12-19 17:36:48 -05:00
m0duspwnens
f2bd735f51 another script to create raid 2024-12-19 10:13:05 -05:00
m0duspwnens
7a8fd8c3e5 handle salt-cloud package 2024-12-19 10:12:29 -05:00
m0duspwnens
b24aa2f797 fix destroying virbr0 2024-12-19 10:11:54 -05:00
m0duspwnens
5e4f1fc279 only run fix ldap when lief installed 2024-12-16 10:23:14 -05:00
m0duspwnens
e779d180f9 work around libvirt issue. add raid scripts 2024-12-13 16:03:17 -05:00
m0duspwnens
a84a32c075 increase whiptail by 1 2024-12-10 16:24:18 -05:00
m0duspwnens
5649986834 Merge branch '2.4/dev' into vlb2 2024-12-09 15:35:57 -05:00
m0duspwnens
7eaa8d54dc git ignore dirs 2024-12-09 15:35:07 -05:00
m0duspwnens
61a1fbde6e create hypervisor pillars in setup 2024-12-09 15:30:48 -05:00
m0duspwnens
a0a18973d8 add new salt bootstrap 2024-12-09 15:29:51 -05:00
m0duspwnens
efbf62f56a adding beacon 2024-11-04 08:30:40 -05:00
m0duspwnens
39391c8088 sync pillar top 2024-10-29 11:27:49 -04:00
m0duspwnens
9ac5ef09ad update comment 2024-10-29 11:01:04 -04:00
m0duspwnens
3394588602 sync hypervisor state remote to local 2024-10-29 10:56:18 -04:00
m0duspwnens
c64a05f2ff dynamic annotations 2024-10-29 10:20:31 -04:00
m0duspwnens
0c4426a55e Merge branch '2.4/dev' into vertlybimp 2024-10-29 08:32:39 -04:00
m0duspwnens
feb700393e merge with 2.4.120, fix merge conflicts 2024-10-25 15:09:38 -04:00
m0duspwnens
0476585370 dynamic annotations 2024-10-22 09:03:02 -04:00
m0duspwnens
dcc1738978 dynamic annotations 2024-10-11 10:46:07 -04:00
m0duspwnens
0b0ff62bc5 update comments 2024-10-08 09:40:44 -04:00
m0duspwnens
9f76371449 add libs 2024-10-01 08:33:37 -04:00
m0duspwnens
50bd8448cc add arg to start vm after modification 2024-09-23 10:13:22 -04:00
m0duspwnens
0b326370bd script for modifying hardware of a vm 2024-09-20 14:51:36 -04:00
m0duspwnens
d0963baad4 update logging 2024-09-20 14:50:08 -04:00
m0duspwnens
75e8c60fe2 add tools to set dhcp/static ip inside the qcow2 image 2024-09-20 11:03:16 -04:00
m0duspwnens
e7ea27a1b3 script to update ip address to static or dhcp inside qcow2 image 2024-09-13 15:26:59 -04:00
m0duspwnens
aaa48f6a1a support for fleet, heavynode, receiver, idh 2024-08-29 13:41:58 -04:00
m0duspwnens
0766a5da91 change to LSHEAP. LSHOSTNAME from id grain 2024-08-28 16:59:24 -04:00
m0duspwnens
267d1a27ac use cron instead of schedule for vm init. ensure vm shutdown 2024-08-28 15:52:14 -04:00
m0duspwnens
f5e6e49075 set initial schedule for vm to deal with possible manager firewall state.apply delay 2024-08-28 14:12:23 -04:00
m0duspwnens
d44ce0a070 add so-salt-cloud as salt-cloud wrapper 2024-08-28 12:41:38 -04:00
m0duspwnens
9ddccba780 LSHEAP and pipeline workers for virt 2024-08-28 10:09:42 -04:00
m0duspwnens
301894f6e8 script to fix libvirt in salt 3006.2+ 2024-08-27 09:42:11 -04:00
m0duspwnens
a425a7fda2 update docker modules for 3006.9 2024-08-27 09:37:23 -04:00
m0duspwnens
21c3835322 salt3006.9, redo reactors, use virt.shutdown 2024-08-27 09:25:40 -04:00
m0duspwnens
d110503639 example pillar 2024-08-20 15:27:19 -04:00
m0duspwnens
64bf7eb363 hyper 2024-08-20 15:26:05 -04:00
m0duspwnens
205560cc95 updates 2024-08-20 08:31:46 -04:00
m0duspwnens
7698243caf fix reactors 2024-08-16 13:37:44 -04:00
m0duspwnens
67f0934930 set new bridge 2024-08-16 12:21:41 -04:00
m0duspwnens
30e998edf7 bridge and pools 2024-08-16 11:58:49 -04:00
m0duspwnens
2a35e45920 hyper 2024-08-13 13:17:09 -04:00
m0duspwnens
aa5de9f7bd cloud profiles and providers. libvirt net setup 2024-08-13 10:17:45 -04:00
m0duspwnens
f9eeb76518 mine for hyper 2024-08-12 14:58:10 -04:00
m0duspwnens
957235a656 fix dns-search 2024-08-12 13:31:51 -04:00
m0duspwnens
64a0c171f3 ssh user, build cloud profiles and providers 2024-08-12 12:47:04 -04:00
m0duspwnens
a28ac3bee6 virt 2024-08-09 11:53:07 -04:00
m0duspwnens
3643303a51 remove docker 7.1.0 wheels 2024-08-07 16:21:49 -04:00
m0duspwnens
81d407f0ff new wheels 2024-08-07 15:34:37 -04:00
m0duspwnens
d29b0660f0 add docker module for salt 3006.1 2024-08-07 14:47:01 -04:00
m0duspwnens
59b94177d6 use salt3006.1 due to issue with virt state/module - salt issues 65694 2024-08-07 13:14:07 -04:00
m0duspwnens
9d2c5d54b0 hype changes 2024-08-07 10:43:53 -04:00
m0duspwnens
a6f1a0245a configure bridge during setup 2024-08-06 12:33:09 -04:00
m0duspwnens
fcf859ffed start adding bridge for hyper 2024-08-05 14:53:11 -04:00
m0duspwnens
fe3f87e1fd use salt 3006.9 2024-08-02 13:45:46 -04:00
m0duspwnens
5a24a7775e salt 3006.1 - avoid some cloud/virt bug in later version 2024-07-31 15:57:43 -04:00
m0duspwnens
52e52f35f7 hyper setup init 2024-07-31 15:49:32 -04:00
m0duspwnens
810be2c9d2 virt start 2024-07-31 15:19:29 -04:00
m0duspwnens
8e4777a5ff libvirt start 2024-07-31 15:19:29 -04:00
521 changed files with 200160 additions and 165718 deletions


@@ -541,5 +541,6 @@ paths = [
'''gitleaks.toml''',
'''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)$''',
'''(go.mod|go.sum)$''',
'''salt/nginx/files/enterprise-attack.json'''
'''salt/nginx/files/enterprise-attack.json''',
'''(.*?)whl$'''
]


@@ -25,6 +25,13 @@ body:
- 2.4.111
- 2.4.120
- 2.4.130
- 2.4.140
- 2.4.141
- 2.4.150
- 2.4.160
- 2.4.170
- 2.4.180
- 2.4.190
- Other (please provide detail below)
validations:
required: true


@@ -1,10 +1,6 @@
name: python-test
on:
push:
paths:
- "salt/sensoroni/files/analyzers/**"
- "salt/manager/tools/sbin"
pull_request:
paths:
- "salt/sensoroni/files/analyzers/**"
@@ -17,7 +13,7 @@ jobs:
strategy:
fail-fast: false
matrix:
python-version: ["3.10"]
python-version: ["3.13"]
python-code-path: ["salt/sensoroni/files/analyzers", "salt/manager/tools/sbin"]
steps:
@@ -36,4 +32,4 @@ jobs:
flake8 ${{ matrix.python-code-path }} --show-source --max-complexity=12 --doctests --max-line-length=200 --statistics
- name: Test with pytest
run: |
pytest ${{ matrix.python-code-path }} --cov=${{ matrix.python-code-path }} --doctest-modules --cov-report=term --cov-fail-under=100 --cov-config=pytest.ini
PYTHONPATH=${{ matrix.python-code-path }} pytest ${{ matrix.python-code-path }} --cov=${{ matrix.python-code-path }} --doctest-modules --cov-report=term --cov-fail-under=100 --cov-config=pytest.ini

.gitignore (vendored), 3 changes

@@ -1,4 +1,3 @@
# Created by https://www.gitignore.io/api/macos,windows
# Edit at https://www.gitignore.io/?templates=macos,windows
@@ -67,4 +66,4 @@ __pycache__
# Analyzer dev/test config files
*_dev.yaml
site-packages
site-packages


@@ -1,17 +1,17 @@
### 2.4.130-20250311 ISO image released on 2025/03/11
### 2.4.190-20251024 ISO image released on 2025/10/24
### Download and Verify
2.4.130-20250311 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.130-20250311.iso
2.4.190-20251024 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
MD5: 4641CA710570CCE18CD7D50653373DC0
SHA1: 786EF73F7945FDD80126C9AE00BDD29E58743715
SHA256: 48C7A042F20C46B8087BAE0F971696DADE9F9364D52F416718245C16E7CCB977
MD5: 25358481FB876226499C011FC0710358
SHA1: 0B26173C0CE136F2CA40A15046D1DFB78BCA1165
SHA256: 4FD9F62EDA672408828B3C0C446FE5EA9FF3C4EE8488A7AB1101544A3C487872
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.130-20250311.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.130-20250311.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.130-20250311.iso
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.4.130-20250311.iso.sig securityonion-2.4.130-20250311.iso
gpg --verify securityonion-2.4.190-20251024.iso.sig securityonion-2.4.190-20251024.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Mon 10 Mar 2025 06:30:49 PM EDT using RSA key ID FE507013
gpg: Signature made Thu 23 Oct 2025 07:21:46 AM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.


@@ -1 +1 @@
2.4.130
2.4.190


@@ -0,0 +1,34 @@
{% set node_types = {} %}
{% for minionid, ip in salt.saltutil.runner(
'mine.get',
tgt='G@role:so-hypervisor or G@role:so-managerhype',
fun='network.ip_addrs',
tgt_type='compound') | dictsort()
%}
# only add a node to the pillar if it returned an ip from the mine
{% if ip | length > 0 %}
{% set hostname = minionid.split('_') | first %}
{% set node_type = minionid.split('_') | last %}
{% if node_type not in node_types.keys() %}
{% do node_types.update({node_type: {hostname: ip[0]}}) %}
{% else %}
{% if hostname not in node_types[node_type] %}
{% do node_types[node_type].update({hostname: ip[0]}) %}
{% else %}
{% do node_types[node_type].update({hostname: ip[0]}) %}
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
hypervisor:
nodes:
{% for node_type, values in node_types.items() %}
{{node_type}}:
{% for hostname, ip in values.items() %}
{{hostname}}:
ip: {{ip}}
{% endfor %}
{% endfor %}
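The Jinja template above groups mine results by node type before rendering the `hypervisor:nodes` pillar. A minimal Python sketch of the same grouping logic, for illustration only (the function name and the `{minion_id: [ips]}` input shape are assumptions, not Salt APIs):

```python
def group_nodes(mine_results):
    """Group mine results of shape {minion_id: [ips]} into
    {node_type: {hostname: first_ip}}, skipping minions with no IP."""
    node_types = {}
    for minion_id, ips in sorted(mine_results.items()):
        if not ips:
            continue  # only add a node if it returned an ip from the mine
        parts = minion_id.split('_')
        hostname, node_type = parts[0], parts[-1]
        node_types.setdefault(node_type, {})[hostname] = ips[0]
    return node_types
```

As in the template, the hostname is the first `_`-separated segment of the minion ID and the node type is the last, and only the first IP returned from the mine is kept.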


@@ -24,6 +24,7 @@
{% endif %}
{% endfor %}
{% if node_types %}
node_data:
{% for node_type, host_values in node_types.items() %}
{% for hostname, details in host_values.items() %}
@@ -33,3 +34,6 @@ node_data:
role: {{node_type}}
{% endfor %}
{% endfor %}
{% else %}
node_data: False
{% endif %}


@@ -18,16 +18,22 @@ base:
- telegraf.adv_telegraf
- versionlock.soc_versionlock
- versionlock.adv_versionlock
- soc.license
'* and not *_desktop':
- firewall.soc_firewall
- firewall.adv_firewall
- nginx.soc_nginx
- nginx.adv_nginx
- node_data.ips
'*_manager or *_managersearch':
'salt-cloud:driver:libvirt':
- match: grain
- vm.soc_vm
- vm.adv_vm
'*_manager or *_managersearch or *_managerhype':
- match: compound
- node_data.ips
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
@@ -44,7 +50,6 @@ base:
- logstash.adv_logstash
- soc.soc_soc
- soc.adv_soc
- soc.license
- kibana.soc_kibana
- kibana.adv_kibana
- kratos.soc_kratos
@@ -70,6 +75,9 @@ base:
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
- hypervisor.nodes
- hypervisor.soc_hypervisor
- hypervisor.adv_hypervisor
- stig.soc_stig
'*_sensor':
@@ -87,9 +95,9 @@ base:
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- soc.license
'*_eval':
- node_data.ips
- secrets
- healthcheck.eval
- elasticsearch.index_templates
@@ -113,7 +121,6 @@ base:
- idstools.adv_idstools
- soc.soc_soc
- soc.adv_soc
- soc.license
- kibana.soc_kibana
- kibana.adv_kibana
- strelka.soc_strelka
@@ -138,6 +145,7 @@ base:
- minions.adv_{{ grains.id }}
'*_standalone':
- node_data.ips
- logstash.nodes
- logstash.soc_logstash
- logstash.adv_logstash
@@ -172,7 +180,6 @@ base:
- manager.adv_manager
- soc.soc_soc
- soc.adv_soc
- soc.license
- kibana.soc_kibana
- kibana.adv_kibana
- strelka.soc_strelka
@@ -238,7 +245,6 @@ base:
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- soc.license
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
@@ -256,10 +262,12 @@ base:
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
- soc.license
- stig.soc_stig
- elasticfleet.soc_elasticfleet
- elasticfleet.adv_elasticfleet
'*_import':
- node_data.ips
- secrets
- elasticsearch.index_templates
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
@@ -280,7 +288,6 @@ base:
- manager.adv_manager
- soc.soc_soc
- soc.adv_soc
- soc.license
- kibana.soc_kibana
- kibana.adv_kibana
- backup.soc_backup
@@ -305,6 +312,7 @@ base:
- minions.adv_{{ grains.id }}
'*_fleet':
- node_data.ips
- backup.soc_backup
- backup.adv_backup
- logstash.nodes
@@ -314,9 +322,15 @@ base:
- elasticfleet.adv_elasticfleet
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
'*_hypervisor':
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
'*_desktop':
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- soc.license

View File

@@ -0,0 +1,91 @@
#!/opt/saltstack/salt/bin/python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
"""
Salt execution module for hypervisor operations.
This module provides functions for managing hypervisor configurations,
including VM file management.
"""
import json
import logging
import os
log = logging.getLogger(__name__)
__virtualname__ = 'hypervisor'
def __virtual__():
"""
Only load this module if we're on a system that can manage hypervisors.
"""
return __virtualname__
def remove_vm_from_vms_file(vms_file_path, vm_hostname, vm_role):
"""
Remove a VM entry from the hypervisorVMs file.
Args:
vms_file_path (str): Path to the hypervisorVMs file
vm_hostname (str): Hostname of the VM to remove (without role suffix)
vm_role (str): Role of the VM
Returns:
dict: Result dictionary with success status and message
CLI Example:
salt '*' hypervisor.remove_vm_from_vms_file /opt/so/saltstack/local/salt/hypervisor/hosts/hypervisor1VMs node1 nsm
"""
try:
# Check if file exists
if not os.path.exists(vms_file_path):
msg = f"VMs file not found: {vms_file_path}"
log.error(msg)
return {'result': False, 'comment': msg}
# Read current VMs
with open(vms_file_path, 'r') as f:
content = f.read().strip()
vms = json.loads(content) if content else []
# Find and remove the VM entry
original_count = len(vms)
vms = [vm for vm in vms if not (vm.get('hostname') == vm_hostname and vm.get('role') == vm_role)]
if len(vms) < original_count:
# VM was found and removed, write back to file
with open(vms_file_path, 'w') as f:
json.dump(vms, f, indent=2)
# Set socore:socore ownership (939:939)
os.chown(vms_file_path, 939, 939)
msg = f"Removed VM {vm_hostname}_{vm_role} from {vms_file_path}"
log.info(msg)
return {'result': True, 'comment': msg}
else:
msg = f"VM {vm_hostname}_{vm_role} not found in {vms_file_path}"
log.warning(msg)
return {'result': False, 'comment': msg}
except json.JSONDecodeError as e:
msg = f"Failed to parse JSON in {vms_file_path}: {str(e)}"
log.error(msg)
return {'result': False, 'comment': msg}
except Exception as e:
msg = f"Failed to remove VM {vm_hostname}_{vm_role} from {vms_file_path}: {str(e)}"
log.error(msg)
return {'result': False, 'comment': msg}

salt/_modules/qcow2.py (new file, 335 lines)

@@ -0,0 +1,335 @@
#!py
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
"""
Salt module for managing QCOW2 image configurations and VM hardware settings. This module provides functions
for modifying network configurations within QCOW2 images, adjusting virtual machine hardware settings, and
creating virtual storage volumes. It serves as a Salt interface to the so-qcow2-modify-network,
so-kvm-modify-hardware, and so-kvm-create-volume scripts.
The module offers three main capabilities:
1. Network Configuration: Modify network settings (DHCP/static IP) within QCOW2 images
2. Hardware Configuration: Adjust VM hardware settings (CPU, memory, PCI passthrough)
3. Volume Management: Create and attach virtual storage volumes for NSM data
This module is intended to work with Security Onion's virtualization infrastructure and is typically
used in conjunction with salt-cloud for VM provisioning and management.
"""
import logging
import subprocess
import shlex
log = logging.getLogger(__name__)
__virtualname__ = 'qcow2'
def __virtual__():
return __virtualname__
def modify_network_config(image, interface, mode, vm_name, ip4=None, gw4=None, dns4=None, search4=None):
'''
Usage:
salt '*' qcow2.modify_network_config image=<path> interface=<iface> mode=<mode> vm_name=<name> [ip4=<addr>] [gw4=<addr>] [dns4=<servers>] [search4=<domain>]
Options:
image
Path to the QCOW2 image file that will be modified
interface
Network interface name to configure (e.g., 'enp1s0')
mode
Network configuration mode, either 'dhcp4' or 'static4'
vm_name
Full name of the VM (hostname_role)
ip4
IPv4 address with CIDR notation (e.g., '192.168.1.10/24')
Required when mode='static4'
gw4
IPv4 gateway address (e.g., '192.168.1.1')
Required when mode='static4'
dns4
Comma-separated list of IPv4 DNS servers (e.g., '8.8.8.8,8.8.4.4')
Optional for both DHCP and static configurations
search4
DNS search domain for IPv4 (e.g., 'example.local')
Optional for both DHCP and static configurations
Examples:
1. **Configure DHCP:**
```bash
salt '*' qcow2.modify_network_config image='/nsm/libvirt/images/sool9/sool9.qcow2' interface='enp1s0' mode='dhcp4' vm_name='sensor1_sensor'
```
This configures enp1s0 to use DHCP for IP assignment
2. **Configure Static IP:**
```bash
salt '*' qcow2.modify_network_config image='/nsm/libvirt/images/sool9/sool9.qcow2' interface='enp1s0' mode='static4' vm_name='sensor1_sensor' ip4='192.168.1.10/24' gw4='192.168.1.1' dns4='192.168.1.1,8.8.8.8' search4='example.local'
```
This sets a static IP configuration with DNS servers and search domain
Notes:
- The QCOW2 image must be accessible and writable by the salt minion
- The image should not be in use by a running VM when modified
- Network changes take effect on next VM boot
- Requires so-qcow2-modify-network script to be installed
Description:
This function modifies network configuration within a QCOW2 image file by executing
the so-qcow2-modify-network script. It supports both DHCP and static IPv4 configuration.
The script mounts the image, modifies the network configuration files, and unmounts
safely. All operations are logged for troubleshooting purposes.
Exit Codes:
0: Success
1: Invalid parameters or configuration
2: Image access or mounting error
3: Network configuration error
4: System command error
255: Unexpected error
Logging:
- All operations are logged to the salt minion log
- Log entries are prefixed with 'qcow2 module:'
- Error conditions include detailed error messages and stack traces
- Success/failure status is logged for verification
'''
cmd = ['/usr/sbin/so-qcow2-modify-network', '-I', image, '-i', interface, '-n', vm_name]
if mode.lower() == 'dhcp4':
cmd.append('--dhcp4')
elif mode.lower() == 'static4':
cmd.append('--static4')
if not ip4 or not gw4:
raise ValueError('Both ip4 and gw4 are required for static configuration.')
cmd.extend(['--ip4', ip4, '--gw4', gw4])
if dns4:
cmd.extend(['--dns4', dns4])
if search4:
cmd.extend(['--search4', search4])
else:
raise ValueError("Invalid mode '{}'. Expected 'dhcp4' or 'static4'.".format(mode))
log.info('qcow2 module: Executing command: {}'.format(' '.join(shlex.quote(arg) for arg in cmd)))
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
ret = {
'retcode': result.returncode,
'stdout': result.stdout,
'stderr': result.stderr
}
if result.returncode != 0:
log.error('qcow2 module: Script execution failed with return code {}: {}'.format(result.returncode, result.stderr))
else:
log.info('qcow2 module: Script executed successfully.')
return ret
except Exception as e:
log.error('qcow2 module: An error occurred while executing the script: {}'.format(e))
raise
def modify_hardware_config(vm_name, cpu=None, memory=None, pci=None, start=False):
'''
Usage:
salt '*' qcow2.modify_hardware_config vm_name=<name> [cpu=<count>] [memory=<size>] [pci=<id>[,<id>...]] [start=<bool>]
Options:
vm_name
Name of the virtual machine to modify
cpu
Number of virtual CPUs to assign (positive integer)
Optional - VM's current CPU count retained if not specified
memory
Amount of memory to assign in MiB (positive integer)
Optional - VM's current memory size retained if not specified
pci
PCI hardware ID(s) to passthrough to the VM (e.g., '0000:c7:00.0')
Multiple devices can be given as a comma-separated list
Optional - no PCI passthrough if not specified
start
Boolean flag to start the VM after modification
Optional - defaults to False
Examples:
1. **Modify CPU and Memory:**
```bash
salt '*' qcow2.modify_hardware_config vm_name='sensor1' cpu=4 memory=8192
```
This assigns 4 CPUs and 8GB memory to the VM
2. **Enable PCI Passthrough:**
```bash
salt '*' qcow2.modify_hardware_config vm_name='sensor1' pci='0000:c7:00.0,0000:c4:00.0' start=True
```
This configures PCI passthrough and starts the VM
3. **Complete Hardware Configuration:**
```bash
salt '*' qcow2.modify_hardware_config vm_name='sensor1' cpu=8 memory=16384 pci='0000:c7:00.0' start=True
```
This sets CPU, memory, PCI passthrough, and starts the VM
Notes:
- VM must be stopped before modification unless only the start flag is set
- Memory is specified in MiB (1024 MiB = 1 GiB)
- PCI devices must be available and not in use by the host
- CPU count should align with host capabilities
- Requires so-kvm-modify-hardware script to be installed
Description:
This function modifies the hardware configuration of a KVM virtual machine using
the so-kvm-modify-hardware script. It can adjust CPU count, memory allocation,
and PCI device passthrough. Changes are applied to the VM's libvirt configuration.
The VM can optionally be started after modifications are complete.
Exit Codes:
0: Success
1: Invalid parameters
2: VM state error (running when should be stopped)
3: Hardware configuration error
4: System command error
255: Unexpected error
Logging:
- All operations are logged to the salt minion log
- Log entries are prefixed with 'qcow2 module:'
- Hardware configuration changes are logged
- Errors include detailed messages and stack traces
- Final status of modification is logged
'''
cmd = ['/usr/sbin/so-kvm-modify-hardware', '-v', vm_name]
if cpu is not None:
if isinstance(cpu, int) and cpu > 0:
cmd.extend(['-c', str(cpu)])
else:
raise ValueError('cpu must be a positive integer.')
if memory is not None:
if isinstance(memory, int) and memory > 0:
cmd.extend(['-m', str(memory)])
else:
raise ValueError('memory must be a positive integer.')
if pci:
# Handle PCI IDs (can be a single device or comma-separated list)
if isinstance(pci, str):
devices = [dev.strip() for dev in pci.split(',') if dev.strip()]
elif isinstance(pci, list):
devices = pci
else:
devices = [pci]
# Add each device with its own -p flag
for device in devices:
cmd.extend(['-p', str(device)])
if start:
cmd.append('-s')
log.info('qcow2 module: Executing command: {}'.format(' '.join(shlex.quote(arg) for arg in cmd)))
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
ret = {
'retcode': result.returncode,
'stdout': result.stdout,
'stderr': result.stderr
}
if result.returncode != 0:
log.error('qcow2 module: Script execution failed with return code {}: {}'.format(result.returncode, result.stderr))
else:
log.info('qcow2 module: Script executed successfully.')
return ret
except Exception as e:
log.error('qcow2 module: An error occurred while executing the script: {}'.format(e))
raise
def create_volume_config(vm_name, size_gb, start=False):
'''
Usage:
salt '*' qcow2.create_volume_config vm_name=<name> size_gb=<size> [start=<bool>]
Options:
vm_name
Name of the virtual machine to attach the volume to
size_gb
Volume size in GB (positive integer)
This determines the capacity of the virtual storage volume
start
Boolean flag to start the VM after volume creation
Optional - defaults to False
Examples:
1. **Create 500GB Volume:**
```bash
salt '*' qcow2.create_volume_config vm_name='sensor1_sensor' size_gb=500
```
This creates a 500GB virtual volume for NSM storage
2. **Create 1TB Volume and Start VM:**
```bash
salt '*' qcow2.create_volume_config vm_name='sensor1_sensor' size_gb=1000 start=True
```
This creates a 1TB volume and starts the VM after attachment
Notes:
- VM must be stopped before volume creation
- Volume is created as a qcow2 image and attached to the VM
- This is an alternative to disk passthrough via modify_hardware_config
- Volume is automatically attached to the VM's libvirt configuration
- Requires so-kvm-create-volume script to be installed
- Volume files are stored in the hypervisor's VM storage directory
Description:
This function creates and attaches a virtual storage volume to a KVM virtual machine
using the so-kvm-create-volume script. It creates a qcow2 disk image of the specified
size and attaches it to the VM for NSM (Network Security Monitoring) storage purposes.
This provides an alternative to physical disk passthrough, allowing flexible storage
allocation without requiring dedicated hardware. The VM can optionally be started
after the volume is successfully created and attached.
Exit Codes:
0: Success
1: Invalid parameters
2: VM state error (running when should be stopped)
3: Volume creation error
4: System command error
255: Unexpected error
Logging:
- All operations are logged to the salt minion log
- Log entries are prefixed with 'qcow2 module:'
- Volume creation and attachment operations are logged
- Errors include detailed messages and stack traces
- Final status of volume creation is logged
'''
# Validate size_gb parameter
if not isinstance(size_gb, int) or size_gb <= 0:
raise ValueError('size_gb must be a positive integer.')
cmd = ['/usr/sbin/so-kvm-create-volume', '-v', vm_name, '-s', str(size_gb)]
if start:
cmd.append('-S')
log.info('qcow2 module: Executing command: {}'.format(' '.join(shlex.quote(arg) for arg in cmd)))
try:
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
ret = {
'retcode': result.returncode,
'stdout': result.stdout,
'stderr': result.stderr
}
if result.returncode != 0:
log.error('qcow2 module: Script execution failed with return code {}: {}'.format(result.returncode, result.stderr))
else:
log.info('qcow2 module: Script executed successfully.')
return ret
except Exception as e:
log.error('qcow2 module: An error occurred while executing the script: {}'.format(e))
raise
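The return dict above (`retcode`, `stdout`, `stderr`) maps onto the exit codes listed in the docstring. A minimal sketch of consuming that dict — the `summarize_result` helper is hypothetical, not part of the module:

```python
# Hypothetical consumer of the dict returned by qcow2.create_volume_config.
# The exit-code meanings are taken from the docstring above.
EXIT_MESSAGES = {
    0: "Success",
    1: "Invalid parameters",
    2: "VM state error (running when should be stopped)",
    3: "Volume creation error",
    4: "System command error",
    255: "Unexpected error",
}

def summarize_result(ret):
    """Map a create_volume_config return dict to a short status line."""
    code = ret["retcode"]
    label = EXIT_MESSAGES.get(code, "Unknown exit code")
    return f"{label} (retcode={code})"
```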

File diff suppressed because it is too large

View File

@@ -1,264 +1,180 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% set ISAIRGAP = salt['pillar.get']('global:airgap', False) %}
{% import_yaml 'salt/minion.defaults.yaml' as saltversion %}
{% set saltversion = saltversion.salt.minion.version %}
{# this is the list we are returning from this map file, it gets built below #}
{% set allowed_states= [] %}
{# Define common state groups to reduce redundancy #}
{% set base_states = [
'common',
'patch.os.schedule',
'motd',
'salt.minion-check',
'sensoroni',
'salt.lasthighstate',
'salt.minion'
] %}
{% set ssl_states = [
'ssl',
'telegraf',
'firewall',
'schedule',
'docker_clean'
] %}
{% set manager_states = [
'salt.master',
'ca',
'registry',
'manager',
'nginx',
'influxdb',
'soc',
'kratos',
'hydra',
'elasticfleet',
'elastic-fleet-package-registry',
'idstools',
'suricata.manager',
'utility'
] %}
{% set sensor_states = [
'pcap',
'suricata',
'healthcheck',
'tcpreplay',
'zeek',
'strelka'
] %}
{% set kafka_states = [
'kafka'
] %}
{% set stig_states = [
'stig'
] %}
{% set elastic_stack_states = [
'elasticsearch',
'elasticsearch.auth',
'kibana',
'kibana.secrets',
'elastalert',
'logstash',
'redis'
] %}
{# Initialize the allowed_states list #}
{% set allowed_states = [] %}
{% if grains.saltversion | string == saltversion | string %}
{# Map role-specific states #}
{% set role_states = {
'so-eval': (
ssl_states +
manager_states +
sensor_states +
elastic_stack_states | reject('equalto', 'logstash') | list
),
'so-heavynode': (
ssl_states +
sensor_states +
['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
),
'so-idh': (
ssl_states +
['idh']
),
'so-import': (
ssl_states +
manager_states +
sensor_states | reject('equalto', 'strelka') | reject('equalto', 'healthcheck') | list +
['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'strelka.manager']
),
'so-manager': (
ssl_states +
manager_states +
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
stig_states +
kafka_states +
elastic_stack_states
),
'so-managerhype': (
ssl_states +
manager_states +
['salt.cloud', 'strelka.manager', 'hypervisor', 'libvirt'] +
stig_states +
kafka_states +
elastic_stack_states
),
'so-managersearch': (
ssl_states +
manager_states +
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
stig_states +
kafka_states +
elastic_stack_states
),
'so-searchnode': (
ssl_states +
['kafka.ca', 'kafka.ssl', 'elasticsearch', 'logstash', 'nginx'] +
stig_states
),
'so-standalone': (
ssl_states +
manager_states +
['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users'] +
sensor_states +
stig_states +
kafka_states +
elastic_stack_states
),
'so-sensor': (
ssl_states +
sensor_states +
['nginx'] +
stig_states
),
'so-fleet': (
ssl_states +
stig_states +
['logstash', 'nginx', 'healthcheck', 'elasticfleet']
),
'so-receiver': (
ssl_states +
kafka_states +
stig_states +
['logstash', 'redis']
),
'so-hypervisor': (
ssl_states +
stig_states +
['hypervisor', 'libvirt']
),
'so-desktop': (
['ssl', 'docker_clean', 'telegraf'] +
stig_states
)
} %}
{% set allowed_states= salt['grains.filter_by']({
'so-eval': [
'salt.master',
'ca',
'ssl',
'registry',
'manager',
'nginx',
'telegraf',
'influxdb',
'soc',
'kratos',
'hydra',
'elasticfleet',
'elastic-fleet-package-registry',
'firewall',
'idstools',
'suricata.manager',
'healthcheck',
'pcap',
'suricata',
'utility',
'schedule',
'tcpreplay',
'docker_clean'
],
'so-heavynode': [
'ssl',
'nginx',
'telegraf',
'firewall',
'pcap',
'suricata',
'healthcheck',
'elasticagent',
'schedule',
'tcpreplay',
'docker_clean'
],
'so-idh': [
'ssl',
'telegraf',
'firewall',
'idh',
'schedule',
'docker_clean'
],
'so-import': [
'salt.master',
'ca',
'ssl',
'registry',
'manager',
'nginx',
'strelka.manager',
'soc',
'kratos',
'hydra',
'influxdb',
'telegraf',
'firewall',
'idstools',
'suricata.manager',
'pcap',
'utility',
'suricata',
'zeek',
'schedule',
'tcpreplay',
'docker_clean',
'elasticfleet',
'elastic-fleet-package-registry'
],
'so-manager': [
'salt.master',
'ca',
'ssl',
'registry',
'manager',
'nginx',
'telegraf',
'influxdb',
'strelka.manager',
'soc',
'kratos',
'hydra',
'elasticfleet',
'elastic-fleet-package-registry',
'firewall',
'idstools',
'suricata.manager',
'utility',
'schedule',
'docker_clean',
'stig',
'kafka'
],
'so-managersearch': [
'salt.master',
'ca',
'ssl',
'registry',
'nginx',
'telegraf',
'influxdb',
'strelka.manager',
'soc',
'kratos',
'hydra',
'elastic-fleet-package-registry',
'elasticfleet',
'firewall',
'manager',
'idstools',
'suricata.manager',
'utility',
'schedule',
'docker_clean',
'stig',
'kafka'
],
'so-searchnode': [
'ssl',
'nginx',
'telegraf',
'firewall',
'schedule',
'docker_clean',
'stig',
'kafka.ca',
'kafka.ssl'
],
'so-standalone': [
'salt.master',
'ca',
'ssl',
'registry',
'manager',
'nginx',
'telegraf',
'influxdb',
'soc',
'kratos',
'hydra',
'elastic-fleet-package-registry',
'elasticfleet',
'firewall',
'idstools',
'suricata.manager',
'pcap',
'suricata',
'healthcheck',
'utility',
'schedule',
'tcpreplay',
'docker_clean',
'stig',
'kafka'
],
'so-sensor': [
'ssl',
'telegraf',
'firewall',
'nginx',
'pcap',
'suricata',
'healthcheck',
'schedule',
'tcpreplay',
'docker_clean',
'stig'
],
'so-fleet': [
'ssl',
'telegraf',
'firewall',
'logstash',
'nginx',
'healthcheck',
'schedule',
'elasticfleet',
'docker_clean'
],
'so-receiver': [
'ssl',
'telegraf',
'firewall',
'schedule',
'docker_clean',
'kafka',
'stig'
],
'so-desktop': [
'ssl',
'docker_clean',
'telegraf',
'stig'
],
}, grain='role') %}
{%- if grains.role in ['so-sensor', 'so-eval', 'so-standalone', 'so-heavynode'] %}
{% do allowed_states.append('zeek') %}
{%- endif %}
{% if grains.role in ['so-sensor', 'so-eval', 'so-standalone', 'so-heavynode'] %}
{% do allowed_states.append('strelka') %}
{% endif %}
{% if grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-searchnode', 'so-managersearch', 'so-heavynode', 'so-import'] %}
{% do allowed_states.append('elasticsearch') %}
{% endif %}
{% if grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch', 'so-import'] %}
{% do allowed_states.append('elasticsearch.auth') %}
{% endif %}
{% if grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch', 'so-import'] %}
{% do allowed_states.append('kibana') %}
{% do allowed_states.append('kibana.secrets') %}
{% endif %}
{% if grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch'] %}
{% do allowed_states.append('elastalert') %}
{% endif %}
{% if grains.role in ['so-manager', 'so-standalone', 'so-searchnode', 'so-managersearch', 'so-heavynode', 'so-receiver'] %}
{% do allowed_states.append('logstash') %}
{% endif %}
{% if grains.role in ['so-manager', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-receiver', 'so-eval'] %}
{% do allowed_states.append('redis') %}
{% endif %}
{# all nodes on the right salt version can run the following states #}
{% do allowed_states.append('common') %}
{% do allowed_states.append('patch.os.schedule') %}
{% do allowed_states.append('motd') %}
{% do allowed_states.append('salt.minion-check') %}
{% do allowed_states.append('sensoroni') %}
{% do allowed_states.append('salt.lasthighstate') %}
{# Get states for the current role #}
{% if grains.role in role_states %}
{% set allowed_states = role_states[grains.role] %}
{% endif %}
{# Add base states that apply to all roles #}
{% for state in base_states %}
{% do allowed_states.append(state) %}
{% endfor %}
{% endif %}
{# Add airgap state if needed #}
{% if ISAIRGAP %}
{% do allowed_states.append('airgap') %}
{% do allowed_states.append('airgap') %}
{% endif %}
{# all nodes can always run salt.minion state #}
{% do allowed_states.append('salt.minion') %}
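The refactored map file replaces per-role flat lists with shared state groups (`base_states`, `ssl_states`, etc.) composed per role, plus an airgap append. A small Python sketch of the same composition pattern — the role and state names are abbreviated for illustration, not the full lists from the map file:

```python
# Illustrative sketch (not the actual Salt Jinja renderer): composing a
# role's allowed states from shared groups, as the map file above does.
base_states = ["common", "patch.os.schedule", "motd", "salt.minion-check",
               "sensoroni", "salt.lasthighstate", "salt.minion"]
ssl_states = ["ssl", "telegraf", "firewall", "schedule", "docker_clean"]

role_states = {
    "so-idh": ssl_states + ["idh"],
    "so-receiver": ssl_states + ["kafka", "stig", "logstash", "redis"],
}

def allowed_states_for(role, airgap=False):
    """Role-specific states, then base states for all roles, then airgap."""
    states = list(role_states.get(role, []))
    states.extend(base_states)
    if airgap:
        states.append("airgap")
    return states
```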

View File

@@ -11,6 +11,10 @@ TODAY=$(date '+%Y_%m_%d')
BACKUPDIR={{ DESTINATION }}
BACKUPFILE="$BACKUPDIR/so-config-backup-$TODAY.tar"
MAXBACKUPS=7
EXCLUSIONS=(
"--exclude=/opt/so/saltstack/local/salt/elasticfleet/files/so_agent-installers"
)
# Create backup dir if it does not exist
mkdir -p /nsm/backup
@@ -23,7 +27,7 @@ if [ ! -f $BACKUPFILE ]; then
# Loop through all paths defined in global.sls, and append them to backup file
{%- for LOCATION in BACKUPLOCATIONS %}
tar -rf $BACKUPFILE {{ LOCATION }}
tar -rf $BACKUPFILE "${EXCLUSIONS[@]}" {{ LOCATION }}
{%- endfor %}
fi
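The change above prepends the `EXCLUSIONS` array to each `tar -rf` append so the agent-installer directory is skipped. A sketch of the argv layout this produces — the helper function is illustrative, not part of the script:

```python
# Illustrative argv construction matching the backup loop above:
# tar -rf $BACKUPFILE "${EXCLUSIONS[@]}" $LOCATION
def build_tar_append_cmd(backup_file, location, exclusions):
    """--exclude flags must precede the path being appended."""
    return ["tar", "-rf", backup_file] + list(exclusions) + [location]
```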

salt/common/grains.sls Normal file
View File

@@ -0,0 +1,21 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% set nsm_exists = salt['file.directory_exists']('/nsm') %}
{% if nsm_exists %}
{% set nsm_total = salt['cmd.shell']('df -BG /nsm | tail -1 | awk \'{print $2}\'') %}
nsm_total:
grains.present:
- name: nsm_total
- value: {{ nsm_total }}
{% else %}
nsm_missing:
test.succeed_without_changes:
- name: /nsm does not exist, skipping grain assignment
{% endif %}
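The new `nsm_total` grain shells out to `df -BG /nsm | tail -1 | awk '{print $2}'`. The same extraction in Python, for clarity on what the pipeline yields — sample output is illustrative:

```python
# Pure-Python equivalent of: df -BG /nsm | tail -1 | awk '{print $2}'
def parse_df_total(df_output):
    """Return the 'Size' column (e.g. '500G') from the last df line."""
    last_line = df_output.strip().splitlines()[-1]
    return last_line.split()[1]
```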

View File

@@ -4,6 +4,7 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
include:
- common.grains
- common.packages
{% if GLOBALS.role in GLOBALS.manager_roles %}
- manager.elasticsearch # needed for elastic_curl_config state
@@ -106,7 +107,7 @@ Etc/UTC:
timezone.system
# Sync curl configuration for Elasticsearch authentication
{% if GLOBALS.role in ['so-eval', 'so-heavynode', 'so-import', 'so-manager', 'so-managersearch', 'so-searchnode', 'so-standalone'] %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-searchnode'] %}
elastic_curl_config:
file.managed:
- name: /opt/so/conf/elasticsearch/curl.config
@@ -129,6 +130,10 @@ common_sbin:
- group: 939
- file_mode: 755
- show_changes: False
{% if GLOBALS.role == 'so-heavynode' %}
- exclude_pat:
- so-pcap-import
{% endif %}
common_sbin_jinja:
file.recurse:
@@ -139,6 +144,20 @@ common_sbin_jinja:
- file_mode: 755
- template: jinja
- show_changes: False
{% if GLOBALS.role == 'so-heavynode' %}
- exclude_pat:
- so-import-pcap
{% endif %}
{% if GLOBALS.role == 'so-heavynode' %}
remove_so-pcap-import_heavynode:
file.absent:
- name: /usr/sbin/so-pcap-import
remove_so-import-pcap_heavynode:
file.absent:
- name: /usr/sbin/so-import-pcap
{% endif %}
{% if not GLOBALS.is_manager%}
# prior to 2.4.50 these scripts were in common/tools/sbin on the manager because of soup and distributed to non managers

View File

@@ -1,6 +1,6 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% if GLOBALS.os_family == 'Debian' %}
# we cannot import GLOBALS from vars/globals.map.jinja in this state since it is called in setup.virt.init
# since it is early in setup of a new VM, the pillars imported in GLOBALS are not yet defined
{% if grains.os_family == 'Debian' %}
commonpkgs:
pkg.installed:
- skip_suggestions: True
@@ -46,7 +46,7 @@ python-rich:
{% endif %}
{% endif %}
{% if GLOBALS.os_family == 'RedHat' %}
{% if grains.os_family == 'RedHat' %}
remove_mariadb:
pkg.removed:

View File

@@ -64,6 +64,12 @@ copy_so-repo-sync_manager_tools_sbin:
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-repo-sync
- preserve: True
copy_bootstrap-salt_manager_tools_sbin:
file.copy:
- name: /opt/so/saltstack/default/salt/salt/scripts/bootstrap-salt.sh
- source: {{UPDATE_DIR}}/salt/salt/scripts/bootstrap-salt.sh
- preserve: True
# This section is used to put the new script in place so that it can be called during soup.
# It is faster than calling the states that normally manage them to put them in place.
copy_so-common_sbin:
@@ -108,6 +114,13 @@ copy_so-repo-sync_sbin:
- force: True
- preserve: True
copy_bootstrap-salt_sbin:
file.copy:
- name: /usr/sbin/bootstrap-salt.sh
- source: {{UPDATE_DIR}}/salt/salt/scripts/bootstrap-salt.sh
- force: True
- preserve: True
{# this is added in 2.4.120 to remove salt repo files pointing to saltproject.io to accomodate the move to broadcom and new bootstrap-salt script #}
{% if salt['pkg.version_cmp'](SOVERSION, '2.4.120') == -1 %}
{% set saltrepofile = '/etc/yum.repos.d/salt.repo' %}

View File

@@ -99,6 +99,17 @@ add_interface_bond0() {
fi
}
airgap_playbooks() {
SRC_DIR=$1
# Copy playbooks if using airgap
mkdir -p /nsm/airgap-resources
# Purge old airgap playbooks to ensure SO only uses the latest released playbooks
rm -fr /nsm/airgap-resources/playbooks
tar xf $SRC_DIR/airgap-resources/playbooks.tgz -C /nsm/airgap-resources/
chown -R socore:socore /nsm/airgap-resources/playbooks
git config --global --add safe.directory /nsm/airgap-resources/playbooks
}
check_container() {
docker ps | grep "$1:" > /dev/null 2>&1
return $?
@@ -299,7 +310,8 @@ fail() {
get_agent_count() {
if [ -f /opt/so/log/agents/agentstatus.log ]; then
AGENTCOUNT=$(cat /opt/so/log/agents/agentstatus.log | grep -wF active | awk '{print $2}')
AGENTCOUNT=$(cat /opt/so/log/agents/agentstatus.log | grep -wF active | awk '{print $2}' | sed 's/,//')
[[ -z "$AGENTCOUNT" ]] && AGENTCOUNT="0"
else
AGENTCOUNT=0
fi
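The `sed 's/,//'` addition above strips a thousands separator so the agent count parses as a number once the grid exceeds 999 agents. The fix, restated in Python:

```python
# The agentstatus.log line looks like "active 1,234"; the comma must be
# removed before the value can be treated as an integer.
def parse_agent_count(line):
    parts = line.split()
    if len(parts) < 2:
        return 0
    return int(parts[1].replace(",", ""))
```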
@@ -429,8 +441,7 @@ lookup_grain() {
lookup_role() {
id=$(lookup_grain id)
pieces=($(echo $id | tr '_' ' '))
echo ${pieces[1]}
echo "${id##*_}"
}
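The `lookup_role` change swaps array indexing (`${pieces[1]}`) for `${id##*_}`, i.e. everything after the *last* underscore. In Python terms:

```python
# Equivalent of the bash ${id##*_} expansion: take the suffix after the
# last underscore. The old pieces[1] indexing broke when the hostname
# itself contained an underscore.
def lookup_role(minion_id):
    return minion_id.rsplit("_", 1)[-1]
```

For `my_host_manager`, the old `pieces[1]` logic would have returned `host`; the last-token form returns `manager`.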
is_feature_enabled() {

View File

@@ -45,7 +45,7 @@ def check_for_fps():
result = subprocess.run([feat_full + '-mode-setup', '--is-enabled'], stdout=subprocess.PIPE)
if result.returncode == 0:
fps = 1
except FileNotFoundError:
except:
fn = '/proc/sys/crypto/' + feat_full + '_enabled'
try:
with open(fn, 'r') as f:

View File

@@ -4,22 +4,16 @@
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
import sys, argparse, re, docker
import sys, argparse, re, subprocess, json
from packaging.version import Version, InvalidVersion
from itertools import groupby, chain
def get_image_name(string) -> str:
return ':'.join(string.split(':')[:-1])
def get_so_image_basename(string) -> str:
return get_image_name(string).split('/so-')[-1]
def get_image_version(string) -> str:
ver = string.split(':')[-1]
if ver == 'latest':
@@ -35,56 +29,75 @@ def get_image_version(string) -> str:
return '999999.9.9'
return ver
def run_command(command):
process = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
if process.returncode != 0:
print(f"Error executing command: {command}", file=sys.stderr)
print(f"Error message: {process.stderr}", file=sys.stderr)
exit(1)
return process.stdout
def main(quiet):
client = docker.from_env()
# Prune old/stopped containers
if not quiet: print('Pruning old containers')
client.containers.prune()
image_list = client.images.list(filters={ 'dangling': False })
# Map list of image objects to flattened list of tags (format: "name:version")
tag_list = list(chain.from_iterable(list(map(lambda x: x.attrs.get('RepoTags'), image_list))))
# Filter to only SO images (base name begins with "so-")
tag_list = list(filter(lambda x: re.match(r'^.*\/so-[^\/]*$', get_image_name(x)), tag_list))
# Group tags into lists by base name (sort by same projection first)
tag_list.sort(key=lambda x: get_so_image_basename(x))
grouped_tag_lists = [ list(it) for _, it in groupby(tag_list, lambda x: get_so_image_basename(x)) ]
no_prunable = True
for t_list in grouped_tag_lists:
try:
# Group tags by version, in case multiple images exist with the same version string
t_list.sort(key=lambda x: Version(get_image_version(x)), reverse=True)
grouped_t_list = [ list(it) for _,it in groupby(t_list, lambda x: get_image_version(x)) ]
# Keep the 2 most current version groups
if len(grouped_t_list) <= 2:
continue
else:
no_prunable = False
for group in grouped_t_list[2:]:
for tag in group:
if not quiet: print(f'Removing image {tag}')
# Prune old/stopped containers using docker CLI
if not quiet: print('Pruning old containers')
run_command('docker container prune -f')
# Get list of images using docker CLI
images_json = run_command('docker images --format "{{json .}}"')
# Parse the JSON output
image_list = []
for line in images_json.strip().split('\n'):
if line: # Skip empty lines
image_list.append(json.loads(line))
# Extract tags in the format "name:version"
tag_list = []
for img in image_list:
# Skip dangling images
if img.get('Repository') != "<none>" and img.get('Tag') != "<none>":
tag = f"{img.get('Repository')}:{img.get('Tag')}"
# Filter to only SO images (base name begins with "so-")
if re.match(r'^.*\/so-[^\/]*$', get_image_name(tag)):
tag_list.append(tag)
# Group tags into lists by base name (sort by same projection first)
tag_list.sort(key=lambda x: get_so_image_basename(x))
grouped_tag_lists = [list(it) for k, it in groupby(tag_list, lambda x: get_so_image_basename(x))]
no_prunable = True
for t_list in grouped_tag_lists:
try:
client.images.remove(tag, force=True)
except docker.errors.ClientError as e:
print(f'Could not remove image {tag}, continuing...')
except (docker.errors.APIError, InvalidVersion) as e:
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
# Group tags by version, in case multiple images exist with the same version string
t_list.sort(key=lambda x: Version(get_image_version(x)), reverse=True)
grouped_t_list = [list(it) for k, it in groupby(t_list, lambda x: get_image_version(x))]
# Keep the 2 most current version groups
if len(grouped_t_list) <= 2:
continue
else:
no_prunable = False
for group in grouped_t_list[2:]:
for tag in group:
if not quiet: print(f'Removing image {tag}')
try:
run_command(f'docker rmi -f {tag}')
except Exception as e:
print(f'Could not remove image {tag}, continuing...')
except (InvalidVersion) as e:
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
except Exception as e:
print('Unhandled exception occurred:')
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
if no_prunable and not quiet:
print('No Security Onion images to prune')
except Exception as e:
print('Unhandled exception occurred:')
print(f'so-{get_so_image_basename(t_list[0])}: {e}', file=sys.stderr)
exit(1)
if no_prunable and not quiet:
print('No Security Onion images to prune')
print(f"Error: {e}", file=sys.stderr)
exit(1)
if __name__ == "__main__":
main_parser = argparse.ArgumentParser(add_help=False)
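The rewritten pruner groups each image's tags by version and keeps only the two newest version groups. The selection step in isolation — a sketch with a caller-supplied version key, not the script's full tag parsing:

```python
from itertools import groupby

# Core keep-newest-N selection from the pruning loop above. get_version
# must return a comparable key (e.g. a tuple of ints) for each tag.
def select_prunable(tags, get_version, keep=2):
    tags = sorted(tags, key=get_version, reverse=True)
    groups = [list(g) for _, g in groupby(tags, key=get_version)]
    prunable = []
    for group in groups[keep:]:
        prunable.extend(group)
    return prunable
```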

View File

@@ -127,6 +127,8 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process already finished" # Telegraf script finished just as the auto kill timeout kicked in
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|No shard available" # Typical error when making a query before ES has finished loading all indices
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|responded with status-code 503" # telegraf getting 503 from ES during startup
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process_cluster_event_timeout_exception" # logstash waiting for elasticsearch to start
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|not configured for GeoIP" # SO does not bundle the maxminddb with Zeek
fi
if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
@@ -155,6 +157,9 @@ if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|request_unauthorized" # false positive (login failures to Hydra result in an 'error' log)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|adding index lifecycle policy" # false positive (elasticsearch policy names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|adding ingest pipeline" # false positive (elasticsearch ingest pipeline names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating index template" # false positive (elasticsearch index or template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating component template" # false positive (elasticsearch index or template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|upgrading composable template" # false positive (elasticsearch composable template names contain 'error')
fi
if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
@@ -217,6 +222,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Initialized license manager" # SOC log: before fields.status was changed to fields.licenseStatus
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|from NIC checksum offloading" # zeek reporter.log
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|marked for removal" # docker container getting recycled
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tcp 127.0.0.1:6791: bind: address already in use" # so-elastic-fleet agent restarting. Seen starting w/ 8.18.8 https://github.com/elastic/kibana/issues/201459
fi
RESULT=0
@@ -263,6 +269,13 @@ for log_file in $(cat /tmp/log_check_files); do
tail -n $RECENT_LOG_LINES $log_file > /tmp/log_check
check_for_errors
done
# Look for OOM specific errors in /var/log/messages which can lead to odd behavior / test failures
if [[ -f /var/log/messages ]]; then
status "Checking log file /var/log/messages"
if journalctl --since "24 hours ago" | grep -iE 'out of memory|oom-kill'; then
RESULT=1
fi
fi
# Cleanup temp files
rm -f /tmp/log_check_files

View File

@@ -0,0 +1,53 @@
#!/usr/bin/python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
import logging
import os
import sys
def setup_logging(logger_name, log_file_path, log_level=logging.INFO, format_str='%(asctime)s - %(levelname)s - %(message)s'):
"""
Sets up logging for a script.
Parameters:
logger_name (str): The name of the logger.
log_file_path (str): The file path for the log file.
log_level (int): The logging level (e.g., logging.INFO, logging.DEBUG).
format_str (str): The format string for log messages.
Returns:
logging.Logger: Configured logger object.
"""
logger = logging.getLogger(logger_name)
logger.setLevel(log_level)
# Create directory for log file if it doesn't exist
log_file_dir = os.path.dirname(log_file_path)
if log_file_dir and not os.path.exists(log_file_dir):
try:
os.makedirs(log_file_dir)
except OSError as e:
print(f"Error creating directory {log_file_dir}: {e}")
sys.exit(1)
# Create handlers
c_handler = logging.StreamHandler()
f_handler = logging.FileHandler(log_file_path)
c_handler.setLevel(log_level)
f_handler.setLevel(log_level)
# Create formatter and add it to handlers
formatter = logging.Formatter(format_str)
c_handler.setFormatter(formatter)
f_handler.setFormatter(formatter)
# Add handlers to the logger if they are not already added
if not logger.hasHandlers():
logger.addHandler(c_handler)
logger.addHandler(f_handler)
return logger

View File

@@ -173,7 +173,7 @@ for PCAP in $INPUT_FILES; do
status "- assigning unique identifier to import: $HASH"
pcap_data=$(pcapinfo "${PCAP}")
if ! echo "$pcap_data" | grep -q "First packet time:" || echo "$pcap_data" |egrep -q "Last packet time: 1970-01-01|Last packet time: n/a"; then
if ! echo "$pcap_data" | grep -q "Earliest packet time:" || echo "$pcap_data" |egrep -q "Latest packet time: 1970-01-01|Latest packet time: n/a"; then
status "- this PCAP file is invalid; skipping"
INVALID_PCAPS_COUNT=$((INVALID_PCAPS_COUNT + 1))
else
@@ -205,8 +205,8 @@ for PCAP in $INPUT_FILES; do
HASHES="${HASHES} ${HASH}"
fi
START=$(pcapinfo "${PCAP}" -a |grep "First packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e |grep "Last packet time:" | awk '{print $4}')
START=$(pcapinfo "${PCAP}" -a |grep "Earliest packet time:" | awk '{print $4}')
END=$(pcapinfo "${PCAP}" -e |grep "Latest packet time:" | awk '{print $4}')
status "- found PCAP data spanning dates $START through $END"
# compare $START to $START_OLDEST
@@ -248,7 +248,7 @@ fi
START_OLDEST_SLASH=$(echo $START_OLDEST | sed -e 's/-/%2F/g')
END_NEWEST_SLASH=$(echo $END_NEWEST | sed -e 's/-/%2F/g')
if [[ $VALID_PCAPS_COUNT -gt 0 ]] || [[ $SKIPPED_PCAPS_COUNT -gt 0 ]]; then
URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20event.module*%20%7C%20groupby%20-sankey%20event.module*%20event.dataset%20%7C%20groupby%20event.dataset%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port%20%7C%20groupby%20network.protocol%20%7C%20groupby%20rule.name%20rule.category%20event.severity_label%20%7C%20groupby%20dns.query.name%20%7C%20groupby%20file.mime_type%20%7C%20groupby%20http.virtual_host%20http.uri%20%7C%20groupby%20notice.note%20notice.message%20notice.sub_message%20%7C%20groupby%20ssl.server_name%20%7C%20groupby%20source_geo.organization_name%20source.geo.country_name%20%7C%20groupby%20destination_geo.organization_name%20destination.geo.country_name&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20event.module*%20%7C%20groupby%20-sankey%20event.module*%20event.dataset%20%7C%20groupby%20event.dataset%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port%20%7C%20groupby%20network.protocol%20%7C%20groupby%20rule.name%20rule.category%20event.severity_label%20%7C%20groupby%20dns.query.name%20%7C%20groupby%20file.mime_type%20%7C%20groupby%20http.virtual_host%20http.uri%20%7C%20groupby%20notice.note%20notice.message%20notice.sub_message%20%7C%20groupby%20ssl.server_name%20%7C%20groupby%20source.as.organization.name%20source.geo.country_name%20%7C%20groupby%20destination.as.organization.name%20destination.geo.country_name&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
status "Import complete!"
status
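The `sed -e 's/-/%2F/g'` lines above turn hyphenated dates into percent-encoded slashes (`%2F`) for the dashboard time-range URL. The same transform in Python:

```python
# Equivalent of: echo $START_OLDEST | sed -e 's/-/%2F/g'
# %2F is the percent-encoding of '/', so 2024-01-02 becomes the
# URL path segment 2024/01/02 once decoded.
def encode_date_for_url(date_str):
    return date_str.replace("-", "%2F")
```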

View File

@@ -0,0 +1,132 @@
#!/opt/saltstack/salt/bin/python3
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% if 'vrt' in salt['pillar.get']('features', []) -%}
"""
Script for emitting VM deployment status events to the Salt event bus.
This script provides functionality to emit status events for VM deployment operations,
used by various Security Onion VM management tools.
Usage:
so-salt-emit-vm-deployment-status-event -v <vm_name> -H <hypervisor> -s <status>
Arguments:
-v, --vm-name Name of the VM (hostname_role)
-H, --hypervisor Name of the hypervisor
-s, --status Current deployment status of the VM
Example:
so-salt-emit-vm-deployment-status-event -v sensor1_sensor -H hypervisor1 -s "Creating"
"""
import sys
import argparse
import logging
import salt.client
from typing import Dict, Any
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
log = logging.getLogger(__name__)
def emit_event(vm_name: str, hypervisor: str, status: str) -> bool:
"""
Emit a VM deployment status event to the salt event bus.
Args:
vm_name: Name of the VM (hostname_role)
hypervisor: Name of the hypervisor
status: Current deployment status of the VM
Returns:
bool: True if event was sent successfully, False otherwise
Raises:
ValueError: If status is not a valid deployment status
"""
log.info("Attempting to emit deployment event...")
try:
caller = salt.client.Caller()
event_data = {
'vm_name': vm_name,
'hypervisor': hypervisor,
'status': status
}
# Use consistent event tag structure
event_tag = f'soc/dyanno/hypervisor/{status.lower()}'
ret = caller.cmd(
'event.send',
event_tag,
event_data
)
if not ret:
log.error("Failed to emit VM deployment status event: %s", event_data)
return False
log.info("Successfully emitted VM deployment status event: %s", event_data)
return True
except Exception as e:
log.error("Error emitting VM deployment status event: %s", str(e))
return False
def parse_args():
"""Parse command line arguments."""
parser = argparse.ArgumentParser(
description='Emit VM deployment status events to the Salt event bus.'
)
parser.add_argument('-v', '--vm-name', required=True,
help='Name of the VM (hostname_role)')
parser.add_argument('-H', '--hypervisor', required=True,
help='Name of the hypervisor')
parser.add_argument('-s', '--status', required=True,
help='Current deployment status of the VM')
return parser.parse_args()
def main():
"""Main entry point for the script."""
try:
args = parse_args()
success = emit_event(
vm_name=args.vm_name,
hypervisor=args.hypervisor,
status=args.status
)
if not success:
sys.exit(1)
except Exception as e:
log.error("Failed to emit status event: %s", str(e))
sys.exit(1)
if __name__ == '__main__':
main()
{%- else -%}
echo "Hypervisor nodes are a feature supported only for customers with a valid license. \
Contact Security Onion Solutions, LLC via our website at https://securityonionsolutions.com \
for more information about purchasing a license to enable this feature."
{% endif -%}
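The event tag in `emit_event` above is derived from the status argument. A sketch of just that construction (tag path spelled exactly as in the source, including `dyanno`):

```python
# Tag construction from emit_event above: the status is lowercased and
# appended to the fixed prefix used on the Salt event bus.
def build_event_tag(status):
    return f'soc/dyanno/hypervisor/{status.lower()}'
```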

View File

@@ -200,6 +200,7 @@ docker:
final_octet: 88
port_bindings:
- 0.0.0.0:9092:9092
- 0.0.0.0:29092:29092
- 0.0.0.0:9093:9093
- 0.0.0.0:8778:8778
custom_bind_mounts: []

View File

@@ -9,3 +9,6 @@ fleetartifactdir:
- user: 947
- group: 939
- makedirs: True
- recurse:
- user
- group

View File

@@ -9,6 +9,9 @@
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% set node_data = salt['pillar.get']('node_data') %}
include:
- elasticfleet.artifact_registry
# Add EA Group
elasticfleetgroup:
group.present:
@@ -166,7 +169,7 @@ eaoptionalintegrationsdir:
{% for minion in node_data %}
{% set role = node_data[minion]["role"] %}
{% if role in [ "eval","fleet","heavynode","import","manager","managersearch","standalone" ] %}
{% if role in [ "eval","fleet","heavynode","import","manager", "managerhype", "managersearch","standalone" ] %}
{% set optional_integrations = ELASTICFLEETMERGED.optional_integrations %}
{% set integration_keys = optional_integrations.keys() %}
fleet_server_integrations_{{ minion }}:

View File

@@ -11,6 +11,7 @@ elasticfleet:
defend_filters:
enable_auto_configuration: False
subscription_integrations: False
auto_upgrade_integrations: False
logging:
zeek:
excluded:
@@ -37,6 +38,7 @@ elasticfleet:
- elasticsearch
- endpoint
- fleet_server
- filestream
- http_endpoint
- httpjson
- log

View File

@@ -67,6 +67,8 @@ so-elastic-fleet-auto-configure-artifact-urls:
elasticagent_syncartifacts:
file.recurse:
- name: /nsm/elastic-fleet/artifacts/beats
- user: 947
- group: 947
- source: salt://beats
{% endif %}
@@ -133,12 +135,18 @@ so-elastic-fleet-package-statefile:
so-elastic-fleet-package-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-package-upgrade
- retry:
attempts: 3
interval: 10
- onchanges:
- file: /opt/so/state/elastic_fleet_packages.txt
so-elastic-fleet-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-policy-load
- retry:
attempts: 3
interval: 10
so-elastic-agent-grid-upgrade:
cmd.run:
@@ -150,7 +158,11 @@ so-elastic-agent-grid-upgrade:
so-elastic-fleet-integration-upgrade:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-integration-upgrade
- retry:
attempts: 3
interval: 10
{# Optional integrations script doesn't need the retries like so-elastic-fleet-integration-upgrade which loads the default integrations #}
so-elastic-fleet-addon-integrations:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-optional-integrations-load

View File

@@ -0,0 +1,46 @@
{%- set identities = salt['sqlite3.fetch']('/nsm/kratos/db/db.sqlite', 'SELECT id, json_extract(traits, "$.email") as email FROM identities;') -%}
{%- set valid_identities = false -%}
{%- if identities -%}
{%- set valid_identities = true -%}
{%- for id, email in identities -%}
{%- if not id or not email -%}
{%- set valid_identities = false -%}
{%- break -%}
{%- endif -%}
{%- endfor -%}
{%- endif -%}
{
"package": {
"name": "log",
"version": ""
},
"name": "kratos-logs",
"namespace": "so",
"description": "Kratos logs",
"policy_id": "so-grid-nodes_general",
"inputs": {
"logs-logfile": {
"enabled": true,
"streams": {
"log.logs": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/kratos/kratos.log"
],
"data_stream.dataset": "kratos",
"tags": ["so-kratos"],
{%- if valid_identities -%}
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos\n- if:\n has_fields:\n - identity_id\n then:{% for id, email in identities %}\n - if:\n equals:\n identity_id: \"{{ id }}\"\n then:\n - add_fields:\n target: ''\n fields:\n user.name: \"{{ email }}\"{% endfor %}",
{%- else -%}
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos",
{%- endif -%}
"custom": "pipeline: kratos"
}
}
}
}
},
"force": true
}

View File
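The new kratos-logs template above pulls identities out of the Kratos SQLite database with Salt's `sqlite3.fetch`, then only emits the per-identity `user.name` processors when every row has both an id and an email. A rough Python equivalent of that query and validity check (the table schema here is assumed from the template's `SELECT`; the real `identities` table has more columns):

```python
import json
import sqlite3

# Throwaway in-memory database mirroring just the columns the template reads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE identities (id TEXT, traits TEXT)")
conn.execute(
    "INSERT INTO identities VALUES (?, ?)",
    ("abc-123", json.dumps({"email": "analyst@example.com"})),
)

# Same query the template runs via salt['sqlite3.fetch'](...).
rows = conn.execute(
    'SELECT id, json_extract(traits, "$.email") AS email FROM identities'
).fetchall()

# Mirror the template's check: identities are only usable if every row
# carries both an id and an email.
valid_identities = bool(rows) and all(i and e for i, e in rows)
print(rows, valid_identities)
```

This is only a sketch of the query logic; the template itself additionally renders the results into a Beats `processors` string that maps `identity_id` to `user.name`.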

@@ -1,32 +1,33 @@
{
"name": "elastic-defend-endpoints",
"namespace": "default",
"description": "",
"package": {
"name": "endpoint",
"title": "Elastic Defend",
"version": "8.17.0",
"requires_root": true
},
"enabled": true,
"policy_id": "endpoints-initial",
"vars": {},
"inputs": [
{
"type": "endpoint",
"enabled": true,
"config": {
"integration_config": {
"value": {
"type": "endpoint",
"endpointConfig": {
"preset": "DataCollection"
}
}
}
},
"streams": []
}
]
}
"name": "elastic-defend-endpoints",
"namespace": "default",
"description": "",
"package": {
"name": "endpoint",
"title": "Elastic Defend",
"version": "8.18.1",
"requires_root": true
},
"enabled": true,
"policy_ids": [
"endpoints-initial"
],
"vars": {},
"inputs": [
{
"type": "ENDPOINT_INTEGRATION_CONFIG",
"enabled": true,
"config": {
"_config": {
"value": {
"type": "endpoint",
"endpointConfig": {
"preset": "DataCollection"
}
}
}
},
"streams": []
}
]
}

View File

@@ -0,0 +1,48 @@
{
"package": {
"name": "filestream",
"version": ""
},
"name": "agent-monitor",
"namespace": "",
"description": "",
"policy_ids": [
"so-grid-nodes_general"
],
"output_id": null,
"vars": {},
"inputs": {
"filestream-filestream": {
"enabled": true,
"streams": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/agents/agent-monitor.log"
],
"data_stream.dataset": "agentmonitor",
"pipeline": "elasticagent.monitor",
"parsers": "",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- add_fields:\n target: event\n fields:\n module: gridmetrics",
"tags": [],
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": true,
"fingerprint_offset": 0,
"fingerprint_length": 64,
"file_identity_native": false,
"exclude_lines": [],
"include_lines": []
}
}
}
}
}
}

View File

@@ -40,7 +40,7 @@
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/elasticsearch/*.log"
"/opt/so/log/elasticsearch/*.json"
]
}
},

View File

@@ -19,7 +19,7 @@
],
"data_stream.dataset": "idh",
"tags": [],
"processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- drop_fields:\n when:\n equals:\n logtype: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
"processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n- drop_fields:\n when:\n equals:\n event.code: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
"custom": "pipeline: common"
}
}

View File
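The opencanary processor change above moves the `convert` step (`logtype` → `event.code` as a string) ahead of the conditional `drop_fields`, so the drop condition can match on `event.code` instead of the raw `logtype`. A minimal sketch of that ordering with plain dicts (this emulates the two processors, it is not the Beats implementation):

```python
def convert_logtype(event):
    # - convert: {from: "logtype", to: "event.code", type: "string"}
    if "logtype" in event:
        event["event.code"] = str(event["logtype"])
    return event

def drop_endpoint_fields(event):
    # - drop_fields when event.code == "1001": strip src/dst fields,
    #   ignore_missing: true
    if event.get("event.code") == "1001":
        for f in ("src_host", "src_port", "dst_host", "dst_port"):
            event.pop(f, None)
    return event

event = {"logtype": 1001, "src_host": "10.0.0.5", "src_port": 4444}
event = drop_endpoint_fields(convert_logtype(event))
print(event)
```

Run in the old order (drop before convert), the `event.code` condition would never match, which is the behavior the diff corrects.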

@@ -20,7 +20,7 @@
],
"data_stream.dataset": "import",
"custom": "",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-1.67.0\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-2.5.0\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-1.67.0\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-1.67.0\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-2.5.0\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.6.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.6.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.6.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"tags": [
"import"
]

View File

@@ -1,30 +0,0 @@
{
"package": {
"name": "log",
"version": ""
},
"name": "kratos-logs",
"namespace": "so",
"description": "Kratos logs",
"policy_id": "so-grid-nodes_general",
"inputs": {
"logs-logfile": {
"enabled": true,
"streams": {
"log.logs": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/kratos/kratos.log"
],
"data_stream.dataset": "kratos",
"tags": ["so-kratos"],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos",
"custom": "pipeline: kratos"
}
}
}
}
},
"force": true
}

View File

@@ -11,7 +11,7 @@
"tcp-tcp": {
"enabled": true,
"streams": {
"tcp.generic": {
"tcp.tcp": {
"enabled": true,
"vars": {
"listen_address": "0.0.0.0",
@@ -23,7 +23,8 @@
"syslog"
],
"syslog_options": "field: message\n#format: auto\n#timezone: Local",
"ssl": ""
"ssl": "",
"custom": ""
}
}
}

View File

@@ -31,7 +31,8 @@
],
"tags": [
"so-grid-node"
]
],
"processors": "- if:\n contains:\n message: \"salt-minion\"\n then: \n - dissect:\n tokenizer: \"%{} %{} %{} %{} %{} %{}: [%{log.level}] %{*}\"\n field: \"message\"\n trim_values: \"all\"\n target_prefix: \"\"\n - drop_event:\n when:\n equals:\n log.level: \"INFO\""
}
}
}

View File
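The processor added above dissects salt-minion syslog lines with the tokenizer `%{} %{} %{} %{} %{} %{}: [%{log.level}] %{*}` (six skipped tokens, then the bracketed level) and drops events whose level is `INFO`. A rough regex emulation of that tokenizer and drop rule (the sample log line is invented for illustration):

```python
import re

# Approximates the dissect tokenizer: six whitespace-separated tokens,
# the sixth ending in ":", then "[<level>] <rest>".
PATTERN = re.compile(r"^\S+ \S+ \S+ \S+ \S+ \S+: \[(?P<level>[^\]]+)\] ")

def process(message):
    m = PATTERN.match(message)
    if not m:
        return {"message": message}          # not salt-minion shaped: keep as-is
    if m.group("level") == "INFO":
        return None                          # drop_event: INFO lines are noise
    return {"message": message, "log.level": m.group("level")}

line = "Oct 15 10:00:01 mgr salt-minion[1234] salt.loader: [ERROR] render failed"
print(process(line))
print(process(line.replace("[ERROR]", "[INFO]")))
```

The effect is that only WARNING/ERROR salt-minion lines make it into the grid-node log stream.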

@@ -4,6 +4,7 @@
{% import_json '/opt/so/state/esfleet_package_components.json' as ADDON_PACKAGE_COMPONENTS %}
{% import_json '/opt/so/state/esfleet_component_templates.json' as INSTALLED_COMPONENT_TEMPLATES %}
{% import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% set CORE_ESFLEET_PACKAGES = ELASTICFLEETDEFAULTS.get('elasticfleet', {}).get('packages', {}) %}
@@ -14,6 +15,7 @@
'awsfirehose.logs': 'awsfirehose',
'awsfirehose.metrics': 'aws.cloudwatch',
'cribl.logs': 'cribl',
'cribl.metrics': 'cribl',
'sentinel_one_cloud_funnel.logins': 'sentinel_one_cloud_funnel.login',
'azure_application_insights.app_insights': 'azure.app_insights',
'azure_application_insights.app_state': 'azure.app_state',
@@ -45,7 +47,10 @@
'synthetics.browser_screenshot': 'synthetics-browser.screenshot',
'synthetics.http': 'synthetics-http',
'synthetics.icmp': 'synthetics-icmp',
'synthetics.tcp': 'synthetics-tcp'
'synthetics.tcp': 'synthetics-tcp',
'swimlane.swimlane_api': 'swimlane.api',
'swimlane.tenant_api': 'swimlane.tenant',
'swimlane.turbine_api': 'turbine.api'
} %}
{% for pkg in ADDON_PACKAGE_COMPONENTS %}
@@ -62,70 +67,90 @@
{% else %}
{% set integration_type = "" %}
{% endif %}
{% set component_name = pkg.name ~ "." ~ pattern.title %}
{# fix weirdly named components #}
{% if component_name in WEIRD_INTEGRATIONS %}
{% set component_name = WEIRD_INTEGRATIONS[component_name] %}
{% endif %}
{% set component_name = pkg.name ~ "." ~ pattern.title %}
{% set index_pattern = pattern.name %}
{# fix weirdly named components #}
{% if component_name in WEIRD_INTEGRATIONS %}
{% set component_name = WEIRD_INTEGRATIONS[component_name] %}
{% endif %}
{# create duplicate of component_name, so we can split generics from @custom component templates in the index template below and overwrite the default @package when needed
eg. having to replace unifiedlogs.generic@package with filestream.generic@package, but keep the ability to customize unifiedlogs.generic@custom and its ILM policy #}
{% set custom_component_name = component_name %}
{# duplicate integration_type to assist with sometimes needing to overwrite component templates with 'logs-filestream.generic@package' (there is no metrics-filestream.generic@package) #}
{% set generic_integration_type = integration_type %}
{# component_name_x maintains the functionality of merging local pillar changes with generated 'defaults' via SOC UI #}
{% set component_name_x = component_name.replace(".","_x_") %}
{# pillar overrides/merge expects the key names to follow the naming in elasticsearch/defaults.yaml eg. so-logs-1password_x_item_usages . The _x_ is replaced later on in elasticsearch/template.map.jinja #}
{% set integration_key = "so-" ~ integration_type ~ component_name_x %}
{# if its a .generic template make sure that a .generic@package for the integration exists. Else default to logs-filestream.generic@package #}
{% if ".generic" in component_name and integration_type ~ component_name ~ "@package" not in INSTALLED_COMPONENT_TEMPLATES %}
{# these generic templates by default are directed to index_pattern of 'logs-generic-*', overwrite that here to point to eg gcp_pubsub.generic-* #}
{% set index_pattern = integration_type ~ component_name ~ "-*" %}
{# includes use of .generic component template, but it doesn't exist in installed component templates. Redirect it to filestream.generic@package #}
{% set component_name = "filestream.generic" %}
{% set generic_integration_type = "logs-" %}
{% endif %}
{# Default integration settings #}
{% set integration_defaults = {
"index_sorting": false,
"index_template": {
"composed_of": [integration_type ~ component_name ~ "@package", integration_type ~ component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"data_stream": {
"allow_custom_routing": false,
"hidden": false
},
"ignore_missing_component_templates": [integration_type ~ component_name ~ "@custom"],
"index_patterns": [pattern.name],
"priority": 501,
"template": {
"settings": {
"index": {
"lifecycle": {"name": "so-" ~ integration_type ~ component_name ~ "-logs"},
"number_of_replicas": 0
}
}
}
},
"policy": {
"phases": {
"cold": {
"actions": {
"set_priority": {"priority": 0}
},
"min_age": "60d"
"index_sorting": false,
"index_template": {
"composed_of": [generic_integration_type ~ component_name ~ "@package", integration_type ~ custom_component_name ~ "@custom", "so-fleet_integrations.ip_mappings-1", "so-fleet_globals-1", "so-fleet_agent_id_verification-1"],
"data_stream": {
"allow_custom_routing": false,
"hidden": false
},
"ignore_missing_component_templates": [integration_type ~ custom_component_name ~ "@custom"],
"index_patterns": [index_pattern],
"priority": 501,
"template": {
"settings": {
"index": {
"lifecycle": {"name": "so-" ~ integration_type ~ custom_component_name ~ "-logs"},
"number_of_replicas": 0
}
}
}
},
"policy": {
"phases": {
"cold": {
"actions": {
"set_priority": {"priority": 0}
},
"min_age": "60d"
},
"delete": {
"actions": {
"delete": {}
},
"min_age": "365d"
},
"hot": {
"actions": {
"rollover": {
"max_age": "30d",
"max_primary_shard_size": "50gb"
},
"set_priority": {"priority": 100}
},
"delete": {
"actions": {
"delete": {}
},
"min_age": "365d"
},
"hot": {
"actions": {
"rollover": {
"max_age": "30d",
"max_primary_shard_size": "50gb"
},
"set_priority": {"priority": 100}
},
"min_age": "0ms"
},
"warm": {
"actions": {
"set_priority": {"priority": 50}
},
"min_age": "30d"
}
}
}
} %}
"min_age": "0ms"
},
"warm": {
"actions": {
"set_priority": {"priority": 50}
},
"min_age": "30d"
}
}
}
} %}
{% do ADDON_INTEGRATION_DEFAULTS.update({integration_key: integration_defaults}) %}
{% endfor %}
{% endif %}

View File
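The template map above encodes component names into pillar keys by replacing `.` with `_x_` (e.g. `so-logs-1password_x_item_usages`), noting that the substitution is reversed later in `elasticsearch/template.map.jinja`. The round trip is plain string substitution:

```python
def to_pillar_key(integration_type, component_name):
    # "1password.item_usages" + "logs-" -> "so-logs-1password_x_item_usages";
    # pillar keys cannot safely carry "." so it is swapped for "_x_".
    return "so-" + integration_type + component_name.replace(".", "_x_")

def from_pillar_key(key):
    # Reverse step, as performed later in elasticsearch/template.map.jinja.
    return key.replace("_x_", ".")

key = to_pillar_key("logs-", "1password.item_usages")
print(key)
print(from_pillar_key(key))
```

This is only an illustration of the naming convention; the actual templates also merge these keys with local pillar overrides from the SOC UI.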

@@ -45,6 +45,11 @@ elasticfleet:
global: True
forcedType: bool
helpLink: elastic-fleet.html
auto_upgrade_integrations:
description: Enables or disables automatically upgrading Elastic Agent integrations.
global: True
forcedType: bool
helpLink: elastic-fleet.html
server:
custom_fqdn:
description: Custom FQDN for Agents to connect to. One per line.

View File

@@ -23,6 +23,13 @@ fi
# Define a banner to separate sections
banner="========================================================================="
fleet_api() {
local QUERYPATH=$1
shift
curl -sK /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/${QUERYPATH}" "$@" --retry 3 --retry-delay 10 --fail 2>/dev/null
}
elastic_fleet_integration_check() {
AGENT_POLICY=$1
@@ -39,7 +46,9 @@ elastic_fleet_integration_create() {
JSON_STRING=$1
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPOST -d "$JSON_STRING"; then
return 1
fi
}
@@ -56,7 +65,10 @@ elastic_fleet_integration_remove() {
'{"packagePolicyIds":[$INTEGRATIONID]}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/delete" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/delete" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo "Error: Unable to delete '$NAME' from '$AGENT_POLICY'"
return 1
fi
}
elastic_fleet_integration_update() {
@@ -65,7 +77,9 @@ elastic_fleet_integration_update() {
JSON_STRING=$2
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/$UPDATE_ID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -XPUT -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_integration_policy_upgrade() {
@@ -77,101 +91,116 @@ elastic_fleet_integration_policy_upgrade() {
'{"packagePolicyIds":[$INTEGRATIONID]}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/package_policies/upgrade" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "package_policies/upgrade" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_package_version_check() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.version'
if output=$(fleet_api "epm/packages/$PACKAGE"); then
echo "$output" | jq -r '.item.version'
else
echo "Error: Failed to get current package version for '$PACKAGE'"
return 1
fi
}
elastic_fleet_package_latest_version_check() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.latestVersion'
if output=$(fleet_api "epm/packages/$PACKAGE"); then
if version=$(jq -e -r '.item.latestVersion' <<< $output); then
echo "$version"
fi
else
echo "Error: Failed to get latest version for '$PACKAGE'"
return 1
fi
}
elastic_fleet_package_install() {
PKG=$1
VERSION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}' "localhost:5601/api/fleet/epm/packages/$PKG/$VERSION"
if ! fleet_api "epm/packages/$PKG/$VERSION" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"force":true}'; then
return 1
fi
}
elastic_fleet_bulk_package_install() {
BULK_PKG_LIST=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X POST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$1 "localhost:5601/api/fleet/epm/packages/_bulk"
}
elastic_fleet_package_is_installed() {
PACKAGE=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' "localhost:5601/api/fleet/epm/packages/$PACKAGE" | jq -r '.item.status'
}
elastic_fleet_installed_packages() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET -H 'kbn-xsrf: true' -H 'Content-Type: application/json' "localhost:5601/api/fleet/epm/packages/installed?perPage=300"
}
elastic_fleet_agent_policy_ids() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].id
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve agent policies."
exit 1
if ! fleet_api "epm/packages/_bulk" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d@$BULK_PKG_LIST; then
return 1
fi
}
elastic_fleet_agent_policy_names() {
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies" | jq -r .items[].name
if [ $? -ne 0 ]; then
elastic_fleet_installed_packages() {
if ! fleet_api "epm/packages/installed?perPage=500"; then
return 1
fi
}
elastic_fleet_agent_policy_ids() {
if output=$(fleet_api "agent_policies"); then
echo "$output" | jq -r .items[].id
else
echo "Error: Failed to retrieve agent policies."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_names() {
AGENT_POLICY=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r .item.package_policies[].name
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r .item.package_policies[].name
else
echo "Error: Failed to retrieve integrations for '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_package_name() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.name'
else
echo "Error: Failed to retrieve package name for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_package_version() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version'
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve package version for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
if version=$(jq -e -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .package.version' <<< "$output"); then
echo "$version"
fi
else
echo "Error: Failed to retrieve integration version for '$INTEGRATION' in policy '$AGENT_POLICY'"
return 1
fi
}
elastic_fleet_integration_id() {
AGENT_POLICY=$1
INTEGRATION=$2
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -L -X GET "localhost:5601/api/fleet/agent_policies/$AGENT_POLICY" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
if [ $? -ne 0 ]; then
if output=$(fleet_api "agent_policies/$AGENT_POLICY"); then
echo "$output" | jq -r --arg INTEGRATION "$INTEGRATION" '.item.package_policies[] | select(.name==$INTEGRATION)| .id'
else
echo "Error: Failed to retrieve integration ID for '$INTEGRATION' in '$AGENT_POLICY'."
exit 1
return 1
fi
}
elastic_fleet_integration_policy_dryrun_upgrade() {
INTEGRATION_ID=$1
curl -s -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -L -X POST "localhost:5601/api/fleet/package_policies/upgrade/dryrun" -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"
if [ $? -ne 0 ]; then
if ! fleet_api "package_policies/upgrade/dryrun" -H "Content-Type: application/json" -H 'kbn-xsrf: true' -XPOST -d "{\"packagePolicyIds\":[\"$INTEGRATION_ID\"]}"; then
echo "Error: Failed to complete dry run for '$INTEGRATION_ID'."
exit 1
return 1
fi
}
@@ -180,25 +209,18 @@ elastic_fleet_policy_create() {
NAME=$1
DESC=$2
FLEETSERVER=$3
TIMEOUT=$4
TIMEOUT=$4
JSON_STRING=$( jq -n \
--arg NAME "$NAME" \
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
)
--arg NAME "$NAME" \
--arg DESC "$DESC" \
--arg TIMEOUT $TIMEOUT \
--arg FLEETSERVER "$FLEETSERVER" \
'{"name": $NAME,"id":$NAME,"description":$DESC,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":$TIMEOUT,"has_fleet_server":$FLEETSERVER}'
)
# Create Fleet Policy
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_policies" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "agent_policies" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
return 1
fi
}
elastic_fleet_policy_update() {
POLICYID=$1
JSON_STRING=$2
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/agent_policies/$POLICYID" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
}

View File
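The refactor above funnels every Kibana Fleet call through the single `fleet_api` helper, which adds `--retry 3 --retry-delay 10 --fail` so each caller can simply branch on success or failure. The control flow it standardizes — retry a flaky call a fixed number of times, then surface the failure to the caller — can be sketched generically (this is an illustration of the pattern, not the script itself):

```python
import time

def with_retries(call, attempts=3, delay=0):
    # Mimics curl's --retry: re-run the call until it succeeds or the
    # attempts are exhausted; return (ok, result) for the caller to check.
    last_exc = None
    for _ in range(attempts):
        try:
            return True, call()
        except Exception as exc:          # curl retries on transient errors
            last_exc = exc
            time.sleep(delay)
    return False, last_exc

# A stub endpoint that fails twice before answering, to exercise the retries.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return {"item": {"version": "8.18.1"}}

ok, result = with_retries(flaky)
print(ok, result)
```

In the script, the equivalent of `ok` is `fleet_api`'s exit status, which each wrapper function now propagates with `return 1` instead of calling `exit` directly.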

@@ -8,6 +8,7 @@
. /usr/sbin/so-elastic-fleet-common
ERROR=false
# Manage Elastic Defend Integration for Initial Endpoints Policy
for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/elastic-defend/*.json
do
@@ -15,9 +16,20 @@ do
elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Upgrading integration policy\n"
elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
echo -e "\nFailed to upgrade integration policy for ${INTEGRATION##*/}"
ERROR=true
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
ERROR=true
continue
fi
fi
done
if [[ "$ERROR" == "true" ]]; then
exit 1
fi

View File

@@ -25,5 +25,9 @@ for POLICYNAME in $POLICY; do
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Now update the integration policy using the modified JSON
elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"; then
# exit 1 on failure to update fleet integration policies, let salt handle retries
echo "Failed to update $POLICYNAME.."
exit 1
fi
done

View File

@@ -13,11 +13,10 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
/usr/sbin/so-elastic-fleet-package-upgrade
# Second, update Fleet Server policies
/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
/usr/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
# Third, configure Elastic Defend Integration separately
/usr/sbin/so-elastic-fleet-integration-policy-elastic-defend
# Initial Endpoints
for INTEGRATION in /opt/so/conf/elastic-fleet/integrations/endpoints-initial/*.json
do
@@ -25,10 +24,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "endpoints-initial" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
done
@@ -39,10 +46,18 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "so-grid-nodes_general" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
done
if [[ "$RETURN_CODE" != "1" ]]; then
@@ -56,11 +71,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "so-grid-nodes_heavy" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
if [ "$NAME" != "elasticsearch-logs" ]; then
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
fi
done
@@ -77,11 +100,19 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
elastic_fleet_integration_check "$FLEET_POLICY" "$INTEGRATION"
if [ -n "$INTEGRATION_ID" ]; then
printf "\n\nIntegration $NAME exists - Updating integration\n"
elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"
if ! elastic_fleet_integration_update "$INTEGRATION_ID" "@$INTEGRATION"; then
echo -e "\nFailed to update integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
else
printf "\n\nIntegration does not exist - Creating integration\n"
if [ "$NAME" != "elasticsearch-logs" ]; then
elastic_fleet_integration_create "@$INTEGRATION"
if ! elastic_fleet_integration_create "@$INTEGRATION"; then
echo -e "\nFailed to create integration for ${INTEGRATION##*/}"
RETURN_CODE=1
continue
fi
fi
fi
fi

View File

@@ -1,62 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-elastic-fleet-common
curl_output=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -c - -X GET http://localhost:5601/)
if [ $? -ne 0 ]; then
echo "Error: Failed to connect to Kibana."
exit 1
fi
IFS=$'\n'
agent_policies=$(elastic_fleet_agent_policy_ids)
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve agent policies."
exit 1
fi
for AGENT_POLICY in $agent_policies; do
integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY")
for INTEGRATION in $integrations; do
if ! [[ "$INTEGRATION" == "elastic-defend-endpoints" ]] && ! [[ "$INTEGRATION" == "fleet_server-"* ]]; then
# Get package name so we know what package to look for when checking the current and latest available version
PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION")
# Get currently installed version of package
PACKAGE_VERSION=$(elastic_fleet_integration_policy_package_version "$AGENT_POLICY" "$INTEGRATION")
# Get latest available version of package
AVAILABLE_VERSION=$(elastic_fleet_package_latest_version_check "$PACKAGE_NAME")
# Get integration ID
INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION")
if [[ "$PACKAGE_VERSION" != "$AVAILABLE_VERSION" ]]; then
# Dry run of the upgrade
echo "Current $PACKAGE_NAME package version ($PACKAGE_VERSION) is not the same as the latest available package ($AVAILABLE_VERSION)..."
echo "Upgrading $INTEGRATION..."
echo "Starting dry run..."
DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID")
DRYRUN_ERRORS=$(echo "$DRYRUN_OUTPUT" | jq .[].hasErrors)
# If no errors with dry run, proceed with actual upgrade
if [[ "$DRYRUN_ERRORS" == "false" ]]; then
echo "No errors detected. Proceeding with upgrade..."
elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"
if [ $? -ne 0 ]; then
echo "Error: Upgrade failed for integration ID '$INTEGRATION_ID'."
exit 1
fi
else
echo "Errors detected during dry run. Stopping upgrade..."
exit 1
fi
fi
fi
done
done
echo

View File

@@ -83,5 +83,10 @@ docker run \
{{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elastic-agent-builder:{{ GLOBALS.so_version }} wixl -o so-elastic-agent_windows_amd64_msi --arch x64 /workspace/so-elastic-agent.wxs
printf "\n### MSI Generated...\n"
printf "\n### Cleaning up temp files in /nsm/elastic-agent-workspace\n"
printf "\n### Cleaning up temp files \n"
rm -rf /nsm/elastic-agent-workspace
rm -rf /opt/so/saltstack/local/salt/elasticfleet/files/so_agent-installers/so-elastic-agent_windows_amd64.exe
printf "\n### Copying so_agent-installers to /nsm/elastic-fleet/ for nginx.\n"
\cp -vr /opt/so/saltstack/local/salt/elasticfleet/files/so_agent-installers/ /nsm/elastic-fleet/
chmod 644 /nsm/elastic-fleet/so_agent-installers/*

View File

@@ -14,7 +14,7 @@ if ! is_manager_node; then
fi
# Get current list of Grid Node Agents that need to be upgraded
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=policy_id%20%3A%20so-grid-nodes_%2A&showInactive=false&showUpgradeable=true&getStatusSummary=true")
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%20:%20%22{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%22%20and%20policy_id%20:%20%22so-grid-nodes_general%22&showInactive=false&getStatusSummary=true")
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON")

View File

@@ -0,0 +1,95 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{%- set SUPPORTED_PACKAGES = salt['pillar.get']('elasticfleet:packages', default=ELASTICFLEETDEFAULTS.elasticfleet.packages, merge=True) %}
{%- set AUTO_UPGRADE_INTEGRATIONS = salt['pillar.get']('elasticfleet:config:auto_upgrade_integrations', default=false) %}
. /usr/sbin/so-elastic-fleet-common
curl_output=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -c - -X GET http://localhost:5601/)
if [ $? -ne 0 ]; then
echo "Error: Failed to connect to Kibana."
exit 1
fi
IFS=$'\n'
agent_policies=$(elastic_fleet_agent_policy_ids)
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve agent policies."
exit 1
fi
default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.last %} {% endif %}{% endfor %})
ERROR=false
for AGENT_POLICY in $agent_policies; do
if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
# this script upgrades default integration packages; exit 1 and let Salt handle retrying
exit 1
fi
for INTEGRATION in $integrations; do
if ! [[ "$INTEGRATION" == "elastic-defend-endpoints" ]] && ! [[ "$INTEGRATION" == "fleet_server-"* ]]; then
# Get package name so we know what package to look for when checking the current and latest available version
if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
exit 1
fi
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
if [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
{%- endif %}
# Get currently installed version of package
attempt=0
max_attempts=3
while [ $attempt -lt $max_attempts ]; do
if PACKAGE_VERSION=$(elastic_fleet_integration_policy_package_version "$AGENT_POLICY" "$INTEGRATION") && AVAILABLE_VERSION=$(elastic_fleet_package_latest_version_check "$PACKAGE_NAME"); then
break
fi
attempt=$((attempt + 1))
done
if [ $attempt -eq $max_attempts ]; then
echo "Error: Failed getting $PACKAGE_VERSION or $AVAILABLE_VERSION"
exit 1
fi
# Get integration ID
if ! INTEGRATION_ID=$(elastic_fleet_integration_id "$AGENT_POLICY" "$INTEGRATION"); then
exit 1
fi
if [[ "$PACKAGE_VERSION" != "$AVAILABLE_VERSION" ]]; then
# Dry run of the upgrade
echo ""
echo "Current $PACKAGE_NAME package version ($PACKAGE_VERSION) is not the same as the latest available package ($AVAILABLE_VERSION)..."
echo "Upgrading $INTEGRATION..."
echo "Starting dry run..."
if ! DRYRUN_OUTPUT=$(elastic_fleet_integration_policy_dryrun_upgrade "$INTEGRATION_ID"); then
exit 1
fi
DRYRUN_ERRORS=$(echo "$DRYRUN_OUTPUT" | jq .[].hasErrors)
# If no errors with dry run, proceed with actual upgrade
if [[ "$DRYRUN_ERRORS" == "false" ]]; then
echo "No errors detected. Proceeding with upgrade..."
if ! elastic_fleet_integration_policy_upgrade "$INTEGRATION_ID"; then
echo "Error: Upgrade failed for $PACKAGE_NAME with integration ID '$INTEGRATION_ID'."
ERROR=true
continue
fi
else
echo "Errors detected during dry run for $PACKAGE_NAME policy upgrade..."
ERROR=true
continue
fi
fi
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
fi
{%- endif %}
fi
done
done
if [[ "$ERROR" == "true" ]]; then
exit 1
fi
echo
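The bounded retry around the version lookups above can be isolated into a small sketch. `flaky_lookup` is a hypothetical helper that succeeds only on its third call; it reports through a global rather than command substitution so the call counter survives between attempts:

```shell
#!/bin/bash
calls=0
flaky_lookup() {
  calls=$((calls + 1))
  if [ "$calls" -ge 3 ]; then
    VERSION="1.2.3"   # hypothetical version string a real lookup would return
    return 0
  fi
  return 1
}

attempt=0
max_attempts=3
while [ $attempt -lt $max_attempts ]; do
  if flaky_lookup; then
    break              # success: stop retrying
  fi
  attempt=$((attempt + 1))
done
if [ $attempt -eq $max_attempts ]; then
  echo "Error: lookup failed after $max_attempts attempts"
fi
```

Because `attempt` is only incremented on failure, `attempt -eq max_attempts` after the loop is true exactly when every try failed.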

View File

@@ -3,7 +3,10 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
# this file except in compliance with the Elastic License 2.0.
{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% set SUB = salt['pillar.get']('elasticfleet:config:subscription_integrations', default=false) %}
{% set AUTO_UPGRADE_INTEGRATIONS = salt['pillar.get']('elasticfleet:config:auto_upgrade_integrations', default=false) %}
{%- set SUPPORTED_PACKAGES = salt['pillar.get']('elasticfleet:packages', default=ELASTICFLEETDEFAULTS.elasticfleet.packages, merge=True) %}
. /usr/sbin/so-common
. /usr/sbin/so-elastic-fleet-common
@@ -16,6 +19,7 @@ BULK_INSTALL_PACKAGE_LIST=/tmp/esfleet_bulk_install.json
BULK_INSTALL_PACKAGE_TMP=/tmp/esfleet_bulk_install_tmp.json
BULK_INSTALL_OUTPUT=/opt/so/state/esfleet_bulk_install_results.json
PACKAGE_COMPONENTS=/opt/so/state/esfleet_package_components.json
COMPONENT_TEMPLATES=/opt/so/state/esfleet_component_templates.json
PENDING_UPDATE=false
@@ -46,6 +50,36 @@ compare_versions() {
fi
}
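The `compare_versions` helper is referenced in this file but its body falls outside the hunk. A common way to implement that kind of check, assuming GNU `sort -V` is available, is shown in this hypothetical sketch (the script's actual helper may differ):

```shell
#!/bin/bash
# Prints "greater" if $1 > $2, "equal" if identical, "lesser" otherwise,
# using version-aware ordering (so 1.10.0 ranks above 1.9.2).
compare_versions() {
  if [ "$1" = "$2" ]; then
    echo "equal"
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]; then
    echo "greater"
  else
    echo "lesser"
  fi
}

compare_versions "1.10.0" "1.9.2"   # version-aware: 1.10.0 > 1.9.2
```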
IFS=$'\n'
agent_policies=$(elastic_fleet_agent_policy_ids)
if [ $? -ne 0 ]; then
echo "Error: Failed to retrieve agent policies."
exit 1
fi
default_packages=({% for pkg in SUPPORTED_PACKAGES %}"{{ pkg }}"{% if not loop.last %} {% endif %}{% endfor %})
in_use_integrations=()
for AGENT_POLICY in $agent_policies; do
if ! integrations=$(elastic_fleet_integration_policy_names "$AGENT_POLICY"); then
# skip the agent policy if we can't get the required info and let Salt retry; integrations loaded by this script are non-default integrations
echo "Skipping $AGENT_POLICY..."
continue
fi
for INTEGRATION in $integrations; do
if ! PACKAGE_NAME=$(elastic_fleet_integration_policy_package_name "$AGENT_POLICY" "$INTEGRATION"); then
echo "Not adding $INTEGRATION, couldn't get package name"
continue
fi
# non-default integrations that are in-use in any policy
if ! [[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]; then
in_use_integrations+=("$PACKAGE_NAME")
fi
done
done
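The space-padded membership test used above (`[[ " ${default_packages[@]} " =~ " $PACKAGE_NAME " ]]`) joins the array into one space-delimited string, so it is only safe for element names without spaces. A sketch of the same idea as a helper, using a glob match to avoid surprises from regex metacharacters in package names (the package names here are illustrative):

```shell
#!/bin/bash
default_packages=(apache nginx system)

# Returns success if $1 appears as a whole, space-delimited element of the
# remaining arguments. Safe only for element names with no embedded spaces.
contains() {
  local needle="$1"; shift
  local haystack=" $* "
  [[ "$haystack" == *" $needle "* ]]   # glob match, not regex
}

contains nginx "${default_packages[@]}" && echo "nginx is a default package"
contains postgres "${default_packages[@]}" || echo "postgres is not"
```

Note the padding spaces on both the haystack and the needle: without them, `ngi` would match inside `nginx`.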
if [[ -f $STATE_FILE_SUCCESS ]]; then
if retry 3 1 "curl -s -K /opt/so/conf/elasticsearch/curl.config --output /dev/null --silent --head --fail localhost:5601/api/fleet/epm/packages"; then
# Package_list contains all integrations beta / non-beta.
@@ -77,10 +111,19 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
else
results=$(compare_versions "$latest_version" "$installed_version")
if [ $results == "greater" ]; then
echo "$package_name is at version $installed_version latest version is $latest_version... Adding to next update."
jq --argjson package "$bulk_package" '.packages += [$package]' $BULK_INSTALL_PACKAGE_LIST > $BULK_INSTALL_PACKAGE_TMP && mv $BULK_INSTALL_PACKAGE_TMP $BULK_INSTALL_PACKAGE_LIST
{#- When auto_upgrade_integrations is false, skip upgrading in_use_integrations #}
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
if ! [[ " ${in_use_integrations[@]} " =~ " $package_name " ]]; then
{%- endif %}
echo "$package_name is at version $installed_version latest version is $latest_version... Adding to next update."
jq --argjson package "$bulk_package" '.packages += [$package]' $BULK_INSTALL_PACKAGE_LIST > $BULK_INSTALL_PACKAGE_TMP && mv $BULK_INSTALL_PACKAGE_TMP $BULK_INSTALL_PACKAGE_LIST
PENDING_UPDATE=true
PENDING_UPDATE=true
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
else
echo "skipping available upgrade for in use integration - $package_name."
fi
{%- endif %}
fi
fi
fi
@@ -92,9 +135,18 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
else
results=$(compare_versions "$latest_version" "$installed_version")
if [ $results == "greater" ]; then
echo "$package_name is at version $installed_version latest version is $latest_version... Adding to next update."
jq --argjson package "$bulk_package" '.packages += [$package]' $BULK_INSTALL_PACKAGE_LIST > $BULK_INSTALL_PACKAGE_TMP && mv $BULK_INSTALL_PACKAGE_TMP $BULK_INSTALL_PACKAGE_LIST
PENDING_UPDATE=true
{#- When auto_upgrade_integrations is false, skip upgrading in_use_integrations #}
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
if ! [[ " ${in_use_integrations[@]} " =~ " $package_name " ]]; then
{%- endif %}
echo "$package_name is at version $installed_version latest version is $latest_version... Adding to next update."
jq --argjson package "$bulk_package" '.packages += [$package]' $BULK_INSTALL_PACKAGE_LIST > $BULK_INSTALL_PACKAGE_TMP && mv $BULK_INSTALL_PACKAGE_TMP $BULK_INSTALL_PACKAGE_LIST
PENDING_UPDATE=true
{%- if not AUTO_UPGRADE_INTEGRATIONS %}
else
echo "skipping available upgrade for in use integration - $package_name."
fi
{%- endif %}
fi
fi
{% endif %}
@@ -104,14 +156,38 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
done <<< "$(jq -c '.packages[]' "$INSTALLED_PACKAGE_LIST")"
if [ "$PENDING_UPDATE" = true ]; then
# Run bulk install of packages
elastic_fleet_bulk_package_install $BULK_INSTALL_PACKAGE_LIST > $BULK_INSTALL_OUTPUT
# Run chunked install of packages
echo "" > $BULK_INSTALL_OUTPUT
pkg_group=1
pkg_filename="${BULK_INSTALL_PACKAGE_LIST%.json}"
jq -c '.packages | _nwise(25)' $BULK_INSTALL_PACKAGE_LIST | while read -r line; do
echo "$line" | jq '{ "packages": . }' > "${pkg_filename}_${pkg_group}.json"
pkg_group=$((pkg_group + 1))
done
for file in "${pkg_filename}_"*.json; do
[ -e "$file" ] || continue
if ! elastic_fleet_bulk_package_install $file >> $BULK_INSTALL_OUTPUT; then
# integrations loaded by this script are non-essential and shouldn't cause an exit; skip them for now and let the next highstate run retry
echo "Failed to complete a chunk of bulk package installs -- $file"
continue
fi
done
# cleanup any temp files for chunked package install
rm -f ${pkg_filename}_*.json $BULK_INSTALL_PACKAGE_LIST
else
echo "Elastic integrations don't appear to need installation/updating..."
fi
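The chunked install above leans on jq's `_nwise(25)` to split the package array into groups before posting each group separately. A self-contained sketch with a chunk size of 2 and a tiny hypothetical package list, so the grouping is visible:

```shell
#!/bin/bash
workdir=$(mktemp -d)
cat > "$workdir/bulk_install.json" <<'EOF'
{"packages":[{"name":"a"},{"name":"b"},{"name":"c"},{"name":"d"},{"name":"e"}]}
EOF

pkg_group=1
# _nwise(2) emits the array in consecutive chunks of up to 2 elements
jq -c '.packages | _nwise(2)' "$workdir/bulk_install.json" | while read -r line; do
  # Re-wrap each chunk in the {"packages": [...]} shape the bulk API expects
  echo "$line" | jq '{ "packages": . }' > "$workdir/bulk_install_${pkg_group}.json"
  pkg_group=$((pkg_group + 1))
done

ls "$workdir"/bulk_install_*.json   # three chunk files: 2 + 2 + 1 packages
```

The final short chunk is emitted as-is, so no packages are dropped when the count isn't a multiple of the chunk size.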
# Write out file for generating index/component/ilm templates
latest_installed_package_list=$(elastic_fleet_installed_packages)
echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
if latest_installed_package_list=$(elastic_fleet_installed_packages); then
echo $latest_installed_package_list | jq '[.items[] | {name: .name, es_index_patterns: .dataStreams}]' > $PACKAGE_COMPONENTS
fi
if retry 3 1 "so-elasticsearch-query / --fail --output /dev/null"; then
# Refresh installed component template list
latest_component_templates_list=$(so-elasticsearch-query _component_template | jq '.component_templates[] | .name' | jq -s '.')
echo $latest_component_templates_list > $COMPONENT_TEMPLATES
fi
else
# This is the installation of add-on integrations and upgrade of existing integrations. Exiting without error, next highstate will attempt to re-run.

View File

@@ -15,22 +15,49 @@ if ! is_manager_node; then
fi
function update_logstash_outputs() {
# Generate updated JSON payload
JSON_STRING=$(jq -n --arg UPDATEDLIST $NEW_LIST_JSON '{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":""}')
if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$logstash_policy" | jq -r '.item.ssl')
if SECRETS=$(echo "$logstash_policy" | jq -er '.item.secrets' 2>/dev/null); then
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SECRETS "$SECRETS" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG}')
fi
fi
# Update Logstash Outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
}
function update_kafka_outputs() {
# Make sure SSL configuration is included in policy updates for Kafka output. SSL is configured in so-elastic-fleet-setup
SSL_CONFIG=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" | jq -r '.item.ssl')
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
if kafka_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$kafka_policy" | jq -r '.item.ssl')
if SECRETS=$(echo "$kafka_policy" | jq -er '.item.secrets' 2>/dev/null); then
# Update policy when fleet has secrets enabled
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
--argjson SECRETS "$SECRETS" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
# Update policy when fleet has secrets disabled or policy hasn't been force updated
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
fi
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
else
printf "Failed to get current Kafka output policy..."
exit 1
fi
}
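The payload construction in both functions hinges on jq's two injection flags: `--arg` passes a value in as a JSON string, while `--argjson` splices in pre-parsed JSON (objects, arrays, booleans). A sketch with hypothetical values standing in for the live `ssl`/`hosts` data:

```shell
#!/bin/bash
SSL_CONFIG='{"certificate":"---CERT---","verification_mode":"full"}'
NEW_LIST_JSON='["10.0.0.5:5055","manager:5055"]'

JSON_STRING=$(jq -n \
  --arg NAME "grid-logstash" \
  --argjson HOSTS "$NEW_LIST_JSON" \
  --argjson SSL "$SSL_CONFIG" \
  '{"name": $NAME, "type": "logstash", "hosts": $HOSTS, "ssl": $SSL}')

echo "$JSON_STRING" | jq -r '.hosts[0]'   # first host from the spliced-in array
```

With `--argjson`, `$HOSTS` lands in the output as a real array; passing the same value via `--arg` would embed it as one literal string, which matters when the receiving API expects `hosts` to be a JSON array.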
{% if GLOBALS.pipeline == "KAFKA" %}

View File

@@ -10,8 +10,16 @@
{%- for PACKAGE in SUPPORTED_PACKAGES %}
echo "Setting up {{ PACKAGE }} package..."
VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}")
elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
if VERSION=$(elastic_fleet_package_version_check "{{ PACKAGE }}"); then
if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
# packages loaded by this script should never fail to install and are REQUIRED before an installation of SO can be considered successful
echo -e "\nERROR: Failed to install default integration package -- {{ PACKAGE }} $VERSION"
exit 1
fi
else
echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
exit 1
fi
echo
{%- endfor %}
echo

View File

@@ -10,8 +10,15 @@
{%- for PACKAGE in SUPPORTED_PACKAGES %}
echo "Upgrading {{ PACKAGE }} package..."
VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}")
elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"
if VERSION=$(elastic_fleet_package_latest_version_check "{{ PACKAGE }}"); then
if ! elastic_fleet_package_install "{{ PACKAGE }}" "$VERSION"; then
# exit 1 on failure to upgrade a default package, allow salt to handle retries
echo -e "\nERROR: Failed to upgrade $PACKAGE to version: $VERSION"
exit 1
fi
else
echo -e "\nERROR: Failed to get version information for integration $PACKAGE"
fi
echo
{%- endfor %}
echo

View File

@@ -23,18 +23,17 @@ if [[ "$RETURN_CODE" != "0" ]]; then
exit 1
fi
ALIASES=".fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest"
for ALIAS in ${ALIASES}
do
ALIASES=(.fleet-servers .fleet-policies-leader .fleet-policies .fleet-agents .fleet-artifacts .fleet-enrollment-api-keys .kibana_ingest)
for ALIAS in "${ALIASES[@]}"; do
# Get all concrete indices from alias
INDXS=$(curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" | jq -r '.aliases[].indices[]')
# Delete all resolved indices
for INDX in ${INDXS}
do
if INDXS_RAW=$(curl -sK /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/_resolve/index/${ALIAS}" --fail 2>/dev/null); then
INDXS=$(echo "$INDXS_RAW" | jq -r '.aliases[].indices[]')
# Delete all resolved indices
for INDX in ${INDXS}; do
status "Deleting $INDX"
curl -K /opt/so/conf/kibana/curl.config -s -k -L -H "Content-Type: application/json" "https://localhost:9200/${INDX}" -XDELETE
done
done
fi
done
# Restarting Kibana...
@@ -51,22 +50,61 @@ if [[ "$RETURN_CODE" != "0" ]]; then
fi
printf "\n### Create ES Token ###\n"
ESTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/service_tokens" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq -r .value)
if ESTOKEN_RAW=$(fleet_api "service_tokens" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ESTOKEN=$(echo "$ESTOKEN_RAW" | jq -r .value)
else
echo -e "\nFailed to create ES token..."
exit 1
fi
### Create Outputs, Fleet Policy and Fleet URLs ###
# Create the Manager Elasticsearch Output first and set it as the default output
printf "\nAdd Manager Elasticsearch Output...\n"
ESCACRT=$(openssl x509 -in $INTCA)
JSON_STRING=$( jq -n \
--arg ESCACRT "$ESCACRT" \
'{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl":{"certificate_authorities": [$ESCACRT]}}' )
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
ESCACRT=$(openssl x509 -in "$INTCA" -outform DER | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
JSON_STRING=$(jq -n \
--arg ESCACRT "$ESCACRT" \
'{"name":"so-manager_elasticsearch","id":"so-manager_elasticsearch","type":"elasticsearch","hosts":["https://{{ GLOBALS.manager_ip }}:9200","https://{{ GLOBALS.manager }}:9200"],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ca_trusted_fingerprint": $ESCACRT}')
if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create so-elasticsearch_manager policy..."
exit 1
fi
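The change above replaces the embedded PEM with `ca_trusted_fingerprint`, so Fleet now pins the CA by the uppercase hex SHA-256 of the certificate's DER encoding. A self-contained sketch of that computation, using a throwaway self-signed certificate in place of `$INTCA`:

```shell
#!/bin/bash
# Generate a throwaway CA cert purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# SHA-256 over the DER encoding, rendered as uppercase hex
FINGERPRINT=$(openssl x509 -in /tmp/demo-ca.crt -outform DER \
  | sha256sum | cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')

echo "$FINGERPRINT"   # 64 uppercase hex characters
```

This matches what `openssl x509 -noout -fingerprint -sha256` reports once the colons are stripped, which is a handy cross-check when debugging agent enrollment.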
printf "\n\n"
# so-manager_elasticsearch should exist and be disabled. Now update it before verifying it's the only default policy
MANAGER_OUTPUT_ENABLED=$(echo "$JSON_STRING" | jq 'del(.id) | .is_default = true | .is_default_monitoring = true')
if ! curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_elasticsearch" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$MANAGER_OUTPUT_ENABLED"; then
echo -e "\n failed to update so-manager_elasticsearch"
exit 1
fi
# At this point there should only be two policies. fleet-default-output & so-manager_elasticsearch
status "Verifying so-manager_elasticsearch policy is configured as the current default"
# Check fleet-default-output instead of so-manager_elasticsearch, because a bad state can exist where both fleet-default-output and so-manager_elasticsearch are set as the active default output for logs/metrics, resulting in logs not ingesting on import/eval nodes
if DEFAULTPOLICY=$(fleet_api "outputs/fleet-default-output"); then
fleet_default=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default')
fleet_default_monitoring=$(echo "$DEFAULTPOLICY" | jq -er '.item.is_default_monitoring')
# Check that fleet-default-output isn't configured as a default for anything ( both variables return false )
if [[ $fleet_default == "false" ]] && [[ $fleet_default_monitoring == "false" ]]; then
echo -e "\nso-manager_elasticsearch is configured as the current default policy..."
else
echo -e "\nVerification of so-manager_elasticsearch policy failed... The default 'fleet-default-output' output is still active..."
exit 1
fi
else
# fleet-output-policy is created automatically by fleet when started. Should always exist on any installation type
echo -e "\nDefault fleet-default-output policy doesn't exist...\n"
exit 1
fi
# Create the Manager Fleet Server Host Agent Policy
# This has to be done while the Elasticsearch Output is set to the default Output
printf "Create Manager Fleet Server Policy...\n"
elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"
if ! elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"; then
echo -e "\n Failed to create Manager fleet server policy..."
exit 1
fi
# Modify the default integration policy to update the policy_id with the correct naming
UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname }}" --arg name "fleet_server-{{ GLOBALS.hostname }}" '
@@ -74,7 +112,10 @@ UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Add the Fleet Server Integration to the new Fleet Policy
elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"
if ! elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"; then
echo -e "\nFailed to create Fleet server integration for Manager.."
exit 1
fi
# Now we can create the Logstash Output and set it to be the default Output
printf "\n\nCreate Logstash Output Config if node is not an Import or Eval install\n"
@@ -86,9 +127,12 @@ JSON_STRING=$( jq -n \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]},"proxy_id":null}'
'{"name":"grid-logstash","is_default":true,"is_default_monitoring":true,"id":"so-manager_logstash","type":"logstash","hosts":["{{ GLOBALS.manager_ip }}:5055", "{{ GLOBALS.manager }}:5055"],"config_yaml":"","ssl":{"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets":{"ssl":{"key": $LOGSTASHKEY }},"proxy_id":null}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "outputs" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to create logstash fleet output"
exit 1
fi
printf "\n\n"
{%- endif %}
@@ -106,7 +150,10 @@ else
fi
## This array replaces whatever URLs are currently configured
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/fleet_server_hosts" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "fleet_server_hosts" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to add manager fleet URL"
exit 1
fi
printf "\n\n"
### Create Policies & Associated Integration Configuration ###
@@ -117,13 +164,22 @@ printf "\n\n"
/usr/sbin/so-elasticsearch-templates-load
# Initial Endpoints Policy
elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"
if ! elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"; then
echo -e "\nFailed to create endpoints-initial policy..."
exit 1
fi
# Grid Nodes - General Policy
elastic_fleet_policy_create "so-grid-nodes_general" "SO Grid Nodes - General Purpose" "false" "1209600"
if ! elastic_fleet_policy_create "so-grid-nodes_general" "SO Grid Nodes - General Purpose" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_general policy..."
exit 1
fi
# Grid Nodes - Heavy Node Policy
elastic_fleet_policy_create "so-grid-nodes_heavy" "SO Grid Nodes - Heavy Node" "false" "1209600"
if ! elastic_fleet_policy_create "so-grid-nodes_heavy" "SO Grid Nodes - Heavy Node" "false" "1209600"; then
echo -e "\nFailed to create so-grid-nodes_heavy policy..."
exit 1
fi
# Load Integrations for default policies
so-elastic-fleet-integration-policy-load
@@ -135,14 +191,34 @@ JSON_STRING=$( jq -n \
'{"name":$NAME,"host":$URL,"is_default":true}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/agent_download_sources" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
if ! fleet_api "agent_download_sources" -XPOST -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"; then
echo -e "\nFailed to update Elastic Agent artifact URL"
exit 1
fi
### Finalization ###
# Query for Enrollment Tokens for default policies
ENDPOINTSENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
GRIDNODESENROLLMENTOKENGENERAL=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
GRIDNODESENROLLMENTOKENHEAVY=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_heavy")) | .api_key')
if ENDPOINTSENROLLMENTOKEN_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
ENDPOINTSENROLLMENTOKEN=$(echo "$ENDPOINTSENROLLMENTOKEN_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
else
echo -e "\nFailed to query for Endpoints enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENGENERAL_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENGENERAL=$(echo "$GRIDNODESENROLLMENTOKENGENERAL_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - General enrollment token"
exit 1
fi
if GRIDNODESENROLLMENTOKENHEAVY_RAW=$(fleet_api "enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json'); then
GRIDNODESENROLLMENTOKENHEAVY=$(echo "$GRIDNODESENROLLMENTOKENHEAVY_RAW" | jq .list | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_heavy")) | .api_key')
else
echo -e "\nFailed to query for Grid nodes - Heavy enrollment token"
exit 1
fi
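The three enrollment-token lookups above all filter the same API response with jq's `select`/`contains`. A sketch against a canned response instead of the live `/api/fleet/enrollment_api_keys` endpoint (the keys and policy ids are illustrative):

```shell
#!/bin/bash
RESPONSE='{"list":[
  {"policy_id":"endpoints-initial","api_key":"key-endpoints"},
  {"policy_id":"so-grid-nodes_general","api_key":"key-general"},
  {"policy_id":"so-grid-nodes_heavy","api_key":"key-heavy"}]}'

# Keep only the entry whose policy_id contains the target substring,
# then emit its api_key as a raw string
TOKEN=$(echo "$RESPONSE" | jq .list \
  | jq -r -c '.[] | select(.policy_id | contains("so-grid-nodes_general")) | .api_key')
echo "$TOKEN"
```

Because `contains` does substring matching, the filter string must be specific enough not to match sibling policies (here, `so-grid-nodes_general` does not appear inside `so-grid-nodes_heavy`, so the match is unambiguous).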
# Store needed data in minion pillar
pillar_file=/opt/so/saltstack/local/pillar/minions/{{ GLOBALS.minion_id }}.sls

View File

@@ -5,46 +5,78 @@
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch'] %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch', 'so-managerhype'] %}
. /usr/sbin/so-common
force=false
while [[ $# -gt 0 ]]; do
case $1 in
-f|--force)
force=true
shift
;;
*)
echo "Unknown option $1"
echo "Usage: $0 [-f|--force]"
exit 1
;;
esac
done
# Check to make sure that Kibana API is up & ready
RETURN_CODE=0
wait_for_web_response "http://localhost:5601/api/fleet/settings" "fleet" 300 "curl -K /opt/so/conf/elasticsearch/curl.config"
RETURN_CODE=$?
if [[ "$RETURN_CODE" != "0" ]]; then
printf "Kibana API not accessible, can't setup Elastic Fleet output policy for Kafka..."
exit 1
echo -e "\nKibana API not accessible, can't setup Elastic Fleet output policy for Kafka...\n"
exit 1
fi
output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
if ! echo "$output" | grep -q "so-manager_kafka"; then
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
if ! kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null); then
# Create a new output policy for Kafka. Default is disabled 'is_default: false & is_default_monitoring: false'
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
exit 1
else
echo -e "\nSuccessfully setup Elastic Fleet output policy for Kafka...\n"
exit 0
fi
elif kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null) && [[ "$force" == "true" ]]; then
# Force an update to the Kafka output policy, preserving its current enabled/disabled state.
ENABLED_DISABLED=$(echo "$kafka_output" | jq -e .item.is_default)
HOSTS=$(echo "$kafka_output" | jq -r '.item.hosts')
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--argjson ENABLED_DISABLED "$ENABLED_DISABLED" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
--argjson HOSTS "$HOSTS" \
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to force update to Elastic Fleet output policy for Kafka...\n"
exit 1
else
echo -e "\nForced update to Elastic Fleet output policy for Kafka...\n"
fi
else
echo -e "\nElastic Fleet output policy for Kafka already exists...\n"
fi
{% else %}
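The create/update/skip branching implemented above reduces to a small decision. A minimal sketch (the function name and arguments are illustrative; the real script drives this via curl calls against the Kibana Fleet API):

```python
# Minimal model of the branch logic above; the real script fetches the
# policy from Kibana with curl rather than receiving it as an argument.
def sync_kafka_output(existing_policy, force):
    """Return the action taken for the so-manager_kafka Fleet output."""
    if existing_policy is None:
        return "create"   # POST /api/fleet/outputs
    if force:
        return "update"   # PUT /api/fleet/outputs/so-manager_kafka
    return "exists"       # policy already present, leave it alone
```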

View File

@@ -15,7 +15,7 @@
elastic_auth_pillar:
file.managed:
- name: /opt/so/saltstack/local/pillar/elasticsearch/auth.sls
- mode: 600
- mode: 640
- reload_pillar: True
- contents: |
elasticsearch:

View File

@@ -28,7 +28,7 @@
{% endfor %}
{% endfor %}
{% if grains.id.split('_') | last in ['manager','managersearch','standalone'] %}
{% if grains.id.split('_') | last in ['manager','managerhype','managersearch','standalone'] %}
{% if ELASTICSEARCH_SEED_HOSTS | length > 1 %}
{% do ELASTICSEARCHDEFAULTS.elasticsearch.config.update({'discovery': {'seed_hosts': []}}) %}
{% for NODE in ELASTICSEARCH_SEED_HOSTS %}

View File

@@ -1,6 +1,6 @@
elasticsearch:
enabled: false
version: 8.17.3
version: 8.18.8
index_clean: true
config:
action:
@@ -162,6 +162,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -274,7 +275,87 @@ elasticsearch:
number_of_replicas: 0
auto_expand_replicas: 0-2
number_of_shards: 1
refresh_interval: 30s
refresh_interval: 1s
sort:
field: '@timestamp'
order: desc
policy:
phases:
hot:
actions: {}
min_age: 0ms
so-assistant-chat:
index_sorting: false
index_template:
composed_of:
- assistant-chat-mappings
- assistant-chat-settings
data_stream:
allow_custom_routing: false
hidden: false
ignore_missing_component_templates: []
index_patterns:
- so-assistant-chat*
priority: 501
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
lifecycle:
name: so-assistant-chat-logs
mapping:
total_fields:
limit: 1500
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 1s
sort:
field: '@timestamp'
order: desc
policy:
phases:
hot:
actions: {}
min_age: 0ms
so-assistant-session:
index_sorting: false
index_template:
composed_of:
- assistant-session-mappings
- assistant-session-settings
data_stream:
allow_custom_routing: false
hidden: false
ignore_missing_component_templates: []
index_patterns:
- so-assistant-session*
priority: 501
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
lifecycle:
name: so-assistant-session-logs
mapping:
total_fields:
limit: 1500
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 1s
sort:
field: '@timestamp'
order: desc
@@ -316,6 +397,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -427,6 +509,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -534,6 +617,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -563,6 +647,7 @@ elasticsearch:
- common-settings
- common-dynamic-mappings
- winlog-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -697,6 +782,7 @@ elasticsearch:
- client-mappings
- device-mappings
- network-mappings
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -768,6 +854,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -878,6 +965,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -998,6 +1086,7 @@ elasticsearch:
index_template:
composed_of:
- so-data-streams-mappings
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
- so-logs-mappings
@@ -1234,6 +1323,68 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-elastic-agent-monitor:
index_sorting: false
index_template:
composed_of:
- event-mappings
- so-elastic-agent-monitor
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
allow_custom_routing: false
hidden: false
index_patterns:
- logs-agentmonitor-*
priority: 501
template:
mappings:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
settings:
index:
lifecycle:
name: so-elastic-agent-monitor-logs
mapping:
total_fields:
limit: 5000
number_of_replicas: 0
sort:
field: '@timestamp'
order: desc
policy:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 60d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-elastic_agent_x_apm_server:
index_sorting: false
index_template:
@@ -1840,6 +1991,70 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-elasticsearch_x_server:
index_sorting: false
index_template:
composed_of:
- logs-elasticsearch.server@package
- logs-elasticsearch.server@custom
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
allow_custom_routing: false
hidden: false
ignore_missing_component_templates:
- logs-elasticsearch.server@custom
index_patterns:
- logs-elasticsearch.server-*
priority: 501
template:
mappings:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
settings:
index:
lifecycle:
name: so-logs-elasticsearch.server-logs
mapping:
total_fields:
limit: 5000
number_of_replicas: 0
sort:
field: '@timestamp'
order: desc
policy:
_meta:
managed: true
managed_by: security_onion
package:
name: elastic_agent
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 60d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-endpoint_x_actions:
index_sorting: false
index_template:
@@ -2832,6 +3047,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -3062,6 +3278,7 @@ elasticsearch:
- event-mappings
- logs-system.syslog@package
- logs-system.syslog@custom
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
- so-system-mappings
@@ -3421,6 +3638,7 @@ elasticsearch:
- dtc-http-mappings
- log-mappings
- logstash-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -3505,6 +3723,7 @@ elasticsearch:
composed_of:
- metrics-endpoint.metadata@package
- metrics-endpoint.metadata@custom
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -3551,6 +3770,7 @@ elasticsearch:
composed_of:
- metrics-endpoint.metrics@package
- metrics-endpoint.metrics@custom
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -3597,6 +3817,7 @@ elasticsearch:
composed_of:
- metrics-endpoint.policy@package
- metrics-endpoint.policy@custom
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -3645,6 +3866,7 @@ elasticsearch:
- metrics-fleet_server.agent_status@package
- metrics-fleet_server.agent_status@custom
- ecs@mappings
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -3668,6 +3890,7 @@ elasticsearch:
- metrics-fleet_server.agent_versions@package
- metrics-fleet_server.agent_versions@custom
- ecs@mappings
- so-fleet_integrations.ip_mappings-1
- so-fleet_globals-1
- so-fleet_agent_id_verification-1
data_stream:
@@ -3715,6 +3938,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -3827,6 +4051,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -3856,6 +4081,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -3939,6 +4165,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -3968,6 +4195,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -4009,7 +4237,7 @@ elasticsearch:
hot:
actions:
rollover:
max_age: 1d
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
@@ -4051,6 +4279,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -4080,6 +4309,7 @@ elasticsearch:
- vulnerability-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -4163,6 +4393,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -4276,6 +4507,7 @@ elasticsearch:
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
@@ -4307,6 +4539,7 @@ elasticsearch:
- zeek-mappings
- common-settings
- common-dynamic-mappings
- hash-mappings
data_stream: {}
ignore_missing_component_templates: []
index_patterns:
@@ -4479,6 +4712,14 @@ elasticsearch:
- data
- remote_cluster_client
- transform
so-managerhype:
config:
node:
roles:
- master
- data
- remote_cluster_client
- transform
so-managersearch:
config:
node:

View File

@@ -204,12 +204,17 @@ so-elasticsearch-roles-load:
- docker_container: so-elasticsearch
- file: elasticsearch_sbin_jinja
{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
{% if grains.role in ['so-managersearch', 'so-manager', 'so-managerhype'] %}
{% set ap = "absent" %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-heavynode'] %}
{% if ELASTICSEARCHMERGED.index_clean %}
{% set ap = "present" %}
{% else %}
{% set ap = "absent" %}
{% endif %}
{% endif %}
{% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
so-elasticsearch-indices-delete:
cron.{{ap}}:
- name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
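The role-based scheduling in the Jinja above amounts to a small decision table. A sketch (helper name is illustrative; role strings come from the state file):

```python
def indices_delete_cron(role, index_clean):
    """Mirror of the Jinja logic above: manager roles never schedule the
    index-delete cron; eval/standalone/heavynode honor index_clean."""
    if role in ("so-managersearch", "so-manager", "so-managerhype"):
        return "absent"
    if role in ("so-eval", "so-standalone", "so-heavynode"):
        return "present" if index_clean else "absent"
    return "absent"  # assumption: other roles leave the cron unmanaged
```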

View File

@@ -26,7 +26,7 @@
{
"geoip": {
"field": "destination.ip",
"target_field": "destination_geo",
"target_field": "destination.as",
"database_file": "GeoLite2-ASN.mmdb",
"ignore_missing": true,
"ignore_failure": true,
@@ -36,13 +36,17 @@
{
"geoip": {
"field": "source.ip",
"target_field": "source_geo",
"target_field": "source.as",
"database_file": "GeoLite2-ASN.mmdb",
"ignore_missing": true,
"ignore_failure": true,
"properties": ["ip", "asn", "organization_name", "network"]
}
},
{ "rename": { "field": "destination.as.organization_name", "target_field": "destination.as.organization.name", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "source.as.organization_name", "target_field": "source.as.organization.name", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "destination.as.asn", "target_field": "destination.as.number", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "source.as.asn", "target_field": "source.as.number", "ignore_failure": true, "ignore_missing": true } },
{ "set": { "if": "ctx.event?.severity == 1", "field": "event.severity_label", "value": "low", "override": true } },
{ "set": { "if": "ctx.event?.severity == 2", "field": "event.severity_label", "value": "medium", "override": true } },
{ "set": { "if": "ctx.event?.severity == 3", "field": "event.severity_label", "value": "high", "override": true } },
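The `rename` processors above move the GeoLite2-ASN fields into their ECS homes (`*.as.number`, `*.as.organization.name`). A sketch of the dotted-path rename semantics (a model for illustration, not the actual ingest-node implementation):

```python
def rename(doc, src, dst):
    """Model of the ingest 'rename' processor for dotted field paths,
    with ignore_missing behavior (no-op when the source is absent)."""
    def pop_path(d, path):
        *parts, last = path.split(".")
        for p in parts:
            d = d.get(p, {})
        return d.pop(last, None)
    val = pop_path(doc, src)
    if val is None:
        return doc  # source field missing: leave the document unchanged
    *parts, last = dst.split(".")
    d = doc
    for p in parts:
        d = d.setdefault(p, {})
    d[last] = val
    return doc
```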

View File

@@ -0,0 +1,22 @@
{
"processors": [
{
"convert": {
"field": "_ingest._value",
"type": "ip",
"target_field": "_ingest._temp_ip",
"ignore_failure": true
}
},
{
"append": {
"field": "temp._valid_ips",
"allow_duplicates": false,
"value": [
"{{{_ingest._temp_ip}}}"
],
"ignore_failure": true
}
}
]
}
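The two processors above (a `convert` to type `ip` followed by a de-duplicating `append`) amount to filtering a list of DNS answers down to valid, unique IP addresses. A sketch:

```python
import ipaddress

def valid_ips(answers):
    """Model of the pipeline above: keep only entries that parse as IP
    addresses, preserving order and dropping duplicates."""
    out = []
    for value in answers:
        try:
            ip = str(ipaddress.ip_address(value))
        except ValueError:
            continue  # non-IP answers (e.g. CNAME targets) are skipped
        if ip not in out:
            out.append(ip)
    return out
```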

View File

@@ -0,0 +1,36 @@
{
"processors": [
{
"set": {
"field": "event.dataset",
"value": "gridmetrics.agents",
"ignore_failure": true
}
},
{
"set": {
"field": "event.module",
"value": "gridmetrics",
"ignore_failure": true
}
},
{
"remove": {
"field": [
"host",
"elastic_agent",
"agent"
],
"ignore_missing": true,
"ignore_failure": true
}
},
{
"json": {
"field": "message",
"add_to_root": true,
"ignore_failure": true
}
}
]
}

View File

@@ -8,7 +8,7 @@
"processors": [
{ "set": { "ignore_failure": true, "field": "event.module", "value": "elastic_agent" } },
{ "split": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "separator": "\\.", "target_field": "module_temp" } },
{ "split": { "if": "ctx.data_stream?.dataset.contains('.')", "field":"data_stream.dataset", "separator":"\\.", "target_field":"datastream_dataset_temp", "ignore_missing":true } },
{ "split": { "if": "ctx.data_stream?.dataset != null && ctx.data_stream?.dataset.contains('.')", "field":"data_stream.dataset", "separator":"\\.", "target_field":"datastream_dataset_temp", "ignore_missing":true } },
{ "set": { "if": "ctx.module_temp != null", "override": true, "field": "event.module", "value": "{{module_temp.0}}" } },
{ "set": { "if": "ctx.datastream_dataset_temp != null && ctx.datastream_dataset_temp[0] == 'network_traffic'", "field":"event.module", "value":"{{ datastream_dataset_temp.0 }}", "ignore_failure":true, "ignore_empty_value":true, "description":"Fix EA network packet capture" } },
{ "gsub": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "pattern": "^[^.]*.", "replacement": "", "target_field": "dataset_tag_temp" } },
@@ -19,11 +19,13 @@
{ "set": { "if": "ctx.network?.type == 'ipv6'", "override": true, "field": "destination.ipv6", "value": "true" } },
{ "set": { "if": "ctx.tags != null && ctx.tags.contains('import')", "override": true, "field": "data_stream.dataset", "value": "import" } },
{ "set": { "if": "ctx.tags != null && ctx.tags.contains('import')", "override": true, "field": "data_stream.namespace", "value": "so" } },
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp","ignore_failure": true, "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSX","yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
{ "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } },
{ "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "set": { "if": "ctx?.metadata?.kafka != null" , "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
{ "set": { "if": "ctx.event?.dataset != null && ctx.event?.dataset == 'elasticsearch.server'", "field": "event.module", "value":"elasticsearch" }},
{"append": {"field":"related.ip","value":["{{source.ip}}","{{destination.ip}}"],"allow_duplicates":false,"if":"ctx?.event?.dataset == 'endpoint.events.network' && ctx?.source?.ip != null","ignore_failure":true}},
{"foreach": {"field":"host.ip","processor":{"append":{"field":"related.ip","value":"{{_ingest._value}}","allow_duplicates":false}},"if":"ctx?.event?.module == 'endpoint' && ctx?.host?.ip != null","ignore_missing":true, "description":"Extract IPs from Elastic Agent events (host.ip) and adds them to related.ip"}},
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp", "datastream_dataset_temp" ], "ignore_missing": true, "ignore_failure": true } }
]
}

View File

@@ -1,11 +0,0 @@
{
"description" : "import.wel",
"processors" : [
{ "set": { "field": "event.ingested", "value": "{{ @timestamp }}" } },
{ "set" : { "field" : "@timestamp", "value" : "{{ event.created }}" } },
{ "remove": { "field": [ "event_record_id", "event.created" , "timestamp" , "winlog.event_data.UtcTime" ], "ignore_failure": true } },
{ "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },
{ "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "name":"win.eventlogs" } },
{ "pipeline": { "name": "common" } }
]
}

View File

@@ -107,61 +107,61 @@
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-firewall",
"name": "logs-pfsense.log-1.23.1-firewall",
"if": "ctx.event.provider == 'filterlog'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-openvpn",
"name": "logs-pfsense.log-1.23.1-openvpn",
"if": "ctx.event.provider == 'openvpn'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-ipsec",
"name": "logs-pfsense.log-1.23.1-ipsec",
"if": "ctx.event.provider == 'charon'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-dhcp",
"name": "logs-pfsense.log-1.23.1-dhcp",
"if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\"].contains(ctx.event.provider)"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-unbound",
"name": "logs-pfsense.log-1.23.1-unbound",
"if": "ctx.event.provider == 'unbound'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-haproxy",
"name": "logs-pfsense.log-1.23.1-haproxy",
"if": "ctx.event.provider == 'haproxy'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-php-fpm",
"name": "logs-pfsense.log-1.23.1-php-fpm",
"if": "ctx.event.provider == 'php-fpm'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-squid",
"name": "logs-pfsense.log-1.23.1-squid",
"if": "ctx.event.provider == 'squid'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-snort",
"name": "logs-pfsense.log-1.23.1-snort",
"if": "ctx.event.provider == 'snort'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.21.0-suricata",
"name": "logs-pfsense.log-1.23.1-suricata",
"if": "ctx.event.provider == 'suricata'"
}
},
@@ -358,14 +358,6 @@
"source": "void handleMap(Map map) {\n for (def x : map.values()) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n map.values().removeIf(v -> v == null || (v instanceof String && v == \"-\"));\n}\nvoid handleList(List list) {\n for (def x : list) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n}\nhandleMap(ctx);\n"
}
},
{
"remove": {
"field": "event.original",
"if": "ctx.tags == null || !(ctx.tags.contains('preserve_original_event'))",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"pipeline": {
"name": "global@custom",

View File

@@ -9,6 +9,7 @@
{ "rename":{ "field": "rule.signature_id", "target_field": "rule.uuid", "ignore_failure": true } },
{ "rename":{ "field": "rule.signature_id", "target_field": "rule.signature", "ignore_failure": true } },
{ "rename":{ "field": "message2.payload_printable", "target_field": "network.data.decoded", "ignore_failure": true } },
{ "dissect": { "field": "rule.rule", "pattern": "%{?prefix}content:\"%{dns.query_name}\"%{?remainder}", "ignore_missing": true, "ignore_failure": true } },
{ "pipeline": { "name": "common.nids" } }
]
}

View File

@@ -18,6 +18,13 @@
{ "set": { "field": "event.ingested", "value": "{{@timestamp}}" } },
{ "date": { "field": "message2.timestamp", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "timezone": "UTC", "ignore_failure": true } },
{ "remove":{ "field": "agent", "ignore_failure": true } },
{"append":{"field":"related.ip","value":["{{source.ip}}","{{destination.ip}}"],"allow_duplicates":false,"ignore_failure":true}},
{
"script": {
"source": "boolean isPrivate(def ip) { if (ip == null) return false; int dot1 = ip.indexOf('.'); if (dot1 == -1) return false; int dot2 = ip.indexOf('.', dot1 + 1); if (dot2 == -1) return false; int first = Integer.parseInt(ip.substring(0, dot1)); if (first == 10) return true; if (first == 192 && ip.startsWith('168.', dot1 + 1)) return true; if (first == 172) { int second = Integer.parseInt(ip.substring(dot1 + 1, dot2)); return second >= 16 && second <= 31; } return false; } String[] fields = new String[] {\"source\", \"destination\"}; for (int i = 0; i < fields.length; i++) { def field = fields[i]; def ip = ctx[field]?.ip; if (ip != null) { if (ctx.network == null) ctx.network = new HashMap(); if (isPrivate(ip)) { if (ctx.network.private_ip == null) ctx.network.private_ip = new ArrayList(); if (!ctx.network.private_ip.contains(ip)) ctx.network.private_ip.add(ip); } else { if (ctx.network.public_ip == null) ctx.network.public_ip = new ArrayList(); if (!ctx.network.public_ip.contains(ip)) ctx.network.public_ip.add(ip); } } }",
"ignore_failure": false
}
},
{ "pipeline": { "if": "ctx?.event?.dataset != null", "name": "suricata.{{event.dataset}}" } }
]
}
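The Painless script above sorts source/destination addresses into `network.private_ip` / `network.public_ip` using an RFC 1918 test. The same check in Python (note it deliberately covers only 10/8, 172.16/12, and 192.168/16, matching the script; loopback and link-local are treated as public):

```python
def classify(ip):
    """RFC 1918 check equivalent to the isPrivate() Painless function above."""
    parts = ip.split(".")
    if len(parts) != 4:
        return "public"
    first, second = int(parts[0]), int(parts[1])
    if first == 10:
        return "private"
    if first == 192 and second == 168:
        return "private"
    if first == 172 and 16 <= second <= 31:
        return "private"
    return "public"
```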

View File

@@ -12,7 +12,8 @@
{ "rename": { "field": "message2.id.orig_p", "target_field": "source.port", "ignore_missing": true } },
{ "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
{ "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
{ "community_id": {} },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ "community_id": { "if": "ctx.network?.community_id == null" } },
{ "set": { "if": "ctx.source?.ip != null", "field": "client.ip", "value": "{{source.ip}}" } },
{ "set": { "if": "ctx.source?.port != null", "field": "client.port", "value": "{{source.port}}" } },
{ "set": { "if": "ctx.destination?.ip != null", "field": "server.ip", "value": "{{destination.ip}}" } },

View File

@@ -24,6 +24,10 @@
{ "rename": { "field": "message2.resp_cc", "target_field": "server.country_code", "ignore_missing": true } },
{ "rename": { "field": "message2.sensorname", "target_field": "observer.name", "ignore_missing": true } },
{ "rename": { "field": "message2.vlan", "target_field": "network.vlan.id", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4l", "target_field": "hash.ja4l", "ignore_missing" : true, "if": "ctx.message2?.ja4l != null && ctx.message2.ja4l.length() > 0" }},
{ "rename": { "field": "message2.ja4ls", "target_field": "hash.ja4ls", "ignore_missing" : true, "if": "ctx.message2?.ja4ls != null && ctx.message2.ja4ls.length() > 0" }},
{ "rename": { "field": "message2.ja4t", "target_field": "hash.ja4t", "ignore_missing" : true, "if": "ctx.message2?.ja4t != null && ctx.message2.ja4t.length() > 0" }},
{ "rename": { "field": "message2.ja4ts", "target_field": "hash.ja4ts", "ignore_missing" : true, "if": "ctx.message2?.ja4ts != null && ctx.message2.ja4ts.length() > 0" }},
{ "script": { "lang": "painless", "source": "ctx.network.bytes = (ctx.client.bytes + ctx.server.bytes)", "ignore_failure": true } },
{ "set": { "if": "ctx.connection?.state == 'S0'", "field": "connection.state_description", "value": "Connection attempt seen, no reply" } },
{ "set": { "if": "ctx.connection?.state == 'S1'", "field": "connection.state_description", "value": "Connection established, not terminated" } },

View File

@@ -20,7 +20,11 @@
{ "rename": { "field": "message2.RD", "target_field": "dns.recursion.desired", "ignore_missing": true } },
{ "rename": { "field": "message2.RA", "target_field": "dns.recursion.available", "ignore_missing": true } },
{ "rename": { "field": "message2.Z", "target_field": "dns.reserved", "ignore_missing": true } },
{ "rename": { "field": "message2.answers", "target_field": "dns.answers.name", "ignore_missing": true } },
{ "foreach": {"field": "dns.answers.name","processor": {"pipeline": {"name": "common.ip_validation"}},"if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null","ignore_failure": true}},
{ "foreach": {"field": "temp._valid_ips","processor": {"append": {"field": "dns.resolved_ip","allow_duplicates": false,"value": "{{{_ingest._value}}}","ignore_failure": true}},"ignore_failure": true}},
{ "script": { "source": "if (ctx.dns.resolved_ip != null && ctx.dns.resolved_ip instanceof List) {\n ctx.dns.resolved_ip.removeIf(item -> item == null || item.toString().trim().isEmpty());\n }","ignore_failure": true }},
{ "remove": {"field": ["temp"], "ignore_missing": true ,"ignore_failure": true } },
{ "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
{ "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },
@@ -28,4 +32,4 @@
{ "pipeline": { "if": "ctx.dns?.query?.name != null && ctx.dns.query.name.contains('.')", "name": "dns.tld" } },
{ "pipeline": { "name": "zeek.common" } }
]
}

View File

@@ -27,6 +27,7 @@
{ "rename": { "field": "message2.resp_fuids", "target_field": "log.id.resp_fuids", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_filenames", "target_field": "file.resp_filenames", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_mime_types", "target_field": "file.resp_mime_types", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4h", "target_field": "hash.ja4h", "ignore_missing": true, "if": "ctx?.message2?.ja4h != null && ctx.message2.ja4h.length() > 0" } },
{ "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.useragent_length = ctx.useragent.length()", "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.virtual_host_length = ctx.virtual_host.length()", "ignore_failure": true } },

View File

@@ -27,6 +27,7 @@
{ "rename": { "field": "message2.resp_filenames", "target_field": "file.resp_filenames", "ignore_missing": true } },
{ "rename": { "field": "message2.resp_mime_types", "target_field": "file.resp_mime_types", "ignore_missing": true } },
{ "rename": { "field": "message2.stream_id", "target_field": "http2.stream_id", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4h", "target_field": "hash.ja4h", "ignore_missing": true, "if": "ctx?.message2?.ja4h != null && ctx.message2.ja4h.length() > 0" } },
{ "remove": { "field": "message2.tags", "ignore_failure": true } },
{ "remove": { "field": ["host"], "ignore_failure": true } },
{ "script": { "lang": "painless", "source": "ctx.uri_length = ctx.uri.length()", "ignore_failure": true } },

View File

@@ -0,0 +1,10 @@
{
"description": "zeek.ja4ssh",
"processors": [
{"set": {"field": "event.dataset","value": "ja4ssh"}},
{"remove": {"field": "host","ignore_missing": true,"ignore_failure": true}},
{"json": {"field": "message","target_field": "message2","ignore_failure": true}},
{"rename": {"field": "message2.ja4ssh", "target_field": "hash.ja4ssh", "ignore_missing": true, "if": "ctx?.message2?.ja4ssh != null && ctx.message2.ja4ssh.length() > 0" }},
{"pipeline": {"name": "zeek.common"}}
]
}

View File

@@ -1,9 +1,25 @@
{
"description":"zeek.ldap_search",
"processors":[
{"pipeline": {"name": "zeek.ldap", "ignore_missing_pipeline":true,"ignore_failure":true}},
{"set": {"field": "event.dataset", "value":"ldap_search"}},
{"remove": {"field": "tags", "ignore_missing":true}},
{"json": {"field": "message", "target_field": "message2", "ignore_failure": true}},
{"rename": {"field": "message2.message_id", "target_field": "ldap.message_id", "ignore_missing": true}},
{"rename": {"field": "message2.opcode", "target_field": "ldap.opcode", "ignore_missing": true}},
{"rename": {"field": "message2.result", "target_field": "ldap.result", "ignore_missing": true}},
{"rename": {"field": "message2.diagnostic_message", "target_field": "ldap.diagnostic_message", "ignore_missing": true}},
{"rename": {"field": "message2.version", "target_field": "ldap.version", "ignore_missing": true}},
{"rename": {"field": "message2.object", "target_field": "ldap.object", "ignore_missing": true}},
{"rename": {"field": "message2.argument", "target_field": "ldap.argument", "ignore_missing": true}},
{"rename": {"field": "message2.scope", "target_field": "ldap_search.scope", "ignore_missing":true}},
{"rename": {"field": "message2.deref_aliases", "target_field": "ldap_search.deref_aliases", "ignore_missing":true}},
{"rename": {"field": "message2.base_object", "target_field": "ldap.object", "ignore_missing":true}},
{"rename": {"field": "message2.result_count", "target_field": "ldap_search.result_count", "ignore_missing":true}},
{"rename": {"field": "message2.filter", "target_field": "ldap_search.filter", "ignore_missing":true}},
{"rename": {"field": "message2.attributes", "target_field": "ldap_search.attributes", "ignore_missing":true}},
{"script": {"source": "if (ctx.containsKey('ldap') && ctx.ldap.containsKey('diagnostic_message') && ctx.ldap.diagnostic_message != null) {\n String message = ctx.ldap.diagnostic_message;\n\n // get user and property from SASL success\n if (message.toLowerCase().contains(\"sasl(0): successful result\")) {\n Pattern pattern = /user:\\s*([^ ]+)\\s*property:\\s*([^ ]+)/i;\n Matcher matcher = pattern.matcher(message);\n if (matcher.find()) {\n ctx.ldap.user_email = matcher.group(1); // Extract user email\n ctx.ldap.property = matcher.group(2); // Extract property\n }\n }\n if (message.toLowerCase().contains(\"ldaperr:\")) {\n Pattern pattern = /comment:\\s*([^,]+)/i;\n Matcher matcher = pattern.matcher(message);\n\n if (matcher.find()) {\n ctx.ldap.comment = matcher.group(1);\n }\n }\n }","ignore_failure": true}},
{"script": {"source": "if (ctx.containsKey('ldap') && ctx.ldap.containsKey('object') && ctx.ldap.object != null) {\n String message = ctx.ldap.object;\n\n // parse common name from ldap object\n if (message.toLowerCase().contains(\"cn=\")) {\n Pattern pattern = /cn=([^,]+)/i;\n Matcher matcher = pattern.matcher(message);\n if (matcher.find()) {\n ctx.ldap.common_name = matcher.group(1); // Extract CN\n }\n }\n // build domain from ldap object\n if (message.toLowerCase().contains(\"dc=\")) {\n Pattern dcPattern = /dc=([^,]+)/i;\n Matcher dcMatcher = dcPattern.matcher(message);\n\n StringBuilder domainBuilder = new StringBuilder();\n while (dcMatcher.find()) {\n if (domainBuilder.length() > 0 ){\n domainBuilder.append(\".\");\n }\n domainBuilder.append(dcMatcher.group(1));\n }\n if (domainBuilder.length() > 0) {\n ctx.ldap.domain = domainBuilder.toString();\n }\n }\n // create list of any organizational units from ldap object\n if (message.toLowerCase().contains(\"ou=\")) {\n Pattern ouPattern = /ou=([^,]+)/i;\n Matcher ouMatcher = ouPattern.matcher(message);\n ctx.ldap.organizational_unit = [];\n\n while (ouMatcher.find()) {\n ctx.ldap.organizational_unit.add(ouMatcher.group(1));\n }\n if(ctx.ldap.organizational_unit.isEmpty()) {\n ctx.remove(\"ldap.organizational_unit\");\n }\n }\n}\n","ignore_failure": true}},
{"remove": {"field": "message2.tags", "ignore_failure": true}},
{"remove": {"field": ["host"], "ignore_failure": true}},
{"pipeline": {"name": "zeek.common"}}
]
}
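The second script processor above joins every `dc=` component of the LDAP object into a dotted `ldap.domain`. The same transformation can be sanity-checked outside the pipeline; a minimal shell sketch (hypothetical helper name, for illustration only — the pipeline itself runs this logic in painless):

```shell
# Join all dc= components of an LDAP DN into a dotted domain,
# mirroring the painless script in the zeek.ldap_search pipeline.
ldap_object_to_domain() {
  echo "$1" | grep -oiE 'dc=[^,]+' | cut -d= -f2 | paste -sd'.' -
}

ldap_object_to_domain "CN=Users,DC=corp,DC=example,DC=com"  # prints corp.example.com
```

Like the painless pattern `/dc=([^,]+)/i`, the `grep -i` match is case-insensitive, so `DC=`, `dc=`, and `Dc=` components are all collected in order.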

View File

@@ -23,6 +23,8 @@
{ "rename": { "field": "message2.validation_status","target_field": "ssl.validation_status", "ignore_missing": true } },
{ "rename": { "field": "message2.ja3", "target_field": "hash.ja3", "ignore_missing": true } },
{ "rename": { "field": "message2.ja3s", "target_field": "hash.ja3s", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4", "target_field": "hash.ja4", "ignore_missing": true, "if": "ctx?.message2?.ja4 != null && ctx.message2.ja4.length() > 0" } },
{ "rename": { "field": "message2.ja4s", "target_field": "hash.ja4s", "ignore_missing": true, "if": "ctx?.message2?.ja4s != null && ctx.message2.ja4s.length() > 0" } },
{ "foreach":
{
"if": "ctx?.tls?.client?.hash?.sha256 !=null",

View File

@@ -42,6 +42,7 @@
{ "dot_expander": { "field": "basic_constraints.path_length", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.basic_constraints.path_length", "target_field": "x509.basic_constraints.path_length", "ignore_missing": true } },
{ "rename": { "field": "message2.fingerprint", "target_field": "hash.sha256", "ignore_missing": true } },
{ "rename": { "field": "message2.ja4x", "target_field": "hash.ja4x", "ignore_missing": true, "if": "ctx?.message2?.ja4x != null && ctx.message2.ja4x.length() > 0" } },
{ "pipeline": { "name": "zeek.common_ssl" } }
]
}

View File

@@ -20,8 +20,28 @@ appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = /var/log/elasticsearch
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = *.gz
appender.rolling.strategy.action.condition.glob = *.log.gz
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D
appender.rolling_json.type = RollingFile
appender.rolling_json.name = rolling_json
appender.rolling_json.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.json
appender.rolling_json.layout.type = ECSJsonLayout
appender.rolling_json.layout.dataset = elasticsearch.server
appender.rolling_json.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}.json.gz
appender.rolling_json.policies.type = Policies
appender.rolling_json.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_json.policies.time.interval = 1
appender.rolling_json.policies.time.modulate = true
appender.rolling_json.strategy.type = DefaultRolloverStrategy
appender.rolling_json.strategy.action.type = Delete
appender.rolling_json.strategy.action.basepath = /var/log/elasticsearch
appender.rolling_json.strategy.action.condition.type = IfFileName
appender.rolling_json.strategy.action.condition.glob = *.json.gz
appender.rolling_json.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling_json.strategy.action.condition.nested_condition.age = 1D
rootLogger.level = info
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_json.ref = rolling_json

View File

@@ -12,7 +12,7 @@ elasticsearch:
description: Specify the memory heap size in (m)egabytes for Elasticsearch.
helpLink: elasticsearch.html
index_clean:
description: Determines if indices should be considered for deletion by available disk space in the cluster. Otherwise, indices will only be deleted by the age defined in the ILM settings.
description: Determines if indices should be considered for deletion by available disk space in the cluster. Otherwise, indices will only be deleted by the age defined in the ILM settings. This setting only applies to EVAL, STANDALONE, and HEAVY NODE installations. Other installations can only use ILM settings.
forcedType: bool
helpLink: elasticsearch.html
retention:
@@ -392,6 +392,7 @@ elasticsearch:
so-logs-elastic_agent_x_metricbeat: *indexSettings
so-logs-elastic_agent_x_osquerybeat: *indexSettings
so-logs-elastic_agent_x_packetbeat: *indexSettings
so-logs-elasticsearch_x_server: *indexSettings
so-metrics-endpoint_x_metadata: *indexSettings
so-metrics-endpoint_x_metrics: *indexSettings
so-metrics-endpoint_x_policy: *indexSettings

View File

@@ -15,7 +15,7 @@
{% set ES_INDEX_SETTINGS_ORIG = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings %}
{# start generation of integration default index_settings #}
{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') %}
{% if salt['file.file_exists']('/opt/so/state/esfleet_package_components.json') and salt['file.file_exists']('/opt/so/state/esfleet_component_templates.json') %}
{% set check_package_components = salt['file.stats']('/opt/so/state/esfleet_package_components.json') %}
{% if check_package_components.size > 1 %}
{% from 'elasticfleet/integration-defaults.map.jinja' import ADDON_INTEGRATION_DEFAULTS %}

View File

@@ -0,0 +1,69 @@
{
"template": {
"mappings": {
"properties": {
"hash": {
"type": "object",
"properties": {
"ja3": {
"type": "keyword",
"ignore_above": 1024
},
"ja3s": {
"type": "keyword",
"ignore_above": 1024
},
"hassh": {
"type": "keyword",
"ignore_above": 1024
},
"md5": {
"type": "keyword",
"ignore_above": 1024
},
"sha1": {
"type": "keyword",
"ignore_above": 1024
},
"sha256": {
"type": "keyword",
"ignore_above": 1024
},
"ja4": {
"type": "keyword",
"ignore_above": 1024
},
"ja4l": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ls": {
"type": "keyword",
"ignore_above": 1024
},
"ja4t": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ts": {
"type": "keyword",
"ignore_above": 1024
},
"ja4ssh": {
"type": "keyword",
"ignore_above": 1024
},
"ja4h": {
"type": "keyword",
"ignore_above": 1024
},
"ja4x": {
"type": "keyword",
"ignore_above": 1024
}
}
}
}
}
}
}

View File

@@ -29,7 +29,7 @@
"file": {
"properties": {
"line": {
"type": "integer"
"type": "long"
},
"name": {
"ignore_above": 1024,

View File

@@ -0,0 +1,26 @@
{
"template": {
"mappings": {
"dynamic_templates": [],
"properties": {
"metadata": {
"properties": {
"kafka": {
"properties": {
"timestamp": {
"type": "date"
}
}
}
}
}
}
}
},
"_meta": {
"_meta": {
"documentation": "https://www.elastic.co/guide/en/ecs/current/ecs-log.html",
"ecs_version": "1.12.2"
}
}
}

View File

@@ -0,0 +1,43 @@
{
"template": {
"mappings": {
"properties": {
"agent": {
"type": "object",
"properties": {
"hostname": {
"ignore_above": 1024,
"type": "keyword"
},
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"last_checkin_status": {
"ignore_above": 1024,
"type": "keyword"
},
"last_checkin": {
"type": "date"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
},
"offline_duration_hours": {
"type": "integer"
},
"policy_id": {
"ignore_above": 1024,
"type": "keyword"
},
"status": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
}
}

View File

@@ -5,6 +5,7 @@
"managed_by": "security_onion",
"managed": true
},
"date_detection": false,
"dynamic_templates": [
{
"strings_as_keyword": {
@@ -16,7 +17,19 @@
}
}
],
"date_detection": false
"properties": {
"metadata": {
"properties": {
"kafka": {
"properties": {
"timestamp": {
"type": "date"
}
}
}
}
}
}
}
},
"_meta": {

View File

@@ -1,37 +1,59 @@
{
"template": {
"mappings": {
"properties": {
"host": {
"properties":{
"ip": {
"type": "ip"
}
}
},
"related": {
"properties":{
"ip": {
"type": "ip"
}
"template": {
"mappings": {
"properties": {
"host": {
"properties": {
"ip": {
"type": "ip"
}
},
"destination": {
"properties":{
"ip": {
"type": "ip"
}
}
},
"related": {
"properties": {
"ip": {
"type": "ip"
}
},
"source": {
"properties":{
"ip": {
"type": "ip"
}
},
"destination": {
"properties": {
"ip": {
"type": "ip"
}
}
},
"source": {
"properties": {
"ip": {
"type": "ip"
}
}
},
"metadata": {
"properties": {
"input": {
"properties": {
"beats": {
"properties": {
"host": {
"properties": {
"ip": {
"type": "ip"
}
}
}
}
}
}
}
}
}
}
}
},
"_meta": {
"managed_by": "security_onion",
"managed": true
}
}

View File

@@ -0,0 +1,104 @@
{
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"so_kind": {
"ignore_above": 1024,
"type": "keyword"
},
"so_operation": {
"ignore_above": 1024,
"type": "keyword"
},
"so_chat": {
"properties": {
"role": {
"ignore_above": 1024,
"type": "keyword"
},
"content": {
"type": "object",
"enabled": false
},
"sessionId": {
"ignore_above": 1024,
"type": "keyword"
},
"createTime": {
"type": "date"
},
"deletedAt": {
"type": "date"
},
"tags": {
"ignore_above": 1024,
"type": "keyword"
},
"tool_use_id": {
"ignore_above": 1024,
"type": "keyword"
},
"userId": {
"ignore_above": 1024,
"type": "keyword"
},
"message": {
"properties": {
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"role": {
"ignore_above": 1024,
"type": "keyword"
},
"model": {
"ignore_above": 1024,
"type": "keyword"
},
"contentStr": {
"type": "text"
},
"contentBlocks": {
"type": "nested",
"enabled": false
},
"stopReason": {
"ignore_above": 1024,
"type": "keyword"
},
"stopSequence": {
"ignore_above": 1024,
"type": "keyword"
},
"usage": {
"properties": {
"input_tokens": {
"type": "long"
},
"output_tokens": {
"type": "long"
},
"credits": {
"type": "long"
}
}
}
}
}
}
}
}
}
},
"_meta": {
"ecs_version": "1.12.2"
}
}

View File

@@ -0,0 +1,7 @@
{
"template": {},
"version": 1,
"_meta": {
"description": "default settings for common Security Onion Assistant indices"
}
}

View File

@@ -0,0 +1,44 @@
{
"template": {
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"so_kind": {
"ignore_above": 1024,
"type": "keyword"
},
"so_session": {
"properties": {
"title": {
"ignore_above": 1024,
"type": "keyword"
},
"sessionId": {
"ignore_above": 1024,
"type": "keyword"
},
"createTime": {
"type": "date"
},
"deleteTime": {
"type": "date"
},
"tags": {
"ignore_above": 1024,
"type": "keyword"
},
"userId": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
},
"_meta": {
"ecs_version": "1.12.2"
}
}

View File

@@ -0,0 +1,7 @@
{
"template": {},
"version": 1,
"_meta": {
"description": "default settings for common Security Onion Assistant indices"
}
}

View File

@@ -1,4 +1,4 @@
/bin/bash
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
@@ -6,6 +6,6 @@
. /usr/sbin/so-common
echo "Starting ILM..."
curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L -X POST https://localhost:9200/_ilm/start
echo

View File

@@ -8,3 +8,4 @@
echo "Stopping ILM..."
curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L -X POST https://localhost:9200/_ilm/stop
echo

View File

@@ -0,0 +1,113 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
INFLUX_URL="https://localhost:8086/api/v2"
. /usr/sbin/so-common
request() {
curl -skK /opt/so/conf/influxdb/curl.config "$INFLUX_URL/$@"
}
lookup_org_id() {
response=$(request orgs?org=Security+Onion)
echo "$response" | jq -r ".orgs[] | select(.name == \"Security Onion\").id"
}
ORG_ID=$(lookup_org_id)
run_flux_query() {
local query=$1
request "query?org=$ORG_ID" -H 'Accept:application/csv' -H 'Content-type:application/vnd.flux' -d "$query" -XPOST 2>/dev/null
}
read_csv_result() {
local result="$1"
echo "$result" | grep '^,_result,' | head -1 | awk -F',' '{print $NF}' | tr -d '\r\n\t '
}
bytes_to_gb() {
local bytes="${1:-0}"
if [[ "$bytes" =~ ^-?[0-9]+$ ]]; then
echo "$bytes" | awk '{printf "%.2f", $1 / 1024 / 1024 / 1024}'
else
echo "0.00"
fi
}
indexes_query='from(bucket: "telegraf/so_long_term")
|> range(start: -7d)
|> filter(fn: (r) => r._measurement == "elasticsearch_index_size")
|> distinct(column: "_field")
|> keep(columns: ["_field"])'
indexes_result=$(run_flux_query "$indexes_query")
indexes=$(echo "$indexes_result" | tail -n +2 | cut -d',' -f4 | grep -v '^$' | grep -v '^_field$' | sed 's/\r$//' | sort -u)
printf "%-50s %15s %15s %15s\n" "Index Name" "Last 24hr (GB)" "Last 7d (GB)" "Last 30d (GB)"
printf "%-50s %15s %15s %15s\n" "$(printf '%.0s-' {1..50})" "$(printf '%.0s-' {1..15})" "$(printf '%.0s-' {1..15})" "$(printf '%.0s-' {1..15})"
for index in $indexes; do
[[ -z "$index" ]] && continue
current_query="from(bucket: \"telegraf/so_long_term\")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == \"elasticsearch_index_size\" and r._field == \"$index\")
|> last()
|> keep(columns: [\"_value\"])"
current_result=$(run_flux_query "$current_query")
current_size=$(read_csv_result "$current_result")
current_size=${current_size:-0}
size_24h_query="from(bucket: \"telegraf/so_long_term\")
|> range(start: -25h, stop: -23h)
|> filter(fn: (r) => r._measurement == \"elasticsearch_index_size\" and r._field == \"$index\")
|> last()
|> keep(columns: [\"_value\"])"
size_24h_result=$(run_flux_query "$size_24h_query")
size_24h_ago=$(read_csv_result "$size_24h_result")
size_24h_ago=${size_24h_ago:-$current_size}
size_7d_query="from(bucket: \"telegraf/so_long_term\")
|> range(start: -7d8h, stop: -7d)
|> filter(fn: (r) => r._measurement == \"elasticsearch_index_size\" and r._field == \"$index\")
|> last()
|> keep(columns: [\"_value\"])"
size_7d_result=$(run_flux_query "$size_7d_query")
size_7d_ago=$(read_csv_result "$size_7d_result")
size_7d_ago=${size_7d_ago:-$current_size}
size_30d_query="from(bucket: \"telegraf/so_long_term\")
|> range(start: -30d8h, stop: -30d)
|> filter(fn: (r) => r._measurement == \"elasticsearch_index_size\" and r._field == \"$index\")
|> last()
|> keep(columns: [\"_value\"])"
size_30d_result=$(run_flux_query "$size_30d_query")
size_30d_ago=$(read_csv_result "$size_30d_result")
size_30d_ago=${size_30d_ago:-$current_size}
# If an index was recently cleaned up by ILM, it can result in a negative number for index growth.
growth_24h=$(( current_size > size_24h_ago ? current_size - size_24h_ago : 0 ))
growth_7d=$(( current_size > size_7d_ago ? current_size - size_7d_ago : 0 ))
growth_30d=$(( current_size > size_30d_ago ? current_size - size_30d_ago : 0 ))
growth_24h_gb=$(bytes_to_gb "$growth_24h")
growth_7d_gb=$(bytes_to_gb "$growth_7d")
growth_30d_gb=$(bytes_to_gb "$growth_30d")
# Only show results for indices with at least one metric above 0.00
if [[ "$growth_24h_gb" != "0.00" ]] || [[ "$growth_7d_gb" != "0.00" ]] || [[ "$growth_30d_gb" != "0.00" ]]; then
printf "%020.2f|%-50s %15s %15s %15s\n" \
"$growth_24h" \
"$index" \
"$growth_24h_gb" \
"$growth_7d_gb" \
"$growth_30d_gb"
fi
done | sort -t'|' -k1,1nr | cut -d'|' -f2-
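The growth columns in the report above rely on the small `bytes_to_gb` helper; repeated here standalone so the conversion can be sanity-checked in isolation (same logic as the function defined in the script):

```shell
# Convert a raw byte count to gigabytes with two decimal places.
# Non-numeric input (e.g. an empty query result) falls back to "0.00".
bytes_to_gb() {
  local bytes="${1:-0}"
  if [[ "$bytes" =~ ^-?[0-9]+$ ]]; then
    echo "$bytes" | awk '{printf "%.2f", $1 / 1024 / 1024 / 1024}'
  else
    echo "0.00"
  fi
}

bytes_to_gb 1073741824   # prints 1.00
bytes_to_gb not-a-number # prints 0.00
```

The regex guard matters because `read_csv_result` can return an empty string when a Flux query has no rows, and feeding that straight into awk would silently print a misleading `0.00` without the explicit fallback branch.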

View File

@@ -21,7 +21,7 @@ while [[ "$COUNT" -le 240 ]]; do
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
# Check cluster health once connected
so-elasticsearch-query _cluster/health?wait_for_status=yellow > /dev/null 2>&1
so-elasticsearch-query _cluster/health?wait_for_status=yellow\&timeout=120s > /dev/null 2>&1
break
else
((COUNT+=1))

View File

@@ -0,0 +1,195 @@
#!/bin/bash
. /usr/sbin/so-common
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
BOLD='\033[1;37m'
NC='\033[0m'
log_title() {
if [ "$1" == "LOG" ]; then
echo -e "\n${BOLD}================ $2 ================${NC}\n"
elif [ "$1" == "OK" ]; then
echo -e "${GREEN} $2 ${NC}"
elif [ "$1" == "WARN" ]; then
echo -e "${YELLOW} $2 ${NC}"
elif [ "$1" == "ERROR" ]; then
echo -e "${RED} $2 ${NC}"
fi
}
health_report() {
if ! health_report_output=$(so-elasticsearch-query _health_report?format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve health report from Elasticsearch"
return 1
fi
non_green_count=$(echo "$health_report_output" | jq '[.indicators | to_entries[] | select(.value.status != "green")] | length')
if [ "$non_green_count" -gt 0 ]; then
echo "$health_report_output" | jq -r '.indicators | to_entries[] | select(.value.status != "green") | .key' | while read -r indicator_name; do
indicator=$(echo "$health_report_output" | jq -r ".indicators.\"$indicator_name\"")
status=$(echo "$indicator" | jq -r '.status')
symptom=$(echo "$indicator" | jq -r '.symptom // "No symptom available"')
# reformat indicator name
display_name=$(echo "$indicator_name" | tr '_' ' ' | sed 's/\b\(.\)/\u\1/g')
if [ "$status" = "yellow" ]; then
log_title "WARN" "$display_name: $symptom"
else
log_title "ERROR" "$display_name: $symptom"
fi
# diagnosis if available
echo "$indicator" | jq -c '.diagnosis[]? // empty' | while read -r diagnosis; do
cause=$(echo "$diagnosis" | jq -r '.cause // "Unknown"')
action=$(echo "$diagnosis" | jq -r '.action // "No action specified"')
echo -e " ${BOLD}Cause:${NC} $cause\n"
echo -e " ${BOLD}Action:${NC} $action\n"
# Check for affected indices
affected_indices=$(echo "$diagnosis" | jq -r '.affected_resources.indices[]? // empty')
if [ -n "$affected_indices" ]; then
echo -e " ${BOLD}Affected indices:${NC}"
total_indices=$(echo "$affected_indices" | wc -l)
echo "$affected_indices" | head -10 | while read -r index; do
echo " - $index"
done
if [ "$total_indices" -gt 10 ]; then
remaining=$((total_indices - 10))
echo " ... and $remaining more indices (truncated for readability)"
fi
fi
echo
done
done
else
log_title "OK" "All health indicators are green"
fi
}
elasticsearch_status() {
log_title "LOG" "Elasticsearch Status"
if so-elasticsearch-query / --fail --output /dev/null; then
health_report
else
log_title "ERROR" "Elasticsearch API is not accessible"
so-status
log_title "ERROR" "Make sure Elasticsearch is running. Addtionally, check for startup errors in /opt/so/log/elasticsearch/securityonion.log${NC}\n"
exit 1
fi
}
indices_by_age() {
log_title "LOG" "Indices by Creation Date - Size > 1KB"
log_title "WARN" "Since high/flood watermark has been reached consider updating ILM policies.\n"
if ! indices_output=$(so-elasticsearch-query '_cat/indices?v&s=creation.date:asc&h=creation.date.string,index,status,health,docs.count,pri.store.size&bytes=b&format=json' --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve indices list from Elasticsearch"
return 1
fi
# Filter for indices with size > 1KB (1024 bytes) and format output
echo -e "${BOLD}Creation Date Name Size${NC}"
echo -e "${BOLD}--------------------------------------------------------------------------------------------------------------${NC}"
# Create list of indices excluding .internal, so-detection*, so-case*
echo "$indices_output" | jq -r '.[] | select((."pri.store.size" | tonumber) > 1024) | select(.index | (startswith(".internal") or startswith("so-detection") or startswith("so-case")) | not ) | "\(."creation.date.string") | \(.index) | \(."pri.store.size")"' | while IFS='|' read -r creation_date index_name size_bytes; do
# Convert bytes to GB / MB
if [ "$size_bytes" -gt 1073741824 ]; then
size_human=$(echo "scale=2; $size_bytes / 1073741824" | bc)GB
else
size_human=$(echo "scale=2; $size_bytes / 1048576" | bc)MB
fi
creation_date=$(date -d "$creation_date" '+%Y-%m-%dT%H:%MZ' )
# Format output with spacing
printf "%-19s %-76s %10s\n" "$creation_date" "$index_name" "$size_human"
done
}
watermark_settings() {
watermark_path=".defaults.cluster.routing.allocation.disk.watermark"
if ! watermark_output=$(so-elasticsearch-query _cluster/settings?include_defaults=true\&filter_path=*.cluster.routing.allocation.disk.* --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve watermark settings from Elasticsearch"
return 1
fi
if ! disk_allocation_output=$(so-elasticsearch-query _cat/nodes?v\&h=name,ip,disk.used_percent,disk.avail,disk.total,node.role\&format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve disk allocation data from Elasticsearch"
return 1
fi
flood=$(echo "$watermark_output" | jq -r "$watermark_path.flood_stage" )
high=$(echo "$watermark_output" | jq -r "$watermark_path.high" )
low=$(echo "$watermark_output" | jq -r "$watermark_path.low" )
# Strip percentage signs for comparison
flood_num=${flood%\%}
high_num=${high%\%}
low_num=${low%\%}
# Check each node's disk usage
log_title "LOG" "Disk Usage Check"
echo -e "${BOLD}LOW:${GREEN}$low${NC}${BOLD} HIGH:${YELLOW}${high}${NC}${BOLD} FLOOD:${RED}${flood}${NC}\n"
# Only show data nodes (d=data, h=hot, w=warm, c=cold, f=frozen, s=content)
echo "$disk_allocation_output" | jq -r '.[] | select(.["node.role"] | test("[dhwcfs]")) | "\(.name)|\(.["disk.used_percent"])"' | while IFS='|' read -r node_name disk_used; do
disk_used_num=$(echo "$disk_used" | bc)
if (( $(echo "$disk_used_num >= $flood_num" | bc -l) )); then
log_title "ERROR" "$node_name is at or above the flood watermark ($flood)! Disk usage: ${disk_used}%"
touch /tmp/watermark_reached
elif (( $(echo "$disk_used_num >= $high_num" | bc -l) )); then
log_title "ERROR" "$node_name is at or above the high watermark ($high)! Disk usage: ${disk_used}%"
touch /tmp/watermark_reached
else
log_title "OK" "$node_name disk usage: ${disk_used}%"
fi
done
# Check if we need to show indices by age
if [ -f /tmp/watermark_reached ]; then
indices_by_age
rm -f /tmp/watermark_reached
fi
}
unassigned_shards() {
if ! unassigned_shards_output=$(so-elasticsearch-query _cat/shards?v\&h=index,shard,prirep,state,unassigned.reason,unassigned.details\&s=state\&format=json --fail 2>/dev/null); then
log_title "ERROR" "Failed to retrieve shard data from Elasticsearch"
return 1
fi
log_title "LOG" "Unassigned Shards Check"
# Check if there are any UNASSIGNED shards
unassigned_count=$(echo "$unassigned_shards_output" | jq '[.[] | select(.state == "UNASSIGNED")] | length')
if [ "$unassigned_count" -gt 0 ]; then
echo "$unassigned_shards_output" | jq -r '.[] | select(.state == "UNASSIGNED") | "\(.index)|\(.shard)|\(.prirep)|\(."unassigned.reason")"' | while IFS='|' read -r index shard prirep reason; do
if [ "$prirep" = "r" ]; then
log_title "WARN" "Replica shard for index $index is unassigned. Reason: $reason"
elif [ "$prirep" = "p" ]; then
log_title "ERROR" "Primary shard for index $index is unassigned. Reason: $reason"
fi
done
else
log_title "OK" "All shards are assigned"
fi
}
main() {
elasticsearch_status
watermark_settings
unassigned_shards
}
main

View File

@@ -136,7 +136,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
TEMPLATE=${i::-14}
COMPONENT_PATTERN=${TEMPLATE:3}
MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -vE "detections|osquery")
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" && ! "$COMPONENT_PATTERN" =~ logs-http_endpoint\.generic|logs-winlog\.winlog ]]; then
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" && ! "$COMPONENT_PATTERN" =~ \.generic|logs-winlog\.winlog ]]; then
load_failures=$((load_failures+1))
echo "Component template does not exist for $COMPONENT_PATTERN. The index template will not be loaded. Load failures: $load_failures"
else

View File

@@ -21,7 +21,7 @@
'so-strelka-filestream'
] %}
{% elif GLOBALS.role == 'so-manager' or GLOBALS.role == 'so-standalone' or GLOBALS.role == 'so-managersearch' %}
{% elif GLOBALS.role in ['so-manager', 'so-standalone','so-managersearch', 'so-managerhype'] %}
{% set NODE_CONTAINERS = [
'so-dockerregistry',
'so-elasticsearch',

View File

@@ -11,13 +11,16 @@ firewall:
endgame: []
eval: []
external_suricata: []
external_kafka: []
fleet: []
heavynode: []
hypervisor: []
idh: []
import: []
localhost:
- 127.0.0.1
manager: []
managerhype: []
managersearch: []
receiver: []
searchnode: []
@@ -103,6 +106,10 @@ firewall:
tcp:
- 9092
udp: []
kafka_external_access:
tcp:
- 29092
udp: []
kibana:
tcp:
- 5601
@@ -473,6 +480,8 @@ firewall:
external_suricata:
portgroups:
- external_suricata
external_kafka:
portgroups: []
desktop:
portgroups:
- docker_registry
@@ -482,6 +491,15 @@ firewall:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
@@ -534,6 +552,218 @@ firewall:
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
syslog:
portgroups:
- syslog
customhostgroup0:
portgroups: []
customhostgroup1:
portgroups: []
customhostgroup2:
portgroups: []
customhostgroup3:
portgroups: []
customhostgroup4:
portgroups: []
customhostgroup5:
portgroups: []
customhostgroup6:
portgroups: []
customhostgroup7:
portgroups: []
customhostgroup8:
portgroups: []
customhostgroup9:
portgroups: []
managerhype:
chain:
DOCKER-USER:
hostgroups:
managerhype:
portgroups:
- kibana
- redis
- influxdb
- elasticsearch_rest
- elasticsearch_node
- docker_registry
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- localrules
- sensoroni
fleet:
portgroups:
- elasticsearch_rest
- docker_registry
- influxdb
- sensoroni
- yum
- beats_5044
- beats_5644
- beats_5056
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
idh:
portgroups:
- docker_registry
- influxdb
- sensoroni
- yum
- beats_5044
- beats_5644
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
sensor:
portgroups:
- beats_5044
- beats_5644
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- yum
- docker_registry
- influxdb
- sensoroni
searchnode:
portgroups:
- redis
- elasticsearch_rest
- elasticsearch_node
- beats_5644
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
heavynode:
portgroups:
- redis
- elasticsearch_rest
- elasticsearch_node
- beats_5644
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
receiver:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
analyst:
portgroups:
- nginx
beats_endpoint:
portgroups:
- beats_5044
beats_endpoint_ssl:
portgroups:
- beats_5644
elasticsearch_rest:
portgroups:
- elasticsearch_rest
elastic_agent_endpoint:
portgroups:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
endgame:
portgroups:
- endgame
external_suricata:
portgroups:
- external_suricata
desktop:
portgroups:
- docker_registry
- influxdb
- sensoroni
- yum
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
portgroups: []
customhostgroup2:
portgroups: []
customhostgroup3:
portgroups: []
customhostgroup4:
portgroups: []
customhostgroup5:
portgroups: []
customhostgroup6:
portgroups: []
customhostgroup7:
portgroups: []
customhostgroup8:
portgroups: []
customhostgroup9:
portgroups: []
INPUT:
hostgroups:
anywhere:
portgroups:
- ssh
dockernet:
portgroups:
- all
fleet:
portgroups:
- salt_manager
idh:
portgroups:
- salt_manager
localhost:
portgroups:
- all
sensor:
portgroups:
- salt_manager
searchnode:
portgroups:
- salt_manager
heavynode:
portgroups:
- salt_manager
receiver:
portgroups:
- salt_manager
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
@@ -668,6 +898,8 @@ firewall:
external_suricata:
portgroups:
- external_suricata
external_kafka:
portgroups: []
desktop:
portgroups:
- docker_registry
@@ -677,6 +909,15 @@ firewall:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
@@ -729,6 +970,9 @@ firewall:
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
@@ -867,6 +1111,8 @@ firewall:
external_suricata:
portgroups:
- external_suricata
external_kafka:
portgroups: []
strelka_frontend:
portgroups:
- strelka_frontend
@@ -879,6 +1125,15 @@ firewall:
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
hypervisor:
portgroups:
- yum
- docker_registry
- influxdb
- elastic_agent_control
- elastic_agent_data
- elastic_agent_update
- sensoroni
customhostgroup0:
portgroups: []
customhostgroup1:
@@ -934,6 +1189,9 @@ firewall:
desktop:
portgroups:
- salt_manager
hypervisor:
portgroups:
- salt_manager
self:
portgroups:
- syslog
@@ -972,6 +1230,10 @@ firewall:
portgroups:
- elasticsearch_node
- elasticsearch_rest
managerhype:
portgroups:
- elasticsearch_node
- elasticsearch_rest
standalone:
portgroups:
- elasticsearch_node
@@ -1119,6 +1381,10 @@ firewall:
portgroups:
- elasticsearch_node
- elasticsearch_rest
managerhype:
portgroups:
- elasticsearch_node
- elasticsearch_rest
standalone:
portgroups:
- elasticsearch_node
@@ -1321,6 +1587,9 @@ firewall:
portgroups:
- redis
- elastic_agent_data
managerhype:
portgroups:
- elastic_agent_data
self:
portgroups:
- redis
@@ -1337,6 +1606,8 @@ firewall:
endgame:
portgroups:
- endgame
external_kafka:
portgroups: []
receiver:
portgroups: []
customhostgroup0:
@@ -1436,6 +1707,9 @@ firewall:
managersearch:
portgroups:
- openssh
managerhype:
portgroups:
- openssh
standalone:
portgroups:
- openssh
@@ -1459,3 +1733,66 @@ firewall:
portgroups: []
customhostgroup9:
portgroups: []
hypervisor:
chain:
DOCKER-USER:
hostgroups:
customhostgroup0:
portgroups: []
customhostgroup1:
portgroups: []
customhostgroup2:
portgroups: []
customhostgroup3:
portgroups: []
customhostgroup4:
portgroups: []
customhostgroup5:
portgroups: []
customhostgroup6:
portgroups: []
customhostgroup7:
portgroups: []
customhostgroup8:
portgroups: []
customhostgroup9:
portgroups: []
INPUT:
hostgroups:
anywhere:
portgroups:
- ssh
dockernet:
portgroups:
- all
localhost:
portgroups:
- all
manager:
portgroups: []
managersearch:
portgroups: []
managerhype:
portgroups: []
standalone:
portgroups: []
customhostgroup0:
portgroups: []
customhostgroup1:
portgroups: []
customhostgroup2:
portgroups: []
customhostgroup3:
portgroups: []
customhostgroup4:
portgroups: []
customhostgroup5:
portgroups: []
customhostgroup6:
portgroups: []
customhostgroup7:
portgroups: []
customhostgroup8:
portgroups: []
customhostgroup9:
portgroups: []
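The new `hypervisor` role defaults above follow the same shape as every other role: each chain maps hostgroups to a list of allowed portgroups. As an illustration of how that structure reads, here is a hypothetical helper (not part of Security Onion, which renders iptables rules from this data instead) that flattens a role's chains into tuples:

```python
# Illustrative only: flatten a role's firewall defaults
# (chain -> hostgroup -> portgroups) into (chain, hostgroup, portgroup)
# tuples. The dict mirrors a slice of the hypervisor INPUT chain above.
hypervisor = {
    "INPUT": {
        "anywhere": ["ssh"],
        "dockernet": ["all"],
        "localhost": ["all"],
        "manager": [],
    },
}

def flatten(role_chains):
    rules = []
    for chain, hostgroups in role_chains.items():
        for hostgroup, portgroups in hostgroups.items():
            for portgroup in portgroups:
                rules.append((chain, hostgroup, portgroup))
    return rules

print(flatten(hypervisor))
# [('INPUT', 'anywhere', 'ssh'), ('INPUT', 'dockernet', 'all'), ('INPUT', 'localhost', 'all')]
```

Hostgroups with an empty `portgroups: []` list (the `customhostgroup` placeholders, `manager`, etc.) contribute no rules until an administrator assigns portgroups to them.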

@@ -91,6 +91,10 @@ COMMIT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -p icmp -j ACCEPT
-A INPUT -j LOGGING
{% if GLOBALS.role in ['so-hypervisor', 'so-managerhype'] -%}
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br0 -o br0 -j ACCEPT
{%- endif %}
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o sobridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
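The `{% if %}` guard above emits the bridged FORWARD rules only on hypervisor-class roles, ahead of the Docker chains. A plain-Python stand-in for that conditional (rule strings taken verbatim from the template; the function itself is a hypothetical sketch, not Security Onion code):

```python
# Sketch of the Jinja role guard: bridged FORWARD rules are emitted only
# for hypervisor-class roles, before the jump to DOCKER-USER.
HYPERVISOR_ROLES = {"so-hypervisor", "so-managerhype"}

def forward_rules(role):
    rules = []
    if role in HYPERVISOR_ROLES:
        # VM bridge traffic on br0 is allowed to forward on hypervisors.
        rules.append("-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT")
        rules.append("-A FORWARD -i br0 -o br0 -j ACCEPT")
    rules.append("-A FORWARD -j DOCKER-USER")
    return rules

assert forward_rules("so-sensor") == ["-A FORWARD -j DOCKER-USER"]
assert len(forward_rules("so-managerhype")) == 3
```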

@@ -21,25 +21,38 @@
{# Only add Kafka firewall items when Kafka enabled #}
{% set role = GLOBALS.role.split('-')[1] %}
{% if GLOBALS.pipeline == 'KAFKA' %}
{% set KAFKA_EXTERNAL_ACCESS = salt['pillar.get']('kafka:config:external_access:enabled', default=False) %}
{% set kafka_node_type = salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname + ':role') %}
{% if role == 'receiver' %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.self.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.standalone.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.manager.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.managersearch.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if role.startswith('manager') or role == 'standalone' %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[role].portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if role.startswith('manager') or role in ['standalone', 'receiver'] %}
{% for r in ['manager', 'managersearch', 'managerhype', 'standalone', 'receiver', 'fleet', 'idh', 'sensor', 'searchnode', 'heavynode', 'elastic_agent_endpoint', 'desktop'] %}
{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('kafka_data') %}
{% endif %}
{% endfor %}
{% endif %}
{% if KAFKA_EXTERNAL_ACCESS %}
{# Kafka external access only applies for Kafka nodes with the broker role. #}
{% if (role.startswith('manager') or role in ['standalone', 'receiver']) and 'broker' in kafka_node_type %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.external_kafka.portgroups.append('kafka_external_access') %}
{% endif %}
{% endif %}
{% endif %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}
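The final line layers pillar overrides on top of the defaults via `merge=True`. A minimal sketch of what that recursive merge implies (this is not Salt's implementation, and the `beats_5044` portgroup below is a made-up override for illustration):

```python
# Hypothetical sketch of pillar.get(..., merge=True) semantics: pillar
# data is merged recursively onto FIREWALL_DEFAULT, with scalar and list
# leaves replaced outright rather than concatenated.
def merge(defaults, overrides):
    out = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # descend into nested dicts
        else:
            out[key] = value  # leaf: override wins
    return out

defaults = {"role": {"receiver": {"chain": {"DOCKER-USER": {
    "hostgroups": {"self": {"portgroups": ["kafka_controller"]}}}}}}}
pillar = {"role": {"receiver": {"chain": {"DOCKER-USER": {
    "hostgroups": {"self": {"portgroups": ["kafka_controller", "beats_5044"]}}}}}}}
merged = merge(defaults, pillar)
```

This is why the template appends portgroups to `FIREWALL_DEFAULT` before the merge: anything added there survives unless a pillar key explicitly overrides that same leaf.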
