Compare commits


367 Commits

Author SHA1 Message Date
reyesj2
55b3fa389e no dates 2026-01-23 16:33:22 -06:00
reyesj2
b3ae716929 ignore kratos file mapping error 2026-01-23 16:31:30 -06:00
Jorge Reyes
30d8cf5a6c Merge pull request #15412 from Security-Onion-Solutions/reyesj2-patch-9
missing updates to variables
2026-01-22 17:01:53 -06:00
Jorge Reyes
07dbdb9f8f Merge pull request #15411 from Security-Onion-Solutions/reyesj2-patch-10
add retries to so-resources repo pull
2026-01-22 17:01:35 -06:00
reyesj2
b4c8f7924a missing updates to variables 2026-01-22 16:49:20 -06:00
reyesj2
809422c517 add retries to so-resources repo pull 2026-01-22 16:39:19 -06:00
Jorge Reyes
bb7593a53a Merge pull request #15410 from Security-Onion-Solutions/reyesj2-patch-9
fix auto soup - check for compatible versions and fallback to a known…
2026-01-22 16:36:40 -06:00
reyesj2
8e3ba8900f fix auto soup - check for compatible versions and fallback to a known good value as needed 2026-01-22 16:12:21 -06:00
Jorge Reyes
005ec87248 Merge pull request #15408 from Security-Onion-Solutions/reyesj2-patch-7
fix kafka state
2026-01-21 12:58:58 -06:00
reyesj2
4c6ff0641b fix kafka state 2026-01-21 12:47:58 -06:00
Jorge Reyes
3e242913e9 Merge pull request #15407 from Security-Onion-Solutions/reyesj2-patch-6
more better
2026-01-20 15:31:44 -06:00
reyesj2
ba68e3c9bd more better 2026-01-20 15:30:19 -06:00
Josh Patterson
e1199a91b9 Merge pull request #15406 from Security-Onion-Solutions/bravo
fix include
2026-01-20 16:29:49 -05:00
Josh Patterson
d381248e30 fix include 2026-01-20 16:27:37 -05:00
Jorge Reyes
f4f0218cae Merge pull request #15404 from Security-Onion-Solutions/reyesj2-patch-6
reinstall agent on grid nodes when service wasn't cleanly removed. eg…
2026-01-20 13:34:55 -06:00
Josh Patterson
7a38e52b01 Merge pull request #15405 from Security-Onion-Solutions/bravo
create dir if nonexistent
2026-01-20 14:34:16 -05:00
Josh Patterson
959fd55e32 create dir if nonexistent 2026-01-20 14:30:11 -05:00
reyesj2
a8e218a9ff reinstall agent on grid nodes when service wasn't cleanly removed. eg. manually deleting /opt/Elastic/Agent/ 2026-01-20 12:37:06 -06:00
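
The fix above targets a half-removed agent: the service definition survives while /opt/Elastic/Agent/ is gone. A minimal Python sketch of that detection, with an assumed systemd unit name (the real logic lives in the Salt states, not here):

```python
import os
import subprocess

AGENT_DIR = "/opt/Elastic/Agent/"      # path named in the commit message
AGENT_UNIT = "elastic-agent.service"   # assumed unit name, for illustration

def service_present(unit: str) -> bool:
    # `systemctl cat` exits non-zero when no unit file exists for the name.
    return subprocess.run(
        ["systemctl", "cat", unit],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def needs_reinstall() -> bool:
    # A surviving unit without its install directory means the earlier
    # removal did not complete cleanly, so the agent should be reinstalled.
    return service_present(AGENT_UNIT) and not os.path.isdir(AGENT_DIR)
```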
Josh Patterson
3f5cd46d7d Merge pull request #15402 from Security-Onion-Solutions/bravo
allow logstash.ssl for eval and import. fix soup create_ca_pillar
2026-01-20 12:08:45 -05:00
Josh Patterson
627f0c2bcc allow logstash.ssl state for so-import 2026-01-20 11:58:31 -05:00
Josh Patterson
f6bde3eb04 remove double logging 2026-01-20 11:56:31 -05:00
Josh Patterson
f6e95c17a0 need to create_ca_pillar for 210 not 220 2026-01-20 11:55:57 -05:00
Josh Patterson
1234cbd04b allow logstash.ssl on so-eval 2026-01-20 09:30:32 -05:00
Josh Patterson
fd5b93542e Merge pull request #15400 from Security-Onion-Solutions/bravo
break out ssl state
2026-01-19 17:21:07 -05:00
Josh Patterson
a192455fae Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-19 17:17:58 -05:00
Josh Patterson
66f17e95aa Merge pull request #15397 from Security-Onion-Solutions/fstes
Fstes
2026-01-16 18:38:06 -05:00
Josh Patterson
6f4b96b61b removing time logging changes 2026-01-16 18:31:45 -05:00
Josh Patterson
9905d23976 inform which state is being applied 2026-01-16 18:27:24 -05:00
Josh Patterson
17532fe49d run a final highstate on managers prior to verify 2026-01-16 17:42:58 -05:00
Josh Patterson
074158b495 discard so-elasticsearch-templates-load running again during setup 2026-01-16 17:42:00 -05:00
Josh Patterson
82d5115b3f rerun so-elasticsearch-templates-load during setup 2026-01-16 16:43:10 -05:00
Josh Patterson
5c63111002 add timing to scripts to allow for debugging delays 2026-01-16 16:42:24 -05:00
Jorge Reyes
6eda7932e8 Merge pull request #15394 from Security-Onion-Solutions/reyesj2/elastic9-filestream
remove usage of deprecated 'logs' integration in favor of 'filestream'
2026-01-16 13:19:15 -06:00
Jorge Reyes
399b7567dd Merge pull request #15393 from Security-Onion-Solutions/reyesj2/esretries
add additional retries within scripts before salt re-runs the entire …
2026-01-16 13:11:47 -06:00
reyesj2
2133ada3a1 add additional retries within scripts before salt re-runs the entire script 2026-01-16 13:09:08 -06:00
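
A minimal sketch of that retry idea: retrying a flaky call a few times inside the script is much cheaper than letting Salt fail the state and re-run the entire script. Names and limits here are illustrative, not the actual script's:

```python
import time

def retry(fn, attempts=3, delay=5, exceptions=(Exception,)):
    """Call fn(); on failure, wait `delay` seconds and try again."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except exceptions as e:
            if attempt == attempts:
                raise  # out of attempts, surface the error to the caller
            print(f"attempt {attempt} failed ({e}); retrying in {delay}s")
            time.sleep(delay)

# Usage: retry(lambda: run_es_query(...), attempts=5, delay=10)
```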
Jorge Reyes
4f6d4738c4 Merge pull request #15391 from Security-Onion-Solutions/reyesj2-patch-3
follow symlinks for docker cp
2026-01-15 15:26:48 -06:00
reyesj2
d430ed6727 false positive 2026-01-15 15:25:28 -06:00
reyesj2
596bc178df ensure docker cp command follows container symlinks 2026-01-15 15:18:18 -06:00
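
For context: docker cp copies a symlink itself unless told to dereference it; the -L (--follow-link) flag makes it copy the link's target instead. A hedged sketch of the fixed call (container name and paths are made up):

```python
import subprocess

def copy_from_container(container: str, src: str, dest: str) -> None:
    # -L / --follow-link: always follow symlinks in the source path.
    subprocess.run(["docker", "cp", "-L", f"{container}:{src}", dest], check=True)

# copy_from_container("so-elasticsearch", "/usr/share/elasticsearch/config/certs", "/tmp/certs")
```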
reyesj2
0cd3d7b5a8 deprecated kibana config 2026-01-15 15:17:22 -06:00
reyesj2
349d77ffdf exclude kafka restart error 2026-01-15 14:43:57 -06:00
Josh Patterson
c3283b04e5 Merge pull request #15390 from Security-Onion-Solutions/fixmerge201210
Fixmerge201210
2026-01-15 15:11:00 -05:00
Josh Patterson
0da0788e6b move function to be with the rest of its friends 2026-01-15 14:56:36 -05:00
Jason Ertel
6f7e249aa2 Merge pull request #15389 from Security-Onion-Solutions/jertel/wip
Add version 2.4.201 to discussion template
2026-01-15 14:56:25 -05:00
Josh Patterson
dfaeed54b6 Merge remote-tracking branch 'origin/2.4/main' into fixmerge201210 2026-01-15 14:44:33 -05:00
Jason Ertel
4f59e46235 Add version 2.4.201 to discussion template 2026-01-15 14:38:40 -05:00
Mike Reeves
bf4cc7befb Merge pull request #15386 from Security-Onion-Solutions/patch/2.4.201
2.4.201
2026-01-15 14:21:38 -05:00
Mike Reeves
c63c6dc68b Merge pull request #15385 from Security-Onion-Solutions/2.4.201
2.4.201
2026-01-15 10:45:05 -05:00
Mike Reeves
e4225d6e9b 2.4.201 2026-01-15 10:40:21 -05:00
Mike Reeves
3fb153c43e Add support for version 2.4.201 upgrades 2026-01-13 16:41:39 -05:00
Mike Reeves
6de20c63d4 Update VERSION 2026-01-13 16:20:57 -05:00
Josh Patterson
00fbc1c259 add back individual signing policies 2026-01-12 09:25:15 -05:00
Josh Patterson
3bc552ef38 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-08 17:15:48 -05:00
Josh Patterson
ee70d94e15 remove old key/crt used for telegraf on non managers 2026-01-08 17:15:35 -05:00
Josh Patterson
1887d2c0e9 update heavynode pattern 2026-01-08 17:15:00 -05:00
Matthew Wright
c99dd4e44f Merge pull request #15367 from Security-Onion-Solutions/mwright/assistant-case-reports 2026-01-08 15:33:53 -05:00
Jorge Reyes
541b8b288d Merge pull request #15363 from Security-Onion-Solutions/reyesj2/elastic9-autosoup
ES 9.0.8
2026-01-08 14:19:19 -06:00
Matthew Wright
db168a0452 update case report for attached ai sessions 2026-01-08 13:59:51 -05:00
reyesj2
aa96cf44d4 increase timeout command timeouts to account for time taken by salt minions to return data.
add note informing user that a previously required ES upgrade was detected and is being verified before soup continues
2026-01-07 19:26:46 -06:00
reyesj2
0d59c35d2a phrasing/typo 2026-01-07 19:20:27 -06:00
reyesj2
8463bde90d dont capture stderr from salt command failure 'ERROR: Minions returned with non-zero exit code' 2026-01-07 19:19:26 -06:00
reyesj2
150c31009e make sure so-elasticsearch-query exits non-zero on failure 2026-01-07 19:18:20 -06:00
Josh Patterson
693494024d block redirected to setup_log already, prevent double logging on these lines 2026-01-07 16:58:44 -05:00
reyesj2
ee66d6c7d1 Merge branch 'reyesj2/elastic9-autosoup' of github.com:Security-Onion-Solutions/securityonion into reyesj2/elastic9-autosoup 2026-01-07 14:50:21 -06:00
reyesj2
3effd30f7e unused var 2026-01-07 14:49:19 -06:00
Josh Patterson
4ab20c2454 dont remove ca in ssl.remove 2026-01-07 14:14:57 -05:00
Jorge Reyes
c075b5a1a7 Merge branch '2.4/dev' into reyesj2/elastic9-autosoup 2026-01-07 10:33:25 -06:00
reyesj2
cb1e59fa49 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/elastic9-autosoup 2026-01-07 10:30:45 -06:00
reyesj2
588aa435ec update version 2026-01-07 10:21:36 -06:00
reyesj2
752c764066 autosoup preserve branch setting if set originally 2026-01-07 10:03:46 -06:00
reyesj2
af604c2ea8 autosoup functionality for non-airgap 2026-01-07 09:45:26 -06:00
Josh Patterson
6c3f9f149d create ca pillar during soup 2026-01-07 10:17:06 -05:00
Josh Patterson
152f2e03f1 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-06 15:15:30 -05:00
Matthew Wright
605797c86a Merge pull request #15355 from Security-Onion-Solutions/mwright/session-reports
Assistant: Session Report Template
2026-01-06 13:58:18 -05:00
Jason Ertel
1ee5b1611a Merge pull request #15359 from Security-Onion-Solutions/jertel/wip
suppress config diffs to avoid false positive errors
2026-01-06 12:52:59 -05:00
Jason Ertel
5028729e4c suppress config diffs to avoid false positive errors 2026-01-06 12:50:18 -05:00
Jason Ertel
ab00fa8809 Merge pull request #15358 from Security-Onion-Solutions/jertel/wip
exempt kratos online check
2026-01-06 09:50:03 -05:00
Jason Ertel
2d705e7caa exempt kratos online check 2026-01-06 09:47:35 -05:00
Josh Patterson
f2370043a8 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2026-01-06 09:12:00 -05:00
Jorge Reyes
3b349b9803 Merge pull request #15353 from Security-Onion-Solutions/reyesj2/kratos
update kratos index template
2026-01-05 14:56:08 -06:00
reyesj2
f2b7ffe0eb align with ECS fieldnames 2026-01-05 14:48:10 -06:00
Matthew Wright
3a410eed1a assistant session reports 2026-01-05 14:45:02 -05:00
reyesj2
a53619f10f update kratos index template 2026-01-05 12:22:01 -06:00
reyesj2
893aaafa1b foxtrot 2025-12-29 15:54:06 -06:00
reyesj2
33c34cdeca Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/elastic9-autosoup 2025-12-29 15:49:49 -06:00
reyesj2
9b411867df update version 2025-12-29 10:27:38 -06:00
Jason Ertel
fd1596b3a0 Merge pull request #15347 from Security-Onion-Solutions/jertel/wip
expose login form lifespan in config scr
2025-12-24 15:09:36 -05:00
Jason Ertel
b05de22f58 expose login form lifespan in config scr 2025-12-24 14:39:55 -05:00
reyesj2
e9341ee8d3 remove usage of deprecated 'logs' integration in favor of 'filestream' 2025-12-24 10:40:23 -06:00
reyesj2
f666ad600f accept same version 'upgrades' 2025-12-23 16:27:22 -06:00
reyesj2
9345718967 verify pre-soup ES version is directly upgradable to post-soup ES version. 2025-12-19 16:15:05 -06:00
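
A sketch of that compatibility gate. The rule encoded here, same-major upgrades always allowed and a one-major jump only from a late enough minor, is a simplification for illustration; the real cutoffs come from Elastic's documented upgrade paths:

```python
def parse(v: str) -> tuple[int, int, int]:
    major, minor, patch = (int(x) for x in v.split("."))
    return major, minor, patch

def directly_upgradable(pre: str, post: str, min_minor_for_jump: int = 17) -> bool:
    pre_major, pre_minor, _ = parse(pre)
    post_major, _, _ = parse(post)
    if post_major == pre_major:
        return True   # same-major upgrades, including same-version "upgrades"
    if post_major == pre_major + 1:
        # a major jump must start from a late minor of the previous major
        return pre_minor >= min_minor_for_jump
    return False      # never skip a major

assert directly_upgradable("8.18.2", "9.0.8")
assert not directly_upgradable("8.14.3", "9.0.8")
```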
reyesj2
6c879cbd13 soup changes 2025-12-17 19:08:21 -06:00
reyesj2
089b5aaf44 Merge branch 'reyesj2/elastic9' of github.com:Security-Onion-Solutions/securityonion into reyesj2/elastic9 2025-12-17 16:03:18 -06:00
reyesj2
b61885add5 Fix Kafka output policy - singular topic key 2025-12-17 16:03:12 -06:00
Josh Patterson
702ba2e0a4 only allow ca.remove state to run if so-setup is running 2025-12-17 10:08:00 -05:00
Jorge Reyes
5cb1e284af Update VERSION 2025-12-17 06:54:32 -06:00
reyesj2
e3a4f0873e update expected version for elastalert state 2025-12-17 06:53:08 -06:00
reyesj2
7977a020ac elasticsearch 9.0.8 2025-12-16 16:03:47 -06:00
coreyogburn
1d63269883 Merge pull request #15323 from Security-Onion-Solutions/cogburn/non-advanced-apiurl
Un-Advanced Assistant ApiUrl
2025-12-16 12:08:14 -07:00
Corey Ogburn
dd8027480b Un-Advanced Assistant ApiUrl 2025-12-16 12:02:01 -07:00
Mike Reeves
c45bd77e44 Merge pull request #15320 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2025-12-16 11:25:35 -05:00
Mike Reeves
032e0abd61 Update 2-4.yml 2025-12-16 11:23:53 -05:00
Mike Reeves
8509d1e454 Update VERSION 2025-12-16 11:23:12 -05:00
Mike Reeves
8ff0c6828b Merge pull request #15319 from Security-Onion-Solutions/2.4/dev
2.4.200
2025-12-16 11:10:30 -05:00
Mike Reeves
ddd6935e50 Merge pull request #15318 from Security-Onion-Solutions/2.4.200
2.4.200
2025-12-16 09:15:32 -05:00
Mike Reeves
5588a56b24 2.4.200 2025-12-16 09:07:29 -05:00
Mike Reeves
12aed6e280 Merge pull request #15311 from Security-Onion-Solutions/TOoSmOotH-patch-5
Update so-minion
2025-12-15 12:07:37 -05:00
Mike Reeves
b2a469e08c Update so-minion 2025-12-15 11:56:23 -05:00
Jason Ertel
285b0e4af9 Merge pull request #15308 from Security-Onion-Solutions/idstools-refactor
Add trailing nl if it doesnt already exist
2025-12-14 15:35:24 -05:00
DefensiveDepth
f9edfd6391 Add trailing nl if it doesnt already exist 2025-12-14 12:03:44 -05:00
Josh Patterson
c0845e1612 restart docker if ca changes. cleanup dirs at key/crt location 2025-12-12 22:19:59 -05:00
Josh Patterson
9878d9d37e handle steno ca certs directory properly 2025-12-12 19:07:00 -05:00
Josh Patterson
a2196085d5 import allowed_states 2025-12-12 18:50:37 -05:00
Josh Patterson
ba62a8c10c need to restart docker service if ca changes 2025-12-12 18:50:22 -05:00
Josh Patterson
38f38e2789 fix allowed states for ca 2025-12-12 18:23:29 -05:00
Josh Patterson
1475f0fc2f timestamp logging for wait_for_salt_minion 2025-12-12 16:30:42 -05:00
Josh Patterson
a3396b77a3 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-12 15:25:09 -05:00
Josh Patterson
8158fee8fc change how we determine if the salt-minion is ready 2025-12-12 15:24:47 -05:00
Josh Patterson
f6301bc3e5 Merge pull request #15304 from Security-Onion-Solutions/ggjorge
fix cleaning repos on remote nodes if airgap
2025-12-12 14:22:21 -05:00
Josh Patterson
6c5c176b7d fix cleaning repos on remote nodes if airgap 2025-12-12 14:18:54 -05:00
Josh Brower
c6d52b5eb1 Merge pull request #15303 from Security-Onion-Solutions/idstools-refactor
Add Airgap check
2025-12-12 09:59:19 -05:00
DefensiveDepth
7cac528389 Add Airgap check 2025-12-12 09:52:01 -05:00
reyesj2
d518f75468 update deprecated config items 2025-12-11 20:07:06 -06:00
Josh Patterson
c6fac8c36b need makedirs 2025-12-11 18:37:01 -05:00
Josh Patterson
17b5b81696 dont have py3 yaml module installed yet so do it like this 2025-12-11 18:04:02 -05:00
Josh Patterson
9960db200c Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-11 17:30:43 -05:00
Josh Patterson
b9ff1704b0 the great ssl refactor 2025-12-11 17:30:06 -05:00
Josh Brower
6fe817ca4a Merge pull request #15301 from Security-Onion-Solutions/idstools-refactor
Rework backup
2025-12-11 13:57:25 -05:00
DefensiveDepth
cb9a6fac25 Update tests for rework 2025-12-11 12:14:37 -05:00
DefensiveDepth
a945768251 Refactor backup 2025-12-11 11:15:30 -05:00
Mike Reeves
c6646e3821 Merge pull request #15289 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update Assistant Models
2025-12-10 17:22:13 -05:00
Mike Reeves
99dc72cece Merge branch '2.4/dev' into TOoSmOotH-patch-3 2025-12-10 17:19:32 -05:00
Josh Brower
04d6cca204 Merge pull request #15298 from Security-Onion-Solutions/idstools-refactor
Fixup logic
2025-12-10 17:18:59 -05:00
DefensiveDepth
5ab6bda639 Fixup logic 2025-12-10 17:16:35 -05:00
Josh Brower
f433de7e12 Merge pull request #15297 from Security-Onion-Solutions/idstools-refactor
small fixes
2025-12-10 15:23:12 -05:00
DefensiveDepth
8ef6c2f91d small fixes 2025-12-10 15:19:44 -05:00
Mike Reeves
7575218697 Merge pull request #15293 from Security-Onion-Solutions/TOoSmOotH-patch-4
Remove Claude Sonnet 4 model configuration
2025-12-09 11:04:38 -05:00
Mike Reeves
dc945dad00 Remove Claude Sonnet 4 model configuration
Removed configuration for Claude Sonnet 4 model.
2025-12-09 11:00:53 -05:00
Josh Brower
ddcd74ffd2 Merge pull request #15292 from Security-Onion-Solutions/idstools-refactor
Fix custom name
2025-12-09 10:12:41 -05:00
DefensiveDepth
e105bd12e6 Fix custom name 2025-12-09 09:49:27 -05:00
Josh Brower
f5688175b6 Merge pull request #15290 from Security-Onion-Solutions/idstools-refactor
match correct custom ruleset name
2025-12-08 18:25:46 -05:00
DefensiveDepth
72a4ba405f match correct custom ruleset name 2025-12-08 16:45:40 -05:00
Mike Reeves
94694d394e Add origin field to model training configuration 2025-12-08 16:36:09 -05:00
Mike Reeves
03dd746601 Add origin field to model configurations 2025-12-08 16:34:19 -05:00
Mike Reeves
eec3373ae7 Update display name for Claude Sonnet 4 2025-12-08 16:30:50 -05:00
Mike Reeves
db45ce07ed Modify model display names and remove GPT-OSS 120B
Updated display names for models and removed GPT-OSS 120B.
2025-12-08 16:26:45 -05:00
Josh Brower
ba49765312 Merge pull request #15287 from Security-Onion-Solutions/idstools-refactor
Rework ordering
2025-12-08 12:42:48 -05:00
DefensiveDepth
72c8c2371e Rework ordering 2025-12-08 12:39:30 -05:00
Josh Brower
80411ab6cf Merge pull request #15286 from Security-Onion-Solutions/idstools-refactor
be more verbose
2025-12-08 10:31:39 -05:00
DefensiveDepth
0ff8fa57e7 be more verbose 2025-12-08 10:29:24 -05:00
Josh Brower
411f28a049 Merge pull request #15284 from Security-Onion-Solutions/idstools-refactor
Make sure local salt dir is created
2025-12-07 17:49:56 -05:00
DefensiveDepth
0f42233092 Make sure local salt dir is created 2025-12-07 16:13:55 -05:00
Josh Brower
2dd49f6d9b Merge pull request #15283 from Security-Onion-Solutions/idstools-refactor
Fixup Airgap
2025-12-06 16:06:57 -05:00
DefensiveDepth
271f545f4f Fixup Airgap 2025-12-06 15:26:44 -05:00
Josh Brower
c4a70b540e Merge pull request #15232 from Security-Onion-Solutions/idstools-refactor
Idstools refactor
2025-12-05 12:58:10 -05:00
DefensiveDepth
bef85772e3 Merge branch 'idstools-refactor' of https://github.com/Security-Onion-Solutions/securityonion into idstools-refactor 2025-12-05 12:17:06 -05:00
DefensiveDepth
a6b19c4a6c Remove idstools config from manager pillar file 2025-12-05 12:13:05 -05:00
Josh Brower
44f5e6659b Merge branch '2.4/dev' into idstools-refactor 2025-12-05 10:30:54 -05:00
DefensiveDepth
3f9a9b7019 tweak threshold 2025-12-05 10:23:24 -05:00
DefensiveDepth
b7ad985c7a Add cron.abset 2025-12-05 09:48:46 -05:00
Josh Brower
dba087ae25 Update version from 2.4.0-delta to 2.4.200 2025-12-05 09:43:31 -05:00
Jorge Reyes
bbc4b1b502 Merge pull request #15241 from Security-Onion-Solutions/reyesj2/advilm
FEATURE: Advanced ILM actions via SOC UI
2025-12-04 14:43:12 -06:00
DefensiveDepth
9304513ce8 Add support for suricata rules load status 2025-12-04 12:26:13 -05:00
reyesj2
0b127582cb 2.4.200 soup changes 2025-12-03 20:49:25 -06:00
reyesj2
6e9b8791c8 Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into reyesj2/advilm 2025-12-03 20:27:13 -06:00
reyesj2
ef87ad77c3 Merge branch 'reyesj2/advilm' of github.com:Security-Onion-Solutions/securityonion into reyesj2/advilm 2025-12-03 20:23:03 -06:00
reyesj2
8477420911 logstash adv config state file 2025-12-03 20:10:06 -06:00
Jason Ertel
f5741e318f Merge pull request #15281 from Security-Onion-Solutions/jertel/wip
skip continue prompt if user cannot actually continue
2025-12-03 16:37:07 -05:00
Josh Patterson
545060103a Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-12-03 16:33:27 -05:00
Josh Patterson
e010b5680a Merge pull request #15280 from Security-Onion-Solutions/reservegid
reserve group ids
2025-12-03 16:24:12 -05:00
Josh Patterson
8620d3987e add saltgid 2025-12-03 15:04:28 -05:00
Jason Ertel
30487a54c1 skip continue prompt if user cannot actually continue 2025-12-03 11:52:10 -05:00
DefensiveDepth
f15a39c153 Add historical hashes 2025-12-03 11:24:04 -05:00
Josh Patterson
aed27fa111 reserve group ids 2025-12-03 11:19:46 -05:00
Josh Brower
822c411e83 Update version to 2.4.0-delta 2025-12-02 21:24:24 -05:00
DefensiveDepth
41b3ac7554 Backup salt master config 2025-12-02 19:58:56 -05:00
DefensiveDepth
23575fdf6c edit actual file 2025-12-02 19:19:57 -05:00
DefensiveDepth
52f70dc49a Cleanup idstools 2025-12-02 17:40:30 -05:00
DefensiveDepth
79c9749ff7 Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-12-02 17:40:04 -05:00
Jorge Reyes
8d2701e143 Merge branch '2.4/dev' into reyesj2/advilm 2025-12-02 15:42:15 -06:00
reyesj2
877444ac29 cert update is a forced update 2025-12-02 15:16:59 -06:00
reyesj2
b0d9426f1b automated cert update for kafka fleet output policy 2025-12-02 15:11:00 -06:00
reyesj2
18accae47e annotation typo 2025-12-02 15:10:29 -06:00
Josh Patterson
55e3a2c6b6 Merge pull request #15277 from Security-Onion-Solutions/soyamllistremove
need additional line bw class
2025-12-02 15:09:47 -05:00
Josh Patterson
ef092e2893 rename to removelistitem 2025-12-02 15:01:32 -05:00
Josh Patterson
89eb95c077 add removefromlist 2025-12-02 14:46:24 -05:00
Josh Patterson
e871ec358e need additional line bw class 2025-12-02 14:43:33 -05:00
Josh Patterson
271a2f74ad Merge pull request #15275 from Security-Onion-Solutions/soyamllistremove
add new so-yaml_test for removefromlist
2025-12-02 14:34:09 -05:00
Josh Patterson
d6bd951c37 add new so-yaml_test for removefromlist 2025-12-02 14:31:57 -05:00
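
so-yaml is the Python helper these commits extend. A hedged sketch of what a removelistitem-style operation does with PyYAML; this is an illustration, not the tool's actual code, and the key path in the usage comment is invented:

```python
import yaml  # PyYAML

def remove_list_item(path: str, dotted_key: str, item) -> None:
    with open(path) as f:
        doc = yaml.safe_load(f) or {}
    node = doc
    keys = dotted_key.split(".")
    for key in keys[:-1]:
        node = node[key]            # walk down to the parent mapping
    target = node[keys[-1]]
    if isinstance(target, list) and item in target:
        target.remove(item)         # drop the matching entry
    with open(path, "w") as f:
        yaml.safe_dump(doc, f, default_flow_style=False)

# remove_list_item("example.yaml", "global.mirrors", "old-mirror.example.com")
```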
DefensiveDepth
8abd4c9c78 Remove idstools files 2025-12-02 12:42:15 -05:00
reyesj2
45a8c0acd1 merge 2.4/dev 2025-12-02 11:16:08 -06:00
DefensiveDepth
c372cd533d Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-12-01 16:10:22 -05:00
DefensiveDepth
999f83ce57 Create dir earlier 2025-12-01 14:21:58 -05:00
Jorge Reyes
6fbed2dd9f Merge pull request #15264 from Security-Onion-Solutions/reyesj2-patch-2
add force & certs flag to update fleet certs as needed
2025-12-01 11:11:25 -06:00
Josh Patterson
36a6a59d55 renew certs 7 days before expire 2025-12-01 11:54:10 -05:00
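
The renewal rule itself is simple: reissue once fewer than 7 days of validity remain, which the Salt x509 states express as days_remaining: 7. An equivalent standalone check, using the cryptography package (42 or newer for not_valid_after_utc):

```python
from datetime import datetime, timezone

from cryptography import x509

def needs_renewal(pem_path: str, days_remaining: int = 7) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
    return remaining.days < days_remaining  # renew inside the 7-day window

# needs_renewal("/etc/pki/ca.crt")
```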
Mike Reeves
875de88cb4 Merge pull request #15271 from Security-Onion-Solutions/TOoSmOotH-patch-2
Add JA4D option to config.zeek.ja4
2025-12-01 10:03:12 -05:00
Mike Reeves
63bb44886e Add JA4D option to config.zeek.ja4 2025-12-01 10:00:42 -05:00
DefensiveDepth
bda83a47a2 Remove header 2025-11-29 17:45:22 -05:00
DefensiveDepth
e96cfd35f7 Refactor for simplicity 2025-11-29 17:00:51 -05:00
DefensiveDepth
65c96b2edf Add error handling 2025-11-29 16:27:22 -05:00
DefensiveDepth
87477ae4f6 Removed unneeded bind 2025-11-29 15:40:10 -05:00
DefensiveDepth
89a9106d79 Add context 2025-11-29 15:17:28 -05:00
DefensiveDepth
1284150382 Move to manager init 2025-11-27 08:39:19 -05:00
reyesj2
edf3c9464f add --certs flag to update certs. Used with --force, to ensure certs are updated even if hosts update isn't needed 2025-11-25 16:16:19 -06:00
DefensiveDepth
4bb0a7c9d9 Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-11-25 13:52:21 -05:00
DefensiveDepth
ced3af818c Refactor for Airgap 2025-11-25 13:51:50 -05:00
reyesj2
cc8fb96047 valid config for number_of_replicas in allocate action includes 0 2025-11-24 11:12:09 -06:00
reyesj2
3339b50daf drop forcemerge when max_num_segments doesn't exist or is empty 2025-11-21 16:39:45 -06:00
reyesj2
415ea07a4f clean up 2025-11-21 16:04:26 -06:00
reyesj2
b80ec95fa8 update regex, revert to default will allow setting value back to '' | None 2025-11-21 14:41:03 -06:00
reyesj2
99cb51482f unneeded 'set' 2025-11-21 14:32:58 -06:00
reyesj2
90638f7a43 Merge branch 'reyesj2/advea' into reyesj2/advilm 2025-11-21 14:25:28 -06:00
reyesj2
1fb00c8eb6 update so-elastic-fleet-outputs-update to use advanced output options when set, else empty "". Also trigger update_logstash_outputs() when hash of config_yaml has changed 2025-11-21 14:22:42 -06:00
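
A sketch of that hash trigger: only push an update when a digest of the rendered output config differs from the digest recorded on the previous run. The state-file location and the update hook are illustrative:

```python
import hashlib
import os

STATE_FILE = "/tmp/logstash_output_config.sha256"  # illustrative location

def config_changed(config_yaml: str) -> bool:
    digest = hashlib.sha256(config_yaml.encode()).hexdigest()
    previous = None
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            previous = f.read().strip()
    if digest == previous:
        return False                  # unchanged: skip the expensive update
    with open(STATE_FILE, "w") as f:
        f.write(digest)               # remember the digest for the next run
    return True

# if config_changed(rendered_yaml): update_logstash_outputs()
```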
reyesj2
4490ea7635 format EA logstash output adv config items 2025-11-21 14:21:17 -06:00
reyesj2
bce7a20d8b soc configurable EA logstash output adv settings 2025-11-21 14:19:51 -06:00
Josh Patterson
9c06713f32 Merge pull request #15251 from Security-Onion-Solutions/bravo
use timestamp in volume path to prevent duplicates
2025-11-21 14:54:30 -05:00
Josh Patterson
23da0d4ba0 use timestamp in filename to prevent duplicates 2025-11-21 14:49:03 -05:00
Josh Patterson
d5f2cfb354 Merge pull request #15248 from Security-Onion-Solutions/bravo
clarify hypervisor annotation
2025-11-20 17:28:32 -05:00
Josh Patterson
fb5ad4193d indicate base image download start 2025-11-20 17:13:36 -05:00
Josh Patterson
1f5f283c06 update hypervisor annotation. preinit instead of initialized 2025-11-20 16:53:55 -05:00
Josh Patterson
cf048030c4 Merge pull request #15247 from Security-Onion-Solutions/bravo
Notify user of hypervisor environment setup failures
2025-11-20 16:04:49 -05:00
Josh Patterson
2d716b44a8 update comment 2025-11-20 15:52:21 -05:00
Jorge Reyes
d70d652310 Merge pull request #15244 from Security-Onion-Solutions/reyesj2/suricapfile
suricata capture file
2025-11-20 14:31:43 -06:00
reyesj2
c5db7c8752 suricata.capture_file keyword 2025-11-20 14:26:12 -06:00
reyesj2
6f42ff3442 suricata capture_file 2025-11-20 14:16:49 -06:00
reyesj2
433dab7376 format json 2025-11-20 14:16:10 -06:00
Josh Patterson
97c1a46013 update annotation for general failure 2025-11-20 15:08:04 -05:00
Josh Patterson
fbe97221bb set initialized status 2025-11-20 14:43:09 -05:00
Josh Patterson
841ce6b6ec update hypervisor annotation for image download or ssh key creation failure 2025-11-20 13:55:22 -05:00
Josh Patterson
dd0b4c3820 fix failed or hung qcow2 image download 2025-11-19 15:48:53 -05:00
reyesj2
b52dd53e29 advanced ilm actions 2025-11-19 13:24:55 -06:00
reyesj2
a155f45036 always update annotation / defaults for managed integrations 2025-11-19 13:24:29 -06:00
Josh Patterson
b407c68d88 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-11-19 10:23:11 -05:00
Josh Patterson
5b6a7035af need python_shell for pipes 2025-11-19 10:22:58 -05:00
Jason Ertel
12d490ad4a Merge pull request #15240 from Security-Onion-Solutions/jertel/wip
communicate to the viewer that OS patches may take some time
2025-11-19 10:01:03 -05:00
Jason Ertel
76cbd18d2c communicate to the viewer that OS patches may take some time 2025-11-19 09:56:42 -05:00
DefensiveDepth
148ef7ef21 add default ruleset 2025-11-18 11:57:30 -05:00
DefensiveDepth
1b55642c86 Refactor rules location 2025-11-18 09:58:14 -05:00
DefensiveDepth
af7f7d0728 Fix file paths 2025-11-17 12:00:08 -05:00
Jorge Reyes
a7337c95e1 Merge pull request #15234 from Security-Onion-Solutions/reyesj2/pipeline-upd
update zeek pipelines
2025-11-17 10:36:10 -06:00
Josh Patterson
3f7c3326ea Merge pull request #15237 from Security-Onion-Solutions/bravo
rm salt keyring and repo file for deb
2025-11-17 09:27:53 -05:00
Josh Patterson
bf41de8c14 rm salt keyring and repo file for deb 2025-11-17 08:56:02 -05:00
reyesj2
de4424fab0 remove typos 2025-11-14 19:15:51 -06:00
reyesj2
136a829509 detect-sqli deprecated in favor of detect-sql-injection 2025-11-14 16:51:00 -06:00
reyesj2
bcec999be4 zeek.dns reduce errors 2025-11-14 15:47:29 -06:00
reyesj2
7c73b4713f update analyzer pipeline 2025-11-14 15:47:29 -06:00
reyesj2
45b4b1d963 ingest zeek analyzer.log + update dpd dashboard with analyzer tag 2025-11-14 15:47:29 -06:00
reyesj2
fcfd74ec1e zeek.analyzer format json 2025-11-14 15:47:29 -06:00
reyesj2
68b0cd7549 rename zeek.dpd zeek.analyzer 2025-11-14 15:47:29 -06:00
reyesj2
715d801ce8 format json zeek.dns 2025-11-14 15:47:19 -06:00
Jorge Reyes
4a810696e7 Merge pull request #15231 from Security-Onion-Solutions/reyesj2/bond0
fix so-setup error duplicate bond0
2025-11-14 12:12:46 -06:00
reyesj2
6b525a2c21 fix so-setup error duplicate bond0 2025-11-14 11:19:32 -06:00
Jorge Reyes
a5d8385f07 Merge pull request #15230 from Security-Onion-Solutions/reyesj2/pipeline-upd
suricata pipeline updates
2025-11-14 10:43:33 -06:00
reyesj2
211bf7e77b ignore errors on tld script 2025-11-14 09:25:19 -06:00
reyesj2
1542b74133 move dns tld fields to its own pipeline 2025-11-14 09:24:58 -06:00
DefensiveDepth
431e5abf89 Extract ETPRO key if found 2025-11-14 09:39:33 -05:00
reyesj2
4314c79f85 bump suricata dns logging version 2025-11-14 08:24:31 -06:00
reyesj2
da9717bc79 don't attempt rename if field doesn't exist -- reducing pipeline stat errors 2025-11-14 08:15:40 -06:00
DefensiveDepth
f047677d8a Check correct files 2025-11-14 09:03:08 -05:00
Jason Ertel
045cf7866c Merge pull request #15225 from Security-Onion-Solutions/jertel/wip
pcap annotations
2025-11-14 08:37:37 -05:00
reyesj2
431e0b0780 format suricata.alert json 2025-11-13 19:29:50 -06:00
reyesj2
e782266caa suricata 8 dns v3 2025-11-13 19:21:31 -06:00
coreyogburn
a4666b2c08 Merge pull request #15229 from Security-Onion-Solutions/cogburn/toggle-models
Add Enabled Flag to Models
2025-11-13 16:13:24 -07:00
Corey Ogburn
dcc3206e51 Add Enabled Flag to Models 2025-11-13 15:32:28 -07:00
Josh Patterson
8358b6ea6f Merge pull request #15228 from Security-Onion-Solutions/bravo
wait for 200 from registry before proceeding
2025-11-13 16:34:43 -05:00
coreyogburn
d1a66a91c6 Merge pull request #15221 from Security-Onion-Solutions/cogburn/compress-context
CompressContextPrompt
2025-11-13 14:33:56 -07:00
Josh Patterson
7fdcb92614 wait for 200 from registry before proceeding 2025-11-13 16:30:58 -05:00
Jason Ertel
cec1890b6b pcap annotations 2025-11-13 16:15:47 -05:00
DefensiveDepth
b2606b6094 fix perms 2025-11-13 14:10:51 -05:00
Corey Ogburn
b1b66045ea Change in prompt wording 2025-11-13 12:08:47 -07:00
Corey Ogburn
33b22bf2e4 Shorten Prompt 2025-11-13 11:09:09 -07:00
Corey Ogburn
3a38886345 CompressContextPrompt 2025-11-13 11:09:08 -07:00
reyesj2
7be70faab6 format json 2025-11-13 10:49:37 -06:00
Josh Patterson
2729fdbea6 Merge pull request #15223 from Security-Onion-Solutions/bravo
configure salt, then install. update bootstrap-salt. reduce salt install fail timeout
2025-11-13 11:35:43 -05:00
Jorge Reyes
bfd08d1d2e Merge pull request #15204 from Security-Onion-Solutions/reyesj2/retention
update so-elasticsearch-retention-estimate
2025-11-13 10:05:49 -06:00
DefensiveDepth
37b3fd9b7b add detections backup 2025-11-13 10:41:12 -05:00
DefensiveDepth
573dded921 refactor to hash 2025-11-13 09:25:20 -05:00
Josh Patterson
fed75c7b39 use -r with bootstrap to disable script repo 2025-11-12 19:47:25 -05:00
Josh Patterson
3427df2a54 update bootstrap-salt to latest 2025-11-12 18:07:14 -05:00
Josh Patterson
be11c718f6 configure salt then install it 2025-11-12 18:06:55 -05:00
Josh Patterson
235dfd78f1 Revert "salt-minion service KillMode to control-group"
This reverts commit 7c8b9b4374.
2025-11-12 14:20:28 -05:00
Josh Patterson
7c8b9b4374 salt-minion service KillMode to control-group 2025-11-12 12:30:29 -05:00
DefensiveDepth
81d7c313af remove dupe 2025-11-12 11:11:01 -05:00
DefensiveDepth
9a6ff75793 Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-11-12 08:51:51 -05:00
DefensiveDepth
1f24796eba Fix ETPRO check 2025-11-12 08:48:47 -05:00
Jason Ertel
7762faf075 Merge pull request #15219 from Security-Onion-Solutions/jertel/wip
add support to so-yaml for using yaml file content for values
2025-11-12 08:12:23 -05:00
Jason Ertel
80fbb31372 fix test 2025-11-11 17:04:19 -05:00
Jason Ertel
7c45db2295 add support to so-yaml for using yaml file content for values 2025-11-11 16:57:54 -05:00
Jason Ertel
0545e1d33b add support to so-yaml for using yaml file content for values 2025-11-11 16:55:00 -05:00
DefensiveDepth
55bbbdb58d idstools removal refactor 2025-11-11 14:34:28 -05:00
DefensiveDepth
3a8a6bf5ff idstools removal refactor 2025-11-11 14:12:51 -05:00
DefensiveDepth
13789bc56f idstools removal refactor 2025-11-11 13:45:37 -05:00
DefensiveDepth
11518f6eea idstools removal refactor 2025-11-11 13:41:32 -05:00
Jason Ertel
08147e27b0 Merge pull request #15213 from Security-Onion-Solutions/jertel/wip
reduce pcapMaxCount to fit better with max upload size
2025-11-10 19:08:58 -05:00
Josh Patterson
c9153617be Merge pull request #15211 from Security-Onion-Solutions/bravo
Suricata 8.0.2
2025-11-10 17:09:43 -05:00
Josh Patterson
245ceb2d49 suricata defaults and annotation 2025-11-10 16:40:11 -05:00
Jason Ertel
4c65975907 reduce pcapMaxCount to fit better with max upload size 2025-11-10 15:44:05 -05:00
Mike Reeves
dfef7036ce Merge pull request #15209 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update defaults.yaml
2025-11-10 14:53:00 -05:00
Mike Reeves
44594ba726 Update defaults.yaml 2025-11-10 14:24:27 -05:00
Josh Patterson
1876c4d9df fix var name 2025-11-10 14:16:16 -05:00
Josh Patterson
a2ff66b5d0 update annotation 2025-11-10 14:12:20 -05:00
Josh Patterson
e3972dc5af Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-11-10 13:28:42 -05:00
Josh Patterson
18c0f197b2 suricata bpf 2025-11-10 13:28:19 -05:00
Jorge Reyes
5b371c220c Merge pull request #15207 from Security-Onion-Solutions/reyesj2/forwardnode-sensor 2025-11-10 08:46:12 -06:00
Josh Patterson
78c193f0a2 handle bpf for suricata 8 pcap 2025-11-07 17:40:24 -05:00
Josh Patterson
274295bc97 return exit codes 2025-11-07 17:39:13 -05:00
Josh Patterson
6c7ef622c1 spaces removed from expected output 2025-11-07 17:08:33 -05:00
Josh Patterson
da1cac0d53 tls-log, http-log and syslog outputs deprecated https://github.com/Security-Onion-Solutions/securityonion/issues/15203 2025-11-06 16:32:55 -05:00
reyesj2
a84df14137 rename forward node -> sensor node 2025-11-06 15:23:55 -06:00
Jorge Reyes
4a49f9d004 Merge branch '2.4/dev' into reyesj2/retention 2025-11-06 14:29:08 -06:00
reyesj2
1eb4b5379a show 30d scheduled deletions or 7d scheduled deletions depending on what historical data is available 2025-11-06 14:25:25 -06:00
reyesj2
35c7fc06d7 fix bug showing duplicate backing indices in recommendations 2025-11-06 14:24:58 -06:00
reyesj2
b69d453a68 typo 2025-11-06 14:24:29 -06:00
DefensiveDepth
2f6fb717c1 Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-11-06 10:38:37 -05:00
Josh Patterson
b7e1989d45 resolve block-size not large enough for max fragmented IP packet size warning 2025-11-06 09:49:46 -05:00
Jorge Reyes
202b03b32b Merge pull request #15201 from Security-Onion-Solutions/reyesj2-patch-5
update so-elasticsearch-retention-estimate
2025-11-06 08:18:38 -06:00
reyesj2
1aa871ec94 small fixes 2025-11-05 17:55:57 -06:00
Josh Patterson
4ffbb0bbd9 Merge remote-tracking branch 'origin/2.4/dev' into bravo 2025-11-05 15:22:11 -05:00
Jorge Reyes
f859fe6517 Merge pull request #15192 from Security-Onion-Solutions/securityonion-strelka
strelka use single master image
2025-11-05 08:07:01 -06:00
Jason Ertel
021b425b8b Merge pull request #15198 from Security-Onion-Solutions/jertel/wip
ensure previous setup outcomes are cleared
2025-11-04 16:10:53 -05:00
Jason Ertel
d95122ca01 ensure previous setup outcomes are cleared 2025-11-04 16:02:39 -05:00
Josh Patterson
81d3c7351b Merge pull request #15194 from Security-Onion-Solutions/reyesj2/ea-policy
move off of cmd.script with args \
2025-11-03 17:16:35 -05:00
Josh Patterson
ccb8ffd6eb Update install_agent_grid.sls 2025-11-03 17:05:48 -05:00
reyesj2
5a8ea57a1b move off of cmd.script with args \
https://github.com/saltstack/salt/issues/68298
2025-11-03 15:31:14 -06:00
Josh Patterson
60228ec6e6 Merge pull request #15193 from Security-Onion-Solutions/salt300616
Salt 3006.16
2025-11-03 16:02:25 -05:00
Josh Patterson
574703e551 unlock/lock salt-cloud if installed 2025-11-03 15:39:19 -05:00
Josh Patterson
fa154f1a8f update salt cloud config if configured 2025-11-03 14:12:19 -05:00
reyesj2
635545630b strelka use single master image 2025-11-03 09:36:46 -06:00
Mike Reeves
df8afda999 Merge pull request #15188 from Security-Onion-Solutions/cogburn/multiple-models
Available Models
2025-11-03 09:39:16 -05:00
Corey Ogburn
f80b090c93 Update limits 2025-10-31 14:48:30 -06:00
Corey Ogburn
806173f7e3 Available Models
Utilizes Jason's new Array of Objects UI.
2025-10-31 14:07:30 -06:00
Josh Patterson
2f6c1b82a6 Merge pull request #15185 from Security-Onion-Solutions/salt300616
Upgrade Salt 3006.16
2025-10-31 09:47:01 -04:00
Josh Patterson
b8c2808abe update salt-cloud profile after new code copied 2025-10-30 15:09:40 -04:00
Josh Patterson
9027e4e065 update salt-cloud profile after new code copied 2025-10-30 14:48:48 -04:00
Josh Patterson
8ca5276a0e update cloud profile with local and point to new code 2025-10-30 13:59:08 -04:00
Josh Patterson
ee45a5524d Merge remote-tracking branch 'origin/2.4/dev' into salt300616 2025-10-30 13:13:55 -04:00
Josh Patterson
70d4223a75 update salt-cloud config if salt was upgraded 2025-10-30 13:13:16 -04:00
Jorge Reyes
7ab2840381 Merge pull request #15182 from Security-Onion-Solutions/reyesj2-influxdb-metrics
add manager role to elasticsearch ingest time spent
2025-10-30 12:03:58 -05:00
reyesj2
78c951cb70 add manager role to elastic ingest time spent 2025-10-30 11:15:58 -05:00
Josh Patterson
a0a3a80151 Merge remote-tracking branch 'origin/2.4/dev' into salt300616 2025-10-30 11:57:15 -04:00
Josh Patterson
3ecffd5588 Merge pull request #15181 from Security-Onion-Solutions/volumes
create libvirt volumes directory
2025-10-30 11:31:30 -04:00
Josh Patterson
8ea66bb0e9 create libvirt volumes directory 2025-10-30 11:02:36 -04:00
Jorge Reyes
9359fbbad6 Merge pull request #15176 from Security-Onion-Solutions/reyesj2/ilmpolicyhelp 2025-10-29 16:49:07 -05:00
Josh Patterson
1949be90c2 allow to preserve files 2025-10-29 16:49:59 -04:00
Josh Patterson
30970acfaf var for SALTVERSION in cloud config 2025-10-29 16:05:12 -04:00
Josh Patterson
6d12a8bfa1 handle salt-cloud upgrade during soup 2025-10-29 15:31:46 -04:00
reyesj2
2fb41c8d65 elasticsearch retention estimate 2025-10-29 14:24:43 -05:00
reyesj2
835b2609b6 telegraf - increase esindexsize.sh script timeout 2025-10-29 13:45:55 -05:00
Josh Patterson
10ae53f108 upgrade salt 3006.16 2025-10-29 10:23:44 -04:00
Jason Ertel
68bfceb727 Merge pull request #15170 from Security-Onion-Solutions/jertel/wip
bump version
2025-10-24 16:46:24 -04:00
Jason Ertel
f348c7168f bump version 2025-10-24 16:19:24 -04:00
Jason Ertel
627d9bf45d Merge pull request #15169 from Security-Onion-Solutions/jertel/wip
bump version
2025-10-24 16:18:43 -04:00
Jason Ertel
2aee8ab511 bump version 2025-10-24 16:11:50 -04:00
Josh Patterson
92be8df95d Merge pull request #15122 from Security-Onion-Solutions/amv
nsm virtual disk and new nsm_total grain
2025-10-08 14:15:51 -04:00
Josh Patterson
b1acbf3114 Merge pull request #15098 from Security-Onion-Solutions/byoh
Byoh
2025-10-01 15:06:01 -04:00
Josh Patterson
8043e09ec1 Merge pull request #15076 from Security-Onion-Solutions/vlb2
Vlb2
2025-09-26 15:44:53 -04:00
Josh Patterson
25c746bb14 Merge pull request #15067 from Security-Onion-Solutions/vlb2
Vlb2
2025-09-25 16:12:52 -04:00
Josh Patterson
a91e8b26f6 Merge pull request #15066 from Security-Onion-Solutions/vlb2
set interface for network.ip_addrs for hypervisors
2025-09-24 16:51:07 -04:00
Josh Patterson
e826ea5d04 Merge pull request #15065 from Security-Onion-Solutions/vlb2
update service file, use salt.minion state to update mine_functions
2025-09-24 15:20:31 -04:00
Josh Patterson
23a9780ebb Merge pull request #15061 from Security-Onion-Solutions/vlb2
only update mine for managerhype during setup
2025-09-23 15:56:47 -04:00
Josh Patterson
9cb8ebbaa7 Merge pull request #15056 from Security-Onion-Solutions/vlb2
Vlb2
2025-09-23 09:05:55 -04:00
DefensiveDepth
ded520c2c1 Merge remote-tracking branch 'origin/2.4/dev' into idstools-refactor 2025-09-17 10:42:43 -04:00
DefensiveDepth
a77157391c remove idstools 2025-09-17 10:42:05 -04:00
Josh Patterson
03892bad5e Merge pull request #15015 from Security-Onion-Solutions/vlb2
Vlb2
2025-09-10 14:58:41 -04:00
Josh Patterson
77fef02116 Merge pull request #14994 from Security-Onion-Solutions/vlb2
pass pillar properly
2025-09-04 11:06:31 -04:00
Josh Patterson
f3328c41fb Merge pull request #14990 from Security-Onion-Solutions/vlb2
merge with 2.4/dev
2025-09-03 10:37:46 -04:00
Josh Patterson
23ae259c82 Merge pull request #14972 from Security-Onion-Solutions/vlb2
Vlb2
2025-08-28 10:41:23 -04:00
Josh Patterson
45f25ca62d Merge pull request #14966 from Security-Onion-Solutions/vlb2
managerhype
2025-08-26 15:07:36 -04:00
205 changed files with 6344 additions and 2594 deletions


@@ -32,6 +32,9 @@ body:
       - 2.4.170
       - 2.4.180
       - 2.4.190
+      - 2.4.200
+      - 2.4.201
+      - 2.4.210
       - Other (please provide detail below)
   validations:
     required: true


@@ -4,7 +4,7 @@ on:
   pull_request:
     paths:
       - "salt/sensoroni/files/analyzers/**"
-      - "salt/manager/tools/sbin"
+      - "salt/manager/tools/sbin/**"
 jobs:
   build:


@@ -1,17 +1,17 @@
-### 2.4.190-20251024 ISO image released on 2025/10/24
+### 2.4.201-20260114 ISO image released on 2026/1/15

 ### Download and Verify

-2.4.190-20251024 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
+2.4.201-20260114 ISO image:
+https://download.securityonion.net/file/securityonion/securityonion-2.4.201-20260114.iso

-MD5: 25358481FB876226499C011FC0710358
-SHA1: 0B26173C0CE136F2CA40A15046D1DFB78BCA1165
-SHA256: 4FD9F62EDA672408828B3C0C446FE5EA9FF3C4EE8488A7AB1101544A3C487872
+MD5: 20E926E433203798512EF46E590C89B9
+SHA1: 779E4084A3E1A209B494493B8F5658508B6014FA
+SHA256: 3D10E7C885AEC5C5D4F4E50F9644FF9728E8C0A2E36EBB8C96B32569685A7C40

 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.201-20260114.iso.sig

 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS

@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.

 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.190-20251024.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.201-20260114.iso.sig
 ```

 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.4.190-20251024.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.4.201-20260114.iso
 ```

 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.4.190-20251024.iso.sig securityonion-2.4.190-20251024.iso
+gpg --verify securityonion-2.4.201-20260114.iso.sig securityonion-2.4.201-20260114.iso
 ```

 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Thu 23 Oct 2025 07:21:46 AM EDT using RSA key ID FE507013
+gpg: Signature made Wed 14 Jan 2026 05:23:39 PM EST using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.
 ```
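
As a companion to the gpg verification above, the published SHA256 can also be recomputed locally; a small sketch (this supplements the signature check, it does not replace it):

```python
import hashlib

EXPECTED_SHA256 = "3D10E7C885AEC5C5D4F4E50F9644FF9728E8C0A2E36EBB8C96B32569685A7C40"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)  # stream in 1 MiB chunks; ISOs are large
    return h.hexdigest().upper()

print(sha256_of("securityonion-2.4.201-20260114.iso") == EXPECTED_SHA256)
```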


@@ -1 +1 @@
-2.4.190
+2.4.210

pillar/ca/init.sls (new file):

@@ -0,0 +1,2 @@
ca:
server:


@@ -1,5 +1,6 @@
 base:
   '*':
+    - ca
     - global.soc_global
     - global.adv_global
     - docker.soc_docker
@@ -43,8 +44,6 @@ base:
     - secrets
     - manager.soc_manager
     - manager.adv_manager
-    - idstools.soc_idstools
-    - idstools.adv_idstools
     - logstash.nodes
     - logstash.soc_logstash
     - logstash.adv_logstash
@@ -117,8 +116,6 @@ base:
     - elastalert.adv_elastalert
     - manager.soc_manager
     - manager.adv_manager
-    - idstools.soc_idstools
-    - idstools.adv_idstools
     - soc.soc_soc
     - soc.adv_soc
     - kibana.soc_kibana
@@ -158,8 +155,6 @@ base:
 {% endif %}
     - secrets
     - healthcheck.standalone
-    - idstools.soc_idstools
-    - idstools.adv_idstools
     - kratos.soc_kratos
     - kratos.adv_kratos
     - hydra.soc_hydra


@@ -172,7 +172,15 @@ MANAGER_HOSTNAME = socket.gethostname()
 def _download_image():
     """
-    Download and validate the Oracle Linux KVM image.
+    Download and validate the Oracle Linux KVM image with retry logic and progress monitoring.
+
+    Features:
+    - Detects stalled downloads (no progress for 30 seconds)
+    - Retries up to 3 times on failure
+    - Connection timeout of 30 seconds
+    - Read timeout of 60 seconds
+    - Cleans up partial downloads on failure

     Returns:
         bool: True if successful or file exists with valid checksum, False on error
     """
@@ -185,45 +193,107 @@ def _download_image():
             os.unlink(IMAGE_PATH)

     log.info("Starting image download process")

-    try:
-        # Download file
-        log.info("Downloading Oracle Linux KVM image from %s to %s", IMAGE_URL, IMAGE_PATH)
-        response = requests.get(IMAGE_URL, stream=True)
-        response.raise_for_status()
+    # Retry configuration
+    max_attempts = 3
+    retry_delay = 5  # seconds to wait between retry attempts
+    stall_timeout = 30  # seconds without progress before considering download stalled
+    connection_timeout = 30  # seconds to establish connection
+    read_timeout = 60  # seconds to wait for data chunks
+
+    for attempt in range(1, max_attempts + 1):
+        log.info("Download attempt %d of %d", attempt, max_attempts)
+
+        try:
+            # Download file with timeouts
+            log.info("Downloading Oracle Linux KVM image from %s to %s", IMAGE_URL, IMAGE_PATH)
+            response = requests.get(
+                IMAGE_URL,
+                stream=True,
+                timeout=(connection_timeout, read_timeout)
+            )
+            response.raise_for_status()

-        # Get total file size for progress tracking
-        total_size = int(response.headers.get('content-length', 0))
-        downloaded_size = 0
-        last_log_time = 0
+            # Get total file size for progress tracking
+            total_size = int(response.headers.get('content-length', 0))
+            downloaded_size = 0
+            last_log_time = 0
+            last_progress_time = time.time()
+            last_downloaded_size = 0

-        # Save file with progress logging
-        with salt.utils.files.fopen(IMAGE_PATH, 'wb') as f:
-            for chunk in response.iter_content(chunk_size=8192):
-                f.write(chunk)
-                downloaded_size += len(chunk)
+            # Save file with progress logging and stall detection
+            with salt.utils.files.fopen(IMAGE_PATH, 'wb') as f:
+                for chunk in response.iter_content(chunk_size=8192):
+                    if chunk:  # filter out keep-alive new chunks
+                        f.write(chunk)
+                        downloaded_size += len(chunk)
+                        current_time = time.time()

-                # Log progress every second
-                current_time = time.time()
-                if current_time - last_log_time >= 1:
-                    progress = (downloaded_size / total_size) * 100 if total_size > 0 else 0
-                    log.info("Progress - %.1f%% (%d/%d bytes)",
-                             progress, downloaded_size, total_size)
-                    last_log_time = current_time
+                        # Check for stalled download
+                        if downloaded_size > last_downloaded_size:
+                            # Progress made, reset stall timer
+                            last_progress_time = current_time
+                            last_downloaded_size = downloaded_size
+                        elif current_time - last_progress_time > stall_timeout:
+                            # No progress for stall_timeout seconds
+                            raise Exception(
+                                f"Download stalled: no progress for {stall_timeout} seconds "
+                                f"at {downloaded_size}/{total_size} bytes"
+                            )
+
+                        # Log progress every second
+                        if current_time - last_log_time >= 1:
+                            progress = (downloaded_size / total_size) * 100 if total_size > 0 else 0
+                            log.info("Progress - %.1f%% (%d/%d bytes)",
+                                     progress, downloaded_size, total_size)
+                            last_log_time = current_time

-        # Validate downloaded file
-        if not _validate_image_checksum(IMAGE_PATH, IMAGE_SHA256):
-            os.unlink(IMAGE_PATH)
-            return False
-
-        log.info("Successfully downloaded and validated Oracle Linux KVM image")
-        return True
-
-    except Exception as e:
-        log.error("Error downloading hypervisor image: %s", str(e))
-        if os.path.exists(IMAGE_PATH):
-            os.unlink(IMAGE_PATH)
-        return False
+            # Validate downloaded file
+            log.info("Download complete, validating checksum...")
+            if not _validate_image_checksum(IMAGE_PATH, IMAGE_SHA256):
+                log.error("Checksum validation failed on attempt %d", attempt)
+                os.unlink(IMAGE_PATH)
+                if attempt < max_attempts:
+                    log.info("Will retry download...")
+                    continue
+                else:
+                    log.error("All download attempts failed due to checksum mismatch")
+                    return False
+
+            log.info("Successfully downloaded and validated Oracle Linux KVM image")
+            return True
+
+        except requests.exceptions.Timeout as e:
+            log.error("Download attempt %d failed: Timeout - %s", attempt, str(e))
+            if os.path.exists(IMAGE_PATH):
+                os.unlink(IMAGE_PATH)
+            if attempt < max_attempts:
+                log.info("Will retry download in %d seconds...", retry_delay)
+                time.sleep(retry_delay)
+            else:
+                log.error("All download attempts failed due to timeout")
+
+        except requests.exceptions.RequestException as e:
+            log.error("Download attempt %d failed: Network error - %s", attempt, str(e))
+            if os.path.exists(IMAGE_PATH):
+                os.unlink(IMAGE_PATH)
+            if attempt < max_attempts:
+                log.info("Will retry download in %d seconds...", retry_delay)
+                time.sleep(retry_delay)
+            else:
+                log.error("All download attempts failed due to network errors")
+
+        except Exception as e:
+            log.error("Download attempt %d failed: %s", attempt, str(e))
+            if os.path.exists(IMAGE_PATH):
+                os.unlink(IMAGE_PATH)
+            if attempt < max_attempts:
+                log.info("Will retry download in %d seconds...", retry_delay)
+                time.sleep(retry_delay)
+            else:
+                log.error("All download attempts failed")
+
+    return False

 def _check_ssh_keys_exist():
     """
@@ -419,25 +489,28 @@ def _ensure_hypervisor_host_dir(minion_id: str = None):
         log.error(f"Error creating hypervisor host directory: {str(e)}")
         return False

-def _apply_dyanno_hypervisor_state():
+def _apply_dyanno_hypervisor_state(status):
     """
     Apply the soc.dyanno.hypervisor state on the salt master.

     This function applies the soc.dyanno.hypervisor state on the salt master
     to update the hypervisor annotation and ensure all hypervisor host directories exist.

+    Args:
+        status: Status passed to the hypervisor annotation state
+
     Returns:
         bool: True if state was applied successfully, False otherwise
     """
     try:
-        log.info("Applying soc.dyanno.hypervisor state on salt master")
+        log.info(f"Applying soc.dyanno.hypervisor state on salt master with status: {status}")

         # Initialize the LocalClient
         local = salt.client.LocalClient()

         # Target the salt master to apply the soc.dyanno.hypervisor state
         target = MANAGER_HOSTNAME + '_*'
-        state_result = local.cmd(target, 'state.apply', ['soc.dyanno.hypervisor', "pillar={'baseDomain': {'status': 'PreInit'}}", 'concurrent=True'], tgt_type='glob')
+        state_result = local.cmd(target, 'state.apply', ['soc.dyanno.hypervisor', f"pillar={{'baseDomain': {{'status': '{status}'}}}}", 'concurrent=True'], tgt_type='glob')
         log.debug(f"state_result: {state_result}")

         # Check if state was applied successfully
         if state_result:
@@ -454,17 +527,17 @@ def _apply_dyanno_hypervisor_state():
             success = False

         if success:
-            log.info("Successfully applied soc.dyanno.hypervisor state")
+            log.info(f"Successfully applied soc.dyanno.hypervisor state with status: {status}")
             return True
         else:
-            log.error("Failed to apply soc.dyanno.hypervisor state")
+            log.error(f"Failed to apply soc.dyanno.hypervisor state with status: {status}")
             return False
         else:
-            log.error("No response from salt master when applying soc.dyanno.hypervisor state")
+            log.error(f"No response from salt master when applying soc.dyanno.hypervisor state with status: {status}")
             return False
     except Exception as e:
-        log.error(f"Error applying soc.dyanno.hypervisor state: {str(e)}")
+        log.error(f"Error applying soc.dyanno.hypervisor state with status: {status}: {str(e)}")
         return False

 def _apply_cloud_config_state():
@@ -598,11 +671,6 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
         log.warning("Failed to apply salt.cloud.config state, continuing with setup")
         # We don't return an error here as we want to continue with the setup process

-    # Apply the soc.dyanno.hypervisor state on the salt master
-    if not _apply_dyanno_hypervisor_state():
-        log.warning("Failed to apply soc.dyanno.hypervisor state, continuing with setup")
-        # We don't return an error here as we want to continue with the setup process
-
     log.info("Starting setup_environment in setup_hypervisor runner")

     # Check if environment is already set up
@@ -616,9 +684,12 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
     # Handle image setup if needed
     if not image_valid:
+        _apply_dyanno_hypervisor_state('ImageDownloadStart')
         log.info("Starting image download/validation process")
         if not _download_image():
             log.error("Image download failed")
+            # Update hypervisor annotation with failure status
+            _apply_dyanno_hypervisor_state('ImageDownloadFailed')
             return {
                 'success': False,
                 'error': 'Image download failed',
@@ -631,6 +702,8 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
     log.info("Setting up SSH keys")
     if not _setup_ssh_keys():
         log.error("SSH key setup failed")
+        # Update hypervisor annotation with failure status
+        _apply_dyanno_hypervisor_state('SSHKeySetupFailed')
         return {
             'success': False,
             'error': 'SSH key setup failed',
@@ -655,6 +728,12 @@ def setup_environment(vm_name: str = 'sool9', disk_size: str = '220G', minion_id
     success = vm_result.get('success', False)
     log.info("Setup environment completed with status: %s", "SUCCESS" if success else "FAILED")

+    # Update hypervisor annotation with success status
+    if success:
+        _apply_dyanno_hypervisor_state('PreInit')
+    else:
+        _apply_dyanno_hypervisor_state('SetupFailed')
+
     # If setup was successful and we have a minion_id, run highstate
     if success and minion_id:
         log.info("Running highstate on hypervisor %s", minion_id)

@@ -15,11 +15,7 @@
   'salt.minion-check',
   'sensoroni',
   'salt.lasthighstate',
-  'salt.minion'
-] %}
-
-{% set ssl_states = [
-  'ssl',
   'salt.minion',
   'telegraf',
   'firewall',
   'schedule',
@@ -28,7 +24,7 @@
 {% set manager_states = [
   'salt.master',
-  'ca',
+  'ca.server',
   'registry',
   'manager',
   'nginx',
@@ -38,8 +34,6 @@
   'hydra',
   'elasticfleet',
   'elastic-fleet-package-registry',
-  'idstools',
   'suricata.manager',
   'utility'
 ] %}
@@ -77,28 +71,24 @@
 {# Map role-specific states #}
 {% set role_states = {
   'so-eval': (
-    ssl_states +
     manager_states +
     sensor_states +
-    elastic_stack_states | reject('equalto', 'logstash') | list
+    elastic_stack_states | reject('equalto', 'logstash') | list +
+    ['logstash.ssl']
   ),
   'so-heavynode': (
-    ssl_states +
     sensor_states +
     ['elasticagent', 'elasticsearch', 'logstash', 'redis', 'nginx']
   ),
   'so-idh': (
-    ssl_states +
     ['idh']
   ),
   'so-import': (
-    ssl_states +
     manager_states +
     sensor_states | reject('equalto', 'strelka') | reject('equalto', 'healthcheck') | list +
-    ['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'strelka.manager']
+    ['elasticsearch', 'elasticsearch.auth', 'kibana', 'kibana.secrets', 'logstash.ssl', 'strelka.manager']
   ),
   'so-manager': (
-    ssl_states +
     manager_states +
     ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
     stig_states +
@@ -106,7 +96,6 @@
     elastic_stack_states
   ),
   'so-managerhype': (
-    ssl_states +
     manager_states +
     ['salt.cloud', 'strelka.manager', 'hypervisor', 'libvirt'] +
     stig_states +
@@ -114,7 +103,6 @@
     elastic_stack_states
   ),
   'so-managersearch': (
-    ssl_states +
     manager_states +
     ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users', 'strelka.manager'] +
     stig_states +
@@ -122,12 +110,10 @@
     elastic_stack_states
   ),
   'so-searchnode': (
-    ssl_states +
     ['kafka.ca', 'kafka.ssl', 'elasticsearch', 'logstash', 'nginx'] +
     stig_states
   ),
   'so-standalone': (
-    ssl_states +
     manager_states +
     ['salt.cloud', 'libvirt.packages', 'libvirt.ssh.users'] +
     sensor_states +
@@ -136,29 +122,24 @@
     elastic_stack_states
   ),
   'so-sensor': (
-    ssl_states +
     sensor_states +
     ['nginx'] +
     stig_states
   ),
   'so-fleet': (
-    ssl_states +
     stig_states +
     ['logstash', 'nginx', 'healthcheck', 'elasticfleet']
   ),
   'so-receiver': (
-    ssl_states +
     kafka_states +
     stig_states +
     ['logstash', 'redis']
   ),
   'so-hypervisor': (
-    ssl_states +
     stig_states +
     ['hypervisor', 'libvirt']
   ),
   'so-desktop': (
     ['ssl', 'docker_clean', 'telegraf'] +
     stig_states
   )
 } %}


@@ -1,4 +1,7 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
+{% set PCAP_BPF_STATUS = 0 %}
+{% set STENO_BPF_COMPILED = "" %}
+
 {% if GLOBALS.pcap_engine == "TRANSITION" %}
 {% set PCAPBPF = ["ip and host 255.255.255.1 and port 1"] %}
 {% else %}
@@ -8,3 +11,11 @@
 {{ MACROS.remove_comments(BPFMERGED, 'pcap') }}
 {% set PCAPBPF = BPFMERGED.pcap %}
 {% endif %}
+
+{% if PCAPBPF %}
+{% set PCAP_BPF_CALC = salt['cmd.run_all']('/usr/sbin/so-bpf-compile ' ~ GLOBALS.sensor.interface ~ ' ' ~ PCAPBPF|join(" "), cwd='/root') %}
+{% if PCAP_BPF_CALC['retcode'] == 0 %}
+{% set PCAP_BPF_STATUS = 1 %}
+{% set STENO_BPF_COMPILED = ",\\\"--filter=" + PCAP_BPF_CALC['stdout'] + "\\\"" %}
+{% endif %}
+{% endif %}


@@ -1,11 +1,11 @@
 bpf:
   pcap:
-    description: List of BPF filters to apply to Stenographer.
+    description: List of BPF filters to apply to the PCAP engine.
     multiline: True
     forcedType: "[]string"
     helpLink: bpf.html
   suricata:
-    description: List of BPF filters to apply to Suricata.
+    description: List of BPF filters to apply to Suricata. This will apply to alerts and, if enabled, to metadata and PCAP logs generated by Suricata.
     multiline: True
     forcedType: "[]string"
     helpLink: bpf.html


@@ -1,7 +1,16 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 {% import_yaml 'bpf/defaults.yaml' as BPFDEFAULTS %}
 {% set BPFMERGED = salt['pillar.get']('bpf', BPFDEFAULTS.bpf, merge=True) %}
+{% set SURICATA_BPF_STATUS = 0 %}
 {% import 'bpf/macros.jinja' as MACROS %}
 {{ MACROS.remove_comments(BPFMERGED, 'suricata') }}
 {% set SURICATABPF = BPFMERGED.suricata %}
+
+{% if SURICATABPF %}
+{% set SURICATA_BPF_CALC = salt['cmd.run_all']('/usr/sbin/so-bpf-compile ' ~ GLOBALS.sensor.interface ~ ' ' ~ SURICATABPF|join(" "), cwd='/root') %}
+{% if SURICATA_BPF_CALC['retcode'] == 0 %}
+{% set SURICATA_BPF_STATUS = 1 %}
+{% endif %}
+{% endif %}


@@ -1,7 +1,16 @@
 {% from 'vars/globals.map.jinja' import GLOBALS %}
 {% import_yaml 'bpf/defaults.yaml' as BPFDEFAULTS %}
 {% set BPFMERGED = salt['pillar.get']('bpf', BPFDEFAULTS.bpf, merge=True) %}
+{% set ZEEK_BPF_STATUS = 0 %}
 {% import 'bpf/macros.jinja' as MACROS %}
 {{ MACROS.remove_comments(BPFMERGED, 'zeek') }}
 {% set ZEEKBPF = BPFMERGED.zeek %}
+
+{% if ZEEKBPF %}
+{% set ZEEK_BPF_CALC = salt['cmd.run_all']('/usr/sbin/so-bpf-compile ' ~ GLOBALS.sensor.interface ~ ' ' ~ ZEEKBPF|join(" "), cwd='/root') %}
+{% if ZEEK_BPF_CALC['retcode'] == 0 %}
+{% set ZEEK_BPF_STATUS = 1 %}
+{% endif %}
+{% endif %}
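
All three templates above share one pattern: run /usr/sbin/so-bpf-compile against the sniffing interface and the merged filter expression, and only flag the BPF as usable when the compiler exits 0. The same check as a standalone sketch (interface and filter values are illustrative):

```python
import subprocess

def compile_bpf(interface: str, bpf: str) -> tuple[int, str]:
    result = subprocess.run(
        ["/usr/sbin/so-bpf-compile", interface, bpf],  # path from the templates
        capture_output=True, text=True, cwd="/root",
    )
    status = 1 if result.returncode == 0 else 0  # mirrors *_BPF_STATUS
    return status, result.stdout.strip()         # stdout is the compiled filter

# status, compiled = compile_bpf("bond0", "not host 10.0.0.5")
```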


@@ -1,4 +0,0 @@
-pki_issued_certs:
-  file.directory:
-    - name: /etc/pki/issued_certs
-    - makedirs: True


@@ -3,70 +3,10 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.

-{% from 'allowed_states.map.jinja' import allowed_states %}
-{% if sls in allowed_states %}
 {% from 'vars/globals.map.jinja' import GLOBALS %}

 include:
-  - ca.dirs
+{% if GLOBALS.is_manager %}
+  - ca.server
+{% endif %}
+  - ca.trustca

-/etc/salt/minion.d/signing_policies.conf:
-  file.managed:
-    - source: salt://ca/files/signing_policies.conf
-
-pki_private_key:
-  x509.private_key_managed:
-    - name: /etc/pki/ca.key
-    - keysize: 4096
-    - passphrase:
-    - backup: True
-    {% if salt['file.file_exists']('/etc/pki/ca.key') -%}
-    - prereq:
-      - x509: /etc/pki/ca.crt
-    {%- endif %}
-
-pki_public_ca_crt:
-  x509.certificate_managed:
-    - name: /etc/pki/ca.crt
-    - signing_private_key: /etc/pki/ca.key
-    - CN: {{ GLOBALS.manager }}
-    - C: US
-    - ST: Utah
-    - L: Salt Lake City
-    - basicConstraints: "critical CA:true"
-    - keyUsage: "critical cRLSign, keyCertSign"
-    - extendedkeyUsage: "serverAuth, clientAuth"
-    - subjectKeyIdentifier: hash
-    - authorityKeyIdentifier: keyid:always, issuer
-    - days_valid: 3650
-    - days_remaining: 0
-    - backup: True
-    - replace: False
-    - require:
-      - sls: ca.dirs
-    - timeout: 30
-    - retry:
-        attempts: 5
-        interval: 30
-
-mine_update_ca_crt:
-  module.run:
-    - mine.update: []
-    - onchanges:
-      - x509: pki_public_ca_crt
-
-cakeyperms:
-  file.managed:
-    - replace: False
-    - name: /etc/pki/ca.key
-    - mode: 640
-    - group: 939
-
-{% else %}
-
-{{sls}}_state_not_allowed:
-  test.fail_without_changes:
-    - name: {{sls}}_state_not_allowed
-
-{% endif %}

salt/ca/map.jinja Normal file
View File

@@ -0,0 +1,3 @@
{% set CA = {
'server': pillar.ca.server
}%}

View File

@@ -1,7 +1,35 @@
pki_private_key:
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% set setup_running = salt['cmd.retcode']('pgrep -x so-setup') == 0 %}
{% if setup_running %}
include:
- ssl.remove
remove_pki_private_key:
file.absent:
- name: /etc/pki/ca.key
pki_public_ca_crt:
remove_pki_public_ca_crt:
file.absent:
- name: /etc/pki/ca.crt
remove_trusttheca:
file.absent:
- name: /etc/pki/tls/certs/intca.crt
remove_pki_public_ca_crt_symlink:
file.absent:
- name: /opt/so/saltstack/local/salt/ca/files/ca.crt
{% else %}
so-setup_not_running:
test.show_notification:
- text: "This state is reserved for usage during so-setup."
{% endif %}

salt/ca/server.sls Normal file
View File

@@ -0,0 +1,63 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
pki_private_key:
x509.private_key_managed:
- name: /etc/pki/ca.key
- keysize: 4096
- passphrase:
- backup: True
{% if salt['file.file_exists']('/etc/pki/ca.key') -%}
- prereq:
- x509: /etc/pki/ca.crt
{%- endif %}
pki_public_ca_crt:
x509.certificate_managed:
- name: /etc/pki/ca.crt
- signing_private_key: /etc/pki/ca.key
- CN: {{ GLOBALS.manager }}
- C: US
- ST: Utah
- L: Salt Lake City
- basicConstraints: "critical CA:true"
- keyUsage: "critical cRLSign, keyCertSign"
- extendedkeyUsage: "serverAuth, clientAuth"
- subjectKeyIdentifier: hash
- authorityKeyIdentifier: keyid:always, issuer
- days_valid: 3650
- days_remaining: 7
- backup: True
- replace: False
- timeout: 30
- retry:
attempts: 5
interval: 30
pki_public_ca_crt_symlink:
file.symlink:
- name: /opt/so/saltstack/local/salt/ca/files/ca.crt
- target: /etc/pki/ca.crt
- require:
- x509: pki_public_ca_crt
cakeyperms:
file.managed:
- replace: False
- name: /etc/pki/ca.key
- mode: 640
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
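
A quick way to confirm the CA produced by this state matches its declared parameters (manager CN, ten-year validity, CA key usage) is to inspect it with openssl; a sketch assuming the default /etc/pki paths (the -ext flag needs OpenSSL 1.1.1 or newer):

openssl x509 -in /etc/pki/ca.crt -noout -subject -dates
openssl x509 -in /etc/pki/ca.crt -noout -ext basicConstraints,keyUsage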

View File

@@ -1,12 +1,15 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# when the salt-minion signs the cert, a copy is stored here
issued_certs_copypath:
file.directory:
- name: /etc/pki/issued_certs
- makedirs: True
. /usr/sbin/so-common
/usr/sbin/so-start idstools $1
signing_policy:
file.managed:
- name: /etc/salt/minion.d/signing_policies.conf
- source: salt://ca/files/signing_policies.conf

salt/ca/trustca.sls Normal file
View File

@@ -0,0 +1,26 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
include:
- docker
# Trust the CA
trusttheca:
file.managed:
- name: /etc/pki/tls/certs/intca.crt
- source: salt://ca/files/ca.crt
- watch_in:
- service: docker_running
- show_changes: False
- makedirs: True
{% if GLOBALS.os_family == 'Debian' %}
symlinkca:
file.symlink:
- target: /etc/pki/tls/certs/intca.crt
- name: /etc/ssl/certs/intca.crt
{% endif %}
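
With docker_running now watching the managed intca.crt file instead of an x509 state, one sanity check after this state runs is to verify any grid-issued certificate against that CA; a sketch where the certificate path is a placeholder:

# /etc/pki/managerssl.crt is hypothetical; substitute any cert signed by the grid CA.
openssl verify -CAfile /etc/pki/tls/certs/intca.crt /etc/pki/managerssl.crt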

View File

@@ -177,7 +177,7 @@ so-status_script:
- source: salt://common/tools/sbin/so-status
- mode: 755
{% if GLOBALS.role in GLOBALS.sensor_roles %}
{% if GLOBALS.is_sensor %}
# Add sensor cleanup
so-sensor-clean:
cron.present:

View File

@@ -29,9 +29,26 @@ fi
interface="$1"
shift
tcpdump -i $interface -ddd $@ | tail -n+2 |
while read line; do
# Capture tcpdump output and exit code
tcpdump_output=$(tcpdump -i "$interface" -ddd "$@" 2>&1)
tcpdump_exit=$?
if [ $tcpdump_exit -ne 0 ]; then
echo "$tcpdump_output" >&2
exit $tcpdump_exit
fi
# Process the output, skipping the first line
echo "$tcpdump_output" | tail -n+2 | while read -r line; do
cols=( $line )
printf "%04x%02x%02x%08x" ${cols[0]} ${cols[1]} ${cols[2]} ${cols[3]}
printf "%04x%02x%02x%08x" "${cols[0]}" "${cols[1]}" "${cols[2]}" "${cols[3]}"
done
# Check if the pipeline succeeded
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
exit 1
fi
echo ""
exit 0
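
The rewrite replaces the unchecked tcpdump pipeline with capture-then-check, so an invalid filter now yields a non-zero exit instead of an empty program. Callers can key off that exit code; a hypothetical invocation (interface and filter are placeholders):

if compiled=$(/usr/sbin/so-bpf-compile bond0 'not host 10.0.0.1'); then
    echo "compiled BPF: $compiled"
else
    echo "BPF filter failed to compile" >&2
fi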

View File

@@ -220,12 +220,22 @@ compare_es_versions() {
}
copy_new_files() {
# Define files to exclude from deletion (relative to their respective base directories)
local EXCLUDE_FILES=(
"salt/hypervisor/soc_hypervisor.yaml"
)
# Build rsync exclude arguments
local EXCLUDE_ARGS=()
for file in "${EXCLUDE_FILES[@]}"; do
EXCLUDE_ARGS+=(--exclude="$file")
done
# Copy new files over to the salt dir
cd $UPDATE_DIR
rsync -a salt $DEFAULT_SALT_DIR/ --delete
rsync -a pillar $DEFAULT_SALT_DIR/ --delete
rsync -a salt $DEFAULT_SALT_DIR/ --delete "${EXCLUDE_ARGS[@]}"
rsync -a pillar $DEFAULT_SALT_DIR/ --delete "${EXCLUDE_ARGS[@]}"
chown -R socore:socore $DEFAULT_SALT_DIR/
chmod 755 $DEFAULT_SALT_DIR/pillar/firewall/addfirewall.sh
cd /tmp
}
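
The exclude list keeps rsync's --delete from removing local copies of the listed files during soup. The behavior is easy to confirm in isolation; a throwaway-directory sketch mirroring the invocation above:

src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/salt/hypervisor" "$dst/salt/hypervisor"
echo "local edit" > "$dst/salt/hypervisor/soc_hypervisor.yaml"
rsync -a "$src/salt" "$dst/" --delete --exclude="salt/hypervisor/soc_hypervisor.yaml"
cat "$dst/salt/hypervisor/soc_hypervisor.yaml"   # still present despite --delete
rm -rf "$src" "$dst"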
@@ -385,7 +395,7 @@ is_manager_node() {
}
is_sensor_node() {
# Check to see if this is a sensor (forward) node
# Check to see if this is a sensor node
is_single_node_grid && return 0
grep "role: so-" /etc/salt/grains | grep -E "sensor|heavynode" &> /dev/null
}
@@ -544,21 +554,39 @@ run_check_net_err() {
}
wait_for_salt_minion() {
local minion="$1"
local timeout="${2:-5}"
local logfile="${3:-'/dev/stdout'}"
retry 60 5 "journalctl -u salt-minion.service | grep 'Minion is ready to receive requests'" >> "$logfile" 2>&1 || fail
local attempt=0
# each attempt takes roughly 15 seconds
local maxAttempts=20
until check_salt_minion_status "$minion" "$timeout" "$logfile"; do
attempt=$((attempt+1))
if [[ $attempt -eq $maxAttempts ]]; then
return 1
fi
sleep 10
done
return 0
local minion="$1"
local max_wait="${2:-30}"
local interval="${3:-2}"
local logfile="${4:-/dev/stdout}"
local elapsed=0
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting for salt-minion '$minion' to be ready..."
while [ $elapsed -lt $max_wait ]; do
# Check if service is running
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Check if salt-minion service is running"
if ! systemctl is-active --quiet salt-minion; then
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion service not running (elapsed: ${elapsed}s)"
sleep $interval
elapsed=$((elapsed + interval))
continue
fi
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion service is running"
# Check if minion responds to ping
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Check if $minion responds to ping"
if salt "$minion" test.ping --timeout=3 --out=json 2>> "$logfile" | grep -q "true"; then
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - salt-minion '$minion' is connected and ready!"
return 0
fi
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - Waiting... (${elapsed}s / ${max_wait}s)"
sleep $interval
elapsed=$((elapsed + interval))
done
echo "$(date '+%a %d %b %Y %H:%M:%S.%6N') - ERROR: salt-minion '$minion' not ready after $max_wait seconds"
return 1
}
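
A sketch of calling the rewritten helper, assuming the function is already sourced into the shell (the minion id, timeout, poll interval, and log path are placeholders):

wait_for_salt_minion "so1000_manager" 60 5 /root/setup.log \
  || { echo "salt-minion never became ready" >&2; exit 1; }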
salt_minion_count() {

View File

@@ -25,7 +25,6 @@ container_list() {
if [ $MANAGERCHECK == 'so-import' ]; then
TRUSTED_CONTAINERS=(
"so-elasticsearch"
"so-idstools"
"so-influxdb"
"so-kibana"
"so-kratos"
@@ -49,7 +48,6 @@ container_list() {
"so-elastic-fleet-package-registry"
"so-elasticsearch"
"so-idh"
"so-idstools"
"so-influxdb"
"so-kafka"
"so-kibana"
@@ -62,8 +60,6 @@ container_list() {
"so-soc"
"so-steno"
"so-strelka-backend"
"so-strelka-filestream"
"so-strelka-frontend"
"so-strelka-manager"
"so-suricata"
"so-telegraf"
@@ -71,7 +67,6 @@ container_list() {
)
else
TRUSTED_CONTAINERS=(
"so-idstools"
"so-elasticsearch"
"so-logstash"
"so-nginx"

View File

@@ -129,6 +129,8 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|responded with status-code 503" # telegraf getting 503 from ES during startup
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process_cluster_event_timeout_exception" # logstash waiting for elasticsearch to start
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|not configured for GeoIP" # SO does not bundle the maxminddb with Zeek
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|HTTP 404: Not Found" # Salt loops until Kratos returns 200, during startup Kratos may not be ready
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Cancelling deferred write event maybeFenceReplicas because the event queue is now closed" # Kafka controller log during shutdown/restart
fi
if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
@@ -159,7 +161,9 @@ if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|adding ingest pipeline" # false positive (elasticsearch ingest pipeline names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating index template" # false positive (elasticsearch index or template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|updating component template" # false positive (elasticsearch index or template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|upgrading component template" # false positive (elasticsearch index or template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|upgrading composable template" # false positive (elasticsearch composable template names contain 'error')
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Error while parsing document for index \[.ds-logs-kratos-so-.*object mapping for \[file\]" # false positive (mapping error occuring BEFORE kratos index has rolled over in 2.4.210)
fi
if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then

View File

@@ -85,7 +85,7 @@ function suricata() {
docker run --rm \
-v /opt/so/conf/suricata/suricata.yaml:/etc/suricata/suricata.yaml:ro \
-v /opt/so/conf/suricata/threshold.conf:/etc/suricata/threshold.conf:ro \
-v /opt/so/conf/suricata/rules:/etc/suricata/rules:ro \
-v /opt/so/rules/suricata/:/etc/suricata/rules:ro \
-v ${LOG_PATH}:/var/log/suricata/:rw \
-v ${NSM_PATH}/:/nsm/:rw \
-v "$PCAP:/input.pcap:ro" \

View File

@@ -3,29 +3,16 @@
{# we only want this state to run if the OS is OEL #}
{% if GLOBALS.os == 'OEL' %}
{% set global_ca_text = [] %}
{% set global_ca_server = [] %}
{% set manager = GLOBALS.manager %}
{% set x509dict = salt['mine.get'](manager | lower~'*', 'x509.get_pem_entries') %}
{% for host in x509dict %}
{% if host.split('_')|last in ['manager', 'managersearch', 'standalone', 'import', 'eval'] %}
{% do global_ca_text.append(x509dict[host].get('/etc/pki/ca.crt')|replace('\n', '')) %}
{% do global_ca_server.append(host) %}
{% endif %}
{% endfor %}
{% set trusttheca_text = global_ca_text[0] %}
{% set ca_server = global_ca_server[0] %}
trusted_ca:
x509.pem_managed:
file.managed:
- name: /etc/pki/ca-trust/source/anchors/ca.crt
- text: {{ trusttheca_text }}
- source: salt://ca/files/ca.crt
update_ca_certs:
cmd.run:
- name: update-ca-trust
- onchanges:
- x509: trusted_ca
- file: trusted_ca
{% else %}

View File

@@ -24,11 +24,6 @@ docker:
custom_bind_mounts: []
extra_hosts: []
extra_env: []
'so-idstools':
final_octet: 25
custom_bind_mounts: []
extra_hosts: []
extra_env: []
'so-influxdb':
final_octet: 26
port_bindings:

View File

@@ -6,9 +6,9 @@
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
# include ssl since docker service requires the intca
# docker service requires the ca.crt
include:
- ssl
- ca
dockergroup:
group.present:
@@ -89,10 +89,9 @@ docker_running:
- enable: True
- watch:
- file: docker_daemon
- x509: trusttheca
- require:
- file: docker_daemon
- x509: trusttheca
- file: trusttheca
# Reserve OS ports for Docker proxy in case boot settings are not already applied/present

View File

@@ -41,7 +41,6 @@ docker:
forcedType: "[]string"
so-elastic-fleet: *dockerOptions
so-elasticsearch: *dockerOptions
so-idstools: *dockerOptions
so-influxdb: *dockerOptions
so-kibana: *dockerOptions
so-kratos: *dockerOptions
@@ -102,4 +101,4 @@ docker:
multiline: True
forcedType: "[]string"
so-zeek: *dockerOptions
so-kafka: *dockerOptions
so-kafka: *dockerOptions

View File

@@ -60,7 +60,7 @@ so-elastalert:
- watch:
- file: elastaconf
- onlyif:
- "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 8" {# only run this state if elasticsearch is version 8 #}
- "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 9" {# only run this state if elasticsearch is version 9 #}
delete_so-elastalert_so-status.disabled:
file.uncomment:

View File

@@ -9,6 +9,7 @@
{% from 'docker/docker.map.jinja' import DOCKER %}
include:
- ca
- elasticagent.config
- elasticagent.sostatus
@@ -55,8 +56,10 @@ so-elastic-agent:
{% endif %}
- require:
- file: create-elastic-agent-config
- file: trusttheca
- watch:
- file: create-elastic-agent-config
- file: trusttheca
delete_so-elastic-agent_so-status.disabled:
file.uncomment:

View File

@@ -0,0 +1,34 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{# advanced config_yaml options for elasticfleet logstash output #}
{% set ADV_OUTPUT_LOGSTASH_RAW = ELASTICFLEETMERGED.config.outputs.logstash %}
{% set ADV_OUTPUT_LOGSTASH = {} %}
{% for k, v in ADV_OUTPUT_LOGSTASH_RAW.items() %}
{% if v != "" and v is not none %}
{% if k == 'queue_mem_events' %}
{# rename queue_mem_events to queue.mem.events #}
{% do ADV_OUTPUT_LOGSTASH.update({'queue.mem.events':v}) %}
{% elif k == 'loadbalance' %}
{% if v %}
{# only include loadbalance config when it's True #}
{% do ADV_OUTPUT_LOGSTASH.update({k:v}) %}
{% endif %}
{% else %}
{% do ADV_OUTPUT_LOGSTASH.update({k:v}) %}
{% endif %}
{% endif %}
{% endfor %}
{% set LOGSTASH_CONFIG_YAML_RAW = [] %}
{% if ADV_OUTPUT_LOGSTASH %}
{% for k, v in ADV_OUTPUT_LOGSTASH.items() %}
{% do LOGSTASH_CONFIG_YAML_RAW.append(k ~ ': ' ~ v) %}
{% endfor %}
{% endif %}
{% set LOGSTASH_CONFIG_YAML = LOGSTASH_CONFIG_YAML_RAW | join('\\n') if LOGSTASH_CONFIG_YAML_RAW else '' %}
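
Note the join uses a literal backslash-n, so the consumer has to expand those escapes before the YAML reaches jq; the so-elastic-fleet-outputs-update diff below does this with printf '%b'. The handoff in isolation, with placeholder option values:

# Hypothetical rendered value as it arrives from the Jinja template.
LOGSTASH_PILLAR_CONFIG_YAML='queue.mem.events: 4096\nworker: 2'
LOGSTASH_PILLAR_CONFIG_YAML=$(printf '%b' "$LOGSTASH_PILLAR_CONFIG_YAML")
echo "$LOGSTASH_PILLAR_CONFIG_YAML"
# queue.mem.events: 4096
# worker: 2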

View File

@@ -95,6 +95,9 @@ soresourcesrepoclone:
- rev: 'main'
- depth: 1
- force_reset: True
- retry:
attempts: 3
interval: 10
{% endif %}
elasticdefendconfdir:

View File

@@ -10,12 +10,19 @@ elasticfleet:
grid_enrollment: ''
defend_filters:
enable_auto_configuration: False
outputs:
logstash:
bulk_max_size: ''
worker: ''
queue_mem_events: ''
timeout: ''
loadbalance: False
compression_level: ''
subscription_integrations: False
auto_upgrade_integrations: False
logging:
zeek:
excluded:
- analyzer
- broker
- capture_loss
- cluster

View File

@@ -13,9 +13,11 @@
{% set SERVICETOKEN = salt['pillar.get']('elasticfleet:config:server:es_token','') %}
include:
- ca
- logstash.ssl
- elasticfleet.ssl
- elasticfleet.config
- elasticfleet.sostatus
- ssl
{% if grains.role not in ['so-fleet'] %}
# Wait for Elasticsearch to be ready - no reason to try running Elastic Fleet server if ES is not ready
@@ -32,6 +34,17 @@ so-elastic-fleet-auto-configure-logstash-outputs:
- retry:
attempts: 4
interval: 30
{# Separate from above in order to catch elasticfleet-logstash.crt changes and force an update to the fleet output policy #}
so-elastic-fleet-auto-configure-logstash-outputs-force:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update --certs
- retry:
attempts: 4
interval: 30
- onchanges:
- x509: etc_elasticfleet_logstash_crt
- x509: elasticfleet_kafka_crt
{% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection
@@ -122,6 +135,11 @@ so-elastic-fleet:
{% endfor %}
{% endif %}
- watch:
- file: trusttheca
- x509: etc_elasticfleet_key
- x509: etc_elasticfleet_crt
- require:
- file: trusttheca
- x509: etc_elasticfleet_key
- x509: etc_elasticfleet_crt
{% endif %}

View File

@@ -2,7 +2,7 @@
{%- raw -%}
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "import-zeek-logs",
@@ -10,19 +10,31 @@
"description": "Zeek Import logs",
"policy_id": "so-grid-nodes_general",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/zeek/logs/*.log"
],
"data_stream.dataset": "import",
"tags": [],
"pipeline": "",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}).log$"],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/zeek/logs/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"import.file\").slice(0,-4);\n event.Put(\"@metadata.pipeline\", \"zeek.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: zeek\n imported: true\n- add_tags:\n tags: \"ics\"\n when:\n regexp:\n import.file: \"^bacnet*|^bsap*|^cip*|^cotp*|^dnp3*|^ecat*|^enip*|^modbus*|^opcua*|^profinet*|^s7comm*\"",
"custom": "exclude_files: [\"{%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}.log$\"]\n"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}

View File

@@ -11,36 +11,51 @@
{%- endif -%}
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "kratos-logs",
"namespace": "so",
"description": "Kratos logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/kratos/kratos.log"
],
"data_stream.dataset": "kratos",
"tags": ["so-kratos"],
"pipeline": "kratos",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
{%- if valid_identities -%}
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos\n- if:\n has_fields:\n - identity_id\n then:{% for id, email in identities %}\n - if:\n equals:\n identity_id: \"{{ id }}\"\n then:\n - add_fields:\n target: ''\n fields:\n user.name: \"{{ email }}\"{% endfor %}",
{%- else -%}
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: kratos",
{%- endif -%}
"custom": "pipeline: kratos"
"tags": [
"so-kratos"
],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -2,28 +2,38 @@
{%- raw -%}
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"id": "zeek-logs",
"name": "zeek-logs",
"namespace": "so",
"description": "Zeek logs",
"policy_id": "so-grid-nodes_general",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/zeek/logs/current/*.log"
],
"data_stream.dataset": "zeek",
"tags": [],
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": ["({%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}).log$"],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"/nsm/zeek/logs/current/%{pipeline}.log\"\n field: \"log.file.path\"\n trim_chars: \".log\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"pipeline\");\n event.Put(\"@metadata.pipeline\", \"zeek.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: zeek\n- add_tags:\n tags: \"ics\"\n when:\n regexp:\n pipeline: \"^bacnet*|^bsap*|^cip*|^cotp*|^dnp3*|^ecat*|^enip*|^modbus*|^opcua*|^profinet*|^s7comm*\"",
"custom": "exclude_files: [\"{%- endraw -%}{{ ELASTICFLEETMERGED.logging.zeek.excluded | join('|') }}{%- raw -%}.log$\"]\n"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
@@ -31,4 +41,4 @@
},
"force": true
}
{%- endraw -%}
{%- endraw -%}

View File

@@ -5,7 +5,7 @@
"package": {
"name": "endpoint",
"title": "Elastic Defend",
"version": "8.18.1",
"version": "9.0.2",
"requires_root": true
},
"enabled": true,

View File

@@ -1,26 +1,43 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "hydra-logs",
"namespace": "so",
"description": "Hydra logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/hydra/hydra.log"
],
"data_stream.dataset": "hydra",
"tags": ["so-hydra"],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: iam\n module: hydra",
"custom": "pipeline: hydra"
"pipeline": "hydra",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- add_fields:\n target: event\n fields:\n category: iam\n module: hydra",
"tags": [
"so-hydra"
],
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
@@ -28,3 +45,5 @@
},
"force": true
}

View File

@@ -1,30 +1,44 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "idh-logs",
"namespace": "so",
"description": "IDH integration",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/idh/opencanary.log"
],
"data_stream.dataset": "idh",
"tags": [],
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n add_error_key: true\n- convert:\n fields:\n - {from: \"logtype\", to: \"event.code\", type: \"string\"}\n- drop_fields:\n when:\n equals:\n event.code: \"1001\"\n fields: [\"src_host\", \"src_port\", \"dst_host\", \"dst_port\" ]\n ignore_missing: true\n- rename:\n fields:\n - from: \"src_host\"\n to: \"source.ip\"\n - from: \"src_port\"\n to: \"source.port\"\n - from: \"dst_host\"\n to: \"destination.host\"\n - from: \"dst_port\"\n to: \"destination.port\"\n ignore_missing: true\n- drop_fields:\n fields: '[\"prospector\", \"input\", \"offset\", \"beat\"]'\n- add_fields:\n target: event\n fields:\n category: host\n module: opencanary",
"custom": "pipeline: common"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,33 +1,46 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "import-evtx-logs",
"namespace": "so",
"description": "Import Windows EVTX logs",
"policy_id": "so-grid-nodes_general",
"vars": {},
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/evtx/*.json"
],
"data_stream.dataset": "import",
"custom": "",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-2.6.1\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-3.1.2\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-2.6.1\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-2.6.1\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-3.1.2\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"tags": [
"import"
]
],
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,30 +1,45 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "import-suricata-logs",
"namespace": "so",
"description": "Import Suricata logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/import/*/suricata/eve*.json"
],
"data_stream.dataset": "import",
"pipeline": "suricata.common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata\n imported: true\n- dissect:\n tokenizer: \"/nsm/import/%{import.id}/suricata/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n",
"tags": [],
"processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata\n imported: true\n- dissect:\n tokenizer: \"/nsm/import/%{import.id}/suricata/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"",
"custom": "pipeline: suricata.common"
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,18 +1,17 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "rita-logs",
"namespace": "so",
"description": "RITA Logs",
"policy_id": "so-grid-nodes_general",
"vars": {},
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
@@ -20,15 +19,28 @@
"/nsm/rita/exploded-dns.csv",
"/nsm/rita/long-connections.csv"
],
"exclude_files": [],
"ignore_older": "72h",
"data_stream.dataset": "rita",
"tags": [],
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"/nsm/rita/%{pipeline}.csv\"\n field: \"log.file.path\"\n trim_chars: \".csv\"\n target_prefix: \"\"\n- script:\n lang: javascript\n source: >\n function process(event) {\n var pl = event.Get(\"pipeline\").split(\"-\");\n if (pl.length > 1) {\n pl = pl[1];\n }\n else {\n pl = pl[0];\n }\n event.Put(\"@metadata.pipeline\", \"rita.\" + pl);\n }\n- add_fields:\n target: event\n fields:\n category: network\n module: rita",
"custom": "exclude_lines: ['^Score', '^Source', '^Domain', '^No results']"
"tags": [],
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
}
},
"force": true
}

View File

@@ -1,29 +1,41 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "so-ip-mappings",
"namespace": "so",
"description": "IP Description mappings",
"policy_id": "so-grid-nodes_general",
"vars": {},
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/custom-mappings/ip-descriptions.csv"
],
"data_stream.dataset": "hostnamemappings",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_csv_fields:\n fields:\n message: decoded.csv\n separator: \",\"\n ignore_missing: false\n overwrite_keys: true\n trim_leading_space: true\n fail_on_error: true\n\n- extract_array:\n field: decoded.csv\n mappings:\n so.ip_address: '0'\n so.description: '1'\n\n- script:\n lang: javascript\n source: >\n function process(event) {\n var ip = event.Get('so.ip_address');\n var validIpRegex = /^((25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)$/\n if (!validIpRegex.test(ip)) {\n event.Cancel();\n }\n }\n- fingerprint:\n fields: [\"so.ip_address\"]\n target_field: \"@metadata._id\"\n",
"tags": [
"so-ip-mappings"
],
"processors": "- decode_csv_fields:\n fields:\n message: decoded.csv\n separator: \",\"\n ignore_missing: false\n overwrite_keys: true\n trim_leading_space: true\n fail_on_error: true\n\n- extract_array:\n field: decoded.csv\n mappings:\n so.ip_address: '0'\n so.description: '1'\n\n- script:\n lang: javascript\n source: >\n function process(event) {\n var ip = event.Get('so.ip_address');\n var validIpRegex = /^((25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)\\.){3}(25[0-5]|2[0-4]\\d|1\\d{2}|[1-9]?\\d)$/\n if (!validIpRegex.test(ip)) {\n event.Cancel();\n }\n }\n- fingerprint:\n fields: [\"so.ip_address\"]\n target_field: \"@metadata._id\"\n",
"custom": ""
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
@@ -31,5 +43,3 @@
},
"force": true
}

View File

@@ -1,30 +1,44 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "soc-auth-sync-logs",
"namespace": "so",
"description": "Security Onion - Elastic Auth Sync - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/sync.log"
],
"data_stream.dataset": "soc",
"tags": ["so-soc"],
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"%{event.action}\"\n field: \"message\"\n target_prefix: \"\"\n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: auth_sync",
"custom": "pipeline: common"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,35 +1,48 @@
{
"policy_id": "so-grid-nodes_general",
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "soc-detections-logs",
"description": "Security Onion Console - Detections Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/detections_runtime-status_sigma.log",
"/opt/so/log/soc/detections_runtime-status_yara.log"
],
"exclude_files": [],
"ignore_older": "72h",
"data_stream.dataset": "soc",
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: detections\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
"tags": [
"so-soc"
],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: detections\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
"custom": "pipeline: common"
"recursive_glob": true,
"ignore_older": "72h",
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,30 +1,46 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "soc-salt-relay-logs",
"namespace": "so",
"description": "Security Onion - Salt Relay - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/salt-relay.log"
],
"data_stream.dataset": "soc",
"tags": ["so-soc"],
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- dissect:\n tokenizer: \"%{soc.ts} | %{event.action}\"\n field: \"message\"\n target_prefix: \"\"\n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: salt_relay",
"custom": "pipeline: common"
"tags": [
"so-soc"
],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,30 +1,44 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "soc-sensoroni-logs",
"namespace": "so",
"description": "Security Onion - Sensoroni - Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/sensoroni/sensoroni.log"
],
"data_stream.dataset": "soc",
"tags": [],
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"sensoroni\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: sensoroni\n- rename:\n fields:\n - from: \"sensoroni.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"sensoroni.fields.status\"\n to: \"http.response.status_code\"\n - from: \"sensoroni.fields.method\"\n to: \"http.request.method\"\n - from: \"sensoroni.fields.path\"\n to: \"url.path\"\n - from: \"sensoroni.message\"\n to: \"event.action\"\n - from: \"sensoroni.level\"\n to: \"log.level\"\n ignore_missing: true",
"custom": "pipeline: common"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
"force": true
}

View File

@@ -1,30 +1,46 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "soc-server-logs",
"namespace": "so",
"description": "Security Onion Console Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/sensoroni-server.log"
],
"data_stream.dataset": "soc",
"tags": ["so-soc"],
"pipeline": "common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: server\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
"custom": "pipeline: common"
"tags": [
"so-soc"
],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,30 +1,44 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "strelka-logs",
"namespace": "so",
"description": "Strelka logs",
"description": "Strelka Logs",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/strelka/log/strelka.log"
],
"data_stream.dataset": "strelka",
"tags": [],
"pipeline": "strelka.file",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- add_fields:\n target: event\n fields:\n category: file\n module: strelka",
"custom": "pipeline: strelka.file"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -1,30 +1,44 @@
{
"package": {
"name": "log",
"name": "filestream",
"version": ""
},
"name": "suricata-logs",
"namespace": "so",
"description": "Suricata integration",
"policy_id": "so-grid-nodes_general",
"namespace": "so",
"inputs": {
"logs-logfile": {
"filestream-filestream": {
"enabled": true,
"streams": {
"log.logs": {
"filestream.generic": {
"enabled": true,
"vars": {
"paths": [
"/nsm/suricata/eve*.json"
],
"data_stream.dataset": "suricata",
"tags": [],
"data_stream.dataset": "filestream.generic",
"pipeline": "suricata.common",
"parsers": "#- ndjson:\n# target: \"\"\n# message_key: msg\n#- multiline:\n# type: count\n# count_lines: 3\n",
"exclude_files": [
"\\.gz$"
],
"include_files": [],
"processors": "- add_fields:\n target: event\n fields:\n category: network\n module: suricata",
"custom": "pipeline: suricata.common"
"tags": [],
"recursive_glob": true,
"clean_inactive": -1,
"harvester_limit": 0,
"fingerprint": false,
"fingerprint_offset": 0,
"fingerprint_length": "64",
"file_identity_native": true,
"exclude_lines": [],
"include_lines": []
}
}
}
}
},
"force": true
}
}

View File

@@ -2,26 +2,32 @@
# or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
# this file except in compliance with the Elastic License 2.0.
{%- set GRIDNODETOKENGENERAL = salt['pillar.get']('global:fleet_grid_enrollment_token_general') -%}
{%- set GRIDNODETOKENHEAVY = salt['pillar.get']('global:fleet_grid_enrollment_token_heavy') -%}
{% set GRIDNODETOKEN = salt['pillar.get']('global:fleet_grid_enrollment_token_general') -%}
{% if grains.role == 'so-heavynode' %}
{% set GRIDNODETOKEN = salt['pillar.get']('global:fleet_grid_enrollment_token_heavy') -%}
{% endif %}
{% set AGENT_STATUS = salt['service.available']('elastic-agent') %}
{% if not AGENT_STATUS %}
{% set AGENT_EXISTS = salt['file.file_exists']('/opt/Elastic/Agent/elastic-agent') %}
{% if grains.role not in ['so-heavynode'] %}
run_installer:
cmd.script:
- name: salt://elasticfleet/files/so_agent-installers/so-elastic-agent_linux_amd64
- cwd: /opt/so
- args: -token={{ GRIDNODETOKENGENERAL }}
- retry: True
{% else %}
run_installer:
cmd.script:
- name: salt://elasticfleet/files/so_agent-installers/so-elastic-agent_linux_amd64
- cwd: /opt/so
- args: -token={{ GRIDNODETOKENHEAVY }}
- retry: True
{% endif %}
{% if not AGENT_STATUS or not AGENT_EXISTS %}
pull_agent_installer:
file.managed:
- name: /opt/so/so-elastic-agent_linux_amd64
- source: salt://elasticfleet/files/so_agent-installers/so-elastic-agent_linux_amd64
- mode: 755
- makedirs: True
run_installer:
cmd.run:
- name: ./so-elastic-agent_linux_amd64 -token={{ GRIDNODETOKEN }} -force
- cwd: /opt/so
- retry:
attempts: 3
interval: 20
cleanup_agent_installer:
file.absent:
- name: /opt/so/so-elastic-agent_linux_amd64
{% endif %}
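
The reworked state installs whenever the service is unavailable or the binary is missing, which covers the manually deleted /opt/Elastic/Agent/ case from the commit message. A rough shell equivalent of that check (systemctl cat stands in for Salt's service.available; $TOKEN and the staged installer are assumptions):

if ! systemctl cat elastic-agent.service >/dev/null 2>&1 \
   || [ ! -f /opt/Elastic/Agent/elastic-agent ]; then
    cd /opt/so
    ./so-elastic-agent_linux_amd64 -token="$TOKEN" -force
fi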

View File

@@ -21,6 +21,7 @@
'azure_application_insights.app_state': 'azure.app_state',
'azure_billing.billing': 'azure.billing',
'azure_functions.metrics': 'azure.function',
'azure_ai_foundry.metrics': 'azure.ai_foundry',
'azure_metrics.compute_vm_scaleset': 'azure.compute_vm_scaleset',
'azure_metrics.compute_vm': 'azure.compute_vm',
'azure_metrics.container_instance': 'azure.container_instance',
@@ -121,6 +122,9 @@
"phases": {
"cold": {
"actions": {
"allocate":{
"number_of_replicas": ""
},
"set_priority": {"priority": 0}
},
"min_age": "60d"
@@ -137,12 +141,31 @@
"max_age": "30d",
"max_primary_shard_size": "50gb"
},
"forcemerge":{
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 100}
},
"min_age": "0ms"
},
"warm": {
"actions": {
"allocate": {
"number_of_replicas": ""
},
"forcemerge": {
"max_num_segments": ""
},
"shrink":{
"max_primary_shard_size": "",
"method": "COUNT",
"number_of_shards": ""
},
"set_priority": {"priority": 50}
},
"min_age": "30d"

View File

@@ -50,6 +50,46 @@ elasticfleet:
global: True
forcedType: bool
helpLink: elastic-fleet.html
outputs:
logstash:
bulk_max_size:
description: The maximum number of events to bulk in a single Logstash request.
global: True
forcedType: int
advanced: True
helpLink: elastic-fleet.html
worker:
description: The number of workers per configured host publishing events.
global: True
forcedType: int
advanced: true
helpLink: elastic-fleet.html
queue_mem_events:
title: queued events
description: The number of events the queue can store. This value should be evenly divisible by 'bulk_max_size' to avoid sending partial batches to the output.
global: True
forcedType: int
advanced: True
helpLink: elastic-fleet.html
timeout:
description: The number of seconds to wait for responses from the Logstash server before timing out, e.g. 30s.
regex: ^[0-9]+s$
advanced: True
global: True
helpLink: elastic-fleet.html
loadbalance:
description: If true and multiple Logstash hosts are configured, the output plugin load balances published events onto all Logstash hosts. If false, the output plugin sends all events to one host (determined at random) and switches to another host if the selected one becomes unresponsive.
forcedType: bool
advanced: True
global: True
helpLink: elastic-fleet.html
compression_level:
description: The gzip compression level. The compression level must be in the range of 1 (best speed) to 9 (best compression).
regex: ^[1-9]$
forcedType: int
advanced: True
global: True
helpLink: elastic-fleet.html
server:
custom_fqdn:
description: Custom FQDN for Agents to connect to. One per line.

salt/elasticfleet/ssl.sls Normal file
View File

@@ -0,0 +1,186 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% from 'ca/map.jinja' import CA %}
{% if GLOBALS.is_manager or GLOBALS.role in ['so-heavynode', 'so-fleet', 'so-receiver'] %}
{% if grains['role'] not in [ 'so-heavynode', 'so-receiver'] %}
# Start -- Elastic Fleet Host Cert
etc_elasticfleet_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-server.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-server.key') -%}
- prereq:
- x509: etc_elasticfleet_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-server.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-server.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }},DNS:{{ GLOBALS.url_base }},IP:{{ GLOBALS.node_ip }}{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %},DNS:{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(',DNS:') }}{% endif %}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
efperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- group: 939
chownelasticfleetcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-server.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Host Cert
{% endif %} # endif is for not including HeavyNodes & Receivers
# Start -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
etc_elasticfleet_agent_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-agent.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-agent.key') -%}
- prereq:
- x509: etc_elasticfleet_agent_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
etc_elasticfleet_agent_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-agent.crt
- ca_server: {{ CA.server }}
- signing_policy: elasticfleet
- private_key: /etc/pki/elasticfleet-agent.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-agent.key -topk8 -out /etc/pki/elasticfleet-agent.p8 -nocrypt"
- onchanges:
- x509: etc_elasticfleet_agent_key
efagentperms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- group: 939
chownelasticfleetagentcrt:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.crt
- mode: 640
- user: 947
- group: 939
chownelasticfleetagentkey:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-agent.key
- mode: 640
- user: 947
- group: 939
# End -- Elastic Fleet Client Cert for Agent (Mutual Auth with Logstash Output)
{% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managerhype', 'so-managersearch', 'so-standalone'] %}
elasticfleet_kafka_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-kafka.key') -%}
- prereq:
- x509: elasticfleet_kafka_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-kafka.crt
- ca_server: {{ CA.server }}
- signing_policy: kafka
- private_key: /etc/pki/elasticfleet-kafka.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_cert_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.crt
- mode: 640
- user: 947
- group: 939
elasticfleet_kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.key
- mode: 640
- user: 947
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
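
After this state runs, the Fleet server certificate should carry the hostname, URL base, node IP, and any custom FQDNs in its SAN. A spot check with openssl (the -ext flag needs OpenSSL 1.1.1 or newer):

openssl x509 -in /etc/pki/elasticfleet-server.crt -noout -ext subjectAltName
openssl x509 -in /etc/pki/elasticfleet-server.crt -noout -enddate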

View File

@@ -14,7 +14,7 @@ if ! is_manager_node; then
fi
# Get current list of Grid Node Agents that need to be upgraded
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%20:%20%22{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%22%20and%20policy_id%20:%20%22so-grid-nodes_general%22&showInactive=false&getStatusSummary=true")
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents?perPage=20&page=1&kuery=NOT%20agent.version%20:%20%22{{ELASTICSEARCHDEFAULTS.elasticsearch.version}}%22%20and%20policy_id%20:%20%22so-grid-nodes_general%22&showInactive=false&getStatusSummary=true" --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.page' <<< "$RAW_JSON")
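
With --fail added, curl returns an empty body on HTTP errors rather than an error page, so the checksum-style jq guard is what decides whether the script proceeds. The pattern in isolation, with the query string trimmed to a placeholder:

RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agents" \
    --retry 3 --retry-delay 30 --fail 2>/dev/null)
if ! jq -e '.page' <<< "$RAW_JSON" >/dev/null 2>&1; then
    echo "Fleet API did not return usable data, bailing" >&2
    exit 1
fi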

View File

@@ -26,7 +26,7 @@ function update_es_urls() {
}
# Get current list of Fleet Elasticsearch URLs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_elasticsearch')
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_elasticsearch' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")

View File

@@ -86,7 +86,7 @@ if [[ -f $STATE_FILE_SUCCESS ]]; then
latest_package_list=$(/usr/sbin/so-elastic-fleet-package-list)
echo '{ "packages" : []}' > $BULK_INSTALL_PACKAGE_LIST
rm -f $INSTALLED_PACKAGE_LIST
echo $latest_package_list | jq '{packages: [.items[] | {name: .name, latest_version: .version, installed_version: .savedObject.attributes.install_version, subscription: .conditions.elastic.subscription }]}' >> $INSTALLED_PACKAGE_LIST
echo $latest_package_list | jq '{packages: [.items[] | {name: .name, latest_version: .version, installed_version: .installationInfo.version, subscription: .conditions.elastic.subscription }]}' >> $INSTALLED_PACKAGE_LIST
while read -r package; do
# get package details

View File

@@ -3,11 +3,36 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0; you may not use
# this file except in compliance with the Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{%- from 'vars/globals.map.jinja' import GLOBALS %}
{%- from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{%- from 'elasticfleet/config.map.jinja' import LOGSTASH_CONFIG_YAML %}
. /usr/sbin/so-common
FORCE_UPDATE=false
UPDATE_CERTS=false
LOGSTASH_PILLAR_CONFIG_YAML="{{ LOGSTASH_CONFIG_YAML }}"
LOGSTASH_PILLAR_STATE_FILE="/opt/so/state/esfleet_logstash_config_pillar"
while [[ $# -gt 0 ]]; do
case $1 in
-f|--force)
FORCE_UPDATE=true
shift
;;
-c|--certs)
UPDATE_CERTS=true
FORCE_UPDATE=true
shift
;;
*)
echo "Unknown option $1"
echo "Usage: $0 [-f|--force] [-c|--certs]"
exit 1
;;
esac
done
# Only run on Managers
if ! is_manager_node; then
printf "Not a Manager Node... Exiting"
@@ -17,17 +42,49 @@ fi
function update_logstash_outputs() {
if logstash_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_logstash" --retry 3 --retry-delay 10 --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$logstash_policy" | jq -r '.item.ssl')
LOGSTASHKEY=$(openssl rsa -in /etc/pki/elasticfleet-logstash.key)
LOGSTASHCRT=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt)
LOGSTASHCA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
# Revert escaped \\n to \n for jq
LOGSTASH_PILLAR_CONFIG_YAML=$(printf '%b' "$LOGSTASH_PILLAR_CONFIG_YAML")
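# e.g. (hypothetical) a pillar rendered as 'ssl:\\nverification_mode: full' becomes two real YAML lines before being passed to jq via --arg CONFIG_YAML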
if SECRETS=$(echo "$logstash_policy" | jq -er '.item.secrets' 2>/dev/null); then
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SECRETS "$SECRETS" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
if [[ "$UPDATE_CERTS" != "true" ]]; then
# Reuse existing secret
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
--argjson SECRETS "$SECRETS" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
# Update certs, creating new secret
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": {"certificate": $LOGSTASHCRT,"certificate_authorities":[ $LOGSTASHCA ]},"secrets": {"ssl":{"key": $LOGSTASHKEY }}}')
fi
else
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":"","ssl": $SSL_CONFIG}')
if [[ "$UPDATE_CERTS" != "true" ]]; then
# Reuse existing ssl config
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": $SSL_CONFIG}')
else
# Update ssl config
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg CONFIG_YAML "$LOGSTASH_PILLAR_CONFIG_YAML" \
--arg LOGSTASHKEY "$LOGSTASHKEY" \
--arg LOGSTASHCRT "$LOGSTASHCRT" \
--arg LOGSTASHCA "$LOGSTASHCA" \
'{"name":"grid-logstash","type":"logstash","hosts": $UPDATEDLIST,"is_default":true,"is_default_monitoring":true,"config_yaml":$CONFIG_YAML,"ssl": {"certificate": $LOGSTASHCRT,"key": $LOGSTASHKEY,"certificate_authorities":[ $LOGSTASHCA ]}}')
fi
fi
fi
@@ -38,19 +95,42 @@ function update_kafka_outputs() {
# Make sure SSL configuration is included in policy updates for Kafka output. SSL is configured in so-elastic-fleet-setup
if kafka_policy=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" --fail 2>/dev/null); then
SSL_CONFIG=$(echo "$kafka_policy" | jq -r '.item.ssl')
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
if SECRETS=$(echo "$kafka_policy" | jq -er '.item.secrets' 2>/dev/null); then
# Update policy when fleet has secrets enabled
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
--argjson SECRETS "$SECRETS" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
if [[ "$UPDATE_CERTS" != "true" ]]; then
# Update policy when fleet has secrets enabled
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
--argjson SECRETS "$SECRETS" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG,"secrets": $SECRETS}')
else
# Update certs, creating new secret
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKACA "$KAFKACA" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": {"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"secrets": {"ssl":{"key": $KAFKAKEY }}}')
fi
else
# Update policy when fleet has secrets disabled or policy hasn't been force updated
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
if [[ "$UPDATE_CERTS" != "true" ]]; then
# Update policy when fleet has secrets disabled or policy hasn't been force updated
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
else
# Update ssl config
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKACA "$KAFKACA" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": { "certificate_authorities": [ $KAFKACA ], "certificate": $KAFKACRT, "key": $KAFKAKEY, "verification_mode": "full" }}')
fi
fi
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
@@ -62,7 +142,7 @@ function update_kafka_outputs() {
{% if GLOBALS.pipeline == "KAFKA" %}
# Get current list of Kafka Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka')
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
@@ -73,7 +153,7 @@ function update_kafka_outputs() {
# Get the current list of kafka outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
CURRENT_HASH=$(sha256sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
@@ -88,7 +168,7 @@ function update_kafka_outputs() {
{# If global pipeline isn't set to KAFKA then assume default of REDIS / logstash #}
{% else %}
# Get current list of Logstash Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash')
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
@@ -96,10 +176,19 @@ function update_kafka_outputs() {
printf "Failed to query for current Logstash Outputs..."
exit 1
fi
# logstash adv config - compare pillar to last state file value
if [[ -f "$LOGSTASH_PILLAR_STATE_FILE" ]]; then
PREVIOUS_LOGSTASH_PILLAR_CONFIG_YAML=$(cat "$LOGSTASH_PILLAR_STATE_FILE")
if [[ "$LOGSTASH_PILLAR_CONFIG_YAML" != "$PREVIOUS_LOGSTASH_PILLAR_CONFIG_YAML" ]]; then
echo "Logstash pillar config has changed - forcing update"
FORCE_UPDATE=true
fi
echo "$LOGSTASH_PILLAR_CONFIG_YAML" > "$LOGSTASH_PILLAR_STATE_FILE"
fi
# Get the current list of Logstash outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
CURRENT_HASH=$(sha256sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
@@ -148,10 +237,10 @@ function update_kafka_outputs() {
# Sort & hash the new list of Logstash Outputs
NEW_LIST_JSON=$(jq --compact-output --null-input '$ARGS.positional' --args -- "${NEW_LIST[@]}")
NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
NEW_HASH=$(sha256sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
# Compare the current & new list of outputs - if different, update the Logstash outputs
if [ "$NEW_HASH" = "$CURRENT_HASH" ]; then
if [[ "$NEW_HASH" = "$CURRENT_HASH" ]] && [[ "$FORCE_UPDATE" != "true" ]]; then
printf "\nHashes match - no update needed.\n"
printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n"

View File

@@ -241,9 +241,11 @@ printf '%s\n'\
"" >> "$global_pillar_file"
# Call Elastic-Fleet Salt State
printf "\nApplying elasticfleet state"
salt-call state.apply elasticfleet queue=True
# Generate installers & install Elastic Agent on the node
so-elastic-agent-gen-installers
printf "\nApplying elasticfleet.install_agent_grid state"
salt-call state.apply elasticfleet.install_agent_grid queue=True
exit 0

View File

@@ -23,7 +23,7 @@ function update_fleet_urls() {
}
# Get current list of Fleet Server URLs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default')
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default' --retry 3 --retry-delay 30 --fail 2>/dev/null)
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")

View File

@@ -47,7 +47,7 @@ if ! kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://l
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
'{"name":"grid-kafka", "id":"so-manager_kafka","type":"kafka","hosts":[ $MANAGER_IP ],"is_default":false,"is_default_monitoring":false,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topic":"default-securityonion","headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
@@ -67,7 +67,7 @@ elif kafka_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://l
--arg ENABLED_DISABLED "$ENABLED_DISABLED"\
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
--argjson HOSTS "$HOSTS" \
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topics":[{"topic":"default-securityonion"}],"headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
'{"name":"grid-kafka","type":"kafka","hosts":$HOSTS,"is_default":$ENABLED_DISABLED,"is_default_monitoring":$ENABLED_DISABLED,"config_yaml":"","ssl":{"certificate_authorities":[ $KAFKACA ],"certificate": $KAFKACRT ,"key":"","verification_mode":"full"},"proxy_id":null,"client_id":"Elastic","version": $KAFKA_OUTPUT_VERSION ,"compression":"none","auth_type":"ssl","partition":"round_robin","round_robin":{"group_events":10},"topic":"default-securityonion","headers":[{"key":"","value":""}],"timeout":30,"broker_timeout":30,"required_acks":1,"secrets":{"ssl":{"key": $KAFKAKEY }}}'
)
if ! response=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" --fail 2>/dev/null); then
echo -e "\nFailed to force update to Elastic Fleet output policy for Kafka...\n"

View File

@@ -26,14 +26,14 @@ catrustscript:
GLOBALS: {{ GLOBALS }}
{% endif %}
cacertz:
elasticsearch_cacerts:
file.managed:
- name: /opt/so/conf/ca/cacerts
- source: salt://elasticsearch/cacerts
- user: 939
- group: 939
capemz:
elasticsearch_capems:
file.managed:
- name: /opt/so/conf/ca/tls-ca-bundle.pem
- source: salt://elasticsearch/tls-ca-bundle.pem

View File

@@ -5,11 +5,6 @@
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- ssl
- elasticsearch.ca
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}

View File

@@ -1,6 +1,6 @@
elasticsearch:
enabled: false
version: 8.18.8
version: 9.0.8
index_clean: true
config:
action:
@@ -72,6 +72,8 @@ elasticsearch:
actions:
set_priority:
priority: 0
allocate:
number_of_replicas: ""
min_age: 60d
delete:
actions:
@@ -84,11 +86,25 @@ elasticsearch:
max_primary_shard_size: 50gb
set_priority:
priority: 100
forcemerge:
max_num_segments: ""
shrink:
max_primary_shard_size: ""
method: COUNT
number_of_shards: ""
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
forcemerge:
max_num_segments: ""
shrink:
max_primary_shard_size: ""
method: COUNT
number_of_shards: ""
allocate:
number_of_replicas: ""
min_age: 30d
so-case:
index_sorting: false
@@ -245,7 +261,6 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-detection:
index_sorting: false
index_template:
@@ -284,6 +299,19 @@ elasticsearch:
hot:
actions: {}
min_age: 0ms
sos-backup:
index_sorting: false
index_template:
composed_of: []
ignore_missing_component_templates: []
index_patterns:
- sos-backup-*
priority: 501
template:
settings:
index:
number_of_replicas: 0
number_of_shards: 1
so-assistant-chat:
index_sorting: false
index_template:
@@ -584,7 +612,6 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-import:
index_sorting: false
index_template:
@@ -830,53 +857,11 @@ elasticsearch:
composed_of:
- agent-mappings
- dtc-agent-mappings
- base-mappings
- dtc-base-mappings
- client-mappings
- dtc-client-mappings
- container-mappings
- destination-mappings
- dtc-destination-mappings
- pb-override-destination-mappings
- dll-mappings
- dns-mappings
- dtc-dns-mappings
- ecs-mappings
- dtc-ecs-mappings
- error-mappings
- event-mappings
- dtc-event-mappings
- file-mappings
- dtc-file-mappings
- group-mappings
- host-mappings
- dtc-host-mappings
- http-mappings
- dtc-http-mappings
- log-mappings
- metadata-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
- dtc-observer-mappings
- organization-mappings
- package-mappings
- process-mappings
- dtc-process-mappings
- related-mappings
- rule-mappings
- dtc-rule-mappings
- server-mappings
- service-mappings
- dtc-service-mappings
- source-mappings
- dtc-source-mappings
- pb-override-source-mappings
- threat-mappings
- tls-mappings
- url-mappings
- user_agent-mappings
- dtc-user_agent-mappings
- common-settings
- common-dynamic-mappings
data_stream:
@@ -932,7 +917,6 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-hydra:
close: 30
delete: 365
@@ -1043,7 +1027,6 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-lists:
index_sorting: false
index_template:
@@ -1127,6 +1110,8 @@ elasticsearch:
actions:
set_priority:
priority: 0
allocate:
number_of_replicas: ""
min_age: 60d
delete:
actions:
@@ -1139,11 +1124,25 @@ elasticsearch:
max_primary_shard_size: 50gb
set_priority:
priority: 100
forcemerge:
max_num_segments: ""
shrink:
max_primary_shard_size: ""
method: COUNT
number_of_shards: ""
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
allocate:
number_of_replicas: ""
forcemerge:
max_num_segments: ""
shrink:
max_primary_shard_size: ""
method: COUNT
number_of_shards: ""
min_age: 30d
so-logs-detections_x_alerts:
index_sorting: false
@@ -3123,7 +3122,6 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-logs-system_x_application:
index_sorting: false
index_template:

View File

@@ -14,6 +14,9 @@
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
include:
- ca
- elasticsearch.ca
- elasticsearch.ssl
- elasticsearch.config
- elasticsearch.sostatus
@@ -61,11 +64,7 @@ so-elasticsearch:
- /nsm/elasticsearch:/usr/share/elasticsearch/data:rw
- /opt/so/log/elasticsearch:/var/log/elasticsearch:rw
- /opt/so/conf/ca/cacerts:/usr/share/elasticsearch/jdk/lib/security/cacerts:ro
{% if GLOBALS.is_manager %}
- /etc/pki/ca.crt:/usr/share/elasticsearch/config/ca.crt:ro
{% else %}
- /etc/pki/tls/certs/intca.crt:/usr/share/elasticsearch/config/ca.crt:ro
{% endif %}
- /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro
- /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro
- /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
@@ -82,22 +81,21 @@ so-elasticsearch:
{% endfor %}
{% endif %}
- watch:
- file: cacertz
- file: trusttheca
- x509: elasticsearch_crt
- x509: elasticsearch_key
- file: elasticsearch_cacerts
- file: esyml
- require:
- file: trusttheca
- x509: elasticsearch_crt
- x509: elasticsearch_key
- file: elasticsearch_cacerts
- file: esyml
- file: eslog4jfile
- file: nsmesdir
- file: eslogdir
- file: cacertz
- x509: /etc/pki/elasticsearch.crt
- x509: /etc/pki/elasticsearch.key
- file: elasticp12perms
{% if GLOBALS.is_manager %}
- x509: pki_public_ca_crt
{% else %}
- x509: trusttheca
{% endif %}
- cmd: auth_users_roles_inode
- cmd: auth_users_inode

View File

@@ -1,9 +1,90 @@
{
"description" : "kratos",
"processors" : [
{"set":{"field":"audience","value":"access","override":false,"ignore_failure":true}},
{"set":{"field":"event.dataset","ignore_empty_value":true,"ignore_failure":true,"value":"kratos.{{{audience}}}","media_type":"text/plain"}},
{"set":{"field":"event.action","ignore_failure":true,"copy_from":"msg" }},
{ "pipeline": { "name": "common" } }
]
"description": "kratos",
"processors": [
{
"set": {
"field": "audience",
"value": "access",
"override": false,
"ignore_failure": true
}
},
{
"set": {
"field": "event.dataset",
"ignore_empty_value": true,
"ignore_failure": true,
"value": "kratos.{{{audience}}}",
"media_type": "text/plain"
}
},
{
"set": {
"field": "event.action",
"ignore_failure": true,
"copy_from": "msg"
}
},
{
"rename": {
"field": "http_request",
"target_field": "http.request",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"rename": {
"field": "http_response",
"target_field": "http.response",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"rename": {
"field": "http.request.path",
"target_field": "http.uri",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"rename": {
"field": "http.request.method",
"target_field": "http.method",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"rename": {
"field": "http.request.query",
"target_field": "http.query",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"rename": {
"field": "http.request.headers.user-agent",
"target_field": "http.useragent",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"pipeline": {
"name": "common"
}
}
]
}
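
Ingest pipelines like this one are easiest to sanity-check with the _simulate API before real events flow through them. A rough sketch (the sample document fields are hypothetical, the host/auth may need adjusting for the grid, and this assumes the pipeline is registered under the id kratos):

curl -K /opt/so/conf/elasticsearch/curl.config -s -XPOST "https://localhost:9200/_ingest/pipeline/kratos/_simulate" \
  -H 'Content-Type: application/json' \
  -d '{"docs":[{"_source":{"msg":"authentication succeeded","http_request":{"path":"/login","method":"POST"}}}]}' | jq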

View File

@@ -1,15 +1,79 @@
{
"description" : "suricata.alert",
"processors" : [
{ "set": { "if": "ctx.event?.imported != true", "field": "_index", "value": "logs-suricata.alerts-so" } },
{ "set": { "field": "tags","value": "alert" }},
{ "rename":{ "field": "message2.alert", "target_field": "rule", "ignore_failure": true } },
{ "rename":{ "field": "rule.signature", "target_field": "rule.name", "ignore_failure": true } },
{ "rename":{ "field": "rule.ref", "target_field": "rule.version", "ignore_failure": true } },
{ "rename":{ "field": "rule.signature_id", "target_field": "rule.uuid", "ignore_failure": true } },
{ "rename":{ "field": "rule.signature_id", "target_field": "rule.signature", "ignore_failure": true } },
{ "rename":{ "field": "message2.payload_printable", "target_field": "network.data.decoded", "ignore_failure": true } },
{ "dissect": { "field": "rule.rule", "pattern": "%{?prefix}content:\"%{dns.query_name}\"%{?remainder}", "ignore_missing": true, "ignore_failure": true } },
{ "pipeline": { "name": "common.nids" } }
]
"description": "suricata.alert",
"processors": [
{
"set": {
"if": "ctx.event?.imported != true",
"field": "_index",
"value": "logs-suricata.alerts-so"
}
},
{
"set": {
"field": "tags",
"value": "alert"
}
},
{
"rename": {
"field": "message2.alert",
"target_field": "rule",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "rule.signature",
"target_field": "rule.name",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "rule.ref",
"target_field": "rule.version",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "rule.signature_id",
"target_field": "rule.uuid",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "rule.signature_id",
"target_field": "rule.signature",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.payload_printable",
"target_field": "network.data.decoded",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"dissect": {
"field": "rule.rule",
"pattern": "%{?prefix}content:\"%{dns.query_name}\"%{?remainder}",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"pipeline": {
"name": "common.nids"
}
}
]
}

View File

@@ -1,30 +1,155 @@
{
"description" : "suricata.common",
"processors" : [
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.pkt_src", "target_field": "network.packet_source","ignore_failure": true } },
{ "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_failure": true } },
{ "rename": { "field": "message2.in_iface", "target_field": "observer.ingress.interface.name", "ignore_failure": true } },
{ "rename": { "field": "message2.flow_id", "target_field": "log.id.uid", "ignore_failure": true } },
{ "rename": { "field": "message2.src_ip", "target_field": "source.ip", "ignore_failure": true } },
{ "rename": { "field": "message2.src_port", "target_field": "source.port", "ignore_failure": true } },
{ "rename": { "field": "message2.dest_ip", "target_field": "destination.ip", "ignore_failure": true } },
{ "rename": { "field": "message2.dest_port", "target_field": "destination.port", "ignore_failure": true } },
{ "rename": { "field": "message2.vlan", "target_field": "network.vlan.id", "ignore_failure": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ "rename": { "field": "message2.xff", "target_field": "xff.ip", "ignore_missing": true } },
{ "set": { "field": "event.dataset", "value": "{{ message2.event_type }}" } },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" } },
{ "set": { "field": "event.ingested", "value": "{{@timestamp}}" } },
{ "date": { "field": "message2.timestamp", "target_field": "@timestamp", "formats": ["ISO8601", "UNIX"], "timezone": "UTC", "ignore_failure": true } },
{ "remove":{ "field": "agent", "ignore_failure": true } },
{"append":{"field":"related.ip","value":["{{source.ip}}","{{destination.ip}}"],"allow_duplicates":false,"ignore_failure":true}},
{
"script": {
"source": "boolean isPrivate(def ip) { if (ip == null) return false; int dot1 = ip.indexOf('.'); if (dot1 == -1) return false; int dot2 = ip.indexOf('.', dot1 + 1); if (dot2 == -1) return false; int first = Integer.parseInt(ip.substring(0, dot1)); if (first == 10) return true; if (first == 192 && ip.startsWith('168.', dot1 + 1)) return true; if (first == 172) { int second = Integer.parseInt(ip.substring(dot1 + 1, dot2)); return second >= 16 && second <= 31; } return false; } String[] fields = new String[] {\"source\", \"destination\"}; for (int i = 0; i < fields.length; i++) { def field = fields[i]; def ip = ctx[field]?.ip; if (ip != null) { if (ctx.network == null) ctx.network = new HashMap(); if (isPrivate(ip)) { if (ctx.network.private_ip == null) ctx.network.private_ip = new ArrayList(); if (!ctx.network.private_ip.contains(ip)) ctx.network.private_ip.add(ip); } else { if (ctx.network.public_ip == null) ctx.network.public_ip = new ArrayList(); if (!ctx.network.public_ip.contains(ip)) ctx.network.public_ip.add(ip); } } }",
"ignore_failure": false
}
},
{ "pipeline": { "if": "ctx?.event?.dataset != null", "name": "suricata.{{event.dataset}}" } }
]
}
"description": "suricata.common",
"processors": [
{
"json": {
"field": "message",
"target_field": "message2",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.pkt_src",
"target_field": "network.packet_source",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.proto",
"target_field": "network.transport",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.in_iface",
"target_field": "observer.ingress.interface.name",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.flow_id",
"target_field": "log.id.uid",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.src_ip",
"target_field": "source.ip",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.src_port",
"target_field": "source.port",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.dest_ip",
"target_field": "destination.ip",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.dest_port",
"target_field": "destination.port",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.vlan",
"target_field": "network.vlan.id",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.community_id",
"target_field": "network.community_id",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.xff",
"target_field": "xff.ip",
"ignore_missing": true
}
},
{
"set": {
"field": "event.dataset",
"value": "{{ message2.event_type }}"
}
},
{
"set": {
"field": "observer.name",
"value": "{{agent.name}}"
}
},
{
"set": {
"field": "event.ingested",
"value": "{{@timestamp}}"
}
},
{
"date": {
"field": "message2.timestamp",
"target_field": "@timestamp",
"formats": [
"ISO8601",
"UNIX"
],
"timezone": "UTC",
"ignore_failure": true
}
},
{
"remove": {
"field": "agent",
"ignore_failure": true
}
},
{
"append": {
"field": "related.ip",
"value": [
"{{source.ip}}",
"{{destination.ip}}"
],
"allow_duplicates": false,
"ignore_failure": true
}
},
{
"script": {
"source": "boolean isPrivate(def ip) { if (ip == null) return false; int dot1 = ip.indexOf('.'); if (dot1 == -1) return false; int dot2 = ip.indexOf('.', dot1 + 1); if (dot2 == -1) return false; int first = Integer.parseInt(ip.substring(0, dot1)); if (first == 10) return true; if (first == 192 && ip.startsWith('168.', dot1 + 1)) return true; if (first == 172) { int second = Integer.parseInt(ip.substring(dot1 + 1, dot2)); return second >= 16 && second <= 31; } return false; } String[] fields = new String[] {\"source\", \"destination\"}; for (int i = 0; i < fields.length; i++) { def field = fields[i]; def ip = ctx[field]?.ip; if (ip != null) { if (ctx.network == null) ctx.network = new HashMap(); if (isPrivate(ip)) { if (ctx.network.private_ip == null) ctx.network.private_ip = new ArrayList(); if (!ctx.network.private_ip.contains(ip)) ctx.network.private_ip.add(ip); } else { if (ctx.network.public_ip == null) ctx.network.public_ip = new ArrayList(); if (!ctx.network.public_ip.contains(ip)) ctx.network.public_ip.add(ip); } } }",
"ignore_failure": false
}
},
{
"rename": {
"field": "message2.capture_file",
"target_field": "suricata.capture_file",
"ignore_missing": true
}
},
{
"pipeline": {
"if": "ctx?.event?.dataset != null",
"name": "suricata.{{event.dataset}}"
}
}
]
}

View File

@@ -1,21 +1,136 @@
{
"description" : "suricata.dns",
"processors" : [
{ "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "message2.app_proto", "target_field": "network.protocol", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.type", "target_field": "dns.query.type", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.tx_id", "target_field": "dns.id", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.version", "target_field": "dns.version", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.rrname", "target_field": "dns.query.name", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.rrtype", "target_field": "dns.query.type_name", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.flags", "target_field": "dns.flags", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.qr", "target_field": "dns.qr", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.rd", "target_field": "dns.recursion.desired", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.ra", "target_field": "dns.recursion.available", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.rcode", "target_field": "dns.response.code_name", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.grouped.A", "target_field": "dns.answers.data", "ignore_missing": true } },
{ "rename": { "field": "message2.dns.grouped.CNAME", "target_field": "dns.answers.name", "ignore_missing": true } },
{ "pipeline": { "if": "ctx.dns.query?.name != null && ctx.dns.query.name.contains('.')", "name": "dns.tld" } },
{ "pipeline": { "name": "common" } }
]
}
"description": "suricata.dns",
"processors": [
{
"rename": {
"field": "message2.proto",
"target_field": "network.transport",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.app_proto",
"target_field": "network.protocol",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.type",
"target_field": "dns.query.type",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.tx_id",
"target_field": "dns.tx_id",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.id",
"target_field": "dns.id",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.version",
"target_field": "dns.version",
"ignore_missing": true
}
},
{
"pipeline": {
"name": "suricata.dnsv3",
"ignore_missing_pipeline": true,
"if": "ctx?.dns?.version != null && ctx?.dns?.version == 3",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.dns.rrname",
"target_field": "dns.query.name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.rrtype",
"target_field": "dns.query.type_name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.flags",
"target_field": "dns.flags",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.qr",
"target_field": "dns.qr",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.rd",
"target_field": "dns.recursion.desired",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.ra",
"target_field": "dns.recursion.available",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.opcode",
"target_field": "dns.opcode",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.rcode",
"target_field": "dns.response.code_name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.grouped.A",
"target_field": "dns.answers.data",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.dns.grouped.CNAME",
"target_field": "dns.answers.name",
"ignore_missing": true
}
},
{
"pipeline": {
"if": "ctx.dns.query?.name != null && ctx.dns.query.name.contains('.')",
"name": "dns.tld"
}
},
{
"pipeline": {
"name": "common"
}
}
]
}

View File

@@ -0,0 +1,56 @@
{
"processors": [
{
"rename": {
"field": "message2.dns.queries",
"target_field": "dns.queries",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx?.dns?.queries != null && ctx?.dns?.queries.length > 0) {\n if (ctx.dns == null) {\n ctx.dns = new HashMap();\n }\n if (ctx.dns.query == null) {\n ctx.dns.query = new HashMap();\n }\n ctx.dns.query.name = ctx?.dns?.queries[0].rrname;\n}"
}
},
{
"script": {
"source": "if (ctx?.dns?.queries != null && ctx?.dns?.queries.length > 0) {\n if (ctx.dns == null) {\n ctx.dns = new HashMap();\n }\n if (ctx.dns.query == null) {\n ctx.dns.query = new HashMap();\n }\n ctx.dns.query.type_name = ctx?.dns?.queries[0].rrtype;\n}"
}
},
{
"foreach": {
"field": "dns.queries",
"processor": {
"rename": {
"field": "_ingest._value.rrname",
"target_field": "_ingest._value.name",
"ignore_missing": true
}
},
"ignore_failure": true
}
},
{
"foreach": {
"field": "dns.queries",
"processor": {
"rename": {
"field": "_ingest._value.rrtype",
"target_field": "_ingest._value.type_name",
"ignore_missing": true
}
},
"ignore_failure": true
}
},
{
"pipeline": {
"name": "suricata.tld",
"ignore_missing_pipeline": true,
"if": "ctx?.dns?.queries != null && ctx?.dns?.queries.length > 0",
"ignore_failure": true
}
}
]
}

View File

@@ -0,0 +1,52 @@
{
"processors": [
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.name != null && q.name.contains('.')) {\n q.top_level_domain = q.name.substring(q.name.lastIndexOf('.') + 1);\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.name != null && q.name.contains('.')) {\n q.query_without_tld = q.name.substring(0, q.name.lastIndexOf('.'));\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.query_without_tld != null && q.query_without_tld.contains('.')) {\n q.parent_domain = q.query_without_tld.substring(q.query_without_tld.lastIndexOf('.') + 1);\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.query_without_tld != null && q.query_without_tld.contains('.')) {\n q.subdomain = q.query_without_tld.substring(0, q.query_without_tld.lastIndexOf('.'));\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.parent_domain != null && q.top_level_domain != null) {\n q.highest_registered_domain = q.parent_domain + \".\" + q.top_level_domain;\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.subdomain != null) {\n q.subdomain_length = q.subdomain.length();\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n if (q.parent_domain != null) {\n q.parent_domain_length = q.parent_domain.length();\n }\n }\n}",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns != null && ctx.dns.queries != null) {\n for (def q : ctx.dns.queries) {\n q.remove('query_without_tld');\n }\n}",
"ignore_failure": true
}
}
]
}
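
Each script above derives one domain component by splitting on the last dot(s) of the query name. The same arithmetic in plain shell, for a hypothetical query name mail.example.com:

name="mail.example.com"
tld="${name##*.}"               # com          -> top_level_domain
without_tld="${name%.*}"        # mail.example -> query_without_tld (removed at the end)
parent="${without_tld##*.}"     # example      -> parent_domain
subdomain="${without_tld%.*}"   # mail         -> subdomain
hrd="${parent}.${tld}"          # example.com  -> highest_registered_domain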

View File

@@ -0,0 +1,61 @@
{
"description": "zeek.analyzer",
"processors": [
{
"set": {
"field": "event.dataset",
"value": "analyzer"
}
},
{
"remove": {
"field": [
"host"
],
"ignore_failure": true
}
},
{
"json": {
"field": "message",
"target_field": "message2",
"ignore_failure": true
}
},
{
"set": {
"field": "network.protocol",
"copy_from": "message2.analyzer_name",
"ignore_empty_value": true,
"if": "ctx?.message2?.analyzer_kind == 'protocol'"
}
},
{
"set": {
"field": "network.protocol",
"ignore_empty_value": true,
"if": "ctx?.message2?.analyzer_kind != 'protocol'",
"copy_from": "message2.proto"
}
},
{
"lowercase": {
"field": "network.protocol",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.failure_reason",
"target_field": "error.reason",
"ignore_missing": true
}
},
{
"pipeline": {
"name": "zeek.common"
}
}
]
}

View File

@@ -1,35 +1,227 @@
{
"description" : "zeek.dns",
"processors" : [
{ "set": { "field": "event.dataset", "value": "dns" } },
{ "remove": { "field": ["host"], "ignore_failure": true } },
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "dot_expander": { "field": "id.orig_h", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_missing": true } },
{ "rename": { "field": "message2.trans_id", "target_field": "dns.id", "ignore_missing": true } },
{ "rename": { "field": "message2.rtt", "target_field": "event.duration", "ignore_missing": true } },
{ "rename": { "field": "message2.query", "target_field": "dns.query.name", "ignore_missing": true } },
{ "rename": { "field": "message2.qclass", "target_field": "dns.query.class", "ignore_missing": true } },
{ "rename": { "field": "message2.qclass_name", "target_field": "dns.query.class_name", "ignore_missing": true } },
{ "rename": { "field": "message2.qtype", "target_field": "dns.query.type", "ignore_missing": true } },
{ "rename": { "field": "message2.qtype_name", "target_field": "dns.query.type_name", "ignore_missing": true } },
{ "rename": { "field": "message2.rcode", "target_field": "dns.response.code", "ignore_missing": true } },
{ "rename": { "field": "message2.rcode_name", "target_field": "dns.response.code_name", "ignore_missing": true } },
{ "rename": { "field": "message2.AA", "target_field": "dns.authoritative", "ignore_missing": true } },
{ "rename": { "field": "message2.TC", "target_field": "dns.truncated", "ignore_missing": true } },
{ "rename": { "field": "message2.RD", "target_field": "dns.recursion.desired", "ignore_missing": true } },
{ "rename": { "field": "message2.RA", "target_field": "dns.recursion.available", "ignore_missing": true } },
{ "rename": { "field": "message2.Z", "target_field": "dns.reserved", "ignore_missing": true } },
{ "rename": { "field": "message2.answers", "target_field": "dns.answers.name", "ignore_missing": true } },
{ "foreach": {"field": "dns.answers.name","processor": {"pipeline": {"name": "common.ip_validation"}},"if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null","ignore_failure": true}},
{ "foreach": {"field": "temp._valid_ips","processor": {"append": {"field": "dns.resolved_ip","allow_duplicates": false,"value": "{{{_ingest._value}}}","ignore_failure": true}},"ignore_failure": true}},
{ "script": { "source": "if (ctx.dns.resolved_ip != null && ctx.dns.resolved_ip instanceof List) {\n ctx.dns.resolved_ip.removeIf(item -> item == null || item.toString().trim().isEmpty());\n }","ignore_failure": true }},
{ "remove": {"field": ["temp"], "ignore_missing": true ,"ignore_failure": true } },
{ "rename": { "field": "message2.TTLs", "target_field": "dns.ttls", "ignore_missing": true } },
{ "rename": { "field": "message2.rejected", "target_field": "dns.query.rejected", "ignore_missing": true } },
{ "script": { "lang": "painless", "source": "ctx.dns.query.length = ctx.dns.query.name.length()", "ignore_failure": true } },
{ "set": { "if": "ctx._index == 'so-zeek'", "field": "_index", "value": "so-zeek_dns", "override": true } },
{ "pipeline": { "if": "ctx.dns?.query?.name != null && ctx.dns.query.name.contains('.')", "name": "dns.tld" } },
{ "pipeline": { "name": "zeek.common" } }
]
"description": "zeek.dns",
"processors": [
{
"set": {
"field": "event.dataset",
"value": "dns"
}
},
{
"remove": {
"field": [
"host"
],
"ignore_failure": true
}
},
{
"json": {
"field": "message",
"target_field": "message2",
"ignore_failure": true
}
},
{
"dot_expander": {
"field": "id.orig_h",
"path": "message2",
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.proto",
"target_field": "network.transport",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.trans_id",
"target_field": "dns.id",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.rtt",
"target_field": "event.duration",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.query",
"target_field": "dns.query.name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.qclass",
"target_field": "dns.query.class",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.qclass_name",
"target_field": "dns.query.class_name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.qtype",
"target_field": "dns.query.type",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.qtype_name",
"target_field": "dns.query.type_name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.rcode",
"target_field": "dns.response.code",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.rcode_name",
"target_field": "dns.response.code_name",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.AA",
"target_field": "dns.authoritative",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.TC",
"target_field": "dns.truncated",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.RD",
"target_field": "dns.recursion.desired",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.RA",
"target_field": "dns.recursion.available",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.Z",
"target_field": "dns.reserved",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.answers",
"target_field": "dns.answers.name",
"ignore_missing": true
}
},
{
"foreach": {
"field": "dns.answers.name",
"processor": {
"pipeline": {
"name": "common.ip_validation"
}
},
"if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null",
"ignore_failure": true
}
},
{
"foreach": {
"field": "temp._valid_ips",
"processor": {
"append": {
"field": "dns.resolved_ip",
"allow_duplicates": false,
"value": "{{{_ingest._value}}}",
"ignore_failure": true
}
},
"if": "ctx.dns != null && ctx.dns.answers != null && ctx.dns.answers.name != null",
"ignore_failure": true
}
},
{
"script": {
"source": "if (ctx.dns.resolved_ip != null && ctx.dns.resolved_ip instanceof List) {\n ctx.dns.resolved_ip.removeIf(item -> item == null || item.toString().trim().isEmpty());\n }",
"ignore_failure": true
}
},
{
"remove": {
"field": [
"temp"
],
"ignore_missing": true,
"ignore_failure": true
}
},
{
"rename": {
"field": "message2.TTLs",
"target_field": "dns.ttls",
"ignore_missing": true
}
},
{
"rename": {
"field": "message2.rejected",
"target_field": "dns.query.rejected",
"ignore_missing": true
}
},
{
"script": {
"lang": "painless",
"source": "ctx.dns.query.length = ctx.dns.query.name.length()",
"ignore_failure": true
}
},
{
"set": {
"if": "ctx._index == 'so-zeek'",
"field": "_index",
"value": "so-zeek_dns",
"override": true
}
},
{
"pipeline": {
"if": "ctx.dns?.query?.name != null && ctx.dns.query.name.contains('.')",
"name": "dns.tld"
}
},
{
"pipeline": {
"name": "zeek.common"
}
}
]
}

View File

@@ -1,20 +0,0 @@
{
"description" : "zeek.dpd",
"processors" : [
{ "set": { "field": "event.dataset", "value": "dpd" } },
{ "remove": { "field": ["host"], "ignore_failure": true } },
{ "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
{ "dot_expander": { "field": "id.orig_h", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.id.orig_h", "target_field": "source.ip", "ignore_missing": true } },
{ "dot_expander": { "field": "id.orig_p", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.id.orig_p", "target_field": "source.port", "ignore_missing": true } },
{ "dot_expander": { "field": "id.resp_h", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.id.resp_h", "target_field": "destination.ip", "ignore_missing": true } },
{ "dot_expander": { "field": "id.resp_p", "path": "message2", "ignore_failure": true } },
{ "rename": { "field": "message2.id.resp_p", "target_field": "destination.port", "ignore_missing": true } },
{ "rename": { "field": "message2.proto", "target_field": "network.protocol", "ignore_missing": true } },
{ "rename": { "field": "message2.analyzer", "target_field": "observer.analyzer", "ignore_missing": true } },
{ "rename": { "field": "message2.failure_reason", "target_field": "error.reason", "ignore_missing": true } },
{ "pipeline": { "name": "zeek.common" } }
]
}

View File

@@ -131,6 +131,47 @@ elasticsearch:
description: Maximum primary shard size. Once an index reaches this limit, it will be rolled over into a new index.
global: True
helpLink: elasticsearch.html
shrink:
method:
description: Shrink the index to a new index with fewer primary shards. Shrink operation is by count or size.
options:
- COUNT
- SIZE
global: True
advanced: True
forcedType: string
number_of_shards:
title: shard count
description: Desired shard count. Note that this value is only used when the shrink method selected is 'COUNT'.
global: True
forcedType: int
advanced: True
max_primary_shard_size:
title: max shard size
description: Desired shard size in gb/tb/pb, e.g. 100gb. Note that this value is only used when the shrink method selected is 'SIZE'.
regex: ^[0-9]+(?:gb|tb|pb)$
global: True
forcedType: string
advanced: True
allow_write_after_shrink:
description: Allow writes after shrink.
global: True
forcedType: bool
default: False
advanced: True
forcemerge:
max_num_segments:
description: Reduce the number of segments in each index shard and clean up deleted documents.
global: True
forcedType: int
advanced: True
index_codec:
title: compression
description: Use higher compression for stored fields at the cost of slower performance.
forcedType: bool
global: True
default: False
advanced: True
cold:
min_age:
description: Minimum age of index. ex. 60d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and cold min_age set to 60 then there will be 30 days from index creation to rollover and then an additional 60 days before moving to the cold tier.
@@ -144,6 +185,12 @@ elasticsearch:
description: Used for index recovery after a node restart. Indices with higher priorities are recovered before indices with lower priorities.
global: True
helpLink: elasticsearch.html
allocate:
number_of_replicas:
description: Set the number of replicas. Remains the same as the previous phase by default.
forcedType: int
global: True
advanced: True
warm:
min_age:
description: Minimum age of index. ex. 30d - This determines when the index should be moved to the warm tier. Nodes in the warm tier generally don't need to be as fast as those in the hot tier. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and warm min_age set to 30 then there will be 30 days from index creation to rollover and then an additional 30 days before moving to the warm tier.
@@ -158,6 +205,52 @@ elasticsearch:
forcedType: int
global: True
helpLink: elasticsearch.html
shrink:
method:
description: Shrink the index to a new index with fewer primary shards. Shrink operation is by count or size.
options:
- COUNT
- SIZE
global: True
advanced: True
number_of_shards:
title: shard count
description: Desired shard count. Note that this value is only used when the shrink method selected is 'COUNT'.
global: True
forcedType: int
advanced: True
max_primary_shard_size:
title: max shard size
description: Desired shard size in gb/tb/pb, e.g. 100gb. Note that this value is only used when the shrink method selected is 'SIZE'.
regex: ^[0-9]+(?:gb|tb|pb)$
global: True
forcedType: string
advanced: True
allow_write_after_shrink:
description: Allow writes after shrink.
global: True
forcedType: bool
default: False
advanced: True
forcemerge:
max_num_segments:
description: Reduce the number of segments in each index shard and clean up deleted documents.
global: True
forcedType: int
advanced: True
index_codec:
title: compression
description: Use higher compression for stored fields at the cost of slower performance.
forcedType: bool
global: True
default: False
advanced: True
allocate:
number_of_replicas:
description: Set the number of replicas. Remains the same as the previous phase by default.
forcedType: int
global: True
advanced: True
delete:
min_age:
description: Minimum age of index. ex. 90d - This determines when the index should be deleted. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and delete min_age set to 90 then there will be 30 days from index creation to rollover and then an additional 90 days before deletion.
@@ -287,6 +380,47 @@ elasticsearch:
global: True
advanced: True
helpLink: elasticsearch.html
shrink:
method:
description: Shrink the index to a new index with fewer primary shards. Shrink operation is by count or size.
options:
- COUNT
- SIZE
global: True
advanced: True
forcedType: string
number_of_shards:
title: shard count
description: Desired shard count. Note that this value is only used when the shrink method selected is 'COUNT'.
global: True
forcedType: int
advanced: True
max_primary_shard_size:
title: max shard size
description: Desired shard size in gb/tb/pb, e.g. 100gb. Note that this value is only used when the shrink method selected is 'SIZE'.
regex: ^[0-9]+(?:gb|tb|pb)$
global: True
forcedType: string
advanced: True
allow_write_after_shrink:
description: Allow writes after shrink.
global: True
forcedType: bool
default: False
advanced: True
forcemerge:
max_num_segments:
description: Reduce the number of segments in each index shard and clean up deleted documents.
global: True
forcedType: int
advanced: True
index_codec:
title: compression
description: Use higher compression for stored fields at the cost of slower performance.
forcedType: bool
global: True
default: False
advanced: True
warm:
min_age:
description: Minimum age of index. ex. 30d - This determines when the index should be moved to the warm tier. Nodes in the warm tier generally don't need to be as fast as those in the hot tier. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and warm min_age set to 30 then there will be 30 days from index creation to rollover and then an additional 30 days before moving to the warm tier.
@@ -314,6 +448,52 @@ elasticsearch:
global: True
advanced: True
helpLink: elasticsearch.html
shrink:
method:
description: Shrink the index to a new index with fewer primary shards. Shrink operation is by count or size.
options:
- COUNT
- SIZE
global: True
advanced: True
number_of_shards:
title: shard count
description: Desired shard count. Note that this value is only used when the shrink method selected is 'COUNT'.
global: True
forcedType: int
advanced: True
max_primary_shard_size:
title: max shard size
description: Desired shard size in gb/tb/pb, e.g. 100gb. Note that this value is only used when the shrink method selected is 'SIZE'.
regex: ^[0-9]+(?:gb|tb|pb)$
global: True
forcedType: string
advanced: True
allow_write_after_shrink:
description: Allow writes after shrink.
global: True
forcedType: bool
default: False
advanced: True
forcemerge:
max_num_segments:
description: Reduce the number of segments in each index shard and clean up deleted documents.
global: True
forcedType: int
advanced: True
index_codec:
title: compression
description: Use higher compression for stored fields at the cost of slower performance.
forcedType: bool
global: True
default: False
advanced: True
allocate:
number_of_replicas:
description: Set the number of replicas. Remains the same as the previous phase by default.
forcedType: int
global: True
advanced: True
cold:
min_age:
description: Minimum age of index. ex. 60d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and cold min_age set to 60 then there will be 30 days from index creation to rollover and then an additional 60 days before moving to the cold tier.
@@ -330,6 +510,12 @@ elasticsearch:
global: True
advanced: True
helpLink: elasticsearch.html
allocate:
number_of_replicas:
description: Set the number of replicas. Remains the same as the previous phase by default.
forcedType: int
global: True
advanced: True
delete:
min_age:
description: Minimum age of index. ex. 90d - This determines when the index should be deleted. It's important to note that this is calculated relative to the rollover date (NOT the original creation date of the index). For example, if you have an index that is set to rollover after 30 days and delete min_age set to 90 then there will be 30 days from index creation to rollover and then an additional 90 days before deletion.
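
Rendered out, these knobs become ordinary ILM phase actions. A hypothetical warm phase with shrink method COUNT (one shard), forcemerge compression enabled, and one replica would emit roughly the following (the template map logic later in this diff strips the helper shrink.method key and any empty values first):

cat <<'EOF'   # hypothetical rendered phase, for illustration only
"warm": {
  "min_age": "30d",
  "actions": {
    "set_priority": { "priority": 50 },
    "allocate": { "number_of_replicas": 1 },
    "shrink": { "number_of_shards": 1 },
    "forcemerge": { "max_num_segments": 1, "index_codec": "best_compression" }
  }
}
EOF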

View File

@@ -0,0 +1,66 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'ca/map.jinja' import CA %}
# Create a cert for elasticsearch
elasticsearch_key:
x509.private_key_managed:
- name: /etc/pki/elasticsearch.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticsearch.key') -%}
- prereq:
- x509: /etc/pki/elasticsearch.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticsearch_crt:
x509.certificate_managed:
- name: /etc/pki/elasticsearch.crt
- ca_server: {{ CA.server }}
- signing_policy: registry
- private_key: /etc/pki/elasticsearch.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 7
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/elasticsearch.key -in /etc/pki/elasticsearch.crt -export -out /etc/pki/elasticsearch.p12 -nodes -passout pass:"
- onchanges:
- x509: /etc/pki/elasticsearch.key
elastickeyperms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.key
- mode: 640
- group: 930
elasticp12perms:
file.managed:
- replace: False
- name: /etc/pki/elasticsearch.p12
- mode: 640
- group: 930
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
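
Because the state is gated by allowed_states, a quick way to exercise it on a manager is a targeted dry run (hypothetical invocation):

salt-call state.apply elasticsearch.ssl test=True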

View File

@@ -61,5 +61,55 @@
{% do settings.index_template.template.settings.index.pop('sort') %}
{% endif %}
{% endif %}
{# advanced ilm actions #}
{% if settings.policy is defined and settings.policy.phases is defined %}
{% set PHASE_NAMES = ["hot", "warm", "cold"] %}
{% for P in PHASE_NAMES %}
{% if settings.policy.phases[P] is defined and settings.policy.phases[P].actions is defined %}
{% set PHASE = settings.policy.phases[P].actions %}
{# remove allocate action if number_of_replicas isn't configured #}
{% if PHASE.allocate is defined %}
{% if PHASE.allocate.number_of_replicas is not defined or PHASE.allocate.number_of_replicas == "" %}
{% do PHASE.pop('allocate', none) %}
{% endif %}
{% endif %}
{# start shrink action #}
{% if PHASE.shrink is defined %}
{% if PHASE.shrink.method is defined %}
{% if PHASE.shrink.method == 'COUNT' and PHASE.shrink.number_of_shards is defined and PHASE.shrink.number_of_shards %}
{# remove max_primary_shard_size value when doing shrink operation by count vs size #}
{% do PHASE.shrink.pop('max_primary_shard_size', none) %}
{% elif PHASE.shrink.method == 'SIZE' and PHASE.shrink.max_primary_shard_size is defined and PHASE.shrink.max_primary_shard_size %}
{# remove number_of_shards value when doing shrink operation by size vs count #}
{% do PHASE.shrink.pop('number_of_shards', none) %}
{% else %}
{# method isn't defined or is missing its required setting (number_of_shards/max_primary_shard_size) #}
{% do PHASE.pop('shrink', none) %}
{% endif %}
{% endif %}
{% endif %}
{# always remove the shrink method key since it's only used for SOC config, not in the actual ILM policy #}
{% if PHASE.shrink is defined %}
{% do PHASE.shrink.pop('method', none) %}
{% endif %}
{# end shrink action #}
{# start force merge #}
{% if PHASE.forcemerge is defined %}
{% if PHASE.forcemerge.index_codec is defined and PHASE.forcemerge.index_codec %}
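{# any truthy value is normalized to best_compression, the only codec this template emits #}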
{% do PHASE.forcemerge.update({'index_codec': 'best_compression'}) %}
{% else %}
{% do PHASE.forcemerge.pop('index_codec', none) %}
{% endif %}
{% if PHASE.forcemerge.max_num_segments is not defined or not PHASE.forcemerge.max_num_segments %}
{# max_num_segments is empty, drop the forcemerge action #}
{% do PHASE.pop('forcemerge', none) %}
{% endif %}
{% endif %}
{# end force merge #}
{% endif %}
{% endfor %}
{% endif %}
{% do ES_INDEX_SETTINGS.update({index | replace("_x_", "."): ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index]}) %}
{% endfor %}
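One consequence of the shrink handling above is that only one sizing key ever reaches the rendered policy. A minimal sketch for confirming this against a live policy, assuming a placeholder policy name and that jq is available (so-elasticsearch-query is the grid's wrapper around curl):

# Sketch: confirm the pruned shrink action in the rendered policy.
# "so-logs-example" is a placeholder policy name.
so-elasticsearch-query _ilm/policy/so-logs-example | jq '.[].policy.phases.warm.actions.shrink'
# method COUNT => {"number_of_shards": 1}
# method SIZE  => {"max_primary_shard_size": "50gb"}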


@@ -841,6 +841,10 @@
"type": "long"
}
}
},
"capture_file": {
"type": "keyword",
"ignore_above": 1024
}
}
}
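Since capture_file is a keyword with ignore_above of 1024, it supports exact-match filtering, and values longer than 1024 characters are kept in _source but not indexed. A minimal sketch of querying it, with placeholder credentials, index pattern, and path:

# Sketch: exact-match filter on the new capture_file field; all values are placeholders.
curl -sk -u "$ES_USER:$ES_PASS" "https://localhost:9200/so-logs-*/_search" \
  -H 'Content-Type: application/json' -d '
{
  "query": { "term": { "capture_file": "/nsm/import/example.pcap" } },
  "size": 1
}'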

File diff suppressed because it is too large


@@ -14,8 +14,9 @@ set -e
# Check to see if we have extracted the ca cert.
if [ ! -f /opt/so/saltstack/local/salt/elasticsearch/cacerts ]; then
docker run -v /etc/pki/ca.crt:/etc/ssl/ca.crt --name so-elasticsearchca --user root --entrypoint jdk/bin/keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elasticsearch:$ELASTIC_AGENT_TARBALL_VERSION -keystore /usr/share/elasticsearch/jdk/lib/security/cacerts -alias SOSCA -import -file /etc/ssl/ca.crt -storepass changeit -noprompt
docker cp so-elasticsearchca:/usr/share/elasticsearch/jdk/lib/security/cacerts /opt/so/saltstack/local/salt/elasticsearch/cacerts
docker cp so-elasticsearchca:/etc/ssl/certs/ca-certificates.crt /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
# Make sure symbolic links are followed when copying from container
docker cp -L so-elasticsearchca:/usr/share/elasticsearch/jdk/lib/security/cacerts /opt/so/saltstack/local/salt/elasticsearch/cacerts
docker cp -L so-elasticsearchca:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
docker rm so-elasticsearchca
echo "" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem
echo "sosca" >> /opt/so/saltstack/local/salt/elasticsearch/tls-ca-bundle.pem


@@ -121,7 +121,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
echo "Loading Security Onion index templates..."
shopt -s extglob
{% if GLOBALS.role == 'so-heavynode' %}
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*)"
pattern="!(*1password*|*aws*|*azure*|*cloudflare*|*elastic_agent*|*fim*|*github*|*google*|*osquery*|*system*|*windows*|*endpoint*|*elasticsearch*|*generic*|*fleet_server*|*soc*)"
{% else %}
pattern="*"
{% endif %}
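The pattern is a bash extglob: with extglob enabled, !(A|B|...) matches every name that matches none of the alternatives, which is how the heavynode branch skips templates that don't apply to that role. A self-contained sketch of the mechanism:

# Sketch: demonstrate the extglob negation used above.
shopt -s extglob
mkdir -p /tmp/extglob-demo && cd /tmp/extglob-demo
touch so-logs-aws.json so-logs-zeek.json so-logs-soc.json
pattern="!(*aws*|*soc*)"
ls $pattern   # prints only so-logs-zeek.json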


@@ -45,7 +45,7 @@ used during VM provisioning to add dedicated NSM storage volumes.
This command creates and attaches a volume with the following settings:
- VM Name: `vm1_sensor`
- Volume Size: `500` GB
- Volume Path: `/nsm/libvirt/volumes/vm1_sensor-nsm.img`
- Volume Path: `/nsm/libvirt/volumes/vm1_sensor-nsm-<epoch_timestamp>.img`
- Device: `/dev/vdb` (virtio-blk)
- VM remains stopped after attachment
@@ -75,7 +75,8 @@ used during VM provisioning to add dedicated NSM storage volumes.
- The script automatically stops the VM if it's running before creating and attaching the volume.
- Volumes are created with full pre-allocation for optimal performance.
- Volume files are stored in `/nsm/libvirt/volumes/` with naming pattern `<vm_name>-nsm.img`.
- Volume files are stored in `/nsm/libvirt/volumes/` with naming pattern `<vm_name>-nsm-<epoch_timestamp>.img`.
- The epoch timestamp ensures unique volume names and prevents conflicts.
- Volumes are attached as `/dev/vdb` using virtio-blk for high performance.
- The script checks available disk space before creating the volume.
- Ownership is set to `qemu:qemu` with permissions `640`.
@@ -142,6 +143,7 @@ import socket
import subprocess
import pwd
import grp
import time
import xml.etree.ElementTree as ET
from io import StringIO
from so_vm_utils import start_vm, stop_vm
@@ -242,10 +244,13 @@ def create_volume_file(vm_name, size_gb, logger):
Raises:
VolumeCreationError: If volume creation fails
"""
# Define volume path (directory already created in main())
volume_path = os.path.join(VOLUME_DIR, f"{vm_name}-nsm.img")
# Generate epoch timestamp for unique volume naming
epoch_timestamp = int(time.time())
# Check if volume already exists
# Define volume path with epoch timestamp for uniqueness
volume_path = os.path.join(VOLUME_DIR, f"{vm_name}-nsm-{epoch_timestamp}.img")
# Check if volume already exists (unlikely, since the name includes an epoch timestamp)
if os.path.exists(volume_path):
logger.error(f"VOLUME: Volume already exists: {volume_path}")
raise VolumeCreationError(f"Volume already exists: {volume_path}")
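For reference, a rough shell equivalent of the new naming scheme (a sketch only; the real implementation is the Python above, and qemu-img is an assumption for the full-preallocation step):

# Sketch: timestamped, fully preallocated NSM volume, mirroring the Python above.
vm_name="vm1_sensor"
volume_path="/nsm/libvirt/volumes/${vm_name}-nsm-$(date +%s).img"
qemu-img create -f raw -o preallocation=full "$volume_path" 500G
chown qemu:qemu "$volume_path"
chmod 640 "$volume_path"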


@@ -1,65 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- idstools.sync_files
idstoolslogdir:
file.directory:
- name: /opt/so/log/idstools
- user: 939
- group: 939
- makedirs: True
idstools_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://idstools/tools/sbin
- user: 939
- group: 939
- file_mode: 755
# If this is used, exclude so-rule-update
#idstools_sbin_jinja:
# file.recurse:
# - name: /usr/sbin
# - source: salt://idstools/tools/sbin_jinja
# - user: 939
# - group: 939
# - file_mode: 755
# - template: jinja
idstools_so-rule-update:
file.managed:
- name: /usr/sbin/so-rule-update
- source: salt://idstools/tools/sbin_jinja/so-rule-update
- user: 939
- group: 939
- mode: 755
- template: jinja
suricatacustomdirsfile:
file.directory:
- name: /nsm/rules/detect-suricata/custom_file
- user: 939
- group: 939
- makedirs: True
suricatacustomdirsurl:
file.directory:
- name: /nsm/rules/detect-suricata/custom_temp
- user: 939
- group: 939
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}


@@ -1,10 +0,0 @@
idstools:
enabled: False
config:
urls: []
ruleset: ETOPEN
oinkcode: ""
sids:
enabled: []
disabled: []
modify: []


@@ -1,31 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- idstools.sostatus
so-idstools:
docker_container.absent:
- force: True
so-idstools_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-idstools$
so-rule-update:
cron.absent:
- identifier: so-rule-update
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
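After this state runs on a node, the container is gone, the so-status entry is commented out, and the cron entry is removed; a minimal sketch for spot-checking that as root:

# Sketch: spot-check the effects of the disabled state.
docker ps -a --filter name=so-idstools --format '{{.Names}}'   # no output expected
grep -n 'so-idstools' /opt/so/conf/so-status/so-status.conf    # line should be commented out
crontab -l | grep so-rule-update || echo "cron entry removed"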


@@ -1,91 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set proxy = salt['pillar.get']('manager:proxy') %}
include:
- idstools.config
- idstools.sostatus
so-idstools:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-idstools:{{ GLOBALS.so_version }}
- hostname: so-idstools
- user: socore
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-idstools'].ip }}
{% if proxy %}
- environment:
- http_proxy={{ proxy }}
- https_proxy={{ proxy }}
- no_proxy={{ salt['pillar.get']('manager:no_proxy') }}
{% if DOCKER.containers['so-idstools'].extra_env %}
{% for XTRAENV in DOCKER.containers['so-idstools'].extra_env %}
- {{ XTRAENV }}
{% endfor %}
{% endif %}
{% elif DOCKER.containers['so-idstools'].extra_env %}
- environment:
{% for XTRAENV in DOCKER.containers['so-idstools'].extra_env %}
- {{ XTRAENV }}
{% endfor %}
{% endif %}
- binds:
- /opt/so/conf/idstools/etc:/opt/so/idstools/etc:ro
- /opt/so/rules/nids/suri:/opt/so/rules/nids/suri:rw
- /nsm/rules/:/nsm/rules/:rw
{% if DOCKER.containers['so-idstools'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-idstools'].custom_bind_mounts %}
- {{ BIND }}
{% endfor %}
{% endif %}
- extra_hosts:
- {{ GLOBALS.manager }}:{{ GLOBALS.manager_ip }}
{% if DOCKER.containers['so-idstools'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-idstools'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- watch:
- file: idstoolsetcsync
- file: idstools_so-rule-update
delete_so-idstools_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-idstools$
so-rule-update:
cron.present:
- name: /usr/sbin/so-rule-update > /opt/so/log/idstools/download_cron.log 2>&1
- identifier: so-rule-update
- user: root
- minute: '1'
- hour: '7'
# order this last to give so-idstools container time to be ready
run_so-rule-update:
cmd.run:
- name: '/usr/sbin/so-rule-update > /opt/so/log/idstools/download_idstools_state.log 2>&1'
- require:
- docker_container: so-idstools
- onchanges:
- file: idstools_so-rule-update
- file: idstoolsetcsync
- file: synclocalnidsrules
- order: last
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
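The cron entry above fires at 07:01 daily; the same update can be run on demand with the exact command the cron line schedules:

# Run the scheduled rule update immediately, logging where the cron job does.
sudo sh -c '/usr/sbin/so-rule-update > /opt/so/log/idstools/download_cron.log 2>&1'
tail -n 20 /opt/so/log/idstools/download_cron.log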


@@ -1,16 +0,0 @@
{%- set disabled_sids = salt['pillar.get']('idstools:sids:disabled', {}) -%}
# idstools - disable.conf
# Example of disabling a rule by signature ID (gid is optional).
# 1:2019401
# 2019401
# Example of disabling a rule by regular expression.
# - All regular expression matches are case insensitive.
# re:heartbleed
# re:MS(0[7-9]|10)-\d+
{%- if disabled_sids != None %}
{%- for sid in disabled_sids %}
{{ sid }}
{%- endfor %}
{%- endif %}


@@ -1,16 +0,0 @@
{%- set enabled_sids = salt['pillar.get']('idstools:sids:enabled', {}) -%}
# idstools-rulecat - enable.conf
# Example of enabling a rule by signature ID (gid is optional).
# 1:2019401
# 2019401
# Example of enabling a rule by regular expression.
# - All regular expression matches are case insensitive.
# re:hearbleed
# re:MS(0[7-9]|10)-\d+
{%- if enabled_sids != None %}
{%- for sid in enabled_sids %}
{{ sid }}
{%- endfor %}
{%- endif %}


@@ -1,12 +0,0 @@
{%- set modify_sids = salt['pillar.get']('idstools:sids:modify', {}) -%}
# idstools-rulecat - modify.conf
# Format: <sid> "<from>" "<to>"
# Example changing the seconds for rule 2019401 to 3600.
#2019401 "seconds \d+" "seconds 3600"
{%- if modify_sids != None %}
{%- for sid in modify_sids %}
{{ sid }}
{%- endfor %}
{%- endif %}


@@ -1,23 +0,0 @@
{%- from 'vars/globals.map.jinja' import GLOBALS -%}
{%- from 'soc/merged.map.jinja' import SOCMERGED -%}
--suricata-version=7.0.3
--merged=/opt/so/rules/nids/suri/all.rules
--output=/nsm/rules/detect-suricata/custom_temp
--local=/opt/so/rules/nids/suri/local.rules
{%- if GLOBALS.md_engine == "SURICATA" %}
--local=/opt/so/rules/nids/suri/extraction.rules
--local=/opt/so/rules/nids/suri/filters.rules
{%- endif %}
--url=http://{{ GLOBALS.manager }}:7788/suricata/emerging-all.rules
--disable=/opt/so/idstools/etc/disable.conf
--enable=/opt/so/idstools/etc/enable.conf
--modify=/opt/so/idstools/etc/modify.conf
{%- if SOCMERGED.config.server.modules.suricataengine.customRulesets %}
{%- for ruleset in SOCMERGED.config.server.modules.suricataengine.customRulesets %}
{%- if 'url' in ruleset %}
--url={{ ruleset.url }}
{%- elif 'file' in ruleset %}
--local={{ ruleset.file }}
{%- endif %}
{%- endfor %}
{%- endif %}
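This file is an idstools-rulecat argument file, one flag per line. The rendered path and filename are not visible in this diff, so the sketch below assumes /opt/so/idstools/etc/rulecat.conf; rulecat accepts an argument file via the @<path> syntax:

# Sketch: feed the rendered argument file to rulecat inside the container.
# The path and filename are assumptions; they are not shown in this diff.
docker exec so-idstools idstools-rulecat --force @/opt/so/idstools/etc/rulecat.conf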


@@ -1,13 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'idstools/map.jinja' import IDSTOOLSMERGED %}
include:
{% if IDSTOOLSMERGED.enabled %}
- idstools.enabled
{% else %}
- idstools.disabled
{% endif %}


@@ -1,7 +0,0 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'idstools/defaults.yaml' as IDSTOOLSDEFAULTS with context %}
{% set IDSTOOLSMERGED = salt['pillar.get']('idstools', IDSTOOLSDEFAULTS.idstools, merge=True) %}


@@ -1 +0,0 @@
# Add your custom Suricata rules in this file.


@@ -1,72 +0,0 @@
idstools:
enabled:
description: Enables or disables the IDStools process which is used by the Detection system.
config:
oinkcode:
description: Enter your registration code or oinkcode for paid NIDS rulesets.
title: Registration Code
global: True
forcedType: string
helpLink: rules.html
ruleset:
description: 'Defines the ruleset you want to run. Options are ETOPEN or ETPRO. Once you have changed the ruleset here, you will need to wait for the rule update to take place (every 24 hours), or you can force the update by navigating to Detections --> Options dropdown menu --> Suricata --> Full Update. WARNING! Changing the ruleset will remove all existing non-overlapping Suricata rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
global: True
regex: ETPRO\b|ETOPEN\b
helpLink: rules.html
urls:
description: This is a list of additional rule download locations. This feature is currently disabled.
global: True
multiline: True
forcedType: "[]string"
readonly: True
helpLink: rules.html
sids:
disabled:
description: Contains the list of NIDS rules (or regex patterns) disabled across the grid. This setting is read-only; use the Detections screen to disable rules.
global: True
multiline: True
forcedType: "[]string"
regex: \d*|re:.*
helpLink: managing-alerts.html
readonlyUi: True
advanced: true
enabled:
description: Contains the list of NIDS rules (or regex patterns) enabled across the grid. This setting is read-only; use the Detections screen to enable rules.
global: True
multiline: True
forcedType: "[]string"
regex: \d*|re:.*
helpLink: managing-alerts.html
readonlyUi: True
advanced: true
modify:
description: Contains the list of NIDS rule modifications (SID "REGEX_SEARCH_TERM" "REGEX_REPLACE_TERM"). This setting is read-only; use the Detections screen to modify rules.
global: True
multiline: True
forcedType: "[]string"
helpLink: managing-alerts.html
readonlyUi: True
advanced: true
rules:
local__rules:
description: Contains the list of custom NIDS rules applied to the grid. This setting is read-only; use the Detections screen to adjust rules.
file: True
global: True
advanced: True
title: Local Rules
helpLink: local-rules.html
readonlyUi: True
filters__rules:
description: If you are using Suricata for metadata, then you can set custom filters for that metadata here.
file: True
global: True
advanced: True
title: Filter Rules
helpLink: suricata.html
extraction__rules:
description: If you are using Suricata for metadata, then you can set a list of MIME types for file extraction here.
file: True
global: True
advanced: True
title: Extraction Rules
helpLink: suricata.html


@@ -1,37 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
idstoolsdir:
file.directory:
- name: /opt/so/conf/idstools/etc
- user: 939
- group: 939
- makedirs: True
idstoolsetcsync:
file.recurse:
- name: /opt/so/conf/idstools/etc
- source: salt://idstools/etc
- user: 939
- group: 939
- template: jinja
rulesdir:
file.directory:
- name: /opt/so/rules/nids/suri
- user: 939
- group: 939
- makedirs: True
# Don't show changes because all.rules can be large
synclocalnidsrules:
file.recurse:
- name: /opt/so/rules/nids/suri/
- source: salt://idstools/rules/
- user: 939
- group: 939
- show_changes: False
- include_pat: 'E@.rules'


@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
/usr/sbin/so-restart idstools $1


@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
/usr/sbin/so-stop idstools $1


@@ -1,40 +0,0 @@
#!/bin/bash
# if this script isn't already running
if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
. /usr/sbin/so-common
{%- from 'vars/globals.map.jinja' import GLOBALS %}
{%- from 'idstools/map.jinja' import IDSTOOLSMERGED %}
{%- set proxy = salt['pillar.get']('manager:proxy') %}
{%- set noproxy = salt['pillar.get']('manager:no_proxy', '') %}
{%- if proxy %}
# Download the rules from the internet
export http_proxy={{ proxy }}
export https_proxy={{ proxy }}
export no_proxy="{{ noproxy }}"
{%- endif %}
mkdir -p /nsm/rules/suricata
chown -R socore:socore /nsm/rules/suricata
{%- if not GLOBALS.airgap %}
# Download the rules from the internet
{%- if IDSTOOLSMERGED.config.ruleset == 'ETOPEN' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force
{%- elif IDSTOOLSMERGED.config.ruleset == 'ETPRO' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --etpro={{ IDSTOOLSMERGED.config.oinkcode }}
{%- endif %}
{%- endif %}
argstr=""
for arg in "$@"; do
argstr="${argstr} \"${arg}\""
done
docker exec so-idstools /bin/bash -c "cd /opt/so/idstools/etc && idstools-rulecat --force ${argstr}"
fi

Some files were not shown because too many files have changed in this diff.