Compare commits


552 Commits

Author SHA1 Message Date
Mike Reeves
c2d43e5d22 Merge pull request #13255 from Security-Onion-Solutions/2.4/dev
2.4.80
2024-06-25 15:28:13 -04:00
Mike Reeves
51bb4837f5 Merge pull request #13259 from Security-Onion-Solutions/TOoSmOotH-patch-5
Update .gitleaks.toml
2024-06-25 14:48:41 -04:00
Mike Reeves
caec424e44 Update .gitleaks.toml 2024-06-25 14:47:50 -04:00
Mike Reeves
156176c628 Merge pull request #13256 from Security-Onion-Solutions/fixmain
Fix git
2024-06-25 08:30:19 -04:00
Mike Reeves
81b4c4e2c0 Merge branch '2.4/main' of github.com:Security-Onion-Solutions/securityonion into fixmain 2024-06-25 08:24:27 -04:00
Mike Reeves
d4107dc60a Merge pull request #13254 from Security-Onion-Solutions/2.4.80
2.4.80
2024-06-25 08:17:59 -04:00
Mike Reeves
d34605a512 Update DOWNLOAD_AND_VERIFY_ISO.md 2024-06-25 08:16:31 -04:00
Mike Reeves
af5e7cd72c 2.4.80 2024-06-24 15:41:47 -04:00
Jorge Reyes
93378e92e6 Merge pull request #13253 from Security-Onion-Solutions/kafkaflt
Remove unused sbin_jinja for kafka
2024-06-24 14:18:32 -04:00
reyesj2
81ce762250 delete commented block
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 14:06:48 -04:00
reyesj2
cb727bf48d remove unused sbin_jinja from kafka config
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 13:45:13 -04:00
Jorge Reyes
9a0bad88cc Merge pull request #13251 from Security-Onion-Solutions/kafkaflt
FIX: update firewall defaults
2024-06-24 12:29:48 -04:00
reyesj2
680e84851b Re-add manager sbin_jinja file recurse
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 12:27:52 -04:00
reyesj2
ea771ed21b update firewall
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 12:01:01 -04:00
reyesj2
c332cd777c remove import/heavynode artifact caused by the kafka cert not existing but being bound in docker (an empty dir was created)
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 08:50:37 -04:00
Mike Reeves
9fce85c988 Merge pull request #13245 from Security-Onion-Solutions/proxysoup
Fix soup for proxy servers
2024-06-21 16:13:02 -04:00
weslambert
6141c7a849 Merge pull request #13246 from Security-Onion-Solutions/fix/detections_license_none
Add option for detections without a license
2024-06-21 15:59:09 -04:00
weslambert
bf91030204 Add option for detections without license 2024-06-21 15:33:11 -04:00
Mike Reeves
9577c3f59d Make soup use reposync from the repo 2024-06-21 15:24:54 -04:00
Mike Reeves
77dedc575e Make soup use reposync from the repo 2024-06-21 15:20:07 -04:00
Mike Reeves
0295b8d658 Make soup use reposync from the repo 2024-06-21 15:11:23 -04:00
Mike Reeves
6a9d78fa7c Make soup use reposync from the repo 2024-06-21 15:10:44 -04:00
Mike Reeves
b84521cdd2 Make soup use reposync from the repo 2024-06-21 14:49:16 -04:00
Mike Reeves
ff4679ec08 Make soup use reposync from the repo 2024-06-21 14:45:06 -04:00
Mike Reeves
c5ce7102e8 Make soup use reposync from the repo 2024-06-21 14:41:27 -04:00
Mike Reeves
70c001e22b Update so-repo-sync 2024-06-21 13:37:36 -04:00
Mike Reeves
f1dc22a200 Merge pull request #13244 from Security-Onion-Solutions/TOoSmOotH-patch-4
Update soc_manager.yaml
2024-06-21 12:36:17 -04:00
Mike Reeves
aae1b69093 Update soc_manager.yaml 2024-06-21 12:35:01 -04:00
Jorge Reyes
8781419b4a Merge pull request #13242 from Security-Onion-Solutions/annotupd
update kafka annotations
2024-06-20 16:18:40 -04:00
reyesj2
2eea671857 more precise wording in kafka annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-20 16:16:55 -04:00
reyesj2
73acfbf864 update kafka annotations
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-20 16:02:45 -04:00
Doug Burks
ae0e994461 Merge pull request #13239 from Security-Onion-Solutions/dougburks-patch-1
Update defaults.yaml to put Process actions in logical order
2024-06-20 10:12:06 -04:00
Doug Burks
07b9011636 Update defaults.yaml to put Process actions in logical order 2024-06-20 10:09:27 -04:00
Matthew Wright
bc2b3b7f8f Merge pull request #13236 from Security-Onion-Solutions/mwright/licenseDropdown
Added license presets to defaults.yaml file
2024-06-18 18:05:15 -04:00
unknown
ea02a2b868 Added license presets to defaults.yaml file 2024-06-18 16:52:00 -04:00
Jorge Reyes
ba3a6cbe87 Merge pull request #13234 from Security-Onion-Solutions/reyesj2-patch-4
update receiver node allowed states
2024-06-18 15:55:32 -04:00
reyesj2
268dcbe00b update receiver node allowed states
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-18 15:44:51 -04:00
Josh Patterson
6be97f13d0 Merge pull request #13233 from Security-Onion-Solutions/minefunc
fix ca mine_function
2024-06-18 13:58:35 -04:00
Jorge Reyes
95d6c93a07 Merge pull request #13231 from Security-Onion-Solutions/kfeval 2024-06-18 13:15:18 -04:00
m0duspwnens
a2bb220043 fix x509 mine_function 2024-06-18 12:33:33 -04:00
reyesj2
911d6dcce1 update kafka output policy only on eligible grid types
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-18 12:09:59 -04:00
Doug Burks
5f6a9850eb Merge pull request #13227 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add new Process actions #13226
2024-06-18 10:57:52 -04:00
Doug Burks
de18bf06c3 FEATURE: Add new Process actions #13226 2024-06-18 10:36:41 -04:00
Jorge Reyes
73473d671d Merge pull request #13222 from Security-Onion-Solutions/reyesj2-patch-3
update profile
2024-06-18 09:16:35 -04:00
Josh Brower
3fbab7c3af Merge pull request #13223 from Security-Onion-Solutions/2.4/timeout
Update defaults
2024-06-18 08:55:30 -04:00
DefensiveDepth
521cccaed6 Update defaults 2024-06-18 08:43:00 -04:00
reyesj2
35da3408dc update profile
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-17 15:53:49 -04:00
Jorge Reyes
c03096e806 Merge pull request #13221 from Security-Onion-Solutions/reyesj2/ksoup
suppress fleet policy update in soup
2024-06-17 14:18:34 -04:00
reyesj2
2afc947d6c suppress fleet policy update in soup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-17 14:12:33 -04:00
Doug Burks
076da649cf Merge pull request #13217 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add more links and descriptions to SOC MOTD #13216
2024-06-17 12:18:29 -04:00
Doug Burks
93ced0959c FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:25:01 -04:00
Doug Burks
6f13fa50bf FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:24:32 -04:00
Doug Burks
3bface12e0 FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:23:14 -04:00
Doug Burks
b584c8e353 FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:13:17 -04:00
Jason Ertel
6caf87df2d Merge pull request #13209 from Security-Onion-Solutions/kfix
Fix errors on new installs
2024-06-15 05:09:48 -04:00
reyesj2
4d1f2c2bc1 fix kafka elastic fleet output policy setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:04:08 -04:00
reyesj2
0b1175b46c kafka logstash input plugin handle empty brokers list
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:03:36 -04:00
reyesj2
4e50dabc56 refix typos
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:03:06 -04:00
Jason Ertel
ce45a5926a Merge pull request #13207 from Security-Onion-Solutions/kaffix
Standalone logstash error
2024-06-14 18:01:35 -04:00
Josh Brower
c540a4f257 Merge pull request #13208 from Security-Onion-Solutions/2.4/ruletemplates
Update rule templates
2024-06-14 16:01:26 -04:00
DefensiveDepth
7af94c172f Change spelling 2024-06-14 16:00:22 -04:00
DefensiveDepth
7556587e35 Update rule templates 2024-06-14 15:47:57 -04:00
reyesj2
a0030b27e2 add additional retries to elasticfleet scripts
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 15:34:40 -04:00
reyesj2
8080e05444 On a fresh install the kafka nodes pillar may not have populated yet. Avoid this by only generating the kafka input pipeline when the kafka nodes pillar is not empty
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 14:17:26 -04:00
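The guard described in this commit can be sketched in Python. The actual implementation lives in Salt/Jinja state templates; the pillar layout, port, and function name here are illustrative assumptions, not the project's real API:

```python
def render_kafka_input_pipeline(pillar: dict):
    """Return a Logstash kafka-input snippet, or None if no brokers are known yet.

    On a fresh install the kafka nodes pillar may not be populated; skipping
    generation in that case avoids emitting a pipeline with an empty broker list.
    """
    nodes = pillar.get("kafka", {}).get("nodes", {})
    if not nodes:
        return None
    # Brokers listen on 9092 (see the firewall commit allowing agents to reach 9092).
    brokers = ",".join(f"{host}:9092" for host in sorted(nodes))
    return f'input {{ kafka {{ bootstrap_servers => "{brokers}" }} }}'
```

Rendering nothing when the pillar is empty is what lets a highstate run cleanly before the Kafka nodes pillar has been populated.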
Josh Brower
af11879545 Merge pull request #13205 from Security-Onion-Solutions/2.4/customsuricatasources
Initial support for custom suricata urls and local rulesets
2024-06-14 13:50:06 -04:00
DefensiveDepth
c89f1c9d95 remove multiline 2024-06-14 13:48:55 -04:00
DefensiveDepth
b7ac599a42 set to empty 2024-06-14 13:21:36 -04:00
DefensiveDepth
8363877c66 move to custom rules 2024-06-14 12:41:44 -04:00
DefensiveDepth
4bcb4b5b9c removed unneeded import 2024-06-14 09:32:34 -04:00
DefensiveDepth
68302e14b9 add to defaults and tweaks 2024-06-14 09:28:23 -04:00
DefensiveDepth
c1abc7a7f1 Update description 2024-06-14 08:51:34 -04:00
DefensiveDepth
484717d57d initial support for custom suricata urls and local rulesets 2024-06-14 08:42:10 -04:00
Jorge Reyes
b91c608fcf Merge pull request #13204 from Security-Onion-Solutions/kaffix
Only comment out so-kafka from so-status when it exists & only run en…
2024-06-13 15:54:50 -04:00
reyesj2
8f8ece2b34 Only comment out so-kafka from so-status when it exists & only run ensure_default_pipeline when Kafka is configured
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 15:50:34 -04:00
Jorge Reyes
9b5c1c01e9 Merge pull request #13200 from Security-Onion-Solutions/kafka/fix 2024-06-13 12:26:57 -04:00
reyesj2
816a1d446e Generate kafka-logstash cert on standalone, manager, and managersearch in addition to searchnodes.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 12:18:13 -04:00
reyesj2
19bfd5beca fix kafka nodeid assignment to increment correctly
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 12:16:39 -04:00
Jorge Reyes
9ac7e051b3 Merge pull request #13190 from Security-Onion-Solutions/reyesj2/kafka
Initial Kafka support
2024-06-13 09:42:59 -04:00
reyesj2
80b1d51f76 wrong location for global.pipeline check
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 08:50:53 -04:00
Doug Burks
6340ebb36d Merge pull request #13197 from Security-Onion-Solutions/dougburks-patch-1
Update DOWNLOAD_AND_VERIFY_ISO.md
2024-06-12 16:49:21 -04:00
Doug Burks
70721afa51 Update DOWNLOAD_AND_VERIFY_ISO.md 2024-06-12 16:47:26 -04:00
reyesj2
9c31622598 telegraf should only include the jolokia config when Kafka is set as the global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 15:42:00 -04:00
reyesj2
f372b0907b Use kafka:password for kafka certs
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 15:41:10 -04:00
coreyogburn
fac96e0b08 Merge pull request #13183 from Security-Onion-Solutions/cogburn/cleanup-config
Fix unnecessary escaping
2024-06-12 11:57:31 -06:00
reyesj2
2bc53f9868 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-06-12 12:36:58 -04:00
reyesj2
e8106befe9 Append '-securityonion' to all Security Onion related Kafka topics. Adjust logstash to ingest all topics ending in '-securityonion' to avoid having to manually list topic names
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 12:05:16 -04:00
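The suffix convention above lets consumers discover topics by pattern instead of maintaining a hard-coded list. A minimal Python sketch of that selection (the function name is illustrative):

```python
def securityonion_topics(all_topics):
    # Subscribe by suffix instead of listing topic names manually:
    # every Security Onion topic is named '<source>-securityonion'.
    return [t for t in all_topics if t.endswith("-securityonion")]
```

In Logstash's kafka input this is typically expressed declaratively with a `topics_pattern` such as `.*-securityonion` rather than in application code.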
reyesj2
83412b813f Renamed Kafka pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:19:25 -04:00
reyesj2
b56d497543 Revert a so-setup change. Kafka is not an installable option
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:17:06 -04:00
reyesj2
dd40962288 Revert a whiptail menu change. Kafka is not an install option
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:07:23 -04:00
reyesj2
b7eebad2a5 Update Kafka self reset & add initial Kafka wrapper scripts to build out
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:01:40 -04:00
Josh Patterson
092f716f12 Merge pull request #13189 from Security-Onion-Solutions/soupmsgq
remove this \n
2024-06-12 10:41:49 -04:00
m0duspwnens
c38f48c7f2 remove this \n 2024-06-12 10:34:32 -04:00
Corey Ogburn
d5ef0e5744 Fix unnecessary escaping 2024-06-11 12:34:32 -06:00
Josh Brower
e90557d7dc Merge pull request #13179 from Security-Onion-Solutions/2.4/fixintegritycheck
Add new bind - suricata all.rules
2024-06-11 13:08:40 -04:00
reyesj2
628893fd5b remove redundant 'kafka_' from annotations & defaults
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:56:21 -04:00
reyesj2
a81e4c3362 remove dash(-) from kafka.id
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:55:17 -04:00
reyesj2
ca7b89c308 Added Kafka reset to SOC UI. In case of changing an active broker to a controller, topics may become unavailable, and resolving this would require manual intervention. This option allows running a reset to start from a clean slate, then configuring the cluster to the desired state before re-enabling Kafka as the global pipeline.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:21:13 -04:00
Josh Patterson
03335cc015 Merge pull request #13182 from Security-Onion-Solutions/dockerup
upgrade docker
2024-06-11 11:08:40 -04:00
reyesj2
08557ae287 kafka.id field should only be present when metadata for kafka exists
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:01:34 -04:00
DefensiveDepth
08d2a6242d Add new bind - suricata all.rules 2024-06-11 10:03:33 -04:00
m0duspwnens
4b481bd405 add epoch to docker for oracle 2024-06-11 09:41:58 -04:00
m0duspwnens
0b1e3b2a7f upgrade docker for focal 2024-06-10 16:24:44 -04:00
m0duspwnens
dbd9873450 upgrade docker for jammy 2024-06-10 16:04:11 -04:00
m0duspwnens
c6d0a17669 docker upgrade debian 12 2024-06-10 15:43:29 -04:00
m0duspwnens
adeab10f6d upgrade docker and containerd.io for oracle 2024-06-10 12:14:27 -04:00
reyesj2
824f852ed7 merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-10 11:26:23 -04:00
reyesj2
284c1be85f Update Kafka controller(s) via SOC UI
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-10 11:08:54 -04:00
Jason Ertel
7ad6baf483 Merge pull request #13171 from Security-Onion-Solutions/jertel/yaml
correct placement of error check override
2024-06-08 08:21:20 -04:00
Jason Ertel
f1638faa3a correct placement of error check override 2024-06-08 08:18:34 -04:00
Jason Ertel
dea786abfa Merge pull request #13170 from Security-Onion-Solutions/jertel/yaml
gracefully handle missing parent key
2024-06-08 07:49:49 -04:00
Jason Ertel
f96b82b112 gracefully handle missing parent key 2024-06-08 07:44:46 -04:00
Josh Patterson
95fe11c6b4 Merge pull request #13162 from Security-Onion-Solutions/soupmsgq
fix elastic templates not loading due to global_override phases
2024-06-07 16:23:03 -04:00
Jason Ertel
f2f688b9b8 Update soup 2024-06-07 16:18:09 -04:00
m0duspwnens
0139e18271 additional description 2024-06-07 16:03:21 -04:00
Mike Reeves
657995d744 Merge pull request #13165 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update defaults.yaml
2024-06-07 15:38:01 -04:00
Mike Reeves
4057238185 Update defaults.yaml 2024-06-07 15:33:49 -04:00
coreyogburn
fb07ff65c9 Merge pull request #13164 from Security-Onion-Solutions/cogburn/tls-options
AdditionalCA and InsecureSkipVerify
2024-06-07 13:10:45 -06:00
Mike Reeves
dbc56ffee7 Update defaults.yaml 2024-06-07 15:09:09 -04:00
Corey Ogburn
ee696be51d Remove rootCA and insecureSkipVerify from SOC defaults 2024-06-07 13:07:04 -06:00
Corey Ogburn
5d3fd3d389 AdditionalCA and InsecureSkipVerify
New fields have been added to manager and then duplicated over to SOC's config, in the same vein as how proxy was updated earlier this week.

AdditionalCA holds the PEM-formatted public keys that should be trusted when making requests. It has been implemented for both Sigma's zip downloads and for Sigma's and Suricata's repository clones and pulls.

InsecureSkipVerify has been added to help our users troubleshoot their configuration. Setting it to true skips cert verification on outgoing requests; self-signed, missing, or invalid certs will not throw an error.
2024-06-07 12:47:09 -06:00
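SOC itself is written in Go, but the semantics of these two settings can be illustrated with Python's standard `ssl` module (the function name and defaults here are assumptions for illustration, not SOC's actual API):

```python
import ssl

def build_client_context(additional_ca_pem="", insecure_skip_verify=False):
    """Build a TLS context mirroring the AdditionalCA / InsecureSkipVerify behavior."""
    ctx = ssl.create_default_context()
    if additional_ca_pem:
        # Trust the extra PEM-formatted CA(s) in addition to the system store.
        ctx.load_verify_locations(cadata=additional_ca_pem)
    if insecure_skip_verify:
        # Troubleshooting only: self-signed, missing, or invalid certs no longer error.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

Note the asymmetry: AdditionalCA widens the trust store while keeping verification on, whereas InsecureSkipVerify disables verification entirely and should never be left enabled outside troubleshooting.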
Corey Ogburn
fa063722e1 RootCA and InsecureSkipVerify
New empty settings and their annotations.
2024-06-07 09:10:14 -06:00
m0duspwnens
f5cc35509b fix output alignment 2024-06-07 11:03:26 -04:00
m0duspwnens
d39c8fae54 format output 2024-06-07 09:01:16 -04:00
m0duspwnens
d3b81babec check for phases with so-yaml, remove if exists 2024-06-06 16:15:21 -04:00
coreyogburn
f35f6bd4c8 Merge pull request #13154 from Security-Onion-Solutions/cogburn/soc-proxy
SOC Proxy Setting
2024-06-06 14:03:16 -06:00
Mike Reeves
d5cfef94a3 Merge pull request #13156 from Security-Onion-Solutions/TOoSmOotH-patch-3 2024-06-06 16:01:22 -04:00
Mike Reeves
f37f5ba97b Update soc_suricata.yaml 2024-06-06 15:57:58 -04:00
Corey Ogburn
42818a9950 Remove proxy from SOC defaults 2024-06-06 13:28:07 -06:00
Corey Ogburn
e85c3e5b27 SOC Proxy Setting
The so_proxy value we build during install is now copied to SOC's config.
2024-06-06 11:55:27 -06:00
m0duspwnens
a39c88c7b4 add set to troubleshoot failure 2024-06-06 12:56:24 -04:00
m0duspwnens
73ebf5256a Merge remote-tracking branch 'origin/2.4/dev' into soupmsgq 2024-06-06 12:44:45 -04:00
Jason Ertel
6d31cd2a41 Merge pull request #13150 from Security-Onion-Solutions/jertel/yaml
add ability to retrieve yaml values via so-yaml.py; improve so-minion id matching
2024-06-06 12:09:03 -04:00
Jason Ertel
5600fed9c4 add ability to retrieve yaml values via so-yaml.py; improve so-minion id matching 2024-06-06 11:56:07 -04:00
m0duspwnens
6920b77b4a fix msg 2024-06-06 11:00:43 -04:00
m0duspwnens
ccd6b3914c add final msg queue for soup. 2024-06-06 10:33:55 -04:00
reyesj2
c4723263a4 Remove unused kafka reactor
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-06 08:59:17 -04:00
reyesj2
4581a46529 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-06-05 20:47:41 -04:00
Josh Patterson
33a2c5dcd8 Merge pull request #13141 from Security-Onion-Solutions/sotcprp
move so-tcpreplay from common state to sensor state
2024-06-05 09:49:39 -04:00
m0duspwnens
f6a8a21f94 remove space 2024-06-05 08:58:46 -04:00
m0duspwnens
ff5773c837 move so-tcpreplay back to common. return empty string if no sensor.interface pillar 2024-06-05 08:56:32 -04:00
m0duspwnens
66f8084916 Merge remote-tracking branch 'origin/2.4/dev' into sotcprp 2024-06-05 08:32:54 -04:00
m0duspwnens
a2467d0418 move so-tcpreplay to sensor state 2024-06-05 08:24:57 -04:00
reyesj2
3b0339a9b3 create kafka.id from kafka {partition}-{offset}-{timestamp} for event tracking
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 14:27:52 -04:00
reyesj2
fb1d4fdd3c update license
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 12:33:51 -04:00
Josh Patterson
56a16539ae Merge pull request #13134 from Security-Onion-Solutions/sotcprp
so-tcpreplay now runs if manager is offline
2024-06-04 10:43:33 -04:00
m0duspwnens
c0b2cf7388 add the curlys 2024-06-04 10:28:21 -04:00
reyesj2
d9c58d9333 update receiver pillar access
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 08:33:45 -04:00
Josh Patterson
ef3a52468f Merge pull request #13129 from Security-Onion-Solutions/salt3006.8
salt 3006.6
2024-06-03 15:29:19 -04:00
m0duspwnens
c88b731793 revert to 3006.6 2024-06-03 15:27:08 -04:00
reyesj2
2e85a28c02 Remove so-kafka-clusterid script, created during soup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-02 18:25:59 -04:00
weslambert
964fef1aab Merge pull request #13117 from Security-Onion-Solutions/fix/items_and_lists
Add templates for .items and .lists indices
2024-05-31 16:34:29 -04:00
reyesj2
1a832fa0a5 Move soup kafka needfuls to up_to_2.4.80
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-31 14:04:46 -04:00
reyesj2
75bdc92bbf Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-05-31 14:02:43 -04:00
Wes
a8c231ad8c Add component templates 2024-05-31 17:47:01 +00:00
Wes
f396247838 Add index templates and lifecycle policies 2024-05-31 17:46:19 +00:00
reyesj2
e3ea4776c7 Update kafka nodes pillar before running highstate with the pillarWatch engine. This allows configuring your Kafka controllers before the cluster comes up for the first time
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-31 13:34:28 -04:00
coreyogburn
37a928b065 Merge pull request #13107 from Security-Onion-Solutions/cogburn/detection-templates
Added TemplateDetections To Detection ClientParams
2024-05-30 16:26:17 -06:00
Corey Ogburn
85c269e697 Added TemplateDetections To Detection ClientParams
The UI can now insert templates when you select a Detection language. These are those templates, annotated.
2024-05-30 15:59:03 -06:00
m0duspwnens
6e70268ab9 Merge remote-tracking branch 'origin/2.4/dev' into sotcprp 2024-05-30 16:34:37 -04:00
Josh Patterson
fb8929ea37 Merge pull request #13103 from Security-Onion-Solutions/salt3006.8
Salt3006.8
2024-05-30 16:32:05 -04:00
weslambert
5d9c0dd8b5 Merge pull request #13101 from Security-Onion-Solutions/fix/separate_suricata
Separate Suricata alerts into a specific data stream
2024-05-30 16:30:55 -04:00
m0duspwnens
debf093c54 Merge remote-tracking branch 'origin/2.4/dev' into salt3006.8 2024-05-30 15:58:10 -04:00
reyesj2
00b5a5cc0c Revert "revert version for soup test before 2.4.80 pipeline unpaused"
This reverts commit 48713a4e7b.
2024-05-30 15:13:16 -04:00
reyesj2
dbb99d0367 Remove bad config
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-30 15:10:15 -04:00
m0duspwnens
7702f05756 upgrade salt 3006.8. soup for 2.4.80 2024-05-30 15:00:32 -04:00
Wes
2c635bce62 Set index for Suricata alerts 2024-05-30 17:02:31 +00:00
reyesj2
48713a4e7b revert version for soup test before 2.4.80 pipeline unpaused
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-30 13:00:34 -04:00
Wes
e831354401 Add Suricata alerts setting for configuration 2024-05-30 17:00:11 +00:00
Wes
55c5ea5c4c Add template for Suricata alerts 2024-05-30 16:58:56 +00:00
reyesj2
1fd5165079 Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 23:37:40 -04:00
reyesj2
949cea95f4 Update pillarWatch config for global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 23:19:44 -04:00
Mike Reeves
12762e08ef Merge pull request #13093 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2024-05-29 16:54:31 -04:00
Mike Reeves
62bdb2627a Update VERSION 2024-05-29 16:53:27 -04:00
reyesj2
386be4e746 WIP: Manage Kafka nodes pillar role value
This way, when kafka_controllers is updated, the pillar value gets updated and any non-controllers revert to the 'broker'-only role.
Needs more testing: when a new controller joins in this manner, Kafka errors due to cluster metadata being out of sync. One solution is to remove /nsm/kafka/data/__cluster_metadata-0/quorum-state and restart the cluster. An alternative is working with the Kafka CLI tools to inform the cluster of the new voter — likely the best option, but it requires creating a wrapper script of some sort for updating the cluster in-place.
The easiest option is to have all receivers join the grid and then configure Kafka with specific controllers via the SOC UI prior to enabling Kafka. This way the Kafka cluster comes up in the desired configuration with no need to immediately modify the cluster.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:48:39 -04:00
Mike Reeves
dfcf7a436f Merge pull request #13091 from Security-Onion-Solutions/2.4/dev
2.4.70
2024-05-29 16:41:54 -04:00
reyesj2
d9ec556061 Update some annotations and defaults
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:41:02 -04:00
reyesj2
876d860488 elastic agent should be able to communicate over 9092 for sending logs to kafka brokers
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:40:15 -04:00
Mike Reeves
88651219a6 Merge pull request #13090 from Security-Onion-Solutions/2.4.70
2.4.70
2024-05-29 14:54:16 -04:00
Mike Reeves
a655f8dc04 2.4.70 2024-05-29 14:52:47 -04:00
Mike Reeves
e98b8566c9 2.4.70 2024-05-29 14:50:22 -04:00
Josh Brower
ef10794e3b Merge pull request #13089 from Security-Onion-Solutions/2.4/realert
fix rsync
2024-05-29 11:12:45 -04:00
DefensiveDepth
0d034e7adc fix rsync 2024-05-29 10:55:56 -04:00
reyesj2
59097070ef Revert "Remove unneeded jolokia aggregate metrics to reduce data ingested to influx"
This reverts commit 1c1a1a1d3f.
2024-05-28 12:17:43 -04:00
reyesj2
77b5aa4369 Correct dashboard name
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:34:35 -04:00
reyesj2
0d7c331ff0 only show specific fields when hovering over Kafka influxdb panels
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:29:38 -04:00
reyesj2
1c1a1a1d3f Remove unneeded jolokia aggregate metrics to reduce data ingested to influx
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:14:19 -04:00
reyesj2
47efcfd6e2 Add basic Kafka metrics to 'Security Onion Performance' influxdb dashboard
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 10:55:11 -04:00
reyesj2
15a0b959aa Add jolokia metrics for influxdb dashboard
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 10:51:39 -04:00
Josh Brower
ca49943a7f Merge pull request #13085 from Security-Onion-Solutions/2.4/soupchange
Check to see if local exists
2024-05-28 10:25:46 -04:00
DefensiveDepth
ee4ca0d7a2 Check to see if local exists 2024-05-28 10:24:09 -04:00
Josh Brower
0d634f3b8e Merge pull request #13084 from Security-Onion-Solutions/2.4/soupchange
Fix fi
2024-05-28 10:05:33 -04:00
DefensiveDepth
f68ac23f0e Fix fi
Signed-off-by: DefensiveDepth <Josh@defensivedepth.com>
2024-05-28 10:03:31 -04:00
Josh Brower
825c4a9adb Merge pull request #13083 from Security-Onion-Solutions/2.4/soupchange
Backup .yml files too
2024-05-28 09:45:53 -04:00
DefensiveDepth
2a2b86ebe6 Don't overwrite 2024-05-28 09:43:45 -04:00
DefensiveDepth
74dfc25376 backup local rules 2024-05-28 09:29:10 -04:00
DefensiveDepth
81ee60e658 Backup .yml files too 2024-05-28 06:42:18 -04:00
reyesj2
fcb6a47e8c Remove redis.sh telegraf script when Kafka is global pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-26 21:10:41 -04:00
Josh Brower
49fd84a3a7 Merge pull request #13081 from Security-Onion-Solutions/2.4/soupchange
Don't bail - just wait for enter
2024-05-24 16:28:40 -04:00
DefensiveDepth
58b565558d Don't bail - just wait for enter 2024-05-24 16:21:59 -04:00
Josh Brower
185fb38b2d Merge pull request #13079 from Security-Onion-Solutions/2.4/sigmapipelineupdates
Add IDH mappings
2024-05-24 14:48:22 -04:00
DefensiveDepth
550b3ee92d Add IDH mappings 2024-05-24 14:46:24 -04:00
Josh Brower
29a87fd166 Merge pull request #13078 from Security-Onion-Solutions/2.4/socdefaultsdet
Add instructions for sigma and yara repos
2024-05-24 13:02:01 -04:00
DefensiveDepth
f90d40b471 Fix typo 2024-05-24 12:56:17 -04:00
DefensiveDepth
4344988abe Add instructions for sigma and yara repos 2024-05-24 12:54:36 -04:00
Josh Brower
979147a111 Merge pull request #13062 from Security-Onion-Solutions/2.4/backupscript
Detections backup script
2024-05-24 10:06:56 -04:00
DefensiveDepth
66725b11b3 Added unit tests 2024-05-24 09:55:10 -04:00
Jason Ertel
19f9c4e389 Merge pull request #13076 from Security-Onion-Solutions/jertel/eaconfig
provide default columns when viewing SOC logs
2024-05-24 08:39:17 -04:00
Jason Ertel
bd11d59c15 add event.dataset since there are other datasets in soc logs 2024-05-24 08:38:12 -04:00
Jason Ertel
15155613c3 provide default columns when viewing SOC logs 2024-05-24 08:23:45 -04:00
m0duspwnens
b5f656ae58 dont render pillar each time so-tcpreplay runs 2024-05-23 13:22:22 -04:00
Josh Patterson
7177392adc Merge pull request #13071 from Security-Onion-Solutions/telfinwip
Telfinwip
2024-05-23 10:46:54 -04:00
m0duspwnens
ea7715f729 use waitforstate var instead. 2024-05-23 10:41:10 -04:00
m0duspwnens
0b9ebefdb6 only show telem status in final whiptail if new deployment 2024-05-23 10:08:23 -04:00
Mike Reeves
19e66604d0 Merge pull request #13069 from Security-Onion-Solutions/TOoSmOotH-patch-8
Update defaults.yaml
2024-05-23 08:22:05 -04:00
Mike Reeves
1e6161f89c Update defaults.yaml 2024-05-23 08:19:43 -04:00
Josh Brower
a8c287c491 Merge pull request #13067 from Security-Onion-Solutions/2.4/fixpipeline
Fix strelka rule.uuid
2024-05-23 07:53:14 -04:00
Doug Burks
2c4f5f0a91 Merge pull request #13066 from Security-Onion-Solutions/dougburks-patch-1
Update defaults.yaml to fix order of groupby tables and eliminate dup…
2024-05-23 06:02:49 -04:00
DefensiveDepth
8e7c487cb0 Fix strelka rule.uuid 2024-05-23 05:59:31 -04:00
Doug Burks
3d4f3a04a3 Update defaults.yaml to fix order of groupby tables and eliminate duplicate 2024-05-23 05:56:18 -04:00
Josh Brower
ce063cf435 Merge pull request #13063 from Security-Onion-Solutions/2.4/yarafix
Fix casing issue
2024-05-22 18:51:54 -04:00
DefensiveDepth
a072e34cfe Fix casing issue 2024-05-22 17:12:41 -04:00
DefensiveDepth
d19c1a514b Detections backup script 2024-05-22 15:12:23 -04:00
weslambert
b415810485 Merge pull request #13061 from Security-Onion-Solutions/fix/tab_casing
Change tab casing to be consistent with other whiptail prompts
2024-05-22 13:44:09 -04:00
weslambert
3cfd710756 Change tab casing to be consistent with other whiptail prompts 2024-05-22 13:41:32 -04:00
reyesj2
382cd24a57 Small changes needed for using new Kafka docker image + added Kafka logging output to /opt/so/log/kafka/
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:39:21 -04:00
reyesj2
b1beb617b3 Logstash should be disabled when Kafka is enabled, except when a minion override exists OR the node is a standalone
- A standalone subscribes to Kafka topics via logstash for ingest

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:38:09 -04:00
reyesj2
91f8b1fef7 Set default replication factor back to the Kafka default
If the replication factor is > 1, Kafka will fail to start until another broker is added
  - For internal automated testing purposes a Standalone will be utilized

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:35:09 -04:00
Jason Ertel
ca6e2b8e22 Merge pull request #13054 from Security-Onion-Solutions/jertel/eaconfig
fix elastalert settings
2024-05-21 18:38:03 -04:00
Jason Ertel
8af3158ea7 fix elastalert settings 2024-05-21 18:28:21 -04:00
Josh Brower
8b011b8d7e Merge pull request #13053 from Security-Onion-Solutions/2.4/alertsefaults
Add rule.uuid to default groupbys
2024-05-21 17:54:27 -04:00
DefensiveDepth
f9e9b825cf Removed unneeded groupby 2024-05-21 17:53:20 -04:00
DefensiveDepth
3992ef1082 Add rule.uuid to default groupbys 2024-05-21 17:45:56 -04:00
weslambert
556fdfdcf9 Merge pull request #13052 from Security-Onion-Solutions/fix/add_rule_uuid
Add rule.uuid for YARA matches
2024-05-21 17:09:49 -04:00
weslambert
f4490fab58 Add rule.uuid for YARA matches 2024-05-21 17:05:39 -04:00
weslambert
5aaf44ebb2 Merge pull request #13049 from Security-Onion-Solutions/fix/detections_alerts_component_template
Exclude detections from template name matching
2024-05-21 13:45:19 -04:00
weslambert
deb140e38e Exclude detections from template name matching 2024-05-21 13:38:52 -04:00
Jason Ertel
3de6454d4f Merge pull request #13047 from Security-Onion-Solutions/jertel/eaconfig
Jertel/eaconfig
2024-05-21 13:34:20 -04:00
Jason Ertel
d57cc9627f exclude false positives related to detections 2024-05-21 13:31:50 -04:00
Jason Ertel
8ce19a93b9 exclude false positives related to detections 2024-05-21 13:29:20 -04:00
Jason Ertel
d315b95d77 elastalert settings 2024-05-21 07:15:19 -04:00
Doug Burks
6172816f61 Merge pull request #13044 from Security-Onion-Solutions/dougburks-patch-1
Update README.md with new Detections screenshot number
2024-05-21 06:49:35 -04:00
Doug Burks
03826dd32c Update README.md with new Detections screenshot number 2024-05-21 06:43:07 -04:00
Jason Ertel
b7a4f20c61 elastalert settings 2024-05-20 20:11:30 -04:00
Jason Ertel
02b4d37c11 elastalert settings 2024-05-20 20:00:31 -04:00
Jason Ertel
f8ce039065 elastalert settings 2024-05-20 19:58:12 -04:00
Jason Ertel
e2d0b8f4c7 elastalert settings 2024-05-20 19:38:36 -04:00
Jason Ertel
8a3061fe3e elastalert settings 2024-05-20 19:36:06 -04:00
Jason Ertel
c594168b65 elastalert settings 2024-05-20 19:05:43 -04:00
Jason Ertel
31fdf15ce1 Merge branch '2.4/dev' into jertel/eaconfig 2024-05-20 18:59:35 -04:00
Jason Ertel
6b2219b7f2 elastalert settings 2024-05-20 18:52:37 -04:00
coreyogburn
64144b4759 Merge pull request #13041 from Security-Onion-Solutions/cogburn/integrity-checker-annotations
Annotate integrityCheckFrequencySeconds per det engine
2024-05-20 14:52:38 -06:00
Corey Ogburn
6e97c39f58 Marked as Advanced 2024-05-20 14:52:05 -06:00
Corey Ogburn
026023fd0a Annotate integrityCheckFrequencySeconds per det engine 2024-05-20 14:35:11 -06:00
Jorge Reyes
d7ee89542a Merge pull request #13040 from Security-Onion-Solutions/lkscript
Create helper script for tpm enrollment
2024-05-20 15:25:50 -04:00
reyesj2
6fac6eebce Helper script for enrolling tpm into luks
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-20 14:37:54 -04:00
coreyogburn
3c3497c2fd Merge pull request #13039 from Security-Onion-Solutions/cogburn/integrity-check
Add Default IntegrityCheck Frequency Values
2024-05-20 11:26:30 -06:00
Corey Ogburn
fcc72a4f4e Add Default IntegrityCheck Frequency Values 2024-05-20 11:23:25 -06:00
coreyogburn
28dea9be58 Merge pull request #13037 from Security-Onion-Solutions/cogburn/comp-report-path-change
Change Compilation Report Path
2024-05-17 15:48:52 -06:00
Corey Ogburn
0cc57fc240 Change Compilation Report Path
Move compilation report path to /opt/so/state and mount that folder in SOC
2024-05-17 15:47:23 -06:00
weslambert
17518b90ca Merge pull request #13036 from Security-Onion-Solutions/fix/yara_compile_report
Create YARA compile report for SOC integrity check
2024-05-17 16:15:21 -04:00
weslambert
d9edff38df Create compile report for SOC integrity check 2024-05-17 16:10:10 -04:00
Jason Ertel
300d8436a8 Merge pull request #13035 from Security-Onion-Solutions/jertel/eaconfig
add support for custom alerters
2024-05-17 15:01:54 -04:00
Jason Ertel
1c4d36760a add support for custom alerters 2024-05-17 14:49:39 -04:00
reyesj2
34a5985311 Create tpm enrollment script
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-16 21:14:57 -04:00
Josh Patterson
aa0163349b Merge pull request #13031 from Security-Onion-Solutions/issue/13021
Issue/13021
2024-05-16 16:40:17 -04:00
Josh Patterson
572b8d08d9 Merge branch '2.4/dev' into issue/13021 2024-05-16 16:39:17 -04:00
m0duspwnens
cc6cb346e7 fix issue/13030 2024-05-16 16:31:45 -04:00
m0duspwnens
b54632080e check if exists in override before popping 2024-05-16 16:04:17 -04:00
Josh Patterson
44d3468f65 Merge pull request #13029 from Security-Onion-Solutions/revert-13028-issue/13021
Revert "dont merge policy from global_overrides if not defined in default index_settings"
2024-05-16 15:48:05 -04:00
Josh Patterson
9d4668f4d3 Revert "dont merge policy from global_overrides if not defined in default index_settings" 2024-05-16 15:45:55 -04:00
Josh Patterson
da2ac4776e Merge pull request #13028 from Security-Onion-Solutions/issue/13021
dont merge policy from global_overrides if not defined in default index_settings
2024-05-16 14:33:51 -04:00
m0duspwnens
9796354b48 dont merge policy from global_overrides if not defined in default index_settings 2024-05-16 14:27:32 -04:00
Jason Ertel
aa32eb9c0e Merge pull request #13025 from Security-Onion-Solutions/jertel/suridp
exclude detect-parse errors
2024-05-15 19:21:30 -04:00
Jason Ertel
4771810361 exclude detect-parse errors 2024-05-15 19:10:50 -04:00
Mike Reeves
52f27c00ce Merge pull request #13024 from Security-Onion-Solutions/TOoSmOotH-patch-7
Update soup
2024-05-15 18:07:28 -04:00
Mike Reeves
ab9ec2ec6b Update soup 2024-05-15 18:04:01 -04:00
Josh Patterson
4d7835612d Merge pull request #13022 from Security-Onion-Solutions/soupaml
add a newline to final output of so-elastic-agent-gen-installers
2024-05-15 16:37:53 -04:00
m0duspwnens
8076ea0e0a add another space 2024-05-15 16:34:05 -04:00
Josh Brower
320ae641b1 Merge pull request #13023 from Security-Onion-Solutions/2.4/sigmapipelineupdates
alphabetical order
2024-05-15 16:30:45 -04:00
DefensiveDepth
b4aec9a9d0 alphabetical order 2024-05-15 16:29:21 -04:00
m0duspwnens
6af0308482 add a newline 2024-05-15 16:26:44 -04:00
Josh Patterson
08024c7511 Merge pull request #13020 from Security-Onion-Solutions/issue/13012
Issue/13012
2024-05-15 15:33:01 -04:00
m0duspwnens
3a56058f7f update description 2024-05-15 15:31:31 -04:00
Mike Reeves
795de7ab07 Merge pull request #13019 from Security-Onion-Solutions/TOoSmOotH-patch-6
Update enabled.sls
2024-05-15 14:08:40 -04:00
Mike Reeves
8803ad4018 Update enabled.sls 2024-05-15 14:05:48 -04:00
m0duspwnens
62a8024c6c Merge remote-tracking branch 'origin/2.4/dev' into issue/13012 2024-05-15 13:48:46 -04:00
m0duspwnens
ea253726a0 fix soup 2024-05-15 13:48:32 -04:00
Mike Reeves
a0af25c314 Merge pull request #13017 from Security-Onion-Solutions/surimigrate
Update enabled.sls
2024-05-15 11:40:50 -04:00
Mike Reeves
e3a0847867 Update soup 2024-05-15 11:31:41 -04:00
Mike Reeves
7345d2c5a6 Update enabled.sls 2024-05-15 11:16:20 -04:00
Josh Patterson
7cbc3a83c6 Merge pull request #13016 from Security-Onion-Solutions/soupaml
so-yaml in soup_scripts
2024-05-15 10:49:56 -04:00
m0duspwnens
427b1e4524 revert soup_scripts back to common 2024-05-15 10:28:02 -04:00
m0duspwnens
2dbbe8dec4 soup_scripts put so-yaml in salt file system. move soup scripts to manager.soup_scripts 2024-05-15 10:07:06 -04:00
Josh Patterson
e76c2c95a9 Merge pull request #13013 from Security-Onion-Solutions/issue/13012
remove idh.services from idh node pillar files
2024-05-15 08:37:15 -04:00
m0duspwnens
51862e5803 remove idh.services from idh node pillar files 2024-05-14 13:08:51 -04:00
Doug Burks
27ad84ebd9 Merge pull request #13011 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add NetFlow dashboard #13009
2024-05-14 10:15:25 -04:00
Doug Burks
67645a662d FEATURE: Add NetFlow dashboard #13009 2024-05-14 10:14:16 -04:00
Doug Burks
1d16f6b7ed Merge pull request #13010 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add NetFlow dashboard #13009
2024-05-14 10:02:40 -04:00
Doug Burks
5b45c80a62 FEATURE: Add NetFlow dashboard #13009 2024-05-14 10:01:18 -04:00
weslambert
6dec9b4cf7 Merge pull request #12986 from Security-Onion-Solutions/fix/old_strelka
Remove old Strelka configuration for YARA
2024-05-14 09:27:19 -04:00
weslambert
13062099b3 Remove YARA script update and reference to exclusions 2024-05-13 18:04:16 -04:00
weslambert
7250fb1188 Merge pull request #13004 from Security-Onion-Solutions/fix/detections_alerts_indices
FIX: Detections alerts indices
2024-05-13 17:02:52 -04:00
Josh Patterson
437d0028db Merge pull request #13003 from Security-Onion-Solutions/localdirs
create local directories during soup if needed
2024-05-13 16:33:04 -04:00
m0duspwnens
1ef9509aac define local_salt_dir 2024-05-13 14:34:22 -04:00
weslambert
d606f259d1 Add detection alerts 2024-05-13 14:25:11 -04:00
weslambert
c8870eae65 Add detection alerts template 2024-05-13 14:23:47 -04:00
Josh Brower
2419066dc8 Merge pull request #13001 from Security-Onion-Solutions/2.4/socdefaults
2.4/socdefaults
2024-05-13 13:39:31 -04:00
DefensiveDepth
e430de88d3 Change rule updates to 24h 2024-05-13 13:15:06 -04:00
DefensiveDepth
c4c38f58cb Update descriptions 2024-05-13 13:13:57 -04:00
weslambert
26b5a39912 Change index to detections.alerts 2024-05-13 12:59:17 -04:00
m0duspwnens
eb03858230 missed one 2024-05-13 12:44:57 -04:00
m0duspwnens
2643da978b those functions in so-functions 2024-05-13 11:51:10 -04:00
m0duspwnens
649f52dac7 create_local_directories in soup too 2024-05-13 10:37:56 -04:00
Mike Reeves
927fe91f25 Merge pull request #13000 from Security-Onion-Solutions/soupz
Backup Suricata for migration
2024-05-13 10:12:34 -04:00
Mike Reeves
9d6f6c7893 Update soup 2024-05-13 10:09:35 -04:00
Mike Reeves
28e40e42b3 Update soc_soc.yaml 2024-05-13 09:58:32 -04:00
Mike Reeves
6c71c45ef6 Update soup 2024-05-13 09:55:57 -04:00
Mike Reeves
641899ad56 Backup Suricata for migration and remove advanced from reverselookups 2024-05-13 09:50:14 -04:00
Doug Burks
d120326cb9 Merge pull request #12999 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add more fields to the SOC Dashboards URL for so-import-pcap #12972
2024-05-13 09:20:01 -04:00
Doug Burks
a4f2d8f327 Merge pull request #12998 from Security-Onion-Solutions/dougburks-patch-2
Update README.md to reference new screenshots for 2.4.70
2024-05-13 08:42:33 -04:00
Doug Burks
ae323cf385 Update README.md to include new Detections screenshot 2024-05-13 08:34:44 -04:00
Doug Burks
788c31014d Update README.md to reference new screenshots for 2.4.70 2024-05-13 08:30:48 -04:00
Jason Ertel
154dc605ef Merge pull request #12994 from Security-Onion-Solutions/jertel/testcy
support upgrade tests
2024-05-10 16:57:19 -04:00
Jason Ertel
2a0e33401d support upgrade tests 2024-05-10 16:54:50 -04:00
Josh Patterson
79b4d7b6b6 Merge pull request #12992 from Security-Onion-Solutions/issue/12991
Fix IDH node
2024-05-10 12:43:09 -04:00
m0duspwnens
986cbb129a pkg not file 2024-05-10 12:33:56 -04:00
m0duspwnens
950c68783c add pkg policycoreutils-python-utils to idh node 2024-05-10 11:46:00 -04:00
Doug Burks
cec75ba475 Merge pull request #12989 from Security-Onion-Solutions/dougburks-patch-2
FIX: so-index-list typo #12988
2024-05-10 08:06:29 -04:00
Doug Burks
26cb8d43e1 FIX: so-index-list typo #12988 2024-05-10 08:01:56 -04:00
Doug Burks
a1291e43c3 FIX: so-index-list typo #12988 2024-05-10 07:58:13 -04:00
Jason Ertel
45fd07cdf8 Merge pull request #12987 from Security-Onion-Solutions/jertel/testcy
Add quick action to find related alerts for a detection
2024-05-09 18:08:08 -04:00
Jason Ertel
fecd674fdb Add quick action to find related alerts for a detection 2024-05-09 17:55:41 -04:00
Jason Ertel
dff2de4527 Merge pull request #12984 from Security-Onion-Solutions/jertel/testcy
tests will retry on any rule import failure
2024-05-09 15:50:37 -04:00
Jason Ertel
19e1aaa1a6 exclude detection rule errors 2024-05-09 15:45:33 -04:00
Jason Ertel
074d063fee tests will retry on any rule import failure 2024-05-09 14:52:58 -04:00
Wes
6ed82d7b29 Remove YARA download in setup 2024-05-09 17:27:46 +00:00
Wes
ea4cf42913 Remove old YARA update script 2024-05-09 17:26:54 +00:00
Wes
8a34f5621c Remove old YARA download script 2024-05-09 17:26:45 +00:00
Wes
823ff7ce11 Remove exclusions and repos 2024-05-09 17:03:13 +00:00
Josh Patterson
fb8456b4a6 Merge pull request #12983 from Security-Onion-Solutions/fix/strelka
fix strelka errors
2024-05-09 12:04:40 -04:00
m0duspwnens
c864fec70c allow strelka.manager to run on standalone 2024-05-09 11:53:50 -04:00
m0duspwnens
a74fee4cd0 strelka compiled rules 2024-05-09 11:26:02 -04:00
m0duspwnens
3a99624eb8 separate manager states for strelka 2024-05-09 10:03:02 -04:00
Mike Reeves
656bf60fda Merge pull request #12973 from Security-Onion-Solutions/TOoSmOotH-patch-5
Update config.sls
2024-05-08 16:42:19 -04:00
weslambert
cdc47cb1cd Merge pull request #12975 from Security-Onion-Solutions/fix/strelka_watch
Use state
2024-05-08 16:39:49 -04:00
weslambert
01a68568a6 Use state 2024-05-08 16:37:13 -04:00
reyesj2
2ad87bf1fe merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:30:45 -04:00
reyesj2
eca2a4a9c8 Logstash consumer threads should match topic partition count
- Default is set to 3. If there are too many consumer threads, Logstash worker threads may sit idle, and this value may need to be decreased to saturate workers

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:17:09 -04:00
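The consumer-thread guidance in the commit above can be pictured as a pillar fragment. This is an illustrative sketch only — the key names below are hypothetical and do not reflect the actual Security Onion pillar schema:

```yaml
# Illustrative only: keep Logstash Kafka-input consumer threads <= the
# per-topic partition count. With 3 partitions, a 4th consumer thread
# would sit idle; fewer than 3 leaves partitions unread in parallel.
logstash:
  settings:
    kafka_input_consumer_threads: 3   # hypothetical key; match partition count
```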
reyesj2
dff609d829 Add basic read-only metric collection from Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:13:09 -04:00
weslambert
b916465b06 Merge pull request #12974 from Security-Onion-Solutions/fix/strelka_yara
Account for 0 active rules and change watch
2024-05-08 15:59:20 -04:00
weslambert
0567b93534 Remove mode 2024-05-08 15:39:59 -04:00
Mike Reeves
ad9fdf064b Update config.sls 2024-05-08 15:24:29 -04:00
Wes
77e2117051 Account for 0 active rules and change watch 2024-05-08 18:47:52 +00:00
Doug Burks
5b7b6e5fb8 FEATURE: Add more fields to the SOC Dashboards URL for so-import-pcap #12972 2024-05-08 14:00:23 -04:00
Doug Burks
c7845bdf56 Merge pull request #12970 from Security-Onion-Solutions/dougburks-patch-1
FIX: Adjust so-import-pcap so that suricata works when it is pcapengine #12969
2024-05-08 13:28:05 -04:00
Doug Burks
5a5a1e86ac FIX: Adjust so-import-pcap so that suricata works when it is pcapengine #12969 2024-05-08 13:26:36 -04:00
Josh Patterson
796eefc2f0 Merge pull request #12965 from Security-Onion-Solutions/orchit
searchnode installation improvements
2024-05-08 10:24:33 -04:00
m0duspwnens
1862deaf5e add copyright 2024-05-08 10:14:08 -04:00
m0duspwnens
0d2e5e0065 need repo and docker first 2024-05-08 09:50:01 -04:00
m0duspwnens
5dc098f0fc remove test file 2024-05-08 08:54:24 -04:00
Mike Reeves
af681881e6 Merge pull request #12963 from Security-Onion-Solutions/TOoSmOotH-patch-4
Make the url list read only
2024-05-08 08:45:34 -04:00
Josh Brower
47dc911b79 Merge pull request #12964 from Security-Onion-Solutions/2.4/agstrelka
remove old yara airgap code
2024-05-08 08:45:16 -04:00
DefensiveDepth
6d2ecce9b7 remove old yara airgap code 2024-05-08 08:43:37 -04:00
Mike Reeves
326c59bb26 Update soc_idstools.yaml 2024-05-08 08:42:38 -04:00
Mike Reeves
c1257f1c13 Merge pull request #12961 from Security-Onion-Solutions/TOoSmOotH-patch-3
Change so soc writes urls as a list
2024-05-07 17:23:12 -04:00
Mike Reeves
2eee617788 Update soc_idstools.yaml 2024-05-07 17:21:01 -04:00
Jason Ertel
70ef8092a7 Merge pull request #12959 from Security-Onion-Solutions/jertel/testcy
update suri regex for testing
2024-05-07 11:37:31 -07:00
Jason Ertel
8364b2a730 update for testing 2024-05-07 14:30:52 -04:00
coreyogburn
cb7dea1295 Merge pull request #12957 from Security-Onion-Solutions/cogburn/retry-import
Specify Error Retry Wait and Error Limit for All Detection Engines
2024-05-07 11:20:26 -06:00
Corey Ogburn
1da88b70ac Specify Error Retry Wait and Error Limit for All Detection Engines
If a sync errors out, the engine will wait `communityRulesImportErrorSeconds` seconds instead of the usual `communityRulesImportFrequencySeconds` seconds.

If `failAfterConsecutiveErrorCount` errors happen in a row when syncing detections to Elasticsearch, the sync is considered a failure; the engine gives up and tries again later. This assumes Elasticsearch is the source of the errors and backs off in the hope that it will recover on its own.
2024-05-07 10:34:50 -06:00
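The retry behavior described in the commit above involves three settings. The setting names come from the commit message itself; the nesting and example values below are assumptions for illustration, not the shipped defaults:

```yaml
# Illustrative sketch of the detection-engine retry settings (values assumed):
elastalertengine:
  communityRulesImportFrequencySeconds: 86400  # normal interval between rule syncs
  communityRulesImportErrorSeconds: 300        # shorter wait used after a failed sync
  failAfterConsecutiveErrorCount: 10           # consecutive errors before giving up until next cycle
```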
Jason Ertel
b4817fa062 Merge pull request #12956 from Security-Onion-Solutions/jertel/testcy
test regexes for detections
2024-05-07 08:45:38 -07:00
weslambert
bc24227732 Merge pull request #12955 from Security-Onion-Solutions/fix/cef
Add CEF
2024-05-07 11:23:53 -04:00
weslambert
2e70d157e2 Add ref 2024-05-07 11:13:51 -04:00
m0duspwnens
5e2e5b2724 Merge remote-tracking branch 'origin/2.4/dev' into orchit 2024-05-07 10:44:14 -04:00
m0duspwnens
dcc1f656ee predownload logstash and elastic for new searchnode and heavynode 2024-05-07 10:13:51 -04:00
weslambert
23da1f6ee9 Merge pull request #12951 from Security-Onion-Solutions/fix/remove_watch
Remove watch
2024-05-07 09:23:56 -04:00
Wes
bee8c2c1ce Remove watch 2024-05-07 13:21:59 +00:00
Jason Ertel
4ebe070cd8 test regexes for detections 2024-05-06 19:03:12 -04:00
weslambert
a5e89c0854 Merge pull request #12947 from Security-Onion-Solutions/fix/strelka_yara_distributed
Fix YARA rules for distributed deployments
2024-05-06 15:53:08 -04:00
weslambert
a25e43db8f Merge pull request #12948 from Security-Onion-Solutions/fix/strelka_yara_watch
Restart Strelka backend when YARA rules change
2024-05-06 15:52:57 -04:00
Josh Brower
b997e44715 Merge pull request #12939 from Security-Onion-Solutions/2.4/detections-airgap
Initial airgap support for detections
2024-05-06 15:46:29 -04:00
Wes
1e48955376 Restart when rules change 2024-05-06 19:39:03 +00:00
Wes
5056ec526b Add compiled directory 2024-05-06 19:27:38 +00:00
m0duspwnens
2431d7b028 Merge branch '2.4/detections-airgap' of https://github.com/Security-Onion-Solutions/securityonion into 2.4/detections-airgap 2024-05-06 15:27:27 -04:00
Wes
d2fa77ae10 Update compile script 2024-05-06 19:10:41 +00:00
Wes
445fb31634 Add manager SLS 2024-05-06 19:09:37 +00:00
Wes
5aa611302a Handle YARA rules for distributed deployments 2024-05-06 19:08:01 +00:00
m0duspwnens
554a203541 update airgapEnabled in map file 2024-05-06 12:59:45 -04:00
DefensiveDepth
be1758aea7 Fix license and folder 2024-05-06 12:22:44 -04:00
m0duspwnens
38f74d2e9e change quotes 2024-05-06 11:38:30 -04:00
m0duspwnens
5b966b83a9 change rulesRepos for airgap or not 2024-05-06 09:26:52 -04:00
Doug Burks
a67f0d93a0 Merge pull request #12942 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add event.dataset to all Events table layouts #12641
2024-05-06 09:23:09 -04:00
Doug Burks
3f73b14a6a FEATURE: Add event.dataset to all Events table layouts #12641 2024-05-06 09:20:47 -04:00
Doug Burks
e57d1a5fb5 Merge pull request #12941 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for stun logs #12940
2024-05-06 08:57:58 -04:00
Doug Burks
f689cfcd0a FEATURE: Add Events table columns for stun logs #12940 2024-05-06 08:52:43 -04:00
DefensiveDepth
26c6a98b45 Initial airgap support for detections 2024-05-06 08:43:01 -04:00
Doug Burks
45c344e3fa Merge pull request #12938 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for tunnel logs #12937
2024-05-06 08:40:02 -04:00
Doug Burks
7b905f5a94 FEATURE: Add Events table columns for tunnel logs #12937 2024-05-06 08:22:08 -04:00
Josh Brower
6d5ff59657 Merge pull request #12929 from Security-Onion-Solutions/2.4/verifyexclude
Exclude new sigma rules
2024-05-03 15:38:25 -04:00
DefensiveDepth
7f12d4c815 Exclude new sigma rules 2024-05-03 15:22:53 -04:00
Josh Patterson
b50789a77c Merge pull request #12928 from Security-Onion-Solutions/orchit
Orchit
2024-05-03 15:17:34 -04:00
m0duspwnens
bdf1b45a07 redirect and throw in bg 2024-05-03 14:54:44 -04:00
m0duspwnens
3d4fd59a15 orchit 2024-05-03 13:48:51 -04:00
Doug Burks
91c9f26a0c Merge pull request #12926 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add hyperlink to airgap screen in setup #12925
2024-05-03 13:02:30 -04:00
Doug Burks
6cbbb81cad FEATURE: Add hyperlink to airgap screen in setup #12925 2024-05-03 12:59:41 -04:00
m0duspwnens
442a717d75 orchit 2024-05-03 12:08:57 -04:00
m0duspwnens
fa3522a233 fix requirement 2024-05-03 11:10:21 -04:00
m0duspwnens
bbc374b56e add logic in orch 2024-05-03 09:56:52 -04:00
Doug Burks
9ae6fc5666 Merge pull request #12922 from Security-Onion-Solutions/dougburks-patch-1
FIX: Update so-whiptail to make installation screen more consistent #12921
2024-05-03 09:43:59 -04:00
Doug Burks
5fe8c6a95f Update so-whiptail to make installation screen more consistent 2024-05-03 09:38:34 -04:00
m0duspwnens
2929877042 fix var 2024-05-02 16:37:54 -04:00
m0duspwnens
8035740d2b Merge remote-tracking branch 'origin/2.4/dev' into orchit 2024-05-02 16:34:24 -04:00
Josh Patterson
4f8aaba6c6 Merge pull request #12918 from Security-Onion-Solutions/pw
run so-rule-update if ruleset or code changes for idstools
2024-05-02 16:33:24 -04:00
m0duspwnens
e9b1263249 orchestate searchnode deployment 2024-05-02 16:32:43 -04:00
Josh Patterson
3b2d3573d8 Update pillarWatch.py 2024-05-02 16:06:04 -04:00
reyesj2
e960ae66a3 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-05-02 15:12:27 -04:00
reyesj2
093cbc5ebc Reconfigure Kafka defaults
- Set default number of partitions per topic -> 3. Helps ensure that out of the box we can take advantage of multi-node Kafka clusters via load balancing across at least 3 brokers. Also, multiple searchnodes will be able to pull from each topic; in this case 3 searchnodes (consumers) would be able to pull from all topics concurrently.
- Set default replication factor -> 2. This is the minimum value required for redundancy: every partition will have 1 replica. In this case, if we have 2 brokers, each topic will have 3 partitions (load balanced across brokers) and each partition will have a replica on a separate broker for redundancy.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 15:10:13 -04:00
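The partition and replication defaults described in the commit above map to two standard Kafka broker properties. The pillar-style layout below is an assumption for illustration; only the values (3 partitions, replication factor 2) come from the commit message:

```yaml
# Illustrative sketch of the described Kafka broker defaults (layout assumed):
kafka:
  config:
    broker:
      num_partitions: 3               # per-topic partitions; ~3 consumers can read in parallel
      default_replication_factor: 2   # each partition keeps one replica on another broker
```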
reyesj2
f663ef8c16 Setup Kafka to use PKCS12 and remove need for converting to JKS
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 14:53:28 -04:00
reyesj2
de9f6425f9 Automatically switch between Kafka output policy and logstash output policy when globals.pipeline changes
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 12:13:46 -04:00
m0duspwnens
33d1170a91 add default pillar value for pillarWatch 2024-05-02 11:58:39 -04:00
Doug Burks
240ffc0862 Merge pull request #12915 from Security-Onion-Solutions/dougburks-patch-1
FIX: Improve File dashboard #12914
2024-05-02 10:44:58 -04:00
Doug Burks
0822a46e94 FIX: Improve File dashboard #12914 2024-05-02 10:42:34 -04:00
Doug Burks
1be3e6204d FIX: Improve File dashboard #12914 2024-05-02 10:38:56 -04:00
weslambert
956ae7a7ae Merge pull request #12909 from Security-Onion-Solutions/fix/detection_mappings
Update mappings for detection fields
2024-05-01 16:15:40 -04:00
Wes
3285ae9366 Update mappings for detection fields 2024-05-01 20:11:56 +00:00
reyesj2
47ced60243 Create new Kafka output policy using salt
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 14:49:51 -04:00
Josh Patterson
72b2503b49 Merge pull request #12906 from Security-Onion-Solutions/det_easr
Apply autoEnabledSigmaRules based on role if defined and default if not
2024-05-01 13:05:36 -04:00
reyesj2
58ebbfba20 Add kafka state to standalone highstate
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:03:14 -04:00
reyesj2
e164d15ec6 Generate different Kafka certs for different SO nodetypes
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:02:47 -04:00
reyesj2
3efdb4e532 Reconfigure logstash Kafka input
- TODO: Configure what topics are pulled to searchnodes via the SOC UI

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:01:29 -04:00
Mike Reeves
854799fabb Merge pull request #12902 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update config.sls
2024-05-01 12:56:04 -04:00
m0duspwnens
47ba4c0f57 add new annotation for soc autoEnabledSigmaRules 2024-05-01 12:55:29 -04:00
Mike Reeves
10c8e4203c Update config.sls 2024-05-01 12:54:21 -04:00
Jason Ertel
05c69925c9 Merge pull request #12904 from Security-Onion-Solutions/jertel/wf
mark detections settings as read-only via the UI
2024-05-01 09:54:03 -07:00
Jason Ertel
252d9a5320 make rule settings advanced 2024-05-01 12:51:04 -04:00
m0duspwnens
7122709bbf set Sigma rules based on role if defined and default if not 2024-05-01 12:25:34 -04:00
Mike Reeves
f7223f132a Update config.sls 2024-05-01 12:00:39 -04:00
Mike Reeves
8cd75902f2 Update config.sls 2024-05-01 11:47:51 -04:00
Jason Ertel
c71af9127b mark detections settings as read-only via the UI 2024-05-01 11:47:38 -04:00
weslambert
e6f45161c1 Merge pull request #12900 from Security-Onion-Solutions/fix/cold_min_age
Cold min_age to 60d
2024-05-01 11:24:48 -04:00
weslambert
fe2edeb2fb 30d to 60d 2024-05-01 11:01:59 -04:00
weslambert
6294f751ee Cold min_age to 60d 2024-05-01 10:59:41 -04:00
reyesj2
de0af58cf8 Write out Kafka pillar path
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:45:46 -04:00
reyesj2
84abfa6881 Remove check for existing value since Kafka pillar is made empty on upgrade
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:45:05 -04:00
reyesj2
6b60e85a33 Make kafka configuration changes prior to 2.4.70 upgrade
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:15:26 -04:00
reyesj2
63f3e23e2b soup typo
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:54:19 -04:00
Jason Ertel
ad1cda1746 Merge pull request #12893 from Security-Onion-Solutions/jertel/wf
update annotations for duplication
2024-05-01 06:32:13 -07:00
Jason Ertel
66563a4da0 zeek networks will only ever have one HOME_NETWORKS setting 2024-05-01 09:31:11 -04:00
Jason Ertel
d0e140cf7b zeek networks will only ever have one HOME_NETWORKS setting 2024-05-01 09:30:52 -04:00
Jason Ertel
87c6d0a820 zeek networks will only ever have one HOME_NETWORKS setting 2024-05-01 09:29:36 -04:00
reyesj2
eb1249618b Update soup for Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:27:01 -04:00
reyesj2
cef9bb1487 Dynamically create Kafka topics based on event.module from elastic agent logs, e.g. zeek-topic. Depends on Kafka brokers having auto.create.topics.enable set to true
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:16:13 -04:00
Doug Burks
9a25d3c30f Merge pull request #12897 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Lower EVAL memory requirement to 8GB RAM #12896
2024-05-01 08:01:20 -04:00
Doug Burks
9a4a85e3ae FEATURE: Lower EVAL memory requirement to 8GB RAM #12896 2024-05-01 07:54:38 -04:00
reyesj2
bb49944b96 Setup elastic fleet rollover from logstash -> kafka output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 16:47:40 -04:00
Jason Ertel
72db369fbb Merge branch '2.4/dev' into jertel/wf 2024-04-30 15:16:41 -04:00
Jason Ertel
84db82852c annotation updates for custom settings 2024-04-30 15:14:56 -04:00
reyesj2
fcc4050f86 Add id to grid-kafka fleet output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 12:59:53 -04:00
reyesj2
9c83a52c6d Add Kafka output to elastic-fleet setup. Includes separating topics by event.module, with fallback to default-logs if no event.module is specified or it doesn't match processors
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 12:01:31 -04:00
coreyogburn
ea4750d8ad Merge pull request #12882 from Security-Onion-Solutions/cogburn/community-repos
Mark Repos as Community
2024-04-30 09:12:25 -06:00
Doug Burks
e9944796c8 Merge pull request #12886 from Security-Onion-Solutions/dougburks-patch-1
FIX: Elasticsearch min_age regex #12885
2024-04-30 10:26:04 -04:00
Doug Burks
4d6124f982 FIX: Elasticsearch min_age regex #12885 2024-04-30 10:18:34 -04:00
Jorge Reyes
dd168e1cca Merge pull request #12881 from Security-Onion-Solutions/2.4/finalpipefix
Update expected timestamp format in final pipeline for system events
2024-04-30 09:39:18 -04:00
Corey Ogburn
ddf662bdb4 Mark Repos as Community
Indicate that detection rules pulled from configured repos should be marked as Community rules.
2024-04-29 16:22:30 -06:00
reyesj2
fadb6e2aa9 Re-add original timestamp format + ignore failures with this processor
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 16:57:48 -04:00
reyesj2
192d91565d Update final pipeline timestamp format for event.module system events
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 16:34:29 -04:00
Josh Patterson
82ef4c96c3 Merge pull request #12880 from Security-Onion-Solutions/issue/12878
set Suricata as default pcap engine for eval
2024-04-29 15:54:25 -04:00
reyesj2
a6e8b25969 Add Kafka connectivity between manager -> receiver nodes.
Add connectivity to Kafka between other node types that may need to publish to Kafka.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 15:48:57 -04:00
reyesj2
529bc01d69 Add missing configuration for nodes running Kafka broker role only
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 14:53:52 -04:00
m0duspwnens
a663bf63c6 set Suricata as default pcap engine for eval 2024-04-29 14:22:04 -04:00
reyesj2
11055b1d32 Rename kafkapass -> kafka_pass
Run so-kafka-clusterid within nodes.sls state so switchover is consistent

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 14:09:09 -04:00
reyesj2
fd9a91420d Use SOC UI to configure list of KRaft (Kafka) controllers for cluster
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 11:37:24 -04:00
reyesj2
529c8d7cf2 Remove salt reactor for Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 11:35:46 -04:00
Josh Brower
13ccb58f84 Merge pull request #12876 from Security-Onion-Solutions/2.4/sigmafix
Sigma pivot fix and cleanup
2024-04-29 09:12:09 -04:00
reyesj2
086ebe1a7c Split kafka defaults between broker / controller
Setup config.map.jinja to update broker / controller / combined node types

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 09:08:14 -04:00
reyesj2
29c964cca1 Set kafka.nodes state to run first to populate kafka.nodes pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 09:04:52 -04:00
DefensiveDepth
f2c3c928fc Sigma pivot fix and cleanup 2024-04-29 08:49:05 -04:00
Jason Ertel
3cbc29e767 Merge pull request #12875 from Security-Onion-Solutions/jertel/wf
restrict workflows to so
2024-04-29 05:16:07 -07:00
Jason Ertel
89cb8b79fd restrict workflows to so 2024-04-29 08:07:19 -04:00
Mike Reeves
b5c5c7857b Merge pull request #12846 from petiepooo/fix/check-srvc-status
check status before stopping service
2024-04-25 15:10:42 -04:00
Josh Patterson
ed05d51969 Merge pull request #12865 from Security-Onion-Solutions/issue/12637
only apply ulimits to suricata container if user enables mmap-locked
2024-04-25 10:08:05 -04:00
m0duspwnens
2c7eb3c755 only apply ulimits to suricata container if user enables mmap-locked 2024-04-25 10:05:59 -04:00
weslambert
cc17de2184 Merge pull request #12864 from Security-Onion-Solutions/fix/exclude_suricata
Exclude suricata from disk space-based index deletion
2024-04-25 09:23:38 -04:00
weslambert
b424426298 Exclude suricata 2024-04-25 09:14:18 -04:00
Josh Patterson
03f9160fcc Merge pull request #12860 from Security-Onion-Solutions/issue/12856
allow for enabled/disable of so-elasticsearch-indices-delete cronjob
2024-04-25 09:07:44 -04:00
m0duspwnens
d50de804a8 update annotation 2024-04-25 09:04:34 -04:00
weslambert
983ef362e9 Merge pull request #12858 from Security-Onion-Solutions/fix/index_sorting
Change index sorting to account for older so-prefixed indices
2024-04-25 08:54:22 -04:00
Josh Brower
d88c1a5e0a Merge pull request #12861 from Security-Onion-Solutions/2.4/detectionlogs
Add runtime status logs
2024-04-24 20:07:32 -04:00
weslambert
44afa55274 Fix comments about deletion 2024-04-24 17:41:37 -04:00
weslambert
ab832e4bb2 Include logstash-prefixed indices 2024-04-24 17:17:53 -04:00
DefensiveDepth
3c3ed8b5c5 Add runtime status logs 2024-04-24 16:33:47 -04:00
m0duspwnens
c9d9979f22 allow for enabled/disable of so-elasticsearch-indices-delete cronjob 2024-04-24 16:18:45 -04:00
Josh Patterson
383420b554 Merge pull request #12859 from Security-Onion-Solutions/issue/12637
Issue/12637
2024-04-24 15:44:37 -04:00
m0duspwnens
73b5bb1a75 add memlock to so-suricata container 2024-04-24 15:35:17 -04:00
weslambert
59a02635ed Change index sorting 2024-04-24 15:18:49 -04:00
m0duspwnens
13a6520a8c mmap-locked default no 2024-04-24 13:50:12 -04:00
m0duspwnens
4b7f826a2a quote is so true becomes yes 2024-04-24 13:29:55 -04:00
m0duspwnens
0bd0c7b1ec allow for mmap-locked to be configured 2024-04-24 13:26:25 -04:00
weslambert
428fe787c4 Merge pull request #12852 from Security-Onion-Solutions/fix/elastic_max_age
Remove hot max_age
2024-04-24 10:15:06 -04:00
weslambert
1b3a0a3de8 Remove hot max_age 2024-04-24 10:11:02 -04:00
weslambert
96ec285241 Merge pull request #12848 from Security-Onion-Solutions/fix/elastic_annotation
Fix description, regex, and type for cold, warm, and hot
2024-04-24 09:22:05 -04:00
weslambert
75b5e16696 Update description, type, and regex 2024-04-24 09:14:39 -04:00
weslambert
8a0a435700 Fix warm description 2024-04-24 08:35:19 -04:00
Pete
e53e7768a0 check status before stopping service
resolves #12811 so-verify detects rare false error

If salt is uninstalled during call to so-setup where it detects a previous install, the "Failed" keyword from "systemctl stop $service" causes so-verify to falsely detect an installation error.  This might happen if the user removes the salt packages between calls to so-setup, or if upgrading from Ubuntu 20.04 to 22.04 then installing 2.4.xx on top of a 2.3.xx installation.

The fix is to wrap the call to stop the service in a check if the service is running.

This ignores the setting of pid var, as the next use of pid is within a while loop that will not execute for the same reason the systemctl stop call was not launched in the background.
2024-04-23 21:24:39 +00:00
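The fix this commit describes can be sketched as follows (a minimal illustration under stated assumptions, not the actual so-setup code; `stop_if_running` is a hypothetical helper name):

```shell
# Hypothetical sketch: only stop a unit when systemd reports it active,
# so a "Failed to stop" message never lands in the logs that so-verify scans.
stop_if_running() {
  local service=$1
  if systemctl is-active --quiet "$service" 2>/dev/null; then
    systemctl stop "$service"
  fi
}
# Usage: stop_if_running salt-minion
```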
reyesj2
36573d6005 Update kafka cert permissions
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-23 16:45:36 -04:00
reyesj2
aa0c589361 Update kafka managed node pillar template to include its process.role
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-23 13:51:12 -04:00
weslambert
bef408b944 Merge pull request #12844 from Security-Onion-Solutions/fix/elastic_annotation
Fix warm description
2024-04-23 10:47:04 -04:00
weslambert
691b02a15e Fix warm description 2024-04-23 10:40:09 -04:00
Josh Brower
fc1c41e5a4 Merge pull request #12841 from Security-Onion-Solutions/2.4/logfix
Temp exclude yara runtime status log
2024-04-23 07:36:02 -04:00
DefensiveDepth
58ddd55123 Exclude yara runtime log 2024-04-23 07:28:07 -04:00
reyesj2
685b80e519 Merge remote-tracking branch 'remotes/origin/kaffytaffy' into reyesj2/kafka 2024-04-22 16:45:59 -04:00
reyesj2
5a401af1fd Update kafka process_x_roles annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-22 16:44:35 -04:00
reyesj2
25d63f7516 Setup kafka reactor for managing kafka controllers globally
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-22 16:42:59 -04:00
Jorge Reyes
d402943403 Merge pull request #12773 from Security-Onion-Solutions/reyesj2/kismet
Kismet integration for WiFi devices
2024-04-22 15:59:22 -04:00
Josh Brower
64c43b1a55 Merge pull request #12805 from Security-Onion-Solutions/2.4/detectiondefaults
Strelka fixes and more
2024-04-19 16:53:07 -04:00
DefensiveDepth
a237ef5d96 Update default queries 2024-04-19 16:33:35 -04:00
reyesj2
4ac04a1a46 add kafkapass soc annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 16:46:36 -04:00
reyesj2
746128e37b update so-kafka-clusterid
This is a temporary script used to setup kafka secret and clusterid needed for kafka to start. This scripts functionality will be replaced by soup/setup scripts

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 15:13:29 -04:00
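For context on what such a script generates: a KRaft cluster ID (what `kafka-storage.sh random-uuid` prints) is 16 random bytes rendered as a 22-character base64url string without padding. A hypothetical plain-shell equivalent, for illustration only:

```shell
# Hypothetical sketch: 16 random bytes, base64url-encoded with the trailing
# "==" padding stripped, yields a 22-character KRaft-style cluster ID.
CLUSTER_ID=$(head -c16 /dev/urandom | base64 | tr '+/' '-_' | tr -d '=')
echo "$CLUSTER_ID"
```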
reyesj2
fe81ffaf78 Variables no longer used. Replaced by map file
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 15:11:22 -04:00
Doug Burks
c48da45ac3 Merge pull request #12820 from Security-Onion-Solutions/dougburks-patch-1
FIX: Elastic retention setting not being honored when manager hostname is a subset of search node hostname #12819
2024-04-18 11:59:57 -04:00
reyesj2
5cc358de4e Update map files to handle empty kafka:nodes pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 11:58:25 -04:00
Doug Burks
406dda6051 Update so-elasticsearch-cluster-space-used 2024-04-18 11:48:15 -04:00
Doug Burks
229a989914 Update so-elasticsearch-cluster-space-total 2024-04-18 11:47:01 -04:00
DefensiveDepth
6c6647629c Refactor yara for compilation 2024-04-18 11:32:17 -04:00
Doug Burks
7f9bc1fc0f Merge pull request #12817 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add queue=True to so-checkin so that it will wait for any ru…
2024-04-18 09:30:55 -04:00
Doug Burks
8d9aae1983 FEATURE: Add queue=True to so-checkin so that it will wait for any running states #12815 2024-04-18 09:28:30 -04:00
reyesj2
665b7197a6 Update Kafka nodeid
Update so-minion to include running kafka.nodes state to ensure nodeid is generated for new brokers

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-17 17:08:41 -04:00
Mike Reeves
3854620bcd Merge pull request #12810 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update limited-analyst.json
2024-04-17 13:21:04 -04:00
Mike Reeves
67a57e9df7 Update limited-analyst.json 2024-04-17 13:14:45 -04:00
DefensiveDepth
ff28476191 Fix compile_yara path 2024-04-16 13:10:17 -04:00
DefensiveDepth
8cc4d2668e Move compile_yara 2024-04-16 12:52:14 -04:00
DefensiveDepth
dbfb178556 Add test 2024-04-16 12:22:53 -04:00
reyesj2
eedea2ca88 Merge remote-tracking branch 'remotes/origin/kaffytaffy' into reyesj2/kafka 2024-04-12 16:24:33 -04:00
reyesj2
de6ea29e3b update default process.role to broker only
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 16:18:53 -04:00
Josh Brower
5e8b16569f Merge pull request #12793 from Security-Onion-Solutions/2.4/detectiondefaults
Add docs for ruleset change
2024-04-12 13:54:06 -04:00
DefensiveDepth
f5e42e73af Add docs for ruleset change 2024-04-12 13:30:20 -04:00
Josh Brower
5b81a73e58 Merge pull request #12791 from Security-Onion-Solutions/2.4/detectiondefaults
Fix fingerprint paths
2024-04-12 09:01:38 -04:00
DefensiveDepth
49ccd86c39 Fix fingerprint paths 2024-04-12 08:35:44 -04:00
reyesj2
55cf90f477 merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 14:44:59 -04:00
reyesj2
c269fb90ac Added a Kismet Wifi devices dashboard for an overview of kismet data
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 14:41:54 -04:00
Mike Reeves
1250a728ac Merge pull request #12769 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update analyst.json
2024-04-11 14:30:17 -04:00
reyesj2
68e016090b Fix network.wireless.ssid not parsing
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 13:21:54 -04:00
reyesj2
fd689a4607 Fix typo in ingest pipeline
Test to fix duplicate events in SOC, by removing conflicting field event.created

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 11:18:04 -04:00
Josh Brower
ae09869417 Merge pull request #12780 from Security-Onion-Solutions/2.4/detectiondefaults
Enable Detections Adv by default
2024-04-11 09:32:34 -04:00
DefensiveDepth
1c5f02ade2 Update annotations 2024-04-11 09:21:08 -04:00
DefensiveDepth
ed97aa4e78 Enable Detections Adv by default 2024-04-11 08:21:20 -04:00
reyesj2
7124f04138 Update ingest pipelines to match updated mappings
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-10 16:13:06 -04:00
reyesj2
2ab9cbba61 Update wording for Kismet poll interval annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-10 16:12:22 -04:00
reyesj2
4097e1d81a Create mappings for Kismet integration
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-10 16:10:27 -04:00
Mike Reeves
2206553e03 Update analyst.json 2024-04-10 09:49:21 -04:00
reyesj2
000d15a53c Kismet integration: TODO Elasticsearch mappings
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-03-29 13:56:01 -04:00
Mike Reeves
b658c82cdc Merge pull request #12616 from Security-Onion-Solutions/2.4/dev
2.4.60
2024-03-20 10:55:42 -04:00
161 changed files with 4457 additions and 1437 deletions

View File

@@ -536,7 +536,7 @@ secretGroup = 4
 [allowlist]
 description = "global allow lists"
-regexes = ['''219-09-9999''', '''078-05-1120''', '''(9[0-9]{2}|666)-\d{2}-\d{4}''', '''RPM-GPG-KEY.*''', '''.*:.*StrelkaHexDump.*''', '''.*:.*PLACEHOLDER.*''']
+regexes = ['''219-09-9999''', '''078-05-1120''', '''(9[0-9]{2}|666)-\d{2}-\d{4}''', '''RPM-GPG-KEY.*''', '''.*:.*StrelkaHexDump.*''', '''.*:.*PLACEHOLDER.*''', '''ssl_.*password''']
 paths = [
     '''gitleaks.toml''',
     '''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)$''',

View File

@@ -15,6 +15,7 @@ concurrency:
 jobs:
   close-threads:
+    if: github.repository_owner == 'security-onion-solutions'
     runs-on: ubuntu-latest
     permissions:
       issues: write

View File

@@ -15,6 +15,7 @@ concurrency:
 jobs:
   lock-threads:
+    if: github.repository_owner == 'security-onion-solutions'
     runs-on: ubuntu-latest
     steps:
       - uses: jertel/lock-threads@main

View File

@@ -1,17 +1,17 @@
-### 2.4.60-20240320 ISO image released on 2024/03/20
+### 2.4.80-20240624 ISO image released on 2024/06/25
 ### Download and Verify
-2.4.60-20240320 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.4.60-20240320.iso
-MD5: 178DD42D06B2F32F3870E0C27219821E
-SHA1: 73EDCD50817A7F6003FE405CF1808A30D034F89D
-SHA256: DD334B8D7088A7B78160C253B680D645E25984BA5CCAB5CC5C327CA72137FC06
+2.4.80-20240624 ISO image:
+https://download.securityonion.net/file/securityonion/securityonion-2.4.80-20240624.iso
+MD5: 139F9762E926F9CB3C4A9528A3752C31
+SHA1: BC6CA2C5F4ABC1A04E83A5CF8FFA6A53B1583CC9
+SHA256: 70E90845C84FFA30AD6CF21504634F57C273E7996CA72F7250428DDBAAC5B1BD
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.60-20240320.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.80-20240624.iso.sig
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,27 +25,29 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.60-20240320.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.80-20240624.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.4.60-20240320.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.4.80-20240624.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.4.60-20240320.iso.sig securityonion-2.4.60-20240320.iso
+gpg --verify securityonion-2.4.80-20240624.iso.sig securityonion-2.4.80-20240624.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Tue 19 Mar 2024 03:17:58 PM EDT using RSA key ID FE507013
+gpg: Signature made Mon 24 Jun 2024 02:42:03 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.
 Primary key fingerprint: C804 A93D 36BE 0C73 3EA1 9644 7C10 60B7 FE50 7013
 ```
+If it fails to verify, try downloading again. If it still fails to verify, try downloading from another computer or another network.
 Once you've verified the ISO image, you're ready to proceed to our Installation guide:
 https://docs.securityonion.net/en/2.4/installation.html

View File

@@ -8,19 +8,22 @@ Alerts
 ![Alerts](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/50_alerts.png)
 Dashboards
-![Dashboards](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/51_dashboards.png)
+![Dashboards](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/53_dashboards.png)
 Hunt
-![Hunt](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/52_hunt.png)
+![Hunt](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/56_hunt.png)
+Detections
+![Detections](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/57_detections.png)
 PCAP
-![PCAP](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/53_pcap.png)
+![PCAP](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/62_pcap.png)
 Grid
-![Grid](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/57_grid.png)
+![Grid](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/75_grid.png)
 Config
-![Config](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/61_config.png)
+![Config](https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion-docs/2.4/images/87_config.png)
 ### Release Notes

View File

@@ -1 +1 @@
-2.4.70
+2.4.80

View File

@@ -1,30 +1,2 @@
-{% set current_kafkanodes = salt.saltutil.runner('mine.get', tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-receiver', fun='network.ip_addrs', tgt_type='compound') %}
-{% set pillar_kafkanodes = salt['pillar.get']('kafka:nodes', default={}, merge=True) %}
-{% set existing_ids = [] %}
-{% for node in pillar_kafkanodes.values() %}
-{% if node.get('id') %}
-{% do existing_ids.append(node['nodeid']) %}
-{% endif %}
-{% endfor %}
-{% set all_possible_ids = range(1, 256)|list %}
-{% set available_ids = [] %}
-{% for id in all_possible_ids %}
-{% if id not in existing_ids %}
-{% do available_ids.append(id) %}
-{% endif %}
-{% endfor %}
-{% set final_nodes = pillar_kafkanodes.copy() %}
-{% for minionid, ip in current_kafkanodes.items() %}
-{% set hostname = minionid.split('_')[0] %}
-{% if hostname not in final_nodes %}
-{% set new_id = available_ids.pop(0) %}
-{% do final_nodes.update({hostname: {'nodeid': new_id, 'ip': ip[0]}}) %}
-{% endif %}
-{% endfor %}
 kafka:
-  nodes: {{ final_nodes|tojson }}
+  nodes:
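The removed template's core idea is stable id assignment: existing nodes keep their ids, and each new host receives the lowest unused id in 1..255. A minimal shell model of that logic (hypothetical, not code from the repo; `used_ids` holds example values):

```shell
# Hypothetical model of the removed Jinja logic: hand each new host the
# lowest id in 1..255 not already used by an existing node.
used_ids="1 7"            # ids already present in the pillar (example values)
next_free_id() {
  for i in $(seq 1 255); do
    case " $used_ids " in *" $i "*) ;; *) echo "$i"; return;; esac
  done
}
next_free_id
```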

View File

@@ -226,6 +226,7 @@ base:
     - minions.adv_{{ grains.id }}
     - stig.soc_stig
     - soc.license
+    - kafka.nodes
   '*_receiver':
     - logstash.nodes
@@ -241,6 +242,7 @@ base:
     - kafka.nodes
    - kafka.soc_kafka
     - kafka.adv_kafka
+    - soc.license
   '*_import':
     - secrets

pyci.sh
View File

@@ -15,12 +15,16 @@ TARGET_DIR=${1:-.}
 PATH=$PATH:/usr/local/bin
-if ! which pytest &> /dev/null || ! which flake8 &> /dev/null ; then
-  echo "Missing dependencies. Consider running the following command:"
-  echo "  python -m pip install flake8 pytest pytest-cov"
+if [ ! -d .venv ]; then
+  python -m venv .venv
+fi
+source .venv/bin/activate
+if ! pip install flake8 pytest pytest-cov pyyaml; then
+  echo "Unable to install dependencies."
   exit 1
 fi
-pip install pytest pytest-cov
 flake8 "$TARGET_DIR" "--config=${HOME_DIR}/pytest.ini"
 python3 -m pytest "--cov-config=${HOME_DIR}/pytest.ini" "--cov=$TARGET_DIR" --doctest-modules --cov-report=term --cov-fail-under=100 "$TARGET_DIR"

View File

@@ -65,6 +65,7 @@
   'registry',
   'manager',
   'nginx',
+  'strelka.manager',
   'soc',
   'kratos',
   'influxdb',
@@ -91,6 +92,7 @@
   'nginx',
   'telegraf',
   'influxdb',
+  'strelka.manager',
   'soc',
   'kratos',
   'elasticfleet',
@@ -112,6 +114,7 @@
   'nginx',
   'telegraf',
   'influxdb',
+  'strelka.manager',
   'soc',
   'kratos',
   'elastic-fleet-package-registry',
@@ -192,7 +195,8 @@
   'schedule',
   'docker_clean',
   'kafka',
-  'elasticsearch.ca'
+  'elasticsearch.ca',
+  'stig'
 ],
 'so-desktop': [
   'ssl',

View File

@@ -1,6 +1,3 @@
-mine_functions:
-  x509.get_pem_entries: [/etc/pki/ca.crt]
-
 x509_signing_policies:
   filebeat:
     - minions: '*'

View File

@@ -1,3 +1,8 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
 {% if '2.4' in salt['cp.get_file_str']('/etc/soversion') %}
 {% import_yaml '/opt/so/saltstack/local/pillar/global/soc_global.sls' as SOC_GLOBAL %}
@@ -15,6 +20,8 @@ remove_common_so-firewall:
   file.absent:
     - name: /opt/so/saltstack/default/salt/common/tools/sbin/so-firewall
+# This section is used to put the scripts in place in the Salt file system
+# in case a state run tries to overwrite what we do in the next section.
 copy_so-common_common_tools_sbin:
   file.copy:
     - name: /opt/so/saltstack/default/salt/common/tools/sbin/so-common
@@ -43,6 +50,21 @@ copy_so-firewall_manager_tools_sbin:
     - force: True
     - preserve: True
+copy_so-yaml_manager_tools_sbin:
+  file.copy:
+    - name: /opt/so/saltstack/default/salt/manager/tools/sbin/so-yaml.py
+    - source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-yaml.py
+    - force: True
+    - preserve: True
+copy_so-repo-sync_manager_tools_sbin:
+  file.copy:
+    - name: /opt/so/saltstack/default/salt/manager/tools/sbin/so-repo-sync
+    - source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-repo-sync
+    - preserve: True
+# This section is used to put the new script in place so that it can be called during soup.
+# It is faster than calling the states that normally manage them to put them in place.
 copy_so-common_sbin:
   file.copy:
     - name: /usr/sbin/so-common
@@ -78,6 +100,13 @@ copy_so-yaml_sbin:
     - force: True
     - preserve: True
+copy_so-repo-sync_sbin:
+  file.copy:
+    - name: /usr/sbin/so-repo-sync
+    - source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-repo-sync
+    - force: True
+    - preserve: True
 {% else %}
 fix_23_soup_sbin:
   cmd.run:

View File

@@ -5,8 +5,13 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.
 . /usr/sbin/so-common
-salt-call state.highstate -l info
+cat << EOF
+so-checkin will run a full salt highstate to apply all salt states. If a highstate is already running, this request will be queued and so it may pause for a few minutes before you see any more output. For more information about so-checkin and salt, please see:
+https://docs.securityonion.net/en/2.4/salt.html
+EOF
+salt-call state.highstate -l info queue=True

View File

@@ -31,6 +31,11 @@ if ! echo "$PATH" | grep -q "/usr/sbin"; then
   export PATH="$PATH:/usr/sbin"
 fi
+# See if a proxy is set. If so use it.
+if [ -f /etc/profile.d/so-proxy.sh ]; then
+  . /etc/profile.d/so-proxy.sh
+fi
+
 # Define a banner to separate sections
 banner="========================================================================="
@@ -179,6 +184,21 @@ copy_new_files() {
   cd /tmp
 }
+
+create_local_directories() {
+  echo "Creating local pillar and salt directories if needed"
+  PILLARSALTDIR=$1
+  local_salt_dir="/opt/so/saltstack/local"
+  for i in "pillar" "salt"; do
+    for d in $(find $PILLARSALTDIR/$i -type d); do
+      suffixdir=${d//$PILLARSALTDIR/}
+      if [ ! -d "$local_salt_dir/$suffixdir" ]; then
+        mkdir -pv $local_salt_dir$suffixdir
+      fi
+    done
+    chown -R socore:socore $local_salt_dir/$i
+  done
+}
+
 disable_fastestmirror() {
   sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/fastestmirror.conf
 }

View File

@@ -201,6 +201,10 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Unknown column" # Elastalert errors from running EQL queries
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|parsing_exception" # Elastalert EQL parsing issue. Temp.
   EXCLUDED_ERRORS="$EXCLUDED_ERRORS|context deadline exceeded"
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Error running query:" # Specific issues with detection rules
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|detect-parse" # Suricata encountering a malformed rule
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|integrity check failed" # Detections: Exclude false positive due to automated testing
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|syncErrors" # Detections: Not an actual error
 fi
 RESULT=0
@@ -236,6 +240,7 @@ exclude_log "playbook.log" # Playbook is removed as of 2.4.70, logs may still be
 exclude_log "mysqld.log" # MySQL is removed as of 2.4.70, logs may still be on disk
 exclude_log "soctopus.log" # Soctopus is removed as of 2.4.70, logs may still be on disk
 exclude_log "agentstatus.log" # ignore this log since it tracks agents in error state
+exclude_log "detections_runtime-status_yara.log" # temporarily ignore this log until Detections is more stable
 for log_file in $(cat /tmp/log_check_files); do
   status "Checking log file $log_file"
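The exclusion mechanism shown above boils down to one `grep -Ev` alternation; a standalone illustration with hypothetical patterns, not the full so-verify list:

```shell
# Accumulate known-benign patterns into a single alternation, then filter
# candidate error lines through it before anything is counted as a failure.
EXCLUDED="context deadline exceeded"
EXCLUDED="$EXCLUDED|detect-parse"
EXCLUDED="$EXCLUDED|syncErrors"
printf '%s\n' "ERROR real failure" "detect-parse: malformed rule" | grep -Ev "$EXCLUDED"
```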

View File

@@ -0,0 +1,98 @@
+#!/bin/bash
+#
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0."
+
+set -e
+
+# This script is intended to be used in the case the ISO install did not properly setup TPM decrypt for LUKS partitions at boot.
+
+if [ -z $NOROOT ]; then
+  # Check for prerequisites
+  if [ "$(id -u)" -ne 0 ]; then
+    echo "This script must be run using sudo!"
+    exit 1
+  fi
+fi
+
+ENROLL_TPM=N
+
+while [[ $# -gt 0 ]]; do
+  case $1 in
+    --enroll-tpm)
+      ENROLL_TPM=Y
+      ;;
+    *)
+      echo "Usage: $0 [options]"
+      echo ""
+      echo "where options are:"
+      echo "  --enroll-tpm for when TPM enrollment was not selected during ISO install."
+      echo ""
+      exit 1
+      ;;
+  esac
+  shift
+done
+
+check_for_tpm() {
+  echo -n "Checking for TPM: "
+  if [ -d /sys/class/tpm/tpm0 ]; then
+    echo -e "tpm0 found."
+    TPM="yes"
+    # Check if TPM is using sha1 or sha256
+    if [ -d /sys/class/tpm/tpm0/pcr-sha1 ]; then
+      echo -e "TPM is using sha1.\n"
+      TPM_PCR="sha1"
+    elif [ -d /sys/class/tpm/tpm0/pcr-sha256 ]; then
+      echo -e "TPM is using sha256.\n"
+      TPM_PCR="sha256"
+    fi
+  else
+    echo -e "No TPM found.\n"
+    exit 1
+  fi
+}
+
+check_for_luks_partitions() {
+  echo "Checking for LUKS partitions"
+  for part in $(lsblk -o NAME,FSTYPE -ln | grep crypto_LUKS | awk '{print $1}'); do
+    echo "Found LUKS partition: $part"
+    LUKS_PARTITIONS+=("$part")
+  done
+  if [ ${#LUKS_PARTITIONS[@]} -eq 0 ]; then
+    echo -e "No LUKS partitions found.\n"
+    exit 1
+  fi
+  echo ""
+}
+
+enroll_tpm_in_luks() {
+  read -s -p "Enter the LUKS passphrase used during ISO install: " LUKS_PASSPHRASE
+  echo ""
+  for part in "${LUKS_PARTITIONS[@]}"; do
+    echo "Enrolling TPM for LUKS device: /dev/$part"
+    if [ "$TPM_PCR" == "sha1" ]; then
+      clevis luks bind -d /dev/$part tpm2 '{"pcr_bank":"sha1","pcr_ids":"7"}' <<< $LUKS_PASSPHRASE
+    elif [ "$TPM_PCR" == "sha256" ]; then
+      clevis luks bind -d /dev/$part tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}' <<< $LUKS_PASSPHRASE
+    fi
+  done
+}
+
+regenerate_tpm_enrollment_token() {
+  for part in "${LUKS_PARTITIONS[@]}"; do
+    clevis luks regen -d /dev/$part -s 1 -q
+  done
+}
+
+check_for_tpm
+check_for_luks_partitions
+
+if [[ $ENROLL_TPM == "Y" ]]; then
+  enroll_tpm_in_luks
+else
+  regenerate_tpm_enrollment_token
+fi
+
+echo "Running dracut"
+dracut -fv
+
+echo -e "\nTPM configuration complete. Reboot the system to verify the TPM is correctly decrypting the LUKS partition(s) at boot.\n"

View File

@@ -89,6 +89,7 @@ function suricata() {
     -v ${LOG_PATH}:/var/log/suricata/:rw \
     -v ${NSM_PATH}/:/nsm/:rw \
     -v "$PCAP:/input.pcap:ro" \
+    -v /dev/null:/nsm/suripcap:rw \
     -v /opt/so/conf/suricata/bpf:/etc/suricata/bpf:ro \
     {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-suricata:{{ VERSION }} \
     --runmode single -k none -r /input.pcap > $LOG_PATH/console.log 2>&1
@@ -247,7 +248,7 @@ fi
 START_OLDEST_SLASH=$(echo $START_OLDEST | sed -e 's/-/%2F/g')
 END_NEWEST_SLASH=$(echo $END_NEWEST | sed -e 's/-/%2F/g')
 if [[ $VALID_PCAPS_COUNT -gt 0 ]] || [[ $SKIPPED_PCAPS_COUNT -gt 0 ]]; then
-  URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20-sankey%20event.dataset%20event.category%2a%20%7C%20groupby%20-pie%20event.category%20%7C%20groupby%20-bar%20event.module%20%7C%20groupby%20event.dataset%20%7C%20groupby%20event.module%20%7C%20groupby%20event.category%20%7C%20groupby%20observer.name%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
+  URL="https://{{ URLBASE }}/#/dashboards?q=$HASH_FILTERS%20%7C%20groupby%20event.module*%20%7C%20groupby%20-sankey%20event.module*%20event.dataset%20%7C%20groupby%20event.dataset%20%7C%20groupby%20source.ip%20%7C%20groupby%20destination.ip%20%7C%20groupby%20destination.port%20%7C%20groupby%20network.protocol%20%7C%20groupby%20rule.name%20rule.category%20event.severity_label%20%7C%20groupby%20dns.query.name%20%7C%20groupby%20file.mime_type%20%7C%20groupby%20http.virtual_host%20http.uri%20%7C%20groupby%20notice.note%20notice.message%20notice.sub_message%20%7C%20groupby%20ssl.server_name%20%7C%20groupby%20source_geo.organization_name%20source.geo.country_name%20%7C%20groupby%20destination_geo.organization_name%20destination.geo.country_name&t=${START_OLDEST_SLASH}%2000%3A00%3A00%20AM%20-%20${END_NEWEST_SLASH}%2000%3A00%3A00%20AM&z=UTC"
   status "Import complete!"
   status

View File

@@ -10,7 +10,7 @@
 . /usr/sbin/so-common
 . /usr/sbin/so-image-common
-REPLAYIFACE=${REPLAYIFACE:-$(lookup_pillar interface sensor)}
+REPLAYIFACE=${REPLAYIFACE:-"{{salt['pillar.get']('sensor:interface', '')}}"}
 REPLAYSPEED=${REPLAYSPEED:-10}
 mkdir -p /opt/so/samples

View File

@@ -180,6 +180,8 @@ docker:
       custom_bind_mounts: []
       extra_hosts: []
       extra_env: []
+      ulimits:
+        - memlock=524288000
     'so-zeek':
       final_octet: 99
       custom_bind_mounts: []
@@ -190,6 +192,7 @@ docker:
       port_bindings:
         - 0.0.0.0:9092:9092
         - 0.0.0.0:9093:9093
+        - 0.0.0.0:8778:8778
       custom_bind_mounts: []
       extra_hosts: []
       extra_env: []
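The memlock ulimit added above is expressed in bytes; 524288000 is exactly 500 MiB, which is easy to sanity-check:

```shell
# 500 MiB in bytes: 500 * 1024 * 1024 = 524288000.
echo $((500 * 1024 * 1024))
```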

View File

@@ -20,30 +20,30 @@ dockergroup:
 dockerheldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.6.21-1
-      - docker-ce: 5:24.0.3-1~debian.12~bookworm
-      - docker-ce-cli: 5:24.0.3-1~debian.12~bookworm
-      - docker-ce-rootless-extras: 5:24.0.3-1~debian.12~bookworm
+      - containerd.io: 1.6.33-1
+      - docker-ce: 5:26.1.4-1~debian.12~bookworm
+      - docker-ce-cli: 5:26.1.4-1~debian.12~bookworm
+      - docker-ce-rootless-extras: 5:26.1.4-1~debian.12~bookworm
     - hold: True
     - update_holds: True
 {% elif grains.oscodename == 'jammy' %}
 dockerheldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.6.21-1
-      - docker-ce: 5:24.0.2-1~ubuntu.22.04~jammy
-      - docker-ce-cli: 5:24.0.2-1~ubuntu.22.04~jammy
-      - docker-ce-rootless-extras: 5:24.0.2-1~ubuntu.22.04~jammy
+      - containerd.io: 1.6.33-1
+      - docker-ce: 5:26.1.4-1~ubuntu.22.04~jammy
+      - docker-ce-cli: 5:26.1.4-1~ubuntu.22.04~jammy
+      - docker-ce-rootless-extras: 5:26.1.4-1~ubuntu.22.04~jammy
     - hold: True
     - update_holds: True
 {% else %}
 dockerheldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.4.9-1
-      - docker-ce: 5:20.10.8~3-0~ubuntu-focal
-      - docker-ce-cli: 5:20.10.5~3-0~ubuntu-focal
-      - docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-focal
+      - containerd.io: 1.6.33-1
+      - docker-ce: 5:26.1.4-1~ubuntu.20.04~focal
+      - docker-ce-cli: 5:26.1.4-1~ubuntu.20.04~focal
+      - docker-ce-rootless-extras: 5:26.1.4-1~ubuntu.20.04~focal
     - hold: True
     - update_holds: True
 {% endif %}
@@ -51,10 +51,10 @@ dockerheldpackages:
 dockerheldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.6.21-3.1.el9
-      - docker-ce: 24.0.4-1.el9
-      - docker-ce-cli: 24.0.4-1.el9
-      - docker-ce-rootless-extras: 24.0.4-1.el9
+      - containerd.io: 1.6.33-3.1.el9
+      - docker-ce: 3:26.1.4-1.el9
+      - docker-ce-cli: 1:26.1.4-1.el9
+      - docker-ce-rootless-extras: 26.1.4-1.el9
     - hold: True
     - update_holds: True
 {% endif %}


@@ -63,6 +63,42 @@ docker:
so-elastic-agent: *dockerOptions so-elastic-agent: *dockerOptions
so-telegraf: *dockerOptions so-telegraf: *dockerOptions
so-steno: *dockerOptions so-steno: *dockerOptions
so-suricata: *dockerOptions so-suricata:
final_octet:
description: Last octet of the container IP address.
helpLink: docker.html
readonly: True
advanced: True
global: True
port_bindings:
description: List of port bindings for the container.
helpLink: docker.html
advanced: True
multiline: True
forcedType: "[]string"
custom_bind_mounts:
description: List of custom local volume bindings.
advanced: True
helpLink: docker.html
multiline: True
forcedType: "[]string"
extra_hosts:
description: List of additional host entries for the container.
advanced: True
helpLink: docker.html
multiline: True
forcedType: "[]string"
extra_env:
description: List of additional ENV entries for the container.
advanced: True
helpLink: docker.html
multiline: True
forcedType: "[]string"
ulimits:
description: Ulimits for the container, in bytes.
advanced: True
helpLink: docker.html
multiline: True
forcedType: "[]string"
so-zeek: *dockerOptions so-zeek: *dockerOptions
so-kafka: *dockerOptions so-kafka: *dockerOptions


@@ -82,6 +82,36 @@ elastasomodulesync:
- group: 933 - group: 933
- makedirs: True - makedirs: True
elastacustomdir:
file.directory:
- name: /opt/so/conf/elastalert/custom
- user: 933
- group: 933
- makedirs: True
elastacustomsync:
file.recurse:
- name: /opt/so/conf/elastalert/custom
- source: salt://elastalert/files/custom
- user: 933
- group: 933
- makedirs: True
- file_mode: 660
- show_changes: False
elastapredefinedsync:
file.recurse:
- name: /opt/so/conf/elastalert/predefined
- source: salt://elastalert/files/predefined
- user: 933
- group: 933
- makedirs: True
- template: jinja
- file_mode: 660
- context:
elastalert: {{ ELASTALERTMERGED }}
- show_changes: False
elastaconf: elastaconf:
file.managed: file.managed:
- name: /opt/so/conf/elastalert/elastalert_config.yaml - name: /opt/so/conf/elastalert/elastalert_config.yaml


@@ -1,5 +1,6 @@
elastalert: elastalert:
enabled: False enabled: False
alerter_parameters: ""
config: config:
rules_folder: /opt/elastalert/rules/ rules_folder: /opt/elastalert/rules/
scan_subdirectories: true scan_subdirectories: true


@@ -30,6 +30,8 @@ so-elastalert:
- /opt/so/rules/elastalert:/opt/elastalert/rules/:ro - /opt/so/rules/elastalert:/opt/elastalert/rules/:ro
- /opt/so/log/elastalert:/var/log/elastalert:rw - /opt/so/log/elastalert:/var/log/elastalert:rw
- /opt/so/conf/elastalert/modules/:/opt/elastalert/modules/:ro - /opt/so/conf/elastalert/modules/:/opt/elastalert/modules/:ro
- /opt/so/conf/elastalert/predefined/:/opt/elastalert/predefined/:ro
- /opt/so/conf/elastalert/custom/:/opt/elastalert/custom/:ro
- /opt/so/conf/elastalert/elastalert_config.yaml:/opt/elastalert/config.yaml:ro - /opt/so/conf/elastalert/elastalert_config.yaml:/opt/elastalert/config.yaml:ro
{% if DOCKER.containers['so-elastalert'].custom_bind_mounts %} {% if DOCKER.containers['so-elastalert'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-elastalert'].custom_bind_mounts %} {% for BIND in DOCKER.containers['so-elastalert'].custom_bind_mounts %}


@@ -0,0 +1 @@
THIS IS A PLACEHOLDER FILE


@@ -1,38 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
from time import gmtime, strftime
import requests,json
from elastalert.alerts import Alerter
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class PlaybookESAlerter(Alerter):
"""
Use matched data to create alerts in elasticsearch
"""
required_options = set(['play_title','play_url','sigma_level'])
def alert(self, matches):
for match in matches:
today = strftime("%Y.%m.%d", gmtime())
timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S"'.000Z', gmtime())
headers = {"Content-Type": "application/json"}
creds = None
if 'es_username' in self.rule and 'es_password' in self.rule:
creds = (self.rule['es_username'], self.rule['es_password'])
payload = {"tags":"alert","rule": { "name": self.rule['play_title'],"case_template": self.rule['play_id'],"uuid": self.rule['play_id'],"category": self.rule['rule.category']},"event":{ "severity": self.rule['event.severity'],"module": self.rule['event.module'],"dataset": self.rule['event.dataset'],"severity_label": self.rule['sigma_level']},"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"event_data": match, "@timestamp": timestamp}
url = f"https://{self.rule['es_host']}:{self.rule['es_port']}/logs-playbook.alerts-so/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False, auth=creds)
def get_info(self):
return {'type': 'PlaybookESAlerter'}


@@ -0,0 +1,63 @@
# -*- coding: utf-8 -*-
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
from time import gmtime, strftime
import requests,json
from elastalert.alerts import Alerter
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class SecurityOnionESAlerter(Alerter):
"""
Use matched data to create alerts in Elasticsearch.
"""
required_options = set(['detection_title', 'sigma_level'])
optional_fields = ['sigma_category', 'sigma_product', 'sigma_service']
def alert(self, matches):
for match in matches:
timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S"'.000Z', gmtime())
headers = {"Content-Type": "application/json"}
creds = None
if 'es_username' in self.rule and 'es_password' in self.rule:
creds = (self.rule['es_username'], self.rule['es_password'])
# Start building the rule dict
rule_info = {
"name": self.rule['detection_title'],
"uuid": self.rule['detection_public_id']
}
# Add optional fields if they are present in the rule
for field in self.optional_fields:
rule_key = field.split('_')[-1] # Assumes field format "sigma_<key>"
if field in self.rule:
rule_info[rule_key] = self.rule[field]
# Construct the payload with the conditional rule_info
payload = {
"tags": "alert",
"rule": rule_info,
"event": {
"severity": self.rule['event.severity'],
"module": self.rule['event.module'],
"dataset": self.rule['event.dataset'],
"severity_label": self.rule['sigma_level']
},
"sigma_level": self.rule['sigma_level'],
"event_data": match,
"@timestamp": timestamp
}
url = f"https://{self.rule['es_host']}:{self.rule['es_port']}/logs-detections.alerts-so/_doc/"
requests.post(url, data=json.dumps(payload), headers=headers, verify=False, auth=creds)
def get_info(self):
return {'type': 'SecurityOnionESAlerter'}
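The optional-field handling in SecurityOnionESAlerter derives each payload key from the text after the last underscore in the rule key, so `sigma_category` lands in the document as `rule.category`, and keys absent from the rule are simply skipped. A standalone sketch of just that mapping (example data is illustrative, not from the diff):

```python
# Sketch of the optional-field mapping used by SecurityOnionESAlerter:
# "sigma_category" -> rule_info["category"], "sigma_product" -> "product",
# etc. Only keys actually present in the rule are copied into the payload.
OPTIONAL_FIELDS = ["sigma_category", "sigma_product", "sigma_service"]

def build_rule_info(rule):
    info = {"name": rule["detection_title"], "uuid": rule["detection_public_id"]}
    for field in OPTIONAL_FIELDS:
        key = field.split("_")[-1]  # assumes "sigma_<key>" naming
        if field in rule:
            info[key] = rule[field]
    return info

rule = {
    "detection_title": "Suspicious Process",
    "detection_public_id": "abc-123",
    "sigma_category": "process_creation",
}
print(build_rule_info(rule))
# {'name': 'Suspicious Process', 'uuid': 'abc-123', 'category': 'process_creation'}
```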


@@ -0,0 +1,6 @@
{% if elastalert.get('jira_user', '') | length > 0 and elastalert.get('jira_pass', '') | length > 0 %}
user: {{ elastalert.jira_user }}
password: {{ elastalert.jira_pass }}
{% else %}
apikey: {{ elastalert.get('jira_api_key', '') }}
{% endif %}
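The jira_auth.yaml template above writes a username/password pair when both are configured and otherwise falls back to the API key. The same selection logic as a small Python sketch (the function name is illustrative, not part of the diff):

```python
def jira_auth(jira_user="", jira_pass="", jira_api_key=""):
    """Mirror the jira_auth.yaml template: prefer user/password when
    both are present, otherwise fall back to the API key."""
    if jira_user and jira_pass:
        return {"user": jira_user, "password": jira_pass}
    return {"apikey": jira_api_key}

print(jira_auth(jira_api_key="abc123"))
# {'apikey': 'abc123'}
```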


@@ -0,0 +1,2 @@
user: {{ elastalert.get('smtp_user', '') }}
password: {{ elastalert.get('smtp_pass', '') }}


@@ -13,3 +13,19 @@
{% do ELASTALERTDEFAULTS.elastalert.config.update({'es_password': pillar.elasticsearch.auth.users.so_elastic_user.pass}) %} {% do ELASTALERTDEFAULTS.elastalert.config.update({'es_password': pillar.elasticsearch.auth.users.so_elastic_user.pass}) %}
{% set ELASTALERTMERGED = salt['pillar.get']('elastalert', ELASTALERTDEFAULTS.elastalert, merge=True) %} {% set ELASTALERTMERGED = salt['pillar.get']('elastalert', ELASTALERTDEFAULTS.elastalert, merge=True) %}
{% if 'ntf' in salt['pillar.get']('features', []) %}
{% set params = ELASTALERTMERGED.get('alerter_parameters', '') | load_yaml %}
{% if params != None and params | length > 0 %}
{% do ELASTALERTMERGED.config.update(params) %}
{% endif %}
{% if ELASTALERTMERGED.get('smtp_user', '') | length > 0 %}
{% do ELASTALERTMERGED.config.update({'smtp_auth_file': '/opt/elastalert/predefined/smtp_auth.yaml'}) %}
{% endif %}
{% if ELASTALERTMERGED.get('jira_user', '') | length > 0 or ELASTALERTMERGED.get('jira_api_key', '') | length > 0 %}
{% do ELASTALERTMERGED.config.update({'jira_account_file': '/opt/elastalert/predefined/jira_auth.yaml'}) %}
{% endif %}
{% endif %}


@@ -2,6 +2,99 @@ elastalert:
enabled: enabled:
description: You can enable or disable Elastalert. description: You can enable or disable Elastalert.
helpLink: elastalert.html helpLink: elastalert.html
alerter_parameters:
title: Alerter Parameters
description: Optional configuration parameters for additional alerters that can be enabled for all Sigma rules. Filter for 'Alerter' in this Configuration screen to find the setting that allows these alerters to be enabled within the SOC ElastAlert module. Use YAML format for these parameters, and reference the ElastAlert 2 documentation, located at https://elastalert2.readthedocs.io, for available alerters and their required configuration parameters. A full update of the ElastAlert rule engine, via the Detections screen, is required in order to apply these changes. Requires a valid Security Onion license key.
global: True
multiline: True
syntax: yaml
helpLink: elastalert.html
forcedType: string
jira_api_key:
title: Jira API Key
description: Optional configuration parameter for Jira API Key, used instead of the Jira username and password. Requires a valid Security Onion license key.
global: True
sensitive: True
helpLink: elastalert.html
forcedType: string
jira_pass:
title: Jira Password
description: Optional configuration parameter for Jira password. Requires a valid Security Onion license key.
global: True
sensitive: True
helpLink: elastalert.html
forcedType: string
jira_user:
title: Jira Username
description: Optional configuration parameter for Jira username. Requires a valid Security Onion license key.
global: True
helpLink: elastalert.html
forcedType: string
smtp_pass:
title: SMTP Password
description: Optional configuration parameter for SMTP password, required for authenticating email servers. Requires a valid Security Onion license key.
global: True
sensitive: True
helpLink: elastalert.html
forcedType: string
smtp_user:
title: SMTP Username
description: Optional configuration parameter for SMTP username, required for authenticating email servers. Requires a valid Security Onion license key.
global: True
helpLink: elastalert.html
forcedType: string
files:
custom:
alertmanager_ca__crt:
description: Optional custom Certificate Authority for connecting to an AlertManager server. To utilize this custom file, the alertmanager_ca_certs key must be set to /opt/elastalert/custom/alertmanager_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
gelf_ca__crt:
description: Optional custom Certificate Authority for connecting to a Graylog server. To utilize this custom file, the graylog_ca_certs key must be set to /opt/elastalert/custom/graylog_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
http_post_ca__crt:
description: Optional custom Certificate Authority for connecting to a generic HTTP server, via the legacy HTTP POST alerter. To utilize this custom file, the http_post_ca_certs key must be set to /opt/elastalert/custom/http_post_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
http_post2_ca__crt:
description: Optional custom Certificate Authority for connecting to a generic HTTP server, via the newer HTTP POST 2 alerter. To utilize this custom file, the http_post2_ca_certs key must be set to /opt/elastalert/custom/http_post2_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
ms_teams_ca__crt:
description: Optional custom Certificate Authority for connecting to a Microsoft Teams server. To utilize this custom file, the ms_teams_ca_certs key must be set to /opt/elastalert/custom/ms_teams_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
pagerduty_ca__crt:
description: Optional custom Certificate Authority for connecting to a PagerDuty server. To utilize this custom file, the pagerduty_ca_certs key must be set to /opt/elastalert/custom/pagerduty_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
rocket_chat_ca__crt:
description: Optional custom Certificate Authority for connecting to a Rocket.Chat server. To utilize this custom file, the rocket_chat_ca_certs key must be set to /opt/elastalert/custom/rocket_chat_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
smtp__crt:
description: Optional custom certificate for connecting to an SMTP server. To utilize this custom file, the smtp_cert_file key must be set to /opt/elastalert/custom/smtp.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
smtp__key:
description: Optional custom certificate key for connecting to an SMTP server. To utilize this custom file, the smtp_key_file key must be set to /opt/elastalert/custom/smtp.key in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
slack_ca__crt:
description: Optional custom Certificate Authority for connecting to Slack. To utilize this custom file, the slack_ca_certs key must be set to /opt/elastalert/custom/slack_ca.crt in the Alerter Parameters setting. Requires a valid Security Onion license key.
global: True
file: True
helpLink: elastalert.html
config: config:
disable_rules_on_error: disable_rules_on_error:
description: Disable rules on failure. description: Disable rules on failure.


@@ -37,6 +37,7 @@ elasticfleet:
- azure - azure
- barracuda - barracuda
- carbonblack_edr - carbonblack_edr
- cef
- checkpoint - checkpoint
- cisco_asa - cisco_asa
- cisco_duo - cisco_duo
@@ -118,3 +119,8 @@ elasticfleet:
base_url: https://api.platform.sublimesecurity.com base_url: https://api.platform.sublimesecurity.com
poll_interval: 5m poll_interval: 5m
limit: 100 limit: 100
kismet:
base_url: http://localhost:2501
poll_interval: 1m
api_key:
enabled_nodes: []


@@ -27,7 +27,9 @@ wait_for_elasticsearch_elasticfleet:
so-elastic-fleet-auto-configure-logstash-outputs: so-elastic-fleet-auto-configure-logstash-outputs:
cmd.run: cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update - name: /usr/sbin/so-elastic-fleet-outputs-update
- retry: True - retry:
attempts: 4
interval: 30
{% endif %} {% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection # If enabled, automatically update Fleet Server URLs & ES Connection
@@ -35,7 +37,9 @@ so-elastic-fleet-auto-configure-logstash-outputs:
so-elastic-fleet-auto-configure-server-urls: so-elastic-fleet-auto-configure-server-urls:
cmd.run: cmd.run:
- name: /usr/sbin/so-elastic-fleet-urls-update - name: /usr/sbin/so-elastic-fleet-urls-update
- retry: True - retry:
attempts: 4
interval: 30
{% endif %} {% endif %}
# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs # Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
@@ -43,12 +47,16 @@ so-elastic-fleet-auto-configure-server-urls:
so-elastic-fleet-auto-configure-elasticsearch-urls: so-elastic-fleet-auto-configure-elasticsearch-urls:
cmd.run: cmd.run:
- name: /usr/sbin/so-elastic-fleet-es-url-update - name: /usr/sbin/so-elastic-fleet-es-url-update
- retry: True - retry:
attempts: 4
interval: 30
so-elastic-fleet-auto-configure-artifact-urls: so-elastic-fleet-auto-configure-artifact-urls:
cmd.run: cmd.run:
- name: /usr/sbin/so-elastic-fleet-artifacts-url-update - name: /usr/sbin/so-elastic-fleet-artifacts-url-update
- retry: True - retry:
attempts: 4
interval: 30
{% endif %} {% endif %}
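The four states above replace the bare `retry: True` with an explicit policy of 4 attempts spaced 30 seconds apart. A generic sketch of that retry shape (not Salt's actual implementation):

```python
import time

def run_with_retry(cmd, attempts=4, interval=30, sleep=time.sleep):
    """Run `cmd` until it succeeds, up to `attempts` tries, pausing
    `interval` seconds between failures -- the shape the Salt states
    now declare explicitly instead of relying on retry defaults."""
    for attempt in range(1, attempts + 1):
        if cmd():
            return True
        if attempt < attempts:
            sleep(interval)
    return False

# Succeeds on the third try; interval=0 keeps the example fast.
tries = []
ok = run_with_retry(lambda: tries.append(1) or len(tries) >= 3, interval=0)
print(ok, len(tries))
# True 3
```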


@@ -0,0 +1,36 @@
{% from 'elasticfleet/map.jinja' import ELASTICFLEETMERGED %}
{% raw %}
{
"package": {
"name": "httpjson",
"version": ""
},
"name": "kismet-logs",
"namespace": "so",
"description": "Kismet Logs",
"policy_id": "FleetServer_{% endraw %}{{ NAME }}{% raw %}",
"inputs": {
"generic-httpjson": {
"enabled": true,
"streams": {
"httpjson.generic": {
"enabled": true,
"vars": {
"data_stream.dataset": "kismet",
"request_url": "{% endraw %}{{ ELASTICFLEETMERGED.optional_integrations.kismet.base_url }}{% raw %}/devices/last-time/-600/devices.tjson",
"request_interval": "{% endraw %}{{ ELASTICFLEETMERGED.optional_integrations.kismet.poll_interval }}{% raw %}",
"request_method": "GET",
"request_transforms": "- set:\r\n target: header.Cookie\r\n value: 'KISMET={% endraw %}{{ ELASTICFLEETMERGED.optional_integrations.kismet.api_key }}{% raw %}'",
"request_redirect_headers_ban_list": [],
"oauth_scopes": [],
"processors": "",
"tags": [],
"pipeline": "kismet.common"
}
}
}
}
},
"force": true
}
{% endraw %}
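The httpjson integration above has the agent poll Kismet over HTTP, authenticating by setting a `KISMET=<api_key>` session cookie and requesting devices seen in the last 600 seconds. A plain-Python sketch of the equivalent request (the URL and key values here are placeholders):

```python
import urllib.request

def build_kismet_poll(base_url: str, api_key: str) -> urllib.request.Request:
    """Build the request the httpjson input issues: devices active in
    the last 600 seconds, authenticated via the KISMET session cookie."""
    return urllib.request.Request(
        f"{base_url}/devices/last-time/-600/devices.tjson",
        headers={"Cookie": f"KISMET={api_key}"},
    )

req = build_kismet_poll("http://localhost:2501", "example-key")
print(req.full_url)
# http://localhost:2501/devices/last-time/-600/devices.tjson
```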


@@ -0,0 +1,35 @@
{
"policy_id": "so-grid-nodes_general",
"package": {
"name": "log",
"version": ""
},
"name": "soc-detections-logs",
"description": "Security Onion Console - Detections Logs",
"namespace": "so",
"inputs": {
"logs-logfile": {
"enabled": true,
"streams": {
"log.logs": {
"enabled": true,
"vars": {
"paths": [
"/opt/so/log/soc/detections_runtime-status_sigma.log",
"/opt/so/log/soc/detections_runtime-status_yara.log"
],
"exclude_files": [],
"ignore_older": "72h",
"data_stream.dataset": "soc",
"tags": [
"so-soc"
],
"processors": "- decode_json_fields:\n fields: [\"message\"]\n target: \"soc\"\n process_array: true\n max_depth: 2\n add_error_key: true \n- add_fields:\n target: event\n fields:\n category: host\n module: soc\n dataset_temp: detections\n- rename:\n fields:\n - from: \"soc.fields.sourceIp\"\n to: \"source.ip\"\n - from: \"soc.fields.status\"\n to: \"http.response.status_code\"\n - from: \"soc.fields.method\"\n to: \"http.request.method\"\n - from: \"soc.fields.path\"\n to: \"url.path\"\n - from: \"soc.message\"\n to: \"event.action\"\n - from: \"soc.level\"\n to: \"log.level\"\n ignore_missing: true",
"custom": "pipeline: common"
}
}
}
}
},
"force": true
}


@@ -79,3 +79,29 @@ elasticfleet:
helpLink: elastic-fleet.html helpLink: elastic-fleet.html
advanced: True advanced: True
forcedType: int forcedType: int
kismet:
base_url:
description: Base URL for Kismet.
global: True
helpLink: elastic-fleet.html
advanced: True
forcedType: string
poll_interval:
description: Poll interval for wireless device data from Kismet. Integration is currently configured to return devices seen as active by any Kismet sensor within the last 10 minutes.
global: True
helpLink: elastic-fleet.html
advanced: True
forcedType: string
api_key:
description: API key for Kismet.
global: True
helpLink: elastic-fleet.html
advanced: True
forcedType: string
sensitive: True
enabled_nodes:
description: Fleet nodes with the Kismet integration enabled. Enter one per line.
global: True
helpLink: elastic-fleet.html
advanced: True
forcedType: "[]string"


@@ -19,7 +19,7 @@ NUM_RUNNING=$(pgrep -cf "/bin/bash /sbin/so-elastic-agent-gen-installers")
for i in {1..30} for i in {1..30}
do do
ENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key') ENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys?perPage=100" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
FLEETHOST=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default' | jq -r '.item.host_urls[]' | paste -sd ',') FLEETHOST=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default' | jq -r '.item.host_urls[]' | paste -sd ',')
if [[ $FLEETHOST ]] && [[ $ENROLLMENTOKEN ]]; then break; else sleep 10; fi if [[ $FLEETHOST ]] && [[ $ENROLLMENTOKEN ]]; then break; else sleep 10; fi
done done
@@ -72,5 +72,5 @@ do
printf "\n### $GOOS/$GOARCH Installer Generated...\n" printf "\n### $GOOS/$GOARCH Installer Generated...\n"
done done
printf "\n### Cleaning up temp files in /nsm/elastic-agent-workspace" printf "\n### Cleaning up temp files in /nsm/elastic-agent-workspace\n"
rm -rf /nsm/elastic-agent-workspace rm -rf /nsm/elastic-agent-workspace


@@ -21,64 +21,104 @@ function update_logstash_outputs() {
# Update Logstash Outputs # Update Logstash Outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
} }
function update_kafka_outputs() {
# Make sure SSL configuration is included in policy updates for Kafka output. SSL is configured in so-elastic-fleet-setup
SSL_CONFIG=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" | jq -r '.item.ssl')
# Get current list of Logstash Outputs JSON_STRING=$(jq -n \
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash') --arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
}
# Check to make sure that the server responded with good data - else, bail from script {% if GLOBALS.pipeline == "KAFKA" %}
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON") # Get current list of Kafka Outputs
if [ "$CHECKSUM" != "so-manager_logstash" ]; then RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka')
printf "Failed to query for current Logstash Outputs..."
exit 1
fi
# Get the current list of Logstash outputs & hash them # Check to make sure that the server responded with good data - else, bail from script
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON") CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}') if [ "$CHECKSUM" != "so-manager_kafka" ]; then
printf "Failed to query for current Kafka Outputs..."
exit 1
fi
declare -a NEW_LIST=() # Get the current list of kafka outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
# Query for the current Grid Nodes that are running kafka
KAFKANODES=$(salt-call --out=json pillar.get kafka:nodes | jq '.local')
# Query for Kafka nodes with Broker role and add hostname to list
while IFS= read -r line; do
NEW_LIST+=("$line")
done < <(jq -r 'to_entries | .[] | select(.value.role | contains("broker")) | .key + ":9092"' <<< $KAFKANODES)
{# If global pipeline isn't set to KAFKA then assume default of REDIS / logstash #}
{% else %}
# Get current list of Logstash Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash')
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
if [ "$CHECKSUM" != "so-manager_logstash" ]; then
printf "Failed to query for current Logstash Outputs..."
exit 1
fi
# Get the current list of Logstash outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
{# If we select to not send to manager via SOC, then omit the code that adds manager to NEW_LIST #}
{% if ELASTICFLEETMERGED.enable_manager_output %}
# Create array & add initial elements
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then
NEW_LIST+=("{{ GLOBALS.url_base }}:5055")
else
NEW_LIST+=("{{ GLOBALS.url_base }}:5055" "{{ GLOBALS.hostname }}:5055")
fi
{% endif %}
# Query for FQDN entries & add them to the list
{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %}
CUSTOMFQDNLIST=('{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(' ') }}')
readarray -t -d ' ' CUSTOMFQDN < <(printf '%s' "$CUSTOMFQDNLIST")
for CUSTOMNAME in "${CUSTOMFQDN[@]}"
do
NEW_LIST+=("$CUSTOMNAME:5055")
done
{% endif %}
# Query for the current Grid Nodes that are running Logstash
LOGSTASHNODES=$(salt-call --out=json pillar.get logstash:nodes | jq '.local')
# Query for Receiver Nodes & add them to the list
if grep -q "receiver" <<< $LOGSTASHNODES; then
readarray -t RECEIVERNODES < <(jq -r ' .receiver | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${RECEIVERNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Query for Fleet Nodes & add them to the list
if grep -q "fleet" <<< $LOGSTASHNODES; then
readarray -t FLEETNODES < <(jq -r ' .fleet | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${FLEETNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
{# If we select to not send to manager via SOC, then omit the code that adds manager to NEW_LIST #}
{% if ELASTICFLEETMERGED.enable_manager_output %}
# Create array & add initial elements
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then
NEW_LIST+=("{{ GLOBALS.url_base }}:5055")
else
NEW_LIST+=("{{ GLOBALS.url_base }}:5055" "{{ GLOBALS.hostname }}:5055")
fi
{% endif %} {% endif %}
# Query for FQDN entries & add them to the list
{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %}
CUSTOMFQDNLIST=('{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(' ') }}')
readarray -t -d ' ' CUSTOMFQDN < <(printf '%s' "$CUSTOMFQDNLIST")
for CUSTOMNAME in "${CUSTOMFQDN[@]}"
do
NEW_LIST+=("$CUSTOMNAME:5055")
done
{% endif %}
# Query for the current Grid Nodes that are running Logstash
LOGSTASHNODES=$(salt-call --out=json pillar.get logstash:nodes | jq '.local')
# Query for Receiver Nodes & add them to the list
if grep -q "receiver" <<< $LOGSTASHNODES; then
readarray -t RECEIVERNODES < <(jq -r ' .receiver | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${RECEIVERNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Query for Fleet Nodes & add them to the list
if grep -q "fleet" <<< $LOGSTASHNODES; then
readarray -t FLEETNODES < <(jq -r ' .fleet | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${FLEETNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Sort & hash the new list of Logstash Outputs # Sort & hash the new list of Logstash Outputs
NEW_LIST_JSON=$(jq --compact-output --null-input '$ARGS.positional' --args -- "${NEW_LIST[@]}") NEW_LIST_JSON=$(jq --compact-output --null-input '$ARGS.positional' --args -- "${NEW_LIST[@]}")
NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}') NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
@@ -87,9 +127,28 @@ NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
if [ "$NEW_HASH" = "$CURRENT_HASH" ]; then if [ "$NEW_HASH" = "$CURRENT_HASH" ]; then
printf "\nHashes match - no update needed.\n" printf "\nHashes match - no update needed.\n"
printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n" printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n"
# Since output can be KAFKA or LOGSTASH, we need to check if the policy set as default matches the value set in GLOBALS.pipeline and update if needed
printf "Checking if the correct output policy is set as default\n"
OUTPUT_DEFAULT=$(jq -r '.item.is_default' <<< $RAW_JSON)
OUTPUT_DEFAULT_MONITORING=$(jq -r '.item.is_default_monitoring' <<< $RAW_JSON)
if [[ "$OUTPUT_DEFAULT" = "false" || "$OUTPUT_DEFAULT_MONITORING" = "false" ]]; then
printf "Default output policy needs to be updated.\n"
{%- if GLOBALS.pipeline == "KAFKA" and 'gmd' in salt['pillar.get']('features', []) %}
update_kafka_outputs
{%- else %}
update_logstash_outputs
{%- endif %}
else
printf "Default output policy is set - no update needed.\n"
fi
exit 0 exit 0
else else
printf "\nHashes don't match - update needed.\n" printf "\nHashes don't match - update needed.\n"
printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n" printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n"
{%- if GLOBALS.pipeline == "KAFKA" and 'gmd' in salt['pillar.get']('features', []) %}
update_kafka_outputs
{%- else %}
update_logstash_outputs update_logstash_outputs
{%- endif %}
fi fi
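Both the Kafka and Logstash branches of this script converge on the same idempotency check: JSON-encode the host list with jq, sha1sum the result, and only call the Fleet API when the hash differs from that of the currently configured list. The same check sketched in Python (note that `sha1sum` via `<<<` also hashes the trailing newline the here-string appends):

```python
import hashlib
import json

def host_list_hash(hosts):
    """Hash a host list the way the script does: compact JSON encoding,
    then sha1 over the text plus the newline that `<<<` appends."""
    encoded = json.dumps(hosts, separators=(",", ":")) + "\n"
    return hashlib.sha1(encoded.encode()).hexdigest()

current = ["manager:5055", "receiver1:5055"]
new = ["manager:5055", "receiver1:5055"]
print(host_list_hash(current) == host_list_hash(new))
# True  (hashes match - no update needed)
```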


@@ -77,6 +77,11 @@ curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fl
printf "\n\n" printf "\n\n"
{%- endif %} {%- endif %}
printf "\nCreate Kafka Output Config if node is not an Import or Eval install\n"
{% if grains.role not in ['so-import', 'so-eval'] %}
/usr/sbin/so-kafka-fleet-output-policy
{% endif %}
# Add Manager Hostname & URL Base to Fleet Host URLs # Add Manager Hostname & URL Base to Fleet Host URLs
printf "\nAdd SO-Manager Fleet URL\n" printf "\nAdd SO-Manager Fleet URL\n"
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then


@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch'] %}
. /usr/sbin/so-common
# Check to make sure that Kibana API is up & ready
RETURN_CODE=0
wait_for_web_response "http://localhost:5601/api/fleet/settings" "fleet" 300 "curl -K /opt/so/conf/elasticsearch/curl.config"
RETURN_CODE=$?
if [[ "$RETURN_CODE" != "0" ]]; then
printf "Kibana API not accessible, can't set up Elastic Fleet output policy for Kafka..."
exit 1
fi
output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
if ! echo "$output" | grep -q "so-manager_kafka"; then
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{ "name": "grid-kafka", "id": "so-manager_kafka", "type": "kafka", "hosts": [ $MANAGER_IP ], "is_default": false, "is_default_monitoring": false, "config_yaml": "", "ssl": { "certificate_authorities": [ $KAFKACA ], "certificate": $KAFKACRT, "key": $KAFKAKEY, "verification_mode": "full" }, "proxy_id": null, "client_id": "Elastic", "version": $KAFKA_OUTPUT_VERSION, "compression": "none", "auth_type": "ssl", "partition": "round_robin", "round_robin": { "group_events": 1 }, "topics":[{"topic":"%{[event.module]}-securityonion","when":{"type":"regexp","condition":"event.module:.+"}},{"topic":"default-securityonion"}], "headers": [ { "key": "", "value": "" } ], "timeout": 30, "broker_timeout": 30, "required_acks": 1 }'
)
curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" -o /dev/null
refresh_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
if ! echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
exit 1
elif echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nSuccessfully setup Elastic Fleet output policy for Kafka...\n"
fi
elif echo "$output" | grep -q "so-manager_kafka"; then
echo -e "\nElastic Fleet output policy for Kafka already exists...\n"
fi
{% else %}
echo -e "\nNo update required...\n"
{% endif %}
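For reference, the Fleet output body assembled by the jq template in the script above can be sketched in Python. The helper name and sample values below are hypothetical; only the field layout mirrors the script:

```python
# Hypothetical helper mirroring the jq template above: it builds the body
# that the script POSTs to the Fleet /api/fleet/outputs endpoint.
# Certificate values here are placeholders, not real PEM material.
def build_kafka_output(manager_ip, crt, key, ca, version="2.6.0"):
    return {
        "name": "grid-kafka",
        "id": "so-manager_kafka",
        "type": "kafka",
        "hosts": [f"{manager_ip}:9092"],
        "is_default": False,
        "is_default_monitoring": False,
        "ssl": {
            "certificate_authorities": [ca],
            "certificate": crt,
            "key": key,
            "verification_mode": "full",
        },
        "client_id": "Elastic",
        "version": version,
        "compression": "none",
        "auth_type": "ssl",
        "partition": "round_robin",
        "round_robin": {"group_events": 1},
        # Events are routed to a per-module topic when event.module is set;
        # otherwise they fall through to the catch-all topic.
        "topics": [
            {"topic": "%{[event.module]}-securityonion",
             "when": {"type": "regexp", "condition": "event.module:.+"}},
            {"topic": "default-securityonion"},
        ],
        "timeout": 30,
        "broker_timeout": 30,
        "required_acks": 1,
    }

body = build_kafka_output("10.0.0.5", "CRT", "KEY", "CA")
```

The script only creates this output when the id `so-manager_kafka` is absent from the existing Fleet outputs list, so re-running it is a no-op.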

File diff suppressed because it is too large


@@ -0,0 +1,20 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
so-elasticsearch_image:
docker_image.present:
- name: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-elasticsearch:{{ GLOBALS.so_version }}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}


@@ -200,9 +200,15 @@ so-elasticsearch-roles-load:
     - require:
       - docker_container: so-elasticsearch
       - file: elasticsearch_sbin_jinja
 {% if grains.role in ['so-eval', 'so-standalone', 'so-managersearch', 'so-heavynode', 'so-manager'] %}
+{% if ELASTICSEARCHMERGED.index_clean %}
+{% set ap = "present" %}
+{% else %}
+{% set ap = "absent" %}
+{% endif %}
 so-elasticsearch-indices-delete:
-  cron.present:
+  cron.{{ap}}:
     - name: /usr/sbin/so-elasticsearch-indices-delete > /opt/so/log/elasticsearch/cron-elasticsearch-indices-delete.log 2>&1
     - identifier: so-elasticsearch-indices-delete
     - user: root
@@ -211,7 +217,8 @@ so-elasticsearch-indices-delete:
     - daymonth: '*'
     - month: '*'
     - dayweek: '*'
 {% endif %}
 {% endif %}
 {% else %}


@@ -80,10 +80,11 @@
     { "set": { "if": "ctx.network?.type == 'ipv6'", "override": true, "field": "destination.ipv6", "value": "true" } },
     { "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.dataset", "value": "import" } },
     { "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.namespace", "value": "so" } },
-    { "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
+    { "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp", "ignore_failure": true, "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSX","yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
     { "community_id": { "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure": true } },
     { "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
     { "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
+    { "set": { "if": "ctx?.metadata?.kafka != null", "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
     { "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } }
   ],
   "on_failure": [


@@ -0,0 +1,10 @@
{
"processors": [
{
"rename": {
"field": "message2.kismet_device_base_macaddr",
"target_field": "network.wireless.bssid"
}
}
]
}


@@ -0,0 +1,50 @@
{
"processors": [
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_beaconed_ssid_record.dot11_advertisedssid_cloaked",
"target_field": "network.wireless.ssid_cloaked",
"if": "ctx?.message2?.dot11_device?.dot11_device_last_beaconed_ssid_record?.dot11_advertisedssid_cloaked != null"
}
},
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_beaconed_ssid_record.dot11_advertisedssid_ssid",
"target_field": "network.wireless.ssid",
"if": "ctx?.message2?.dot11_device?.dot11_device_last_beaconed_ssid_record?.dot11_advertisedssid_ssid != null"
}
},
{
"set": {
"field": "network.wireless.ssid",
"value": "Hidden",
"if": "ctx?.network?.wireless?.ssid_cloaked != null && ctx?.network?.wireless?.ssid_cloaked == 1"
}
},
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_beaconed_ssid_record.dot11_advertisedssid_dot11e_channel_utilization_perc",
"target_field": "network.wireless.channel_utilization",
"if": "ctx?.message2?.dot11_device?.dot11_device_last_beaconed_ssid_record?.dot11_advertisedssid_dot11e_channel_utilization_perc != null"
}
},
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_bssid",
"target_field": "network.wireless.bssid"
}
},
{
"foreach": {
"field": "message2.dot11_device.dot11_device_associated_client_map",
"processor": {
"append": {
"field": "network.wireless.associated_clients",
"value": "{{_ingest._key}}"
}
},
"if": "ctx?.message2?.dot11_device?.dot11_device_associated_client_map != null"
}
}
]
}
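The foreach/append pair above collects the *keys* of the associated-client map (the client MAC addresses) rather than the values, via `{{_ingest._key}}`. In plain Python, with an invented sample document:

```python
# Sketch of the "foreach" + "append" processors above: for each entry of
# the associated-client map, the map key (a client MAC) is appended to
# network.wireless.associated_clients. The sample document is invented.
doc = {
    "message2": {
        "dot11_device": {
            "dot11_device_associated_client_map": {
                "AA:BB:CC:00:11:22": {"packets": 10},
                "AA:BB:CC:33:44:55": {"packets": 3},
            }
        }
    }
}

client_map = (doc.get("message2", {})
                 .get("dot11_device", {})
                 .get("dot11_device_associated_client_map"))
wireless = {}
if client_map is not None:  # mirrors the processor's "if" condition
    # one append per map entry, using the key (_ingest._key)
    wireless["associated_clients"] = list(client_map)
```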


@@ -0,0 +1,16 @@
{
"processors": [
{
"rename": {
"field": "message2.kismet_device_base_macaddr",
"target_field": "client.mac"
}
},
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_bssid",
"target_field": "network.wireless.bssid"
}
}
]
}


@@ -0,0 +1,29 @@
{
"processors": [
{
"rename": {
"field": "message2.kismet_device_base_macaddr",
"target_field": "client.mac"
}
},
{
"rename": {
"field": "message2.dot11_device.dot11_device_last_bssid",
"target_field": "network.wireless.last_connected_bssid",
"if": "ctx?.message2?.dot11_device?.dot11_device_last_bssid != null"
}
},
{
"foreach": {
"field": "message2.dot11_device.dot11_device_client_map",
"processor": {
"append": {
"field": "network.wireless.known_connected_bssid",
"value": "{{_ingest._key}}"
}
},
"if": "ctx?.message2?.dot11_device?.dot11_device_client_map != null"
}
}
]
}


@@ -0,0 +1,159 @@
{
"processors": [
{
"json": {
"field": "message",
"target_field": "message2"
}
},
{
"date": {
"field": "message2.kismet_device_base_mod_time",
"formats": [
"epoch_second"
],
"target_field": "@timestamp"
}
},
{
"set": {
"field": "event.category",
"value": "network"
}
},
{
"dissect": {
"field": "message2.kismet_device_base_type",
"pattern": "%{wifi} %{device_type}"
}
},
{
"lowercase": {
"field": "device_type"
}
},
{
"set": {
"field": "event.dataset",
"value": "kismet.{{device_type}}"
}
},
{
"set": {
"field": "event.dataset",
"value": "kismet.wds_ap",
"if": "ctx?.device_type == 'wds ap'"
}
},
{
"set": {
"field": "event.dataset",
"value": "kismet.ad_hoc",
"if": "ctx?.device_type == 'ad-hoc'"
}
},
{
"set": {
"field": "event.module",
"value": "kismet"
}
},
{
"rename": {
"field": "message2.kismet_device_base_packets_tx_total",
"target_field": "source.packets"
}
},
{
"rename": {
"field": "message2.kismet_device_base_num_alerts",
"target_field": "kismet.alerts.count"
}
},
{
"rename": {
"field": "message2.kismet_device_base_channel",
"target_field": "network.wireless.channel",
"if": "ctx?.message2?.kismet_device_base_channel != ''"
}
},
{
"rename": {
"field": "message2.kismet_device_base_frequency",
"target_field": "network.wireless.frequency",
"if": "ctx?.message2?.kismet_device_base_frequency != 0"
}
},
{
"rename": {
"field": "message2.kismet_device_base_last_time",
"target_field": "kismet.last_seen"
}
},
{
"date": {
"field": "kismet.last_seen",
"formats": [
"epoch_second"
],
"target_field": "kismet.last_seen"
}
},
{
"rename": {
"field": "message2.kismet_device_base_first_time",
"target_field": "kismet.first_seen"
}
},
{
"date": {
"field": "kismet.first_seen",
"formats": [
"epoch_second"
],
"target_field": "kismet.first_seen"
}
},
{
"rename": {
"field": "message2.kismet_device_base_seenby",
"target_field": "kismet.seenby"
}
},
{
"foreach": {
"field": "kismet.seenby",
"processor": {
"pipeline": {
"name": "kismet.seenby"
}
}
}
},
{
"rename": {
"field": "message2.kismet_device_base_manuf",
"target_field": "device.manufacturer"
}
},
{
"pipeline": {
"name": "{{event.dataset}}"
}
},
{
"remove": {
"field": [
"message2",
"message",
"device_type",
"wifi",
"agent",
"host",
"event.created"
],
"ignore_failure": true
}
}
]
}
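The dissect → lowercase → set chain above derives `event.dataset` from `kismet_device_base_type`, with special cases for names that would produce invalid dataset segments. A rough Python equivalent (the sample type strings are illustrative, not an exhaustive list of Kismet device types):

```python
# Rough equivalent of the dissect ("%{wifi} %{device_type}"), lowercase,
# and conditional set processors above.
def derive_dataset(base_type: str) -> str:
    wifi, device_type = base_type.split(" ", 1)  # dissect pattern
    device_type = device_type.lower()            # lowercase processor
    if device_type == "wds ap":                  # space would break the name
        return "kismet.wds_ap"
    if device_type == "ad-hoc":                  # dash would break the name
        return "kismet.ad_hoc"
    return f"kismet.{device_type}"

print(derive_dataset("Wi-Fi AP"))      # kismet.ap
print(derive_dataset("Wi-Fi Ad-Hoc"))  # kismet.ad_hoc
```

The resulting dataset name is then used by the trailing `pipeline` processor to dispatch each document to its type-specific pipeline.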


@@ -0,0 +1,9 @@
{
"processors": [
{
"pipeline": {
"name": "kismet.client"
}
}
]
}


@@ -0,0 +1,52 @@
{
"processors": [
{
"rename": {
"field": "_ingest._value.kismet_common_seenby_num_packets",
"target_field": "_ingest._value.packets_seen",
"ignore_missing": true
}
},
{
"rename": {
"field": "_ingest._value.kismet_common_seenby_uuid",
"target_field": "_ingest._value.serial_number",
"ignore_missing": true
}
},
{
"rename": {
"field": "_ingest._value.kismet_common_seenby_first_time",
"target_field": "_ingest._value.first_seen",
"ignore_missing": true
}
},
{
"rename": {
"field": "_ingest._value.kismet_common_seenby_last_time",
"target_field": "_ingest._value.last_seen",
"ignore_missing": true
}
},
{
"date": {
"field": "_ingest._value.first_seen",
"formats": [
"epoch_second"
],
"target_field": "_ingest._value.first_seen",
"ignore_failure": true
}
},
{
"date": {
"field": "_ingest._value.last_seen",
"formats": [
"epoch_second"
],
"target_field": "_ingest._value.last_seen",
"ignore_failure": true
}
}
]
}


@@ -0,0 +1,10 @@
{
"processors": [
{
"rename": {
"field": "message2.kismet_device_base_macaddr",
"target_field": "client.mac"
}
}
]
}


@@ -0,0 +1,22 @@
{
"processors": [
{
"rename": {
"field": "message2.kismet_device_base_commonname",
"target_field": "network.wireless.bssid"
}
},
{
"foreach": {
"field": "message2.dot11_device.dot11_device_associated_client_map",
"processor": {
"append": {
"field": "network.wireless.associated_clients",
"value": "{{_ingest._key}}"
}
},
"if": "ctx?.message2?.dot11_device?.dot11_device_associated_client_map != null"
}
}
]
}


@@ -56,6 +56,7 @@
     { "set": { "if": "ctx.exiftool?.Subsystem != null", "field": "host.subsystem", "value": "{{exiftool.Subsystem}}", "ignore_failure": true }},
     { "set": { "if": "ctx.scan?.yara?.matches instanceof List", "field": "rule.name", "value": "{{scan.yara.matches.0}}" }},
     { "set": { "if": "ctx.rule?.name != null", "field": "event.dataset", "value": "alert", "override": true }},
+    { "set": { "if": "ctx.rule?.name != null", "field": "rule.uuid", "value": "{{rule.name}}", "override": true }},
     { "rename": { "field": "file.flavors.mime", "target_field": "file.mime_type", "ignore_missing": true }},
     { "set": { "if": "ctx.rule?.name != null && ctx.rule?.score == null", "field": "event.severity", "value": 3, "override": true } },
     { "convert" : { "if": "ctx.rule?.score != null", "field" : "rule.score", "type": "integer"}},


@@ -1,6 +1,7 @@
 {
   "description" : "suricata.alert",
   "processors" : [
+    { "set": { "field": "_index", "value": "logs-suricata.alerts-so" } },
     { "set": { "field": "tags", "value": "alert" }},
     { "rename":{ "field": "message2.alert", "target_field": "rule", "ignore_failure": true } },
     { "rename":{ "field": "rule.signature", "target_field": "rule.name", "ignore_failure": true } },


@@ -27,7 +27,8 @@
         "monitor",
         "read",
         "read_cross_cluster",
-        "view_index_metadata"
+        "view_index_metadata",
+        "write"
       ]
     }
   ],


@@ -13,7 +13,8 @@
         "monitor",
         "read",
         "read_cross_cluster",
-        "view_index_metadata"
+        "view_index_metadata",
+        "write"
       ]
     }
   ],


@@ -5,6 +5,10 @@ elasticsearch:
   esheap:
     description: Specify the memory heap size in (m)egabytes for Elasticsearch.
     helpLink: elasticsearch.html
+  index_clean:
+    description: Determines if indices should be considered for deletion by available disk space in the cluster. Otherwise, indices will only be deleted by the age defined in the ILM settings.
+    forcedType: bool
+    helpLink: elasticsearch.html
   retention:
     retention_pct:
       decription: Total percentage of space used by Elasticsearch for multi node clusters
@@ -98,10 +102,6 @@ elasticsearch:
       policy:
        phases:
          hot:
-            max_age:
-              description: Maximum age of index. ex. 7d - This determines when the index should be moved out of the hot tier.
-              global: True
-              helpLink: elasticsearch.html
           actions:
             set_priority:
               priority:
@@ -120,7 +120,9 @@ elasticsearch:
              helpLink: elasticsearch.html
          cold:
            min_age:
-              description: Minimum age of index. ex. 30d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed.
+              description: Minimum age of index. ex. 60d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed.
+              regex: ^[0-9]{1,5}d$
+              forcedType: string
              global: True
              helpLink: elasticsearch.html
            actions:
@@ -131,8 +133,8 @@ elasticsearch:
              helpLink: elasticsearch.html
          warm:
            min_age:
-              description: Minimum age of index. ex. 30d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed.
-              regex: ^\[0-9\]{1,5}d$
+              description: Minimum age of index. ex. 30d - This determines when the index should be moved to the warm tier. Nodes in the warm tier generally dont need to be as fast as those in the hot tier.
+              regex: ^[0-9]{1,5}d$
              forcedType: string
              global: True
            actions:
@@ -145,6 +147,8 @@ elasticsearch:
          delete:
            min_age:
              description: Minimum age of index. ex. 90d - This determines when the index should be deleted.
+              regex: ^[0-9]{1,5}d$
+              forcedType: string
              global: True
              helpLink: elasticsearch.html
   so-logs: &indexSettings
@@ -271,7 +275,9 @@ elasticsearch:
              helpLink: elasticsearch.html
          warm:
            min_age:
-              description: Minimum age of index. This determines when the index should be moved to the hot tier.
+              description: Minimum age of index. ex. 30d - This determines when the index should be moved to the warm tier. Nodes in the warm tier generally dont need to be as fast as those in the hot tier.
+              regex: ^[0-9]{1,5}d$
+              forcedType: string
              global: True
              advanced: True
              helpLink: elasticsearch.html
@@ -296,7 +302,9 @@ elasticsearch:
              helpLink: elasticsearch.html
          cold:
            min_age:
-              description: Minimum age of index. This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed.
+              description: Minimum age of index. ex. 60d - This determines when the index should be moved to the cold tier. While still searchable, this tier is typically optimized for lower storage costs rather than search speed.
+              regex: ^[0-9]{1,5}d$
+              forcedType: string
              global: True
              advanced: True
              helpLink: elasticsearch.html
@@ -311,6 +319,8 @@ elasticsearch:
          delete:
            min_age:
              description: Minimum age of index. This determines when the index should be deleted.
+              regex: ^[0-9]{1,5}d$
+              forcedType: string
              global: True
              advanced: True
              helpLink: elasticsearch.html
@@ -384,6 +394,7 @@ elasticsearch:
   so-logs-darktrace_x_ai_analyst_alert: *indexSettings
   so-logs-darktrace_x_model_breach_alert: *indexSettings
   so-logs-darktrace_x_system_status_alert: *indexSettings
+  so-logs-detections_x_alerts: *indexSettings
   so-logs-f5_bigip_x_log: *indexSettings
   so-logs-fim_x_event: *indexSettings
   so-logs-fortinet_x_clientendpoint: *indexSettings
@@ -510,8 +521,10 @@ elasticsearch:
   so-endgame: *indexSettings
   so-idh: *indexSettings
   so-suricata: *indexSettings
+  so-suricata_x_alerts: *indexSettings
   so-import: *indexSettings
   so-kratos: *indexSettings
+  so-kismet: *indexSettings
   so-logstash: *indexSettings
   so-redis: *indexSettings
   so-strelka: *indexSettings


@@ -2,11 +2,9 @@
 {% set DEFAULT_GLOBAL_OVERRIDES = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings.pop('global_overrides') %}
 {% set PILLAR_GLOBAL_OVERRIDES = {} %}
-{% if salt['pillar.get']('elasticsearch:index_settings') is defined %}
-{% set ES_INDEX_PILLAR = salt['pillar.get']('elasticsearch:index_settings') %}
-{% if ES_INDEX_PILLAR.global_overrides is defined %}
-{% set PILLAR_GLOBAL_OVERRIDES = ES_INDEX_PILLAR.pop('global_overrides') %}
-{% endif %}
+{% set ES_INDEX_PILLAR = salt['pillar.get']('elasticsearch:index_settings', {}) %}
+{% if ES_INDEX_PILLAR.global_overrides is defined %}
+{% set PILLAR_GLOBAL_OVERRIDES = ES_INDEX_PILLAR.pop('global_overrides') %}
 {% endif %}
 {% set ES_INDEX_SETTINGS_ORIG = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings %}
@@ -19,6 +17,12 @@
 {% set ES_INDEX_SETTINGS = {} %}
 {% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update(salt['defaults.merge'](ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
 {% for index, settings in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.items() %}
+{# if policy isn't defined in the original index settings, then dont merge policy from the global_overrides #}
+{# this will prevent so-elasticsearch-ilm-policy-load from trying to load policy on non ILM manged indices #}
+{% if not ES_INDEX_SETTINGS_ORIG[index].policy is defined and ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
+{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].pop('policy') %}
+{% endif %}
 {% if settings.index_template is defined %}
 {% if not settings.get('index_sorting', False) | to_bool and settings.index_template.template.settings.index.sort is defined %}
 {% do settings.index_template.template.settings.index.pop('sort') %}


@@ -0,0 +1,36 @@
{
"_meta": {
"documentation": "https://www.elastic.co/guide/en/ecs/current/ecs-device.html",
"ecs_version": "1.12.2"
},
"template": {
"mappings": {
"properties": {
"device": {
"properties": {
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"manufacturer": {
"ignore_above": 1024,
"type": "keyword"
},
"model": {
"properties": {
"identifier": {
"ignore_above": 1024,
"type": "keyword"
},
"name": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
}
}
}
}


@@ -0,0 +1,32 @@
{
"_meta": {
"documentation": "https://www.elastic.co/guide/en/ecs/current/ecs-base.html",
"ecs_version": "1.12.2"
},
"template": {
"mappings": {
"properties": {
"kismet": {
"properties": {
"alerts": {
"properties": {
"count": {
"type": "long"
}
}
},
"first_seen": {
"type": "date"
},
"last_seen": {
"type": "date"
},
"seenby": {
"type": "nested"
}
}
}
}
}
}
}


@@ -77,6 +77,43 @@
"type": "keyword" "type": "keyword"
} }
} }
},
"wireless": {
"properties": {
"associated_clients": {
"ignore_above": 1024,
"type": "keyword"
},
"bssid": {
"ignore_above": 1024,
"type": "keyword"
},
"channel": {
"ignore_above": 1024,
"type": "keyword"
},
"channel_utilization": {
"type": "float"
},
"frequency": {
"type": "double"
},
"ssid": {
"ignore_above": 1024,
"type": "keyword"
},
"ssid_cloaked": {
"type": "integer"
},
"known_connected_bssid": {
"ignore_above": 1024,
"type": "keyword"
},
"last_connected_bssid": {
"ignore_above": 1024,
"type": "keyword"
}
}
} }
} }
} }


@@ -0,0 +1,112 @@
{
"template": {
"mappings": {
"dynamic": "strict",
"properties": {
"binary": {
"type": "binary"
},
"boolean": {
"type": "boolean"
},
"byte": {
"type": "byte"
},
"created_at": {
"type": "date"
},
"created_by": {
"type": "keyword"
},
"date": {
"type": "date"
},
"date_nanos": {
"type": "date_nanos"
},
"date_range": {
"type": "date_range"
},
"deserializer": {
"type": "keyword"
},
"double": {
"type": "double"
},
"double_range": {
"type": "double_range"
},
"float": {
"type": "float"
},
"float_range": {
"type": "float_range"
},
"geo_point": {
"type": "geo_point"
},
"geo_shape": {
"type": "geo_shape"
},
"half_float": {
"type": "half_float"
},
"integer": {
"type": "integer"
},
"integer_range": {
"type": "integer_range"
},
"ip": {
"type": "ip"
},
"ip_range": {
"type": "ip_range"
},
"keyword": {
"type": "keyword"
},
"list_id": {
"type": "keyword"
},
"long": {
"type": "long"
},
"long_range": {
"type": "long_range"
},
"meta": {
"type": "object",
"enabled": false
},
"serializer": {
"type": "keyword"
},
"shape": {
"type": "shape"
},
"short": {
"type": "short"
},
"text": {
"type": "text"
},
"tie_breaker_id": {
"type": "keyword"
},
"updated_at": {
"type": "date"
},
"updated_by": {
"type": "keyword"
}
}
},
"aliases": {}
},
"version": 2,
"_meta": {
"managed": true,
"description": "default mappings for the .items index template installed by Kibana/Security"
}
}


@@ -0,0 +1,55 @@
{
"template": {
"mappings": {
"dynamic": "strict",
"properties": {
"created_at": {
"type": "date"
},
"created_by": {
"type": "keyword"
},
"description": {
"type": "keyword"
},
"deserializer": {
"type": "keyword"
},
"immutable": {
"type": "boolean"
},
"meta": {
"type": "object",
"enabled": false
},
"name": {
"type": "keyword"
},
"serializer": {
"type": "keyword"
},
"tie_breaker_id": {
"type": "keyword"
},
"type": {
"type": "keyword"
},
"updated_at": {
"type": "date"
},
"updated_by": {
"type": "keyword"
},
"version": {
"type": "keyword"
}
}
},
"aliases": {}
},
"version": 2,
"_meta": {
"managed": true,
"description": "default mappings for the .lists index template installed by Kibana/Security"
}
}


@@ -20,10 +20,12 @@
         "so_detection": {
           "properties": {
             "publicId": {
-              "type": "text"
+              "ignore_above": 1024,
+              "type": "keyword"
             },
             "title": {
-              "type": "text"
+              "ignore_above": 1024,
+              "type": "keyword"
             },
             "severity": {
               "ignore_above": 1024,
@@ -36,6 +38,18 @@
             "description": {
               "type": "text"
             },
+            "category": {
+              "ignore_above": 1024,
+              "type": "keyword"
+            },
+            "product": {
+              "ignore_above": 1024,
+              "type": "keyword"
+            },
+            "service": {
+              "ignore_above": 1024,
+              "type": "keyword"
+            },
             "content": {
               "type": "text"
             },
@@ -49,7 +63,8 @@
               "type": "boolean"
             },
             "tags": {
-              "type": "text"
+              "ignore_above": 1024,
+              "type": "keyword"
             },
             "ruleset": {
               "ignore_above": 1024,
@@ -136,4 +151,4 @@
   "_meta": {
     "ecs_version": "1.12.2"
   }
 }


@@ -5,6 +5,6 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.
-curl -K /opt/so/conf/elasticsearch/curl.config -X GET -k -L "https://localhost:9200/_cat/indices?v&s=index"
+. /usr/sbin/so-common
+curl -K /opt/so/conf/elasticsearch/curl.config -s -k -L "https://localhost:9200/_cat/indices?pretty&v&s=index"


@@ -40,7 +40,7 @@ fi
 # Iterate through the output of _cat/allocation for each node in the cluster to determine the total available space
 {% if GLOBALS.role == 'so-manager' %}
-for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v {{ GLOBALS.manager }} | awk '{print $5}'); do
+for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $5}'); do
 {% else %}
 for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $5}'); do
 {% endif %}


@@ -13,7 +13,7 @@ TOTAL_USED_SPACE=0
 # Iterate through the output of _cat/allocation for each node in the cluster to determine the total used space
 {% if GLOBALS.role == 'so-manager' %}
 # Get total disk space - disk.total
-for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v {{ GLOBALS.manager }} | awk '{print $3}'); do
+for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $3}'); do
 {% else %}
 # Get disk space taken up by indices - disk.indices
 for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $2}'); do


@@ -27,6 +27,7 @@ overlimit() {
 # 2. Check if the maximum number of iterations - MAX_ITERATIONS - has been exceeded. If so, exit.
 # Closed indices will be deleted first. If we are able to bring disk space under LOG_SIZE_LIMIT, or the number of iterations has exceeded the maximum allowed number of iterations, we will break out of the loop.
 while overlimit && [[ $ITERATION -lt $MAX_ITERATIONS ]]; do
+  # If we can't query Elasticsearch, then immediately return false.
@@ -34,28 +35,36 @@ while overlimit && [[ $ITERATION -lt $MAX_ITERATIONS ]]; do
   [ $? -eq 1 ] && echo "$(date) - Could not query Elasticsearch." >> ${LOG} && exit
   # We iterate through the closed and open indices
-  CLOSED_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'close$' | awk '{print $1}' | grep -vE "playbook|so-case" | grep -E "(logstash-|so-|.ds-logs-)" | sort -t- -k3)
-  OPEN_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'open$' | awk '{print $1}' | grep -vE "playbook|so-case" | grep -E "(logstash-|so-|.ds-logs-)" | sort -t- -k3)
-  for INDEX in ${CLOSED_INDICES} ${OPEN_INDICES}; do
-    # Now that we've sorted the indices from oldest to newest, we need to check each index to see if it is assigned as the current write index for a data stream
-    # To do so, we need to identify to which data stream this index is associated
-    # We extract the data stream name using the pattern below
-    DATASTREAM_PATTERN="logs-[a-zA-Z_.]+-[a-zA-Z_.]+"
-    DATASTREAM=$(echo "${INDEX}" | grep -oE "$DATASTREAM_PATTERN")
-    # We look up the data stream, and determine the write index. If there is only one backing index, we delete the entire data stream
-    BACKING_INDICES=$(/usr/sbin/so-elasticsearch-query _data_stream/${DATASTREAM} | jq -r '.data_streams[0].indices | length')
-    if [ "$BACKING_INDICES" -gt 1 ]; then
-      CURRENT_WRITE_INDEX=$(/usr/sbin/so-elasticsearch-query _data_stream/$DATASTREAM | jq -r .data_streams[0].indices[-1].index_name)
-      # We make sure we are not trying to delete a write index
-      if [ "${INDEX}" != "${CURRENT_WRITE_INDEX}" ]; then
-        # This should not be a write index, so we should be allowed to delete it
-        printf "\n$(date) - Used disk space exceeds LOG_SIZE_LIMIT (${LOG_SIZE_LIMIT_GB} GB) - Deleting ${INDEX} index...\n" >> ${LOG}
-        /usr/sbin/so-elasticsearch-query ${INDEX} -XDELETE >> ${LOG} 2>&1
-      fi
-    else
-      printf "\n$(date) - Used disk space exceeds LOG_SIZE_LIMIT (${LOG_SIZE_LIMIT_GB} GB) - There is only one backing index (${INDEX}). Deleting ${DATASTREAM} data stream...\n" >> ${LOG}
+  CLOSED_SO_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'close$' | awk '{print $1}' | grep -E "(^logstash-.*|^so-.*)" | grep -vE "so-case|so-detection" | sort -t- -k3)
+  CLOSED_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'close$' | awk '{print $1}' | grep -E "^.ds-logs-.*" | grep -v "suricata" | sort -t- -k4)
+  OPEN_SO_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'open$' | awk '{print $1}' | grep -E "(^logstash-.*|^so-.*)" | grep -vE "so-case|so-detection" | sort -t- -k3)
+  OPEN_INDICES=$(/usr/sbin/so-elasticsearch-query _cat/indices?h=index,status | grep 'open$' | awk '{print $1}' | grep -E "^.ds-logs-.*" | grep -v "suricata" | sort -t- -k4)
+  for INDEX in ${CLOSED_SO_INDICES} ${OPEN_SO_INDICES} ${CLOSED_INDICES} ${OPEN_INDICES}; do
+    # Check if index is an older index. If it is an older index, delete it before moving on to newer indices.
+    if [[ "$INDEX" =~ "^logstash-.*|so-.*" ]]; then
+      printf "\n$(date) - Used disk space exceeds LOG_SIZE_LIMIT (${LOG_SIZE_LIMIT_GB} GB) - Deleting ${INDEX} index...\n" >> ${LOG}
+      /usr/sbin/so-elasticsearch-query ${INDEX} -XDELETE >> ${LOG} 2>&1
+    else
+      # Now that we've sorted the indices from oldest to newest, we need to check each index to see if it is assigned as the current write index for a data stream
+      # To do so, we need to identify to which data stream this index is associated
+      # We extract the data stream name using the pattern below
+      DATASTREAM_PATTERN="logs-[a-zA-Z_.]+-[a-zA-Z_.]+"
+      DATASTREAM=$(echo "${INDEX}" | grep -oE "$DATASTREAM_PATTERN")
+      # We look up the data stream, and determine the write index. If there is only one backing index, we delete the entire data stream
BACKING_INDICES=$(/usr/sbin/so-elasticsearch-query _data_stream/${DATASTREAM} | jq -r '.data_streams[0].indices | length')
if [ "$BACKING_INDICES" -gt 1 ]; then
CURRENT_WRITE_INDEX=$(/usr/sbin/so-elasticsearch-query _data_stream/$DATASTREAM | jq -r .data_streams[0].indices[-1].index_name)
# We make sure we are not trying to delete a write index
if [ "${INDEX}" != "${CURRENT_WRITE_INDEX}" ]; then
# This should not be a write index, so we should be allowed to delete it
printf "\n$(date) - Used disk space exceeds LOG_SIZE_LIMIT (${LOG_SIZE_LIMIT_GB} GB) - Deleting ${INDEX} index...\n" >> ${LOG}
/usr/sbin/so-elasticsearch-query ${INDEX} -XDELETE >> ${LOG} 2>&1
fi
else
printf "\n$(date) - Used disk space exceeds LOG_SIZE_LIMIT (${LOG_SIZE_LIMIT_GB} GB) - There is only one backing index (${INDEX}). Deleting ${DATASTREAM} data stream...\n" >> ${LOG}
/usr/sbin/so-elasticsearch-query _data_stream/$DATASTREAM -XDELETE >> ${LOG} 2>&1 /usr/sbin/so-elasticsearch-query _data_stream/$DATASTREAM -XDELETE >> ${LOG} 2>&1
fi
fi fi
if ! overlimit ; then if ! overlimit ; then
exit exit
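The new pre-check branch matches legacy `logstash-*`/`so-*` indices with `[[ "$INDEX" =~ "..." ]]`. One bash subtlety worth noting here: when the right-hand side of `=~` is quoted, bash compares it as a literal string rather than a regular expression, so patterns are normally left unquoted or held in a variable. A minimal sketch (the `INDEX` value is hypothetical):

```shell
#!/usr/bin/env bash
INDEX="logstash-2024.06.25"

# Quoted pattern: compared as a literal substring, so this does NOT match.
if [[ "$INDEX" =~ "^logstash-.*" ]]; then quoted=match; else quoted=nomatch; fi

# Unquoted pattern held in a variable: treated as a regex, so this matches.
pattern='^logstash-.*|^so-.*'
if [[ "$INDEX" =~ $pattern ]]; then unquoted=match; else unquoted=nomatch; fi

echo "$quoted $unquoted"
```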


@@ -133,7 +133,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
for i in $pattern; do
TEMPLATE=${i::-14}
COMPONENT_PATTERN=${TEMPLATE:3}
-MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -v osquery)
+MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -vE "detections|osquery")
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" ]]; then
load_failures=$((load_failures+1))
echo "Component template does not exist for $COMPONENT_PATTERN. The index template will not be loaded. Load failures: $load_failures"
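The loop above derives the component name with two bash substring expansions: `${i::-14}` strips the 14-character `-template.json` suffix (an assumption about the filename shape) and `${TEMPLATE:3}` drops the leading `so-`. A sketch with a hypothetical template filename:

```shell
#!/usr/bin/env bash
# Hypothetical index-template filename following the "-template.json" convention.
i="so-logs-system.syslog-template.json"
TEMPLATE=${i::-14}               # strip the 14-char "-template.json" suffix
COMPONENT_PATTERN=${TEMPLATE:3}  # drop the leading "so-"
echo "$TEMPLATE $COMPONENT_PATTERN"
```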


@@ -90,17 +90,20 @@ firewall:
tcp:
- 8086
udp: []
-kafka:
+kafka_controller:
+tcp:
+- 9093
+udp: []
+kafka_data:
tcp:
- 9092
-- 9093
udp: []
kibana:
tcp:
- 5601
udp: []
localrules:
tcp:
- 7788
udp: []
nginx:
@@ -369,7 +372,6 @@ firewall:
- elastic_agent_update
- localrules
- sensoroni
-- kafka
fleet:
portgroups:
- elasticsearch_rest
@@ -405,7 +407,6 @@ firewall:
- docker_registry
- influxdb
- sensoroni
-- kafka
searchnode:
portgroups:
- redis
@@ -419,7 +420,6 @@ firewall:
- elastic_agent_data
- elastic_agent_update
- sensoroni
-- kafka
heavynode:
portgroups:
- redis
@@ -761,7 +761,6 @@ firewall:
- beats_5044
- beats_5644
- beats_5056
-- redis
- elasticsearch_node
- elastic_agent_control
- elastic_agent_data
@@ -1275,38 +1274,51 @@ firewall:
chain:
DOCKER-USER:
hostgroups:
+desktop:
+portgroups:
+- elastic_agent_data
fleet:
portgroups:
-- beats_5056
+- elastic_agent_data
+idh:
+portgroups:
+- elastic_agent_data
sensor:
portgroups:
-- beats_5044
-- beats_5644
- elastic_agent_data
-- kafka
searchnode:
portgroups:
- redis
-- beats_5644
-- kafka
+- elastic_agent_data
+standalone:
+portgroups:
+- redis
+- elastic_agent_data
+manager:
+portgroups:
+- elastic_agent_data
managersearch:
portgroups:
- redis
-- beats_5644
-- kafka
+- elastic_agent_data
self:
portgroups:
- redis
-- beats_5644
+- elastic_agent_data
beats_endpoint:
portgroups:
- beats_5044
beats_endpoint_ssl:
portgroups:
- beats_5644
+elastic_agent_endpoint:
+portgroups:
+- elastic_agent_data
endgame:
portgroups:
- endgame
+receiver:
+portgroups: []
customhostgroup0:
portgroups: []
customhostgroup1:


@@ -18,4 +18,28 @@
{% endfor %}
{% endif %}
{# Only add Kafka firewall items when Kafka enabled #}
{% set role = GLOBALS.role.split('-')[1] %}
{% if GLOBALS.pipeline == 'KAFKA' and role in ['manager', 'managersearch', 'standalone'] %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[role].portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and role == 'receiver' %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.self.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.standalone.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.manager.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.managersearch.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and role in ['manager', 'managersearch', 'standalone', 'receiver'] %}
{% for r in ['manager', 'managersearch', 'standalone', 'receiver', 'fleet', 'idh', 'sensor', 'searchnode','heavynode', 'elastic_agent_endpoint', 'desktop'] %}
{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('kafka_data') %}
{% endif %}
{% endfor %}
{% endif %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}


@@ -7,6 +7,7 @@ firewall:
multiline: True
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regexFailureMessage: You must enter a valid IP address or CIDR.
duplicates: True
anywhere: &hostgroupsettingsadv
description: List of IP or CIDR blocks to allow access to this hostgroup.
forcedType: "[]string"
@@ -15,6 +16,7 @@ firewall:
advanced: True
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regexFailureMessage: You must enter a valid IP address or CIDR.
duplicates: True
beats_endpoint: *hostgroupsettings
beats_endpoint_ssl: *hostgroupsettings
dockernet: &ROhostgroupsettingsadv
@@ -53,6 +55,7 @@ firewall:
multiline: True
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regexFailureMessage: You must enter a valid IP address or CIDR.
duplicates: True
customhostgroup1: *customhostgroupsettings
customhostgroup2: *customhostgroupsettings
customhostgroup3: *customhostgroupsettings
@@ -70,12 +73,14 @@ firewall:
helpLink: firewall.html
advanced: True
multiline: True
duplicates: True
udp: &udpsettings
description: List of UDP ports for this port group.
forcedType: "[]string"
helpLink: firewall.html
advanced: True
multiline: True
duplicates: True
agrules:
tcp: *tcpsettings
udp: *udpsettings
@@ -190,6 +195,7 @@ firewall:
multiline: True
forcedType: "[]string"
helpLink: firewall.html
duplicates: True
sensor:
portgroups: *portgroupsdocker
searchnode:
@@ -243,6 +249,7 @@ firewall:
multiline: True
forcedType: "[]string"
helpLink: firewall.html
duplicates: True
dockernet:
portgroups: *portgroupshost
localhost:


@@ -11,6 +11,8 @@ idh_sshd_selinux:
- sel_type: ssh_port_t
- prereq:
- file: openssh_config
- require:
- pkg: python_selinux_mgmt_tools
{% endif %}
openssh_config:


@@ -15,3 +15,9 @@ openssh:
- enable: False
- name: {{ openssh_map.service }}
{% endif %}
{% if grains.os_family == 'RedHat' %}
python_selinux_mgmt_tools:
pkg.installed:
- name: policycoreutils-python-utils
{% endif %}


@@ -33,6 +33,19 @@ idstools_sbin_jinja:
- file_mode: 755
- template: jinja
suricatacustomdirsfile:
file.directory:
- name: /nsm/rules/detect-suricata/custom_file
- user: 939
- group: 939
- makedirs: True
suricatacustomdirsurl:
file.directory:
- name: /nsm/rules/detect-suricata/custom_temp
- user: 939
- group: 939
{% else %}
{{sls}}_state_not_allowed:


@@ -1,6 +1,8 @@
{%- from 'vars/globals.map.jinja' import GLOBALS -%}
-{%- from 'idstools/map.jinja' import IDSTOOLSMERGED -%}
+{%- from 'soc/merged.map.jinja' import SOCMERGED -%}
+--suricata-version=6.0
--merged=/opt/so/rules/nids/suri/all.rules
+--output=/nsm/rules/detect-suricata/custom_temp
--local=/opt/so/rules/nids/suri/local.rules
{%- if GLOBALS.md_engine == "SURICATA" %}
--local=/opt/so/rules/nids/suri/extraction.rules
@@ -10,8 +12,12 @@
--disable=/opt/so/idstools/etc/disable.conf
--enable=/opt/so/idstools/etc/enable.conf
--modify=/opt/so/idstools/etc/modify.conf
-{%- if IDSTOOLSMERGED.config.urls | length > 0 %}
-{%- for URL in IDSTOOLSMERGED.config.urls %}
---url={{ URL }}
-{%- endfor %}
-{%- endif %}
+{%- if SOCMERGED.config.server.modules.suricataengine.customRulesets %}
+{%- for ruleset in SOCMERGED.config.server.modules.suricataengine.customRulesets %}
+{%- if 'url' in ruleset %}
+--url={{ ruleset.url }}
+{%- elif 'file' in ruleset %}
+--local={{ ruleset.file }}
+{%- endif %}
+{%- endfor %}
+{%- endif %}


@@ -9,43 +9,53 @@ idstools:
forcedType: string
helpLink: rules.html
ruleset:
-description: 'Defines the ruleset you want to run. Options are ETOPEN or ETPRO. WARNING! Changing the ruleset will remove all existing Suricata rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
+description: 'Defines the ruleset you want to run. Options are ETOPEN or ETPRO. Once you have changed the ruleset here, you will need to wait for the rule update to take place (every 24 hours), or you can force the update by navigating to Detections --> Options dropdown menu --> Suricata --> Full Update. WARNING! Changing the ruleset will remove all existing non-overlapping Suricata rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
global: True
regex: ETPRO\b|ETOPEN\b
helpLink: rules.html
urls:
-description: This is a list of additional rule download locations.
+description: This is a list of additional rule download locations. This feature is currently disabled.
global: True
+multiline: True
+forcedType: "[]string"
+readonly: True
helpLink: rules.html
sids:
disabled:
-description: Contains the list of NIDS rules manually disabled across the grid. To disable a rule, add its Signature ID (SID) to the Current Grid Value box, one entry per line. To disable multiple rules, you can use regular expressions.
+description: Contains the list of NIDS rules (or regex patterns) disabled across the grid. This setting is readonly; use the Detections screen to disable rules.
global: True
multiline: True
forcedType: "[]string"
regex: \d*|re:.*
helpLink: managing-alerts.html
+readonlyUi: True
+advanced: true
enabled:
-description: Contains the list of NIDS rules manually enabled across the grid. To enable a rule, add its Signature ID (SID) to the Current Grid Value box, one entry per line. To enable multiple rules, you can use regular expressions.
+description: Contains the list of NIDS rules (or regex patterns) enabled across the grid. This setting is readonly; use the Detections screen to enable rules.
global: True
multiline: True
forcedType: "[]string"
regex: \d*|re:.*
helpLink: managing-alerts.html
+readonlyUi: True
+advanced: true
modify:
-description: Contains the list of NIDS rules that were modified from their default values. Entries must adhere to the following format - SID "REGEX_SEARCH_TERM" "REGEX_REPLACE_TERM"
+description: Contains the list of NIDS rules (SID "REGEX_SEARCH_TERM" "REGEX_REPLACE_TERM"). This setting is readonly; use the Detections screen to modify rules.
global: True
multiline: True
forcedType: "[]string"
helpLink: managing-alerts.html
+readonlyUi: True
+advanced: true
rules:
local__rules:
-description: Contains the list of custom NIDS rules applied to the grid. To add custom NIDS rules to the grid, enter one rule per line in the Current Grid Value box.
+description: Contains the list of custom NIDS rules applied to the grid. This setting is readonly; use the Detections screen to adjust rules.
file: True
global: True
advanced: True
title: Local Rules
helpLink: local-rules.html
+readonlyUi: True
filters__rules:
description: If you are using Suricata for metadata, then you can set custom filters for that metadata here.
file: True


@@ -26,8 +26,6 @@ if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force
{%- elif IDSTOOLSMERGED.config.ruleset == 'ETPRO' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --etpro={{ IDSTOOLSMERGED.config.oinkcode }}
-{%- elif IDSTOOLSMERGED.config.ruleset == 'TALOS' %}
-docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --url=https://www.snort.org/rules/snortrules-snapshot-2983.tar.gz?oinkcode={{ IDSTOOLSMERGED.config.oinkcode }}
{%- endif %}
{%- endif %}

File diff suppressed because one or more lines are too long


@@ -0,0 +1,91 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set KAFKA_NODES_PILLAR = salt['pillar.get']('kafka:nodes') %}
{% set KAFKA_PASSWORD = salt['pillar.get']('kafka:password') %}
{# Create list of KRaft controllers #}
{% set controllers = [] %}
{# Check for Kafka nodes with controller in process_x_roles #}
{% for node in KAFKA_NODES_PILLAR %}
{% if 'controller' in KAFKA_NODES_PILLAR[node].role %}
{% do controllers.append(KAFKA_NODES_PILLAR[node].nodeid ~ "@" ~ node ~ ":9093") %}
{% endif %}
{% endfor %}
{% set kafka_controller_quorum_voters = ','.join(controllers) %}
{# By default all Kafka eligible nodes are given the role of broker, except for
grid MANAGER (broker,controller) until overridden through SOC UI #}
{% set node_type = salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname + ':role') %}
{# Generate server.properties for 'broker' , 'controller', 'broker,controller' node types
anything above this line is a configuration needed for ALL Kafka nodes #}
{% if node_type == 'broker' %}
{% do KAFKAMERGED.config.broker.update({'advertised_x_listeners': 'BROKER://'+ GLOBALS.node_ip +':9092' }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.broker.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.broker.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{# Nodes with only the 'broker' role need to have the below settings for communicating with controller nodes #}
{% do KAFKAMERGED.config.broker.update({'controller_x_listener_x_names': KAFKAMERGED.config.controller.controller_x_listener_x_names }) %}
{% do KAFKAMERGED.config.broker.update({
'listener_x_security_x_protocol_x_map': KAFKAMERGED.config.broker.listener_x_security_x_protocol_x_map
+ ',' + KAFKAMERGED.config.controller.listener_x_security_x_protocol_x_map })
%}
{% endif %}
{% if node_type == 'controller' %}
{% do KAFKAMERGED.config.controller.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.controller.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.controller.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% endif %}
{# Kafka nodes of this type are not recommended for use outside of development / testing. #}
{% if node_type == 'broker,controller' %}
{% do KAFKAMERGED.config.broker.update({'advertised_x_listeners': 'BROKER://'+ GLOBALS.node_ip +':9092' }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_listener_x_names': KAFKAMERGED.config.controller.controller_x_listener_x_names }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.broker.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.broker.update({'process_x_roles': 'broker,controller' }) %}
{% do KAFKAMERGED.config.broker.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% do KAFKAMERGED.config.broker.update({
'listeners': KAFKAMERGED.config.broker.listeners + ',' + KAFKAMERGED.config.controller.listeners })
%}
{% do KAFKAMERGED.config.broker.update({
'listener_x_security_x_protocol_x_map': KAFKAMERGED.config.broker.listener_x_security_x_protocol_x_map
+ ',' + KAFKAMERGED.config.controller.listener_x_security_x_protocol_x_map })
%}
{% endif %}
{# If a password other than PLACEHOLDER isn't set remove it from the server.properties #}
{% if KAFKAMERGED.config.broker.ssl_x_truststore_x_password == 'PLACEHOLDER' %}
{% do KAFKAMERGED.config.broker.pop('ssl_x_truststore_x_password') %}
{% endif %}
{% if KAFKAMERGED.config.controller.ssl_x_truststore_x_password == 'PLACEHOLDER' %}
{% do KAFKAMERGED.config.controller.pop('ssl_x_truststore_x_password') %}
{% endif %}
{# Client properties stuff #}
{% if KAFKAMERGED.config.client.ssl_x_truststore_x_password == 'PLACEHOLDER' %}
{% do KAFKAMERGED.config.client.pop('ssl_x_truststore_x_password') %}
{% endif %}
{% do KAFKAMERGED.config.client.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% if 'broker' in node_type %}
{% set KAFKACONFIG = KAFKAMERGED.config.broker %}
{% else %}
{% set KAFKACONFIG = KAFKAMERGED.config.controller %}
{% endif %}
{% set KAFKACLIENT = KAFKAMERGED.config.client %}
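The template builds `controller.quorum.voters` as comma-joined `nodeid@host:9093` entries from the pillar. A shell approximation of that Jinja loop, with a hypothetical two-node controller inventory:

```shell
#!/usr/bin/env bash
# Hypothetical controller inventory as "nodeid@host" pairs, mirroring the
# template's loop over kafka:nodes entries that carry the controller role.
controllers=()
for entry in "1@so-manager" "2@so-receiver"; do
  controllers+=("${entry}:9093")   # KRaft controller listener port
done
# Join with commas, the format Kafka expects for controller.quorum.voters.
IFS=,; voters="${controllers[*]}"; unset IFS
echo "$voters"
```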


@@ -7,23 +7,6 @@
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_ips_logstash = [] %}
{% set kafka_ips_kraft = [] %}
{% set kafkanodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set kafka_ip = GLOBALS.node_ip %}
{# Create list for kafka <-> logstash/searchnode communcations #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_logstash.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% set kafka_server_list = "','".join(kafka_ips_logstash) %}
{# Create a list for kraft controller <-> kraft controller communications. Used for Kafka metadata management #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_kraft.append(node_data['nodeid'] ~ "@" ~ node_data['ip'] ~ ":9093") %}
{% endfor %}
{% set kraft_server_list = "','".join(kafka_ips_kraft) %}
include:
- ssl
@@ -37,27 +20,15 @@ kafka:
- uid: 960
- gid: 960
-{# Future tools to query kafka directly / show consumer groups
kafka_sbin_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin
- user: 960
- group: 960
-- file_mode: 755 #}
-kafka_sbin_jinja_tools:
-file.recurse:
-- name: /usr/sbin
-- source: salt://kafka/tools/sbin_jinja
-- user: 960
-- group: 960
-- file_mode: 755
+- file_mode: 755
-- template: jinja
-- defaults:
-GLOBALS: {{ GLOBALS }}
-kakfa_log_dir:
+kafka_log_dir:
file.directory:
- name: /opt/so/log/kafka
- user: 960
@@ -71,20 +42,6 @@ kafka_data_dir:
- group: 960
- makedirs: True
kafka_generate_keystore:
cmd.run:
- name: "/usr/sbin/so-kafka-generate-keystore"
- onchanges:
- x509: /etc/pki/kafka.key
kafka_keystore_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.jks
- mode: 640
- user: 960
- group: 939
{% for sc in ['server', 'client'] %}
kafka_kraft_{{sc}}_properties:
file.managed:
@@ -97,6 +54,12 @@ kafka_kraft_{{sc}}_properties:
- show_changes: False
{% endfor %}
reset_quorum_on_changes:
cmd.run:
- name: rm -f /nsm/kafka/data/__cluster_metadata-0/quorum-state
- onchanges:
- file: /opt/so/conf/kafka/server.properties
{% else %}
{{sls}}_state_not_allowed:


@@ -1,14 +1,18 @@
kafka:
enabled: False
+cluster_id:
+password:
+controllers:
+reset:
config:
-server:
+broker:
advertised_x_listeners:
auto_x_create_x_topics_x_enable: true
-controller_x_listener_x_names: CONTROLLER
controller_x_quorum_x_voters:
-default_x_replication_x_factor: 1
inter_x_broker_x_listener_x_name: BROKER
-listeners: BROKER://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
+listeners: BROKER://0.0.0.0:9092
-listener_x_security_x_protocol_x_map: CONTROLLER:SSL,BROKER:SSL
+listener_x_security_x_protocol_x_map: BROKER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
@@ -16,24 +20,43 @@ kafka:
node_x_id:
num_x_io_x_threads: 8
num_x_network_x_threads: 3
-num_x_partitions: 1
+num_x_partitions: 3
num_x_recovery_x_threads_x_per_x_data_x_dir: 1
offsets_x_topic_x_replication_x_factor: 1
process_x_roles: broker
socket_x_receive_x_buffer_x_bytes: 102400
socket_x_request_x_max_x_bytes: 104857600
socket_x_send_x_buffer_x_bytes: 102400
-ssl_x_keystore_x_location: /etc/pki/kafka.jks
-ssl_x_keystore_x_password: changeit
-ssl_x_keystore_x_type: JKS
+ssl_x_keystore_x_location: /etc/pki/kafka.p12
+ssl_x_keystore_x_type: PKCS12
+ssl_x_keystore_x_password:
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
-ssl_x_truststore_x_password: changeit
+ssl_x_truststore_x_password: PLACEHOLDER
+ssl_x_truststore_x_type: PEM
transaction_x_state_x_log_x_min_x_isr: 1
transaction_x_state_x_log_x_replication_x_factor: 1
client:
security_x_protocol: SSL
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
-ssl_x_truststore_x_password: changeit
+ssl_x_truststore_x_password: PLACEHOLDER
+ssl_x_truststore_x_type: PEM
-ssl_x_keystore_x_location: /etc/pki/kafka.jks
-ssl_x_keystore_x_type: JKS
-ssl_x_keystore_x_password: changeit
+ssl_x_keystore_x_location: /etc/pki/kafka.p12
+ssl_x_keystore_x_type: PKCS12
+ssl_x_keystore_x_password:
controller:
controller_x_listener_x_names: CONTROLLER
controller_x_quorum_x_voters:
listeners: CONTROLLER://0.0.0.0:9093
listener_x_security_x_protocol_x_map: CONTROLLER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
log_x_segment_x_bytes: 1073741824
node_x_id:
process_x_roles: controller
ssl_x_keystore_x_location: /etc/pki/kafka.p12
ssl_x_keystore_x_type: PKCS12
ssl_x_keystore_x_password:
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
ssl_x_truststore_x_password: PLACEHOLDER
ssl_x_truststore_x_type: PEM


@@ -3,8 +3,8 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
-include:
-- kafka.sostatus
+{% from 'kafka/map.jinja' import KAFKAMERGED %}
+{% from 'vars/globals.map.jinja' import GLOBALS %}
so-kafka:
docker_container.absent:
@@ -14,3 +14,12 @@ so-kafka_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$
- onlyif: grep -q '^so-kafka$' /opt/so/conf/so-status/so-status.conf
{% if GLOBALS.is_manager and KAFKAMERGED.enabled or GLOBALS.pipeline == "KAFKA" %}
ensure_default_pipeline:
cmd.run:
- name: |
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False;
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/global/soc_global.sls global.pipeline REDIS
{% endif %}
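The fallback above relies on Security Onion's `so-yaml.py replace` helper to flip pillar keys back to the REDIS defaults. For illustration only, a naive sed stand-in that handles just this trivial two-line shape (the real helper parses the YAML properly, which sed does not):

```shell
#!/usr/bin/env bash
# Toy pillar fragment; the real file lives under /opt/so/saltstack/local/pillar/.
conf='kafka:
  enabled: True'
# Naive substitution -- only safe for this exact indentation and key shape.
result=$(printf '%s\n' "$conf" | sed 's/^\(  enabled: \).*/\1False/')
echo "$result"
```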


@@ -1,13 +1,20 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
-{% set KAFKANODES = salt['pillar.get']('kafka:nodes', {}) %}
+{% set KAFKANODES = salt['pillar.get']('kafka:nodes') %}
{% if 'gmd' in salt['pillar.get']('features', []) %}
include:
- elasticsearch.ca
@@ -25,7 +32,8 @@ so-kafka:
- ipv4_address: {{ DOCKER.containers['so-kafka'].ip }} - ipv4_address: {{ DOCKER.containers['so-kafka'].ip }}
- user: kafka - user: kafka
- environment: - environment:
- KAFKA_HEAP_OPTS=-Xmx2G -Xms1G KAFKA_HEAP_OPTS: -Xmx2G -Xms1G
KAFKA_OPTS: -javaagent:/opt/jolokia/agents/jolokia-agent-jvm-javaagent.jar=port=8778,host={{ DOCKER.containers['so-kafka'].ip }},policyLocation=file:/opt/jolokia/jolokia.xml
- extra_hosts: - extra_hosts:
{% for node in KAFKANODES %} {% for node in KAFKANODES %}
- {{ node }}:{{ KAFKANODES[node].ip }} - {{ node }}:{{ KAFKANODES[node].ip }}
@@ -40,11 +48,12 @@ so-kafka:
- {{ BINDING }} - {{ BINDING }}
{% endfor %} {% endfor %}
- binds: - binds:
- /etc/pki/kafka.jks:/etc/pki/kafka.jks - /etc/pki/kafka.p12:/etc/pki/kafka.p12:ro
- /opt/so/conf/ca/cacerts:/etc/pki/java/sos/cacerts - /etc/pki/tls/certs/intca.crt:/etc/pki/java/sos/cacerts:ro
- /nsm/kafka/data/:/nsm/kafka/data/:rw - /nsm/kafka/data/:/nsm/kafka/data/:rw
- /opt/so/conf/kafka/server.properties:/kafka/config/kraft/server.properties - /opt/so/log/kafka:/opt/kafka/logs/:rw
- /opt/so/conf/kafka/client.properties:/kafka/config/kraft/client.properties - /opt/so/conf/kafka/server.properties:/opt/kafka/config/kraft/server.properties:ro
- /opt/so/conf/kafka/client.properties:/opt/kafka/config/kraft/client.properties
- watch: - watch:
{% for sc in ['server', 'client'] %} {% for sc in ['server', 'client'] %}
- file: kafka_kraft_{{sc}}_properties - file: kafka_kraft_{{sc}}_properties
@@ -55,6 +64,19 @@ delete_so-kafka_so-status.disabled:
- name: /opt/so/conf/so-status/so-status.conf - name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$ - regex: ^so-kafka$
{% else %}
{{sls}}_no_license_detected:
test.fail_without_changes:
- name: {{sls}}_no_license_detected
- comment:
- "Kafka for Guaranteed Message Delivery is a feature supported only for customers with a valid license.
Contact Security Onion Solutions, LLC via our website at https://securityonionsolutions.com
for more information about purchasing a license to enable this feature."
include:
- kafka.disabled
{% endif %}
{% else %} {% else %}
{{sls}}_state_not_allowed: {{sls}}_state_not_allowed:


@@ -3,5 +3,5 @@
https://securityonion.net/license; you may not use this file except in compliance with the https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #} Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED -%} {% from 'kafka/config.map.jinja' import KAFKACLIENT -%}
{{ KAFKAMERGED.config.client | yaml(False) | replace("_x_", ".") }} {{ KAFKACLIENT | yaml(False) | replace("_x_", ".") }}
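Both property templates above render a merged config dict and then translate the `_x_` placeholder back into a dot, since dotted Kafka property names can't be used directly as Salt pillar keys. A small Python sketch of that translation (the sample keys and rendered form are illustrative):

```python
# Kafka property names contain dots (e.g. controller.quorum.voters), which
# would break pillar key lookups, so the config stores them with an "_x_"
# placeholder and the template swaps it back at render time.
config = {
    "advertised_x_listeners": "BROKER://10.0.0.5:9092",
    "controller_x_quorum_x_voters": "1@manager:9093",
}
rendered = "\n".join(f"{k}: {v}" for k, v in config.items()).replace("_x_", ".")
print(rendered)
```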


@@ -3,5 +3,5 @@
https://securityonion.net/license; you may not use this file except in compliance with the https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #} Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED -%} {% from 'kafka/config.map.jinja' import KAFKACONFIG -%}
{{ KAFKAMERGED.config.server | yaml(False) | replace("_x_", ".") }} {{ KAFKACONFIG | yaml(False) | replace("_x_", ".") }}


@@ -0,0 +1,10 @@
kafka:
nodes:
{% for node, values in COMBINED_KAFKANODES.items() %}
{{ node }}:
ip: {{ values['ip'] }}
nodeid: {{ values['nodeid'] }}
{%- if values['role'] != none %}
role: {{ values['role'] }}
{%- endif %}
{% endfor %}


@@ -1,12 +1,22 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one # Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at # or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the # https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0. # Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% from 'kafka/map.jinja' import KAFKAMERGED %} {% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %} {% from 'vars/globals.map.jinja' import GLOBALS %}
include: include:
{# Run kafka/nodes.sls before Kafka is enabled, so kafka nodes pillar is setup #}
{% if grains.role in ['so-manager','so-managersearch', 'so-standalone'] %}
- kafka.nodes
{% endif %}
{% if GLOBALS.pipeline == "KAFKA" and KAFKAMERGED.enabled %} {% if GLOBALS.pipeline == "KAFKA" and KAFKAMERGED.enabled %}
- kafka.enabled - kafka.enabled
{% else %} {% else %}


@@ -3,18 +3,8 @@
https://securityonion.net/license; you may not use this file except in compliance with the https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #} Elastic License 2.0. #}
{# This is only used to determine if Kafka is enabled / disabled. Configuration is found in kafka/config.map.jinja #}
{# kafka/config.map.jinja depends on there being a kafka nodes pillar being populated #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %} {% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% set KAFKAMERGED = salt['pillar.get']('kafka', KAFKADEFAULTS.kafka, merge=True) %} {% set KAFKAMERGED = salt['pillar.get']('kafka', KAFKADEFAULTS.kafka, merge=True) %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% do KAFKAMERGED.config.server.update({ 'node_x_id': salt['pillar.get']('kafka:nodes:' ~ GLOBALS.hostname ~ ':nodeid')}) %}
{% do KAFKAMERGED.config.server.update({'advertised_x_listeners': 'BROKER://' ~ GLOBALS.node_ip ~ ':9092'}) %}
{% set nodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set combined = [] %}
{% for hostname, data in nodes.items() %}
{% do combined.append(data.nodeid ~ "@" ~ hostname ~ ":9093") %}
{% endfor %}
{% set kraft_controller_quorum_voters = ','.join(combined) %}
{% do KAFKAMERGED.config.server.update({'controller_x_quorum_x_voters': kraft_controller_quorum_voters}) %}


@@ -0,0 +1,88 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{# USED TO GENERATE PILLAR/KAFKA/NODES.SLS. #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set process_x_roles = KAFKADEFAULTS.kafka.config.broker.process_x_roles %}
{% set current_kafkanodes = salt.saltutil.runner(
'mine.get',
tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-receiver',
fun='network.ip_addrs',
tgt_type='compound') %}
{% set STORED_KAFKANODES = salt['pillar.get']('kafka:nodes', default=None) %}
{% set KAFKA_CONTROLLERS_PILLAR = salt['pillar.get']('kafka:controllers', default=None) %}
{% set existing_ids = [] %}
{# Check STORED_KAFKANODES for existing kafka nodes and pull their IDs so they are not reused across the grid #}
{% if STORED_KAFKANODES != none %}
{% for node, values in STORED_KAFKANODES.items() %}
{% if values.get('nodeid') %}
{% do existing_ids.append(values['nodeid']) %}
{% endif %}
{% endfor %}
{% endif %}
{# Create list of possible node ids #}
{% set all_possible_ids = range(1, 2000)|list %}
{# Create list of available node ids by looping through all_possible_ids and ensuring it isn't in existing_ids #}
{% set available_ids = [] %}
{% for id in all_possible_ids %}
{% if id not in existing_ids %}
{% do available_ids.append(id) %}
{% endif %}
{% endfor %}
{# Collect kafka eligible nodes and check if they're already in STORED_KAFKANODES to avoid potentially reassigning a nodeid #}
{% set NEW_KAFKANODES = {} %}
{% for minionid, ip in current_kafkanodes.items() %}
{% set hostname = minionid.split('_')[0] %}
{% if not STORED_KAFKANODES or hostname not in STORED_KAFKANODES %}
{% set new_id = available_ids.pop(0) %}
{% do NEW_KAFKANODES.update({hostname: {'nodeid': new_id, 'ip': ip[0], 'role': process_x_roles }}) %}
{% endif %}
{% endfor %}
{# Combine STORED_KAFKANODES and NEW_KAFKANODES for writing to the pillar/kafka/nodes.sls #}
{% set COMBINED_KAFKANODES = {} %}
{% for node, details in NEW_KAFKANODES.items() %}
{% do COMBINED_KAFKANODES.update({node: details}) %}
{% endfor %}
{% if STORED_KAFKANODES != none %}
{% for node, details in STORED_KAFKANODES.items() %}
{% do COMBINED_KAFKANODES.update({node: details}) %}
{% endfor %}
{% endif %}
{# Update the process_x_roles value for any host in the kafka_controllers_pillar configured from SOC UI #}
{% set ns = namespace(has_controller=false) %}
{% if KAFKA_CONTROLLERS_PILLAR != none %}
{% set KAFKA_CONTROLLERS_PILLAR_LIST = KAFKA_CONTROLLERS_PILLAR.split(',') %}
{% for hostname in KAFKA_CONTROLLERS_PILLAR_LIST %}
{% if hostname in COMBINED_KAFKANODES %}
{% do COMBINED_KAFKANODES[hostname].update({'role': 'controller'}) %}
{% set ns.has_controller = true %}
{% endif %}
{% endfor %}
{% for hostname in COMBINED_KAFKANODES %}
{% if hostname not in KAFKA_CONTROLLERS_PILLAR_LIST %}
{% do COMBINED_KAFKANODES[hostname].update({'role': 'broker'}) %}
{% endif %}
{% endfor %}
{# If the kafka_controllers_pillar is NOT empty, check that at least one node contains the controller role.
Otherwise default to GLOBALS.manager having the broker,controller role #}
{% if not ns.has_controller %}
{% do COMBINED_KAFKANODES[GLOBALS.manager].update({'role': 'broker,controller'}) %}
{% endif %}
{# If kafka_controllers_pillar is empty, default to the grid manager being 'broker,controller'
so there is always at least one controller in the cluster #}
{% else %}
{% do COMBINED_KAFKANODES[GLOBALS.manager].update({'role': 'broker,controller'}) %}
{% endif %}
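The node-ID bookkeeping above boils down to: collect the IDs already stored in the pillar, never reuse them, and hand the lowest free ID (1-1999) to each newly discovered host. A Python sketch of that logic (function and variable names are illustrative, not the actual template):

```python
# Sketch of nodes.map.jinja's ID assignment: stored nodes keep their IDs,
# new hosts get the lowest ID not already used anywhere in the grid.
def assign_node_ids(stored, discovered):
    existing_ids = {v["nodeid"] for v in stored.values()}
    available = (i for i in range(1, 2000) if i not in existing_ids)
    combined = dict(stored)  # stored entries win, so IDs are never reassigned
    for host, ip in discovered.items():
        if host not in combined:
            combined[host] = {"nodeid": next(available), "ip": ip}
    return combined

stored = {"manager": {"nodeid": 1, "ip": "10.0.0.5"}}
nodes = assign_node_ids(stored, {"manager": "10.0.0.5", "receiver1": "10.0.0.9"})
```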

salt/kafka/nodes.sls Normal file

@@ -0,0 +1,18 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'kafka/nodes.map.jinja' import COMBINED_KAFKANODES %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id', default=None) %}
{# Write Kafka pillar, so all grid members have access to nodeid of other kafka nodes and their roles #}
write_kafka_pillar_yaml:
file.managed:
- name: /opt/so/saltstack/local/pillar/kafka/nodes.sls
- mode: 644
- user: socore
- source: salt://kafka/files/managed_node_pillar.jinja
- template: jinja
- context:
COMBINED_KAFKANODES: {{ COMBINED_KAFKANODES }}

salt/kafka/reset.sls Normal file

@@ -0,0 +1,9 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
wipe_kafka_data:
file.absent:
- name: /nsm/kafka/data/
- force: True


@@ -1,6 +1,6 @@
kafka: kafka:
enabled: enabled:
description: Enable or disable Kafka. description: Set to True to enable Kafka. To avoid grid problems, do not enable Kafka until the related configuration is in place. Requires a valid Security Onion license key.
helpLink: kafka.html helpLink: kafka.html
cluster_id: cluster_id:
description: The ID of the Kafka cluster. description: The ID of the Kafka cluster.
@@ -8,8 +8,20 @@ kafka:
advanced: True advanced: True
sensitive: True sensitive: True
helpLink: kafka.html helpLink: kafka.html
password:
description: The password to use for the Kafka certificates.
sensitive: True
helpLink: kafka.html
controllers:
description: A comma-separated list of hostnames that will act as Kafka controllers. These hosts will be responsible for managing the Kafka cluster. Note that only manager and receiver nodes are eligible to run Kafka. This configuration needs to be set before enabling Kafka; failure to do so may result in Kafka topics becoming unavailable, requiring manual intervention to restore functionality or a reset of Kafka, either of which can result in data loss.
forcedType: "string"
helpLink: kafka.html
reset:
description: Disable and reset the Kafka cluster. This removes all Kafka data, including logs that may not yet have been ingested into Elasticsearch, and reverts the grid to using REDIS as the global pipeline. This is useful when testing different Kafka configurations, such as rearranging Kafka brokers and controllers; rather than manually fixing issues that arise from attempting to reassign a Kafka broker as a controller, you can reset the cluster. Enter 'YES_RESET_KAFKA' and submit to disable and reset Kafka. Make any configuration changes required and re-enable Kafka when ready. This action CANNOT be reversed.
advanced: True
helpLink: kafka.html
config: config:
server: broker:
advertised_x_listeners: advertised_x_listeners:
description: Specify the list of listeners (hostname and port) that Kafka brokers provide to clients for communication. description: Specify the list of listeners (hostname and port) that Kafka brokers provide to clients for communication.
title: advertised.listeners title: advertised.listeners
@@ -19,13 +31,10 @@ kafka:
title: auto.create.topics.enable title: auto.create.topics.enable
forcedType: bool forcedType: bool
helpLink: kafka.html helpLink: kafka.html
controller_x_listener_x_names: default_x_replication_x_factor:
description: Set listeners used by the controller in a comma-seperated list. description: The default replication factor for automatically created topics. This value must be less than the number of brokers in the cluster. Hosts specified in controllers should not be counted toward the total broker count.
title: controller.listener.names title: default.replication.factor
helpLink: kafka.html forcedType: int
controller_x_quorum_x_voters:
description: A comma-seperated list of ID and endpoint information mapped for a set of voters.
title: controller.quorum.voters
helpLink: kafka.html helpLink: kafka.html
inter_x_broker_x_listener_x_name: inter_x_broker_x_listener_x_name:
description: The name of the listener used for inter-broker communication. description: The name of the listener used for inter-broker communication.
@@ -56,12 +65,6 @@ kafka:
title: log.segment.bytes title: log.segment.bytes
forcedType: int forcedType: int
helpLink: kafka.html helpLink: kafka.html
node_x_id:
description: The node ID corresponds to the roles performed by this process whenever process.roles is populated.
title: node.id
forcedType: int
readonly: True
helpLink: kafka.html
num_x_io_x_threads: num_x_io_x_threads:
description: The number of threads used by Kafka. description: The number of threads used by Kafka.
title: num.io.threads title: num.io.threads
@@ -88,8 +91,9 @@ kafka:
forcedType: int forcedType: int
helpLink: kafka.html helpLink: kafka.html
process_x_roles: process_x_roles:
description: The roles the process performs. Use a comma-seperated list is multiple. description: The role performed by Kafka brokers.
title: process.roles title: process.roles
readonly: True
helpLink: kafka.html helpLink: kafka.html
socket_x_receive_x_buffer_x_bytes: socket_x_receive_x_buffer_x_bytes:
description: Size, in bytes of the SO_RCVBUF buffer. A value of -1 will use the OS default. description: Size, in bytes of the SO_RCVBUF buffer. A value of -1 will use the OS default.
@@ -168,3 +172,38 @@ kafka:
title: ssl.truststore.password title: ssl.truststore.password
sensitive: True sensitive: True
helpLink: kafka.html helpLink: kafka.html
controller:
controller_x_listener_x_names:
description: Set listeners used by the controller in a comma-separated list.
title: controller.listener.names
helpLink: kafka.html
listeners:
description: Comma-separated list of the URIs to listen on and their listener names.
helpLink: kafka.html
listener_x_security_x_protocol_x_map:
description: Comma-separated mapping of listener names to security protocols.
title: listener.security.protocol.map
helpLink: kafka.html
log_x_dirs:
description: Where Kafka logs are stored within the Docker container.
title: log.dirs
helpLink: kafka.html
log_x_retention_x_check_x_interval_x_ms:
description: The frequency at which log files are checked to determine whether they qualify for deletion.
title: log.retention.check.interval.ms
helpLink: kafka.html
log_x_retention_x_hours:
description: How long, in hours, a log file is kept.
title: log.retention.hours
forcedType: int
helpLink: kafka.html
log_x_segment_x_bytes:
description: The maximum allowable size for a log file.
title: log.segment.bytes
forcedType: int
helpLink: kafka.html
process_x_roles:
description: The role performed by the controller node.
title: process.roles
readonly: True
helpLink: kafka.html


@@ -6,22 +6,14 @@
{% from 'allowed_states.map.jinja' import allowed_states %} {% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %} {% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %} {% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id', default=None) %} {% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id') %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
{% if kafka_cluster_id is none %}
generate_kafka_cluster_id:
cmd.run:
- name: /usr/sbin/so-kafka-clusterid
{% endif %}
{% endif %}
{# Initialize kafka storage if it doesn't already exist. Just looking for meta.properties in /nsm/kafka/data #} {# Initialize kafka storage if it doesn't already exist. Just looking for meta.properties in /nsm/kafka/data #}
{% if not salt['file.file_exists']('/nsm/kafka/data/meta.properties') %} {% if not salt['file.file_exists']('/nsm/kafka/data/meta.properties') %}
kafka_storage_init: kafka_storage_init:
cmd.run: cmd.run:
- name: | - name: |
docker run -v /nsm/kafka/data:/nsm/kafka/data -v /opt/so/conf/kafka/server.properties:/kafka/config/kraft/newserver.properties --name so-kafkainit --user root --entrypoint /kafka/bin/kafka-storage.sh {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} format -t {{ kafka_cluster_id }} -c /kafka/config/kraft/newserver.properties docker run -v /nsm/kafka/data:/nsm/kafka/data -v /opt/so/conf/kafka/server.properties:/opt/kafka/config/kraft/newserver.properties --name so-kafkainit --user root --entrypoint /opt/kafka/bin/kafka-storage.sh {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} format -t {{ kafka_cluster_id }} -c /opt/kafka/config/kraft/newserver.properties
kafka_rm_kafkainit: kafka_rm_kafkainit:
cmd.run: cmd.run:
- name: | - name: |


@@ -0,0 +1,47 @@
#! /bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
if [ -z "$NOROOT" ]; then
# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
fi
fi
function usage() {
echo -e "\nUsage: $0 <script> [options]"
echo ""
echo "Available scripts:"
show_available_kafka_cli_tools
}
function show_available_kafka_cli_tools(){
docker exec so-kafka ls /opt/kafka/bin | grep kafka
}
if [ -z "$1" ]; then
usage
exit 1
fi
available_tools=$(show_available_kafka_cli_tools)
script_exists=false
for script in $available_tools; do
if [ "$script" == "$1" ]; then
script_exists=true
break
fi
done
if [ "$script_exists" == true ]; then
docker exec so-kafka "/opt/kafka/bin/$1" "${@:2}"
else
echo -e "\nInvalid script: $1"
usage
exit 1
fi
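so-kafka-cli is a thin allow-list wrapper: the requested tool must match a name actually listed in the container's bin directory before anything is executed. The gatekeeping can be sketched as (the function name and sample tool set are illustrative):

```python
# Sketch of the allow-list check in so-kafka-cli: build the docker exec
# command only when the requested tool is in the published set; anything
# else is rejected rather than passed through to the container.
def build_command(tool, args, available_tools):
    if tool not in available_tools:
        raise ValueError(f"Invalid script: {tool}")
    return ["docker", "exec", "so-kafka", f"/opt/kafka/bin/{tool}", *args]

cmd = build_command("kafka-topics.sh", ["--list"], {"kafka-topics.sh", "kafka-configs.sh"})
```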


@@ -0,0 +1,87 @@
#! /bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
if [ -z "$NOROOT" ]; then
# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
fi
fi
usage() {
cat <<USAGE_EOF
Usage: $0 <operation> [parameters]
Where <operation> is one of the following:
topic-partitions: Increase the number of partitions for a Kafka topic
Required arguments: topic-partitions <topic name> <# partitions>
Example: $0 topic-partitions suricata-topic 6
list-topics: List of Kafka topics
Example: $0 list-topics
USAGE_EOF
exit 1
}
if [[ $# -lt 1 || $1 == --help || $1 == -h ]]; then
usage
fi
kafka_client_config="/opt/kafka/config/kraft/client.properties"
too_few_arguments() {
echo -e "\nMissing one or more required arguments!\n"
usage
}
get_kafka_brokers() {
brokers_cache="/opt/so/state/kafka_brokers"
broker_port="9092"
if [[ ! -f "$brokers_cache" ]] || [[ $(find "$brokers_cache" -mmin +120) ]]; then
echo "Refreshing Kafka brokers list"
salt-call pillar.get kafka:nodes --out=json | jq -r --arg broker_port "$broker_port" '.local | to_entries[] | select(.value.role | contains("broker")) | "\(.value.ip):\($broker_port)"' | paste -sd "," - > "$brokers_cache"
else
echo "Using cached Kafka brokers list"
fi
brokers=$(cat "$brokers_cache")
}
increase_topic_partitions() {
get_kafka_brokers
if so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --alter --topic $topic --partitions $partition_count; then
echo -e "Successfully increased the number of partitions for topic $topic to $partition_count\n"
so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --describe --topic $topic
fi
}
get_kafka_topics_list() {
get_kafka_brokers
so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --exclude-internal --list | sort
}
operation=$1
case "${operation}" in
"topic-partitions")
if [[ $# -lt 3 ]]; then
too_few_arguments
fi
topic=$2
partition_count=$3
increase_topic_partitions
;;
"list-topics")
get_kafka_topics_list
;;
*)
usage
;;
esac
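The jq pipeline in get_kafka_brokers() filters the kafka:nodes pillar down to hosts whose role contains "broker" and joins their ip:9092 pairs with commas. An equivalent Python sketch (the sample node data is illustrative):

```python
# Python equivalent of the jq filter: keep nodes whose role contains
# "broker" (so a "broker,controller" node qualifies, a pure controller
# does not) and emit a comma-separated bootstrap-server list.
def broker_bootstrap(nodes, port=9092):
    return ",".join(
        f"{v['ip']}:{port}"
        for v in nodes.values()
        if "broker" in v.get("role", "")
    )

nodes = {
    "manager": {"ip": "10.0.0.5", "role": "broker,controller"},
    "ctl1": {"ip": "10.0.0.7", "role": "controller"},
    "rec1": {"ip": "10.0.0.9", "role": "broker"},
}
bootstrap = broker_bootstrap(nodes)
```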


@@ -1,13 +0,0 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
# Generate a new keystore
docker run -v /etc/pki/kafka.p12:/etc/pki/kafka.p12 --name so-kafka-keystore --user root --entrypoint keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} -importkeystore -srckeystore /etc/pki/kafka.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore /etc/pki/kafka.jks -deststoretype JKS -deststorepass changeit -noprompt
docker cp so-kafka-keystore:/etc/pki/kafka.jks /etc/pki/kafka.jks
docker rm so-kafka-keystore


@@ -37,6 +37,7 @@ logstash:
- so/0900_input_redis.conf.jinja - so/0900_input_redis.conf.jinja
- so/9805_output_elastic_agent.conf.jinja - so/9805_output_elastic_agent.conf.jinja
- so/9900_output_endgame.conf.jinja - so/9900_output_endgame.conf.jinja
- so/0800_input_kafka.conf.jinja
custom0: [] custom0: []
custom1: [] custom1: []
custom2: [] custom2: []


@@ -0,0 +1,20 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
so-logstash_image:
docker_image.present:
- name: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-logstash:{{ GLOBALS.so_version }}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

Some files were not shown because too many files have changed in this diff Show More