Compare commits

...

563 Commits

Author SHA1 Message Date
Mike Reeves
a9f2dfc4b8 Merge pull request #13576 from Security-Onion-Solutions/2.4/dev
2.4.100
2024-08-29 16:18:20 -04:00
Mike Reeves
b7e047d149 Merge pull request #13575 from Security-Onion-Solutions/2.4.100
2.4.100
2024-08-29 15:46:15 -04:00
Mike Reeves
f69137b38d 2.4.100 2024-08-29 15:43:42 -04:00
Josh Brower
9746f6e5e2 Merge pull request #13570 from Security-Onion-Solutions/2.4/ignore-logstash-err
Exclude logstash startup errors
2024-08-28 16:51:35 -04:00
DefensiveDepth
89a1e2500e Exclude logstash startup errors 2024-08-28 16:50:11 -04:00
Jason Ertel
394ce29ea3 Merge pull request #13565 from Security-Onion-Solutions/jertel/an2
move custom alerters to subgroup; avoid false positives on log check
2024-08-28 09:39:44 -04:00
Jason Ertel
f19a35ff06 move custom alerters to subgroup; avoid false positives on log check 2024-08-28 09:32:25 -04:00
weslambert
8943e88ca8 Merge pull request #13562 from Security-Onion-Solutions/fix/evtx_pipelines
Update pipeline version for EVTX
2024-08-27 13:12:10 -04:00
Jason Ertel
18774aa0a7 Merge pull request #13561 from Security-Onion-Solutions/jertel/an2
annotation updates
2024-08-27 13:09:20 -04:00
weslambert
af80a78406 Update pipeline version 2024-08-27 13:08:35 -04:00
Jason Ertel
6043da4424 annotation updates 2024-08-27 13:04:43 -04:00
Josh Brower
75086bac7f Merge pull request #13556 from Security-Onion-Solutions/2.4/fixpolicyload
Fix policy load
2024-08-26 16:49:54 -04:00
DefensiveDepth
726df310ee Add context 2024-08-26 16:15:56 -04:00
DefensiveDepth
b952728b2c Fix policy load 2024-08-26 15:57:21 -04:00
weslambert
1cac2ff1d4 Merge pull request #13554 from Security-Onion-Solutions/fix/ilm_soc_logs
FIX: Add so-soc-logs
2024-08-26 12:54:03 -04:00
weslambert
a93c77a1cc Merge pull request #13548 from Security-Onion-Solutions/fix/global_custom
Use global@custom from common pipeline
2024-08-26 10:42:12 -04:00
weslambert
dd09f5b153 Add so-soc-logs 2024-08-26 10:32:27 -04:00
Josh Brower
29f996de66 Merge pull request #13547 from Security-Onion-Solutions/2.4/soupchanges
Elastic Fleet refactoring
2024-08-23 13:56:05 -04:00
DefensiveDepth
c575e02fbb Use correct name 2024-08-23 13:52:20 -04:00
weslambert
e96a0108c3 Add global@custom 2024-08-23 13:05:34 -04:00
DefensiveDepth
e86fce692c Merge remote-tracking branch 'origin/2.4/dev' into 2.4/soupchanges 2024-08-23 11:44:39 -04:00
DefensiveDepth
8d35c7c139 Merge branch '2.4/soupchanges' of https://github.com/Security-Onion-Solutions/securityonion into 2.4/soupchanges 2024-08-23 11:37:16 -04:00
DefensiveDepth
0a5725a62e Refactor for Elastic Upgrade 2024-08-23 11:36:47 -04:00
Jorge Reyes
1c6f5126db Merge pull request #13546 from Security-Onion-Solutions/reyesj2/kfano
set kafka.id in common ingest pipeline
2024-08-23 09:50:08 -04:00
reyesj2
1ec5e3bf2a add kafka.id to common ingest pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-08-23 09:47:21 -04:00
Jason Ertel
d29727c869 Merge pull request #13540 from Security-Onion-Solutions/jertel/an2
exclude all logstash errors related to license manager init log line
2024-08-22 18:17:23 -04:00
Jason Ertel
eabb894580 exclude all logstash errors related to license manager init log line 2024-08-22 17:52:37 -04:00
weslambert
96339f0de6 Merge pull request #13537 from Security-Onion-Solutions/fix/elastic_template_check
FIX: Check Elasticsearch for endpoint component template before loading templates
2024-08-22 10:46:49 -04:00
weslambert
d7e3e134a5 Check Elasticsearch for template 2024-08-22 10:33:13 -04:00
Jason Ertel
dfb0ff7a98 Merge pull request #13535 from Security-Onion-Solutions/jertel/an2
notification updates
2024-08-22 09:19:43 -04:00
Jason Ertel
48f1e24bf5 notification updates 2024-08-22 09:04:43 -04:00
Jason Ertel
cf47508185 notification updates 2024-08-22 09:02:32 -04:00
weslambert
2a024039bf Merge pull request #13528 from Security-Onion-Solutions/fix/detections_alerts_ilm
Create detections.alerts ILM policy with corresponding name
2024-08-21 14:50:10 -04:00
weslambert
212cc478de Change back to so 2024-08-21 14:39:24 -04:00
weslambert
88ea60df2a Fix name 2024-08-21 14:38:57 -04:00
weslambert
c1b7232a88 Fix for detections-alerts 2024-08-21 14:38:29 -04:00
Mike Reeves
04577a48be Merge pull request #13530 from Security-Onion-Solutions/raidtools 2024-08-21 14:33:40 -04:00
weslambert
18ef37a2d0 Merge pull request #13531 from Security-Onion-Solutions/fix/elastic_templates_fleet_package_check
Check for endpoint package
2024-08-21 14:28:12 -04:00
weslambert
4108e67178 Check for endpoint package 2024-08-21 14:22:28 -04:00
Mike Reeves
ff479de7bd Add support for new appliance raid controllers 2024-08-21 14:10:24 -04:00
weslambert
4afac201b9 Change ILM policy name 2024-08-21 13:25:26 -04:00
weslambert
c30537fe6a Ensure endpoint is installed 2024-08-21 13:00:04 -04:00
weslambert
1ed73b6f8e Merge pull request #13526 from Security-Onion-Solutions/feature/tenable_io
Add Tenable IO
2024-08-21 09:03:33 -04:00
DefensiveDepth
f01825166d Update Fleet Server policy 2024-08-21 08:31:37 -04:00
DefensiveDepth
07f8bda27e Update agent 2024-08-20 15:23:31 -04:00
DefensiveDepth
e3ecc9d4be Directly manage the Fleet Server integration config 2024-08-20 15:06:16 -04:00
DefensiveDepth
ca209ed54c Disable auto-upgrade 2024-08-20 09:14:08 -04:00
DefensiveDepth
df6ff027b5 Remove unneeded elastic upgrade config 2024-08-19 16:05:27 -04:00
weslambert
e772497e12 Merge pull request #13511 from Security-Onion-Solutions/fix/logcheck_unprovisioned
Ignore older SOC logs before licenseStatus field
2024-08-16 14:48:56 -04:00
weslambert
205bbd9c61 Use more specific match 2024-08-16 14:31:11 -04:00
weslambert
224bc6b429 Ignore old SOC logs before licenseStatus 2024-08-16 14:15:10 -04:00
weslambert
dc197f6a5c Add tenable settings 2024-08-15 23:06:53 -04:00
weslambert
f182833a8d Add tenable_io 2024-08-15 23:03:32 -04:00
weslambert
61ab1f1ef2 Add tenable_io templates 2024-08-15 23:03:07 -04:00
Josh Brower
dea582f24a Merge pull request #13487 from Security-Onion-Solutions/2.4/logcheck
Add influxdb known error
2024-08-15 11:57:59 -04:00
DefensiveDepth
b860bf753a Add influxdb known error 2024-08-15 11:50:34 -04:00
Mike Reeves
b5690f6879 Merge pull request #13483 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update registry version
2024-08-15 09:36:30 -04:00
Mike Reeves
a39ad55578 Update registry version 2024-08-15 09:34:20 -04:00
weslambert
4c276d1211 Merge pull request #13482 from Security-Onion-Solutions/fix/cluster_space_total_field
Update column number because of changes to API
2024-08-15 08:29:39 -04:00
weslambert
5f74b1b730 Update column number because of changes to API 2024-08-15 08:26:56 -04:00
Doug Burks
b9040eb0de Merge pull request #13481 from Security-Onion-Solutions/dougburks-patch-1
Update so-elasticsearch-cluster-space-used for changes in _cat/alloca…
2024-08-15 08:20:09 -04:00
Doug Burks
ab63d5dbdb Update so-elasticsearch-cluster-space-used for changes in _cat/allocation API 2024-08-15 08:01:22 -04:00
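The two entries above adjust so-elasticsearch-cluster-space-used because the column layout of the Elasticsearch _cat/allocation API changed. As a hedged illustration only (the URL, credentials, and field selection are placeholders, not what the script actually does), requesting named columns in JSON form avoids depending on column position:

```python
# Illustrative only: read disk allocation by named column with JSON output so the
# parsing does not depend on column position. Host/credentials are placeholders,
# not Security Onion's actual configuration.
import requests

resp = requests.get(
    "https://localhost:9200/_cat/allocation",
    params={"h": "node,disk.used,disk.total,disk.percent", "format": "json"},
    auth=("elastic", "changeme"),
    verify=False,  # illustration only
)
for row in resp.json():
    print(row["node"], row["disk.used"], row["disk.total"], row["disk.percent"])
```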
Josh Patterson
f233f13637 Merge pull request #13478 from Security-Onion-Solutions/fixsurivars
handle suricata network and port vars as string or list
2024-08-13 15:52:11 -04:00
m0duspwnens
c8a8236401 handle suricata network and port vars as string or list 2024-08-13 15:44:08 -04:00
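The commit above lets the Suricata state accept network and port vars given either as a single string or as a list. A minimal sketch of that normalization idea in plain Python (not the actual Jinja/Salt code; values are made-up examples):

```python
# Illustrative normalization: accept either "any" or ["10.0.0.0/8", "!$HOME_NET"]
# and always hand a list of strings to whatever renders suricata.yaml.
def as_list(value):
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    return [str(v) for v in value]

print(as_list("any"))                         # ['any']
print(as_list(["10.0.0.0/8", "!$HOME_NET"]))  # ['10.0.0.0/8', '!$HOME_NET']
```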
Doug Burks
f5603b1274 Merge pull request #13473 from Security-Onion-Solutions/dougburks-patch-1
Update SECURITY.md
2024-08-13 08:50:03 -04:00
Doug Burks
1d27fcc50e Update SECURITY.md 2024-08-13 08:48:49 -04:00
Jason Ertel
dd2926201d Merge pull request #13470 from Security-Onion-Solutions/jertel/chgpw
fix issue with reset pw and mfa
2024-08-12 17:29:50 -04:00
Jason Ertel
ebcef8adbd fix issue with reset pw and mfa 2024-08-12 13:35:06 -04:00
Doug Burks
ff14217d38 Merge pull request #13467 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add warning to soup about ssh #13466
2024-08-12 09:23:28 -04:00
Doug Burks
46596f01fa FEATURE: Add warning to soup about ssh #13466 2024-08-12 09:18:29 -04:00
Doug Burks
c1388a68f0 FEATURE: Add warning to soup about ssh #13466 2024-08-12 09:12:49 -04:00
Jason Ertel
374da11037 Merge pull request #13457 from Security-Onion-Solutions/jerte/fixrepos
fix repo path
2024-08-09 07:01:00 -04:00
Jason Ertel
caa8d9ecb0 fix repo path 2024-08-09 06:58:40 -04:00
coreyogburn
02c7de6b1a Merge pull request #13453 from Security-Onion-Solutions/cogburn/ai-summaries
Cogburn/ai summaries
2024-08-08 14:55:11 -06:00
Corey Ogburn
c71b9f6e8f Fix CopyPasta
Strelka annotations referenced ElastAlert. Fixed.
2024-08-08 13:31:08 -06:00
Corey Ogburn
8c1feccbe0 Tweak value 2024-08-08 12:53:51 -06:00
Corey Ogburn
5ee15c8b41 Tweak value 2024-08-08 12:00:07 -06:00
Corey Ogburn
5328f55322 Remove new config value 2024-08-08 11:43:15 -06:00
Corey Ogburn
712f904c43 Config for Repo Folder
The folder we check out the AI Summary repo into should definitely exist.
2024-08-08 10:57:07 -06:00
Corey Ogburn
ccd7d86302 More AI Summaries Config/Annotations
Added aiRepoBranch to all 3 detection engines.

Added showUnreviewedAiSummaries to client parameters.

Added annotations.
2024-08-08 10:46:41 -06:00
Corey Ogburn
fc89604982 New Config Values/Annotations for Ai Summaries
Each engine pulls the same repo into the same location and shows the summaries.

Which repo to pull and where to keep it are advanced settings, but turning AI summaries on or off is not.
2024-08-06 13:55:54 -06:00
Jorge Reyes
09f7329a21 Merge pull request #13443 from Security-Onion-Solutions/reyesj2/kfano
correct firewall annotation for kafka
2024-08-06 15:29:02 -04:00
reyesj2
cfd6676583 update kafka firewall annotations config
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-08-06 14:40:53 -04:00
Josh Patterson
3713ee9d93 Merge pull request #13441 from Security-Onion-Solutions/issue/13438
Issue/13438
2024-08-06 10:43:23 -04:00
m0duspwnens
009c8d55c3 unhold all versions for upgrade 2024-08-06 09:26:58 -04:00
m0duspwnens
c0c01f0d17 lock and unlock salt in soup 2024-08-05 16:50:19 -04:00
m0duspwnens
2fe5dccbb4 fix hold/unhold 2024-08-05 15:25:28 -04:00
m0duspwnens
c83a143eef apply holds to salt each state run 2024-08-05 15:13:07 -04:00
Jason Ertel
56ef2a4e1c Merge pull request #13430 from Security-Onion-Solutions/jertel/retryreposync
retry up to 5 times if reposync fails
2024-08-02 14:59:27 -04:00
Jason Ertel
c36e8abc19 retry up to 5 times if reposync fails 2024-08-02 14:52:08 -04:00
Jason Ertel
e76293acdb Merge pull request #13429 from Security-Onion-Solutions/jertel/retryreposync
retry up to 5 times if reposync fails
2024-08-02 14:19:30 -04:00
Jason Ertel
5bdb4ed51b retry up to 5 times if reposync fails 2024-08-02 14:17:14 -04:00
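The reposync entries above describe a bounded retry ("retry up to 5 times if reposync fails"). A hedged sketch of that pattern, with the command name and delay as placeholders rather than the actual soup code:

```python
# Illustrative bounded retry: run a sync command up to 5 times before giving up.
# "so-repo-sync" and the 10-second delay are placeholders for illustration.
import subprocess
import sys
import time

for attempt in range(1, 6):
    result = subprocess.run(["so-repo-sync"])
    if result.returncode == 0:
        break
    print(f"reposync attempt {attempt} failed, retrying...", file=sys.stderr)
    time.sleep(10)
else:
    sys.exit("reposync failed after 5 attempts")
```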
Josh Patterson
aaf5d76071 Merge pull request #13425 from Security-Onion-Solutions/salt3006.9
Salt3006.9
2024-08-02 13:37:07 -04:00
m0duspwnens
d9a696a411 run state from local 2024-08-01 14:02:21 -04:00
m0duspwnens
76ab4c92f0 use salt to install py modules during setup 2024-08-01 13:37:22 -04:00
m0duspwnens
60beaf51bc fail hard if docker py module upgrade fails 2024-08-01 12:32:24 -04:00
m0duspwnens
9ab17ff79c change dir name 2024-08-01 11:23:34 -04:00
m0duspwnens
1a363790a0 upgrade docker python module 2024-08-01 11:20:08 -04:00
m0duspwnens
d488bb6393 upgrade to salt 3006.9 2024-08-01 08:49:03 -04:00
weslambert
114ad779b4 Merge pull request #13418 from Security-Onion-Solutions/fix/system_mapping
Change name for system component
2024-07-31 16:27:32 -04:00
weslambert
49d2ac2b13 Change name for system component 2024-07-31 16:17:57 -04:00
weslambert
9a2252ed3f Merge pull request #13414 from Security-Onion-Solutions/fix/system_mapping
Fix system mapping
2024-07-31 14:26:50 -04:00
Wes
9264a03dbc Add custom system component 2024-07-31 17:03:26 +00:00
Wes
fb2a42a9af Use custom system component 2024-07-31 17:02:45 +00:00
weslambert
63531cdbb6 Merge pull request #13410 from Security-Onion-Solutions/fix/elastic_agent_pipeline_version
Change agent pipeline version
2024-07-30 17:00:15 -04:00
weslambert
bae348bef7 Change version 2024-07-30 16:44:44 -04:00
weslambert
bd223d8643 Merge pull request #13409 from Security-Onion-Solutions/fix/elastic_fleet_defender
Fix defender winlog name change
2024-07-30 15:47:45 -04:00
weslambert
3fa6c72620 Fix name change 2024-07-30 15:45:55 -04:00
weslambert
2b90bdc86a Merge pull request #13408 from Security-Onion-Solutions/fix/fleet_setup
Fix fleet setup
2024-07-30 14:49:29 -04:00
weslambert
6831b72804 Fix fleet setup 2024-07-30 14:46:00 -04:00
weslambert
5e12b928d9 Merge pull request #13407 from Security-Onion-Solutions/fix/merge_revert
Add removed changes
2024-07-30 13:04:28 -04:00
weslambert
0453f51e64 Actually ignore missing templates 2024-07-30 12:54:07 -04:00
weslambert
9594e4115c Elastic 8.14.3 2024-07-30 12:47:56 -04:00
weslambert
201e14f287 Elastic 8.14.3 2024-07-30 12:46:42 -04:00
weslambert
d833bd0d55 Elastic 8.14.3 2024-07-30 12:45:25 -04:00
weslambert
46eeb014af Add metrics settings 2024-07-30 12:39:50 -04:00
weslambert
8e7a2cf353 Ignore missing templates 2024-07-30 12:38:29 -04:00
Jason Ertel
2c528811cc Merge pull request #13406 from Security-Onion-Solutions/jertel/force
Provide new setting to require OTP
2024-07-30 10:42:11 -04:00
Jason Ertel
3130b56d58 Provide new setting to require OTP 2024-07-30 10:39:57 -04:00
weslambert
b466d83625 Merge pull request #13402 from Security-Onion-Solutions/foxtrot
Elastic 8.14.3
2024-07-30 09:28:19 -04:00
weslambert
6d008546f1 Fix pre and add post for 2.4.100 2024-07-30 09:26:46 -04:00
weslambert
c60b14e2e7 Merge branch '2.4/dev' into foxtrot 2024-07-30 08:52:48 -04:00
weslambert
c753a7cffa Add function for 2.4.100 2024-07-29 13:18:07 -04:00
weslambert
5cba4d7d9b Update VERSION 2024-07-29 13:16:14 -04:00
Mike Reeves
685df9e5ea Merge pull request #13373 from Security-Onion-Solutions/suri7rules
Update so-rule-update
2024-07-29 13:06:51 -04:00
Mike Reeves
ef5a42cf40 Merge pull request #13381 from Security-Onion-Solutions/consolemsg
Turn off console messages
2024-07-29 13:04:40 -04:00
Mike Reeves
45ab6c7309 Merge pull request #13401 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2024-07-29 12:59:31 -04:00
Mike Reeves
1b54a109d5 Update VERSION 2024-07-29 12:59:00 -04:00
Mike Reeves
945d04a510 Merge pull request #13391 from Security-Onion-Solutions/2.4/dev
2.4.90
2024-07-29 12:49:11 -04:00
Mike Reeves
658db27a46 Merge pull request #13399 from Security-Onion-Solutions/2.4.90
2.4.90
2024-07-29 11:45:55 -04:00
Mike Reeves
3e248da14d 2.4.90 2024-07-29 11:37:42 -04:00
coreyogburn
ed7f8dbf1d Merge pull request #13392 from Security-Onion-Solutions/cogburn/sodet-refresh-interval
so-detection refresh_interval => 1s
2024-07-25 14:10:39 -06:00
Corey Ogburn
d6af3aab6d Use a wildcard instead of making 2 requests 2024-07-25 14:05:09 -06:00
Corey Ogburn
0cb067f6f2 Don't forget history
Also update so-detectionhistory to have a refresh_interval of 1s.
2024-07-25 14:01:10 -06:00
Corey Ogburn
ccf88fa62b Add step to soup to set refresh_interval during upgrade
The so-detection index needs its refresh_interval reset during an upgrade. If the index doesn't exist, the config change will set it correctly when it is created.
2024-07-25 13:44:22 -06:00
Corey Ogburn
20f915f649 so-detection refresh_interval => 1s
Speeds up the refresh_interval so bulk indexing a single rule does not wait 30s.
2024-07-25 12:53:04 -06:00
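The refresh_interval entries above set the so-detection (and so-detectionhistory) indices to refresh every 1s so bulk indexing a single rule does not wait 30s, with soup resetting it during upgrade. A hedged sketch of the underlying Elasticsearch index settings call (host, credentials, and the wildcard pattern are illustrative, not the actual soup step):

```python
# Illustrative only: set refresh_interval on the detections indices via the
# Elasticsearch index settings API. Host and credentials are placeholders.
import requests

requests.put(
    "https://localhost:9200/so-detection*/_settings",
    json={"index": {"refresh_interval": "1s"}},
    auth=("elastic", "changeme"),
    verify=False,  # illustration only
)
```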
Mike Reeves
f447b6b698 Merge pull request #13390 from Security-Onion-Solutions/2.4.90
2.4.90
2024-07-25 11:55:59 -04:00
Mike Reeves
66b087f12f 2.4.90 2024-07-25 11:49:57 -04:00
weslambert
f2ad4c40e6 Fix update for 2.4.90 2024-07-24 10:38:05 -04:00
weslambert
8538f2eca2 Elastic Agent update 2024-07-24 09:40:30 -04:00
Wes
c55fa6dc6a Fix pattern for pipelines 2024-07-23 17:48:32 +00:00
Wes
17f37750e5 Remove onchanges condition 2024-07-23 16:46:18 +00:00
Wes
e789c17bc3 Add global@custom pipeline file 2024-07-23 16:37:37 +00:00
Wes
6f44d39b18 Remove Fleet final pipeline file 2024-07-23 16:37:03 +00:00
Wes
dd85249781 Remove Fleet final pipeline 2024-07-23 16:36:41 +00:00
Wes
bdba621442 Remove soup changes 2024-07-23 16:32:28 +00:00
Mike Reeves
034315ed85 Turn off console messages 2024-07-23 09:46:51 -04:00
Jason Ertel
224c668c31 Merge pull request #13374 from Security-Onion-Solutions/jertel/rmtestparm
remove unused test parameters from setup
2024-07-22 11:08:34 -04:00
Jason Ertel
2e17e93cfe remove unused test parameters from setup 2024-07-22 11:04:45 -04:00
Jason Ertel
7dfb75ba6b remove unused test parameters from setup 2024-07-22 11:02:56 -04:00
Mike Reeves
af0425b8f1 Update rulecat.conf 2024-07-22 10:20:30 -04:00
Mike Reeves
6cf0a0bb42 Update so-rule-update 2024-07-22 10:19:34 -04:00
Jorge Reyes
d97400e6f5 Merge pull request #13368 from Security-Onion-Solutions/reyesj2/kfps
fix kafka-logstash cert for searchnodes
2024-07-21 20:11:42 -04:00
reyesj2
cf1335dd84 searchnode logstash-kafka cert generation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-20 11:31:33 -04:00
coreyogburn
be74449fb9 Merge pull request #13365 from Security-Onion-Solutions/cogburn/suricata-regex-support
Cogburn/suricata regex support
2024-07-19 12:47:10 -06:00
Corey Ogburn
45b2413175 Removed Allow/Deny Regexes, Added Enable/Disable Regex
Update config and annotations for new regex support for suricata.
2024-07-19 12:45:24 -06:00
Corey Ogburn
022df966c7 Remove Allow/Deny Regex, Add Suricata Enable/Disable Regex 2024-07-19 12:28:04 -06:00
Jorge Reyes
92385d652e Merge pull request #13363 from Security-Onion-Solutions/reyesj2/ksoup
kafka soup pillar
2024-07-19 10:50:48 -04:00
reyesj2
4478d7b55a kafka soup pillar fix
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-19 09:32:47 -04:00
Wes
612716ee69 Apply ES to load pipelines 2024-07-17 17:35:41 +00:00
Wes
f78a5d1a78 Remove pipeline file 2024-07-17 15:42:40 +00:00
Wes
2d0de87530 Add component templates for Fleet metrics 2024-07-17 15:19:46 +00:00
Josh Patterson
18df491f7e Merge pull request #13355 from Security-Onion-Solutions/silsll
Exclude policy phases if not defined in defaults
2024-07-17 11:09:18 -04:00
m0duspwnens
cee6ee7a2a Merge remote-tracking branch 'origin/2.4/dev' into silsll 2024-07-17 10:16:36 -04:00
m0duspwnens
6d18177f98 only include global phases if defined in default for that index 2024-07-17 10:16:11 -04:00
weslambert
c0bb395571 Remove pipeline file removal 2024-07-17 09:51:51 -04:00
weslambert
f051ddc7f0 Remove pipelines 2024-07-17 09:50:26 -04:00
m0duspwnens
72ad49ed12 add policy for so-lists and so-items 2024-07-16 14:36:06 -04:00
Jorge Reyes
d11f4ef9ba Merge pull request #13350 from Security-Onion-Solutions/reyesj2/kflux
Kafka influxdb metrics & pillar update
2024-07-16 14:26:09 -04:00
reyesj2
03ca7977a0 quote variables
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-16 14:14:55 -04:00
m0duspwnens
91b2e7d400 Merge remote-tracking branch 'origin/2.4/dev' into silsll 2024-07-16 14:06:56 -04:00
m0duspwnens
34c3a58efe add cold policy 2024-07-16 14:03:48 -04:00
Josh Patterson
a867557f54 Merge pull request #13353 from Security-Onion-Solutions/fci
fix custom indices
2024-07-16 13:18:11 -04:00
m0duspwnens
b814f32e0a fix custom indices 2024-07-16 12:39:30 -04:00
coreyogburn
2df44721d0 Merge pull request #13349 from Security-Onion-Solutions/cogburn/bulk-indexer
New Config Values for Detections Bulk Indexer
2024-07-15 15:34:01 -06:00
Corey Ogburn
d0565baaa3 New Config Values for Detections Bulk Indexer
`maxScrollSize` defines the "page size" of each scroll request.

`bulkIndexerWorkerCount` defines how many worker threads a bulk indexer should use. 0 or fewer indicates that 1 thread per CPU core should be used.
2024-07-15 14:43:47 -06:00
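The commit above describes `bulkIndexerWorkerCount`, where 0 or fewer means one worker thread per CPU core. A minimal sketch of that resolution rule (purely illustrative; the actual bulk indexer lives in SOC, not in this repo):

```python
# Illustrative: resolve the effective worker count, where a value <= 0
# means "one worker thread per CPU core".
import os

def effective_worker_count(bulk_indexer_worker_count: int) -> int:
    if bulk_indexer_worker_count <= 0:
        return os.cpu_count() or 1
    return bulk_indexer_worker_count

print(effective_worker_count(0))  # e.g. 8 on an 8-core host
print(effective_worker_count(4))  # 4
```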
weslambert
38e7da1334 Merge pull request #13347 from Security-Onion-Solutions/upgrade/elastic_8_14_3
Elastic 8.14.3
2024-07-15 16:29:24 -04:00
reyesj2
1b623c5c7a Show Kafka EPS for nodes with broker role only
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-15 16:27:48 -04:00
reyesj2
542a116b8c use so-yaml add for kafka pillar change
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-15 16:26:52 -04:00
Doug Burks
e7b6496f98 Merge pull request #13348 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add new action to SOC Actions list to allow users to more easily add their own actions #13346
2024-07-15 15:59:49 -04:00
Doug Burks
3991c7b5fe FEATURE: Add new action to SOC Actions list to allow users to more easily add their own actions #13346 2024-07-15 15:52:00 -04:00
weslambert
678b232c24 Elastic 8.14.3 2024-07-15 15:48:01 -04:00
weslambert
fbd0dbd048 Elastic 8.14.3 2024-07-15 15:46:55 -04:00
weslambert
1df19faf5c Elastic 8.14.3 2024-07-15 15:44:50 -04:00
weslambert
8ec5794833 Update VERSION 2024-07-15 15:42:40 -04:00
weslambert
bf07d56da6 Merge pull request #13341 from Security-Onion-Solutions/revert-13323-fix/agent_pipeline
Revert "Change pipeline version for agent"
2024-07-15 11:38:56 -04:00
weslambert
cdbffa2323 Merge pull request #13342 from Security-Onion-Solutions/revert-13316-foxtrot
Revert "Elastic 8.14.2"
2024-07-15 11:38:48 -04:00
Josh Patterson
55469ebd24 Merge pull request #13340 from Security-Onion-Solutions/surianno
force var to be list of string
2024-07-15 11:34:00 -04:00
weslambert
4e81860a13 Revert "Change pipeline version for agent" 2024-07-15 11:33:52 -04:00
m0duspwnens
a23789287e force var to be list of string 2024-07-15 11:29:47 -04:00
weslambert
fe1824aedd Revert "Elastic 8.14.2" 2024-07-15 11:28:59 -04:00
Jorge Reyes
e58b2c45dd Merge pull request #13335 from Security-Onion-Solutions/reyesj2/kgz
FIX: Kafka configuration updates
2024-07-12 15:55:43 -04:00
reyesj2
5d322ebc0b Allow searchnodes to run kafka.ssl state for kafka-logstash cert generation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-12 14:45:11 -04:00
reyesj2
7ea8d5efd0 Remove redis input pipeline from searchnodes when global pipeline is Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-12 14:44:10 -04:00
reyesj2
4182ff66a0 rearrange kafka pillar, declutters SOC ui
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-11 16:37:16 -04:00
reyesj2
ff29d9ca51 Update log-check to ignore kafka data directories
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-11 10:23:51 -04:00
reyesj2
4a88dedcb8 Fix kafka.ssl state and include name for kafka_user
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 16:18:46 -04:00
reyesj2
cfe5c1d76a remove elasticsearch.ca from receiver allowed_states. Replaced by generated kafka trust
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 13:24:02 -04:00
weslambert
ebf5159c95 Merge pull request #13323 from Security-Onion-Solutions/fix/agent_pipeline
Change pipeline version for agent
2024-07-10 13:01:29 -04:00
weslambert
d432019ad9 Change version from 1.13.1 to 1.20.0 2024-07-10 12:48:08 -04:00
reyesj2
0d8fd42be3 update pillarwatch engine
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 11:37:07 -04:00
reyesj2
d5faf535c3 Only interact with logstash configuration when Kafka pipeline is enabled otherwise leave it default
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 11:36:44 -04:00
reyesj2
8e1edd1d91 split Kafka ssl from ssl/init. Certs won't be generated until Kafka is enabled. Also runs some clean up for old Kafka certs
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 11:32:43 -04:00
reyesj2
d791b23838 Generate new Kafka truststore
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-07-10 11:29:09 -04:00
weslambert
0db0754ee5 Merge pull request #13316 from Security-Onion-Solutions/foxtrot
Elastic 8.14.2
2024-07-10 08:53:03 -04:00
Wes
1f5a990b1e Remove lines that aren't needed right now 2024-07-09 18:32:06 +00:00
weslambert
7a2f01be53 Update VERSION 2024-07-09 13:58:13 -04:00
Doug Burks
dadb0db8f3 Merge pull request #13321 from Security-Onion-Solutions/dougburks-patch-1
FIX: Update SOC MOTD #13320
2024-07-09 12:58:22 -04:00
Doug Burks
dfd8ac3626 FIX: Update SOC MOTD #13320 2024-07-09 12:55:58 -04:00
weslambert
9716e09b83 Temp change for testing 2024-07-09 12:51:34 -04:00
Wes
669f68ad88 Fleet metric annotations 2024-07-09 15:39:59 +00:00
Doug Burks
32af2d8436 Merge pull request #13318 from Security-Onion-Solutions/dougburks-patch-1
FIX: Update MOTD #13317
2024-07-09 10:07:47 -04:00
Doug Burks
24e945eee4 FIX: Update MOTD #13317 2024-07-09 10:06:16 -04:00
weslambert
8615e5d5ea Move enabled and index_clean back to the top 2024-07-08 16:50:06 -04:00
weslambert
2dd5ff4333 Update VERSION 2024-07-08 16:19:53 -04:00
weslambert
6a396ec1aa Fix accidental double quote removal 2024-07-08 11:44:27 -04:00
weslambert
34f558c023 Merge pull request #13314 from Security-Onion-Solutions/upgrade/elastic_8_14_2
Elastic 8.14.2
2024-07-08 10:02:02 -04:00
weslambert
9504f0885a Elastic 8.14.2 2024-07-08 09:49:07 -04:00
weslambert
ef59678441 Elastic 8.14.2 2024-07-08 09:48:12 -04:00
weslambert
c6f6811f47 Elastic 8.14.2 2024-07-08 09:47:34 -04:00
Mike Reeves
ce8f9fe024 Merge pull request #13299 from Security-Onion-Solutions/TOoSmOotH-patch-2
Delete old user commands
2024-07-02 14:46:56 -04:00
Mike Reeves
40b7999786 Delete salt/manager/tools/sbin/so-user-list 2024-07-02 14:36:51 -04:00
Mike Reeves
69be03f86a Delete salt/manager/tools/sbin/so-user-enable 2024-07-02 14:36:36 -04:00
Mike Reeves
8dc8092241 Delete salt/manager/tools/sbin/so-user-disable 2024-07-02 14:36:02 -04:00
Mike Reeves
578c6c567f Delete old user commands 2024-07-02 14:34:45 -04:00
weslambert
662df1208d Merge pull request #13296 from Security-Onion-Solutions/fix/soc_ilm_policy
Change name for ILM
2024-07-02 09:06:11 -04:00
weslambert
745b6775f1 Change name for ILM 2024-07-02 09:05:35 -04:00
weslambert
176aaa8f3d Merge pull request #13295 from Security-Onion-Solutions/fix/custom_windows_integration
Change name to winlog.winlogs
2024-07-02 09:03:52 -04:00
weslambert
4d499be1a8 Change name 2024-07-02 08:47:29 -04:00
weslambert
c27225d91f Merge pull request #13290 from Security-Onion-Solutions/fix/elastic_template_changes
Changes for Elastic 8.14.1
2024-07-01 11:19:02 -04:00
Wes
1b47d5c622 Changes for Elastic 8.14.1 2024-07-01 15:16:58 +00:00
Wes
32d7927a49 Template changes for Elastic 8.14.1 2024-07-01 15:16:06 +00:00
Jorge Reyes
861630681c Merge pull request #13282 from Security-Onion-Solutions/reyesj2/rupd
FIX: so-rule-update airgap check
2024-06-28 16:26:34 -04:00
reyesj2
9d725f2b0b fix rule update
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-28 13:45:50 -04:00
Josh Patterson
132263ac1a Merge pull request #13278 from Security-Onion-Solutions/issue/13073
Issue/13073 - disable Logstash on heavynodes
2024-06-27 14:50:18 -04:00
DefensiveDepth
92a847e3bd Fix Fleet setup 2024-06-27 11:48:54 -04:00
DefensiveDepth
75bbc41d38 Merge remote-tracking branch 'refs/remotes/origin/foxtrot' into foxtrot 2024-06-27 11:48:05 -04:00
weslambert
7716f4aff8 Elastic 8.14.1 2024-06-27 10:49:52 -04:00
weslambert
8eb6dcc5b7 Elastic 8.14.1 2024-06-27 10:49:06 -04:00
weslambert
847638442b Elastic 8.14.1 2024-06-27 10:48:28 -04:00
weslambert
5743189eef Elastic 8.14.1 2024-06-27 10:47:46 -04:00
weslambert
81d874c6ae Update VERSION 2024-06-27 10:42:58 -04:00
m0duspwnens
bfe8a3a01b Merge remote-tracking branch 'origin/2.4/dev' into issue/13073 2024-06-27 09:20:12 -04:00
weslambert
71ed9204ff Merge pull request #13275 from Security-Onion-Solutions/fix/elastic_8_10_4
Revert back to 8.10.4
2024-06-27 09:16:54 -04:00
weslambert
222ebbdec1 Revert back to 8.10.4 2024-06-27 09:05:29 -04:00
weslambert
260d4e44bc Revert back to 8.10.4 2024-06-27 09:04:07 -04:00
weslambert
0c5b3f7c1c Revert back to 8.10.4 2024-06-27 09:03:28 -04:00
weslambert
feee80cad9 Revert back to 8.10.4 2024-06-27 09:01:55 -04:00
m0duspwnens
5f69456e22 Merge remote-tracking branch 'origin/2.4/dev' into issue/13073 2024-06-27 08:56:44 -04:00
weslambert
e59d124c82 Merge pull request #13271 from Security-Onion-Solutions/upgrade/elastic
Elastic 8.14.1
2024-06-26 14:47:54 -04:00
Wes
13d4738e8f Elastic 8.14.1 2024-06-26 18:39:53 +00:00
weslambert
abdfbba32a Elastic 8.14.1 2024-06-26 14:06:24 -04:00
weslambert
7d0a961482 Elastic 8.14.1 2024-06-26 14:00:54 -04:00
weslambert
0f226cc08e Elastic 8.14.1 2024-06-26 13:59:23 -04:00
m0duspwnens
cfcfc6819f disable logstash in heavynode pillars 2024-06-26 12:53:32 -04:00
m0duspwnens
fe4e2a9540 Merge remote-tracking branch 'origin/2.4/dev' into issue/13073 2024-06-26 12:46:01 -04:00
Josh Patterson
492554d951 Merge pull request #13270 from Security-Onion-Solutions/90soup
start soup 2.4.90
2024-06-26 12:40:44 -04:00
m0duspwnens
dfd5e95c93 start soup 2.4.90 2024-06-26 12:37:28 -04:00
m0duspwnens
50f0c43212 merge dev 2024-06-26 12:33:32 -04:00
Mike Reeves
7fe8715bce Merge pull request #13260 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2024-06-25 15:40:26 -04:00
Mike Reeves
f837ea944a Update VERSION 2024-06-25 15:39:39 -04:00
Mike Reeves
c2d43e5d22 Merge pull request #13255 from Security-Onion-Solutions/2.4/dev
2.4.80
2024-06-25 15:28:13 -04:00
Mike Reeves
51bb4837f5 Merge pull request #13259 from Security-Onion-Solutions/TOoSmOotH-patch-5
Update .gitleaks.toml
2024-06-25 14:48:41 -04:00
Mike Reeves
caec424e44 Update .gitleaks.toml 2024-06-25 14:47:50 -04:00
Mike Reeves
156176c628 Merge pull request #13256 from Security-Onion-Solutions/fixmain
Fix git
2024-06-25 08:30:19 -04:00
Mike Reeves
81b4c4e2c0 Merge branch '2.4/main' of github.com:Security-Onion-Solutions/securityonion into fixmain 2024-06-25 08:24:27 -04:00
Mike Reeves
d4107dc60a Merge pull request #13254 from Security-Onion-Solutions/2.4.80
2.4.80
2024-06-25 08:17:59 -04:00
Mike Reeves
d34605a512 Update DOWNLOAD_AND_VERIFY_ISO.md 2024-06-25 08:16:31 -04:00
Mike Reeves
af5e7cd72c 2.4.80 2024-06-24 15:41:47 -04:00
Jorge Reyes
93378e92e6 Merge pull request #13253 from Security-Onion-Solutions/kafkaflt
Remove unused sbin_jinja for kafka
2024-06-24 14:18:32 -04:00
reyesj2
81ce762250 delete commented block
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 14:06:48 -04:00
reyesj2
cb727bf48d remove unused sbin_jinja from kafka config
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 13:45:13 -04:00
Jorge Reyes
9a0bad88cc Merge pull request #13251 from Security-Onion-Solutions/kafkaflt
FIX: update firewall defaults
2024-06-24 12:29:48 -04:00
reyesj2
680e84851b Re-add manager sbin_jinja file recurse
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 12:27:52 -04:00
reyesj2
ea771ed21b update firewall
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 12:01:01 -04:00
reyesj2
c332cd777c remove import/heavynode artifact caused by kafka cert not existing but being bound in docker. (empty dir created)
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-24 08:50:37 -04:00
Mike Reeves
9fce85c988 Merge pull request #13245 from Security-Onion-Solutions/proxysoup
Fix soup for proxy servers
2024-06-21 16:13:02 -04:00
weslambert
6141c7a849 Merge pull request #13246 from Security-Onion-Solutions/fix/detections_license_none
Add option for detections without a license
2024-06-21 15:59:09 -04:00
weslambert
bf91030204 Add option for detections without license 2024-06-21 15:33:11 -04:00
Mike Reeves
9577c3f59d Make soup use reposync from the repo 2024-06-21 15:24:54 -04:00
Mike Reeves
77dedc575e Make soup use reposync from the repo 2024-06-21 15:20:07 -04:00
Mike Reeves
0295b8d658 Make soup use reposync from the repo 2024-06-21 15:11:23 -04:00
Mike Reeves
6a9d78fa7c Make soup use reposync from the repo 2024-06-21 15:10:44 -04:00
Mike Reeves
b84521cdd2 Make soup use reposync from the repo 2024-06-21 14:49:16 -04:00
Mike Reeves
ff4679ec08 Make soup use reposync from the repo 2024-06-21 14:45:06 -04:00
Mike Reeves
c5ce7102e8 Make soup use reposync from the repo 2024-06-21 14:41:27 -04:00
Mike Reeves
70c001e22b Update so-repo-sync 2024-06-21 13:37:36 -04:00
Mike Reeves
f1dc22a200 Merge pull request #13244 from Security-Onion-Solutions/TOoSmOotH-patch-4
Update soc_manager.yaml
2024-06-21 12:36:17 -04:00
Mike Reeves
aae1b69093 Update soc_manager.yaml 2024-06-21 12:35:01 -04:00
m0duspwnens
469ca44016 fix maps 2024-06-20 16:53:12 -04:00
m0duspwnens
81fcd68e9b create and use redis:nodes and elasticsearch:nodes pillars 2024-06-20 16:42:11 -04:00
Jorge Reyes
8781419b4a Merge pull request #13242 from Security-Onion-Solutions/annotupd
update kafka annotations
2024-06-20 16:18:40 -04:00
reyesj2
2eea671857 more precise wording in kafka annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-20 16:16:55 -04:00
reyesj2
73acfbf864 update kafka annotations
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-20 16:02:45 -04:00
Doug Burks
ae0e994461 Merge pull request #13239 from Security-Onion-Solutions/dougburks-patch-1
Update defaults.yaml to put Process actions in logical order
2024-06-20 10:12:06 -04:00
Doug Burks
07b9011636 Update defaults.yaml to put Process actions in logical order 2024-06-20 10:09:27 -04:00
Matthew Wright
bc2b3b7f8f Merge pull request #13236 from Security-Onion-Solutions/mwright/licenseDropdown
Added license presets to defaults.yaml file
2024-06-18 18:05:15 -04:00
unknown
ea02a2b868 Added license presets to defaults.yaml file 2024-06-18 16:52:00 -04:00
Jorge Reyes
ba3a6cbe87 Merge pull request #13234 from Security-Onion-Solutions/reyesj2-patch-4
update receiver node allowed states
2024-06-18 15:55:32 -04:00
reyesj2
268dcbe00b update receiver node allowed states
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-18 15:44:51 -04:00
Josh Patterson
6be97f13d0 Merge pull request #13233 from Security-Onion-Solutions/minefunc
fix ca mine_function
2024-06-18 13:58:35 -04:00
Jorge Reyes
95d6c93a07 Merge pull request #13231 from Security-Onion-Solutions/kfeval 2024-06-18 13:15:18 -04:00
m0duspwnens
a2bb220043 fix x509 mine_function 2024-06-18 12:33:33 -04:00
reyesj2
911d6dcce1 update kafka output policy only on eligible grid types
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-18 12:09:59 -04:00
Doug Burks
5f6a9850eb Merge pull request #13227 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add new Process actions #13226
2024-06-18 10:57:52 -04:00
Doug Burks
de18bf06c3 FEATURE: Add new Process actions #13226 2024-06-18 10:36:41 -04:00
Jorge Reyes
73473d671d Merge pull request #13222 from Security-Onion-Solutions/reyesj2-patch-3
update profile
2024-06-18 09:16:35 -04:00
Josh Brower
3fbab7c3af Merge pull request #13223 from Security-Onion-Solutions/2.4/timeout
Update defaults
2024-06-18 08:55:30 -04:00
DefensiveDepth
521cccaed6 Update defaults 2024-06-18 08:43:00 -04:00
reyesj2
35da3408dc update profile
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-17 15:53:49 -04:00
Jorge Reyes
c03096e806 Merge pull request #13221 from Security-Onion-Solutions/reyesj2/ksoup
suppress fleet policy update in soup
2024-06-17 14:18:34 -04:00
reyesj2
2afc947d6c suppress fleet policy update in soup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-17 14:12:33 -04:00
Doug Burks
076da649cf Merge pull request #13217 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add more links and descriptions to SOC MOTD #13216
2024-06-17 12:18:29 -04:00
m0duspwnens
55f8303dc2 remove manager and search pipelines from heavynode 2024-06-17 10:06:43 -04:00
Doug Burks
93ced0959c FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:25:01 -04:00
Doug Burks
6f13fa50bf FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:24:32 -04:00
Doug Burks
3bface12e0 FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:23:14 -04:00
Doug Burks
b584c8e353 FEATURE: Add more links and descriptions to SOC MOTD #13216 2024-06-17 09:13:17 -04:00
Jason Ertel
6caf87df2d Merge pull request #13209 from Security-Onion-Solutions/kfix
Fix errors on new installs
2024-06-15 05:09:48 -04:00
reyesj2
4d1f2c2bc1 fix kafka elastic fleet output policy setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:04:08 -04:00
reyesj2
0b1175b46c kafka logstash input plugin handle empty brokers list
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:03:36 -04:00
reyesj2
4e50dabc56 refix typos
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 23:03:06 -04:00
Jason Ertel
ce45a5926a Merge pull request #13207 from Security-Onion-Solutions/kaffix
Standalone logstash error
2024-06-14 18:01:35 -04:00
Josh Brower
c540a4f257 Merge pull request #13208 from Security-Onion-Solutions/2.4/ruletemplates
Update rule templates
2024-06-14 16:01:26 -04:00
DefensiveDepth
7af94c172f Change spelling 2024-06-14 16:00:22 -04:00
DefensiveDepth
7556587e35 Update rule templates 2024-06-14 15:47:57 -04:00
reyesj2
a0030b27e2 add additional retries to elasticfleet scripts
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 15:34:40 -04:00
reyesj2
8080e05444 on a fresh install the kafka nodes pillar may not have populated yet. Avoid this by only generating the kafka input pipeline when the kafka nodes pillar is not empty
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-14 14:17:26 -04:00
Josh Brower
af11879545 Merge pull request #13205 from Security-Onion-Solutions/2.4/customsuricatasources
Initial support for custom suricata urls and local rulesets
2024-06-14 13:50:06 -04:00
DefensiveDepth
c89f1c9d95 remove multiline 2024-06-14 13:48:55 -04:00
DefensiveDepth
b7ac599a42 set to empty 2024-06-14 13:21:36 -04:00
DefensiveDepth
8363877c66 move to custom rules 2024-06-14 12:41:44 -04:00
DefensiveDepth
4bcb4b5b9c removed unneeded import 2024-06-14 09:32:34 -04:00
DefensiveDepth
68302e14b9 add to defaults and tweaks 2024-06-14 09:28:23 -04:00
DefensiveDepth
c1abc7a7f1 Update description 2024-06-14 08:51:34 -04:00
DefensiveDepth
484717d57d initial support for custom suricata urls and local rulesets 2024-06-14 08:42:10 -04:00
Jorge Reyes
b91c608fcf Merge pull request #13204 from Security-Onion-Solutions/kaffix
Only comment out so-kafka from so-status when it exists & only run en…
2024-06-13 15:54:50 -04:00
reyesj2
8f8ece2b34 Only comment out so-kafka from so-status when it exists & only run ensure_default_pipeline when Kafka is configured
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 15:50:34 -04:00
Jorge Reyes
9b5c1c01e9 Merge pull request #13200 from Security-Onion-Solutions/kafka/fix 2024-06-13 12:26:57 -04:00
reyesj2
816a1d446e Generate kafka-logstash cert on standalone,manager,managersearch in addition to searchnodes.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 12:18:13 -04:00
reyesj2
19bfd5beca fix kafka nodeid assignment to increment correctly
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 12:16:39 -04:00
Jorge Reyes
9ac7e051b3 Merge pull request #13190 from Security-Onion-Solutions/reyesj2/kafka
Initial Kafka support
2024-06-13 09:42:59 -04:00
reyesj2
80b1d51f76 wrong location for global.pipeline check
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-13 08:50:53 -04:00
Doug Burks
6340ebb36d Merge pull request #13197 from Security-Onion-Solutions/dougburks-patch-1
Update DOWNLOAD_AND_VERIFY_ISO.md
2024-06-12 16:49:21 -04:00
Doug Burks
70721afa51 Update DOWNLOAD_AND_VERIFY_ISO.md 2024-06-12 16:47:26 -04:00
reyesj2
9c31622598 telegraf should only include jolokia config when Kafka is set as the global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 15:42:00 -04:00
reyesj2
f372b0907b Use kafka:password for kafka certs
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 15:41:10 -04:00
coreyogburn
fac96e0b08 Merge pull request #13183 from Security-Onion-Solutions/cogburn/cleanup-config
Fix unnecessary escaping
2024-06-12 11:57:31 -06:00
reyesj2
2bc53f9868 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-06-12 12:36:58 -04:00
reyesj2
e8106befe9 Append '-securityonion' to all Security Onion related Kafka topics. Adjust logstash to ingest all topics ending in '-securityonion' to avoid having to manually list topic names
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 12:05:16 -04:00
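The commit above appends '-securityonion' to Security Onion's Kafka topics so Logstash can subscribe by suffix pattern instead of maintaining a hard-coded topic list. A hedged sketch of that suffix-matching idea (topic names are made-up examples):

```python
# Illustrative: select only Security Onion topics by the '-securityonion' suffix,
# mirroring a subscribe-by-pattern approach rather than listing topics manually.
import re

topics = ["zeek-securityonion", "suricata-securityonion", "__consumer_offsets", "other-app"]
pattern = re.compile(r".*-securityonion$")
print([t for t in topics if pattern.fullmatch(t)])
# ['zeek-securityonion', 'suricata-securityonion']
```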
reyesj2
83412b813f Renamed Kafka pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:19:25 -04:00
reyesj2
b56d497543 Revert a so-setup change. Kafka is not an installable option
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:17:06 -04:00
reyesj2
dd40962288 Revert a whiptail menu change. Kafka is not an install option
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:07:23 -04:00
reyesj2
b7eebad2a5 Update Kafka self reset & add initial Kafka wrapper scripts to build out
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-12 11:01:40 -04:00
m0duspwnens
8f8698fd02 Merge remote-tracking branch 'origin/2.4/dev' into issue/13073 2024-06-12 10:50:18 -04:00
Josh Patterson
092f716f12 Merge pull request #13189 from Security-Onion-Solutions/soupmsgq
remove this \n
2024-06-12 10:41:49 -04:00
m0duspwnens
c38f48c7f2 remove this \n 2024-06-12 10:34:32 -04:00
m0duspwnens
98837bc379 this method does not cause soup to fail 2024-06-12 09:11:02 -04:00
m0duspwnens
0f243bb6ec Merge remote-tracking branch 'origin/2.4/dev' into issue/13073 2024-06-11 16:33:23 -04:00
m0duspwnens
88fc1bbe32 quotes on vars 2024-06-11 16:32:57 -04:00
Corey Ogburn
d5ef0e5744 Fix unnecessary escaping 2024-06-11 12:34:32 -06:00
m0duspwnens
2ecac38f6d disable logstash on heavynodes 2024-06-11 13:50:29 -04:00
Josh Brower
e90557d7dc Merge pull request #13179 from Security-Onion-Solutions/2.4/fixintegritycheck
Add new bind - suricata all.rules
2024-06-11 13:08:40 -04:00
reyesj2
628893fd5b remove redundant 'kafka_' from annotations & defaults
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:56:21 -04:00
reyesj2
a81e4c3362 remove dash(-) from kafka.id
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:55:17 -04:00
reyesj2
ca7b89c308 Added Kafka reset to SOC UI. In case an active broker is changed to a controller, topics may become unavailable and resolving this would require manual intervention. This option allows running a reset to start from a clean slate, then configuring the cluster to the desired state before re-enabling Kafka as the global pipeline.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:21:13 -04:00
Josh Patterson
03335cc015 Merge pull request #13182 from Security-Onion-Solutions/dockerup
upgrade docker
2024-06-11 11:08:40 -04:00
reyesj2
08557ae287 kafka.id field should only be present when metadata for kafka exists
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-11 11:01:34 -04:00
DefensiveDepth
08d2a6242d Add new bind - suricata all.rules 2024-06-11 10:03:33 -04:00
m0duspwnens
4b481bd405 add epoch to docker for oracle 2024-06-11 09:41:58 -04:00
m0duspwnens
0b1e3b2a7f upgrade docker for focal 2024-06-10 16:24:44 -04:00
m0duspwnens
dbd9873450 upgrade docker for jammy 2024-06-10 16:04:11 -04:00
m0duspwnens
c6d0a17669 docker upgrade debian 12 2024-06-10 15:43:29 -04:00
m0duspwnens
adeab10f6d upgrade docker and containerd.io for oracle 2024-06-10 12:14:27 -04:00
reyesj2
824f852ed7 merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-10 11:26:23 -04:00
reyesj2
284c1be85f Update Kafka controller(s) via SOC UI
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-10 11:08:54 -04:00
Jason Ertel
7ad6baf483 Merge pull request #13171 from Security-Onion-Solutions/jertel/yaml
correct placement of error check override
2024-06-08 08:21:20 -04:00
Jason Ertel
f1638faa3a correct placement of error check override 2024-06-08 08:18:34 -04:00
Jason Ertel
dea786abfa Merge pull request #13170 from Security-Onion-Solutions/jertel/yaml
gracefully handle missing parent key
2024-06-08 07:49:49 -04:00
Jason Ertel
f96b82b112 gracefully handle missing parent key 2024-06-08 07:44:46 -04:00
Josh Patterson
95fe11c6b4 Merge pull request #13162 from Security-Onion-Solutions/soupmsgq
fix elastic templates not loading due to global_override phases
2024-06-07 16:23:03 -04:00
Jason Ertel
f2f688b9b8 Update soup 2024-06-07 16:18:09 -04:00
m0duspwnens
0139e18271 additional description 2024-06-07 16:03:21 -04:00
Mike Reeves
657995d744 Merge pull request #13165 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update defaults.yaml
2024-06-07 15:38:01 -04:00
Mike Reeves
4057238185 Update defaults.yaml 2024-06-07 15:33:49 -04:00
coreyogburn
fb07ff65c9 Merge pull request #13164 from Security-Onion-Solutions/cogburn/tls-options
AdditionalCA and InsecureSkipVerify
2024-06-07 13:10:45 -06:00
Mike Reeves
dbc56ffee7 Update defaults.yaml 2024-06-07 15:09:09 -04:00
Corey Ogburn
ee696be51d Remove rootCA and insecureSkipVerify from SOC defaults 2024-06-07 13:07:04 -06:00
Corey Ogburn
5d3fd3d389 AdditionalCA and InsecureSkipVerify
New fields have been added to manager and then duplicated over to SOC's config in the same vein as how proxy was updated earlier this week.

AdditionalCA holds the PEM formatted public keys that should be trusted when making requests. It has been implemented for both Sigma's zip downloads and Sigma and Suricata's repository clones and pulls.

InsecureSkipVerify has been added to help our users troubleshoot their configuration. Setting it to true will not verify the cert on outgoing requests. Self signed, missing, or invalid certs will not throw an error.
2024-06-07 12:47:09 -06:00
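The commit above explains AdditionalCA (extra PEM-formatted public keys to trust) and InsecureSkipVerify (skip certificate verification, for troubleshooting only). SOC consumes these settings in its own code; the Python sketch below only illustrates the equivalent TLS semantics for an outgoing request, with paths and URL as placeholders:

```python
# Illustrative TLS options: trust an additional CA bundle, or (troubleshooting only)
# skip certificate verification entirely. Path and URL are placeholders.
import requests

additional_ca = "/etc/ssl/extra-ca.pem"  # AdditionalCA analogue: PEM bundle to trust
insecure_skip_verify = False             # InsecureSkipVerify analogue

verify = False if insecure_skip_verify else additional_ca
resp = requests.get("https://rules.example.com/ruleset.zip", verify=verify)
print(resp.status_code)
```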
Corey Ogburn
fa063722e1 RootCA and InsecureSkipVerify
New empty settings and their annotations.
2024-06-07 09:10:14 -06:00
m0duspwnens
f5cc35509b fix output alignment 2024-06-07 11:03:26 -04:00
m0duspwnens
d39c8fae54 format output 2024-06-07 09:01:16 -04:00
m0duspwnens
d3b81babec check for phases with so-yaml, remove if exists 2024-06-06 16:15:21 -04:00
coreyogburn
f35f6bd4c8 Merge pull request #13154 from Security-Onion-Solutions/cogburn/soc-proxy
SOC Proxy Setting
2024-06-06 14:03:16 -06:00
Mike Reeves
d5cfef94a3 Merge pull request #13156 from Security-Onion-Solutions/TOoSmOotH-patch-3 2024-06-06 16:01:22 -04:00
Mike Reeves
f37f5ba97b Update soc_suricata.yaml 2024-06-06 15:57:58 -04:00
Corey Ogburn
42818a9950 Remove proxy from SOC defaults 2024-06-06 13:28:07 -06:00
Corey Ogburn
e85c3e5b27 SOC Proxy Setting
The so_proxy value we build during install is now copied to SOC's config.
2024-06-06 11:55:27 -06:00
m0duspwnens
a39c88c7b4 add set to troubleshoot failure 2024-06-06 12:56:24 -04:00
m0duspwnens
73ebf5256a Merge remote-tracking branch 'origin/2.4/dev' into soupmsgq 2024-06-06 12:44:45 -04:00
Jason Ertel
6d31cd2a41 Merge pull request #13150 from Security-Onion-Solutions/jertel/yaml
add ability to retrieve yaml values via so-yaml.py; improve so-minion id matching
2024-06-06 12:09:03 -04:00
Jason Ertel
5600fed9c4 add ability to retrieve yaml values via so-yaml.py; improve so-minion id matching 2024-06-06 11:56:07 -04:00
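so-yaml.py is the Python helper referenced in several entries here (retrieving values, removing keys, gracefully handling a missing parent key). A hedged sketch of the "retrieve a value by dotted key" idea, not the actual script (requires PyYAML; the example path and key are placeholders):

```python
# Illustrative dotted-key lookup over a YAML document, returning None when a
# parent key is missing rather than raising.
import yaml

def get_value(path, dotted_key):
    with open(path) as f:
        data = yaml.safe_load(f) or {}
    node = data
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

# Example (placeholder file/key): get_value("global.sls", "global.pipeline")
```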
m0duspwnens
6920b77b4a fix msg 2024-06-06 11:00:43 -04:00
m0duspwnens
ccd6b3914c add final msg queue for soup. 2024-06-06 10:33:55 -04:00
reyesj2
c4723263a4 Remove unused kafka reactor
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-06 08:59:17 -04:00
reyesj2
4581a46529 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-06-05 20:47:41 -04:00
Josh Patterson
33a2c5dcd8 Merge pull request #13141 from Security-Onion-Solutions/sotcprp
move so-tcpreplay from common state to sensor state
2024-06-05 09:49:39 -04:00
m0duspwnens
f6a8a21f94 remove space 2024-06-05 08:58:46 -04:00
m0duspwnens
ff5773c837 move so-tcpreplay back to common. return empty string if no sensor.interface pillar 2024-06-05 08:56:32 -04:00
m0duspwnens
66f8084916 Merge remote-tracking branch 'origin/2.4/dev' into sotcprp 2024-06-05 08:32:54 -04:00
m0duspwnens
a2467d0418 move so-tcpreplay to sensor state 2024-06-05 08:24:57 -04:00
reyesj2
3b0339a9b3 create kafka.id from kafka {partition}-{offset}-{timestamp} for tracking event
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 14:27:52 -04:00
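The commit above builds kafka.id from the record's partition, offset, and timestamp so an event can be traced back to its position in the topic (a later commit removes the dashes). A minimal sketch of the stated format, with made-up values:

```python
# Illustrative: compose a tracking id from Kafka record metadata as described,
# i.e. {partition}-{offset}-{timestamp}. Values here are examples only.
partition, offset, timestamp = 2, 184467, 1717509472000

kafka_id = f"{partition}-{offset}-{timestamp}"
print(kafka_id)  # 2-184467-1717509472000
```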
reyesj2
fb1d4fdd3c update license
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 12:33:51 -04:00
Josh Patterson
56a16539ae Merge pull request #13134 from Security-Onion-Solutions/sotcprp
so-tcpreplay now runs if manager is offline
2024-06-04 10:43:33 -04:00
m0duspwnens
c0b2cf7388 add the curlys 2024-06-04 10:28:21 -04:00
reyesj2
d9c58d9333 update receiver pillar access
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-04 08:33:45 -04:00
Josh Patterson
ef3a52468f Merge pull request #13129 from Security-Onion-Solutions/salt3006.8
salt 3006.6
2024-06-03 15:29:19 -04:00
m0duspwnens
c88b731793 revert to 3006.6 2024-06-03 15:27:08 -04:00
reyesj2
2e85a28c02 Remove so-kafka-clusterid script, created during soup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-06-02 18:25:59 -04:00
weslambert
964fef1aab Merge pull request #13117 from Security-Onion-Solutions/fix/items_and_lists
Add templates for .items and .lists indices
2024-05-31 16:34:29 -04:00
reyesj2
1a832fa0a5 Move soup kafka needfuls to up_to_2.4.80
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-31 14:04:46 -04:00
reyesj2
75bdc92bbf Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-05-31 14:02:43 -04:00
Wes
a8c231ad8c Add component templates 2024-05-31 17:47:01 +00:00
Wes
f396247838 Add index templates and lifecycle policies 2024-05-31 17:46:19 +00:00
reyesj2
e3ea4776c7 Update kafka nodes pillar before running highstate with pillarwatch engine. This allows configuring your Kafka controllers before cluster comes up for the first time
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-31 13:34:28 -04:00
coreyogburn
37a928b065 Merge pull request #13107 from Security-Onion-Solutions/cogburn/detection-templates
Added TemplateDetections To Detection ClientParams
2024-05-30 16:26:17 -06:00
Corey Ogburn
85c269e697 Added TemplateDetections To Detection ClientParams
The UI can now insert templates when you select a Detection language. These are those templates, annotated.
2024-05-30 15:59:03 -06:00
m0duspwnens
6e70268ab9 Merge remote-tracking branch 'origin/2.4/dev' into sotcprp 2024-05-30 16:34:37 -04:00
Josh Patterson
fb8929ea37 Merge pull request #13103 from Security-Onion-Solutions/salt3006.8
Salt3006.8
2024-05-30 16:32:05 -04:00
weslambert
5d9c0dd8b5 Merge pull request #13101 from Security-Onion-Solutions/fix/separate_suricata
Separate Suricata alerts into a specific data stream
2024-05-30 16:30:55 -04:00
m0duspwnens
debf093c54 Merge remote-tracking branch 'origin/2.4/dev' into salt3006.8 2024-05-30 15:58:10 -04:00
reyesj2
00b5a5cc0c Revert "revert version for soup test before 2.4.80 pipeline unpaused"
This reverts commit 48713a4e7b.
2024-05-30 15:13:16 -04:00
reyesj2
dbb99d0367 Remove bad config
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-30 15:10:15 -04:00
m0duspwnens
7702f05756 upgrade salt 3006.8. soup for 2.4.80 2024-05-30 15:00:32 -04:00
Wes
2c635bce62 Set index for Suricata alerts 2024-05-30 17:02:31 +00:00
reyesj2
48713a4e7b revert version for soup test before 2.4.80 pipeline unpaused
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-30 13:00:34 -04:00
Wes
e831354401 Add Suricata alerts setting for configuration 2024-05-30 17:00:11 +00:00
Wes
55c5ea5c4c Add template for Suricata alerts 2024-05-30 16:58:56 +00:00
reyesj2
1fd5165079 Merge remote-tracking branch 'origin/2.4/dev' into reyesj2/kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 23:37:40 -04:00
reyesj2
949cea95f4 Update pillarWatch config for global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 23:19:44 -04:00
Mike Reeves
12762e08ef Merge pull request #13093 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2024-05-29 16:54:31 -04:00
Mike Reeves
62bdb2627a Update VERSION 2024-05-29 16:53:27 -04:00
reyesj2
386be4e746 WIP: Manage Kafka nodes pillar role value
This way, when kafka_controllers is updated, the pillar value gets updated and any non-controllers revert to the 'broker'-only role.
Needs more testing: when a new controller joins in this manner, Kafka errors due to cluster metadata being out of sync. One solution is to remove /nsm/kafka/data/__cluster_metadata-0/quorum-state and restart the cluster. An alternative is working with the Kafka CLI tools to inform the cluster of the new voter, likely the best option, but it requires creating a wrapper script of some sort for updating the cluster in place.
The easiest option is to have all receivers join the grid and then configure Kafka with specific controllers via the SOC UI prior to enabling Kafka. This way the Kafka cluster comes up in the desired configuration with no need to immediately modify the cluster.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:48:39 -04:00
reyesj2
d9ec556061 Update some annotations and defaults
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:41:02 -04:00
reyesj2
876d860488 elastic agent should be able to communicate over 9092 for sending logs to kafka brokers
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-29 16:40:15 -04:00
reyesj2
59097070ef Revert "Remove unneeded jolokia aggregate metrics to reduce data ingested to influx"
This reverts commit 1c1a1a1d3f.
2024-05-28 12:17:43 -04:00
reyesj2
77b5aa4369 Correct dashboard name
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:34:35 -04:00
reyesj2
0d7c331ff0 only show specific fields when hovering over Kafka influxdb panels
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:29:38 -04:00
reyesj2
1c1a1a1d3f Remove unneeded jolokia aggregate metrics to reduce data ingested to influx
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 11:14:19 -04:00
reyesj2
47efcfd6e2 Add basic Kafka metrics to 'Security Onion Performance' influxdb dashboard
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 10:55:11 -04:00
reyesj2
15a0b959aa Add jolokia metrics for influxdb dashboard
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-28 10:51:39 -04:00
reyesj2
fcb6a47e8c Remove redis.sh telegraf script when Kafka is global pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-26 21:10:41 -04:00
m0duspwnens
b5f656ae58 dont render pillar each time so-tcpreplay runs 2024-05-23 13:22:22 -04:00
reyesj2
382cd24a57 Small changes needed for using new Kafka docker image + added Kafka logging output to /opt/so/log/kafka/
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:39:21 -04:00
reyesj2
b1beb617b3 Logstash should be disabled when Kafka is enabled except when a minion override exists OR node is a standalone
- Standalone subscribes to Kafka topics via logstash for ingest

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:38:09 -04:00
reyesj2
91f8b1fef7 Set default replication factor back to Kafka default
If replication factor is > 1 Kafka will fail to start until another broker is added
  - For internal automated testing purposes a Standalone will be utilized

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-22 13:35:09 -04:00
reyesj2
2ad87bf1fe merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:30:45 -04:00
reyesj2
eca2a4a9c8 Logstash consumer threads should match topic partition count
- Default is set to 3. Too many consumer threads can leave Logstash worker threads idle and may require decreasing this value to saturate the workers

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:17:09 -04:00
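The commit above notes that Logstash consumer threads should match the topic partition count: within a consumer group, each partition is read by at most one consumer, so extra threads sit idle. A tiny sketch of that relationship:

```python
# Illustrative: with N partitions, at most N consumer threads in the same
# consumer group do useful work; the remainder are idle.
def consumer_utilization(consumer_threads: int, partitions: int):
    active = min(consumer_threads, partitions)
    idle = max(0, consumer_threads - partitions)
    return active, idle

print(consumer_utilization(3, 3))  # (3, 0) -- matched, no idle threads
print(consumer_utilization(6, 3))  # (3, 3) -- half the threads sit idle
```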
reyesj2
dff609d829 Add basic read-only metric collection from Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-08 16:13:09 -04:00
reyesj2
e960ae66a3 Merge remote-tracking branch 'remotes/origin/2.4/dev' into reyesj2/kafka 2024-05-02 15:12:27 -04:00
reyesj2
093cbc5ebc Reconfigure Kafka defaults
- Set default number of partitions per topic -> 3. Helps ensure that out of the box we can take advantage of multi-node Kafka clusters via load balancing across at least 3 brokers. Also, multiple searchnodes will be able to pull from each topic; in this case 3 searchnodes (consumers) would be able to pull from all topics concurrently.
- Set default replication factor -> 2. This is the minimum value required for redundancy. Every partition will have 1 replica. In this case, if we have 2 brokers, each topic will have 3 partitions (load balanced across brokers) and each partition will have a replica on a separate broker for redundancy.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 15:10:13 -04:00
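One hedged way to observe the layout these defaults produce: with 3 partitions per topic, a consumer group spanning the searchnodes can own one partition each. The broker address and group name below are assumptions for illustration.
```
# Illustrative only: show how a topic's partitions are assigned across the
# members of a consumer group (ideally one partition per searchnode).
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --describe --group logstash
```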
reyesj2
f663ef8c16 Setup Kafka to use PKCS12 and remove need for converting to JKS
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 14:53:28 -04:00
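A minimal sketch of the PKCS12 approach, bundling the node's PEM certificate and key directly into a .p12 keystore so no keytool conversion to JKS is required; the paths and password variable are illustrative assumptions.
```
# Illustrative only: bundle the Kafka cert and key into a PKCS12 keystore.
openssl pkcs12 -export \
  -in /etc/pki/kafka.crt -inkey /etc/pki/kafka.key \
  -name kafka -passout pass:"$KAFKA_PASS" \
  -out /etc/pki/kafka.p12
```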
reyesj2
de9f6425f9 Automatically switch between Kafka output policy and logstash output policy when globals.pipeline changes
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-02 12:13:46 -04:00
reyesj2
47ced60243 Create new Kafka output policy using salt
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 14:49:51 -04:00
reyesj2
58ebbfba20 Add kafka state to standalone highstate
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:03:14 -04:00
reyesj2
e164d15ec6 Generate different Kafka certs for different SO nodetypes
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:02:47 -04:00
reyesj2
3efdb4e532 Reconfigure logstash Kafka input
- TODO: Configure what topics are pulled to searchnodes via the SOC UI

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 13:01:29 -04:00
reyesj2
de0af58cf8 Write out Kafka pillar path
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:45:46 -04:00
reyesj2
84abfa6881 Remove check for existing value since Kafka pillar is made empty on upgrade
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:45:05 -04:00
reyesj2
6b60e85a33 Make kafka configuration changes prior to 2.4.70 upgrade
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 10:15:26 -04:00
reyesj2
63f3e23e2b soup typo
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:54:19 -04:00
reyesj2
eb1249618b Update soup for Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:27:01 -04:00
reyesj2
cef9bb1487 Dynamically create Kafka topics based on event.module from Elastic Agent logs, e.g. zeek-topic. Depends on Kafka brokers having auto.create.topics.enable set to true
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-05-01 09:16:13 -04:00
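A hedged check of the broker setting this behavior depends on; the broker address is illustrative.
```
# Illustrative only: confirm auto.create.topics.enable is true so that
# %{[event.module]}-style topics (e.g. zeek-topic) are created on first write.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default --describe --all \
  | grep auto.create.topics.enable
```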
reyesj2
bb49944b96 Setup elastic fleet rollover from logstash -> kafka output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 16:47:40 -04:00
reyesj2
fcc4050f86 Add id to grid-kafka fleet output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 12:59:53 -04:00
reyesj2
9c83a52c6d Add Kafka output to elastic-fleet setup. Includes separating topics by event.module with fallback to default-logs if no event.module is specified or doesn't match processors
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-30 12:01:31 -04:00
reyesj2
a6e8b25969 Add Kafka connectivity between manager -> receiver nodes.
Add connectivity to Kafka between other node types that may need to publish to Kafka.

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 15:48:57 -04:00
reyesj2
529bc01d69 Add missing configuration for nodes running Kafka broker role only
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 14:53:52 -04:00
reyesj2
11055b1d32 Rename kafkapass -> kafka_pass
Run so-kafka-clusterid within nodes.sls state so switchover is consistent

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 14:09:09 -04:00
reyesj2
fd9a91420d Use SOC UI to configure list of KRaft (Kafka) controllers for cluster
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 11:37:24 -04:00
reyesj2
529c8d7cf2 Remove salt reactor for Kafka
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 11:35:46 -04:00
reyesj2
086ebe1a7c Split kafka defaults between broker / controller
Setup config.map.jinja to update broker / controller / combined node types

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 09:08:14 -04:00
reyesj2
29c964cca1 Set kafka.nodes state to run first to populate kafka.nodes pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-29 09:04:52 -04:00
reyesj2
36573d6005 Update kafka cert permissions
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-23 16:45:36 -04:00
reyesj2
aa0c589361 Update kafka managed node pillar template to include its process.role
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-23 13:51:12 -04:00
reyesj2
685b80e519 Merge remote-tracking branch 'remotes/origin/kaffytaffy' into reyesj2/kafka 2024-04-22 16:45:59 -04:00
reyesj2
5a401af1fd Update kafka process_x_roles annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-22 16:44:35 -04:00
reyesj2
25d63f7516 Setup kafka reactor for managing kafka controllers globally
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-22 16:42:59 -04:00
m0duspwnens
6c5e0579cf logging changes. ensure salt master has pillarWatch engine 2024-04-19 09:32:32 -04:00
reyesj2
4ac04a1a46 add kafkapass soc annotation
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 16:46:36 -04:00
reyesj2
746128e37b update so-kafka-clusterid
This is a temporary script used to set up the kafka secret and clusterid needed for kafka to start. This script's functionality will be replaced by the soup/setup scripts

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 15:13:29 -04:00
reyesj2
fe81ffaf78 Variables no longer used. Replaced by map file
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 15:11:22 -04:00
m0duspwnens
1f6eb9cdc3 match keys better; go through files in reverse, first match found takes priority 2024-04-18 13:50:37 -04:00
reyesj2
5cc358de4e Update map files to handle empty kafka:nodes pillar
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-18 11:58:25 -04:00
m0duspwnens
610dd2c08d improve it 2024-04-18 11:11:14 -04:00
m0duspwnens
506bbd314d more comments, better logging 2024-04-18 10:26:10 -04:00
m0duspwnens
4caa6a10b5 watch a pillar in files and take action 2024-04-17 18:09:04 -04:00
reyesj2
665b7197a6 Update Kafka nodeid
Update so-minion to include running kafka.nodes state to ensure nodeid is generated for new brokers

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-17 17:08:41 -04:00
m0duspwnens
4b79623ce3 watch pillar files for changes and do something 2024-04-16 16:51:35 -04:00
m0duspwnens
c4994a208b restart salt minion if a manager and signing policies change 2024-04-15 11:37:21 -04:00
reyesj2
eedea2ca88 Merge remote-tracking branch 'remotes/origin/kaffytaffy' into reyesj2/kafka 2024-04-12 16:24:33 -04:00
reyesj2
de6ea29e3b update default process.role to broker only
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 16:18:53 -04:00
m0duspwnens
bb983d4ba2 just broker as default process 2024-04-12 16:16:03 -04:00
m0duspwnens
c014508519 need /opt/so/conf/ca/cacerts on receiver for kafka to run 2024-04-12 13:50:25 -04:00
reyesj2
fcfbb1e857 Merge kaffytaffy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:50:56 -04:00
reyesj2
911ee579a9 Typo
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:16:20 -04:00
reyesj2
a6ff92b099 Note to remove so-kafka-clusterid. Update soup and setup to generate needed kafka pillar values
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:11:18 -04:00
m0duspwnens
d73ba7dd3e order kafka pillar assignment 2024-04-12 11:55:26 -04:00
m0duspwnens
04ddcd5c93 add receiver managersearch and standalone to kafka.nodes pillar 2024-04-12 11:52:57 -04:00
reyesj2
af29ae1968 Merge kaffytaffy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 11:43:46 -04:00
reyesj2
fbd3cff90d Make global.pipeline use GLOBALMERGED value
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 11:21:19 -04:00
m0duspwnens
0ed9894b7e create kratos local pillar dirs during setup 2024-04-12 11:19:46 -04:00
m0duspwnens
a54a72c269 move kafka_cluster_id to kafka:cluster_id 2024-04-12 11:19:20 -04:00
m0duspwnens
f514e5e9bb add kafka to receiver 2024-04-11 16:23:05 -04:00
reyesj2
3955587372 Use global.pipeline for redis / kafka states
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 16:20:09 -04:00
reyesj2
6b28dc72e8 Update annotation for global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:38:33 -04:00
reyesj2
ca7253a589 Run kafka-clusterid script when pillar values are missing
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:38:03 -04:00
reyesj2
af53dcda1b Remove references to kafkanode
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:32:00 -04:00
m0duspwnens
d3bd56b131 disable logstash and redis if kafka enabled 2024-04-10 14:13:27 -04:00
m0duspwnens
e9e61ea2d8 Merge remote-tracking branch 'origin/2.4/dev' into kaffytaffy 2024-04-10 13:14:13 -04:00
m0duspwnens
86b984001d annotations and enable/disable from ui 2024-04-10 10:39:06 -04:00
m0duspwnens
fa7f8104c8 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-09 11:13:02 -04:00
m0duspwnens
bd5fe43285 jinja config files 2024-04-09 11:07:53 -04:00
m0duspwnens
d38051e806 fix client and server properties formatting 2024-04-09 10:36:37 -04:00
m0duspwnens
daa5342986 items not keys in for loop 2024-04-09 10:22:05 -04:00
m0duspwnens
c48436ccbf fix dict update 2024-04-09 10:19:17 -04:00
m0duspwnens
7aa00faa6c fix var 2024-04-09 09:31:54 -04:00
m0duspwnens
6217a7b9a9 add defaults and jinjafy kafka config 2024-04-09 09:27:21 -04:00
reyesj2
d67ebabc95 Remove logstash output to kafka pipeline. Add additional topics for searchnodes to ingest and add partition/offset info to event
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-08 16:38:03 -04:00
reyesj2
65274e89d7 Add client_id to logstash pipeline to identify which searchnode is pulling messages
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 15:38:00 -04:00
reyesj2
721e04f793 initial logstash input from kafka over ssl
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 13:37:14 -04:00
reyesj2
433309ef1a Generate kafka cluster id if it doesn't exist
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 09:35:12 -04:00
reyesj2
735cfb4c29 Autogenerate kafka topics when a message is sent to a non-existing topic
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:45:58 -04:00
reyesj2
6202090836 Merge remote-tracking branch 'origin/kaffytaffy' into reyesj2/kafka 2024-04-04 16:27:06 -04:00
reyesj2
436cbc1f06 Add kafka signing_policy for client/server auth. Add kafka-client cert on manager so manager can interact with kafka using its own cert
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:21:29 -04:00
reyesj2
40b08d737c Generate kafka keystore on changes to kafka.key
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:16:53 -04:00
m0duspwnens
4c5b42b898 restart container on server config changes 2024-04-04 15:47:01 -04:00
m0duspwnens
7a6b72ebac add so-kafka to manager for firewall 2024-04-04 15:46:11 -04:00
m0duspwnens
1b8584d4bb allow manager to manager on kafka ports 2024-04-03 15:36:35 -04:00
reyesj2
13105c4ab3 Generate certs for use with elasticfleet kafka output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-03 14:34:07 -04:00
reyesj2
dc27bbb01d Set kafka heap size. To be later configured from SOC
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-03 14:30:52 -04:00
m0duspwnens
b863060df1 kafka broker and listener on 0.0.0.0 2024-04-03 11:05:24 -04:00
m0duspwnens
18f95e867f port 9093 for kafka docker 2024-04-03 10:24:53 -04:00
m0duspwnens
ed6137a76a allow sensor and searchnode to connect to manager kafka ports 2024-04-03 10:24:10 -04:00
m0duspwnens
c3f02a698e add kafka nodes as extra hosts for the container 2024-04-03 10:23:36 -04:00
m0duspwnens
db106f8ca1 listen on 0.0.0.0 for CONTROLLER 2024-04-03 10:22:47 -04:00
m0duspwnens
8e47cc73a5 kafka.nodes pillar to lf 2024-04-03 08:54:17 -04:00
m0duspwnens
639bf05081 add so-manager to kafka.nodes pillar 2024-04-03 08:52:26 -04:00
m0duspwnens
4e142e0212 put alphabetical 2024-04-02 16:47:35 -04:00
m0duspwnens
c9bf1c86c6 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 16:40:47 -04:00
reyesj2
82830c8173 Fix typos and fix an error related to the elasticsearch saltstate being called from the logstash state. Logstash will be removed from kafkanodes in the future
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:37:39 -04:00
reyesj2
7f5741c43b Fix kafka storage setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:36:22 -04:00
reyesj2
643d4831c1 CRLF -> LF
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:35:14 -04:00
reyesj2
b032eed22a Update kafka to use manager docker registry
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:34:06 -04:00
reyesj2
1b49c8540e Fix kafka keystore script
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:32:15 -04:00
m0duspwnens
f7534a0ae3 make manager download so-kafka container 2024-04-02 16:01:12 -04:00
m0duspwnens
780ad9eb10 add kafka to manager nodes 2024-04-02 15:50:25 -04:00
m0duspwnens
e25bc8efe4 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 13:36:47 -04:00
reyesj2
26abe90671 Removed duplicate kafka setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 12:19:46 -04:00
reyesj2
446f1ffdf5 merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-03-25 13:55:48 -04:00
weslambert
2168698595 Update VERSION 2024-01-22 20:27:19 -05:00
reyesj2
8cf29682bb Update to merge in 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:41:23 -05:00
reyesj2
86dc7cc804 Kafka init
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:34:25 -05:00
161 changed files with 5410 additions and 1596 deletions

View File

@@ -536,7 +536,7 @@ secretGroup = 4
[allowlist]
description = "global allow lists"
regexes = ['''219-09-9999''', '''078-05-1120''', '''(9[0-9]{2}|666)-\d{2}-\d{4}''', '''RPM-GPG-KEY.*''', '''.*:.*StrelkaHexDump.*''', '''.*:.*PLACEHOLDER.*''']
regexes = ['''219-09-9999''', '''078-05-1120''', '''(9[0-9]{2}|666)-\d{2}-\d{4}''', '''RPM-GPG-KEY.*''', '''.*:.*StrelkaHexDump.*''', '''.*:.*PLACEHOLDER.*''', '''ssl_.*password''']
paths = [
'''gitleaks.toml''',
'''(.*?)(jpg|gif|doc|pdf|bin|svg|socket)$''',

View File

@@ -1,17 +1,17 @@
### 2.4.70-20240529 ISO image released on 2024/05/29
### 2.4.100-20240829 ISO image released on 2024/08/29
### Download and Verify
2.4.70-20240529 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.70-20240529.iso
2.4.100-20240829 ISO image:
https://download.securityonion.net/file/securityonion/securityonion-2.4.100-20240829.iso
MD5: 8FCCF31C2470D1ABA380AF196B611DEC
SHA1: EE5E8F8C14819E7A1FE423E6920531A97F39600B
SHA256: EF5E781D50D50660F452ADC54FD4911296ECBECED7879FA8E04687337CA89BEC
MD5: 377586C143FABD662DB414DEA49D46B7
SHA1: 69D4B94522789AF47075A9FF1354B069679AC366
SHA256: 52FBA5C8762B8DCF2945AD2837B3A19E63ADCC209AB510D7FD0F86AE713AA153
Signature for ISO image:
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.70-20240529.iso.sig
https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.100-20240829.iso.sig
Signing key:
https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,27 +25,29 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
Download the signature file for the ISO:
```
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.70-20240529.iso.sig
wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.100-20240829.iso.sig
```
Download the ISO image:
```
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.70-20240529.iso
wget https://download.securityonion.net/file/securityonion/securityonion-2.4.100-20240829.iso
```
Verify the downloaded ISO image using the signature file:
```
gpg --verify securityonion-2.4.70-20240529.iso.sig securityonion-2.4.70-20240529.iso
gpg --verify securityonion-2.4.100-20240829.iso.sig securityonion-2.4.100-20240829.iso
```
The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
```
gpg: Signature made Wed 29 May 2024 11:40:59 AM EDT using RSA key ID FE507013
gpg: Signature made Thu 29 Aug 2024 12:02:55 PM EDT using RSA key ID FE507013
gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: C804 A93D 36BE 0C73 3EA1 9644 7C10 60B7 FE50 7013
```
If it fails to verify, try downloading again. If it still fails to verify, try downloading from another computer or another network.
Once you've verified the ISO image, you're ready to proceed to our Installation guide:
https://docs.securityonion.net/en/2.4/installation.html

View File

@@ -5,9 +5,11 @@
| Version | Supported |
| ------- | ------------------ |
| 2.4.x | :white_check_mark: |
| 2.3.x | :white_check_mark: |
| 2.3.x | :x: |
| 16.04.x | :x: |
Security Onion 2.3 has reached End Of Life and is no longer supported.
Security Onion 16.04 has reached End Of Life and is no longer supported.
## Reporting a Vulnerability

View File

@@ -1 +1 @@
2.4.70
2.4.100

View File

@@ -0,0 +1,34 @@
{% set node_types = {} %}
{% for minionid, ip in salt.saltutil.runner(
'mine.get',
tgt='elasticsearch:enabled:true',
fun='network.ip_addrs',
tgt_type='pillar') | dictsort()
%}
# only add a node to the pillar if it returned an ip from the mine
{% if ip | length > 0%}
{% set hostname = minionid.split('_') | first %}
{% set node_type = minionid.split('_') | last %}
{% if node_type not in node_types.keys() %}
{% do node_types.update({node_type: {hostname: ip[0]}}) %}
{% else %}
{% if hostname not in node_types[node_type] %}
{% do node_types[node_type].update({hostname: ip[0]}) %}
{% else %}
{% do node_types[node_type][hostname].update(ip[0]) %}
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
elasticsearch:
nodes:
{% for node_type, values in node_types.items() %}
{{node_type}}:
{% for hostname, ip in values.items() %}
{{hostname}}:
ip: {{ip}}
{% endfor %}
{% endfor %}
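For reference, a hedged way to inspect what the template above renders into on a minion; the hostname and IP in the expected shape are purely illustrative.
```
# Illustrative only: view the mine-populated pillar produced by the template above.
salt-call --out=json pillar.get elasticsearch:nodes
# Expected shape (values are illustrative):
# {"local": {"searchnode": {"so-searchnode01": {"ip": "10.66.166.21"}}}}
```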

2
pillar/kafka/nodes.sls Normal file
View File

@@ -0,0 +1,2 @@
kafka:
nodes:

View File

@@ -1,16 +1,15 @@
{% set node_types = {} %}
{% set cached_grains = salt.saltutil.runner('cache.grains', tgt='*') %}
{% for minionid, ip in salt.saltutil.runner(
'mine.get',
tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-searchnode or G@role:so-heavynode or G@role:so-receiver or G@role:so-fleet ',
tgt='logstash:enabled:true',
fun='network.ip_addrs',
tgt_type='compound') | dictsort()
tgt_type='pillar') | dictsort()
%}
# only add a node to the pillar if it returned an ip from the mine
{% if ip | length > 0%}
{% set hostname = cached_grains[minionid]['host'] %}
{% set node_type = minionid.split('_')[1] %}
{% set hostname = minionid.split('_') | first %}
{% set node_type = minionid.split('_') | last %}
{% if node_type not in node_types.keys() %}
{% do node_types.update({node_type: {hostname: ip[0]}}) %}
{% else %}

34
pillar/redis/nodes.sls Normal file
View File

@@ -0,0 +1,34 @@
{% set node_types = {} %}
{% for minionid, ip in salt.saltutil.runner(
'mine.get',
tgt='redis:enabled:true',
fun='network.ip_addrs',
tgt_type='pillar') | dictsort()
%}
# only add a node to the pillar if it returned an ip from the mine
{% if ip | length > 0%}
{% set hostname = minionid.split('_') | first %}
{% set node_type = minionid.split('_') | last %}
{% if node_type not in node_types.keys() %}
{% do node_types.update({node_type: {hostname: ip[0]}}) %}
{% else %}
{% if hostname not in node_types[node_type] %}
{% do node_types[node_type].update({hostname: ip[0]}) %}
{% else %}
{% do node_types[node_type][hostname].update(ip[0]) %}
{% endif %}
{% endif %}
{% endif %}
{% endfor %}
redis:
nodes:
{% for node_type, values in node_types.items() %}
{{node_type}}:
{% for hostname, ip in values.items() %}
{{hostname}}:
ip: {{ip}}
{% endfor %}
{% endfor %}

View File

@@ -47,10 +47,12 @@ base:
- kibana.adv_kibana
- kratos.soc_kratos
- kratos.adv_kratos
- redis.nodes
- redis.soc_redis
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- elasticsearch.nodes
- elasticsearch.soc_elasticsearch
- elasticsearch.adv_elasticsearch
- elasticfleet.soc_elasticfleet
@@ -61,6 +63,9 @@ base:
- backup.adv_backup
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
- stig.soc_stig
'*_sensor':
@@ -144,10 +149,12 @@ base:
- idstools.adv_idstools
- kratos.soc_kratos
- kratos.adv_kratos
- redis.nodes
- redis.soc_redis
- redis.adv_redis
- influxdb.soc_influxdb
- influxdb.adv_influxdb
- elasticsearch.nodes
- elasticsearch.soc_elasticsearch
- elasticsearch.adv_elasticsearch
- elasticfleet.soc_elasticfleet
@@ -176,6 +183,9 @@ base:
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
'*_heavynode':
- elasticsearch.auth
@@ -209,17 +219,22 @@ base:
- logstash.nodes
- logstash.soc_logstash
- logstash.adv_logstash
- elasticsearch.nodes
- elasticsearch.soc_elasticsearch
- elasticsearch.adv_elasticsearch
{% if salt['file.file_exists']('/opt/so/saltstack/local/pillar/elasticsearch/auth.sls') %}
- elasticsearch.auth
{% endif %}
- redis.nodes
- redis.soc_redis
- redis.adv_redis
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- soc.license
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
'*_receiver':
- logstash.nodes
@@ -232,6 +247,10 @@ base:
- redis.adv_redis
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
- soc.license
'*_import':
- secrets

12
pyci.sh
View File

@@ -15,12 +15,16 @@ TARGET_DIR=${1:-.}
PATH=$PATH:/usr/local/bin
if ! which pytest &> /dev/null || ! which flake8 &> /dev/null ; then
echo "Missing dependencies. Consider running the following command:"
echo " python -m pip install flake8 pytest pytest-cov"
if [ ! -d .venv ]; then
python -m venv .venv
fi
source .venv/bin/activate
if ! pip install flake8 pytest pytest-cov pyyaml; then
echo "Unable to install dependencies."
exit 1
fi
pip install pytest pytest-cov
flake8 "$TARGET_DIR" "--config=${HOME_DIR}/pytest.ini"
python3 -m pytest "--cov-config=${HOME_DIR}/pytest.ini" "--cov=$TARGET_DIR" --doctest-modules --cov-report=term --cov-fail-under=100 "$TARGET_DIR"
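A hedged usage sketch of the updated script: it now bootstraps a local virtualenv and installs its own lint and test dependencies, so it can be pointed directly at a directory of Python sources (the target path below is a placeholder).
```
# Illustrative only: run flake8 and pytest (with 100% coverage enforced) against a directory.
bash pyci.sh path/to/python/sources
```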

View File

@@ -103,7 +103,8 @@
'utility',
'schedule',
'docker_clean',
'stig'
'stig',
'kafka'
],
'so-managersearch': [
'salt.master',
@@ -125,7 +126,8 @@
'utility',
'schedule',
'docker_clean',
'stig'
'stig',
'kafka'
],
'so-searchnode': [
'ssl',
@@ -134,7 +136,9 @@
'firewall',
'schedule',
'docker_clean',
'stig'
'stig',
'kafka.ca',
'kafka.ssl'
],
'so-standalone': [
'salt.master',
@@ -159,7 +163,8 @@
'schedule',
'tcpreplay',
'docker_clean',
'stig'
'stig',
'kafka'
],
'so-sensor': [
'ssl',
@@ -190,7 +195,9 @@
'telegraf',
'firewall',
'schedule',
'docker_clean'
'docker_clean',
'kafka',
'stig'
],
'so-desktop': [
'ssl',

View File

@@ -1,6 +1,3 @@
mine_functions:
x509.get_pem_entries: [/etc/pki/ca.crt]
x509_signing_policies:
filebeat:
- minions: '*'
@@ -70,3 +67,17 @@ x509_signing_policies:
- authorityKeyIdentifier: keyid,issuer:always
- days_valid: 820
- copypath: /etc/pki/issued_certs/
kafka:
- minions: '*'
- signing_private_key: /etc/pki/ca.key
- signing_cert: /etc/pki/ca.crt
- C: US
- ST: Utah
- L: Salt Lake City
- basicConstraints: "critical CA:false"
- keyUsage: "digitalSignature, keyEncipherment"
- subjectKeyIdentifier: hash
- authorityKeyIdentifier: keyid,issuer:always
- extendedKeyUsage: "serverAuth, clientAuth"
- days_valid: 820
- copypath: /etc/pki/issued_certs/
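A hedged way to confirm that a certificate issued under the new kafka signing policy carries both usages, since Kafka nodes present the same certificate for server and client authentication; the certificate path is illustrative.
```
# Illustrative only: issued Kafka certs should list both serverAuth and clientAuth.
openssl x509 -in /etc/pki/kafka.crt -noout -text | grep -A1 "Extended Key Usage"
```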

View File

@@ -14,6 +14,11 @@ net.core.wmem_default:
sysctl.present:
- value: 26214400
# Users are not a fan of console messages
kernel.printk:
sysctl.present:
- value: "3 4 1 3"
# Remove variables.txt from /tmp - This is temp
rmvariablesfile:
file.absent:

View File

@@ -57,6 +57,12 @@ copy_so-yaml_manager_tools_sbin:
- force: True
- preserve: True
copy_so-repo-sync_manager_tools_sbin:
file.copy:
- name: /opt/so/saltstack/default/salt/manager/tools/sbin/so-repo-sync
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-repo-sync
- preserve: True
# This section is used to put the new script in place so that it can be called during soup.
# It is faster than calling the states that normally manage them to put them in place.
copy_so-common_sbin:
@@ -94,6 +100,13 @@ copy_so-yaml_sbin:
- force: True
- preserve: True
copy_so-repo-sync_sbin:
file.copy:
- name: /usr/sbin/so-repo-sync
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-repo-sync
- force: True
- preserve: True
{% else %}
fix_23_soup_sbin:
cmd.run:

View File

@@ -8,7 +8,7 @@
# Elastic agent is not managed by salt. Because of this we must store this base information in a
# script that accompanies the soup system. Since so-common is one of those special soup files,
# and since this same logic is required during installation, it's included in this file.
ELASTIC_AGENT_TARBALL_VERSION="8.10.4"
ELASTIC_AGENT_TARBALL_VERSION="8.14.3"
ELASTIC_AGENT_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
ELASTIC_AGENT_MD5_URL="https://repo.securityonion.net/file/so-repo/prod/2.4/elasticagent/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.md5"
ELASTIC_AGENT_FILE="/nsm/elastic-fleet/artifacts/elastic-agent_SO-$ELASTIC_AGENT_TARBALL_VERSION.tar.gz"
@@ -31,6 +31,11 @@ if ! echo "$PATH" | grep -q "/usr/sbin"; then
export PATH="$PATH:/usr/sbin"
fi
# See if a proxy is set. If so use it.
if [ -f /etc/profile.d/so-proxy.sh ]; then
. /etc/profile.d/so-proxy.sh
fi
# Define a banner to separate sections
banner="========================================================================="

View File

@@ -50,6 +50,7 @@ container_list() {
"so-idh"
"so-idstools"
"so-influxdb"
"so-kafka"
"so-kibana"
"so-kratos"
"so-logstash"

View File

@@ -95,6 +95,8 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|shutdown process" # server not yet ready (logstash waiting on elastic)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|contain valid certificates" # server not yet ready (logstash waiting on elastic)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|failedaction" # server not yet ready (logstash waiting on elastic)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|block in start_workers" # server not yet ready (logstash waiting on elastic)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|block in buffer_initialize" # server not yet ready (logstash waiting on elastic)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|no route to host" # server not yet ready
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|not running" # server not yet ready
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|unavailable" # server not yet ready
@@ -147,6 +149,7 @@ if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|status 200" # false positive (request successful, contained error string in content)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|app_layer.error" # false positive (suricata 7) in stats.log e.g. app_layer.error.imap.parser | Total | 0
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|is not an ip string literal" # false positive (Open Canary logging out blank IP addresses)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|syncing rule" # false positive (rule sync log line includes rule name which can contain 'error')
fi
if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
@@ -170,6 +173,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|cannot join on an empty table" # InfluxDB flux query, import nodes
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|exhausting result iterator" # InfluxDB flux query mismatched table results (temporary data issue)
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|failed to finish run" # InfluxDB rare error, self-recoverable
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Unable to gather disk name" # InfluxDB known error, can't read disks because the container doesn't have them mounted
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|iteration"
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|communication packets"
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|use of closed"
@@ -205,6 +209,7 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|detect-parse" # Suricata encountering a malformed rule
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|integrity check failed" # Detections: Exclude false positive due to automated testing
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|syncErrors" # Detections: Not an actual error
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Initialized license manager" # SOC log: before fields.status was changed to fields.licenseStatus
fi
RESULT=0
@@ -241,6 +246,7 @@ exclude_log "mysqld.log" # MySQL is removed as of 2.4.70, logs may still be on
exclude_log "soctopus.log" # Soctopus is removed as of 2.4.70, logs may still be on disk
exclude_log "agentstatus.log" # ignore this log since it tracks agents in error state
exclude_log "detections_runtime-status_yara.log" # temporarily ignore this log until Detections is more stable
exclude_log "/nsm/kafka/data/" # ignore Kafka data directory from log check.
for log_file in $(cat /tmp/log_check_files); do
status "Checking log file $log_file"

View File

@@ -9,6 +9,9 @@
. /usr/sbin/so-common
software_raid=("SOSMN" "SOSMN-DE02" "SOSSNNV" "SOSSNNV-DE02" "SOS10k-DE02" "SOS10KNV" "SOS10KNV-DE02" "SOS10KNV-DE02" "SOS2000-DE02" "SOS-GOFAST-LT-DE02" "SOS-GOFAST-MD-DE02" "SOS-GOFAST-HV-DE02")
hardware_raid=("SOS1000" "SOS1000F" "SOSSN7200" "SOS5000" "SOS4000")
{%- if salt['grains.get']('sosmodel', '') %}
{%- set model = salt['grains.get']('sosmodel') %}
model={{ model }}
@@ -16,25 +19,35 @@ model={{ model }}
if [[ $model =~ ^(SO2AMI01|SO2AZI01|SO2GCI01)$ ]]; then
exit 0
fi
for i in "${software_raid[@]}"; do
if [[ "$model" == $i ]]; then
is_softwareraid=true
is_hwraid=false
break
fi
done
for i in "${hardware_raid[@]}"; do
if [[ "$model" == $i ]]; then
is_softwareraid=false
is_hwraid=true
break
fi
done
{%- else %}
echo "This is not an appliance"
exit 0
{%- endif %}
if [[ $model =~ ^(SOS10K|SOS500|SOS1000|SOS1000F|SOS4000|SOSSN7200|SOSSNNV|SOSMN)$ ]]; then
is_bossraid=true
fi
if [[ $model =~ ^(SOSSNNV|SOSMN)$ ]]; then
is_swraid=true
fi
if [[ $model =~ ^(SOS10K|SOS500|SOS1000|SOS1000F|SOS4000|SOSSN7200)$ ]]; then
is_hwraid=true
fi
check_nsm_raid() {
PERCCLI=$(/opt/raidtools/perccli/perccli64 /c0/v0 show|grep RAID|grep Optl)
MEGACTL=$(/opt/raidtools/megasasctl |grep optimal)
if [[ $APPLIANCE == '1' ]]; then
if [[ "$model" == "SOS500" || "$model" == "SOS500-DE02" ]]; then
#This doesn't have raid
HWRAID=0
else
if [[ -n $PERCCLI ]]; then
HWRAID=0
elif [[ -n $MEGACTL ]]; then
@@ -42,7 +55,6 @@ check_nsm_raid() {
else
HWRAID=1
fi
fi
}
@@ -50,17 +62,27 @@ check_nsm_raid() {
check_boss_raid() {
MVCLI=$(/usr/local/bin/mvcli info -o vd |grep status |grep functional)
MVTEST=$(/usr/local/bin/mvcli info -o vd | grep "No adapter")
BOSSNVMECLI=$(/usr/local/bin/mnv_cli info -o vd -i 0 | grep Functional)
# Check to see if this is a SM based system
if [[ -z $MVTEST ]]; then
if [[ -n $MVCLI ]]; then
# Is this NVMe Boss Raid?
if [[ "$model" =~ "-DE02" ]]; then
if [[ -n $BOSSNVMECLI ]]; then
BOSSRAID=0
else
BOSSRAID=1
fi
else
# This doesn't have boss raid so lets make it 0
BOSSRAID=0
# Check to see if this is a SM based system
if [[ -z $MVTEST ]]; then
if [[ -n $MVCLI ]]; then
BOSSRAID=0
else
BOSSRAID=1
fi
else
# This doesn't have boss raid so lets make it 0
BOSSRAID=0
fi
fi
}
@@ -79,14 +101,13 @@ SWRAID=0
BOSSRAID=0
HWRAID=0
if [[ $is_hwraid ]]; then
if [[ "$is_hwraid" == "true" ]]; then
check_nsm_raid
check_boss_raid
fi
if [[ $is_bossraid ]]; then
check_boss_raid
fi
if [[ $is_swraid ]]; then
if [[ "$is_softwareraid" == "true" ]]; then
check_software_raid
check_boss_raid
fi
sum=$(($SWRAID + $BOSSRAID + $HWRAID))

View File

@@ -10,7 +10,7 @@
. /usr/sbin/so-common
. /usr/sbin/so-image-common
REPLAYIFACE=${REPLAYIFACE:-$(lookup_pillar interface sensor)}
REPLAYIFACE=${REPLAYIFACE:-"{{salt['pillar.get']('sensor:interface', '')}}"}
REPLAYSPEED=${REPLAYSPEED:-10}
mkdir -p /opt/so/samples

View File

@@ -187,3 +187,12 @@ docker:
custom_bind_mounts: []
extra_hosts: []
extra_env: []
'so-kafka':
final_octet: 88
port_bindings:
- 0.0.0.0:9092:9092
- 0.0.0.0:9093:9093
- 0.0.0.0:8778:8778
custom_bind_mounts: []
extra_hosts: []
extra_env: []
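A quick, hedged check that the published so-kafka ports above are reachable on the host once the container is running; 9092 is the broker listener, 9093 is likely the controller listener, and 8778 is likely the Jolokia metrics endpoint.
```
# Illustrative only: verify the published Kafka ports are listening on the host.
ss -lntp | grep -E ':(9092|9093|8778)\b'
```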

View File

@@ -20,30 +20,30 @@ dockergroup:
dockerheldpackages:
pkg.installed:
- pkgs:
- containerd.io: 1.6.21-1
- docker-ce: 5:24.0.3-1~debian.12~bookworm
- docker-ce-cli: 5:24.0.3-1~debian.12~bookworm
- docker-ce-rootless-extras: 5:24.0.3-1~debian.12~bookworm
- containerd.io: 1.6.33-1
- docker-ce: 5:26.1.4-1~debian.12~bookworm
- docker-ce-cli: 5:26.1.4-1~debian.12~bookworm
- docker-ce-rootless-extras: 5:26.1.4-1~debian.12~bookworm
- hold: True
- update_holds: True
{% elif grains.oscodename == 'jammy' %}
dockerheldpackages:
pkg.installed:
- pkgs:
- containerd.io: 1.6.21-1
- docker-ce: 5:24.0.2-1~ubuntu.22.04~jammy
- docker-ce-cli: 5:24.0.2-1~ubuntu.22.04~jammy
- docker-ce-rootless-extras: 5:24.0.2-1~ubuntu.22.04~jammy
- containerd.io: 1.6.33-1
- docker-ce: 5:26.1.4-1~ubuntu.22.04~jammy
- docker-ce-cli: 5:26.1.4-1~ubuntu.22.04~jammy
- docker-ce-rootless-extras: 5:26.1.4-1~ubuntu.22.04~jammy
- hold: True
- update_holds: True
{% else %}
dockerheldpackages:
pkg.installed:
- pkgs:
- containerd.io: 1.4.9-1
- docker-ce: 5:20.10.8~3-0~ubuntu-focal
- docker-ce-cli: 5:20.10.5~3-0~ubuntu-focal
- docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-focal
- containerd.io: 1.6.33-1
- docker-ce: 5:26.1.4-1~ubuntu.20.04~focal
- docker-ce-cli: 5:26.1.4-1~ubuntu.20.04~focal
- docker-ce-rootless-extras: 5:26.1.4-1~ubuntu.20.04~focal
- hold: True
- update_holds: True
{% endif %}
@@ -51,10 +51,10 @@ dockerheldpackages:
dockerheldpackages:
pkg.installed:
- pkgs:
- containerd.io: 1.6.21-3.1.el9
- docker-ce: 24.0.4-1.el9
- docker-ce-cli: 24.0.4-1.el9
- docker-ce-rootless-extras: 24.0.4-1.el9
- containerd.io: 1.6.33-3.1.el9
- docker-ce: 3:26.1.4-1.el9
- docker-ce-cli: 1:26.1.4-1.el9
- docker-ce-rootless-extras: 26.1.4-1.el9
- hold: True
- update_holds: True
{% endif %}

View File

@@ -101,3 +101,4 @@ docker:
multiline: True
forcedType: "[]string"
so-zeek: *dockerOptions
so-kafka: *dockerOptions

View File

@@ -3,8 +3,8 @@ elastalert:
description: You can enable or disable Elastalert.
helpLink: elastalert.html
alerter_parameters:
title: Alerter Parameters
description: Optional configuration parameters for additional alerters that can be enabled for all Sigma rules. Filter for 'Alerter' in this Configuration screen to find the setting that allows these alerters to be enabled within the SOC ElastAlert module. Use YAML format for these parameters, and reference the ElastAlert 2 documentation, located at https://elastalert2.readthedocs.io, for available alerters and their required configuration parameters. A full update of the ElastAlert rule engine, via the Detections screen, is required in order to apply these changes. Requires a valid Security Onion license key.
title: Custom Configuration Parameters
description: Optional configuration parameters made available as defaults for all rules and alerters. Use YAML format for these parameters, and reference the ElastAlert 2 documentation, located at https://elastalert2.readthedocs.io, for available configuration parameters. Requires a valid Security Onion license key.
global: True
multiline: True
syntax: yaml

View File

@@ -97,6 +97,7 @@ elasticfleet:
- symantec_endpoint
- system
- tcp
- tenable_io
- tenable_sc
- ti_abusech
- ti_anomali

View File

@@ -27,7 +27,9 @@ wait_for_elasticsearch_elasticfleet:
so-elastic-fleet-auto-configure-logstash-outputs:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-outputs-update
- retry: True
- retry:
attempts: 4
interval: 30
{% endif %}
# If enabled, automatically update Fleet Server URLs & ES Connection
@@ -35,7 +37,9 @@ so-elastic-fleet-auto-configure-logstash-outputs:
so-elastic-fleet-auto-configure-server-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-urls-update
- retry: True
- retry:
attempts: 4
interval: 30
{% endif %}
# Automatically update Fleet Server Elasticsearch URLs & Agent Artifact URLs
@@ -43,12 +47,16 @@ so-elastic-fleet-auto-configure-server-urls:
so-elastic-fleet-auto-configure-elasticsearch-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-es-url-update
- retry: True
- retry:
attempts: 4
interval: 30
so-elastic-fleet-auto-configure-artifact-urls:
cmd.run:
- name: /usr/sbin/so-elastic-fleet-artifacts-url-update
- retry: True
- retry:
attempts: 4
interval: 30
{% endif %}

View File

@@ -0,0 +1,19 @@
{
"package": {
"name": "fleet_server",
"version": ""
},
"name": "fleet_server-1",
"namespace": "default",
"policy_id": "FleetServer_hostname",
"vars": {},
"inputs": {
"fleet_server-fleet-server": {
"enabled": true,
"vars": {
"custom": "server.ssl.supported_protocols: [\"TLSv1.2\", \"TLSv1.3\"]\nserver.ssl.cipher_suites: [ \"ECDHE-RSA-AES-128-GCM-SHA256\", \"ECDHE-RSA-AES-256-GCM-SHA384\", \"ECDHE-RSA-AES-128-CBC-SHA\", \"ECDHE-RSA-AES-256-CBC-SHA\", \"RSA-AES-128-GCM-SHA256\", \"RSA-AES-256-GCM-SHA384\"]"
},
"streams": {}
}
}
}

View File

@@ -5,7 +5,7 @@
"package": {
"name": "endpoint",
"title": "Elastic Defend",
"version": "8.10.2"
"version": "8.14.0"
},
"enabled": true,
"policy_id": "endpoints-initial",

View File

@@ -11,7 +11,7 @@
"winlogs-winlog": {
"enabled": true,
"streams": {
"winlog.winlog": {
"winlog.winlogs": {
"enabled": true,
"vars": {
"channel": "Microsoft-Windows-Windows Defender/Operational",

View File

@@ -20,7 +20,7 @@
],
"data_stream.dataset": "import",
"custom": "",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-1.43.0\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-1.38.0\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-1.43.0\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-1.43.0\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-1.38.0\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"processors": "- dissect:\n tokenizer: \"/nsm/import/%{import.id}/evtx/%{import.file}\"\n field: \"log.file.path\"\n target_prefix: \"\"\n- decode_json_fields:\n fields: [\"message\"]\n target: \"\"\n- drop_fields:\n fields: [\"host\"]\n ignore_missing: true\n- add_fields:\n target: data_stream\n fields:\n type: logs\n dataset: system.security\n- add_fields:\n target: event\n fields:\n dataset: system.security\n module: system\n imported: true\n- add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.security-1.59.0\n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-Sysmon/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.sysmon_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.sysmon_operational\n module: windows\n imported: true\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.sysmon_operational-1.45.1\n- if:\n equals:\n winlog.channel: 'Application'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.application\n - add_fields:\n target: event\n fields:\n dataset: system.application\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.application-1.59.0\n- if:\n equals:\n winlog.channel: 'System'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: system.system\n - add_fields:\n target: event\n fields:\n dataset: system.system\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-system.system-1.59.0\n \n- if:\n equals:\n winlog.channel: 'Microsoft-Windows-PowerShell/Operational'\n then: \n - add_fields:\n target: data_stream\n fields:\n dataset: windows.powershell_operational\n - add_fields:\n target: event\n fields:\n dataset: windows.powershell_operational\n module: windows\n - add_fields:\n target: \"@metadata\"\n fields:\n pipeline: logs-windows.powershell_operational-1.45.1\n- add_fields:\n target: data_stream\n fields:\n dataset: import",
"tags": [
"import"
]

View File

@@ -0,0 +1,29 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-elastic-fleet-common
# Get all the fleet policies
json_output=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -L -X GET "localhost:5601/api/fleet/agent_policies" -H 'kbn-xsrf: true')
# Extract the IDs that start with "FleetServer_"
POLICY=$(echo "$json_output" | jq -r '.items[] | select(.id | startswith("FleetServer_")) | .id')
# Iterate over each ID in the POLICY variable
for POLICYNAME in $POLICY; do
printf "\nUpdating Policy: $POLICYNAME\n"
# First get the Integration ID
INTEGRATION_ID=$(/usr/sbin/so-elastic-fleet-agent-policy-view "$POLICYNAME" | jq -r '.item.package_policies[] | select(.package.name == "fleet_server") | .id')
# Modify the default integration policy to update the policy_id and name with the correct naming
UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "$POLICYNAME" --arg name "fleet_server-$POLICYNAME" '
.policy_id = $policy_id |
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Now update the integration policy using the modified JSON
elastic_fleet_integration_update "$INTEGRATION_ID" "$UPDATED_INTEGRATION_POLICY"
done

View File

@@ -12,7 +12,10 @@ if [ ! -f /opt/so/state/eaintegrations.txt ]; then
# First, check for any package upgrades
/usr/sbin/so-elastic-fleet-package-upgrade
# Second, configure Elastic Defend Integration seperately
# Second, update Fleet Server policies
/sbin/so-elastic-fleet-integration-policy-elastic-fleet-server
# Third, configure Elastic Defend Integration separately
/usr/sbin/so-elastic-fleet-integration-policy-elastic-defend
# Initial Endpoints

View File

@@ -19,7 +19,7 @@ NUM_RUNNING=$(pgrep -cf "/bin/bash /sbin/so-elastic-agent-gen-installers")
for i in {1..30}
do
ENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
ENROLLMENTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "localhost:5601/api/fleet/enrollment_api_keys?perPage=100" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq .list | jq -r -c '.[] | select(.policy_id | contains("endpoints-initial")) | .api_key')
FLEETHOST=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/fleet_server_hosts/grid-default' | jq -r '.item.host_urls[]' | paste -sd ',')
if [[ $FLEETHOST ]] && [[ $ENROLLMENTOKEN ]]; then break; else sleep 10; fi
done

View File

@@ -21,64 +21,104 @@ function update_logstash_outputs() {
# Update Logstash Outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_logstash" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
}
function update_kafka_outputs() {
# Make sure SSL configuration is included in policy updates for Kafka output. SSL is configured in so-elastic-fleet-setup
SSL_CONFIG=$(curl -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs/so-manager_kafka" | jq -r '.item.ssl')
# Get current list of Logstash Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash')
JSON_STRING=$(jq -n \
--arg UPDATEDLIST "$NEW_LIST_JSON" \
--argjson SSL_CONFIG "$SSL_CONFIG" \
'{"name": "grid-kafka","type": "kafka","hosts": $UPDATEDLIST,"is_default": true,"is_default_monitoring": true,"config_yaml": "","ssl": $SSL_CONFIG}')
# Update Kafka outputs
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/outputs/so-manager_kafka" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" | jq
}
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
if [ "$CHECKSUM" != "so-manager_logstash" ]; then
printf "Failed to query for current Logstash Outputs..."
exit 1
fi
{% if GLOBALS.pipeline == "KAFKA" %}
# Get current list of Kafka Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_kafka')
# Get the current list of Logstash outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
if [ "$CHECKSUM" != "so-manager_kafka" ]; then
printf "Failed to query for current Kafka Outputs..."
exit 1
fi
declare -a NEW_LIST=()
# Get the current list of kafka outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
# Query for the current Grid Nodes that are running kafka
KAFKANODES=$(salt-call --out=json pillar.get kafka:nodes | jq '.local')
# Query for Kafka nodes with Broker role and add hostname to list
while IFS= read -r line; do
NEW_LIST+=("$line")
done < <(jq -r 'to_entries | .[] | select(.value.role | contains("broker")) | .key + ":9092"' <<< $KAFKANODES)
{# If global pipeline isn't set to KAFKA then assume default of REDIS / logstash #}
{% else %}
# Get current list of Logstash Outputs
RAW_JSON=$(curl -K /opt/so/conf/elasticsearch/curl.config 'http://localhost:5601/api/fleet/outputs/so-manager_logstash')
# Check to make sure that the server responded with good data - else, bail from script
CHECKSUM=$(jq -r '.item.id' <<< "$RAW_JSON")
if [ "$CHECKSUM" != "so-manager_logstash" ]; then
printf "Failed to query for current Logstash Outputs..."
exit 1
fi
# Get the current list of Logstash outputs & hash them
CURRENT_LIST=$(jq -c -r '.item.hosts' <<< "$RAW_JSON")
CURRENT_HASH=$(sha1sum <<< "$CURRENT_LIST" | awk '{print $1}')
declare -a NEW_LIST=()
{# If we select to not send to manager via SOC, then omit the code that adds manager to NEW_LIST #}
{% if ELASTICFLEETMERGED.enable_manager_output %}
# Create array & add initial elements
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then
NEW_LIST+=("{{ GLOBALS.url_base }}:5055")
else
NEW_LIST+=("{{ GLOBALS.url_base }}:5055" "{{ GLOBALS.hostname }}:5055")
fi
{% endif %}
# Query for FQDN entries & add them to the list
{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %}
CUSTOMFQDNLIST=('{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(' ') }}')
readarray -t -d ' ' CUSTOMFQDN < <(printf '%s' "$CUSTOMFQDNLIST")
for CUSTOMNAME in "${CUSTOMFQDN[@]}"
do
NEW_LIST+=("$CUSTOMNAME:5055")
done
{% endif %}
# Query for the current Grid Nodes that are running Logstash
LOGSTASHNODES=$(salt-call --out=json pillar.get logstash:nodes | jq '.local')
# Query for Receiver Nodes & add them to the list
if grep -q "receiver" <<< $LOGSTASHNODES; then
readarray -t RECEIVERNODES < <(jq -r ' .receiver | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${RECEIVERNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Query for Fleet Nodes & add them to the list
if grep -q "fleet" <<< $LOGSTASHNODES; then
readarray -t FLEETNODES < <(jq -r ' .fleet | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${FLEETNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
{# If we select to not send to manager via SOC, then omit the code that adds manager to NEW_LIST #}
{% if ELASTICFLEETMERGED.enable_manager_output %}
# Create array & add initial elements
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then
NEW_LIST+=("{{ GLOBALS.url_base }}:5055")
else
NEW_LIST+=("{{ GLOBALS.url_base }}:5055" "{{ GLOBALS.hostname }}:5055")
fi
{% endif %}
# Query for FQDN entries & add them to the list
{% if ELASTICFLEETMERGED.config.server.custom_fqdn | length > 0 %}
CUSTOMFQDNLIST=('{{ ELASTICFLEETMERGED.config.server.custom_fqdn | join(' ') }}')
readarray -t -d ' ' CUSTOMFQDN < <(printf '%s' "$CUSTOMFQDNLIST")
for CUSTOMNAME in "${CUSTOMFQDN[@]}"
do
NEW_LIST+=("$CUSTOMNAME:5055")
done
{% endif %}
# Query for the current Grid Nodes that are running Logstash
LOGSTASHNODES=$(salt-call --out=json pillar.get logstash:nodes | jq '.local')
# Query for Receiver Nodes & add them to the list
if grep -q "receiver" <<< $LOGSTASHNODES; then
readarray -t RECEIVERNODES < <(jq -r ' .receiver | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${RECEIVERNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Query for Fleet Nodes & add them to the list
if grep -q "fleet" <<< $LOGSTASHNODES; then
readarray -t FLEETNODES < <(jq -r ' .fleet | keys_unsorted[]' <<< $LOGSTASHNODES)
for NODE in "${FLEETNODES[@]}"
do
NEW_LIST+=("$NODE:5055")
done
fi
# Sort & hash the new list of Logstash Outputs
NEW_LIST_JSON=$(jq --compact-output --null-input '$ARGS.positional' --args -- "${NEW_LIST[@]}")
NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
@@ -87,9 +127,28 @@ NEW_HASH=$(sha1sum <<< "$NEW_LIST_JSON" | awk '{print $1}')
if [ "$NEW_HASH" = "$CURRENT_HASH" ]; then
printf "\nHashes match - no update needed.\n"
printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n"
# Since output can be KAFKA or LOGSTASH, we need to check if the policy set as default matches the value set in GLOBALS.pipeline and update if needed
printf "Checking if the correct output policy is set as default\n"
OUTPUT_DEFAULT=$(jq -r '.item.is_default' <<< $RAW_JSON)
OUTPUT_DEFAULT_MONITORING=$(jq -r '.item.is_default_monitoring' <<< $RAW_JSON)
if [[ "$OUTPUT_DEFAULT" = "false" || "$OUTPUT_DEFAULT_MONITORING" = "false" ]]; then
printf "Default output policy needs to be updated.\n"
{%- if GLOBALS.pipeline == "KAFKA" and 'gmd' in salt['pillar.get']('features', []) %}
update_kafka_outputs
{%- else %}
update_logstash_outputs
{%- endif %}
else
printf "Default output policy is set - no update needed.\n"
fi
exit 0
else
printf "\nHashes don't match - update needed.\n"
printf "Current List: $CURRENT_LIST\nNew List: $NEW_LIST_JSON\n"
{%- if GLOBALS.pipeline == "KAFKA" and 'gmd' in salt['pillar.get']('features', []) %}
update_kafka_outputs
{%- else %}
update_logstash_outputs
{%- endif %}
fi
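For reference, a hedged one-liner that mirrors the API calls used in the script above and shows which Fleet output is currently the default; it assumes the same curl config file used throughout these scripts.
```
# Illustrative only: list Fleet outputs and whether each is the default.
curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" \
  | jq -r '.items[] | "\(.id) default=\(.is_default) monitoring_default=\(.is_default_monitoring)"'
```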

View File

@@ -53,7 +53,8 @@ fi
printf "\n### Create ES Token ###\n"
ESTOKEN=$(curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/service_tokens" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' | jq -r .value)
### Create Outputs & Fleet URLs ###
### Create Outputs, Fleet Policy and Fleet URLs ###
# Create the Manager Elasticsearch Output first and set it as the default output
printf "\nAdd Manager Elasticsearch Output...\n"
ESCACRT=$(openssl x509 -in $INTCA)
JSON_STRING=$( jq -n \
@@ -62,7 +63,21 @@ JSON_STRING=$( jq -n \
curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
printf "\n\n"
printf "\nCreate Logstash Output Config if node is not an Import or Eval install\n"
# Create the Manager Fleet Server Host Agent Policy
# This has to be done while the Elasticsearch Output is set to the default Output
printf "Create Manager Fleet Server Policy...\n"
elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "false" "120"
# Modify the default integration policy to update the policy_id with the correct naming
UPDATED_INTEGRATION_POLICY=$(jq --arg policy_id "FleetServer_{{ GLOBALS.hostname }}" --arg name "fleet_server-{{ GLOBALS.hostname }}" '
.policy_id = $policy_id |
.name = $name' /opt/so/conf/elastic-fleet/integrations/fleet-server/fleet-server.json)
# Add the Fleet Server Integration to the new Fleet Policy
elastic_fleet_integration_create "$UPDATED_INTEGRATION_POLICY"
# Now we can create the Logstash Output and set it to be the default Output
printf "\n\nCreate Logstash Output Config if node is not an Import or Eval install\n"
{% if grains.role not in ['so-import', 'so-eval'] %}
LOGSTASHCRT=$(openssl x509 -in /etc/pki/elasticfleet-logstash.crt)
LOGSTASHKEY=$(openssl rsa -in /etc/pki/elasticfleet-logstash.key)
@@ -77,6 +92,11 @@ curl -K /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fl
printf "\n\n"
{%- endif %}
printf "\nCreate Kafka Output Config if node is not an Import or Eval install\n"
{% if grains.role not in ['so-import', 'so-eval'] %}
/usr/sbin/so-kafka-fleet-output-policy
{% endif %}
# Add Manager Hostname & URL Base to Fleet Host URLs
printf "\nAdd SO-Manager Fleet URL\n"
if [ "{{ GLOBALS.hostname }}" = "{{ GLOBALS.url_base }}" ]; then
@@ -96,16 +116,6 @@ printf "\n\n"
# Load Elasticsearch templates
/usr/sbin/so-elasticsearch-templates-load
# Manager Fleet Server Host
elastic_fleet_policy_create "FleetServer_{{ GLOBALS.hostname }}" "Fleet Server - {{ GLOBALS.hostname }}" "true" "120"
#Temp Fixup for ES Output bug
JSON_STRING=$( jq -n \
--arg NAME "FleetServer_{{ GLOBALS.hostname }}" \
'{"name": $NAME,"description": $NAME,"namespace":"default","monitoring_enabled":["logs"],"inactivity_timeout":120,"data_output_id":"so-manager_elasticsearch"}'
)
curl -K /opt/so/conf/elasticsearch/curl.config -L -X PUT "localhost:5601/api/fleet/agent_policies/FleetServer_{{ GLOBALS.hostname }}" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING"
# Initial Endpoints Policy
elastic_fleet_policy_create "endpoints-initial" "Initial Endpoint Policy" "false" "1209600"

View File

@@ -0,0 +1,52 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% if GLOBALS.role in ['so-manager', 'so-standalone', 'so-managersearch'] %}
. /usr/sbin/so-common
# Check to make sure that Kibana API is up & ready
RETURN_CODE=0
wait_for_web_response "http://localhost:5601/api/fleet/settings" "fleet" 300 "curl -K /opt/so/conf/elasticsearch/curl.config"
RETURN_CODE=$?
if [[ "$RETURN_CODE" != "0" ]]; then
printf "Kibana API not accessible, can't setup Elastic Fleet output policy for Kafka..."
exit 1
fi
output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
if ! echo "$output" | grep -q "so-manager_kafka"; then
KAFKACRT=$(openssl x509 -in /etc/pki/elasticfleet-kafka.crt)
KAFKAKEY=$(openssl rsa -in /etc/pki/elasticfleet-kafka.key)
KAFKACA=$(openssl x509 -in /etc/pki/tls/certs/intca.crt)
KAFKA_OUTPUT_VERSION="2.6.0"
JSON_STRING=$( jq -n \
--arg KAFKACRT "$KAFKACRT" \
--arg KAFKAKEY "$KAFKAKEY" \
--arg KAFKACA "$KAFKACA" \
--arg MANAGER_IP "{{ GLOBALS.manager_ip }}:9092" \
--arg KAFKA_OUTPUT_VERSION "$KAFKA_OUTPUT_VERSION" \
'{ "name": "grid-kafka", "id": "so-manager_kafka", "type": "kafka", "hosts": [ $MANAGER_IP ], "is_default": false, "is_default_monitoring": false, "config_yaml": "", "ssl": { "certificate_authorities": [ $KAFKACA ], "certificate": $KAFKACRT, "key": $KAFKAKEY, "verification_mode": "full" }, "proxy_id": null, "client_id": "Elastic", "version": $KAFKA_OUTPUT_VERSION, "compression": "none", "auth_type": "ssl", "partition": "round_robin", "round_robin": { "group_events": 1 }, "topics":[{"topic":"%{[event.module]}-securityonion","when":{"type":"regexp","condition":"event.module:.+"}},{"topic":"default-securityonion"}], "headers": [ { "key": "", "value": "" } ], "timeout": 30, "broker_timeout": 30, "required_acks": 1 }'
)
curl -sK /opt/so/conf/elasticsearch/curl.config -L -X POST "localhost:5601/api/fleet/outputs" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d "$JSON_STRING" -o /dev/null
refresh_output=$(curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" | jq -r .items[].id)
if ! echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nFailed to setup Elastic Fleet output policy for Kafka...\n"
exit 1
elif echo "$refresh_output" | grep -q "so-manager_kafka"; then
echo -e "\nSuccessfully setup Elastic Fleet output policy for Kafka...\n"
fi
elif echo "$output" | grep -q "so-manager_kafka"; then
echo -e "\nElastic Fleet output policy for Kafka already exists...\n"
fi
{% else %}
echo -e "\nNo update required...\n"
{% endif %}
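
The script registers the output only if the ID so-manager_kafka is not already present, then re-queries the Fleet API to confirm it was created. To inspect the result by hand, including the per-module topic routing defined in the payload, a follow-up query along these lines works (illustrative; field names follow the request payload above):

# Illustrative spot check of the registered Kafka output.
curl -sK /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/outputs" \
  | jq '.items[] | select(.id == "so-manager_kafka") | {name, hosts, topics}'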

View File

@@ -4,7 +4,7 @@
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
# Move our new CA over so Elastic and Logstash can use SSL with the internal CA

View File

@@ -1,23 +1,37 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% import_yaml 'elasticsearch/defaults.yaml' as ELASTICSEARCHDEFAULTS with context %}
{% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}
{# ES_LOGSTASH_NODES is the same as LOGSTASH_NODES from logstash/map.jinja but heavynodes and fleet nodes are removed #}
{% set ES_LOGSTASH_NODES = [] %}
{% set node_data = salt['pillar.get']('logstash:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{# this is a list of dicts containing hostname:ip for elasticsearch nodes that need to know about each other for cluster #}
{% set ELASTICSEARCH_SEED_HOSTS = [] %}
{% set node_data = salt['pillar.get']('elasticsearch:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{% for node_type, node_details in node_data.items() | sort %}
{% if node_type not in ['heavynode', 'fleet'] %}
{% if node_type != 'heavynode' %}
{% for hostname in node_data[node_type].keys() %}
{% do ES_LOGSTASH_NODES.append({hostname:node_details[hostname].ip}) %}
{% do ELASTICSEARCH_SEED_HOSTS.append({hostname:node_details[hostname].ip}) %}
{% endfor %}
{% endif %}
{% endfor %}
{# this is a list of dicts containing hostname:ip of all nodes running elasticsearch #}
{% set ELASTICSEARCH_NODES = [] %}
{% set node_data = salt['pillar.get']('elasticsearch:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{% for node_type, node_details in node_data.items() %}
{% for hostname in node_data[node_type].keys() %}
{% do ELASTICSEARCH_NODES.append({hostname:node_details[hostname].ip}) %}
{% endfor %}
{% endfor %}
{% if grains.id.split('_') | last in ['manager','managersearch','standalone'] %}
{% if ES_LOGSTASH_NODES | length > 1 %}
{% if ELASTICSEARCH_SEED_HOSTS | length > 1 %}
{% do ELASTICSEARCHDEFAULTS.elasticsearch.config.update({'discovery': {'seed_hosts': []}}) %}
{% for NODE in ES_LOGSTASH_NODES %}
{% for NODE in ELASTICSEARCH_SEED_HOSTS %}
{% do ELASTICSEARCHDEFAULTS.elasticsearch.config.discovery.seed_hosts.append(NODE.keys()|first) %}
{% endfor %}
{% endif %}
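
ELASTICSEARCH_SEED_HOSTS and ELASTICSEARCH_NODES are both built from the elasticsearch:nodes pillar rather than the logstash:nodes pillar used previously; seed hosts exclude heavynodes, while the node list includes every Elasticsearch node. On a manager, the pillar feeding these lists can be inspected with something like:

# Illustrative: dump the pillar behind the seed-host and node lists.
# Expected shape is roughly {<node_type>: {<hostname>: {ip: <address>}}}.
salt-call pillar.get elasticsearch:nodes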

View File

@@ -118,6 +118,11 @@ esingestconf:
- user: 930
- group: 939
# Remove .fleet_final_pipeline-1 because we are using global@custom now
so-fleet-final-pipeline-remove:
file.absent:
- name: /opt/so/conf/elasticsearch/ingest/.fleet_final_pipeline-1
# Auto-generate Elasticsearch ingest node pipelines from pillar
{% for pipeline, config in ELASTICSEARCHMERGED.pipelines.items() %}
es_ingest_conf_{{pipeline}}:

File diff suppressed because it is too large

View File

@@ -7,8 +7,8 @@
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'logstash/map.jinja' import LOGSTASH_NODES %}
{% from 'elasticsearch/config.map.jinja' import ES_LOGSTASH_NODES %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_NODES %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCH_SEED_HOSTS %}
{% from 'elasticsearch/config.map.jinja' import ELASTICSEARCHMERGED %}
{% set TEMPLATES = salt['pillar.get']('elasticsearch:templates', {}) %}
{% from 'elasticsearch/template.map.jinja' import ES_INDEX_SETTINGS %}
@@ -27,7 +27,7 @@ so-elasticsearch:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-elasticsearch'].ip }}
- extra_hosts:
{% for node in LOGSTASH_NODES %}
{% for node in ELASTICSEARCH_NODES %}
{% for hostname, ip in node.items() %}
- {{hostname}}:{{ip}}
{% endfor %}
@@ -38,7 +38,7 @@ so-elasticsearch:
{% endfor %}
{% endif %}
- environment:
{% if ES_LOGSTASH_NODES | length == 1 or GLOBALS.role == 'so-heavynode' %}
{% if ELASTICSEARCH_SEED_HOSTS | length == 1 or GLOBALS.role == 'so-heavynode' %}
- discovery.type=single-node
{% endif %}
- ES_JAVA_OPTS=-Xms{{ GLOBALS.elasticsearch.es_heap }} -Xmx{{ GLOBALS.elasticsearch.es_heap }} -Des.transport.cname_in_publish_address=true -Dlog4j2.formatMsgNoLookups=true

View File

@@ -62,6 +62,7 @@
{ "split": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "separator": "\\.", "target_field": "dataset_tag_temp" } },
{ "append": { "if": "ctx.dataset_tag_temp != null", "field": "tags", "value": "{{dataset_tag_temp.1}}" } },
{ "grok": { "if": "ctx.http?.response?.status_code != null", "field": "http.response.status_code", "patterns": ["%{NUMBER:http.response.status_code:long} %{GREEDYDATA}"]} },
{ "set": { "if": "ctx?.metadata?.kafka != null" , "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "dataset_tag_temp", "event.dataset_temp" ], "ignore_missing": true, "ignore_failure": true } }
{%- endraw %}
{%- if HIGHLANDER %}
@@ -73,6 +74,8 @@
}
{%- endif %}
{%- raw %}
,
{ "pipeline": { "name": "global@custom", "ignore_missing_pipeline": true, "description": "[Fleet] Global pipeline for all data streams" } }
]
}
{% endraw %}
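
The added set processor builds kafka.id by concatenating the partition, offset, and timestamp carried under metadata.kafka, and the trailing pipeline processor hands every document to global@custom when that pipeline exists. A rough way to see the kafka.id behavior in isolation is to simulate just that processor against a fabricated document (a sketch, not part of the deployed pipeline; it assumes so-elasticsearch-query passes extra curl arguments through):

# Sketch: simulate only the kafka.id set processor.
so-elasticsearch-query _ingest/pipeline/_simulate -XPOST -H 'Content-Type: application/json' -d '{
  "pipeline": { "processors": [ { "set": { "field": "kafka.id",
    "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}" } } ] },
  "docs": [ { "_source": { "metadata": { "kafka": { "partition": 0, "offset": 42, "timestamp": 1724700000000 } } } } ]
}'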

View File

@@ -1,106 +0,0 @@
{
"version": 3,
"_meta": {
"managed_by": "fleet",
"managed": true
},
"description": "Final pipeline for processing all incoming Fleet Agent documents. \n",
"processors": [
{
"date": {
"description": "Add time when event was ingested (and remove sub-seconds to improve storage efficiency)",
"tag": "truncate-subseconds-event-ingested",
"field": "_ingest.timestamp",
"target_field": "event.ingested",
"formats": [
"ISO8601"
],
"output_format": "date_time_no_millis",
"ignore_failure": true
}
},
{
"remove": {
"description": "Remove any pre-existing untrusted values.",
"field": [
"event.agent_id_status",
"_security"
],
"ignore_missing": true
}
},
{
"set_security_user": {
"field": "_security",
"properties": [
"authentication_type",
"username",
"realm",
"api_key"
]
}
},
{
"script": {
"description": "Add event.agent_id_status based on the API key metadata and the agent.id contained in the event.\n",
"tag": "agent-id-status",
"source": "boolean is_user_trusted(def ctx, def users) {\n if (ctx?._security?.username == null) {\n return false;\n }\n\n def user = null;\n for (def item : users) {\n if (item?.username == ctx._security.username) {\n user = item;\n break;\n }\n }\n\n if (user == null || user?.realm == null || ctx?._security?.realm?.name == null) {\n return false;\n }\n\n if (ctx._security.realm.name != user.realm) {\n return false;\n }\n\n return true;\n}\n\nString verified(def ctx, def params) {\n // No agent.id field to validate.\n if (ctx?.agent?.id == null) {\n return \"missing\";\n }\n\n // Check auth metadata from API key.\n if (ctx?._security?.authentication_type == null\n // Agents only use API keys.\n || ctx._security.authentication_type != 'API_KEY'\n // Verify the API key owner before trusting any metadata it contains.\n || !is_user_trusted(ctx, params.trusted_users)\n // Verify the API key has metadata indicating the assigned agent ID.\n || ctx?._security?.api_key?.metadata?.agent_id == null) {\n return \"auth_metadata_missing\";\n }\n\n // The API key can only be used represent the agent.id it was issued to.\n if (ctx._security.api_key.metadata.agent_id != ctx.agent.id) {\n // Potential masquerade attempt.\n return \"mismatch\";\n }\n\n return \"verified\";\n}\n\nif (ctx?.event == null) {\n ctx.event = [:];\n}\n\nctx.event.agent_id_status = verified(ctx, params);",
"params": {
"trusted_users": [
{
"username": "elastic/fleet-server",
"realm": "_service_account"
},
{
"username": "cloud-internal-agent-server",
"realm": "found"
},
{
"username": "elastic",
"realm": "reserved"
}
]
}
}
},
{
"remove": {
"field": "_security",
"ignore_missing": true
}
},
{ "set": { "ignore_failure": true, "field": "event.module", "value": "elastic_agent" } },
{ "split": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "separator": "\\.", "target_field": "module_temp" } },
{ "set": { "if": "ctx.module_temp != null", "override": true, "field": "event.module", "value": "{{module_temp.0}}" } },
{ "gsub": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "pattern": "^[^.]*.", "replacement": "", "target_field": "dataset_tag_temp" } },
{ "append": { "if": "ctx.dataset_tag_temp != null", "field": "tags", "value": "{{dataset_tag_temp}}" } },
{ "set": { "if": "ctx.network?.direction == 'egress'", "override": true, "field": "network.initiated", "value": "true" } },
{ "set": { "if": "ctx.network?.direction == 'ingress'", "override": true, "field": "network.initiated", "value": "false" } },
{ "set": { "if": "ctx.network?.type == 'ipv4'", "override": true, "field": "destination.ipv6", "value": "false" } },
{ "set": { "if": "ctx.network?.type == 'ipv6'", "override": true, "field": "destination.ipv6", "value": "true" } },
{ "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.dataset", "value": "import" } },
{ "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.namespace", "value": "so" } },
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp","ignore_failure": true, "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSX","yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
{ "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } },
{ "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } }
],
"on_failure": [
{
"remove": {
"field": "_security",
"ignore_missing": true,
"ignore_failure": true
}
},
{
"append": {
"field": "error.message",
"value": [
"failed in Fleet agent final_pipeline: {{ _ingest.on_failure_message }}"
]
}
}
]
}

View File

@@ -0,0 +1,27 @@
{
"version": 3,
"_meta": {
"managed_by": "securityonion",
"managed": true
},
"description": "Custom pipeline for processing all incoming Fleet Agent documents. \n",
"processors": [
{ "set": { "ignore_failure": true, "field": "event.module", "value": "elastic_agent" } },
{ "split": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "separator": "\\.", "target_field": "module_temp" } },
{ "set": { "if": "ctx.module_temp != null", "override": true, "field": "event.module", "value": "{{module_temp.0}}" } },
{ "gsub": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "pattern": "^[^.]*.", "replacement": "", "target_field": "dataset_tag_temp" } },
{ "append": { "if": "ctx.dataset_tag_temp != null", "field": "tags", "value": "{{dataset_tag_temp}}" } },
{ "set": { "if": "ctx.network?.direction == 'egress'", "override": true, "field": "network.initiated", "value": "true" } },
{ "set": { "if": "ctx.network?.direction == 'ingress'", "override": true, "field": "network.initiated", "value": "false" } },
{ "set": { "if": "ctx.network?.type == 'ipv4'", "override": true, "field": "destination.ipv6", "value": "false" } },
{ "set": { "if": "ctx.network?.type == 'ipv6'", "override": true, "field": "destination.ipv6", "value": "true" } },
{ "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.dataset", "value": "import" } },
{ "set": { "if": "ctx.tags.0 == 'import'", "override": true, "field": "data_stream.namespace", "value": "so" } },
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp","ignore_failure": true, "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSX","yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
{ "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } },
{ "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "set": { "if": "ctx?.metadata?.kafka != null" , "field": "kafka.id", "value": "{{metadata.kafka.partition}}{{metadata.kafka.offset}}{{metadata.kafka.timestamp}}", "ignore_failure": true } },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } }
]
}

View File

@@ -1,6 +1,7 @@
{
"description" : "suricata.alert",
"processors" : [
{ "set": { "field": "_index", "value": "logs-suricata.alerts-so" } },
{ "set": { "field": "tags","value": "alert" }},
{ "rename":{ "field": "message2.alert", "target_field": "rule", "ignore_failure": true } },
{ "rename":{ "field": "rule.signature", "target_field": "rule.name", "ignore_failure": true } },

View File

@@ -466,6 +466,13 @@ elasticsearch:
so-logs-sonicwall_firewall_x_log: *indexSettings
so-logs-snort_x_log: *indexSettings
so-logs-symantec_endpoint_x_log: *indexSettings
so-logs-tenable_io_x_asset: *indexSettings
so-logs-tenable_io_x_plugin: *indexSettings
so-logs-tenable_io_x_scan: *indexSettings
so-logs-tenable_io_x_vulnerability: *indexSettings
so-logs-tenable_sc_x_asset: *indexSettings
so-logs-tenable_sc_x_plugin: *indexSettings
so-logs-tenable_sc_x_vulnerability: *indexSettings
so-logs-ti_abusech_x_malware: *indexSettings
so-logs-ti_abusech_x_malwarebazaar: *indexSettings
so-logs-ti_abusech_x_threatfox: *indexSettings
@@ -521,6 +528,7 @@ elasticsearch:
so-endgame: *indexSettings
so-idh: *indexSettings
so-suricata: *indexSettings
so-suricata_x_alerts: *indexSettings
so-import: *indexSettings
so-kratos: *indexSettings
so-kismet: *indexSettings
@@ -529,6 +537,58 @@ elasticsearch:
so-strelka: *indexSettings
so-syslog: *indexSettings
so-zeek: *indexSettings
so-metrics-fleet_server_x_agent_status: &fleetMetricsSettings
index_sorting:
description: Sorts the index by event time, at the cost of additional processing resource consumption.
advanced: True
readonly: True
helpLink: elasticsearch.html
index_template:
ignore_missing_component_templates:
description: Ignore component templates if they aren't in Elasticsearch.
advanced: True
readonly: True
helpLink: elasticsearch.html
index_patterns:
description: Patterns for matching multiple indices or tables.
advanced: True
readonly: True
helpLink: elasticsearch.html
template:
settings:
index:
mode:
description: Type of mode used for this index. Time series indices can be used for metrics to reduce necessary storage.
advanced: True
readonly: True
helpLink: elasticsearch.html
number_of_replicas:
description: Number of replicas required for this index. Multiple replicas protects against data loss, but also increases storage costs.
advanced: True
readonly: True
helpLink: elasticsearch.html
composed_of:
description: The index template is composed of these component templates.
advanced: True
readonly: True
helpLink: elasticsearch.html
priority:
description: The priority of the index template.
advanced: True
readonly: True
helpLink: elasticsearch.html
data_stream:
hidden:
description: Hide the data stream.
advanced: True
readonly: True
helpLink: elasticsearch.html
allow_custom_routing:
description: Allow custom routing for the data stream.
advanced: True
readonly: True
helpLink: elasticsearch.html
so-metrics-fleet_server_x_agent_versions: *fleetMetricsSettings
so_roles:
so-manager: &soroleSettings
config:

View File

@@ -1,3 +1,8 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'elasticsearch/defaults.yaml' as ELASTICSEARCHDEFAULTS %}
{% set DEFAULT_GLOBAL_OVERRIDES = ELASTICSEARCHDEFAULTS.elasticsearch.index_settings.pop('global_overrides') %}
@@ -17,10 +22,26 @@
{% set ES_INDEX_SETTINGS = {} %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.update(salt['defaults.merge'](ES_INDEX_SETTINGS_GLOBAL_OVERRIDES, ES_INDEX_PILLAR, in_place=False)) %}
{% for index, settings in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES.items() %}
{# if policy isn't defined in the original index settings, then dont merge policy from the global_overrides #}
{# this will prevent so-elasticsearch-ilm-policy-load from trying to load policy on non ILM manged indices #}
{% if not ES_INDEX_SETTINGS_ORIG[index].policy is defined and ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].pop('policy') %}
{# prevent this action from being performed on custom-defined indices. #}
{# a custom-defined index is not present in either of the dictionaries and would fail to render. #}
{% if index in ES_INDEX_SETTINGS_ORIG and index in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES %}
{# don't merge policy from the global_overrides if policy isn't defined in the original index settings #}
{# this will prevent so-elasticsearch-ilm-policy-load from trying to load policy on non-ILM-managed indices #}
{% if not ES_INDEX_SETTINGS_ORIG[index].policy is defined and ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].pop('policy') %}
{% endif %}
{# this prevents an index from inheriting a policy phase from global overrides if it wasn't defined in the defaults. #}
{% if ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy is defined %}
{% for phase in ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.copy() %}
{% if ES_INDEX_SETTINGS_ORIG[index].policy.phases[phase] is not defined %}
{% do ES_INDEX_SETTINGS_GLOBAL_OVERRIDES[index].policy.phases.pop(phase) %}
{% endif %}
{% endfor %}
{% endif %}
{% endif %}
{% if settings.index_template is defined %}

View File

@@ -6,7 +6,7 @@
"name": "logs"
},
"codec": "best_compression",
"default_pipeline": "logs-elastic_agent-1.13.1",
"default_pipeline": "logs-elastic_agent-1.20.0",
"mapping": {
"total_fields": {
"limit": "10000"

View File

@@ -0,0 +1,201 @@
{
"template": {
"settings": {
"index": {
"lifecycle": {
"name": "metrics"
},
"default_pipeline": "metrics-fleet_server.agent_status-1.5.0",
"mapping": {
"total_fields": {
"limit": "1000"
}
}
}
},
"mappings": {
"dynamic": false,
"_source": {
"mode": "synthetic"
},
"properties": {
"cluster": {
"properties": {
"id": {
"time_series_dimension": true,
"type": "keyword"
}
}
},
"fleet": {
"properties": {
"agents": {
"properties": {
"offline": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"total": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"updating": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"inactive": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"healthy": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"unhealthy": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"unenrolled": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"enrolled": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"unhealthy_reason": {
"properties": {
"output": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"input": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"other": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
}
}
},
"upgrading_step": {
"properties": {
"rollback": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"requested": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"restarting": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"downloading": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"scheduled": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"extracting": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"replacing": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"failed": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"watching": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
}
}
}
}
}
}
},
"agent": {
"properties": {
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"@timestamp": {
"ignore_malformed": false,
"type": "date"
},
"data_stream": {
"properties": {
"namespace": {
"type": "constant_keyword"
},
"type": {
"type": "constant_keyword"
},
"dataset": {
"type": "constant_keyword"
}
}
},
"kibana": {
"properties": {
"uuid": {
"path": "agent.id",
"type": "alias"
},
"version": {
"path": "agent.version",
"type": "alias"
}
}
}
}
}
},
"_meta": {
"package": {
"name": "fleet_server"
},
"managed_by": "fleet",
"managed": true
}
}

View File

@@ -0,0 +1,102 @@
{
"template": {
"settings": {
"index": {
"lifecycle": {
"name": "metrics"
},
"default_pipeline": "metrics-fleet_server.agent_versions-1.5.0",
"mapping": {
"total_fields": {
"limit": "1000"
}
}
}
},
"mappings": {
"dynamic": false,
"_source": {
"mode": "synthetic"
},
"properties": {
"cluster": {
"properties": {
"id": {
"time_series_dimension": true,
"type": "keyword"
}
}
},
"fleet": {
"properties": {
"agent": {
"properties": {
"count": {
"time_series_metric": "gauge",
"meta": {},
"type": "long"
},
"version": {
"time_series_dimension": true,
"type": "keyword"
}
}
}
}
},
"agent": {
"properties": {
"id": {
"ignore_above": 1024,
"type": "keyword"
},
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"version": {
"ignore_above": 1024,
"type": "keyword"
}
}
},
"@timestamp": {
"ignore_malformed": false,
"type": "date"
},
"data_stream": {
"properties": {
"namespace": {
"type": "constant_keyword"
},
"type": {
"type": "constant_keyword"
},
"dataset": {
"type": "constant_keyword"
}
}
},
"kibana": {
"properties": {
"uuid": {
"path": "agent.id",
"type": "alias"
},
"version": {
"path": "agent.version",
"type": "alias"
}
}
}
}
}
},
"_meta": {
"package": {
"name": "fleet_server"
},
"managed_by": "fleet",
"managed": true
}
}

View File

@@ -0,0 +1,112 @@
{
"template": {
"mappings": {
"dynamic": "strict",
"properties": {
"binary": {
"type": "binary"
},
"boolean": {
"type": "boolean"
},
"byte": {
"type": "byte"
},
"created_at": {
"type": "date"
},
"created_by": {
"type": "keyword"
},
"date": {
"type": "date"
},
"date_nanos": {
"type": "date_nanos"
},
"date_range": {
"type": "date_range"
},
"deserializer": {
"type": "keyword"
},
"double": {
"type": "double"
},
"double_range": {
"type": "double_range"
},
"float": {
"type": "float"
},
"float_range": {
"type": "float_range"
},
"geo_point": {
"type": "geo_point"
},
"geo_shape": {
"type": "geo_shape"
},
"half_float": {
"type": "half_float"
},
"integer": {
"type": "integer"
},
"integer_range": {
"type": "integer_range"
},
"ip": {
"type": "ip"
},
"ip_range": {
"type": "ip_range"
},
"keyword": {
"type": "keyword"
},
"list_id": {
"type": "keyword"
},
"long": {
"type": "long"
},
"long_range": {
"type": "long_range"
},
"meta": {
"type": "object",
"enabled": false
},
"serializer": {
"type": "keyword"
},
"shape": {
"type": "shape"
},
"short": {
"type": "short"
},
"text": {
"type": "text"
},
"tie_breaker_id": {
"type": "keyword"
},
"updated_at": {
"type": "date"
},
"updated_by": {
"type": "keyword"
}
}
},
"aliases": {}
},
"version": 2,
"_meta": {
"managed": true,
"description": "default mappings for the .items index template installed by Kibana/Security"
}
}

View File

@@ -0,0 +1,55 @@
{
"template": {
"mappings": {
"dynamic": "strict",
"properties": {
"created_at": {
"type": "date"
},
"created_by": {
"type": "keyword"
},
"description": {
"type": "keyword"
},
"deserializer": {
"type": "keyword"
},
"immutable": {
"type": "boolean"
},
"meta": {
"type": "object",
"enabled": false
},
"name": {
"type": "keyword"
},
"serializer": {
"type": "keyword"
},
"tie_breaker_id": {
"type": "keyword"
},
"type": {
"type": "keyword"
},
"updated_at": {
"type": "date"
},
"updated_by": {
"type": "keyword"
},
"version": {
"type": "keyword"
}
}
},
"aliases": {}
},
"version": 2,
"_meta": {
"managed": true,
"description": "default mappings for the .lists index template installed by Kibana/Security"
}
}

View File

@@ -0,0 +1,29 @@
{
"template": {
"mappings": {
"properties": {
"host": {
"properties":{
"ip": {
"type": "ip"
}
}
},
"related": {
"properties":{
"ip": {
"type": "ip"
}
}
},
"source": {
"properties":{
"ip": {
"type": "ip"
}
}
}
}
}
}
}

View File

@@ -20,7 +20,7 @@ if [ ! -f /opt/so/state/espipelines.txt ]; then
cd ${ELASTICSEARCH_INGEST_PIPELINES}
echo "Loading pipelines..."
for i in .[a-z]* *;
for i in *;
do
echo $i;
retry 5 5 "so-elasticsearch-query _ingest/pipeline/$i -d@$i -XPUT | grep '{\"acknowledged\":true}'" || fail "Could not load pipeline: $i"
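
With .fleet_final_pipeline-1 removed, the loader no longer needs the dot-file glob and simply PUTs every file in the ingest directory to _ingest/pipeline/<filename>. The manual equivalent for a single pipeline file would be roughly (assuming the default ingest config directory):

# Illustrative manual load of one pipeline file; the loop above does this for every file.
cd /opt/so/conf/elasticsearch/ingest
so-elasticsearch-query _ingest/pipeline/common -d@common -XPUT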

View File

@@ -4,7 +4,7 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{%- from 'vars/globals.map.jinja' import GLOBALS %}
{%- set node_data = salt['pillar.get']('logstash:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{%- set node_data = salt['pillar.get']('elasticsearch:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
. /usr/sbin/so-common

View File

@@ -40,9 +40,9 @@ fi
# Iterate through the output of _cat/allocation for each node in the cluster to determine the total available space
{% if GLOBALS.role == 'so-manager' %}
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $5}'); do
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $8}'); do
{% else %}
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $5}'); do
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $8}'); do
{% endif %}
size=$(echo $i | grep -oE '[0-9].*' | awk '{print int($1+0.5)}')
unit=$(echo $i | grep -oE '[A-Za-z]+')

View File

@@ -13,10 +13,10 @@ TOTAL_USED_SPACE=0
# Iterate through the output of _cat/allocation for each node in the cluster to determine the total used space
{% if GLOBALS.role == 'so-manager' %}
# Get total disk space - disk.total
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $3}'); do
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | grep -v "{{ GLOBALS.manager }}$" | awk '{print $6}'); do
{% else %}
# Get disk space taken up by indices - disk.indices
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $2}'); do
for i in $(/usr/sbin/so-elasticsearch-query _cat/allocation | awk '{print $5}'); do
{% endif %}
size=$(echo $i | grep -oE '[0-9].*' | awk '{print int($1+0.5)}')
unit=$(echo $i | grep -oE '[A-Za-z]+')
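
Both this script and the previous one parse positional columns from _cat/allocation, and those positions shift whenever Elasticsearch adds columns to that API, which is presumably what prompted the change from $2/$3/$5 to $5/$6/$8. Two hedged ways to sanity-check the positions:

# Print the header row to confirm which field each column number refers to.
/usr/sbin/so-elasticsearch-query '_cat/allocation?v' | head -2
# Or request columns by name so position no longer matters.
/usr/sbin/so-elasticsearch-query '_cat/allocation?h=node,disk.indices,disk.used,disk.avail,disk.total'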

View File

@@ -10,10 +10,26 @@
{%- for index, settings in ES_INDEX_SETTINGS.items() %}
{%- if settings.policy is defined %}
echo
echo "Setting up {{ index }}-logs policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
{%- if index == 'so-logs-detections.alerts' %}
echo
echo "Setting up so-logs-detections.alerts-so policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-so" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
{%- elif index == 'so-logs-soc' %}
echo
echo "Setting up so-soc-logs policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/so-soc-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
echo
echo "Setting up {{ index }}-logs policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
{%- else %}
echo
echo "Setting up {{ index }}-logs policy..."
curl -K /opt/so/conf/elasticsearch/curl.config -b "sid=$SESSIONCOOKIE" -s -k -L -X PUT "https://localhost:9200/_ilm/policy/{{ index }}-logs" -H 'Content-Type: application/json' -d'{ "policy": {{ settings.policy | tojson(true) }} }'
echo
{%- endif %}
{%- endif %}
{%- endfor %}
echo
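
Every index that defines a policy gets an ILM policy written to _ilm/policy/<index>-logs, with two special cases: so-logs-detections.alerts also writes a -so suffixed policy, and so-logs-soc additionally writes so-soc-logs. An illustrative check that a given policy landed:

# Illustrative verification that the so-soc-logs ILM policy exists.
so-elasticsearch-query _ilm/policy/so-soc-logs | jq 'keys'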

View File

@@ -5,7 +5,6 @@
# Elastic License 2.0.
{%- import_yaml 'elasticfleet/defaults.yaml' as ELASTICFLEETDEFAULTS %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{%- set SUPPORTED_PACKAGES = salt['pillar.get']('elasticfleet:packages', default=ELASTICFLEETDEFAULTS.elasticfleet.packages, merge=True) %}
STATE_FILE_INITIAL=/opt/so/state/estemplates_initial_load_attempt.txt
STATE_FILE_SUCCESS=/opt/so/state/estemplates.txt
@@ -68,9 +67,9 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
echo -n "Waiting for ElasticSearch..."
retry 240 1 "so-elasticsearch-query / -k --output /dev/null --silent --head --fail" || fail "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
{% if GLOBALS.role != 'so-heavynode' %}
SESSIONCOOKIE=$(curl -s -K /opt/so/conf/elasticsearch/curl.config -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
INSTALLED=$(elastic_fleet_package_is_installed {{ SUPPORTED_PACKAGES[0] }} )
if [ "$INSTALLED" != "installed" ]; then
TEMPLATE="logs-endpoint.alerts@package"
INSTALLED=$(so-elasticsearch-query _component_template/$TEMPLATE | jq -r .component_templates[0].name)
if [ "$INSTALLED" != "$TEMPLATE" ]; then
echo
echo "Packages not yet installed."
echo
@@ -134,7 +133,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
TEMPLATE=${i::-14}
COMPONENT_PATTERN=${TEMPLATE:3}
MATCH=$(echo "$TEMPLATE" | grep -E "^so-logs-|^so-metrics" | grep -vE "detections|osquery")
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" ]]; then
if [[ -n "$MATCH" && ! "$COMPONENT_LIST" =~ "$COMPONENT_PATTERN" && ! "$COMPONENT_PATTERN" =~ logs-http_endpoint\.generic|logs-winlog\.winlog ]]; then
load_failures=$((load_failures+1))
echo "Component template does not exist for $COMPONENT_PATTERN. The index template will not be loaded. Load failures: $load_failures"
else
@@ -153,7 +152,7 @@ if [ ! -f $STATE_FILE_SUCCESS ]; then
cd - >/dev/null
if [[ $load_failures -eq 0 ]]; then
echo "All template loaded successfully"
echo "All templates loaded successfully"
touch $STATE_FILE_SUCCESS
else
echo "Encountered $load_failures templates that were unable to load, likely due to missing dependencies that will be available later; will retry on next highstate"

View File

@@ -27,6 +27,7 @@
'so-elastic-fleet',
'so-elastic-fleet-package-registry',
'so-influxdb',
'so-kafka',
'so-kibana',
'so-kratos',
'so-logstash',
@@ -80,6 +81,7 @@
{% set NODE_CONTAINERS = [
'so-logstash',
'so-redis',
'so-kafka'
] %}
{% elif GLOBALS.role == 'so-idh' %}

View File

@@ -90,6 +90,14 @@ firewall:
tcp:
- 8086
udp: []
kafka_controller:
tcp:
- 9093
udp: []
kafka_data:
tcp:
- 9092
udp: []
kibana:
tcp:
- 5601
@@ -753,7 +761,6 @@ firewall:
- beats_5044
- beats_5644
- beats_5056
- redis
- elasticsearch_node
- elastic_agent_control
- elastic_agent_data
@@ -1267,35 +1274,51 @@ firewall:
chain:
DOCKER-USER:
hostgroups:
desktop:
portgroups:
- elastic_agent_data
fleet:
portgroups:
- beats_5056
- elastic_agent_data
idh:
portgroups:
- elastic_agent_data
sensor:
portgroups:
- beats_5044
- beats_5644
- elastic_agent_data
searchnode:
portgroups:
- redis
- beats_5644
- elastic_agent_data
standalone:
portgroups:
- redis
- elastic_agent_data
manager:
portgroups:
- elastic_agent_data
managersearch:
portgroups:
- redis
- beats_5644
- elastic_agent_data
self:
portgroups:
- redis
- beats_5644
- elastic_agent_data
beats_endpoint:
portgroups:
- beats_5044
beats_endpoint_ssl:
portgroups:
- beats_5644
elastic_agent_endpoint:
portgroups:
- elastic_agent_data
endgame:
portgroups:
- endgame
receiver:
portgroups: []
customhostgroup0:
portgroups: []
customhostgroup1:

View File

@@ -18,4 +18,28 @@
{% endfor %}
{% endif %}
{# Only add Kafka firewall items when Kafka enabled #}
{% set role = GLOBALS.role.split('-')[1] %}
{% if GLOBALS.pipeline == 'KAFKA' and role in ['manager', 'managersearch', 'standalone'] %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[role].portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and role == 'receiver' %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.self.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.standalone.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.manager.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.managersearch.portgroups.append('kafka_controller') %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups.receiver.portgroups.append('kafka_controller') %}
{% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and role in ['manager', 'managersearch', 'standalone', 'receiver'] %}
{% for r in ['manager', 'managersearch', 'standalone', 'receiver', 'fleet', 'idh', 'sensor', 'searchnode','heavynode', 'elastic_agent_endpoint', 'desktop'] %}
{% if FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r] is defined %}
{% do FIREWALL_DEFAULT.firewall.role[role].chain["DOCKER-USER"].hostgroups[r].portgroups.append('kafka_data') %}
{% endif %}
{% endfor %}
{% endif %}
{% set FIREWALL_MERGED = salt['pillar.get']('firewall', FIREWALL_DEFAULT.firewall, merge=True) %}

View File

@@ -120,6 +120,12 @@ firewall:
influxdb:
tcp: *tcpsettings
udp: *udpsettings
kafka_controller:
tcp: *tcpsettings
udp: *udpsettings
kafka_data:
tcp: *tcpsettings
udp: *udpsettings
kibana:
tcp: *tcpsettings
udp: *udpsettings
@@ -939,7 +945,6 @@ firewall:
portgroups: *portgroupshost
customhostgroup9:
portgroups: *portgroupshost
idh:
chain:
DOCKER-USER:

View File

@@ -1,2 +1,3 @@
global:
pcapengine: STENO
pipeline: REDIS

View File

@@ -36,9 +36,10 @@ global:
global: True
advanced: True
pipeline:
description: Sets which pipeline technology for events to use. Currently only Redis is supported.
description: Sets which pipeline technology for events to use. Currently only Redis is fully supported. Kafka is experimental and requires a Security Onion Pro license.
regex: ^(REDIS|KAFKA)$
regexFailureMessage: You must enter either REDIS or KAFKA.
global: True
readonly: True
advanced: True
repo_host:
description: Specify the host where operating system packages will be served from.

View File

@@ -33,6 +33,19 @@ idstools_sbin_jinja:
- file_mode: 755
- template: jinja
suricatacustomdirsfile:
file.directory:
- name: /nsm/rules/detect-suricata/custom_file
- user: 939
- group: 939
- makedirs: True
suricatacustomdirsurl:
file.directory:
- name: /nsm/rules/detect-suricata/custom_temp
- user: 939
- group: 939
{% else %}
{{sls}}_state_not_allowed:

View File

@@ -1,6 +1,8 @@
{%- from 'vars/globals.map.jinja' import GLOBALS -%}
{%- from 'idstools/map.jinja' import IDSTOOLSMERGED -%}
{%- from 'soc/merged.map.jinja' import SOCMERGED -%}
--suricata-version=7.0.3
--merged=/opt/so/rules/nids/suri/all.rules
--output=/nsm/rules/detect-suricata/custom_temp
--local=/opt/so/rules/nids/suri/local.rules
{%- if GLOBALS.md_engine == "SURICATA" %}
--local=/opt/so/rules/nids/suri/extraction.rules
@@ -10,8 +12,12 @@
--disable=/opt/so/idstools/etc/disable.conf
--enable=/opt/so/idstools/etc/enable.conf
--modify=/opt/so/idstools/etc/modify.conf
{%- if IDSTOOLSMERGED.config.urls | length > 0 %}
{%- for URL in IDSTOOLSMERGED.config.urls %}
--url={{ URL }}
{%- endfor %}
{%- if SOCMERGED.config.server.modules.suricataengine.customRulesets %}
{%- for ruleset in SOCMERGED.config.server.modules.suricataengine.customRulesets %}
{%- if 'url' in ruleset %}
--url={{ ruleset.url }}
{%- elif 'file' in ruleset %}
--local={{ ruleset.file }}
{%- endif %}
{%- endfor %}
{%- endif %}

View File

@@ -11,8 +11,8 @@ if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
{%- set proxy = salt['pillar.get']('manager:proxy') %}
{%- set noproxy = salt['pillar.get']('manager:no_proxy', '') %}
# Download the rules from the internet
{%- if proxy %}
# Download the rules from the internet
export http_proxy={{ proxy }}
export https_proxy={{ proxy }}
export no_proxy="{{ noproxy }}"
@@ -20,14 +20,12 @@ if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
mkdir -p /nsm/rules/suricata
chown -R socore:socore /nsm/rules/suricata
{%- if not GLOBALS.airgap %}
# Download the rules from the internet
{%- if GLOBALS.airgap != 'True' %}
{%- if IDSTOOLSMERGED.config.ruleset == 'ETOPEN' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force
{%- elif IDSTOOLSMERGED.config.ruleset == 'ETPRO' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --etpro={{ IDSTOOLSMERGED.config.oinkcode }}
{%- elif IDSTOOLSMERGED.config.ruleset == 'TALOS' %}
docker exec so-idstools idstools-rulecat -v --suricata-version 6.0 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --url=https://www.snort.org/rules/snortrules-snapshot-2983.tar.gz?oinkcode={{ IDSTOOLSMERGED.config.oinkcode }}
docker exec so-idstools idstools-rulecat -v --suricata-version 7.0.3 -o /nsm/rules/suricata/ --merged=/nsm/rules/suricata/emerging-all.rules --force --etpro={{ IDSTOOLSMERGED.config.oinkcode }}
{%- endif %}
{%- endif %}

File diff suppressed because one or more lines are too long

salt/kafka/ca.sls Normal file (37 lines)
View File

@@ -0,0 +1,37 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set KAFKATRUST = salt['pillar.get']('kafka:truststore') %}
kafkaconfdir:
file.directory:
- name: /opt/so/conf/kafka
- user: 960
- group: 960
- makedirs: True
{% if GLOBALS.is_manager %}
# Manager runs so-kafka-trust to create the truststore for Kafka SSL communication
kafka_truststore:
cmd.script:
- source: salt://kafka/tools/sbin_jinja/so-kafka-trust
- template: jinja
- cwd: /opt/so
- defaults:
GLOBALS: {{ GLOBALS }}
KAFKATRUST: {{ KAFKATRUST }}
{% endif %}
kafkacertz:
file.managed:
- name: /opt/so/conf/kafka/kafka-truststore.jks
- source: salt://kafka/files/kafka-truststore
- user: 960
- group: 931
{% endif %}
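
On the manager, so-kafka-trust builds a JKS truststore containing the internal CA, and the kafkacertz state then distributes it to /opt/so/conf/kafka/kafka-truststore.jks. If needed, the store contents can be inspected with keytool; this is an illustrative command, not part of the state, and keytool will prompt for the store password held in the kafka pillar:

# Illustrative: list the CA entries in the generated truststore.
keytool -list -keystore /opt/so/conf/kafka/kafka-truststore.jks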

View File

@@ -0,0 +1,85 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set KAFKA_NODES_PILLAR = salt['pillar.get']('kafka:nodes') %}
{% set KAFKA_PASSWORD = salt['pillar.get']('kafka:config:password') %}
{% set KAFKA_TRUSTPASS = salt['pillar.get']('kafka:config:trustpass') %}
{# Create list of KRaft controllers #}
{% set controllers = [] %}
{# Check for Kafka nodes with controller in process_x_roles #}
{% for node in KAFKA_NODES_PILLAR %}
{% if 'controller' in KAFKA_NODES_PILLAR[node].role %}
{% do controllers.append(KAFKA_NODES_PILLAR[node].nodeid ~ "@" ~ node ~ ":9093") %}
{% endif %}
{% endfor %}
{% set kafka_controller_quorum_voters = ','.join(controllers) %}
{# By default all Kafka-eligible nodes are given the role of broker, except for the
grid MANAGER (broker,controller), until overridden through the SOC UI #}
{% set node_type = salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname + ':role') %}
{# Generate server.properties for 'broker', 'controller', and 'broker,controller' node types;
anything above this line is configuration needed for ALL Kafka nodes #}
{% if node_type == 'broker' %}
{% do KAFKAMERGED.config.broker.update({'advertised_x_listeners': 'BROKER://'+ GLOBALS.node_ip +':9092' }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.broker.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.broker.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{# Nodes with only the 'broker' role need to have the below settings for communicating with controller nodes #}
{% do KAFKAMERGED.config.broker.update({'controller_x_listener_x_names': KAFKAMERGED.config.controller.controller_x_listener_x_names }) %}
{% do KAFKAMERGED.config.broker.update({
'listener_x_security_x_protocol_x_map': KAFKAMERGED.config.broker.listener_x_security_x_protocol_x_map
+ ',' + KAFKAMERGED.config.controller.listener_x_security_x_protocol_x_map })
%}
{% endif %}
{% if node_type == 'controller' %}
{% do KAFKAMERGED.config.controller.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.controller.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.controller.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% endif %}
{# Kafka nodes of this type are not recommended for use outside of development / testing. #}
{% if node_type == 'broker,controller' %}
{% do KAFKAMERGED.config.broker.update({'advertised_x_listeners': 'BROKER://'+ GLOBALS.node_ip +':9092' }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_listener_x_names': KAFKAMERGED.config.controller.controller_x_listener_x_names }) %}
{% do KAFKAMERGED.config.broker.update({'controller_x_quorum_x_voters': kafka_controller_quorum_voters }) %}
{% do KAFKAMERGED.config.broker.update({'node_x_id': salt['pillar.get']('kafka:nodes:'+ GLOBALS.hostname +':nodeid') }) %}
{% do KAFKAMERGED.config.broker.update({'process_x_roles': 'broker,controller' }) %}
{% do KAFKAMERGED.config.broker.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% do KAFKAMERGED.config.broker.update({
'listeners': KAFKAMERGED.config.broker.listeners + ',' + KAFKAMERGED.config.controller.listeners })
%}
{% do KAFKAMERGED.config.broker.update({
'listener_x_security_x_protocol_x_map': KAFKAMERGED.config.broker.listener_x_security_x_protocol_x_map
+ ',' + KAFKAMERGED.config.controller.listener_x_security_x_protocol_x_map })
%}
{% endif %}
{# Truststore config #}
{% do KAFKAMERGED.config.broker.update({'ssl_x_truststore_x_password': KAFKA_TRUSTPASS }) %}
{% do KAFKAMERGED.config.controller.update({'ssl_x_truststore_x_password': KAFKA_TRUSTPASS }) %}
{% do KAFKAMERGED.config.client.update({'ssl_x_truststore_x_password': KAFKA_TRUSTPASS }) %}
{# Client properties stuff #}
{% do KAFKAMERGED.config.client.update({'ssl_x_keystore_x_password': KAFKA_PASSWORD }) %}
{% if 'broker' in node_type %}
{% set KAFKACONFIG = KAFKAMERGED.config.broker %}
{% else %}
{% set KAFKACONFIG = KAFKAMERGED.config.controller %}
{% endif %}
{% set KAFKACLIENT = KAFKAMERGED.config.client %}

salt/kafka/config.sls Normal file (84 lines)
View File

@@ -0,0 +1,84 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
kafka_group:
group.present:
- name: kafka
- gid: 960
kafka_user:
user.present:
- name: kafka
- uid: 960
- gid: 960
- home: /opt/so/conf/kafka
- createhome: False
kafka_home_dir:
file.absent:
- name: /home/kafka
kafka_sbin_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin
- user: 960
- group: 960
- file_mode: 755
kafka_sbin_jinja_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin_jinja
- user: 960
- group: 960
- file_mode: 755
- template: jinja
- defaults:
GLOBALS: {{ GLOBALS }}
kafka_log_dir:
file.directory:
- name: /opt/so/log/kafka
- user: 960
- group: 960
- makedirs: True
kafka_data_dir:
file.directory:
- name: /nsm/kafka/data
- user: 960
- group: 960
- makedirs: True
{% for sc in ['server', 'client'] %}
kafka_kraft_{{sc}}_properties:
file.managed:
- source: salt://kafka/etc/{{sc}}.properties.jinja
- name: /opt/so/conf/kafka/{{sc}}.properties
- template: jinja
- user: 960
- group: 960
- makedirs: True
- show_changes: False
{% endfor %}
reset_quorum_on_changes:
cmd.run:
- name: rm -f /nsm/kafka/data/__cluster_metadata-0/quorum-state
- onchanges:
- file: /opt/so/conf/kafka/server.properties
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

salt/kafka/defaults.yaml Normal file (64 lines)
View File

@@ -0,0 +1,64 @@
kafka:
enabled: False
cluster_id:
controllers:
reset:
logstash: []
config:
password:
trustpass:
broker:
advertised_x_listeners:
auto_x_create_x_topics_x_enable: true
controller_x_quorum_x_voters:
default_x_replication_x_factor: 1
inter_x_broker_x_listener_x_name: BROKER
listeners: BROKER://0.0.0.0:9092
listener_x_security_x_protocol_x_map: BROKER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
log_x_segment_x_bytes: 1073741824
node_x_id:
num_x_io_x_threads: 8
num_x_network_x_threads: 3
num_x_partitions: 3
num_x_recovery_x_threads_x_per_x_data_x_dir: 1
offsets_x_topic_x_replication_x_factor: 1
process_x_roles: broker
socket_x_receive_x_buffer_x_bytes: 102400
socket_x_request_x_max_x_bytes: 104857600
socket_x_send_x_buffer_x_bytes: 102400
ssl_x_keystore_x_location: /etc/pki/kafka.p12
ssl_x_keystore_x_type: PKCS12
ssl_x_keystore_x_password:
ssl_x_truststore_x_location: /etc/pki/kafka-truststore.jks
ssl_x_truststore_x_type: JKS
ssl_x_truststore_x_password:
transaction_x_state_x_log_x_min_x_isr: 1
transaction_x_state_x_log_x_replication_x_factor: 1
client:
security_x_protocol: SSL
ssl_x_truststore_x_location: /etc/pki/kafka-truststore.jks
ssl_x_truststore_x_type: JKS
ssl_x_truststore_x_password:
ssl_x_keystore_x_location: /etc/pki/kafka.p12
ssl_x_keystore_x_type: PKCS12
ssl_x_keystore_x_password:
controller:
controller_x_listener_x_names: CONTROLLER
controller_x_quorum_x_voters:
listeners: CONTROLLER://0.0.0.0:9093
listener_x_security_x_protocol_x_map: CONTROLLER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
log_x_segment_x_bytes: 1073741824
node_x_id:
process_x_roles: controller
ssl_x_keystore_x_location: /etc/pki/kafka.p12
ssl_x_keystore_x_type: PKCS12
ssl_x_keystore_x_password:
ssl_x_truststore_x_location: /etc/pki/kafka-truststore.jks
ssl_x_truststore_x_type: JKS
ssl_x_truststore_x_password:

salt/kafka/disabled.sls Normal file (34 lines)
View File

@@ -0,0 +1,34 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
so-kafka:
docker_container.absent:
- force: True
so-kafka_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$
- onlyif: grep -q '^so-kafka$' /opt/so/conf/so-status/so-status.conf
{% if GLOBALS.is_manager and KAFKAMERGED.enabled or GLOBALS.pipeline == "KAFKA" %}
ensure_default_pipeline:
cmd.run:
- name: |
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False;
/usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/global/soc_global.sls global.pipeline REDIS
{% endif %}
{# If Kafka has never been manually enabled, the 'kafka' user does not exist. In that case any Kafka certs owned by uid 960 will show an UNKNOWN owner and should not exist #}
{% for cert in ['kafka-client.crt','kafka-client.key','kafka.crt','kafka.key','kafka-logstash.crt','kafka-logstash.key','kafka-logstash.p12','kafka.p12','elasticfleet-kafka.p8'] %}
check_kafka_cert_{{cert}}:
file.absent:
- name: /etc/pki/{{cert}}
- onlyif: stat -c %U /etc/pki/{{cert}} | grep -q UNKNOWN
- show_changes: False
{% endfor %}

salt/kafka/enabled.sls Normal file (90 lines)
View File

@@ -0,0 +1,90 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% set KAFKANODES = salt['pillar.get']('kafka:nodes') %}
{% if 'gmd' in salt['pillar.get']('features', []) %}
include:
- kafka.ca
- kafka.config
- kafka.ssl
- kafka.storage
- kafka.sostatus
so-kafka:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }}
- hostname: so-kafka
- name: so-kafka
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-kafka'].ip }}
- user: kafka
- environment:
KAFKA_HEAP_OPTS: -Xmx2G -Xms1G
KAFKA_OPTS: -javaagent:/opt/jolokia/agents/jolokia-agent-jvm-javaagent.jar=port=8778,host={{ DOCKER.containers['so-kafka'].ip }},policyLocation=file:/opt/jolokia/jolokia.xml
- extra_hosts:
{% for node in KAFKANODES %}
- {{ node }}:{{ KAFKANODES[node].ip }}
{% endfor %}
{% if DOCKER.containers['so-kafka'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-kafka'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- port_bindings:
{% for BINDING in DOCKER.containers['so-kafka'].port_bindings %}
- {{ BINDING }}
{% endfor %}
- binds:
- /etc/pki/kafka.p12:/etc/pki/kafka.p12:ro
- /opt/so/conf/kafka/kafka-truststore.jks:/etc/pki/kafka-truststore.jks:ro
- /nsm/kafka/data/:/nsm/kafka/data/:rw
- /opt/so/log/kafka:/opt/kafka/logs/:rw
- /opt/so/conf/kafka/server.properties:/opt/kafka/config/kraft/server.properties:ro
- /opt/so/conf/kafka/client.properties:/opt/kafka/config/kraft/client.properties
- watch:
{% for sc in ['server', 'client'] %}
- file: kafka_kraft_{{sc}}_properties
{% endfor %}
- file: kafkacertz
- require:
- file: kafkacertz
delete_so-kafka_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$
{% else %}
{{sls}}_no_license_detected:
test.fail_without_changes:
- name: {{sls}}_no_license_detected
- comment:
- "Kafka for Guaranteed Message Delivery is a feature supported only for customers with a valid license.
Contact Security Onion Solutions, LLC via our website at https://securityonionsolutions.com
for more information about purchasing a license to enable this feature."
include:
- kafka.disabled
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/config.map.jinja' import KAFKACLIENT -%}
{{ KAFKACLIENT | yaml(False) | replace("_x_", ".") }}

View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/config.map.jinja' import KAFKACONFIG -%}
{{ KAFKACONFIG | yaml(False) | replace("_x_", ".") }}
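
The defaults use "_x_" inside key names wherever the real Kafka option names contain dots, presumably because dotted keys are awkward to manage in Salt pillars; these two templates serialize the merged dictionaries to YAML and swap "_x_" back to ".", and Java's Properties loader accepts the resulting "key: value" lines. A hypothetical spot check of the rendered broker config:

# Illustrative: confirm the rendered file uses dotted Kafka option names,
# e.g. lines resembling "process.roles: broker" and "controller.quorum.voters: 1@<host>:9093".
grep -E 'process\.roles|controller\.quorum\.voters|node\.id' /opt/so/conf/kafka/server.properties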

View File

@@ -0,0 +1,10 @@
kafka:
nodes:
{% for node, values in COMBINED_KAFKANODES.items() %}
{{ node }}:
ip: {{ values['ip'] }}
nodeid: {{ values['nodeid'] }}
{%- if values['role'] != none %}
role: {{ values['role'] }}
{%- endif %}
{% endfor %}

salt/kafka/init.sls Normal file (29 lines)
View File

@@ -0,0 +1,29 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
#
# Note: Per the Elastic License 2.0, the second limitation states:
#
# "You may not move, change, disable, or circumvent the license key functionality
# in the software, and you may not remove or obscure any functionality in the
# software that is protected by the license key."
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
include:
{# Run kafka/nodes.sls before Kafka is enabled, so the kafka nodes pillar is set up #}
{% if grains.role in ['so-manager','so-managersearch', 'so-standalone'] %}
- kafka.nodes
{% endif %}
{% if GLOBALS.pipeline == "KAFKA" and KAFKAMERGED.enabled %}
{% if grains.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-receiver'] %}
- kafka.enabled
{# Searchnodes only run kafka.ssl state when Kafka is enabled #}
{% elif grains.role == "so-searchnode" %}
- kafka.ssl
{% endif %}
{% else %}
- kafka.disabled
{% endif %}

10
salt/kafka/map.jinja Normal file
View File

@@ -0,0 +1,10 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{# This is only used to determine whether Kafka is enabled or disabled. Configuration is found in kafka/config.map.jinja #}
{# kafka/config.map.jinja depends on the kafka nodes pillar being populated #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% set KAFKAMERGED = salt['pillar.get']('kafka', KAFKADEFAULTS.kafka, merge=True) %}

View File

@@ -0,0 +1,88 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{# USED TO GENERATE PILLAR/KAFKA/NODES.SLS. #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set process_x_roles = KAFKADEFAULTS.kafka.config.broker.process_x_roles %}
{% set current_kafkanodes = salt.saltutil.runner(
'mine.get',
tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-receiver',
fun='network.ip_addrs',
tgt_type='compound') %}
{% set STORED_KAFKANODES = salt['pillar.get']('kafka:nodes', default=None) %}
{% set KAFKA_CONTROLLERS_PILLAR = salt['pillar.get']('kafka:controllers', default=None) %}
{% set existing_ids = [] %}
{# Check STORED_KAFKANODES for existing kafka nodes and pull their IDs so they are not reused across the grid #}
{% if STORED_KAFKANODES != none %}
{% for node, values in STORED_KAFKANODES.items() %}
{% if values.get('nodeid') %}
{% do existing_ids.append(values['nodeid']) %}
{% endif %}
{% endfor %}
{% endif %}
{# Create list of possible node ids #}
{% set all_possible_ids = range(1, 2000)|list %}
{# Create list of available node ids by looping through all_possible_ids and ensuring it isn't in existing_ids #}
{% set available_ids = [] %}
{% for id in all_possible_ids %}
{% if id not in existing_ids %}
{% do available_ids.append(id) %}
{% endif %}
{% endfor %}
{# Collect kafka eligible nodes and check if they're already in STORED_KAFKANODES to avoid potentially reassigning a nodeid #}
{% set NEW_KAFKANODES = {} %}
{% for minionid, ip in current_kafkanodes.items() %}
{% set hostname = minionid.split('_')[0] %}
{% if not STORED_KAFKANODES or hostname not in STORED_KAFKANODES %}
{% set new_id = available_ids.pop(0) %}
{% do NEW_KAFKANODES.update({hostname: {'nodeid': new_id, 'ip': ip[0], 'role': process_x_roles }}) %}
{% endif %}
{% endfor %}
{# Combine STORED_KAFKANODES and NEW_KAFKANODES for writing to the pillar/kafka/nodes.sls #}
{% set COMBINED_KAFKANODES = {} %}
{% for node, details in NEW_KAFKANODES.items() %}
{% do COMBINED_KAFKANODES.update({node: details}) %}
{% endfor %}
{% if STORED_KAFKANODES != none %}
{% for node, details in STORED_KAFKANODES.items() %}
{% do COMBINED_KAFKANODES.update({node: details}) %}
{% endfor %}
{% endif %}
{# Update the process_x_roles value for any host in the kafka_controllers_pillar configured from SOC UI #}
{% set ns = namespace(has_controller=false) %}
{% if KAFKA_CONTROLLERS_PILLAR != none %}
{% set KAFKA_CONTROLLERS_PILLAR_LIST = KAFKA_CONTROLLERS_PILLAR.split(',') %}
{% for hostname in KAFKA_CONTROLLERS_PILLAR_LIST %}
{% if hostname in COMBINED_KAFKANODES %}
{% do COMBINED_KAFKANODES[hostname].update({'role': 'controller'}) %}
{% set ns.has_controller = true %}
{% endif %}
{% endfor %}
{% for hostname in COMBINED_KAFKANODES %}
{% if hostname not in KAFKA_CONTROLLERS_PILLAR_LIST %}
{% do COMBINED_KAFKANODES[hostname].update({'role': 'broker'}) %}
{% endif %}
{% endfor %}
{# If the kafka_controllers_pillar is NOT empty, check that at least one node contains the controller role;
otherwise default to GLOBALS.manager having the broker,controller role #}
{% if not ns.has_controller %}
{% do COMBINED_KAFKANODES[GLOBALS.manager].update({'role': 'broker,controller'}) %}
{% endif %}
{# If kafka_controllers_pillar is empty, default to having the grid manager as 'broker,controller'
so there is always at least 1 controller in the cluster #}
{% else %}
{% do COMBINED_KAFKANODES[GLOBALS.manager].update({'role': 'broker,controller'}) %}
{% endif %}
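In short: hosts listed in the kafka:controllers pillar are assigned the controller role, every other eligible host becomes a broker, and if the list is empty or names no known host, the grid manager falls back to broker,controller. A minimal sketch for checking the result after changing the setting in SOC (hostname is illustrative):
# Compare the configured controllers list with the role assigned to a given node.
sudo salt-call pillar.get kafka:controllers
sudo salt-call pillar.get kafka:nodes:receiverhost:role
# Expect 'controller' if receiverhost was listed, otherwise 'broker'.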

18
salt/kafka/nodes.sls Normal file
View File

@@ -0,0 +1,18 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'kafka/nodes.map.jinja' import COMBINED_KAFKANODES %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id', default=None) %}
{# Write Kafka pillar, so all grid members have access to nodeid of other kafka nodes and their roles #}
write_kafka_pillar_yaml:
file.managed:
- name: /opt/so/saltstack/local/pillar/kafka/nodes.sls
- mode: 644
- user: socore
- source: salt://kafka/files/managed_node_pillar.jinja
- template: jinja
- context:
COMBINED_KAFKANODES: {{ COMBINED_KAFKANODES }}

9
salt/kafka/reset.sls Normal file
View File

@@ -0,0 +1,9 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
wipe_kafka_data:
file.absent:
- name: /nsm/kafka/data/
- force: True

229
salt/kafka/soc_kafka.yaml Normal file
View File

@@ -0,0 +1,229 @@
kafka:
enabled:
description: Set to True to enable Kafka. To avoid grid problems, do not enable Kafka until the related configuration is in place. Requires a valid Security Onion license key.
helpLink: kafka.html
cluster_id:
description: The ID of the Kafka cluster.
readonly: True
advanced: True
sensitive: True
helpLink: kafka.html
controllers:
description: A comma-separated list of hostnames that will act as Kafka controllers. These hosts will be responsible for managing the Kafka cluster. Note that only manager and receiver nodes are eligible to run Kafka. This configuration needs to be set before enabling Kafka. Failure to do so may result in Kafka topics becoming unavailable, requiring manual intervention to restore functionality or a reset of Kafka, either of which can result in data loss.
forcedType: string
helpLink: kafka.html
reset:
description: Disable and reset the Kafka cluster. This will remove all Kafka data, including logs that have not yet been ingested into Elasticsearch, and reverts the grid to using REDIS as the global pipeline. This is useful when testing different Kafka configurations, such as rearranging Kafka brokers and controllers, allowing you to reset the cluster rather than manually fixing issues that can arise from reassigning a Kafka broker as a controller. Enter 'YES_RESET_KAFKA' and submit to disable and reset Kafka. Make any configuration changes required and re-enable Kafka when ready. This action CANNOT be reversed.
advanced: True
helpLink: kafka.html
logstash:
description: By default, Logstash is disabled when Kafka is enabled. This option allows you to specify any hosts on which you would like to re-enable Logstash alongside Kafka.
forcedType: "[]string"
multiline: True
advanced: True
helpLink: kafka.html
config:
password:
description: The password used for the Kafka certificates.
readonly: True
sensitive: True
helpLink: kafka.html
trustpass:
description: The password used for the Kafka truststore.
readonly: True
sensitive: True
helpLink: kafka.html
broker:
advertised_x_listeners:
description: Specify the list of listeners (hostname and port) that Kafka brokers provide to clients for communication.
title: advertised.listeners
helpLink: kafka.html
auto_x_create_x_topics_x_enable:
description: Enable the auto creation of topics.
title: auto.create.topics.enable
forcedType: bool
helpLink: kafka.html
default_x_replication_x_factor:
description: The default replication factor for automatically created topics. This value must be less than the number of brokers in the cluster. Hosts specified in controllers should not be counted towards the total broker count.
title: default.replication.factor
forcedType: int
helpLink: kafka.html
inter_x_broker_x_listener_x_name:
description: The name of the listener used for inter-broker communication.
title: inter.broker.listener.name
helpLink: kafka.html
listeners:
description: Comma-separated list of URIs to listen on, along with their listener names.
helpLink: kafka.html
listener_x_security_x_protocol_x_map:
description: Comma-separated mapping of listener names to security protocols.
title: listener.security.protocol.map
helpLink: kafka.html
log_x_dirs:
description: Where Kafka logs are stored within the Docker container.
title: log.dirs
helpLink: kafka.html
log_x_retention_x_check_x_interval_x_ms:
description: Frequency at which log files are checked to see whether they qualify for deletion.
title: log.retention.check.interval.ms
helpLink: kafka.html
log_x_retention_x_hours:
description: How long, in hours, a log file is kept.
title: log.retention.hours
forcedType: int
helpLink: kafka.html
log_x_segment_x_bytes:
description: The maximum allowable size for a log file.
title: log.segment.bytes
forcedType: int
helpLink: kafka.html
num_x_io_x_threads:
description: The number of threads the server uses for processing requests, which may include disk I/O.
title: num.io.threads
forcedType: int
helpLink: kafka.html
num_x_network_x_threads:
description: The number of threads used for network communication.
title: num.network.threads
forcedType: int
helpLink: kafka.html
num_x_partitions:
description: The default number of log partitions per topic.
title: num.partitions
forcedType: int
helpLink: kafka.html
num_x_recovery_x_threads_x_per_x_data_x_dir:
description: The number of threads used for log recovery at startup and flushing at shutdown. This number of threads is used per data directory.
title: num.recovery.threads.per.data.dir
forcedType: int
helpLink: kafka.html
offsets_x_topic_x_replication_x_factor:
description: The offsets topic replication factor.
title: offsets.topic.replication.factor
forcedType: int
helpLink: kafka.html
process_x_roles:
description: The role performed by Kafka brokers.
title: process.roles
readonly: True
helpLink: kafka.html
socket_x_receive_x_buffer_x_bytes:
description: Size, in bytes, of the SO_RCVBUF buffer. A value of -1 will use the OS default.
title: socket.receive.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
socket_x_request_x_max_x_bytes:
description: The maximum number of bytes allowed in a socket request.
title: socket.request.max.bytes
forcedType: int
helpLink: kafka.html
socket_x_send_x_buffer_x_bytes:
description: Size, in bytes, of the SO_SNDBUF buffer. A value of -1 will use the OS default.
title: socket.send.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_type:
description: The trust store file format.
title: ssl.truststore.type
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html
transaction_x_state_x_log_x_min_x_isr:
description: Overrides min.insync.replicas for the transaction topic. When a producer configures acks to "all" (or "-1"), this setting determines the minimum number of replicas required to acknowledge a write as successful. Failure to meet this minimum triggers an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used in conjunction, min.insync.replicas and acks enable stronger durability guarantees. For instance, creating a topic with a replication factor of 3, setting min.insync.replicas to 2, and using acks of "all" ensures that the producer raises an exception if a majority of replicas fail to receive a write.
title: transaction.state.log.min.isr
forcedType: int
helpLink: kafka.html
transaction_x_state_x_log_x_replication_x_factor:
description: Set the replication factor higher for the transaction topic to ensure availability. Internal topic creation will not proceed until the cluster size satisfies this replication factor prerequisite.
title: transaction.state.log.replication.factor
forcedType: int
helpLink: kafka.html
client:
security_x_protocol:
description: 'Broker communication protocol. Options are: SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT'
title: security.protocol
regex: ^(SASL_SSL|PLAINTEXT|SSL|SASL_PLAINTEXT)
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_type:
description: The trust store file format.
title: ssl.truststore.type
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html
controller:
controller_x_listener_x_names:
description: Set listeners used by the controller in a comma-separated list.
title: controller.listener.names
helpLink: kafka.html
listeners:
description: Comma-separated list of URIs to listen on, along with their listener names.
helpLink: kafka.html
listener_x_security_x_protocol_x_map:
description: Comma-separated mapping of listener names to security protocols.
title: listener.security.protocol.map
helpLink: kafka.html
log_x_dirs:
description: Where Kafka logs are stored within the Docker container.
title: log.dirs
helpLink: kafka.html
log_x_retention_x_check_x_interval_x_ms:
description: Frequency at which log files are checked to see whether they qualify for deletion.
title: log.retention.check.interval.ms
helpLink: kafka.html
log_x_retention_x_hours:
description: How long, in hours, a log file is kept.
title: log.retention.hours
forcedType: int
helpLink: kafka.html
log_x_segment_x_bytes:
description: The maximum allowable size for a log file.
title: log.segment.bytes
forcedType: int
helpLink: kafka.html
process_x_roles:
description: The role performed by the controller node.
title: process.roles
readonly: True
helpLink: kafka.html
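These settings are rendered into the broker configuration mounted into the container at /opt/kafka/config/kraft/server.properties (see the binds in kafka/enabled.sls). A minimal sketch for spot-checking a few values on a Kafka node after a highstate; which keys appear depends on your configuration:
# Confirm selected broker properties were rendered from the pillar.
sudo grep -E '^(process\.roles|num\.partitions|default\.replication\.factor)=' /opt/so/conf/kafka/server.properties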

21
salt/kafka/sostatus.sls Normal file
View File

@@ -0,0 +1,21 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
append_so-kafka_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-kafka
- unless: grep -q so-kafka /opt/so/conf/so-status/so-status.conf
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

201
salt/kafka/ssl.sls Normal file
View File

@@ -0,0 +1,201 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_password = salt['pillar.get']('kafka:config:password') %}
include:
- ca.dirs
{% set global_ca_server = [] %}
{% set x509dict = salt['mine.get'](GLOBALS.manager | lower~'*', 'x509.get_pem_entries') %}
{% for host in x509dict %}
{% if 'manager' in host.split('_')|last or host.split('_')|last == 'standalone' %}
{% do global_ca_server.append(host) %}
{% endif %}
{% endfor %}
{% set ca_server = global_ca_server[0] %}
{% if GLOBALS.pipeline == "KAFKA" %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
kafka_client_key:
x509.private_key_managed:
- name: /etc/pki/kafka-client.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka-client.key') -%}
- prereq:
- x509: /etc/pki/kafka-client.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_client_crt:
x509.certificate_managed:
- name: /etc/pki/kafka-client.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka-client.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
kafka_client_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-client.key
- mode: 640
- user: 960
- group: 939
kafka_client_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-client.crt
- mode: 640
- user: 960
- group: 939
{% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch','so-receiver', 'so-standalone'] %}
kafka_key:
x509.private_key_managed:
- name: /etc/pki/kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka.key') -%}
- prereq:
- x509: /etc/pki/kafka.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_crt:
x509.certificate_managed:
- name: /etc/pki/kafka.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka.key -in /etc/pki/kafka.crt -export -out /etc/pki/kafka.p12 -nodes -passout pass:{{ kafka_password }}"
- onchanges:
- x509: /etc/pki/kafka.key
kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.key
- mode: 640
- user: 960
- group: 939
kafka_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.crt
- mode: 640
- user: 960
- group: 939
kafka_pkcs12_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.p12
- mode: 640
- user: 960
- group: 939
{% endif %}
# Standalone needs kafka-logstash for automated testing. Searchnode/managersearch nodes need it for Logstash to consume from Kafka.
# Manager will have the cert, but it is unused until a pipeline is created and Logstash is enabled.
{% if GLOBALS.role in ['so-standalone', 'so-managersearch', 'so-searchnode', 'so-manager'] %}
kafka_logstash_key:
x509.private_key_managed:
- name: /etc/pki/kafka-logstash.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka-logstash.key') -%}
- prereq:
- x509: /etc/pki/kafka-logstash.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_logstash_crt:
x509.certificate_managed:
- name: /etc/pki/kafka-logstash.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka-logstash.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka-logstash.key -in /etc/pki/kafka-logstash.crt -export -out /etc/pki/kafka-logstash.p12 -nodes -passout pass:{{ kafka_password }}"
- onchanges:
- x509: /etc/pki/kafka-logstash.key
kafka_logstash_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.key
- mode: 640
- user: 931
- group: 939
kafka_logstash_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.crt
- mode: 640
- user: 931
- group: 939
kafka_logstash_pkcs12_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.p12
- mode: 640
- user: 931
- group: 939
{% endif %}
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
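A minimal sketch for spot-checking the certificates and keystore this state generates on a broker node; the keystore password is read from the kafka:config:password pillar, the same value the state uses for the PKCS12 export:
# KAFKA_PASS is pulled from the kafka:config:password pillar value.
KAFKA_PASS=$(sudo salt-call pillar.get kafka:config:password --out=newline_values_only)
sudo openssl x509 -in /etc/pki/kafka.crt -noout -subject -enddate
sudo openssl pkcs12 -info -noout -in /etc/pki/kafka.p12 -passin "pass:$KAFKA_PASS"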

30
salt/kafka/storage.sls Normal file
View File

@@ -0,0 +1,30 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id') %}
{# Initialize kafka storage if it doesn't already exist. Just looking for meta.properties in /nsm/kafka/data #}
{% if not salt['file.file_exists']('/nsm/kafka/data/meta.properties') %}
kafka_storage_init:
cmd.run:
- name: |
docker run -v /nsm/kafka/data:/nsm/kafka/data -v /opt/so/conf/kafka/server.properties:/opt/kafka/config/kraft/newserver.properties --name so-kafkainit --user root --entrypoint /opt/kafka/bin/kafka-storage.sh {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} format -t {{ kafka_cluster_id }} -c /opt/kafka/config/kraft/newserver.properties
kafka_rm_kafkainit:
cmd.run:
- name: |
docker rm so-kafkainit
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
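The format step only runs when /nsm/kafka/data/meta.properties is missing; a minimal sketch for confirming KRaft storage was initialized:
# meta.properties is written by kafka-storage.sh format and should reference the cluster id.
sudo cat /nsm/kafka/data/meta.properties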

View File

@@ -0,0 +1,47 @@
#! /bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
if [ -z "$NOROOT" ]; then
# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
fi
fi
function usage() {
echo -e "\nUsage: $0 <script> [options]"
echo ""
echo "Available scripts:"
show_available_kafka_cli_tools
}
function show_available_kafka_cli_tools(){
docker exec so-kafka ls /opt/kafka/bin | grep kafka
}
if [ -z "$1" ]; then
usage
exit 1
fi
available_tools=$(show_available_kafka_cli_tools)
script_exists=false
for script in $available_tools; do
if [ "$script" == "$1" ]; then
script_exists=true
break
fi
done
if [ "$script_exists" == true ]; then
docker exec so-kafka /opt/kafka/bin/$1 "${@:2}"
else
echo -e "\nInvalid script: $1"
usage
exit 1
fi
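A minimal usage sketch for the wrapper above (elsewhere in this changeset it is invoked as so-kafka-cli); the broker hostname is illustrative and the client.properties path matches the container bind in kafka/enabled.sls:
# With no arguments the wrapper lists the Kafka scripts shipped in the container.
sudo so-kafka-cli
# Any listed script can then be run through the wrapper, e.g. listing topics:
sudo so-kafka-cli kafka-topics.sh --bootstrap-server brokerhost:9092 \
  --command-config /opt/kafka/config/kraft/client.properties --list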

View File

@@ -0,0 +1,87 @@
#! /bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
if [ -z "$NOROOT" ]; then
# Check for prerequisites
if [ "$(id -u)" -ne 0 ]; then
echo "This script must be run using sudo!"
exit 1
fi
fi
usage() {
cat <<USAGE_EOF
Usage: $0 <operation> [parameters]
Where <operation> is one of the following:
topic-partitions: Increase the number of partitions for a Kafka topic
Required arguments: topic-partitions <topic name> <# partitions>
Example: $0 topic-partitions suricata-topic 6
list-topics: List of Kafka topics
Example: $0 list-topics
USAGE_EOF
exit 1
}
if [[ $# -lt 1 || $1 == --help || $1 == -h ]]; then
usage
fi
kafka_client_config="/opt/kafka/config/kraft/client.properties"
too_few_arguments() {
echo -e "\nMissing one or more required arguments!\n"
usage
}
get_kafka_brokers() {
brokers_cache="/opt/so/state/kafka_brokers"
broker_port="9092"
if [[ ! -f "$brokers_cache" ]] || [[ $(find "$brokers_cache" -mmin +120) ]]; then
echo "Refreshing Kafka brokers list"
salt-call pillar.get kafka:nodes --out=json | jq -r --arg broker_port "$broker_port" '.local | to_entries[] | select(.value.role | contains("broker")) | "\(.value.ip):\($broker_port)"' | paste -sd "," - > "$brokers_cache"
else
echo "Using cached Kafka brokers list"
fi
brokers=$(cat "$brokers_cache")
}
increase_topic_partitions() {
get_kafka_brokers
if so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --alter --topic $topic --partitions $partition_count; then
echo -e "Successfully increased the number of partitions for topic $topic to $partition_count\n"
so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --describe --topic $topic
fi
}
get_kafka_topics_list() {
get_kafka_brokers
so-kafka-cli kafka-topics.sh --bootstrap-server $brokers --command-config $kafka_client_config --exclude-internal --list | sort
}
operation=$1
case "${operation}" in
"topic-partitions")
if [[ $# -lt 3 ]]; then
too_few_arguments
fi
topic=$2
partition_count=$3
increase_topic_partitions
;;
"list-topics")
get_kafka_topics_list
;;
*)
usage
;;
esac
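The broker list built by get_kafka_brokers is cached as a single comma-separated host:port line and refreshed roughly every two hours; a sketch of what it holds (IPs are illustrative):
# Inspect the cached broker list used by the topic operations above.
sudo cat /opt/so/state/kafka_brokers
# 10.0.0.10:9092,10.0.0.20:9092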

View File

@@ -0,0 +1,13 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% set TRUSTPASS = salt['pillar.get']('kafka:config:trustpass') %}
if [ ! -f /opt/so/saltstack/local/salt/kafka/files/kafka-truststore ]; then
docker run -v /etc/pki/ca.crt:/etc/pki/ca.crt --name so-kafkatrust --user root --entrypoint /opt/java/openjdk/bin/keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} -import -file /etc/pki/ca.crt -alias SOS -keystore /etc/pki/kafka-truststore -storepass {{ TRUSTPASS }} -storetype jks -noprompt
docker cp so-kafkatrust:/etc/pki/kafka-truststore /opt/so/saltstack/local/salt/kafka/files/kafka-truststore
docker rm so-kafkatrust
fi
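A minimal sketch for verifying the generated truststore contains the SOS CA entry; the image reference and store password below are placeholders for the same values the script above uses:
# TRUST_PASS and SO_KAFKA_IMAGE are placeholders (kafka:config:trustpass pillar and the so-kafka image).
TRUST_PASS=$(sudo salt-call pillar.get kafka:config:trustpass --out=newline_values_only)
SO_KAFKA_IMAGE=registryhost:5000/security-onion/so-kafka:2.4.100
sudo docker run --rm -v /opt/so/saltstack/local/salt/kafka/files/kafka-truststore:/tmp/kafka-truststore:ro \
  --entrypoint /opt/java/openjdk/bin/keytool "$SO_KAFKA_IMAGE" \
  -list -keystore /tmp/kafka-truststore -storetype jks -storepass "$TRUST_PASS"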

View File

@@ -1 +1,2 @@
{"attributes": {"buildNum": 39457,"defaultIndex": "logs-*","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "8.10.4","id": "8.10.4","references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}
{"attributes": {"buildNum": 39457,"defaultIndex": "logs-*","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "8.14.3","id": "8.14.3","references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}

View File

@@ -63,7 +63,7 @@ update() {
IFS=$'\r\n' GLOBIGNORE='*' command eval 'LINES=($(cat $1))'
for i in "${LINES[@]}"; do
RESPONSE=$(curl -K /opt/so/conf/elasticsearch/curl.config -X PUT "localhost:5601/api/saved_objects/config/8.10.4" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
RESPONSE=$(curl -K /opt/so/conf/elasticsearch/curl.config -X PUT "localhost:5601/api/saved_objects/config/8.14.3" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
echo $RESPONSE; if [[ "$RESPONSE" != *"\"success\":true"* ]] && [[ "$RESPONSE" != *"updated_at"* ]] ; then RETURN_CODE=1;fi
done

View File

@@ -7,9 +7,7 @@ logstash:
- search
receiver:
- receiver
heavynode:
- manager
- search
heavynode: []
searchnode:
- search
manager:

View File

@@ -14,6 +14,11 @@
include:
{% if GLOBALS.role not in ['so-receiver','so-fleet'] %}
- elasticsearch.ca
{% endif %}
{# Kafka ca runs on nodes that can run logstash for Kafka input / output. Only when Kafka is global pipeline #}
{% if GLOBALS.role in ['so-searchnode', 'so-manager', 'so-managersearch', 'so-receiver', 'so-standalone'] and GLOBALS.pipeline == 'KAFKA' %}
- kafka.ca
- kafka.ssl
{% endif %}
- logstash.config
- logstash.sostatus
@@ -75,10 +80,14 @@ so-logstash:
{% else %}
- /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
{% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode'] %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode' ] %}
- /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
- /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
{% endif %}
{% if GLOBALS.pipeline == "KAFKA" and GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-searchnode'] %}
- /etc/pki/kafka-logstash.p12:/usr/share/logstash/kafka-logstash.p12:ro
- /opt/so/conf/kafka/kafka-truststore.jks:/etc/pki/kafka-truststore.jks:ro
{% endif %}
{% if GLOBALS.role == 'so-eval' %}
- /nsm/zeek:/nsm/zeek:ro
- /nsm/suricata:/suricata:ro
@@ -102,6 +111,9 @@ so-logstash:
- file: ls_pipeline_{{assigned_pipeline}}_{{CONFIGFILE.split('.')[0] | replace("/","_") }}
{% endfor %}
{% endfor %}
{% if GLOBALS.pipeline == 'KAFKA' and GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-searchnode'] %}
- file: kafkacertz
{% endif %}
- require:
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-receiver'] %}
- x509: etc_filebeat_crt
@@ -115,6 +127,9 @@ so-logstash:
- file: cacertz
- file: capemz
{% endif %}
{% if GLOBALS.pipeline == 'KAFKA' and GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-searchnode'] %}
- file: kafkacertz
{% endif %}
delete_so-logstash_so-status.disabled:
file.uncomment:

View File

@@ -6,24 +6,40 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% import_yaml 'logstash/defaults.yaml' as LOGSTASH_DEFAULTS %}
{% set LOGSTASH_MERGED = salt['pillar.get']('logstash', LOGSTASH_DEFAULTS.logstash, merge=True) %}
{% set KAFKA_LOGSTASH = salt['pillar.get']('kafka:logstash', []) %}
{% set REDIS_NODES = [] %}
{# LOGSTASH_NODES is the same as ES_LOGSTASH_NODES from elasticsearch/config.map.jinja but heavynodes are present #}
{# used to store the redis nodes that logstash needs to know about to pull from the queue #}
{% set LOGSTASH_REDIS_NODES = [] %}
{# stores all logstash nodes #}
{% set LOGSTASH_NODES = [] %}
{% set node_data = salt['pillar.get']('logstash:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{% set logstash_node_data = salt['pillar.get']('logstash:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{% set redis_node_data = salt['pillar.get']('redis:nodes', {GLOBALS.role.split('-')[1]: {GLOBALS.hostname: {'ip': GLOBALS.node_ip}}}) %}
{% for node_type, node_details in node_data.items() | sort %}
{% for node_type, node_details in redis_node_data.items() | sort %}
{% if GLOBALS.role in ['so-searchnode', 'so-standalone', 'so-managersearch', 'so-fleet'] %}
{% if node_type in ['manager', 'managersearch', 'standalone', 'receiver' ] %}
{% for hostname in node_data[node_type].keys() %}
{% do REDIS_NODES.append({hostname:node_details[hostname].ip}) %}
{% for hostname in redis_node_data[node_type].keys() %}
{% do LOGSTASH_REDIS_NODES.append({hostname:node_details[hostname].ip}) %}
{% endfor %}
{% endif %}
{% else %}
{% do REDIS_NODES.append({GLOBALS.hostname:GLOBALS.node_ip}) %}
{% endif %}
{% endfor %}
{% for hostname in node_data[node_type].keys() %}
{% for node_type, node_details in logstash_node_data.items() | sort %}
{% for hostname in logstash_node_data[node_type].keys() %}
{% do LOGSTASH_NODES.append({hostname:node_details[hostname].ip}) %}
{% endfor %}
{% endfor %}
{# Append Kafka input pipeline when Kafka is enabled #}
{% if GLOBALS.pipeline == 'KAFKA' %}
{% do LOGSTASH_MERGED.defined_pipelines.search.remove('so/0900_input_redis.conf.jinja') %}
{% do LOGSTASH_MERGED.defined_pipelines.search.append('so/0800_input_kafka.conf.jinja') %}
{% do LOGSTASH_MERGED.defined_pipelines.manager.append('so/0800_input_kafka.conf.jinja') %}
{# Disable logstash on manager & receiver nodes unless it has an override configured #}
{% if not KAFKA_LOGSTASH %}
{% if GLOBALS.role in ['so-manager', 'so-receiver'] and GLOBALS.hostname not in KAFKA_LOGSTASH %}
{% do LOGSTASH_MERGED.update({'enabled': False}) %}
{% endif %}
{% endif %}
{% endif %}

View File

@@ -0,0 +1,39 @@
{%- set kafka_password = salt['pillar.get']('kafka:config:password') %}
{%- set kafka_trustpass = salt['pillar.get']('kafka:config:trustpass') %}
{%- set kafka_brokers = salt['pillar.get']('kafka:nodes', {}) %}
{%- set brokers = [] %}
{%- if kafka_brokers %}
{%- for key, values in kafka_brokers.items() %}
{%- if 'broker' in values['role'] %}
{%- do brokers.append(key ~ ':9092') %}
{%- endif %}
{%- endfor %}
{%- set bootstrap_servers = ','.join(brokers) %}
input {
kafka {
codec => json
topics_pattern => '.*-securityonion$'
group_id => 'searchnodes'
consumer_threads => 3
client_id => '{{ GLOBALS.hostname }}'
security_protocol => 'SSL'
bootstrap_servers => '{{ bootstrap_servers }}'
ssl_keystore_location => '/usr/share/logstash/kafka-logstash.p12'
ssl_keystore_password => '{{ kafka_password }}'
ssl_keystore_type => 'PKCS12'
ssl_truststore_location => '/etc/pki/kafka-truststore.jks'
ssl_truststore_password => '{{ kafka_trustpass }}'
decorate_events => true
tags => [ "elastic-agent", "input-{{ GLOBALS.hostname}}", "kafka" ]
}
}
filter {
if ![metadata] {
mutate {
rename => { "@metadata" => "metadata" }
}
}
}
{% endif %}
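Once searchnodes begin consuming, the group_id defined above should show assigned partitions; a minimal sketch using the so-kafka-cli wrapper (broker hostname is illustrative):
# Describe the searchnodes consumer group to confirm Logstash consumers have joined.
sudo so-kafka-cli kafka-consumer-groups.sh --bootstrap-server brokerhost:9092 \
  --command-config /opt/kafka/config/kraft/client.properties --describe --group searchnodes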

View File

@@ -1,8 +1,8 @@
{%- from 'logstash/map.jinja' import REDIS_NODES with context %}
{%- from 'logstash/map.jinja' import LOGSTASH_REDIS_NODES with context %}
{%- set REDIS_PASS = salt['pillar.get']('redis:config:requirepass') %}
{%- for index in range(REDIS_NODES|length) %}
{%- for host in REDIS_NODES[index] %}
{%- for index in range(LOGSTASH_REDIS_NODES|length) %}
{%- for host in LOGSTASH_REDIS_NODES[index] %}
input {
redis {
host => '{{ host }}'

View File

@@ -3,3 +3,5 @@ manager:
enabled: True
hour: 3
minute: 0
additionalCA: ''
insecureSkipVerify: False

View File

@@ -73,6 +73,15 @@ manager_sbin:
- exclude_pat:
- "*_test.py"
manager_sbin_jinja:
file.recurse:
- name: /usr/sbin/
- source: salt://manager/tools/sbin_jinja/
- user: socore
- group: socore
- file_mode: 755
- template: jinja
so-repo-file:
file.managed:
- name: /opt/so/conf/reposync/repodownload.conf

7
salt/manager/map.jinja Normal file
View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'manager/defaults.yaml' as MANAGERDEFAULTS %}
{% set MANAGERMERGED = salt['pillar.get']('manager', MANAGERDEFAULTS.manager, merge=True) %}

View File

@@ -24,3 +24,16 @@ manager:
description: Proxy server to use for updates.
global: True
helpLink: proxy.html
additionalCA:
description: Additional CA certificates to trust in PEM format.
global: True
advanced: True
multiline: True
forcedType: string
helpLink: proxy.html
insecureSkipVerify:
description: Disable TLS verification for outgoing requests. This will make your installation more vulnerable to MITM attacks. Recommended only for debugging purposes.
advanced: True
forcedType: bool
global: True
helpLink: proxy.html

Some files were not shown because too many files have changed in this diff.