Compare commits


170 Commits

Author SHA1 Message Date
Mike Reeves
9f2b920454 Merge pull request #8535 from Security-Onion-Solutions/hotfix/2.3.140
Hotfix/2.3.140
2022-08-15 15:06:37 -04:00
Mike Reeves
604af45661 Merge pull request #8534 from Security-Onion-Solutions/2.3.140hotfix3
2.3.140 Hotfix
2022-08-15 13:09:14 -04:00
Mike Reeves
3f435c5c1a 2.3.140 Hotfix 2022-08-15 13:03:25 -04:00
Mike Reeves
9903be8120 Merge pull request #8532 from Security-Onion-Solutions/2.3.140-20220815 2022-08-12 15:04:00 -04:00
Doug Burks
991a601a3d FIX: so-curator-closed-delete-delete needs to reference new Elasticsearch directory #8529 2022-08-12 13:21:06 -04:00
Doug Burks
86519d43dc Update HOTFIX 2022-08-12 13:20:15 -04:00
Doug Burks
484aa7b207 Merge pull request #8336 from Security-Onion-Solutions/hotfix/2.3.140
Hotfix/2.3.140
2022-07-19 16:13:47 -04:00
Mike Reeves
6986448239 Merge pull request #8333 from Security-Onion-Solutions/2.3.140hotfix
2.3.140 Hotfix
2022-07-19 14:47:50 -04:00
Mike Reeves
dd48d66c1c 2.3.140 Hotfix 2022-07-19 14:39:44 -04:00
Mike Reeves
440f4e75c1 Merge pull request #8332 from Security-Onion-Solutions/dev
Merge Hotfix
2022-07-19 13:30:20 -04:00
weslambert
c795a70e9c Merge pull request #8329 from Security-Onion-Solutions/fix/elastalert_stop_check_enabled
Check to ensure Elastalert is enabled and suppress missing container error output
2022-07-19 13:27:35 -04:00
weslambert
340dbe8547 Check to see if Elastalert is enabled before trying to run 'so-elastalert-stop'. Also suppress error output for when so-elastalert container is not present. 2022-07-19 13:25:09 -04:00
Mike Reeves
52a5e743e9 Merge pull request #8327 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update HOTFIX
2022-07-19 11:17:13 -04:00
Wes Lambert
5ceff52796 Move Elastalert indices check to function and call from beginning of soup and during pre-upgrade to 2.3.140 2022-07-19 14:54:39 +00:00
Wes Lambert
f3a0ab0b2d Perform Elastalert index check twice 2022-07-19 14:48:19 +00:00
Wes Lambert
4a7c994b66 Revise Elastalert index check deletion logic 2022-07-19 14:31:45 +00:00
Mike Reeves
07b8785f3d Update soup 2022-07-19 10:23:10 -04:00
Mike Reeves
9a1092ab01 Update HOTFIX 2022-07-19 10:21:36 -04:00
Mike Reeves
fbcbfaf7c3 Merge pull request #8310 from Security-Onion-Solutions/dev
2.3.140
2022-07-18 11:23:54 -04:00
Mike Reeves
497110d6cd Merge pull request #8320 from Security-Onion-Solutions/2.3.140-2
2.3.140
2022-07-18 10:57:53 -04:00
Mike Reeves
3711eb52b8 2.3.140 2022-07-18 10:54:50 -04:00
weslambert
8099b1688b Merge pull request #8319 from Security-Onion-Solutions/fix/elasticsearch_query_missing_query_path
Fix missing query path for so-elasticsearch-query
2022-07-18 09:47:16 -04:00
weslambert
2914007393 Add forward slash to fix issue with missing query path 2022-07-18 09:07:34 -04:00
weslambert
f5e10430ed Add forward slash to fix issue with missing query path 2022-07-18 09:07:13 -04:00
Mike Reeves
b5a78d4577 Merge pull request #8309 from Security-Onion-Solutions/2.3.140
2.3.140
2022-07-15 13:36:31 -04:00
Mike Reeves
0a14dad849 Update VERIFY_ISO.md 2022-07-15 13:31:51 -04:00
Mike Reeves
3430df6a20 2.3.140 2022-07-15 13:26:25 -04:00
Mike Reeves
881915f871 Merge pull request #8306 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update defaults.yaml
2022-07-14 16:20:29 -04:00
Mike Reeves
cf8c6a6e94 Update defaults.yaml 2022-07-14 15:17:27 -04:00
weslambert
52ebbf8ff3 Merge pull request #8304 from Security-Onion-Solutions/fix/kibana_space_defaults_web_response_url
Change web_response to evaluate the response from the Spaces API and the default space query
2022-07-14 12:08:02 -04:00
weslambert
2443e8b97e Change web_response to evaluate the response from the Spaces API and the default space query 2022-07-14 12:04:56 -04:00
weslambert
4241eb4b29 Merge pull request #8298 from Security-Onion-Solutions/fix/kibana_space_defaults_shebang
Add shebang so that so-kibana-space-defaults will work correctly on Ubuntu
2022-07-13 16:50:21 -04:00
weslambert
0fd4f34b5b Add shebang so that so-kibana-space-defaults will work correctly on Ubuntu 2022-07-13 16:48:39 -04:00
Josh Patterson
37df49d4f3 Merge pull request #8296 from Security-Onion-Solutions/elastalert_esversion_check
use onlyif requisite instead
2022-07-13 15:22:40 -04:00
m0duspwnens
7d7cf42d9a use onlyif requisite instead 2022-07-13 15:21:34 -04:00
Doug Burks
de0a7d3bcd Merge pull request #8293 from Security-Onion-Solutions/dougburks-patch-1
change hyperlink for Elastic 8 issues
2022-07-13 12:41:50 -04:00
Doug Burks
c67a58a5b1 change hyperlink for Elastic 8 issues 2022-07-13 12:40:03 -04:00
Josh Patterson
e79ca4bb9b Merge pull request #8291 from Security-Onion-Solutions/elastalert_esversion_check
do not start elastalert if elasticsearch is not v8
2022-07-13 11:24:12 -04:00
m0duspwnens
086cf3996d do not start elastalert if elasticsearch is not v8 2022-07-13 11:21:27 -04:00
Doug Burks
7ae5d49a4a Merge pull request #8290 from Security-Onion-Solutions/dougburks-patch-1
increment version to 2.3.140
2022-07-13 09:33:37 -04:00
Doug Burks
34d3c6a882 increment version to 2.3.140 2022-07-13 09:32:28 -04:00
weslambert
4a5664db7b Merge pull request #8289 from Security-Onion-Solutions/fix/soup_unsupported_indices_check
Add missing 'fi' to if/then for unsupported indices check
2022-07-13 09:15:22 -04:00
weslambert
513c7ae56c Add missing 'fi' to if/then for unsupported indices check 2022-07-13 09:13:28 -04:00
weslambert
fa894cf83b Merge pull request #8288 from Security-Onion-Solutions/fix/soup_elastalert_indices_deletion_check
Ensure Elastalert indices are deleted before continuing with SOUP
2022-07-13 08:44:04 -04:00
weslambert
8e92060c29 Ensure Elastalert indices are deleted before continuing with SOUP -- if they are not, generate a failure condition 2022-07-13 08:38:55 -04:00
weslambert
d7eb8b9bcb Merge pull request #8281 from Security-Onion-Solutions/fix/soup_elasticsearch8_index_compatibility
SOUP - Check for indices created by Elasticsearch 6
2022-07-12 16:20:47 -04:00
weslambert
d0a0ca8458 Update exit code for ES checks 2022-07-12 16:15:44 -04:00
Josh Patterson
57b79421d8 Merge pull request #8280 from Security-Onion-Solutions/fix_filebeat
move port bindings back under port bindings
2022-07-12 16:12:49 -04:00
weslambert
4502182b53 Typo - Ensure Elasticsearch version 6 indices are checked 2022-07-12 15:35:46 -04:00
weslambert
0fc6f7b022 Add check for Elasticsearch 6 indices 2022-07-12 15:34:24 -04:00
m0duspwnens
ec451c19f8 move port bindings back under port bindings 2022-07-12 15:17:25 -04:00
weslambert
e9a22d0aff Merge pull request #8275 from Security-Onion-Solutions/fix/filebeat_es_output_additions
Specify outputs for Elasticsearch and Kibana for Eval and Import Mode
2022-07-11 19:03:07 -04:00
weslambert
11d3ed36b7 Specify outputs for Elasticsearch and Kibana for Eval and Import Mode
Add outputs for Elasticsearch and Kibana for Eval/Import Mode, since Logstash is not used in Eval Mode or Import Mode. Otherwise, logs from these inputs end up in a filebeat-prefixed index.
2022-07-11 17:22:09 -04:00
weslambert
d828bbfe47 Merge pull request #8273 from Security-Onion-Solutions/fix/kibana_space_defaults_cases
Add securitySolutionCases feature to ensure Cases are disabled by default
2022-07-11 16:39:30 -04:00
weslambert
bd32394560 Add securitySolutionCases feature to ensure Cases are disabled by default 2022-07-11 16:38:05 -04:00
weslambert
6f4f050a96 Merge pull request #8272 from Security-Onion-Solutions/fix/soup_kibana_space_defaults
Run so-kibana-space-defaults when upgrading to 2.3.140
2022-07-11 14:47:11 -04:00
weslambert
f77edaa5c9 Run so-kibana-space-defaults to re-establish the default enabled features since Fleet feature name changed 2022-07-11 14:41:23 -04:00
Jason Ertel
15124b6ad7 Merge pull request #8271 from Security-Onion-Solutions/kilo
Add content-type header to PUT request, now required in Kratos 0.10.1
2022-07-11 13:47:28 -04:00
Jason Ertel
077053afbd Add content-type header to PUT request, now required in Kratos 0.10.1 2022-07-11 13:43:41 -04:00
weslambert
dd1d5b1a83 Merge pull request #8270 from Security-Onion-Solutions/fix/curator_actions_delete_kratos
Add delete and warm action for Kratos indices in applicable Curator delete/warm scripts
2022-07-11 11:39:43 -04:00
weslambert
e82b6fcdec Typo - Change 'delete' to 'warm' 2022-07-11 11:34:53 -04:00
weslambert
8c8ac41b36 Add action for Kratos indices 2022-07-11 11:32:03 -04:00
weslambert
b611dda143 Add delete action for Kratos indices 2022-07-11 11:31:22 -04:00
weslambert
3f5b98d14d Merge pull request #8269 from Security-Onion-Solutions/fix/curator_actions_kratos
Add Curator actions and adjust Curator close scripts to account for so-kibana and so-kratos indices
2022-07-11 11:21:20 -04:00
Wes Lambert
0b6219d95f Adjust Curator close scripts to include Kibana and Kratos indices 2022-07-11 14:51:33 +00:00
Wes Lambert
2f729e24d9 Add Curator action files for Kratos indices 2022-07-11 14:34:10 +00:00
weslambert
992b6e14de Merge pull request #8268 from Security-Onion-Solutions/fix/kibana_disable_fleetv2
Disable fleetv2 because it is now used to control Fleet visibility and 'fleet' is now used for 'Integrations'
2022-07-11 10:09:12 -04:00
weslambert
09a1d8c549 Disable fleetv2 because it is now used to control Fleet visibility and 'fleet' is now used for 'Integrations' 2022-07-11 10:06:24 -04:00
Jason Ertel
f28c6d590a Merge pull request #8263 from Security-Onion-Solutions/kilo
Remove Jinja from yaml files before parsing
2022-07-08 20:32:22 -04:00
Jason Ertel
4f8bb6049b Future proof the jinja check to ensure the script does not silently overwrite jinja templates 2022-07-08 17:30:00 -04:00
Jason Ertel
a8e6b26406 Remove Jinja from yaml files before parsing 2022-07-08 17:07:24 -04:00
weslambert
2903bdbc7e Merge pull request #8260 from Security-Onion-Solutions/fix/kratos_dedicated_index_and_filestream_id_additions
Add dedicated index for Kratos and IDs for all filestream inputs
2022-07-08 12:04:40 -04:00
Wes Lambert
5c90fce3a1 Add Kratos Logstash output to search pipeline for Logstash 2022-07-08 15:58:00 +00:00
Wes Lambert
26698cfd07 Add Logstash output for dedicated Kratos index 2022-07-08 15:55:55 +00:00
Wes Lambert
764e8688b1 Modify Kratos input to use dedicated index and add filestream ID for all applicable inputs 2022-07-08 15:53:55 +00:00
Wes Lambert
b06c16f750 Add ingest node pipeline for Kratos 2022-07-08 15:53:00 +00:00
weslambert
42cfab4544 Merge pull request #8256 from Security-Onion-Solutions/fix/kibana_restart_after_role_sync
Restart Kibana in case it times out before being able to read role update
2022-07-07 17:44:47 -04:00
weslambert
4bbc901860 Restart Kibana in case it times out before being able to read in new role configuration 2022-07-07 17:19:02 -04:00
weslambert
a343f8ced0 Merge pull request #8255 from Security-Onion-Solutions/fix/so_kibana_user_role
Force so-user to sync roles to ensure so_kibana role change
2022-07-07 16:19:30 -04:00
weslambert
85be2f4f99 Force so-user to sync roles to ensure so_kibana role change from superuser to kibana_system 2022-07-07 15:55:44 -04:00
weslambert
8b3fa0c4c6 Merge pull request #8252 from Security-Onion-Solutions/feature/elastic_8_3_2
Update to Elastic 8.3.2
2022-07-07 11:14:14 -04:00
weslambert
ede845ce00 Update to Kibana 8.3.2 2022-07-07 11:05:44 -04:00
weslambert
42c96553c5 Update to Kibana 8.3.2 2022-07-07 11:04:43 -04:00
Mike Reeves
41d5cdd78c Merge pull request #8246 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update soup
2022-07-06 16:39:38 -04:00
Mike Reeves
c819d3a558 Update soup 2022-07-06 16:36:57 -04:00
Mike Reeves
c00d33632a Update soup 2022-07-06 16:23:02 -04:00
Mike Reeves
a1ee793607 Merge pull request #8242 from Security-Onion-Solutions/fixsoup
Move soup order
2022-07-06 09:18:16 -04:00
Mike Reeves
1589107b97 Move soup order 2022-07-06 08:59:21 -04:00
Mike Reeves
31688ee898 Merge pull request #8238 from Security-Onion-Solutions/TOoSmOotH-patch-1
Make soup enforce versions
2022-07-05 16:56:14 -04:00
Mike Reeves
f1d188a46d Update soup 2022-07-05 16:50:20 -04:00
Mike Reeves
5f0c3aa7ae Update soup 2022-07-05 16:49:20 -04:00
weslambert
2b73cd1156 Merge pull request #8236 from Security-Onion-Solutions/fix/localfile_analyzer
Strip quotes and ensure file_path is typed as a list (localfile analyzer)
2022-07-05 16:28:56 -04:00
Mike Reeves
c6fac28804 Update soup 2022-07-05 16:26:44 -04:00
Jason Ertel
9d43b7ec89 Rollback string manipulation in favor of fixed unit tests 2022-07-05 16:21:27 -04:00
Jason Ertel
f6266b19cc Fix unit test issues 2022-07-05 16:20:24 -04:00
Mike Reeves
df0a774ffd Make soup enforce versions 2022-07-05 16:17:32 -04:00
weslambert
77ee30f31a Merge pull request #8237 from Security-Onion-Solutions/feature/elastic_8_3_1
Bump Elastic to 8.3.1
2022-07-05 14:50:24 -04:00
weslambert
2938464501 Update to Kibana 8.3.1 2022-07-05 14:46:02 -04:00
weslambert
79e88c9ca3 Update to Kibana 8.3.1 2022-07-05 14:45:30 -04:00
Wes Lambert
e96206d065 Strip quotes and ensure file_path is typed as a list 2022-07-05 14:25:54 +00:00
Josh Brower
7fa9ca8fc6 Merge pull request #8233 from Security-Onion-Solutions/fix/remove-sudo-bpf
Remove unneeded sudo
2022-07-05 09:23:48 -04:00
Josh Brower
a1d1779126 Remove unneeded sudo 2022-07-05 09:21:05 -04:00
Josh Patterson
fb365739ae Merge pull request #8225 from Security-Onion-Solutions/salltupdate
bootstrap-salt can now update to minor version with -r
2022-07-01 08:53:59 -04:00
m0duspwnens
5f898ae569 change to egrep 2022-07-01 08:47:46 -04:00
m0duspwnens
f0ff0d51f7 allow bootstrap-salt to install specific verion even if -r is used 2022-06-30 16:59:54 -04:00
m0duspwnens
7524ea2c05 allow bootstrap-salt to install specific verion even if -r is used 2022-06-30 15:10:13 -04:00
Mike Reeves
6bb979e2b6 Merge pull request #8219 from Security-Onion-Solutions/salty
Salty
2022-06-30 13:34:03 -04:00
Mike Reeves
8b3d5e808e Fix repo location 2022-06-30 13:30:56 -04:00
Mike Reeves
e86b7bff84 Fix repo location 2022-06-30 13:29:21 -04:00
Josh Patterson
69ce3613ff Merge pull request #8217 from Security-Onion-Solutions/salltupdate
point to salt3004.2
2022-06-30 11:29:35 -04:00
m0duspwnens
0ebd957308 point to salt3004.2 2022-06-30 11:26:03 -04:00
Josh Patterson
c3979f5a32 Merge pull request #8207 from Security-Onion-Solutions/salltupdate
Saltupdate 3004.2
2022-06-28 11:20:53 -04:00
m0duspwnens
8fccd4598a update saltstack.list for 3004.2 2022-06-27 16:23:01 -04:00
weslambert
3552dfac03 Merge pull request #8199 from Security-Onion-Solutions/fix/filebeat_filestream_elastic8
Change type from 'log' to 'filestream' to ensure compatibility with E…
2022-06-27 14:58:54 -04:00
Josh Patterson
fba5592f62 Update minion.defaults.yaml 2022-06-27 12:10:18 -04:00
Josh Patterson
05e84699d1 Update master.defaults.yaml 2022-06-27 12:09:39 -04:00
Mike Reeves
f36c8da1fe Update so-functions 2022-06-27 12:04:33 -04:00
Mike Reeves
080daee1d8 Update so-functions 2022-06-27 11:43:01 -04:00
Mike Reeves
909e876509 Update ubuntu.sls 2022-06-27 11:41:49 -04:00
Jason Ertel
ac68fa822b Merge pull request #8200 from Security-Onion-Solutions/contrib
Add gh action for contrib check
2022-06-27 11:25:10 -04:00
Jason Ertel
675ace21f5 Add gh action for contrib check 2022-06-27 11:11:15 -04:00
weslambert
85f790b28a Change type from 'log' to 'filestream' to ensure compatibility with Elastic 8 2022-06-27 10:39:58 -04:00
weslambert
d0818e83c9 Merge pull request #8197 from Security-Onion-Solutions/fix/localfile_analyzer_csv_path
Ensure file_path uses jinja to derive the value(s) from the pillar
2022-06-27 10:36:59 -04:00
weslambert
568b43d0af Ensure file_path uses jinja to derive the value(s) from the pillar 2022-06-27 10:10:13 -04:00
Jason Ertel
2e123b7a4f Merge pull request #8175 from Security-Onion-Solutions/kilo
Avoid failing setup due to retrying while waiting for lock file
2022-06-23 08:16:39 -04:00
Jason Ertel
ba6f716e4a Avoid failing setup due to retrying while waiting for lock file 2022-06-23 06:09:04 -04:00
weslambert
10bcc43e85 Merge pull request #8167 from Security-Onion-Solutions/feature/update_es_8_2_3
Update to Elastic 8.2.3
2022-06-21 16:11:39 -04:00
weslambert
af687fb2b5 Update config_saved_objects.ndjson 2022-06-21 16:06:28 -04:00
weslambert
776cc30a8e Update to ES 8.2.3 2022-06-21 16:06:01 -04:00
Doug Burks
00cf0b38d0 Merge pull request #8165 from Security-Onion-Solutions/dougburks-patch-1
FIX: Improve default dashboards #8136
2022-06-21 12:57:46 -04:00
Doug Burks
94c637449d FIX: Improve default dashboards #8136 2022-06-21 12:53:06 -04:00
Josh Brower
0a203add3b Merge pull request #8145 from Security-Onion-Solutions/defensivedepth-patch-1
pin v1.6.0
2022-06-17 13:14:58 -04:00
Josh Brower
b8ee896f8a pin v1.6.0 2022-06-17 12:38:54 -04:00
Josh Brower
238e671f34 Merge pull request #8129 from Security-Onion-Solutions/fix/curator-cron
Change curator to daily for true cluster
2022-06-15 11:40:53 -04:00
Josh Brower
072cb3cca2 Change curator to daily for true cluster 2022-06-15 11:38:38 -04:00
weslambert
44595cb333 Merge pull request #8123 from Security-Onion-Solutions/foxtrot
Merge foxtrot into dev
2022-06-14 15:44:13 -04:00
weslambert
959cec1845 Delete Elastalert indices before upgrading to Elastic 8 2022-06-14 11:40:11 -04:00
Doug Burks
286909af4b Merge pull request #8113 from Security-Onion-Solutions/fix/pfsense-category
FIX: Add event.category field to pfsense firewall logs #8112
2022-06-13 08:08:00 -04:00
doug
025993407e FIX: Add event.category field to pfsense firewall logs #8112 2022-06-13 08:03:44 -04:00
weslambert
151a42734c Update Elastic version to 8.2.2 2022-06-08 15:07:45 -04:00
weslambert
11e3576e0d Update Elastic version to 8.2.2 2022-06-08 15:07:07 -04:00
weslambert
adeccd0e7f Merge pull request #8097 from Security-Onion-Solutions/dev
Merge latest dev into foxtrot
2022-06-08 15:01:09 -04:00
weslambert
aadf391e5a Temporarily downgrade version for merge 2022-06-08 14:59:01 -04:00
weslambert
47f74fa5c6 Temporarily downgrade version for merge 2022-06-08 14:58:05 -04:00
Jason Ertel
e405750d26 Merge pull request #8095 from Security-Onion-Solutions/kilo
Bump version to 2.3.140
2022-06-08 09:07:56 -04:00
Jason Ertel
e36c33485d Bump version to 2.3.140 2022-06-08 09:04:57 -04:00
Josh Brower
8e368bdebe Merge in upstream dev 2022-05-06 20:01:07 -04:00
weslambert
bb9d6673ec Fix casing 2022-03-21 12:38:50 -04:00
weslambert
9afa949623 Don't rotate Filebeat log on startup 2022-03-21 12:38:12 -04:00
weslambert
b2c26807a3 Add xpack.reporting.kibanaServer.hostname to defaults file 2022-03-21 09:30:25 -04:00
Wes Lambert
faeaa948c8 Remove extra Salt logic and clean up output format of resultant script 2022-03-19 04:31:48 +00:00
Wes Lambert
1a6ef0cc6b Re-enable FB module load 2022-03-19 03:55:40 +00:00
Wes Lambert
a18b38de4d Update so-filebeat-module-setup to use new load style to avoid having to explicitly enabled filesets 2022-03-19 03:54:41 +00:00
Wes Lambert
2e7d314650 Remove Cyberark module 2022-03-19 03:43:55 +00:00
Wes Lambert
c97847f0e2 Remove Threat Intel Recored Future fileset 2022-03-19 03:43:34 +00:00
Wes Lambert
59a2ac38f5 Disable FB module load for now 2022-03-18 22:12:09 +00:00
Wes Lambert
543bf9a7a7 Update Kibana version to 8 2022-03-18 22:07:21 +00:00
Wes Lambert
d111c08fb3 Update Curator commands with new Filebeat module variables 2022-03-18 21:45:33 +00:00
weslambert
a9ea99daa8 Switch from so_elastic user to so_kibana user for Elastic 8 2022-03-18 15:09:50 -04:00
weslambert
cb0d4acd57 Remove X-Pack ML entry for Elastic 8 2022-03-18 14:46:28 -04:00
weslambert
e0374be4aa Update version from 7.16.2 to 8.1.0 for Kibana config 2022-03-18 11:57:33 -04:00
weslambert
6f294cc0c2 Change Kibana user role from superuser to kibana_system for Elastic 8 2022-03-18 11:54:08 -04:00
weslambert
5ec5b9a2ee Remove older module config files 2022-03-18 10:14:13 -04:00
weslambert
c659a443b0 Update from search.remote to cluster.remote for Elastic 8 2022-03-17 21:25:10 -04:00
weslambert
99430fddeb Update from search.remote to cluster.remote for Elastic 8 2022-03-17 21:24:39 -04:00
weslambert
7128b04636 Remove indices.query.bool.max_clause_count because it is dynamically allocated in Elastic 8 2022-03-17 21:20:41 -04:00
weslambert
712a92aa39 Switch from log input to filestream input 2022-03-17 21:18:03 -04:00
Wes Lambert
6e2aaa0098 Clean up original map file 2022-03-17 21:08:57 +00:00
Wes Lambert
09892a815b Add back bind mounts and remove THIRDPARTY 2022-03-17 21:06:07 +00:00
Wes Lambert
a60ef33930 Reorganize FB module management 2022-03-17 21:01:03 +00:00
59 changed files with 490 additions and 178 deletions

.github/workflows/contrib.yml (new file)

@@ -0,0 +1,24 @@
+name: contrib
+on:
+  issue_comment:
+    types: [created]
+  pull_request_target:
+    types: [opened,closed,synchronize]
+jobs:
+  CLAssistant:
+    runs-on: ubuntu-latest
+    steps:
+      - name: "Contributor Check"
+        if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
+        uses: cla-assistant/github-action@v2.1.3-beta
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
+        with:
+          path-to-signatures: 'signatures_v1.json'
+          path-to-document: 'https://securityonionsolutions.com/cla'
+          allowlist: dependabot[bot],jertel,dougburks,TOoSmOotH,weslambert,defensivedepth,m0duspwnens
+          remote-organization-name: Security-Onion-Solutions
+          remote-repository-name: licensing

@@ -12,6 +12,6 @@ jobs:
       fetch-depth: '0'
     - name: Gitleaks
-      uses: zricethezav/gitleaks-action@master
+      uses: gitleaks/gitleaks-action@v1.6.0
       with:
         config-path: .github/.gitleaks.toml

HOTFIX

@@ -1 +1 @@
-20220719
+20220812

@@ -1,6 +1,6 @@
-## Security Onion 2.3.130
+## Security Onion 2.3.140
-Security Onion 2.3.130 is here!
+Security Onion 2.3.140 is here!
 ## Screenshots

@@ -1,18 +1,18 @@
-### 2.3.130-20220607 ISO image built on 2022/06/07
+### 2.3.140-20220812 ISO image built on 2022/08/12
 ### Download and Verify
-2.3.130-20220607 ISO image:
+2.3.140-20220812 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.3.130-20220607.iso
+https://download.securityonion.net/file/securityonion/securityonion-2.3.140-20220812.iso
-MD5: 0034D6A9461C04357AFF512875408A4C
+MD5: 13D4A5D663B5A36D045B980E5F33E6BC
-SHA1: BF80EEB101C583153CAD8E185A7DB3173FD5FFE8
+SHA1: 85DC36B7E96575259DFD080BC860F6508D5F5899
-SHA256: 15943623B96D8BB4A204A78668447F36B54A63ABA5F8467FBDF0B25C5E4E6078
+SHA256: DE5D0F82732B81456180AA40C124E5C82688611941EEAF03D85986806631588C
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.130-20220607.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.140-20220812.iso.sig
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
@@ -26,22 +26,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.130-20220607.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.140-20220812.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.3.130-20220607.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.3.140-20220812.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.3.130-20220607.iso.sig securityonion-2.3.130-20220607.iso
+gpg --verify securityonion-2.3.140-20220812.iso.sig securityonion-2.3.140-20220812.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Tue 07 Jun 2022 01:27:20 PM EDT using RSA key ID FE507013
+gpg: Signature made Fri 12 Aug 2022 03:59:11 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.
 ```

@@ -1 +1 @@
-2.3.130
+2.3.140

@@ -14,4 +14,5 @@ logstash:
   - so/9700_output_strelka.conf.jinja
   - so/9800_output_logscan.conf.jinja
   - so/9801_output_rita.conf.jinja
+  - so/9802_output_kratos.conf.jinja
   - so/9900_output_endgame.conf.jinja

@@ -29,7 +29,7 @@ fi
 interface="$1"
 shift
-sudo tcpdump -i $interface -ddd $@ | tail -n+2 |
+tcpdump -i $interface -ddd $@ | tail -n+2 |
 while read line; do
   cols=( $line )
   printf "%04x%02x%02x%08x" ${cols[0]} ${cols[1]} ${cols[2]} ${cols[3]}
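The hex-packing step in that loop can be exercised on its own. A minimal sketch with a sample instruction (not taken from a live capture): `tcpdump -ddd` emits one BPF instruction per line as four decimal columns (opcode, jt, jf, k), and the `printf` packs them into fixed-width hex of 2+1+1+4 bytes.

```shell
# Sample BPF instruction: opcode 40 (0x28) is "ldh [12]", which loads the
# EtherType field; jt/jf are 0 and k is the byte offset 12.
line="40 0 0 12"
cols=( $line )
printf "%04x%02x%02x%08x\n" ${cols[0]} ${cols[1]} ${cols[2]} ${cols[3]}
# prints 002800000000000c
```

Concatenating such strings for every instruction yields the flat hex blob the surrounding script builds from the full filter program.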

@@ -49,19 +49,18 @@ if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
 fi
 echo "Testing to see if the pipelines are already applied"
 ESVER=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT" |jq .version.number |tr -d \")
-PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"/_ingest/pipeline/filebeat-$ESVER-suricata-eve-pipeline | jq . | wc -c)
+PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"/_ingest/pipeline/filebeat-$ESVER-elasticsearch-server-pipeline | jq . | wc -c)
-if [[ "$PIPELINES" -lt 5 ]]; then
+if [[ "$PIPELINES" -lt 5 ]] || [ "$2" != "--force" ]; then
   echo "Setting up ingest pipeline(s)"
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
-  for MODULE in activemq apache auditd aws azure barracuda bluecoat cef checkpoint cisco coredns crowdstrike cyberark cylance elasticsearch envoyproxy f5 fortinet gcp google_workspace googlecloud gsuite haproxy ibmmq icinga iis imperva infoblox iptables juniper kafka kibana logstash microsoft mongodb mssql mysql nats netscout nginx o365 okta osquery panw postgresql rabbitmq radware redis santa snort snyk sonicwall sophos squid suricata system threatintel tomcat traefik zeek zscaler
-  do
-    echo "Loading $MODULE"
-    docker exec -i so-filebeat filebeat setup modules -pipelines -modules $MODULE -c $FB_MODULE_YML
-    sleep 2
-  done
+{%- for module in MODULESMERGED.modules.keys() %}
+{%- for fileset in MODULESMERGED.modules[module] %}
+  echo "{{ module }}.{{ fileset}}"
+  docker exec -i so-filebeat filebeat setup --pipelines --modules {{ module }} -M "{{ module }}.{{ fileset }}.enabled=true" -c $FB_MODULE_YML
+  sleep 0.5
+{% endfor %}
+{%- endfor %}
 else
   exit 0
 fi
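Once rendered by Jinja, the new template expands into a plain nested loop over each module and its filesets. A sketch of that rendered shape, using a made-up module map in place of MODULESMERGED (the real map comes from filebeat/modules.map.jinja, which is not shown here):

```shell
# Hypothetical "module:fileset,fileset" pairs standing in for the merged map.
modules="suricata:eve zeek:connection,dns"
for entry in $modules; do
  module=${entry%%:*}                         # text before the first ':'
  filesets=$(echo "${entry#*:}" | tr ',' ' ') # comma list -> space list
  for fileset in $filesets; do
    # The real script runs "filebeat setup --pipelines" for each pair,
    # enabling one fileset at a time; this sketch only prints the names.
    echo "$module.$fileset"
  done
done
```

The per-fileset `-M "<module>.<fileset>.enabled=true"` flag is what lets the new version load only the filesets actually present in the map, instead of every pipeline of a hard-coded module list.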

@@ -16,6 +16,7 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.
 import os
+import re
 import subprocess
 import sys
 import time
@@ -26,6 +27,7 @@ hostgroupsFilename = "/opt/so/saltstack/local/salt/firewall/hostgroups.local.yam
 portgroupsFilename = "/opt/so/saltstack/local/salt/firewall/portgroups.local.yaml"
 defaultPortgroupsFilename = "/opt/so/saltstack/default/salt/firewall/portgroups.yaml"
 supportedProtocols = ['tcp', 'udp']
+readonly = False
 
 def showUsage(options, args):
   print('Usage: {} [OPTIONS] <COMMAND> [ARGS...]'.format(sys.argv[0]))
@@ -70,10 +72,26 @@ def checkApplyOption(options):
   return apply(None, None)
 
 def loadYaml(filename):
+  global readonly
   file = open(filename, "r")
-  return yaml.safe_load(file.read())
+  content = file.read()
+
+  # Remove Jinja templating (for read-only operations)
+  if "{%" in content or "{{" in content:
+    content = content.replace("{{ ssh_port }}", "22")
+    pattern = r'.*({%|{{|}}|%}).*'
+    content = re.sub(pattern, "", content)
+    readonly = True
+
+  return yaml.safe_load(content)
 
 def writeYaml(filename, content):
+  global readonly
+  if readonly:
+    raise Exception("Cannot write yaml file that has been flagged as read-only")
   file = open(filename, "w")
   return yaml.dump(content, file)
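The same "strip Jinja before parsing" fallback can be sketched in shell: substitute the one known placeholder, then drop any line still containing Jinja delimiters so the remainder parses as plain YAML. The file name and port values below are made up for illustration.

```shell
# Hypothetical portgroups file mixing plain YAML with Salt Jinja templating.
cat > /tmp/portgroups_demo.yaml <<'EOF'
portgroups:
  ssh:
    tcp:
      - {{ ssh_port }}
  {% if extra %}
  custom:
    tcp:
      - 9999
  {% endif %}
  https:
    tcp:
      - 443
EOF
# Replace the known ssh_port placeholder, then remove any remaining line
# that still carries Jinja delimiters, mirroring loadYaml()'s regex pass.
sed 's/{{ ssh_port }}/22/' /tmp/portgroups_demo.yaml \
  | grep -vE '\{%|\{\{|%\}|\}\}' > /tmp/portgroups_clean.yaml
cat /tmp/portgroups_clean.yaml
```

Note the same caveat as the Python version: only the delimiter lines are removed, so the body of a `{% if %}` block survives, which is why the result is flagged read-only rather than written back.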

@@ -1,6 +1,7 @@
+#!/bin/bash
 . /usr/sbin/so-common
 {% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}
-wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}"
+wait_for_web_response "http://localhost:5601/api/spaces/space/default" "default" 300 "{{ ELASTICCURL }}"
 
 ## This hackery will be removed if using Elastic Auth ##
 # Let's snag a cookie from Kibana
@@ -12,6 +13,6 @@ echo "Setting up default Space:"
 {% if HIGHLANDER %}
 {{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["enterpriseSearch"]} ' >> /opt/so/log/kibana/misc.log
 {% else %}
-{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
+{{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet","fleetv2","securitySolutionCases"]} ' >> /opt/so/log/kibana/misc.log
 {% endif %}
 echo
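The `wait_for_web_response` helper comes from so-common and is not shown in this diff; its polling pattern can be sketched as follows, with the fetch command parameterized so the sketch needs no network (the function name and argument order here are assumed, not taken from so-common):

```shell
# Retry a fetch command until its output contains an expected string, or
# give up after a number of attempts. Stand-in for wait_for_web_response.
wait_for_response() {
  fetch_cmd=$1; expected=$2; attempts=$3
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # $fetch_cmd is intentionally unquoted so it word-splits into a command.
    if $fetch_cmd | grep -q "$expected"; then
      return 0
    fi
    i=$((i+1))
    sleep 1
  done
  return 1
}

# Example: succeeds on the first attempt because the fetch already
# returns the expected token.
wait_for_response "echo default-space-ok" "default" 3 && echo "ready"
```

The change in the hunk above swaps the polled URL and token: instead of waiting for the Kibana app page to contain "Elastic", it waits for the Spaces API to return the default space, which is a more direct readiness signal for the PUT that follows.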

View File

@@ -238,7 +238,7 @@ function syncElastic() {
   syncElasticSystemUser "$authPillarJson" "so_monitor_user" "$usersTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_elastic_user" "superuser" "$rolesTmpFile"
-  syncElasticSystemRole "$authPillarJson" "so_kibana_user" "superuser" "$rolesTmpFile"
+  syncElasticSystemRole "$authPillarJson" "so_kibana_user" "kibana_system" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_logstash_user" "superuser" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_beats_user" "superuser" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_collector" "$rolesTmpFile"
@@ -437,7 +437,7 @@ function updateStatus() {
     state="inactive"
   fi
   body="{ \"schema_id\": \"$schemaId\", \"state\": \"$state\", \"traits\": $traitBlock }"
-  response=$(curl -fSsL -XPUT "${kratosUrl}/identities/$identityId" -d "$body")
+  response=$(curl -fSsL -XPUT -H "Content-Type: application/json" "${kratosUrl}/identities/$identityId" -d "$body")
   [[ $? != 0 ]] && fail "Unable to update user"
 }
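The `updateStatus` fix above adds an explicit `Content-Type: application/json` header to the kratos identity PUT; without it, some servers reject or misparse the JSON body. A minimal sketch of the equivalent request construction in Python (the URL and identity id below are placeholders, not values from the script):

```python
import json
import urllib.request


def build_identity_put(kratos_url, identity_id, body):
    """Build a PUT request for a kratos identity update, always sending the
    JSON body with an explicit Content-Type header (the fix shown above)."""
    return urllib.request.Request(
        f"{kratos_url}/identities/{identity_id}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
```

Sending the request (e.g. via `urllib.request.urlopen`) is omitted; the point is only that the header travels with the body.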


@@ -48,7 +48,7 @@ fi
 {% else %}
-gh_status=$(curl -s -o /dev/null -w "%{http_code}" http://github.com)
+gh_status=$(curl -s -o /dev/null -w "%{http_code}" https://github.com)
 clone_dir="/tmp"
 if [ "$gh_status" == "200" ] || [ "$gh_status" == "301" ]; then


@@ -371,12 +371,109 @@ clone_to_tmp() {
   fi
 }
+elastalert_indices_check() {
+  # Stop Elastalert to prevent Elastalert indices from being re-created
+  if grep -q "^so-elastalert$" /opt/so/conf/so-status/so-status.conf ; then
+    so-elastalert-stop || true
+  fi
+  # Wait for Elasticsearch to initialize
+  echo -n "Waiting for ElasticSearch..."
+  COUNT=0
+  ELASTICSEARCH_CONNECTED="no"
+  while [[ "$COUNT" -le 240 ]]; do
+    so-elasticsearch-query / -k --output /dev/null
+    if [ $? -eq 0 ]; then
+      ELASTICSEARCH_CONNECTED="yes"
+      echo "connected!"
+      break
+    else
+      ((COUNT+=1))
+      sleep 1
+      echo -n "."
+    fi
+  done
+  # Unable to connect to Elasticsearch
+  if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
+    echo
+    echo -e "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
+    echo
+    exit 1
+  fi
+  # Check Elastalert indices
+  echo "Deleting Elastalert indices to prevent issues with upgrade to Elastic 8..."
+  CHECK_COUNT=0
+  while [[ "$CHECK_COUNT" -le 2 ]]; do
+    # Delete Elastalert indices
+    for i in $(so-elasticsearch-query _cat/indices | grep elastalert | awk '{print $3}'); do
+      so-elasticsearch-query $i -XDELETE;
+    done
+    # Check to ensure Elastalert indices are deleted
+    COUNT=0
+    ELASTALERT_INDICES_DELETED="no"
+    while [[ "$COUNT" -le 240 ]]; do
+      RESPONSE=$(so-elasticsearch-query elastalert*)
+      if [[ "$RESPONSE" == "{}" ]]; then
+        ELASTALERT_INDICES_DELETED="yes"
+        echo "Elastalert indices successfully deleted."
+        break
+      else
+        ((COUNT+=1))
+        sleep 1
+        echo -n "."
+      fi
+    done
+    ((CHECK_COUNT+=1))
+  done
+  # If we were unable to delete the Elastalert indices, exit the script
+  if [ "$ELASTALERT_INDICES_DELETED" == "no" ]; then
+    echo
+    echo -e "Unable to delete Elastalert indices. Exiting."
+    echo
+    exit 1
+  fi
+}
 enable_highstate() {
   echo "Enabling highstate."
   salt-call state.enable highstate -l info --local
   echo ""
 }
+es_version_check() {
+  CHECK_ES=$(echo $INSTALLEDVERSION | awk -F. '{print $3}')
+  if [ "$CHECK_ES" -lt "110" ]; then
+    echo "You are currently running Security Onion $INSTALLEDVERSION. You will need to update to version 2.3.130 before updating to 2.3.140 or higher."
+    echo ""
+    echo "If your deployment has Internet access, you can use the following command to update to 2.3.130:"
+    echo "sudo BRANCH=2.3.130-20220607 soup"
+    echo ""
+    echo "Otherwise, if your deployment is configured for airgap, you can instead download the 2.3.130 ISO image from https://download.securityonion.net/file/securityonion/securityonion-2.3.130-20220607.iso."
+    echo ""
+    echo "*** Once you have updated to 2.3.130, you can then update to 2.3.140 or higher as you would normally. ***"
+    exit 1
+  fi
+}
+es_indices_check() {
+  echo "Checking for unsupported Elasticsearch indices..."
+  UNSUPPORTED_INDICES=$(for INDEX in $(so-elasticsearch-indices-list | awk '{print $3}'); do so-elasticsearch-query $INDEX/_settings?human | grep '"created_string":"6' | jq -r 'keys'[0]; done)
+  if [ -z "$UNSUPPORTED_INDICES" ]; then
+    echo "No unsupported indices found."
+  else
+    echo "The following indices were created with Elasticsearch 6, and are not supported when upgrading to Elasticsearch 8. These indices may need to be deleted, migrated, or re-indexed before proceeding with the upgrade. Please see https://docs.securityonion.net/en/2.3/soup.html#elastic-8 for more details."
+    echo
+    echo "$UNSUPPORTED_INDICES"
+    exit 1
+  fi
+}
 generate_and_clean_tarballs() {
   local new_version
   new_version=$(cat $UPDATE_DIR/VERSION)
@@ -422,8 +519,9 @@ preupgrade_changes() {
   [[ "$INSTALLEDVERSION" == 2.3.80 ]] && up_to_2.3.90
   [[ "$INSTALLEDVERSION" == 2.3.90 || "$INSTALLEDVERSION" == 2.3.91 ]] && up_to_2.3.100
   [[ "$INSTALLEDVERSION" == 2.3.100 ]] && up_to_2.3.110
-  [[ "$INSTALLEDVERISON" == 2.3.110 ]] && up_to_2.3.120
-  [[ "$INSTALLEDVERISON" == 2.3.120 ]] && up_to_2.3.130
+  [[ "$INSTALLEDVERSION" == 2.3.110 ]] && up_to_2.3.120
+  [[ "$INSTALLEDVERSION" == 2.3.120 ]] && up_to_2.3.130
+  [[ "$INSTALLEDVERSION" == 2.3.130 ]] && up_to_2.3.140
   true
 }
@@ -439,6 +537,7 @@ postupgrade_changes() {
   [[ "$POSTVERSION" == 2.3.100 ]] && post_to_2.3.110
   [[ "$POSTVERSION" == 2.3.110 ]] && post_to_2.3.120
   [[ "$POSTVERSION" == 2.3.120 ]] && post_to_2.3.130
+  [[ "$POSTVERSION" == 2.3.130 ]] && post_to_2.3.140
   true
@@ -515,6 +614,14 @@ post_to_2.3.130() {
   POSTVERSION=2.3.130
 }
+post_to_2.3.140() {
+  echo "Post Processing for 2.3.140"
+  FORCE_SYNC=true so-user sync
+  so-kibana-restart
+  so-kibana-space-defaults
+  POSTVERSION=2.3.140
+}
 stop_salt_master() {
@@ -762,10 +869,13 @@ up_to_2.3.100() {
   echo "Adding receiver to assigned_hostgroups.local.map.yaml"
   grep -qxF " receiver:" /opt/so/saltstack/local/salt/firewall/assigned_hostgroups.local.map.yaml || sed -i -e '$a\ receiver:' /opt/so/saltstack/local/salt/firewall/assigned_hostgroups.local.map.yaml
+  INSTALLEDVERSION=2.3.100
 }
 up_to_2.3.110() {
   sed -i 's|shards|index_template:\n template:\n settings:\n index:\n number_of_shards|g' /opt/so/saltstack/local/pillar/global.sls
+  INSTALLEDVERSION=2.3.110
 }
 up_to_2.3.120() {
@@ -773,11 +883,19 @@ up_to_2.3.120() {
   so-thehive-stop
   so-thehive-es-stop
   so-cortex-stop
+  INSTALLEDVERSION=2.3.120
 }
 up_to_2.3.130() {
   # Remove file for nav update
   rm -f /opt/so/conf/navigator/layers/nav_layer_playbook.json
+  INSTALLEDVERSION=2.3.130
+}
+up_to_2.3.140() {
+  elastalert_indices_check
+  ##
+  INSTALLEDVERSION=2.3.140
 }
 verify_upgradespace() {
@@ -958,7 +1076,7 @@ update_repo() {
   fi
   rm -f /etc/apt/sources.list.d/salt.list
-  echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt $OSVER main" > /etc/apt/sources.list.d/saltstack.list
+  echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt3004.2/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list
   apt-get update
   fi
 }
@@ -1093,6 +1211,9 @@ main() {
   fi
   echo "Verifying we have the latest soup script."
   verify_latest_update_script
+  es_version_check
+  es_indices_check
+  elastalert_indices_check
   echo ""
   set_palette
   check_elastic_license
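The `elastalert_indices_check` function added above relies twice on the same poll-until-ready loop: try a probe up to 240 times, one second apart, and record whether it ever succeeded. Stripped of the Elasticsearch specifics, the pattern is roughly this sketch (`probe` stands in for `so-elasticsearch-query`; the real script also sleeps between attempts):

```python
def poll_until(probe, max_attempts=240):
    """Call probe() repeatedly until it returns True or the attempt budget
    is exhausted. Returns (succeeded, attempts_used_before_success)."""
    for attempt in range(max_attempts + 1):
        if probe():
            return True, attempt
    return False, max_attempts + 1
```

On timeout the shell version prints troubleshooting hints and exits 1; here that decision is left to the caller via the returned flag.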


@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-kratos:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close kratos indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-kratos.*|so-kratos.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:


@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-kratos:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete kratos indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-kratos.*|so-kratos.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:
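The new close/delete/warm curator actions above all chain the same two filters: a regex pattern match on the index name, then an age filter that parses a `%Y.%m.%d` suffix from the name and compares it to a day threshold. A rough local model of that filter chain (not curator's actual implementation, just the selection logic it configures):

```python
import re
from datetime import datetime, timedelta


def indices_to_delete(indices, delete_days, today):
    """Select kratos indices matching the pattern filter whose %Y.%m.%d
    name suffix is older than delete_days (the age filter above)."""
    pattern = re.compile(r"^(logstash-kratos|so-kratos)")
    cutoff = today - timedelta(days=delete_days)
    selected = []
    for name in indices:
        if not pattern.match(name):
            continue  # pattern filter: only kratos indices are candidates
        m = re.search(r"(\d{4}\.\d{2}\.\d{2})$", name)
        if m and datetime.strptime(m.group(1), "%Y.%m.%d") < cutoff:
            selected.append(name)  # age filter: older than the cutoff
    return selected
```

With `ignore_empty_list: True`, curator treats an empty selection as success rather than an error.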


@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-kratos:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-kratos
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}


@@ -23,8 +23,8 @@ read lastPID < $lf
 # if lastPID is not null and a process with that pid exists , exit
 [ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
 echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
 /usr/sbin/so-curator-closed-delete > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1;
@@ -32,13 +32,12 @@ docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/cur
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kibana-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
 {% endfor -%}


@@ -29,7 +29,7 @@ LOG="/opt/so/log/curator/so-curator-closed-delete.log"
 overlimit() {
-  [[ $(du -hs --block-size=1GB /nsm/elasticsearch/nodes | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]]
+  [[ $(du -hs --block-size=1GB /nsm/elasticsearch/indices | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]]
 }
 closedindices() {


@@ -24,21 +24,18 @@ read lastPID < $lf
 [ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
 echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
 {% endfor -%}


@@ -24,21 +24,18 @@ read lastPID < $lf
 [ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
 echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-delete.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-delete.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-delete.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-delete.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-delete.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
 {% endfor -%}


@@ -24,21 +24,18 @@ read lastPID < $lf
 [ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
 echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-warm.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-warm.yml > /dev/null 2>&1;
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-warm.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-warm.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
 docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-warm.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
 {% endfor -%}


@@ -201,8 +201,8 @@ so-curatorclusterclose:
   cron.present:
     - name: /usr/sbin/so-curator-cluster-close > /opt/so/log/curator/cron-close.log 2>&1
     - user: root
-    - minute: '2'
-    - hour: '*/1'
+    - minute: '5'
+    - hour: '1'
     - daymonth: '*'
     - month: '*'
     - dayweek: '*'
@@ -211,8 +211,8 @@ so-curatorclusterdelete:
   cron.present:
     - name: /usr/sbin/so-curator-cluster-delete > /opt/so/log/curator/cron-delete.log 2>&1
     - user: root
-    - minute: '2'
-    - hour: '*/1'
+    - minute: '5'
+    - hour: '1'
    - daymonth: '*'
     - month: '*'
     - dayweek: '*'
@@ -221,8 +221,8 @@ so-curatorclusterwarm:
   cron.present:
     - name: /usr/sbin/so-curator-cluster-warm > /opt/so/log/curator/cron-warm.log 2>&1
     - user: root
-    - minute: '2'
-    - hour: '*/1'
+    - minute: '5'
+    - hour: '1'
     - daymonth: '*'
     - month: '*'
     - dayweek: '*'
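The schedule change above moves all three curator cron jobs from minute 2 of every hour (`2 */1 * * *`) to once daily at 01:05 (`5 1 * * *`), i.e. from 24 runs per day to 1. A quick sanity check of that frequency, handling only the two hour-field forms that appear in this state file:

```python
def daily_runs(hour_field):
    """Firings per day for a cron hour field of the forms used above:
    '*/n' means every n hours; a bare hour number means once per day."""
    if hour_field.startswith("*/"):
        return 24 // int(hour_field[2:])
    return 1
```

This is deliberately not a general cron parser; ranges, lists, and other field types are out of scope.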


@@ -129,6 +129,9 @@ so-elastalert:
       - file: elastaconf
     - watch:
       - file: elastaconf
+    - onlyif:
+      - "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 8" {# only run this state if elasticsearch is version 8 #}
 append_so-elastalert_so-status.conf:
   file.append:

@@ -53,9 +53,6 @@ elasticsearch:
   script:
     max_compilations_rate: 20000/1m
   indices:
-    query:
-      bool:
-        max_clause_count: 3500
     id_field_data:
       enabled: false
   logger:


@@ -51,9 +51,10 @@
     },
     { "set": { "field": "_index", "value": "so-firewall", "override": true } },
     { "set": { "if": "ctx.network?.transport_id == '0'", "field": "network.transport", "value": "icmp", "override": true } },
-    {"community_id": {} },
+    { "community_id": {} },
     { "set": { "field": "module", "value": "pfsense", "override": true } },
     { "set": { "field": "dataset", "value": "firewall", "override": true } },
+    { "set": { "field": "category", "value": "network", "override": true } },
     { "remove": { "field": ["real_message", "ip_sub_msg", "firewall.sub_message"], "ignore_failure": true } }
   ]
 }


@@ -0,0 +1,13 @@
{
"description" : "kratos",
"processors" : [
{
"set": {
"field": "_index",
"value": "so-kratos",
"override": true
}
},
{ "pipeline": { "name": "common" } }
]
}
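The new kratos pipeline above does one thing before handing off to the `common` pipeline: a `set` processor rewrites `_index` to `so-kratos` with `override: true`. A rough local model of that processor's semantics (not Elasticsearch's implementation, just the assignment rule it configures):

```python
def apply_set(doc, field, value, override=True):
    """Approximate the ingest 'set' processor: assign value to field,
    but leave an existing value alone when override is False."""
    if override or field not in doc:
        doc[field] = value
    return doc
```

With `override: true`, whatever index Filebeat originally targeted is replaced, which is how kratos events land in the `so-kratos` index.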


@@ -30,7 +30,7 @@ echo -n "Waiting for ElasticSearch..."
 COUNT=0
 ELASTICSEARCH_CONNECTED="no"
 while [[ "$COUNT" -le 240 ]]; do
-  so-elasticsearch-query -k --output /dev/null --silent --head --fail
+  so-elasticsearch-query / -k --output /dev/null --silent --head --fail
   if [ $? -eq 0 ]; then
     ELASTICSEARCH_CONNECTED="yes"
     echo "connected!"


@@ -64,6 +64,9 @@ logging.files:
   # automatically rotated
   rotateeverybytes: 10485760 # = 10MB
+  # Rotate on startup
+  rotateonstartup: false
   # Number of rotated log files to keep. Oldest files will be deleted first.
   keepfiles: 7
@@ -114,7 +117,8 @@ filebeat.inputs:
     fields_under_root: true
 {%- if grains['role'] in ['so-eval', 'so-standalone', 'so-manager', 'so-managersearch', 'so-import'] %}
-  - type: log
+  - type: filestream
+    id: logscan
     paths:
       - /logs/logscan/alerts.log
     fields:
@@ -131,7 +135,8 @@ filebeat.inputs:
 {%- if grains['role'] in ['so-eval', 'so-standalone', 'so-sensor', 'so-helix', 'so-heavynode', 'so-import'] %}
 {%- if ZEEKVER != 'SURICATA' %}
 {%- for LOGNAME in salt['pillar.get']('zeeklogs:enabled', '') %}
-  - type: log
+  - type: filestream
+    id: zeek-{{ LOGNAME }}
     paths:
       - /nsm/zeek/logs/current/{{ LOGNAME }}.log
     fields:
@@ -146,7 +151,8 @@ filebeat.inputs:
     clean_removed: true
     close_removed: false
-  - type: log
+  - type: filestream
+    id: import-zeek={{ LOGNAME }}
     paths:
       - /nsm/import/*/zeek/logs/{{ LOGNAME }}.log
     fields:
@@ -170,7 +176,8 @@ filebeat.inputs:
 {%- endfor %}
 {%- endif %}
-  - type: log
+  - type: filestream
+    id: suricata-eve
     paths:
       - /nsm/suricata/eve*.json
     fields:
@@ -186,7 +193,8 @@ filebeat.inputs:
     clean_removed: false
     close_removed: false
-  - type: log
+  - type: filestream
+    id: import-suricata
     paths:
       - /nsm/import/*/suricata/eve*.json
     fields:
@@ -208,7 +216,8 @@ filebeat.inputs:
     clean_removed: false
     close_removed: false
 {%- if STRELKAENABLED == 1 %}
-  - type: log
+  - type: filestream
+    id: strelka
     paths:
       - /nsm/strelka/log/strelka.log
     fields:
@@ -229,7 +238,8 @@ filebeat.inputs:
 {%- if WAZUHENABLED == 1 %}
-  - type: log
+  - type: filestream
+    id: wazuh
     paths:
       - /wazuh/archives/archives.json
     fields:
@@ -247,7 +257,8 @@ filebeat.inputs:
 {%- if FLEETMANAGER or FLEETNODE %}
-  - type: log
+  - type: filestream
+    id: osquery
     paths:
       - /nsm/osquery/fleet/result.log
     fields:
@@ -317,13 +328,13 @@ filebeat.inputs:
 {%- endif %}
 {%- if grains['role'] in ['so-eval', 'so-standalone', 'so-manager', 'so-managersearch', 'so-import'] %}
-  - type: log
+  - type: filestream
+    id: kratos
     paths:
       - /logs/kratos/kratos.log
     fields:
       module: kratos
       category: host
-    tags: beat-ext
     processors:
       - decode_json_fields:
           fields: ["message"]
@@ -341,13 +352,15 @@ filebeat.inputs:
           target: ''
     fields:
       event.dataset: access
+    pipeline: "kratos"
     fields_under_root: true
     clean_removed: false
     close_removed: false
 {%- endif %}
 {%- if grains.role == 'so-idh' %}
-  - type: log
+  - type: filestream
+    id: idh
     paths:
       - /nsm/idh/opencanary.log
     fields:
@@ -436,6 +449,12 @@ output.elasticsearch:
       - index: "so-logscan"
         when.contains:
          module: "logscan"
+      - index: "so-elasticsearch-%{+YYYY.MM.dd}"
+        when.contains:
+          event.module: "elasticsearch"
+      - index: "so-kibana-%{+YYYY.MM.dd}"
+        when.contains:
+          event.module: "kibana"
 setup.template.enabled: false
 {%- else %}

View File

@@ -1,18 +1,2 @@
 # DO NOT EDIT THIS FILE
-{%- if MODULES.modules is iterable and MODULES.modules is not string and MODULES.modules|length > 0%}
-{%- for module in MODULES.modules.keys() %}
-- module: {{ module }}
-{%- for fileset in MODULES.modules[module] %}
-  {{ fileset }}:
-    enabled: {{ MODULES.modules[module][fileset].enabled|string|lower }}
-{#- only manage the settings if the fileset is enabled #}
-{%- if MODULES.modules[module][fileset].enabled %}
-{%- for var, value in MODULES.modules[module][fileset].items() %}
-{%- if var|lower != 'enabled' %}
-    {{ var }}: {{ value }}
-{%- endif %}
-{%- endfor %}
-{%- endif %}
-{%- endfor %}
-{%- endfor %}
-{% endif %}
+{{ MODULES|yaml(False) }}

View File

@@ -18,8 +18,8 @@
 {% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set LOCALHOSTNAME = salt['grains.get']('host') %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESENABLED with context %}
 {% from 'filebeat/map.jinja' import FILEBEAT_EXTRA_HOSTS with context %}
 {% set ES_INCLUDED_NODES = ['so-eval', 'so-standalone', 'so-managersearch', 'so-node', 'so-heavynode', 'so-import'] %}
@@ -88,21 +88,21 @@ filebeatmoduleconf:
   - template: jinja
   - show_changes: False
-sodefaults_module_conf:
+merged_module_conf:
   file.managed:
-    - name: /opt/so/conf/filebeat/modules/securityonion.yml
+    - name: /opt/so/conf/filebeat/modules/modules.yml
     - source: salt://filebeat/etc/module_config.yml.jinja
     - template: jinja
     - defaults:
-        MODULES: {{ SO }}
+        MODULES: {{ MODULESENABLED }}
-thirdparty_module_conf:
-  file.managed:
-    - name: /opt/so/conf/filebeat/modules/thirdparty.yml
-    - source: salt://filebeat/etc/module_config.yml.jinja
-    - template: jinja
-    - defaults:
-        MODULES: {{ THIRDPARTY }}
+so_module_conf_remove:
+  file.absent:
+    - name: /opt/so/conf/filebeat/modules/securityonion.yml
+thirdyparty_module_conf_remove:
+  file.absent:
+    - name: /opt/so/conf/filebeat/modules/thirdparty.yml
 so-filebeat:
   docker_container.running:
@@ -127,11 +127,11 @@ so-filebeat:
   - 0.0.0.0:514:514/udp
   - 0.0.0.0:514:514/tcp
   - 0.0.0.0:5066:5066/tcp
-{% for module in THIRDPARTY.modules.keys() %}
-{% for submodule in THIRDPARTY.modules[module] %}
-{% if THIRDPARTY.modules[module][submodule].enabled and THIRDPARTY.modules[module][submodule]["var.syslog_port"] is defined %}
-  - {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/tcp
-  - {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/udp
+{% for module in MODULESMERGED.modules.keys() %}
+{% for submodule in MODULESMERGED.modules[module] %}
+{% if MODULESMERGED.modules[module][submodule].enabled and MODULESMERGED.modules[module][submodule]["var.syslog_port"] is defined %}
+  - {{ MODULESMERGED.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}/tcp
+  - {{ MODULESMERGED.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}/udp
 {% endif %}
 {% endfor %}
 {% endfor %}

View File

@@ -1,10 +1,3 @@
-{% import_yaml 'filebeat/thirdpartydefaults.yaml' as TPDEFAULTS %}
-{% set THIRDPARTY = salt['pillar.get']('filebeat:third_party_filebeat', default=TPDEFAULTS.third_party_filebeat, merge=True) %}
-{% import_yaml 'filebeat/securityoniondefaults.yaml' as SODEFAULTS %}
-{% set SO = SODEFAULTS.securityonion_filebeat %}
-{#% set SO = salt['pillar.get']('filebeat:third_party_filebeat', default=SODEFAULTS.third_party_filebeat, merge=True) %#}
 {% set role = grains.role %}
 {% set FILEBEAT_EXTRA_HOSTS = [] %}
 {% set mainint = salt['pillar.get']('host:mainint') %}

View File

@@ -0,0 +1,18 @@
{% import_yaml 'filebeat/thirdpartydefaults.yaml' as TPDEFAULTS %}
{% import_yaml 'filebeat/securityoniondefaults.yaml' as SODEFAULTS %}
{% set THIRDPARTY = salt['pillar.get']('filebeat:third_party_filebeat', default=TPDEFAULTS.third_party_filebeat, merge=True) %}
{% set SO = salt['pillar.get']('filebeat:securityonion_filebeat', default=SODEFAULTS.securityonion_filebeat, merge=True) %}
{% set MODULESMERGED = salt['defaults.merge'](SO, THIRDPARTY, in_place=False) %}
{% set MODULESENABLED = [] %}
{% for module in MODULESMERGED.modules.keys() %}
{% set ENABLEDFILESETS = {} %}
{% for fileset in MODULESMERGED.modules[module] %}
{% if MODULESMERGED.modules[module][fileset].get('enabled', False) %}
{% do ENABLEDFILESETS.update({'module': module, fileset: MODULESMERGED.modules[module][fileset]}) %}
{% endif %}
{% endfor %}
{% if ENABLEDFILESETS|length > 0 %}
{% do MODULESENABLED.append(ENABLEDFILESETS) %}
{% endif %}
{% endfor %}
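The new `modules.map.jinja` above merges the Security Onion and third-party module maps, then builds `MODULESENABLED` from only the filesets flagged `enabled`. A minimal shell sketch of that filtering step, using hypothetical module/fileset names rather than the repo's actual defaults:

```shell
#!/usr/bin/env bash
# Merged module map: fileset name -> enabled flag (example values only).
declare -A merged=(
  ["cylance.protect"]="false"
  ["zscaler.zia"]="true"
)
# Keep only filesets whose enabled flag is true, mirroring MODULESENABLED.
enabled=()
for key in "${!merged[@]}"; do
  if [ "${merged[$key]}" = "true" ]; then
    enabled+=("$key")
  fi
done
printf '%s\n' "${enabled[@]}"
```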

View File

@@ -1,7 +1,7 @@
 filebeat:
   config:
     inputs:
-      - type: log
+      - type: filestream
         paths:
           - /nsm/mylogdir/mylog.log
         fields:

View File

@@ -74,12 +74,6 @@ third_party_filebeat:
       enabled: false
   amp:
       enabled: false
-  cyberark:
-    corepas:
-      enabled: false
-      var.input: udp
-      var.syslog_host: 0.0.0.0
-      var.syslog_port: 9527
   cylance:
     protect:
       enabled: false
@@ -259,8 +253,6 @@ third_party_filebeat:
       enabled: false
   anomalithreatstream:
       enabled: false
-  recordedfuture:
-      enabled: false
   zscaler:
     zia:
       enabled: false

View File

@@ -59,7 +59,7 @@ update() {
 IFS=$'\r\n' GLOBIGNORE='*' command eval 'LINES=($(cat $1))'
 for i in "${LINES[@]}"; do
-  RESPONSE=$({{ ELASTICCURL }} -X PUT "localhost:5601/api/saved_objects/config/7.17.4" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
+  RESPONSE=$({{ ELASTICCURL }} -X PUT "localhost:5601/api/saved_objects/config/8.3.2" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
   echo $RESPONSE; if [[ "$RESPONSE" != *"\"success\":true"* ]] && [[ "$RESPONSE" != *"updated_at"* ]] ; then RETURN_CODE=1;fi
 done
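The loop above treats a Kibana saved-objects PUT as applied when the response contains either `"success":true` or an `updated_at` field. A standalone sketch of that check, with an example response body:

```shell
#!/usr/bin/env bash
# Return 0 if a Kibana saved_objects response looks like a successful update.
check_response() {
  local resp="$1"
  if [[ "$resp" != *'"success":true'* ]] && [[ "$resp" != *'updated_at'* ]]; then
    return 1
  fi
  return 0
}

# Example response body (illustrative, not captured from a live Kibana).
check_response '{"id":"8.3.2","updated_at":"2022-08-15T00:00:00Z"}' && echo applied
```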

View File

@@ -2,7 +2,7 @@
 {% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}
 {% if salt['pillar.get']('elasticsearch:auth:enabled', False) %}
-{% do KIBANACONFIG.kibana.config.elasticsearch.update({'username': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user'), 'password': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass')}) %}
+{% do KIBANACONFIG.kibana.config.elasticsearch.update({'username': salt['pillar.get']('elasticsearch:auth:users:so_kibana_user:user'), 'password': salt['pillar.get']('elasticsearch:auth:users:so_kibana_user:pass')}) %}
 {% else %}
 {% do KIBANACONFIG.kibana.config.xpack.update({'security': {'authc': {'providers': {'anonymous': {'anonymous1': {'order': 0, 'credentials': 'elasticsearch_anonymous_user'}}}}}}) %}
 {% endif %}

View File

@@ -28,7 +28,8 @@ kibana:
       security:
         showInsecureClusterWarning: False
     xpack:
-      ml:
-        enabled: False
       security:
-        secureCookies: True
+        secureCookies: true
+      reporting:
+        kibanaServer:
+          hostname: localhost

View File

@@ -1 +1 @@
-{"attributes": {"buildNum": 39457,"defaultIndex": "2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "7.17.4","id": "7.17.4","migrationVersion": {"config": "7.13.0"},"references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}
+{"attributes": {"buildNum": 39457,"defaultIndex": "2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "8.3.2","id": "8.3.2","migrationVersion": {"config": "7.13.0"},"references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}

View File

@@ -0,0 +1,22 @@
{%- if grains['role'] == 'so-eval' -%}
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [module] =~ "kratos" and "import" not in [tags] {
elasticsearch {
pipeline => "kratos"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-kratos"
ssl => true
ssl_certificate_verification => false
}
}
}

View File

@@ -42,7 +42,7 @@ gpgkey=file:///etc/pki/rpm-gpg/docker.pub
 [saltstack]
 name=SaltStack repo for RHEL/CentOS $releasever PY3
-baseurl=https://repo.securityonion.net/file/securityonion-repo/saltstack/
+baseurl=https://repo.securityonion.net/file/securityonion-repo/salt/
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/SALTSTACK-GPG-KEY.pub

View File

@@ -42,7 +42,7 @@ gpgkey=https://repo.securityonion.net/file/securityonion-repo/keys/docker.pub
 [saltstack]
 name=SaltStack repo for RHEL/CentOS $releasever PY3
-baseurl=http://repocache.securityonion.net/file/securityonion-repo/saltstack/
+baseurl=http://repocache.securityonion.net/file/securityonion-repo/salt/
 enabled=1
 gpgcheck=1
 gpgkey=https://repo.securityonion.net/file/securityonion-repo/keys/SALTSTACK-GPG-KEY.pub

View File

@@ -7,7 +7,7 @@ saltstack.list:
   file.managed:
     - name: /etc/apt/sources.list.d/saltstack.list
     - contents:
-      - deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/{{grains.osrelease}}/amd64/salt/ {{grains.oscodename}} main
+      - deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/{{grains.osrelease}}/amd64/salt3004.2/ {{grains.oscodename}} main
 apt_update:
   cmd.run:
View File

@@ -2,4 +2,4 @@
 # When updating the salt version, also update the version in securityonion-builds/images/iso-task/Dockerfile and saltify function in so-functions
 salt:
   master:
-    version: 3004.1
+    version: 3004.2

View File

@@ -2,6 +2,6 @@
 # When updating the salt version, also update the version in securityonion-builds/images/iso-task/Dockerfile and saltify function in so-functions
 salt:
   minion:
-    version: 3004.1
+    version: 3004.2
 check_threshold: 3600 # in seconds, threshold used for so-salt-minion-check. any value less than 600 seconds may cause a lot of salt-minion restarts since the job to touch the file occurs every 5-8 minutes by default
 service_start_delay: 30 # in seconds.

View File

@@ -4216,17 +4216,35 @@ install_centos_stable_deps() {
 install_centos_stable() {
   __PACKAGES=""
+  local cloud='salt-cloud'
+  local master='salt-master'
+  local minion='salt-minion'
+  local syndic='salt-syndic'
+  if echo "$STABLE_REV" | grep -q "archive";then # point release being applied
+    local ver=$(echo "$STABLE_REV"|awk -F/ '{print $2}') # strip archive/
+  elif echo "$STABLE_REV" | egrep -vq "archive|latest";then # latest or major version(3003, 3004, etc) being applie
+    local ver=$STABLE_REV
+  fi
+  if [ ! -z $ver ]; then
+    cloud+="-$ver"
+    master+="-$ver"
+    minion+="-$ver"
+    syndic+="-$ver"
+  fi
   if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ];then
-    __PACKAGES="${__PACKAGES} salt-cloud"
+    __PACKAGES="${__PACKAGES} $cloud"
   fi
   if [ "$_INSTALL_MASTER" -eq $BS_TRUE ];then
-    __PACKAGES="${__PACKAGES} salt-master"
+    __PACKAGES="${__PACKAGES} $master"
   fi
   if [ "$_INSTALL_MINION" -eq $BS_TRUE ]; then
-    __PACKAGES="${__PACKAGES} salt-minion"
+    __PACKAGES="${__PACKAGES} $minion"
   fi
   if [ "$_INSTALL_SYNDIC" -eq $BS_TRUE ];then
-    __PACKAGES="${__PACKAGES} salt-syndic"
+    __PACKAGES="${__PACKAGES} $syndic"
   fi
   # shellcheck disable=SC2086
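The bootstrap change above derives a package version suffix from `$STABLE_REV`: an `archive/X.Y` revision is stripped to `X.Y`, a bare major or point version is used as-is, and `latest` leaves the packages unpinned. The same parsing as a standalone function (example inputs only):

```shell
#!/usr/bin/env bash
# Derive the salt package version suffix from a STABLE_REV-style string.
parse_rev() {
  local rev="$1" ver=""
  if echo "$rev" | grep -q "archive"; then
    # "archive/3004.2" -> "3004.2"
    ver=$(echo "$rev" | awk -F/ '{print $2}')
  elif echo "$rev" | grep -Evq "archive|latest"; then
    # bare major or point version, e.g. "3004"
    ver="$rev"
  fi
  echo "$ver"
}

parse_rev "archive/3004.2"   # -> 3004.2
parse_rev "3004"             # -> 3004
parse_rev "latest"           # -> empty string, packages stay unpinned
```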

View File

@@ -15,8 +15,9 @@ function ci() {
     exit 1
   fi
+  pip install pytest pytest-cov
   flake8 "$TARGET_DIR" "--config=${HOME_DIR}/pytest.ini"
-  pytest "$TARGET_DIR" "--cov-config=${HOME_DIR}/pytest.ini" "--cov=$TARGET_DIR" --doctest-modules --cov-report=term --cov-fail-under=100
+  python3 -m pytest "--cov-config=${HOME_DIR}/pytest.ini" "--cov=$TARGET_DIR" --doctest-modules --cov-report=term --cov-fail-under=100 "$TARGET_DIR"
 }
 function download() {

View File

@@ -1 +1 @@
-file_path: []
+file_path: "{{ salt['pillar.get']('sensoroni:analyzers:localfile:file_path', '') }}"

View File

@@ -17,13 +17,16 @@ class TestLocalfileMethods(unittest.TestCase):
     def test_main_success(self):
         output = {"foo": "bar"}
+        conf = {"file_path": ["somefile.csv"]}
         with patch('sys.stdout', new=StringIO()) as mock_stdout:
             with patch('localfile.localfile.analyze', new=MagicMock(return_value=output)) as mock:
+                with patch('helpers.loadConfig', new=MagicMock(return_value=conf)) as lcmock:
                     sys.argv = ["cmd", "input"]
                     localfile.main()
                     expected = '{"foo": "bar"}\n'
                     self.assertEqual(mock_stdout.getvalue(), expected)
                     mock.assert_called_once()
+                    lcmock.assert_called_once()

     def test_checkConfigRequirements_present(self):
         conf = {"file_path": "['intel.csv']"}

View File

@@ -35,7 +35,9 @@ class TestMalwareHashRegistryMethods(unittest.TestCase):
         response = malwarehashregistry.sendReq(hash)
         mock.assert_called_once_with(options, hash, flags)
         self.assertIsNotNone(response)
-        self.assertEqual(response, {"hash": "84af04b8e69682782607a0c5796ca56999eda6b3", "last_seen": "2019-15-07 03:30:33", "av_detection_percentage": 35})
+        self.assertEqual(response["hash"], "84af04b8e69682782607a0c5796ca56999eda6b3")
+        self.assertRegex(response["last_seen"], r'2019-..-07 ..:..:33')  # host running this test won't always use UTC
+        self.assertEqual(response["av_detection_percentage"], 35)

     def test_sendReqNoData(self):
         output = "84af04b8e69682782607a0c5796ca5696b3 NO_DATA"

View File

@@ -1,5 +1,5 @@
 [
-  { "name": "Overview", "description": "Overview of all events", "query": "* | groupby -sankey event.dataset event.category* | groupby event.dataset | groupby -bar event.module | groupby event.module | groupby -pie event.category | groupby event.category | groupby observer.name | groupby source.ip | groupby destination.ip | groupby destination.port"},
+  { "name": "Overview", "description": "Overview of all events", "query": "* | groupby -sankey event.dataset event.category* | groupby -pie event.category | groupby -bar event.module | groupby event.dataset | groupby event.module | groupby event.category | groupby observer.name | groupby source.ip | groupby destination.ip | groupby destination.port"},
   { "name": "SOC Auth", "description": "Show all SOC authentication logs", "query": "event.module:kratos AND event.dataset:audit AND msg:authenticated | groupby http_request.headers.x-real-ip | groupby identity_id | groupby http_request.headers.user-agent"},
   { "name": "Elastalerts", "description": "Elastalert logs", "query": "_index: \"*:elastalert*\" | groupby rule_name | groupby alert_info.type"},
   { "name": "Alerts", "description": "Show all alerts", "query": "event.dataset: alert | groupby event.module | groupby rule.name | groupby event.severity | groupby source.ip | groupby destination.ip | groupby destination.port"},
@@ -16,7 +16,7 @@
   { "name": "DPD", "description": "Dynamic Protocol Detection errors", "query": "event.dataset:dpd | groupby error.reason | groupby source.ip | groupby destination.ip | groupby destination.port | groupby network.protocol"},
   { "name": "Files", "description": "Files seen in network traffic", "query": "event.dataset:file | groupby file.mime_type | groupby file.source | groupby file.bytes.total | groupby source.ip | groupby destination.ip"},
   { "name": "FTP", "description": "File Transfer Protocol logs", "query": "event.dataset:ftp | groupby ftp.command | groupby ftp.argument | groupby ftp.user | groupby source.ip | groupby destination.ip | groupby destination.port"},
-  { "name": "HTTP", "description": "Hyper Text Transport Protocol logs", "query": "event.dataset:http | groupby http.method | groupby http.virtual_host | groupby http.uri | groupby http.useragent | groupby http.status_code | groupby http.status_message | groupby source.ip | groupby destination.ip | groupby destination.port"},
+  { "name": "HTTP", "description": "Hyper Text Transport Protocol logs", "query": "event.dataset:http | groupby http.method | groupby http.virtual_host | groupby http.uri | groupby http.useragent | groupby http.status_code | groupby http.status_message | groupby file.resp_mime_types | groupby source.ip | groupby destination.ip | groupby destination.port"},
   { "name": "Intel", "description": "Zeek Intel framework hits", "query": "event.dataset:intel | groupby intel.indicator | groupby intel.indicator_type | groupby intel.seen_where | groupby source.ip | groupby destination.ip | groupby destination.port"},
   { "name": "IRC", "description": "Internet Relay Chat logs", "query": "event.dataset:irc | groupby irc.command.type | groupby irc.username | groupby irc.nickname | groupby irc.command.value | groupby irc.command.info | groupby source.ip | groupby destination.ip | groupby destination.port"},
   { "name": "Kerberos", "description": "Kerberos logs", "query": "event.dataset:kerberos | groupby kerberos.service | groupby kerberos.client | groupby kerberos.request_type | groupby source.ip | groupby destination.ip | groupby destination.port"},

View File

@@ -218,7 +218,7 @@ suricata:
         enabled: "yes"
         # memcap: 64mb
       rdp:
-        #enabled: "no"
+        enabled: "yes"
       ssh:
         enabled: "yes"
       smtp:
@@ -331,7 +331,16 @@ suricata:
       dhcp:
         enabled: "yes"
       sip:
-        #enabled: "no"
+        enabled: "yes"
+      rfb:
+        enabled: "yes"
+        detection-ports:
+          dp: 5900, 5901, 5902, 5903, 5904, 5905, 5906, 5907, 5908, 5909
+      mqtt:
+        enabled: "no"
+      http2:
+        enabled: "no"
     asn1-max-frames: 256
     run-as:
       user: suricata

View File

@@ -30,13 +30,13 @@ fi
 echo "Applying cross cluster search config..."
 {{ ELASTICCURL }} -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings \
   -H 'Content-Type: application/json' \
-  -d "{\"persistent\": {\"search\": {\"remote\": {\"{{ MANAGER }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"
+  -d "{\"persistent\": {\"cluster\": {\"remote\": {\"{{ MANAGER }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"
 # Add all the search nodes to cross cluster searching.
 {%- if TRUECLUSTER is sameas false %}
 {%- if salt['pillar.get']('nodestab', {}) %}
 {%- for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
-{{ ELASTICCURL }} -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SN.split('_')|first }}:9300"]}}}}}'
+{{ ELASTICCURL }} -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"cluster": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SN.split('_')|first }}:9300"]}}}}}'
 {%- endfor %}
 {%- endif %}
 {%- endif %}

View File

@@ -28,4 +28,4 @@ fi
 echo "Applying cross cluster search config..."
 {{ ELASTICCURL }} -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings \
   -H 'Content-Type: application/json' \
-  -d "{\"persistent\": {\"search\": {\"remote\": {\"{{ grains.host }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"
+  -d "{\"persistent\": {\"cluster\": {\"remote\": {\"{{ grains.host }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"

View File

@@ -145,7 +145,7 @@ analyst_salt_local() {
securityonion_repo securityonion_repo
gpg_rpm_import gpg_rpm_import
# Install salt # Install salt
logCmd "yum -y install salt-minion-3004.1 httpd-tools python3 python36-docker python36-dateutil python36-m2crypto python36-mysql python36-packaging python36-lxml yum-utils device-mapper-persistent-data lvm2 openssl jq" logCmd "yum -y install salt-minion-3004.2 httpd-tools python3 python36-docker python36-dateutil python36-m2crypto python36-mysql python36-packaging python36-lxml yum-utils device-mapper-persistent-data lvm2 openssl jq"
logCmd "yum -y update --exclude=salt*" logCmd "yum -y update --exclude=salt*"
salt-call state.apply workstation --local --file-root=../salt/ -l info 2>&1 | tee -a outfile salt-call state.apply workstation --local --file-root=../salt/ -l info 2>&1 | tee -a outfile
@@ -2277,7 +2277,7 @@ saltify() {
fi fi
set_progress_str 7 'Installing salt-master' set_progress_str 7 'Installing salt-master'
if [[ ! $is_iso ]]; then if [[ ! $is_iso ]]; then
logCmd "yum -y install salt-master-3004.1" logCmd "yum -y install salt-master-3004.2"
fi fi
logCmd "systemctl enable salt-master" logCmd "systemctl enable salt-master"
;; ;;
@@ -2290,7 +2290,7 @@ saltify() {
fi fi
set_progress_str 8 'Installing salt-minion & python modules' set_progress_str 8 'Installing salt-minion & python modules'
if [[ ! ( $is_iso || $is_analyst_iso ) ]]; then if [[ ! ( $is_iso || $is_analyst_iso ) ]]; then
logCmd "yum -y install salt-minion-3004.1 httpd-tools python3 python36-docker python36-dateutil python36-m2crypto python36-mysql python36-packaging python36-lxml yum-utils device-mapper-persistent-data lvm2 openssl jq" logCmd "yum -y install salt-minion-3004.2 httpd-tools python3 python36-docker python36-dateutil python36-m2crypto python36-mysql python36-packaging python36-lxml yum-utils device-mapper-persistent-data lvm2 openssl jq"
logCmd "yum -y update --exclude=salt*" logCmd "yum -y update --exclude=salt*"
fi fi
logCmd "systemctl enable salt-minion" logCmd "systemctl enable salt-minion"
@@ -2330,7 +2330,7 @@ saltify() {
# Add saltstack repo(s) # Add saltstack repo(s)
wget -q --inet4-only -O - https://repo.securityonion.net/file/securityonion-repo/ubuntu/"$ubuntu_version"/amd64/salt/SALTSTACK-GPG-KEY.pub | apt-key add - >> "$setup_log" 2>&1 wget -q --inet4-only -O - https://repo.securityonion.net/file/securityonion-repo/ubuntu/"$ubuntu_version"/amd64/salt/SALTSTACK-GPG-KEY.pub | apt-key add - >> "$setup_log" 2>&1
echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list 2>> "$setup_log" echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt3004.2/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list 2>> "$setup_log"
# Add Docker repo # Add Docker repo
 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - >> "$setup_log" 2>&1
@@ -2351,7 +2351,7 @@ saltify() {
 set_progress_str 6 'Installing various dependencies'
 retry 50 10 "apt-get -y install sqlite3 libssl-dev" >> "$setup_log" 2>&1 || exit 1
 set_progress_str 7 'Installing salt-master'
-retry 50 10 "apt-get -y install salt-master=3004.1+ds-1" >> "$setup_log" 2>&1 || exit 1
+retry 50 10 "apt-get -y install salt-master=3004.2+ds-1" >> "$setup_log" 2>&1 || exit 1
 retry 50 10 "apt-mark hold salt-master" >> "$setup_log" 2>&1 || exit 1
 ;;
 *)
@@ -2362,14 +2362,14 @@ saltify() {
 echo "Using apt-key add to add SALTSTACK-GPG-KEY.pub and GPG-KEY-WAZUH" >> "$setup_log" 2>&1
 apt-key add "$temp_install_dir"/gpg/SALTSTACK-GPG-KEY.pub >> "$setup_log" 2>&1
 apt-key add "$temp_install_dir"/gpg/GPG-KEY-WAZUH >> "$setup_log" 2>&1
-echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list 2>> "$setup_log"
+echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt3004.2/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list 2>> "$setup_log"
 echo "deb https://packages.wazuh.com/3.x/apt/ stable main" > /etc/apt/sources.list.d/wazuh.list 2>> "$setup_log"
 ;;
 esac
 retry 50 10 "apt-get update" "" "Err:" >> "$setup_log" 2>&1 || exit 1
 set_progress_str 8 'Installing salt-minion & python modules'
-retry 50 10 "apt-get -y install salt-minion=3004.1+ds-1 salt-common=3004.1+ds-1" >> "$setup_log" 2>&1 || exit 1
+retry 50 10 "apt-get -y install salt-minion=3004.2+ds-1 salt-common=3004.2+ds-1" >> "$setup_log" 2>&1 || exit 1
 retry 50 10 "apt-mark hold salt-minion salt-common" >> "$setup_log" 2>&1 || exit 1
 retry 50 10 "apt-get -y install python3-pip python3-dateutil python3-m2crypto python3-mysqldb python3-packaging python3-influxdb python3-lxml" >> "$setup_log" 2>&1 || exit 1
 fi
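The hunk above wraps every apt-get call in a `retry` helper (attempt count, delay, command string). The helper itself is not shown in this diff; the following is a minimal sketch of that pattern, assuming a `retry <max_attempts> <delay_seconds> <command>` signature. The real so-setup helper also accepts extra pattern arguments (as in `retry 50 10 "apt-get update" "" "Err:"` above), which this sketch omits.

```shell
#!/bin/bash
# Minimal sketch of a retry helper, hypothetical signature:
#   retry <max_attempts> <delay_seconds> <command string>
# Re-runs the command until it succeeds or attempts are exhausted.
retry() {
    local max=$1 delay=$2 cmd=$3 attempt=1
    until eval "$cmd"; do
        if (( attempt >= max )); then
            echo "retry: giving up after $attempt attempts: $cmd" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}

# Example: the command fails twice, then succeeds on the third attempt.
n=0
retry 5 0 '[ $((n += 1)) -ge 3 ]' && echo "succeeded after $n attempts"
```

Holding the pinned salt packages afterward (`apt-mark hold`, as the diff does) keeps a later unattended `apt-get upgrade` from moving the minions off the tested Salt version.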


@@ -1106,9 +1106,9 @@ if [[ $success != 0 ]]; then SO_ERROR=1; fi
 # Check entire setup log for errors or unexpected salt states and ensure cron jobs are not reporting errors to root's mailbox
 # Ignore "Status .* was not found" due to output from salt http.query or http.wait_for_successful_query states used with retry
 # Uncaught exception, closing connection|Exception in callback None - this is seen during influxdb / http.wait_for_successful_query state for ubuntu reinstall
-if grep -E "ERROR|Result: False" $setup_log | grep -qvE "Status .* was not found|An exception occurred in this state|Uncaught exception, closing connection|Exception in callback None|deprecation: ERROR|code: 100" || [[ -s /var/spool/mail/root && "$setup_type" == "iso" ]]; then
+if grep -E "ERROR|Result: False" $setup_log | grep -qvE "Status .* was not found|An exception occurred in this state|Uncaught exception, closing connection|Exception in callback None|deprecation: ERROR|code: 100|Running scope as unit" || [[ -s /var/spool/mail/root && "$setup_type" == "iso" ]]; then
 SO_ERROR=1
-grep --color=never "ERROR" "$setup_log" | grep -qvE "Status .* was not found|An exception occurred in this state|Uncaught exception, closing connection|Exception in callback None|deprecation: ERROR|code: 100" > "$error_log"
+grep --color=never "ERROR" "$setup_log" | grep -qvE "Status .* was not found|An exception occurred in this state|Uncaught exception, closing connection|Exception in callback None|deprecation: ERROR|code: 100|Running scope as unit" > "$error_log"
 fi
 if [[ -n $SO_ERROR ]]; then
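The change above adds "Running scope as unit" to the benign-pattern allowlist so that systemd's informational scope messages no longer trip the post-setup error check. The filter is a two-stage grep: select candidate error lines, then drop known-benign matches. A small self-contained sketch of that pattern, using made-up log lines (not taken from a real setup log) and a shortened benign list:

```shell
#!/bin/bash
# Sketch of the two-stage log filter used in the diff above:
# stage 1 selects candidate error lines, stage 2 drops benign patterns.
log=$(mktemp)
cat > "$log" <<'EOF'
[INFO] salt-call completed
[ERROR] Running scope as unit: run-r1234.scope
[ERROR] disk full while writing index
Result: False (state: docker.running)
EOF

# Shortened benign allowlist; the real script carries several more patterns.
benign='Status .* was not found|Running scope as unit'
real_errors=$(grep -E "ERROR|Result: False" "$log" | grep -vE "$benign")
echo "$real_errors"   # prints only the two genuine failures
rm -f "$log"
```

Note that the second stage uses plain `grep -vE` here rather than `grep -qvE` from the script: `-q` suppresses output, so it is only meaningful when you test the exit status (as the `if` condition does), not when redirecting survivors to a file.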

Binary file not shown.

Binary file not shown.

Binary file not shown.