Compare commits


324 Commits

Author SHA1 Message Date
Mike Reeves
9f2b920454 Merge pull request #8535 from Security-Onion-Solutions/hotfix/2.3.140
Hotfix/2.3.140
2022-08-15 15:06:37 -04:00
Mike Reeves
604af45661 Merge pull request #8534 from Security-Onion-Solutions/2.3.140hotfix3
2.3.140 Hotfix
2022-08-15 13:09:14 -04:00
Mike Reeves
3f435c5c1a 2.3.140 Hotfix 2022-08-15 13:03:25 -04:00
Mike Reeves
9903be8120 Merge pull request #8532 from Security-Onion-Solutions/2.3.140-20220815 2022-08-12 15:04:00 -04:00
Doug Burks
991a601a3d FIX: so-curator-closed-delete-delete needs to reference new Elasticsearch directory #8529 2022-08-12 13:21:06 -04:00
Doug Burks
86519d43dc Update HOTFIX 2022-08-12 13:20:15 -04:00
Doug Burks
484aa7b207 Merge pull request #8336 from Security-Onion-Solutions/hotfix/2.3.140
Hotfix/2.3.140
2022-07-19 16:13:47 -04:00
Mike Reeves
6986448239 Merge pull request #8333 from Security-Onion-Solutions/2.3.140hotfix
2.3.140 Hotfix
2022-07-19 14:47:50 -04:00
Mike Reeves
dd48d66c1c 2.3.140 Hotfix 2022-07-19 14:39:44 -04:00
Mike Reeves
440f4e75c1 Merge pull request #8332 from Security-Onion-Solutions/dev
Merge Hotfix
2022-07-19 13:30:20 -04:00
weslambert
c795a70e9c Merge pull request #8329 from Security-Onion-Solutions/fix/elastalert_stop_check_enabled
Check to ensure Elastalert is enabled and suppress missing container error output
2022-07-19 13:27:35 -04:00
weslambert
340dbe8547 Check to see if Elastalert is enabled before trying to run 'so-elastalert-stop'. Also suppress error output for when so-elastalert container is not present. 2022-07-19 13:25:09 -04:00
Mike Reeves
52a5e743e9 Merge pull request #8327 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update HOTFIX
2022-07-19 11:17:13 -04:00
Wes Lambert
5ceff52796 Move Elastalert indices check to function and call from beginning of soup and during pre-upgrade to 2.3.140 2022-07-19 14:54:39 +00:00
Wes Lambert
f3a0ab0b2d Perform Elastalert index check twice 2022-07-19 14:48:19 +00:00
Wes Lambert
4a7c994b66 Revise Elastalert index check deletion logic 2022-07-19 14:31:45 +00:00
Mike Reeves
07b8785f3d Update soup 2022-07-19 10:23:10 -04:00
Mike Reeves
9a1092ab01 Update HOTFIX 2022-07-19 10:21:36 -04:00
Mike Reeves
fbcbfaf7c3 Merge pull request #8310 from Security-Onion-Solutions/dev
2.3.140
2022-07-18 11:23:54 -04:00
Mike Reeves
497110d6cd Merge pull request #8320 from Security-Onion-Solutions/2.3.140-2
2.3.140
2022-07-18 10:57:53 -04:00
Mike Reeves
3711eb52b8 2.3.140 2022-07-18 10:54:50 -04:00
weslambert
8099b1688b Merge pull request #8319 from Security-Onion-Solutions/fix/elasticsearch_query_missing_query_path
Fix missing query path for so-elasticsearch-query
2022-07-18 09:47:16 -04:00
weslambert
2914007393 Add forward slash to fix issue with missing query path 2022-07-18 09:07:34 -04:00
weslambert
f5e10430ed Add forward slash to fix issue with missing query path 2022-07-18 09:07:13 -04:00
Mike Reeves
b5a78d4577 Merge pull request #8309 from Security-Onion-Solutions/2.3.140
2.3.140
2022-07-15 13:36:31 -04:00
Mike Reeves
0a14dad849 Update VERIFY_ISO.md 2022-07-15 13:31:51 -04:00
Mike Reeves
3430df6a20 2.3.140 2022-07-15 13:26:25 -04:00
Mike Reeves
881915f871 Merge pull request #8306 from Security-Onion-Solutions/TOoSmOotH-patch-3
Update defaults.yaml
2022-07-14 16:20:29 -04:00
Mike Reeves
cf8c6a6e94 Update defaults.yaml 2022-07-14 15:17:27 -04:00
weslambert
52ebbf8ff3 Merge pull request #8304 from Security-Onion-Solutions/fix/kibana_space_defaults_web_response_url
Change web_response to evaluate the response from the Spaces API and the default space query
2022-07-14 12:08:02 -04:00
weslambert
2443e8b97e Change web_response to evaluate the response from the Spaces API and the default space query 2022-07-14 12:04:56 -04:00
weslambert
4241eb4b29 Merge pull request #8298 from Security-Onion-Solutions/fix/kibana_space_defaults_shebang
Add shebang so that so-kibana-space-defaults will work correctly on Ubuntu
2022-07-13 16:50:21 -04:00
weslambert
0fd4f34b5b Add shebang so that so-kibana-space-defaults will work correctly on Ubuntu 2022-07-13 16:48:39 -04:00
Josh Patterson
37df49d4f3 Merge pull request #8296 from Security-Onion-Solutions/elastalert_esversion_check
use onlyif requisite instead
2022-07-13 15:22:40 -04:00
m0duspwnens
7d7cf42d9a use onlyif requisite instead 2022-07-13 15:21:34 -04:00
Doug Burks
de0a7d3bcd Merge pull request #8293 from Security-Onion-Solutions/dougburks-patch-1
change hyperlink for Elastic 8 issues
2022-07-13 12:41:50 -04:00
Doug Burks
c67a58a5b1 change hyperlink for Elastic 8 issues 2022-07-13 12:40:03 -04:00
Josh Patterson
e79ca4bb9b Merge pull request #8291 from Security-Onion-Solutions/elastalert_esversion_check
do not start elastalert if elasticsearch is not v8
2022-07-13 11:24:12 -04:00
m0duspwnens
086cf3996d do not start elastalert if elasticsearch is not v8 2022-07-13 11:21:27 -04:00
Doug Burks
7ae5d49a4a Merge pull request #8290 from Security-Onion-Solutions/dougburks-patch-1
increment version to 2.3.140
2022-07-13 09:33:37 -04:00
Doug Burks
34d3c6a882 increment version to 2.3.140 2022-07-13 09:32:28 -04:00
weslambert
4a5664db7b Merge pull request #8289 from Security-Onion-Solutions/fix/soup_unsupported_indices_check
Add missing 'fi' to if/then for unsupported indices check
2022-07-13 09:15:22 -04:00
weslambert
513c7ae56c Add missing 'fi' to if/then for unsupported indices check 2022-07-13 09:13:28 -04:00
weslambert
fa894cf83b Merge pull request #8288 from Security-Onion-Solutions/fix/soup_elastalert_indices_deletion_check
Ensure Elastalert indices are deleted before continuing with SOUP
2022-07-13 08:44:04 -04:00
weslambert
8e92060c29 Ensure Elastalert indices are deleted before continuing with SOUP -- if they are not, generate a failure condition 2022-07-13 08:38:55 -04:00
weslambert
d7eb8b9bcb Merge pull request #8281 from Security-Onion-Solutions/fix/soup_elasticsearch8_index_compatibility
SOUP - Check for indices created by Elasticsearch 6
2022-07-12 16:20:47 -04:00
weslambert
d0a0ca8458 Update exit code for ES checks 2022-07-12 16:15:44 -04:00
Josh Patterson
57b79421d8 Merge pull request #8280 from Security-Onion-Solutions/fix_filebeat
move port bindings back under port bindings
2022-07-12 16:12:49 -04:00
weslambert
4502182b53 Typo - Ensure Elasticsearch version 6 indices are checked 2022-07-12 15:35:46 -04:00
weslambert
0fc6f7b022 Add check for Elasticsearch 6 indices 2022-07-12 15:34:24 -04:00
m0duspwnens
ec451c19f8 move port bindings back under port bindings 2022-07-12 15:17:25 -04:00
weslambert
e9a22d0aff Merge pull request #8275 from Security-Onion-Solutions/fix/filebeat_es_output_additions
Specify outputs for Elasticsearch and Kibana for Eval and Import Mode
2022-07-11 19:03:07 -04:00
weslambert
11d3ed36b7 Specify outputs for Elasticsearch and Kibana for Eval and Import Mode
Add outputs for Elasticsearch and Kibana for Eval/Import Mode, since Logstash is not used in Eval Mode or Import Mode. Otherwise, logs from these inputs end up in a filebeat-prefixed index.
2022-07-11 17:22:09 -04:00
weslambert
d828bbfe47 Merge pull request #8273 from Security-Onion-Solutions/fix/kibana_space_defaults_cases
Add securitySolutionCases feature to ensure Cases are disabled by default
2022-07-11 16:39:30 -04:00
weslambert
bd32394560 Add securitySolutionCases feature to ensure Cases are disabled by default 2022-07-11 16:38:05 -04:00
weslambert
6f4f050a96 Merge pull request #8272 from Security-Onion-Solutions/fix/soup_kibana_space_defaults
Run so-kibana-space-defaults when upgrading to 2.3.140
2022-07-11 14:47:11 -04:00
weslambert
f77edaa5c9 Run so-kibana-space-defaults to re-establish the default enabled features since Fleet feature name changed 2022-07-11 14:41:23 -04:00
Jason Ertel
15124b6ad7 Merge pull request #8271 from Security-Onion-Solutions/kilo
Add content-type header to PUT request, now required in Kratos 0.10.1
2022-07-11 13:47:28 -04:00
Jason Ertel
077053afbd Add content-type header to PUT request, now required in Kratos 0.10.1 2022-07-11 13:43:41 -04:00
weslambert
dd1d5b1a83 Merge pull request #8270 from Security-Onion-Solutions/fix/curator_actions_delete_kratos
Add delete and warm action for Kratos indices in applicable Curator delete/warm scripts
2022-07-11 11:39:43 -04:00
weslambert
e82b6fcdec Typo - Change 'delete' to 'warm' 2022-07-11 11:34:53 -04:00
weslambert
8c8ac41b36 Add action for Kratos indices 2022-07-11 11:32:03 -04:00
weslambert
b611dda143 Add delete action for Kratos indices 2022-07-11 11:31:22 -04:00
weslambert
3f5b98d14d Merge pull request #8269 from Security-Onion-Solutions/fix/curator_actions_kratos
Add Curator actions and adjust Curator close scripts to account for so-kibana and so-kratos indices
2022-07-11 11:21:20 -04:00
Wes Lambert
0b6219d95f Adjust Curator close scripts to include Kibana and Kratos indices 2022-07-11 14:51:33 +00:00
Wes Lambert
2f729e24d9 Add Curator action files for Kratos indices 2022-07-11 14:34:10 +00:00
weslambert
992b6e14de Merge pull request #8268 from Security-Onion-Solutions/fix/kibana_disable_fleetv2
Disable fleetv2 because it is now used to control Fleet visibility and 'fleet' is now used for 'Integrations'
2022-07-11 10:09:12 -04:00
weslambert
09a1d8c549 Disable fleetv2 because it is now used to control Fleet visibility and 'fleet' is now used for 'Integrations' 2022-07-11 10:06:24 -04:00
Jason Ertel
f28c6d590a Merge pull request #8263 from Security-Onion-Solutions/kilo
Remove Jinja from yaml files before parsing
2022-07-08 20:32:22 -04:00
Jason Ertel
4f8bb6049b Future proof the jinja check to ensure the script does not silently overwrite jinja templates 2022-07-08 17:30:00 -04:00
Jason Ertel
a8e6b26406 Remove Jinja from yaml files before parsing 2022-07-08 17:07:24 -04:00
weslambert
2903bdbc7e Merge pull request #8260 from Security-Onion-Solutions/fix/kratos_dedicated_index_and_filestream_id_additions
Add dedicated index for Kratos and IDs for all filestream inputs
2022-07-08 12:04:40 -04:00
Wes Lambert
5c90fce3a1 Add Kratos Logstash output to search pipeline for Logstash 2022-07-08 15:58:00 +00:00
Wes Lambert
26698cfd07 Add Logstash output for dedicated Kratos index 2022-07-08 15:55:55 +00:00
Wes Lambert
764e8688b1 Modify Kratos input to use dedicated index and add filestream ID for all applicable inputs 2022-07-08 15:53:55 +00:00
Wes Lambert
b06c16f750 Add ingest node pipeline for Kratos 2022-07-08 15:53:00 +00:00
weslambert
42cfab4544 Merge pull request #8256 from Security-Onion-Solutions/fix/kibana_restart_after_role_sync
Restart Kibana in case it times out before being able to read role update
2022-07-07 17:44:47 -04:00
weslambert
4bbc901860 Restart Kibana in case it times out before being able to read in new role configuration 2022-07-07 17:19:02 -04:00
weslambert
a343f8ced0 Merge pull request #8255 from Security-Onion-Solutions/fix/so_kibana_user_role
Force so-user to sync roles to ensure so_kibana role change
2022-07-07 16:19:30 -04:00
weslambert
85be2f4f99 Force so-user to sync roles to ensure so_kibana role change from superuser to kibana_system 2022-07-07 15:55:44 -04:00
weslambert
8b3fa0c4c6 Merge pull request #8252 from Security-Onion-Solutions/feature/elastic_8_3_2
Update to Elastic 8.3.2
2022-07-07 11:14:14 -04:00
weslambert
ede845ce00 Update to Kibana 8.3.2 2022-07-07 11:05:44 -04:00
weslambert
42c96553c5 Update to Kibana 8.3.2 2022-07-07 11:04:43 -04:00
Mike Reeves
41d5cdd78c Merge pull request #8246 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update soup
2022-07-06 16:39:38 -04:00
Mike Reeves
c819d3a558 Update soup 2022-07-06 16:36:57 -04:00
Mike Reeves
c00d33632a Update soup 2022-07-06 16:23:02 -04:00
Mike Reeves
a1ee793607 Merge pull request #8242 from Security-Onion-Solutions/fixsoup
Move soup order
2022-07-06 09:18:16 -04:00
Mike Reeves
1589107b97 Move soup order 2022-07-06 08:59:21 -04:00
Mike Reeves
31688ee898 Merge pull request #8238 from Security-Onion-Solutions/TOoSmOotH-patch-1
Make soup enforce versions
2022-07-05 16:56:14 -04:00
Mike Reeves
f1d188a46d Update soup 2022-07-05 16:50:20 -04:00
Mike Reeves
5f0c3aa7ae Update soup 2022-07-05 16:49:20 -04:00
weslambert
2b73cd1156 Merge pull request #8236 from Security-Onion-Solutions/fix/localfile_analyzer
Strip quotes and ensure file_path is typed as a list (localfile analyzer)
2022-07-05 16:28:56 -04:00
Mike Reeves
c6fac28804 Update soup 2022-07-05 16:26:44 -04:00
Jason Ertel
9d43b7ec89 Rollback string manipulation in favor of fixed unit tests 2022-07-05 16:21:27 -04:00
Jason Ertel
f6266b19cc Fix unit test issues 2022-07-05 16:20:24 -04:00
Mike Reeves
df0a774ffd Make soup enforce versions 2022-07-05 16:17:32 -04:00
weslambert
77ee30f31a Merge pull request #8237 from Security-Onion-Solutions/feature/elastic_8_3_1
Bump Elastic to 8.3.1
2022-07-05 14:50:24 -04:00
weslambert
2938464501 Update to Kibana 8.3.1 2022-07-05 14:46:02 -04:00
weslambert
79e88c9ca3 Update to Kibana 8.3.1 2022-07-05 14:45:30 -04:00
Wes Lambert
e96206d065 Strip quotes and ensure file_path is typed as a list 2022-07-05 14:25:54 +00:00
Josh Brower
7fa9ca8fc6 Merge pull request #8233 from Security-Onion-Solutions/fix/remove-sudo-bpf
Remove unneeded sudo
2022-07-05 09:23:48 -04:00
Josh Brower
a1d1779126 Remove unneeded sudo 2022-07-05 09:21:05 -04:00
Josh Patterson
fb365739ae Merge pull request #8225 from Security-Onion-Solutions/salltupdate
bootstrap-salt can now update to minor version with -r
2022-07-01 08:53:59 -04:00
m0duspwnens
5f898ae569 change to egrep 2022-07-01 08:47:46 -04:00
m0duspwnens
f0ff0d51f7 allow bootstrap-salt to install specific verion even if -r is used 2022-06-30 16:59:54 -04:00
m0duspwnens
7524ea2c05 allow bootstrap-salt to install specific verion even if -r is used 2022-06-30 15:10:13 -04:00
Mike Reeves
6bb979e2b6 Merge pull request #8219 from Security-Onion-Solutions/salty
Salty
2022-06-30 13:34:03 -04:00
Mike Reeves
8b3d5e808e Fix repo location 2022-06-30 13:30:56 -04:00
Mike Reeves
e86b7bff84 Fix repo location 2022-06-30 13:29:21 -04:00
Josh Patterson
69ce3613ff Merge pull request #8217 from Security-Onion-Solutions/salltupdate
point to salt3004.2
2022-06-30 11:29:35 -04:00
m0duspwnens
0ebd957308 point to salt3004.2 2022-06-30 11:26:03 -04:00
Josh Patterson
c3979f5a32 Merge pull request #8207 from Security-Onion-Solutions/salltupdate
Saltupdate 3004.2
2022-06-28 11:20:53 -04:00
m0duspwnens
8fccd4598a update saltstack.list for 3004.2 2022-06-27 16:23:01 -04:00
weslambert
3552dfac03 Merge pull request #8199 from Security-Onion-Solutions/fix/filebeat_filestream_elastic8
Change type from 'log' to 'filestream' to ensure compatibility with E…
2022-06-27 14:58:54 -04:00
Josh Patterson
fba5592f62 Update minion.defaults.yaml 2022-06-27 12:10:18 -04:00
Josh Patterson
05e84699d1 Update master.defaults.yaml 2022-06-27 12:09:39 -04:00
Mike Reeves
f36c8da1fe Update so-functions 2022-06-27 12:04:33 -04:00
Mike Reeves
080daee1d8 Update so-functions 2022-06-27 11:43:01 -04:00
Mike Reeves
909e876509 Update ubuntu.sls 2022-06-27 11:41:49 -04:00
Jason Ertel
ac68fa822b Merge pull request #8200 from Security-Onion-Solutions/contrib
Add gh action for contrib check
2022-06-27 11:25:10 -04:00
Jason Ertel
675ace21f5 Add gh action for contrib check 2022-06-27 11:11:15 -04:00
weslambert
85f790b28a Change type from 'log' to 'filestream' to ensure compatibility with Elastic 8 2022-06-27 10:39:58 -04:00
weslambert
d0818e83c9 Merge pull request #8197 from Security-Onion-Solutions/fix/localfile_analyzer_csv_path
Ensure file_path uses jinja to derive the value(s) from the pillar
2022-06-27 10:36:59 -04:00
weslambert
568b43d0af Ensure file_path uses jinja to derive the value(s) from the pillar 2022-06-27 10:10:13 -04:00
Jason Ertel
2e123b7a4f Merge pull request #8175 from Security-Onion-Solutions/kilo
Avoid failing setup due to retrying while waiting for lock file
2022-06-23 08:16:39 -04:00
Jason Ertel
ba6f716e4a Avoid failing setup due to retrying while waiting for lock file 2022-06-23 06:09:04 -04:00
weslambert
10bcc43e85 Merge pull request #8167 from Security-Onion-Solutions/feature/update_es_8_2_3
Update to Elastic 8.2.3
2022-06-21 16:11:39 -04:00
weslambert
af687fb2b5 Update config_saved_objects.ndjson 2022-06-21 16:06:28 -04:00
weslambert
776cc30a8e Update to ES 8.2.3 2022-06-21 16:06:01 -04:00
Doug Burks
00cf0b38d0 Merge pull request #8165 from Security-Onion-Solutions/dougburks-patch-1
FIX: Improve default dashboards #8136
2022-06-21 12:57:46 -04:00
Doug Burks
94c637449d FIX: Improve default dashboards #8136 2022-06-21 12:53:06 -04:00
Josh Brower
0a203add3b Merge pull request #8145 from Security-Onion-Solutions/defensivedepth-patch-1
pin v1.6.0
2022-06-17 13:14:58 -04:00
Josh Brower
b8ee896f8a pin v1.6.0 2022-06-17 12:38:54 -04:00
Josh Brower
238e671f34 Merge pull request #8129 from Security-Onion-Solutions/fix/curator-cron
Change curator to daily for true cluster
2022-06-15 11:40:53 -04:00
Josh Brower
072cb3cca2 Change curator to daily for true cluster 2022-06-15 11:38:38 -04:00
weslambert
44595cb333 Merge pull request #8123 from Security-Onion-Solutions/foxtrot
Merge foxtrot into dev
2022-06-14 15:44:13 -04:00
weslambert
959cec1845 Delete Elastalert indices before upgrading to Elastic 8 2022-06-14 11:40:11 -04:00
Doug Burks
286909af4b Merge pull request #8113 from Security-Onion-Solutions/fix/pfsense-category
FIX: Add event.category field to pfsense firewall logs #8112
2022-06-13 08:08:00 -04:00
doug
025993407e FIX: Add event.category field to pfsense firewall logs #8112 2022-06-13 08:03:44 -04:00
weslambert
151a42734c Update Elastic version to 8.2.2 2022-06-08 15:07:45 -04:00
weslambert
11e3576e0d Update Elastic version to 8.2.2 2022-06-08 15:07:07 -04:00
weslambert
adeccd0e7f Merge pull request #8097 from Security-Onion-Solutions/dev
Merge latest dev into foxtrot
2022-06-08 15:01:09 -04:00
weslambert
aadf391e5a Temporarily downgrade version for merge 2022-06-08 14:59:01 -04:00
weslambert
47f74fa5c6 Temporarily downgrade version for merge 2022-06-08 14:58:05 -04:00
Jason Ertel
e405750d26 Merge pull request #8095 from Security-Onion-Solutions/kilo
Bump version to 2.3.140
2022-06-08 09:07:56 -04:00
Jason Ertel
e36c33485d Bump version to 2.3.140 2022-06-08 09:04:57 -04:00
Mike Reeves
65165e52f4 Merge pull request #8086 from Security-Onion-Solutions/dev
2.3.130
2022-06-07 15:51:12 -04:00
Mike Reeves
2cceae54df Merge pull request #8087 from Security-Onion-Solutions/2.3.130
2.3.130
2022-06-07 13:44:38 -04:00
Mike Reeves
8912e241aa 2.3.130 2022-06-07 13:41:51 -04:00
Mike Reeves
7357f157ec Merge pull request #8085 from Security-Onion-Solutions/2.3.130
2.3.130
2022-06-07 12:04:47 -04:00
Mike Reeves
37881bd4b6 2.3.130 2022-06-07 11:34:10 -04:00
Josh Brower
2574f0e23d Merge pull request #8081 from Security-Onion-Solutions/fix/fleetdm-websockets
Allow websockets for fleetdm
2022-06-06 19:15:02 -04:00
Josh Brower
c9d9804c3a Allow websockets for fleetdm 2022-06-06 17:26:24 -04:00
Doug Burks
73baa1d2f0 Merge pull request #8073 from Security-Onion-Solutions/dougburks-patch-1
Update motd.md to include links to Dashboards and Cases
2022-06-04 08:53:54 -04:00
Doug Burks
dce415297c improve readability in motd.md 2022-06-04 06:59:09 -04:00
Doug Burks
de126647f8 Update motd.md to include links to Dashboards and Cases 2022-06-04 06:55:08 -04:00
Doug Burks
c34f456151 Merge pull request #8069 from Security-Onion-Solutions/dougburks-patch-1
add bar and pie examples to overview dashboard in dashboards.queries.…
2022-06-03 15:04:16 -04:00
Doug Burks
83bff5ee87 add bar and pie examples to overview dashboard in dashboards.queries.json 2022-06-03 15:02:40 -04:00
Doug Burks
918f431728 Merge pull request #8065 from Security-Onion-Solutions/dougburks-patch-1
Add sankey diagram to default dashboard in dashboards.queries.json
2022-06-03 11:13:39 -04:00
Doug Burks
4a886338c8 fix description field for default dashboard in dashboards.queries.json 2022-06-03 11:10:01 -04:00
Doug Burks
7da1802eae Add sankey diagram to default dashboard in dashboards.queries.json 2022-06-03 11:03:48 -04:00
Mike Reeves
ff92b524c2 Merge pull request #8062 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update soup
2022-06-02 11:51:42 -04:00
Mike Reeves
395eaa39b4 Update soup 2022-06-02 11:45:37 -04:00
Mike Reeves
2867a32931 Merge pull request #8061 from Security-Onion-Solutions/soup130
soup for 130
2022-06-02 10:42:17 -04:00
Mike Reeves
fce43cf390 soup for 130 2022-06-02 10:33:18 -04:00
Josh Patterson
e5c9b91529 Merge pull request #8054 from Security-Onion-Solutions/dmz_receiver
Dmz receiver
2022-06-01 15:31:42 -04:00
m0duspwnens
e5b74bcb78 remove podman state 2022-06-01 15:26:25 -04:00
Doug Burks
91f8d3e5e9 Merge pull request #8050 from Security-Onion-Solutions/fix/elastalert-query
FIX: Elastalert query in Hunt #8049
2022-05-31 16:54:34 -04:00
Doug Burks
269b16bbfd https://github.com/Security-Onion-Solutions/securityonion/issues/8049 2022-05-31 16:51:05 -04:00
Doug Burks
cd382a1b25 FIX: Elastalert query in Hunt #8049 2022-05-31 16:50:32 -04:00
Doug Burks
e1c9b0d108 FIX: Elastalert query in Hunt #8049 2022-05-31 16:47:52 -04:00
Doug Burks
9a98667e85 FIX: Elastalert query in Hunt #8049 2022-05-31 16:47:11 -04:00
weslambert
494ce0756d Merge pull request #8045 from Security-Onion-Solutions/fix/mhr_naming
Fix naming for Malware Hash Registry analyzer
2022-05-31 07:52:48 -04:00
Wes Lambert
7f30a364ee Make sure everything is added back after renaming mhr to malwarehashregistry 2022-05-31 11:44:35 +00:00
Wes Lambert
c82aa89497 Fix Malware Hash Registry naming so it's more descriptive in SOC 2022-05-31 11:41:48 +00:00
Josh Brower
025677a1e6 Merge pull request #8034 from Security-Onion-Solutions/feature/sigmafp
Feature/SigmaCustomFilters
2022-05-31 07:25:44 -04:00
Josh Brower
a5361fb745 Change Target_log name 2022-05-28 18:07:05 -04:00
Mike Reeves
30d7801ae1 Merge pull request #8033 from Security-Onion-Solutions/kilo 2022-05-28 11:38:35 -04:00
Jason Ertel
210bc556db Add logscan and suricata variants for cloud tests to move from PM into the cloud and help alleviate disk contention 2022-05-28 10:29:04 -04:00
Jason Ertel
e87e672b9e Add logscan and suricata variants for cloud tests to move from PM into the cloud and help alleviate disk contention 2022-05-28 10:28:20 -04:00
Jason Ertel
a70da41f20 Merge pull request #8032 from Security-Onion-Solutions/kilo
Exclude pkg upgrade retry error logs from failing setup
2022-05-28 08:34:40 -04:00
Jason Ertel
8bb02763dc Exclude pkg upgrade retry error logs from failing setup 2022-05-28 08:28:10 -04:00
weslambert
a59ada695b Merge pull request #8031 from Security-Onion-Solutions/fix/screenshots
Fix/screenshots
2022-05-27 17:05:51 -04:00
doug
b93a108386 update Cases screenshot in README 2022-05-27 16:33:08 -04:00
doug
6089f3906d update screenshots and README 2022-05-27 16:32:00 -04:00
Josh Brower
94ee45ac63 Merge pull request #8029 from Security-Onion-Solutions/upgrade/navigator
Upgrade Navigator to 4.6.4
2022-05-27 14:46:59 -04:00
Josh Brower
43cb78a6a8 Upgrade Navigator 2022-05-27 14:21:11 -04:00
Josh Patterson
76bb1fbbcc Merge pull request #8014 from Security-Onion-Solutions/issue/7918
manage suricata classifications.config
2022-05-26 13:13:03 -04:00
m0duspwnens
53d6e1d30d simplfy 2022-05-26 11:51:17 -04:00
m0duspwnens
1bfde852f5 manage suricata classifications.config https://github.com/Security-Onion-Solutions/securityonion/issues/7918 2022-05-26 11:43:31 -04:00
m0duspwnens
53883e4ade manage suricata classifications.config https://github.com/Security-Onion-Solutions/securityonion/issues/7918 2022-05-26 11:40:33 -04:00
weslambert
1a0ac4d253 Merge pull request #8007 from Security-Onion-Solutions/fix/filestream-id
Add filestream input ID for RITA logs
2022-05-25 10:11:36 -04:00
weslambert
44622350ea Add ID for RITA filestream inputs 2022-05-25 10:09:01 -04:00
weslambert
99864f4787 Merge pull request #8001 from Security-Onion-Solutions/feature/analyzer_readme
Add configuration requirements for various analyzers
2022-05-25 09:33:07 -04:00
Doug Burks
6bd02c0b99 Merge pull request #8003 from Security-Onion-Solutions/feature/elastic-7.17.4
UPGRADE: Elastic 7.17.4 #8002
2022-05-24 13:24:13 -04:00
Doug Burks
1d0bb21908 UPGRADE: Elastic 7.17.4 #8002 2022-05-24 13:19:30 -04:00
Doug Burks
bde06e7ec5 UPGRADE: Elastic 7.17.4 #8002 2022-05-24 13:19:01 -04:00
Wes Lambert
b93512eb01 Adjust verbiage around pillar configuration 2022-05-24 12:36:32 +00:00
Wes Lambert
92dee14ee8 Add configuration requirements for various analyzers 2022-05-24 12:29:14 +00:00
weslambert
3e6dfcfaca Merge pull request #7996 from Security-Onion-Solutions/weslambert-patch-2
Create Virustotal README
2022-05-23 11:43:43 -04:00
weslambert
a6f1bf3aef Create Virustotal README 2022-05-23 11:39:44 -04:00
Jason Ertel
88f17f037e Merge pull request #7982 from Security-Onion-Solutions/kilo
Upgrade to Kratos 0.9.0-alpha.3
2022-05-19 13:28:58 -04:00
Jason Ertel
c20859f8c3 Upgrade to Kratos 0.9.0-alpha.3 2022-05-18 17:05:21 -04:00
Jason Ertel
c95bafd521 Merge pull request #7969 from Security-Onion-Solutions/fix/helpers-analyzers
Only import yaml module when config is loaded
2022-05-18 07:15:32 -04:00
Wes Lambert
429ccb2dcc Only import yaml module when config is loaded 2022-05-18 02:07:39 +00:00
weslambert
94ca3ddbda Merge pull request #7961 from Security-Onion-Solutions/weslambert-patch-1
Add information for MHR and WhoisLookup, and other minor updates
2022-05-17 13:33:24 -04:00
weslambert
d3206a048f Add information for MHR and WhoisLookup, and other minor updates 2022-05-17 12:49:16 -04:00
weslambert
ff855eb8f7 Merge pull request #7958 from Security-Onion-Solutions/feature/mhr_analyzer
Add Team Cymru Malware Hash Registry Analyzer
2022-05-17 12:42:01 -04:00
Wes Lambert
8af1f19ac3 Another no_results change 2022-05-17 16:12:43 +00:00
Wes Lambert
e4a7e3cba6 Change 'No results found.' to 'no_results' 2022-05-17 16:11:58 +00:00
weslambert
2688083ff1 Merge pull request #7959 from Security-Onion-Solutions/feature/whoislookup-analyzer
Add Whoislookup RDAP-based analyzer
2022-05-17 12:09:06 -04:00
Wes Lambert
766e9748c5 Add Whoislookup RDAP-based analyzer 2022-05-17 15:52:12 +00:00
weslambert
3761b491c0 Remove whitespace 2022-05-17 10:50:33 -04:00
Wes Lambert
e8fc3ccdf4 Add Team Cymru Malware Hash Registry Analyzer 2022-05-17 14:44:53 +00:00
Doug Burks
eb9597217c Merge pull request #7949 from Security-Onion-Solutions/fix/dashboards-hunt-queries
update dashboards.queries.json and hunt.queries.json
2022-05-16 08:47:06 -04:00
doug
5cbb50a781 update dashboards.queries.json and hunt.queries.json 2022-05-16 08:33:48 -04:00
Jason Ertel
685789de33 Merge pull request #7936 from Security-Onion-Solutions/kilo
Improved unit test coverage of new analyzers; Utilize localized summa…
2022-05-12 16:47:18 -04:00
Jason Ertel
b45b6b198b Improved unit test coverage of new analyzers; Utilize localized summaries; Require 100% code coverage on analyzers 2022-05-12 16:32:47 -04:00
weslambert
6c506bbab0 Merge pull request #7935 from Security-Onion-Solutions/fix/pulsedive
Fix Pulsedive analyzer logic
2022-05-12 15:20:15 -04:00
Wes Lambert
3dc266cfa9 Add test for when indicator is not found 2022-05-12 19:02:41 +00:00
Wes Lambert
a233c08830 Update logic to handle indicators that are not present in database. 2022-05-12 19:02:02 +00:00
Doug Burks
58b049257d Merge pull request #7932 from Security-Onion-Solutions/dougburks-patch-1
remove duplicate showSubtitle from hunt.queries.json
2022-05-12 09:24:18 -04:00
Doug Burks
6ed3f42449 remove duplicate showSubtitle from hunt.queries.json 2022-05-12 09:23:00 -04:00
m0duspwnens
d8abc0a195 if in dmz_nodes dont add to filebeta 2022-05-11 11:51:18 -04:00
m0duspwnens
a641346c02 prevent nodes with logstash:dmz:true from being added to logstash:nodes pillar 2022-05-10 17:28:19 -04:00
Jason Ertel
60b55acd6f Merge pull request #7926 from Security-Onion-Solutions/kilo
Add support for analyzers in airgapped environments
2022-05-10 17:12:18 -04:00
Jason Ertel
35e47c8c3e Add support for analyzers in airgapped environments 2022-05-10 16:51:00 -04:00
weslambert
7f797a11f8 Merge pull request #7924 from Security-Onion-Solutions/analyzer-docs
Update analyzer docs with information about analyzers that require au…
2022-05-10 09:40:50 -04:00
Jason Ertel
91a7f25d3a Corrected brand name capitalization 2022-05-10 09:39:19 -04:00
weslambert
34d57c386b Update analyzer docs with information about analyzers that require authentication 2022-05-10 09:32:18 -04:00
weslambert
000e813fbb Merge pull request #7921 from Security-Onion-Solutions/fix/analyzer-packages
Update analyzer packages to those downloaded by Alpine and add additional build script option
2022-05-09 16:43:31 -04:00
Wes Lambert
555ca2e277 Update analyzer build/testing script to download necessary Python packages 2022-05-09 20:06:39 +00:00
Wes Lambert
32adba6141 Update analyzer packages with those built from native (Alpine) Docker image 2022-05-09 20:04:41 +00:00
Jason Ertel
e19635e44a Merge pull request #7920 from Security-Onion-Solutions/kilo
Disable MRU queries on dashboards
2022-05-09 15:08:55 -04:00
Jason Ertel
31c04aabdd Disable MRU queries on dashboards 2022-05-09 15:06:43 -04:00
Jason Ertel
dc209a37cd Merge pull request #7916 from Security-Onion-Solutions/kilo
Disable actions on dashboards group-by tables
2022-05-09 11:52:22 -04:00
Jason Ertel
3f35dc54d2 Disable actions on dashboards group-by tables 2022-05-09 11:44:39 -04:00
Josh Brower
8e368bdebe Merge in upstream dev 2022-05-06 20:01:07 -04:00
Jason Ertel
0e64a9e5c3 Merge pull request #7912 from Security-Onion-Solutions/kilo
Add dashboard ref to soc.json
2022-05-06 15:18:05 -04:00
Jason Ertel
0786191fc9 Add dashboard ref to soc.json 2022-05-06 15:16:27 -04:00
Jason Ertel
60763c38db Merge pull request #7911 from Security-Onion-Solutions/kilo
Analyzers + Dashboards
2022-05-06 13:50:54 -04:00
weslambert
9800f59ed7 Add Urlscan to observable support matrix 2022-05-06 13:11:43 -04:00
Wes Lambert
ccac71f649 Fix formatting/whitespace 2022-05-06 17:08:40 +00:00
Wes Lambert
1990ba0cf0 Fix formatting/whitespace 2022-05-06 17:08:33 +00:00
Wes Lambert
8ff5778569 Add Urlscan analyzer and tests 2022-05-06 17:01:06 +00:00
Jason Ertel
bee4cf4c52 Fix typo in analyzer desc 2022-05-06 09:20:03 -04:00
Jason Ertel
105c95909c Dashboard queries 2022-05-04 19:32:06 -04:00
Jason Ertel
890bcd58f9 Merge branch 'dev' into kilo 2022-05-04 19:25:08 -04:00
weslambert
a96c665d04 Change test name for EmailRep 2022-05-03 14:13:25 -04:00
weslambert
f3a91d9fcd Add EmailRep analyzer to observable support matrix 2022-05-03 10:10:57 -04:00
Wes Lambert
5a9acb3857 Add EmailRep analyzer and tests 2022-05-03 14:06:32 +00:00
Wes Lambert
8b5666b238 Ensure API key is used 2022-05-03 12:48:06 +00:00
weslambert
efb229cfcb Update to match configuration in analyzer dir 2022-05-02 16:35:21 -04:00
weslambert
2fcb2b081d Update allowed complexity to 12 2022-05-02 16:14:43 -04:00
weslambert
25f17a5efd Update allowed complexity to 11 2022-04-29 09:42:57 -04:00
weslambert
66b4fe9f58 Add additional information around URI and User Agent 2022-04-28 17:14:36 -04:00
Wes Lambert
c001708707 Add Pulsedive analyzer and tests 2022-04-28 20:56:03 +00:00
weslambert
4edd729596 Add initial supported observable matrix/table 2022-04-27 08:58:34 -04:00
Wes Lambert
76f183b112 Add Greynoise analyzer and tests 2022-04-26 17:25:35 +00:00
Wes Lambert
bd63753d80 Update analyzer name/description 2022-04-25 19:27:10 +00:00
Wes Lambert
15fcaa7030 Add localfile analyzer and tests 2022-04-25 19:23:35 +00:00
Jason Ertel
71a86b0a3c Merge pull request #7856 from Security-Onion-Solutions/bumpver
Bump version
2022-04-25 13:01:19 -04:00
Jason Ertel
e2145720bd Bump version 2022-04-25 12:10:29 -04:00
Jason Ertel
d8fdf2b701 Merge branch 'dev' into kilo 2022-04-22 15:11:24 -04:00
Jason Ertel
459d388614 Only override nameservers if the first nameserver given is non empty 2022-04-22 15:08:56 -04:00
Wes Lambert
fbf6e64e67 Add initial OTX analyzer and tests 2022-04-22 17:13:40 +00:00
Wes Lambert
b2db32a2c7 Add function/test for non-existent VT api_key 2022-04-21 17:33:24 +00:00
Wes Lambert
9287d6adf7 Reduce size of test output for test 2022-04-21 16:56:22 +00:00
Wes Lambert
c8e189f35a Add source-packages for JA3er 2022-04-21 16:46:45 +00:00
Wes Lambert
5afcc8de4f Add JA3er analyzer and associated test 2022-04-21 16:42:46 +00:00
weslambert
d7eed52fae Change -f to -r 2022-04-21 09:46:44 -04:00
Jason Ertel
aeb70dad8f Doc updates 2022-04-19 14:31:21 -04:00
Jason Ertel
4129cef9fb Add new spamhaus analyzer 2022-04-19 12:12:52 -04:00
Jason Ertel
0cb73d8f6a Merge branch 'dev' into kilo 2022-04-18 11:04:32 -04:00
Jason Ertel
159122b52c Merge branch 'dev' into kilo 2022-04-18 10:11:37 -04:00
Jason Ertel
2d025e944c Add yaml since helpers module uses it 2022-04-09 17:48:21 -04:00
Jason Ertel
202ca34c6f Remove obsolete source/site pkg dirs 2022-04-09 14:36:21 -04:00
Jason Ertel
f9568626f2 Merge branch 'dev' into kilo 2022-04-09 09:02:55 -04:00
Jason Ertel
224e30c0ee Change localized table layout 2022-04-08 17:31:15 -04:00
Jason Ertel
ebcfbaa06d Analyzer improvements 2022-04-08 16:57:40 -04:00
Jason Ertel
44e318e046 Provide CLI feedback for missing input 2022-04-07 10:16:44 -04:00
Jason Ertel
d8defdd7b0 Improve unit test stability 2022-04-05 07:36:25 -04:00
Jason Ertel
d2fa80e48a Update status codes to match SOC 2022-04-05 07:20:23 -04:00
Jason Ertel
04eef0d31f Merge branch 'dev' into kilo 2022-04-04 15:59:09 -04:00
Jason Ertel
7df6833568 Add unit tests for Urlhaus; remove placeholder whois analyzer 2022-04-04 15:58:53 -04:00
Wes Lambert
07cf3469a0 Remove pyyaml for requirements file 2022-04-04 11:40:02 +00:00
Wes Lambert
39101cafd1 Add UrlHaus analyzer and helpers script 2022-04-01 21:11:57 +00:00
Jason Ertel
2dc370c8b6 Add source packages to salt state 2022-03-31 18:56:38 -04:00
Jason Ertel
57dc848792 Support analyzer deps 2022-03-31 16:48:13 -04:00
Jason Ertel
9947ba6e43 Support CentOS paths 2022-03-31 16:47:56 -04:00
Jason Ertel
48fbc2290f Add dep support for analyzers 2022-03-31 13:59:35 -04:00
Jason Ertel
1aba4da2bb Correct analyzer path 2022-03-30 21:01:07 -04:00
Jason Ertel
45f511caab Remove extra comma 2022-03-30 13:21:35 -04:00
Jason Ertel
e667bb1e59 merge 2022-03-30 10:57:40 -04:00
Jason Ertel
b2a96fab7e merge 2022-03-29 14:07:20 -04:00
Jason Ertel
d2bf6d5618 Add build script to help pre-validate analyzers before pushing 2022-03-29 14:04:23 -04:00
Jason Ertel
484ef4bc31 Ensure generated python files are not pushed to version control 2022-03-29 13:51:12 -04:00
Jason Ertel
cb491630ae Analyzer CI 2022-03-29 13:40:56 -04:00
Jason Ertel
0a8d24a225 Add automated CI for analyzers 2022-03-29 13:10:04 -04:00
Jason Ertel
c23b87965f Merge branch 'dev' into kilo 2022-03-28 15:53:33 -04:00
Jason Ertel
deb9b0e5ef Add analyze feature 2022-03-28 15:53:24 -04:00
weslambert
bb9d6673ec Fix casing 2022-03-21 12:38:50 -04:00
weslambert
9afa949623 Don't rotate Filebeat log on startup 2022-03-21 12:38:12 -04:00
weslambert
b2c26807a3 Add xpack.reporting.kibanaServer.hostname to defaults file 2022-03-21 09:30:25 -04:00
Wes Lambert
faeaa948c8 Remove extra Salt logic and clean up output format of resultant script 2022-03-19 04:31:48 +00:00
Wes Lambert
1a6ef0cc6b Re-enable FB module load 2022-03-19 03:55:40 +00:00
Wes Lambert
a18b38de4d Update so-filebeat-module-setup to use new load style to avoid having to explicitly enabled filesets 2022-03-19 03:54:41 +00:00
Wes Lambert
2e7d314650 Remove Cyberark module 2022-03-19 03:43:55 +00:00
Wes Lambert
c97847f0e2 Remove Threat Intel Recored Future fileset 2022-03-19 03:43:34 +00:00
Wes Lambert
59a2ac38f5 Disable FB module load for now 2022-03-18 22:12:09 +00:00
Wes Lambert
543bf9a7a7 Update Kibana version to 8 2022-03-18 22:07:21 +00:00
Wes Lambert
d111c08fb3 Update Curator commands with new Filebeat module variables 2022-03-18 21:45:33 +00:00
weslambert
a9ea99daa8 Switch from so_elastic user to so_kibana user for Elastic 8 2022-03-18 15:09:50 -04:00
weslambert
cb0d4acd57 Remove X-Pack ML entry for Elastic 8 2022-03-18 14:46:28 -04:00
weslambert
e0374be4aa Update version from 7.16.2 to 8.1.0 for Kibana config 2022-03-18 11:57:33 -04:00
weslambert
6f294cc0c2 Change Kibana user role from superuser to kibana_system for Elastic 8 2022-03-18 11:54:08 -04:00
weslambert
5ec5b9a2ee Remove older module config files 2022-03-18 10:14:13 -04:00
weslambert
c659a443b0 Update from search.remote to cluster.remote for Elastic 8 2022-03-17 21:25:10 -04:00
weslambert
99430fddeb Update from search.remote to cluster.remote for Elastic 8 2022-03-17 21:24:39 -04:00
weslambert
7128b04636 Remove indices.query.bool.max_clause_count because it is dynamically allocated in Elastic 8 2022-03-17 21:20:41 -04:00
weslambert
712a92aa39 Switch from log input to filestream input 2022-03-17 21:18:03 -04:00
Wes Lambert
6e2aaa0098 Clean up original map file 2022-03-17 21:08:57 +00:00
Wes Lambert
09892a815b Add back bind mounts and remove THIRDPARTY 2022-03-17 21:06:07 +00:00
Wes Lambert
a60ef33930 Reorganize FB module management 2022-03-17 21:01:03 +00:00
235 changed files with 4155 additions and 308 deletions

.github/workflows/contrib.yml (new file, 24 lines)
@@ -0,0 +1,24 @@
name: contrib

on:
  issue_comment:
    types: [created]
  pull_request_target:
    types: [opened,closed,synchronize]

jobs:
  CLAssistant:
    runs-on: ubuntu-latest
    steps:
      - name: "Contributor Check"
        if: (github.event.comment.body == 'recheck' || github.event.comment.body == 'I have read the CLA Document and I hereby sign the CLA') || github.event_name == 'pull_request_target'
        uses: cla-assistant/github-action@v2.1.3-beta
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PERSONAL_ACCESS_TOKEN : ${{ secrets.PERSONAL_ACCESS_TOKEN }}
        with:
          path-to-signatures: 'signatures_v1.json'
          path-to-document: 'https://securityonionsolutions.com/cla'
          allowlist: dependabot[bot],jertel,dougburks,TOoSmOotH,weslambert,defensivedepth,m0duspwnens
          remote-organization-name: Security-Onion-Solutions
          remote-repository-name: licensing

@@ -12,6 +12,6 @@ jobs:
           fetch-depth: '0'
       - name: Gitleaks
-        uses: zricethezav/gitleaks-action@master
+        uses: gitleaks/gitleaks-action@v1.6.0
         with:
           config-path: .github/.gitleaks.toml

.github/workflows/pythontest.yml (new file, 31 lines)

@@ -0,0 +1,31 @@
name: python-test

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10"]
        python-code-path: ["salt/sensoroni/files/analyzers"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install flake8 pytest pytest-cov
          find . -name requirements.txt -exec pip install -r {} \;
      - name: Lint with flake8
        run: |
          flake8 ${{ matrix.python-code-path }} --show-source --max-complexity=12 --doctests --max-line-length=200 --statistics
      - name: Test with pytest
        run: |
          pytest ${{ matrix.python-code-path }} --cov=${{ matrix.python-code-path }} --doctest-modules --cov-report=term --cov-fail-under=100 --cov-config=${{ matrix.python-code-path }}/pytest.ini
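The same gates can be reproduced locally before pushing. A minimal sketch, assuming it is run from the repository root with Python 3.10 on the PATH (the paths simply mirror the workflow matrix above):

```
# Install the same tooling the workflow installs
python -m pip install --upgrade pip
python -m pip install flake8 pytest pytest-cov
find . -name requirements.txt -exec pip install -r {} \;

# Lint, then test with 100% coverage enforced
flake8 salt/sensoroni/files/analyzers --show-source --max-complexity=12 --doctests --max-line-length=200 --statistics
pytest salt/sensoroni/files/analyzers --cov=salt/sensoroni/files/analyzers --doctest-modules --cov-report=term --cov-fail-under=100 --cov-config=salt/sensoroni/files/analyzers/pytest.ini
```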

.gitignore (11 lines added)

@@ -57,3 +57,14 @@ $RECYCLE.BIN/
 *.lnk

 # End of https://www.gitignore.io/api/macos,windows
+
+# Pytest output
+__pycache__
+.pytest_cache
+.coverage
+*.pyc
+.venv
+
+# Analyzer dev/test config files
+*_dev.yaml
+site-packages

HOTFIX

@@ -1 +1 @@
-20220719
+20220812

@@ -1,14 +1,20 @@
-## Security Onion 2.3.120
+## Security Onion 2.3.140

-Security Onion 2.3.120 is here!
+Security Onion 2.3.140 is here!

 ## Screenshots

 Alerts
-![Alerts](./assets/images/screenshots/alerts-1.png)
+![Alerts](./assets/images/screenshots/alerts.png)

+Dashboards
+![Dashboards](./assets/images/screenshots/dashboards.png)
+
 Hunt
-![Hunt](./assets/images/screenshots/hunt-1.png)
+![Hunt](./assets/images/screenshots/hunt.png)

+Cases
+![Cases](./assets/images/screenshots/cases-comments.png)
+
 ### Release Notes

@@ -1,18 +1,18 @@
-### 2.3.120-20220425 ISO image built on 2022/04/25
+### 2.3.140-20220812 ISO image built on 2022/08/12

 ### Download and Verify

-2.3.120-20220425 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.3.120-20220425.iso
+2.3.140-20220812 ISO image:
+https://download.securityonion.net/file/securityonion/securityonion-2.3.140-20220812.iso

-MD5: C99729E452B064C471BEF04532F28556
-SHA1: 60BF07D5347C24568C7B793BFA9792E98479CFBF
-SHA256: CD17D0D7CABE21D45FA45E1CF91C5F24EB9608C79FF88480134E5592AFDD696E
+MD5: 13D4A5D663B5A36D045B980E5F33E6BC
+SHA1: 85DC36B7E96575259DFD080BC860F6508D5F5899
+SHA256: DE5D0F82732B81456180AA40C124E5C82688611941EEAF03D85986806631588C

 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.120-20220425.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.140-20220812.iso.sig

 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS

@@ -26,22 +26,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.120-20220425.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.140-20220812.iso.sig
 ```

 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.3.120-20220425.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.3.140-20220812.iso
 ```

 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.3.120-20220425.iso.sig securityonion-2.3.120-20220425.iso
+gpg --verify securityonion-2.3.140-20220812.iso.sig securityonion-2.3.140-20220812.iso
 ```

 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Mon 25 Apr 2022 08:20:40 AM EDT using RSA key ID FE507013
+gpg: Signature made Fri 12 Aug 2022 03:59:11 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.

@@ -1 +1 @@
-2.3.120
+2.3.140

(Binary screenshot images changed: two removed, four added; file contents not shown.)

@@ -2,7 +2,7 @@
 {% set cached_grains = salt.saltutil.runner('cache.grains', tgt='*') %}
 {% for minionid, ip in salt.saltutil.runner(
     'mine.get',
-    tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-node or G@role:so-heavynode or G@role:so-receiver or G@role:so-helix ',
+    tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-node or G@role:so-heavynode or G@role:so-receiver or G@role:so-helix',
     fun='network.ip_addrs',
     tgt_type='compound') | dictsort()
 %}

@@ -14,4 +14,5 @@ logstash:
     - so/9700_output_strelka.conf.jinja
     - so/9800_output_logscan.conf.jinja
     - so/9801_output_rita.conf.jinja
+    - so/9802_output_kratos.conf.jinja
     - so/9900_output_endgame.conf.jinja

@@ -29,7 +29,7 @@ fi
 interface="$1"
 shift

-sudo tcpdump -i $interface -ddd $@ | tail -n+2 |
+tcpdump -i $interface -ddd $@ | tail -n+2 |
 while read line; do
   cols=( $line )
   printf "%04x%02x%02x%08x" ${cols[0]} ${cols[1]} ${cols[2]} ${cols[3]}

@@ -49,19 +49,18 @@ if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
 fi

 echo "Testing to see if the pipelines are already applied"
 ESVER=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT" |jq .version.number |tr -d \")
-PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"/_ingest/pipeline/filebeat-$ESVER-suricata-eve-pipeline | jq . | wc -c)
+PIPELINES=$({{ ELASTICCURL }} -sk https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"/_ingest/pipeline/filebeat-$ESVER-elasticsearch-server-pipeline | jq . | wc -c)

-if [[ "$PIPELINES" -lt 5 ]]; then
+if [[ "$PIPELINES" -lt 5 ]] || [ "$2" != "--force" ]; then
   echo "Setting up ingest pipeline(s)"
-  for MODULE in activemq apache auditd aws azure barracuda bluecoat cef checkpoint cisco coredns crowdstrike cyberark cylance elasticsearch envoyproxy f5 fortinet gcp google_workspace googlecloud gsuite haproxy ibmmq icinga iis imperva infoblox iptables juniper kafka kibana logstash microsoft mongodb mssql mysql nats netscout nginx o365 okta osquery panw postgresql rabbitmq radware redis santa snort snyk sonicwall sophos squid suricata system threatintel tomcat traefik zeek zscaler
-  do
-    echo "Loading $MODULE"
-    docker exec -i so-filebeat filebeat setup modules -pipelines -modules $MODULE -c $FB_MODULE_YML
-    sleep 2
-  done
+  {% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
+  {%- for module in MODULESMERGED.modules.keys() %}
+  {%- for fileset in MODULESMERGED.modules[module] %}
+  echo "{{ module }}.{{ fileset}}"
+  docker exec -i so-filebeat filebeat setup --pipelines --modules {{ module }} -M "{{ module }}.{{ fileset }}.enabled=true" -c $FB_MODULE_YML
+  sleep 0.5
+  {% endfor %}
+  {%- endfor %}
 else
   exit 0
 fi
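Once rendered by Salt, the Jinja loop above expands into one filebeat setup invocation per module/fileset pair. A sketch of the rendered shell for a single pair, assuming MODULESMERGED contains a suricata module with an eve fileset:

```
# Rendered output for one (module, fileset) pair -- illustrative only
echo "suricata.eve"
docker exec -i so-filebeat filebeat setup --pipelines --modules suricata -M "suricata.eve.enabled=true" -c $FB_MODULE_YML
sleep 0.5
```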

@@ -16,6 +16,7 @@
 # along with this program. If not, see <http://www.gnu.org/licenses/>.

 import os
+import re
 import subprocess
 import sys
 import time

@@ -26,6 +27,7 @@ hostgroupsFilename = "/opt/so/saltstack/local/salt/firewall/hostgroups.local.yam
 portgroupsFilename = "/opt/so/saltstack/local/salt/firewall/portgroups.local.yaml"
 defaultPortgroupsFilename = "/opt/so/saltstack/default/salt/firewall/portgroups.yaml"
 supportedProtocols = ['tcp', 'udp']
+readonly = False

 def showUsage(options, args):
   print('Usage: {} [OPTIONS] <COMMAND> [ARGS...]'.format(sys.argv[0]))

@@ -70,10 +72,26 @@ def checkApplyOption(options):
   return apply(None, None)

 def loadYaml(filename):
+  global readonly
   file = open(filename, "r")
-  return yaml.safe_load(file.read())
+  content = file.read()
+
+  # Remove Jinja templating (for read-only operations)
+  if "{%" in content or "{{" in content:
+    content = content.replace("{{ ssh_port }}", "22")
+    pattern = r'.*({%|{{|}}|%}).*'
+    content = re.sub(pattern, "", content)
+    readonly = True
+
+  return yaml.safe_load(content)

 def writeYaml(filename, content):
+  global readonly
+  if readonly:
+    raise Exception("Cannot write yaml file that has been flagged as read-only")
   file = open(filename, "w")
   return yaml.dump(content, file)
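The sanitization above substitutes a fixed value for the ssh_port expression, blanks any remaining line containing Jinja delimiters, and flags the document read-only so writeYaml() refuses to persist the lossy copy. A rough shell approximation of the same transformation (the sample file and sed expressions are illustrative, not part of the change):

```
# Approximate the read-only Jinja stripping with sed (illustration only)
cat > /tmp/portgroups_sample.yaml <<'EOF'
portgroups:
  ssh:
    tcp:
      - {{ ssh_port }}
{% if airgap %}
  mirror:
    tcp:
      - 443
{% endif %}
EOF
sed -e 's/{{ ssh_port }}/22/' \
    -e '/{%\|{{\|}}\|%}/s/.*//' /tmp/portgroups_sample.yaml
# Lines that still contained Jinja tags come back empty; the rest parses as plain YAML.
```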

@@ -1,6 +1,7 @@
+#!/bin/bash
 . /usr/sbin/so-common

 {% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}

-wait_for_web_response "http://localhost:5601/app/kibana" "Elastic" 300 "{{ ELASTICCURL }}"
+wait_for_web_response "http://localhost:5601/api/spaces/space/default" "default" 300 "{{ ELASTICCURL }}"

 ## This hackery will be removed if using Elastic Auth ##
 # Let's snag a cookie from Kibana

@@ -12,6 +13,6 @@ echo "Setting up default Space:"
 {% if HIGHLANDER %}
   {{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["enterpriseSearch"]} ' >> /opt/so/log/kibana/misc.log
 {% else %}
-  {{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
+  {{ ELASTICCURL }} -b "sid=$SESSIONCOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet","fleetv2","securitySolutionCases"]} ' >> /opt/so/log/kibana/misc.log
 {% endif %}
 echo
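Both the new readiness probe and the PUT above go through the Kibana Spaces API, so the result can be checked by hand. A quick verification sketch (assumes Kibana on localhost:5601 and a valid session cookie in $SESSIONCOOKIE):

```
# List which features ended up disabled in the default space
curl -s -b "sid=$SESSIONCOOKIE" -H 'kbn-xsrf: true' \
  "http://localhost:5601/api/spaces/space/default" | jq '.disabledFeatures'
```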

@@ -44,7 +44,7 @@ operation=$1
 email=$2
 role=$3

-kratosUrl=${KRATOS_URL:-http://127.0.0.1:4434}
+kratosUrl=${KRATOS_URL:-http://127.0.0.1:4434/admin}
 databasePath=${KRATOS_DB_PATH:-/opt/so/conf/kratos/db/db.sqlite}
 databaseTimeout=${KRATOS_DB_TIMEOUT:-5000}
 bcryptRounds=${BCRYPT_ROUNDS:-12}

@@ -238,7 +238,7 @@ function syncElastic() {
   syncElasticSystemUser "$authPillarJson" "so_monitor_user" "$usersTmpFile"

   syncElasticSystemRole "$authPillarJson" "so_elastic_user" "superuser" "$rolesTmpFile"
-  syncElasticSystemRole "$authPillarJson" "so_kibana_user" "superuser" "$rolesTmpFile"
+  syncElasticSystemRole "$authPillarJson" "so_kibana_user" "kibana_system" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_logstash_user" "superuser" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_beats_user" "superuser" "$rolesTmpFile"
   syncElasticSystemRole "$authPillarJson" "so_monitor_user" "remote_monitoring_collector" "$rolesTmpFile"

@@ -408,7 +408,7 @@ function migrateLockedUsers() {
   # This is a migration function to convert locked users from prior to 2.3.90
   # to inactive users using the newer Kratos functionality. This should only
   # find locked users once.
-  lockedEmails=$(curl -s http://localhost:4434/identities | jq -r '.[] | select(.traits.status == "locked") | .traits.email')
+  lockedEmails=$(curl -s ${kratosUrl}/identities | jq -r '.[] | select(.traits.status == "locked") | .traits.email')
   if [[ -n "$lockedEmails" ]]; then
     echo "Disabling locked users..."
     for email in $lockedEmails; do

@@ -437,7 +437,7 @@ function updateStatus() {
     state="inactive"
   fi
   body="{ \"schema_id\": \"$schemaId\", \"state\": \"$state\", \"traits\": $traitBlock }"
-  response=$(curl -fSsL -XPUT "${kratosUrl}/identities/$identityId" -d "$body")
+  response=$(curl -fSsL -XPUT -H "Content-Type: application/json" "${kratosUrl}/identities/$identityId" -d "$body")
   [[ $? != 0 ]] && fail "Unable to update user"
 }
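Per the commit messages, newer Kratos releases serve the admin endpoints under an /admin prefix, and 0.10.1 rejects PUT bodies without an explicit content type; the kratosUrl and curl changes above account for both. A minimal sketch of the updated call shape (the identity ID, schema ID, and traits are placeholders):

```
# Update an identity via the Kratos admin API (values are placeholders)
curl -fSsL -XPUT -H "Content-Type: application/json" \
  "http://127.0.0.1:4434/admin/identities/<identity-id>" \
  -d '{ "schema_id": "<schema-id>", "state": "active", "traits": { "email": "user@example.com" } }'
```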

@@ -48,7 +48,7 @@ fi
 {% else %}

-gh_status=$(curl -s -o /dev/null -w "%{http_code}" http://github.com)
+gh_status=$(curl -s -o /dev/null -w "%{http_code}" https://github.com)
 clone_dir="/tmp"

 if [ "$gh_status" == "200" ] || [ "$gh_status" == "301" ]; then

@@ -371,12 +371,109 @@ clone_to_tmp() {
   fi
 }

elastalert_indices_check() {
  # Stop Elastalert to prevent Elastalert indices from being re-created
  if grep -q "^so-elastalert$" /opt/so/conf/so-status/so-status.conf ; then
    so-elastalert-stop || true
  fi
  # Wait for ElasticSearch to initialize
  echo -n "Waiting for ElasticSearch..."
  COUNT=0
  ELASTICSEARCH_CONNECTED="no"
  while [[ "$COUNT" -le 240 ]]; do
    so-elasticsearch-query / -k --output /dev/null
    if [ $? -eq 0 ]; then
      ELASTICSEARCH_CONNECTED="yes"
      echo "connected!"
      break
    else
      ((COUNT+=1))
      sleep 1
      echo -n "."
    fi
  done
  # Unable to connect to Elasticsearch
  if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
    echo
    echo -e "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
    echo
    exit 1
  fi
  # Check Elastalert indices
  echo "Deleting Elastalert indices to prevent issues with upgrade to Elastic 8..."
  CHECK_COUNT=0
  while [[ "$CHECK_COUNT" -le 2 ]]; do
    # Delete Elastalert indices
    for i in $(so-elasticsearch-query _cat/indices | grep elastalert | awk '{print $3}'); do
      so-elasticsearch-query $i -XDELETE;
    done
    # Check to ensure Elastalert indices are deleted
    COUNT=0
    ELASTALERT_INDICES_DELETED="no"
    while [[ "$COUNT" -le 240 ]]; do
      RESPONSE=$(so-elasticsearch-query elastalert*)
      if [[ "$RESPONSE" == "{}" ]]; then
        ELASTALERT_INDICES_DELETED="yes"
        echo "Elastalert indices successfully deleted."
        break
      else
        ((COUNT+=1))
        sleep 1
        echo -n "."
      fi
    done
    ((CHECK_COUNT+=1))
  done
  # If we were unable to delete the Elastalert indices, exit the script
  if [ "$ELASTALERT_INDICES_DELETED" == "no" ]; then
    echo
    echo -e "Unable to connect to delete Elastalert indices. Exiting."
    echo
    exit 1
  fi
}

enable_highstate() {
  echo "Enabling highstate."
  salt-call state.enable highstate -l info --local
  echo ""
}

es_version_check() {
  CHECK_ES=$(echo $INSTALLEDVERSION | awk -F. '{print $3}')
  if [ "$CHECK_ES" -lt "110" ]; then
    echo "You are currently running Security Onion $INSTALLEDVERSION. You will need to update to version 2.3.130 before updating to 2.3.140 or higher."
    echo ""
    echo "If your deployment has Internet access, you can use the following command to update to 2.3.130:"
    echo "sudo BRANCH=2.3.130-20220607 soup"
    echo ""
    echo "Otherwise, if your deployment is configured for airgap, you can instead download the 2.3.130 ISO image from https://download.securityonion.net/file/securityonion/securityonion-2.3.130-20220607.iso."
    echo ""
    echo "*** Once you have updated to 2.3.130, you can then update to 2.3.140 or higher as you would normally. ***"
    exit 1
  fi
}
es_indices_check() {
echo "Checking for unsupported Elasticsearch indices..."
UNSUPPORTED_INDICES=$(for INDEX in $(so-elasticsearch-indices-list | awk '{print $3}'); do so-elasticsearch-query $INDEX/_settings?human |grep '"created_string":"6' | jq -r 'keys'[0]; done)
if [ -z "$UNSUPPORTED_INDICES" ]; then
echo "No unsupported indices found."
else
echo "The following indices were created with Elasticsearch 6, and are not supported when upgrading to Elasticsearch 8. These indices may need to be deleted, migrated, or re-indexed before proceeding with the upgrade. Please see https://docs.securityonion.net/en/2.3/soup.html#elastic-8 for more details."
echo
echo "$UNSUPPORTED_INDICES"
exit 1
fi
}
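
To spot-check a single index by hand, the same probe can be run directly (the index name below is purely hypothetical):

so-elasticsearch-query so-zeek-2021.05.01/_settings?human | grep '"created_string":"6'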
generate_and_clean_tarballs() {
  local new_version
  new_version=$(cat $UPDATE_DIR/VERSION)
@@ -422,7 +519,9 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.3.80 ]] && up_to_2.3.90
[[ "$INSTALLEDVERSION" == 2.3.90 || "$INSTALLEDVERSION" == 2.3.91 ]] && up_to_2.3.100
[[ "$INSTALLEDVERSION" == 2.3.100 ]] && up_to_2.3.110
-[[ "$INSTALLEDVERISON" == 2.3.110 ]] && up_to_2.3.120
+[[ "$INSTALLEDVERSION" == 2.3.110 ]] && up_to_2.3.120
[[ "$INSTALLEDVERSION" == 2.3.120 ]] && up_to_2.3.130
[[ "$INSTALLEDVERSION" == 2.3.130 ]] && up_to_2.3.140
true
}
@@ -437,6 +536,9 @@ postupgrade_changes() {
[[ "$POSTVERSION" == 2.3.90 || "$POSTVERSION" == 2.3.91 ]] && post_to_2.3.100
[[ "$POSTVERSION" == 2.3.100 ]] && post_to_2.3.110
[[ "$POSTVERSION" == 2.3.110 ]] && post_to_2.3.120
[[ "$POSTVERSION" == 2.3.120 ]] && post_to_2.3.130
[[ "$POSTVERSION" == 2.3.130 ]] && post_to_2.3.140
true
}
@@ -507,6 +609,19 @@ post_to_2.3.120() {
sed -i '/so-thehive-es/d;/so-thehive/d;/so-cortex/d' /opt/so/conf/so-status/so-status.conf
}
post_to_2.3.130() {
echo "Post Processing for 2.3.130"
POSTVERSION=2.3.130
}
post_to_2.3.140() {
echo "Post Processing for 2.3.140"
FORCE_SYNC=true so-user sync
so-kibana-restart
so-kibana-space-defaults
POSTVERSION=2.3.140
}
stop_salt_master() {
@@ -754,10 +869,13 @@ up_to_2.3.100() {
echo "Adding receiver to assigned_hostgroups.local.map.yaml"
grep -qxF " receiver:" /opt/so/saltstack/local/salt/firewall/assigned_hostgroups.local.map.yaml || sed -i -e '$a\ receiver:' /opt/so/saltstack/local/salt/firewall/assigned_hostgroups.local.map.yaml
INSTALLEDVERSION=2.3.100
}
up_to_2.3.110() {
sed -i 's|shards|index_template:\n template:\n settings:\n index:\n number_of_shards|g' /opt/so/saltstack/local/pillar/global.sls
INSTALLEDVERSION=2.3.110
}
up_to_2.3.120() {
@@ -765,7 +883,20 @@ up_to_2.3.120() {
so-thehive-stop
so-thehive-es-stop
so-cortex-stop
+INSTALLEDVERSION=2.3.120
}
up_to_2.3.130() {
# Remove file for nav update
rm -f /opt/so/conf/navigator/layers/nav_layer_playbook.json
INSTALLEDVERSION=2.3.130
}
up_to_2.3.140() {
elastalert_indices_check
##
INSTALLEDVERSION=2.3.140
}
verify_upgradespace() {
CURRENTSPACE=$(df -BG / | grep -v Avail | awk '{print $4}' | sed 's/.$//')
@@ -945,7 +1076,7 @@ update_repo() {
fi
rm -f /etc/apt/sources.list.d/salt.list
-echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt $OSVER main" > /etc/apt/sources.list.d/saltstack.list
+echo "deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/$ubuntu_version/amd64/salt3004.2/ $OSVER main" > /etc/apt/sources.list.d/saltstack.list
apt-get update
fi
}
@@ -1080,6 +1211,9 @@ main() {
fi
echo "Verifying we have the latest soup script."
verify_latest_update_script
es_version_check
es_indices_check
elastalert_indices_check
echo ""
set_palette
check_elastic_license

View File

@@ -0,0 +1,29 @@
{%- set cur_close_days = salt['pillar.get']('elasticsearch:index_settings:so-kratos:close', 30) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: close
description: >-
Close kratos indices older than {{cur_close_days}} days.
options:
delete_aliases: False
timeout_override:
continue_if_exception: False
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-kratos.*|so-kratos.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{cur_close_days}}
exclude:

View File

@@ -0,0 +1,29 @@
{%- set DELETE_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-kratos:delete', 365) -%}
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete kratos indices when older than {{ DELETE_DAYS }} days.
options:
ignore_empty_list: True
disable_action: False
filters:
- filtertype: pattern
kind: regex
value: '^(logstash-kratos.*|so-kratos.*)$'
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ DELETE_DAYS }}
exclude:

View File

@@ -0,0 +1,24 @@
{%- set WARM_DAYS = salt['pillar.get']('elasticsearch:index_settings:so-kratos:warm', 7) -%}
actions:
1:
action: allocation
description: "Apply shard allocation filtering rules to the specified indices"
options:
key: box_type
value: warm
allocation_type: require
wait_for_completion: true
timeout_override:
continue_if_exception: false
disable_action: false
filters:
- filtertype: pattern
kind: prefix
value: so-kratos
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: {{ WARM_DAYS }}

View File

@@ -23,8 +23,8 @@ read lastPID < $lf
# if lastPID is not null and a process with that pid exists, exit
[ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
/usr/sbin/so-curator-closed-delete > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1;
@@ -32,13 +32,12 @@ docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/cur
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kibana-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
{% endfor -%}

View File

@@ -29,7 +29,7 @@ LOG="/opt/so/log/curator/so-curator-closed-delete.log"
overlimit() {
-  [[ $(du -hs --block-size=1GB /nsm/elasticsearch/nodes | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]]
+  [[ $(du -hs --block-size=1GB /nsm/elasticsearch/indices | awk '{print $1}' ) -gt "{{LOG_SIZE_LIMIT}}" ]]
}
closedindices() {

View File

@@ -24,21 +24,18 @@ read lastPID < $lf
[ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-close.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-close.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-close.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-close.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
{% endfor -%}

View File

@@ -24,21 +24,18 @@ read lastPID < $lf
[ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-delete.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-delete.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-delete.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-delete.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-delete.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
{% endfor -%}

View File

@@ -24,21 +24,18 @@ read lastPID < $lf
[ ! -z "$lastPID" -a -d /proc/$lastPID ] && exit
echo $$ > $lf
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-zeek-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-beats-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-firewall-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ids-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-import-warm.yml > /dev/null 2>&1;
+docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-kratos-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-osquery-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-ossec-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-strelka-warm.yml > /dev/null 2>&1;
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-syslog-warm.yml > /dev/null 2>&1;
-{% for INDEX in THIRDPARTY.modules.keys() -%}
-docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-warm.yml > /dev/null 2>&1;
-{% endfor -%}
-{% for INDEX in SO.modules.keys() -%}
+{% for INDEX in MODULESMERGED.modules.keys() -%}
docker exec so-curator curator --config /etc/curator/config/curator.yml /etc/curator/action/so-{{ INDEX }}-warm.yml > /dev/null 2>&1{% if not loop.last %};{% endif %}
{% endfor -%}

View File

@@ -201,8 +201,8 @@ so-curatorclusterclose:
cron.present:
  - name: /usr/sbin/so-curator-cluster-close > /opt/so/log/curator/cron-close.log 2>&1
  - user: root
-  - minute: '2'
-  - hour: '*/1'
+  - minute: '5'
+  - hour: '1'
  - daymonth: '*'
  - month: '*'
  - dayweek: '*'
@@ -211,8 +211,8 @@ so-curatorclusterdelete:
cron.present:
  - name: /usr/sbin/so-curator-cluster-delete > /opt/so/log/curator/cron-delete.log 2>&1
  - user: root
-  - minute: '2'
-  - hour: '*/1'
+  - minute: '5'
+  - hour: '1'
  - daymonth: '*'
  - month: '*'
  - dayweek: '*'
@@ -221,8 +221,8 @@ so-curatorclusterwarm:
cron.present:
  - name: /usr/sbin/so-curator-cluster-warm > /opt/so/log/curator/cron-warm.log 2>&1
  - user: root
-  - minute: '2'
-  - hour: '*/1'
+  - minute: '5'
+  - hour: '1'
  - daymonth: '*'
  - month: '*'
  - dayweek: '*'
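
Net effect of the three cron changes above: the curator close, delete, and warm jobs move from running hourly at minute 2 to once daily at 01:05. The warm job, for example, now renders to a crontab entry like:

5 1 * * * /usr/sbin/so-curator-cluster-warm > /opt/so/log/curator/cron-warm.log 2>&1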

View File

@@ -129,6 +129,9 @@ so-elastalert:
  - file: elastaconf
  - watch:
    - file: elastaconf
- onlyif:
- "so-elasticsearch-query / | jq -r '.version.number[0:1]' | grep -q 8" {# only run this state if elasticsearch is version 8 #}
append_so-elastalert_so-status.conf:
  file.append:

View File

@@ -53,9 +53,6 @@ elasticsearch:
script:
  max_compilations_rate: 20000/1m
indices:
-  query:
-    bool:
-      max_clause_count: 3500
  id_field_data:
    enabled: false
logger:

View File

@@ -51,9 +51,10 @@
},
{ "set": { "field": "_index", "value": "so-firewall", "override": true } },
{ "set": { "if": "ctx.network?.transport_id == '0'", "field": "network.transport", "value": "icmp", "override": true } },
-{"community_id": {} },
+{ "community_id": {} },
{ "set": { "field": "module", "value": "pfsense", "override": true } },
{ "set": { "field": "dataset", "value": "firewall", "override": true } },
{ "set": { "field": "category", "value": "network", "override": true } },
{ "remove": { "field": ["real_message", "ip_sub_msg", "firewall.sub_message"], "ignore_failure": true } }
]
}

View File

@@ -0,0 +1,13 @@
{
"description" : "kratos",
"processors" : [
{
"set": {
"field": "_index",
"value": "so-kratos",
"override": true
}
},
{ "pipeline": { "name": "common" } }
]
}
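
A pipeline like this can be sanity-checked with Elasticsearch's simulate API; a minimal sketch, assuming the pipeline is registered as "kratos" and that so-elasticsearch-query passes extra arguments through to curl (as the other scripts in this changeset do):

so-elasticsearch-query _ingest/pipeline/kratos/_simulate -XPOST -H 'Content-Type: application/json' -d '{"docs":[{"_source":{"message":"test"}}]}'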

View File

@@ -30,7 +30,7 @@ echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
-  so-elasticsearch-query -k --output /dev/null --silent --head --fail
+  so-elasticsearch-query / -k --output /dev/null --silent --head --fail
  if [ $? -eq 0 ]; then
    ELASTICSEARCH_CONNECTED="yes"
    echo "connected!"

View File

@@ -64,6 +64,9 @@ logging.files:
  # automatically rotated
  rotateeverybytes: 10485760 # = 10MB
  # Rotate on startup
  rotateonstartup: false
  # Number of rotated log files to keep. Oldest files will be deleted first.
  keepfiles: 7
@@ -114,7 +117,8 @@ filebeat.inputs:
  fields_under_root: true
{%- if grains['role'] in ['so-eval', 'so-standalone', 'so-manager', 'so-managersearch', 'so-import'] %}
-- type: log
+- type: filestream
  id: logscan
  paths:
    - /logs/logscan/alerts.log
  fields:
@@ -131,7 +135,8 @@ filebeat.inputs:
{%- if grains['role'] in ['so-eval', 'so-standalone', 'so-sensor', 'so-helix', 'so-heavynode', 'so-import'] %}
{%- if ZEEKVER != 'SURICATA' %}
{%- for LOGNAME in salt['pillar.get']('zeeklogs:enabled', '') %}
-- type: log
+- type: filestream
  id: zeek-{{ LOGNAME }}
  paths:
    - /nsm/zeek/logs/current/{{ LOGNAME }}.log
  fields:
@@ -146,7 +151,8 @@ filebeat.inputs:
  clean_removed: true
  close_removed: false
-- type: log
+- type: filestream
  id: import-zeek={{ LOGNAME }}
  paths:
    - /nsm/import/*/zeek/logs/{{ LOGNAME }}.log
  fields:
@@ -170,7 +176,8 @@ filebeat.inputs:
{%- endfor %}
{%- endif %}
-- type: log
+- type: filestream
  id: suricata-eve
  paths:
    - /nsm/suricata/eve*.json
  fields:
@@ -186,7 +193,8 @@ filebeat.inputs:
  clean_removed: false
  close_removed: false
-- type: log
+- type: filestream
  id: import-suricata
  paths:
    - /nsm/import/*/suricata/eve*.json
  fields:
@@ -208,7 +216,8 @@ filebeat.inputs:
  clean_removed: false
  close_removed: false
{%- if STRELKAENABLED == 1 %}
-- type: log
+- type: filestream
  id: strelka
  paths:
    - /nsm/strelka/log/strelka.log
  fields:
@@ -229,7 +238,8 @@ filebeat.inputs:
{%- if WAZUHENABLED == 1 %}
-- type: log
+- type: filestream
  id: wazuh
  paths:
    - /wazuh/archives/archives.json
  fields:
@@ -247,7 +257,8 @@ filebeat.inputs:
{%- if FLEETMANAGER or FLEETNODE %}
-- type: log
+- type: filestream
  id: osquery
  paths:
    - /nsm/osquery/fleet/result.log
  fields:
@@ -267,6 +278,7 @@ filebeat.inputs:
{%- if RITAENABLED %}
- type: filestream
  id: rita-beacon
  paths:
    - /nsm/rita/beacons.csv
  exclude_lines: ['^Score', '^Source', '^Domain', '^No results']
@@ -282,6 +294,7 @@ filebeat.inputs:
  index: "so-rita"
- type: filestream
  id: rita-connection
  paths:
    - /nsm/rita/long-connections.csv
    - /nsm/rita/open-connections.csv
@@ -298,6 +311,7 @@ filebeat.inputs:
  index: "so-rita"
- type: filestream
  id: rita-dns
  paths:
    - /nsm/rita/exploded-dns.csv
  exclude_lines: ['^Domain', '^No results']
@@ -314,13 +328,13 @@ filebeat.inputs:
{%- endif %}
{%- if grains['role'] in ['so-eval', 'so-standalone', 'so-manager', 'so-managersearch', 'so-import'] %}
-- type: log
+- type: filestream
  id: kratos
  paths:
    - /logs/kratos/kratos.log
  fields:
    module: kratos
    category: host
-  tags: beat-ext
  processors:
    - decode_json_fields:
        fields: ["message"]
@@ -338,13 +352,15 @@ filebeat.inputs:
        target: ''
  fields:
    event.dataset: access
  pipeline: "kratos"
  fields_under_root: true
  clean_removed: false
  close_removed: false
{%- endif %}
{%- if grains.role == 'so-idh' %}
-- type: log
+- type: filestream
  id: idh
  paths:
    - /nsm/idh/opencanary.log
  fields:
@@ -433,6 +449,12 @@ output.elasticsearch:
  - index: "so-logscan"
    when.contains:
      module: "logscan"
- index: "so-elasticsearch-%{+YYYY.MM.dd}"
when.contains:
event.module: "elasticsearch"
- index: "so-kibana-%{+YYYY.MM.dd}"
when.contains:
event.module: "kibana"
setup.template.enabled: false
{%- else %}
@@ -443,6 +465,13 @@ output.logstash:
  # The Logstash hosts
  hosts:
{# dont let filebeat send to a node designated as dmz #}
{% import_yaml 'logstash/dmz_nodes.yaml' as dmz_nodes -%}
{% if dmz_nodes.logstash.dmz_nodes -%}
{% set dmz_nodes = dmz_nodes.logstash.dmz_nodes -%}
{% else -%}
{% set dmz_nodes = [] -%}
{% endif -%}
{%- if grains.role in ['so-sensor', 'so-fleet', 'so-node', 'so-idh'] %}
{%- set LOGSTASH = namespace() %}
{%- set LOGSTASH.count = 0 %}
@@ -451,8 +480,10 @@ output.logstash:
{%- for node_type, node_details in node_data.items() | sort -%}
{%- if node_type in ['manager', 'managersearch', 'standalone', 'receiver' ] %}
{%- for hostname in node_data[node_type].keys() %}
{%- if hostname not in dmz_nodes %}
{%- set LOGSTASH.count = LOGSTASH.count + 1 %}
      - "{{ hostname }}:5644" #{{ node_details[hostname].ip }}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- if LOGSTASH.count > 1 %}

View File

@@ -1,18 +1,2 @@
# DO NOT EDIT THIS FILE
-{%- if MODULES.modules is iterable and MODULES.modules is not string and MODULES.modules|length > 0%}
-{%- for module in MODULES.modules.keys() %}
-- module: {{ module }}
-{%- for fileset in MODULES.modules[module] %}
-  {{ fileset }}:
-    enabled: {{ MODULES.modules[module][fileset].enabled|string|lower }}
-{#- only manage the settings if the fileset is enabled #}
-{%- if MODULES.modules[module][fileset].enabled %}
-{%- for var, value in MODULES.modules[module][fileset].items() %}
-{%- if var|lower != 'enabled' %}
-    {{ var }}: {{ value }}
-{%- endif %}
-{%- endfor %}
-{%- endif %}
-{%- endfor %}
-{%- endfor %}
-{% endif %}
+{{ MODULES|yaml(False) }}

View File

@@ -18,8 +18,8 @@
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set LOCALHOSTNAME = salt['grains.get']('host') %}
{% set MANAGER = salt['grains.get']('master') %}
-{% from 'filebeat/map.jinja' import THIRDPARTY with context %}
-{% from 'filebeat/map.jinja' import SO with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESMERGED with context %}
+{% from 'filebeat/modules.map.jinja' import MODULESENABLED with context %}
{% from 'filebeat/map.jinja' import FILEBEAT_EXTRA_HOSTS with context %}
{% set ES_INCLUDED_NODES = ['so-eval', 'so-standalone', 'so-managersearch', 'so-node', 'so-heavynode', 'so-import'] %}
@@ -88,21 +88,21 @@ filebeatmoduleconf:
  - template: jinja
  - show_changes: False
-sodefaults_module_conf:
+merged_module_conf:
  file.managed:
-    - name: /opt/so/conf/filebeat/modules/securityonion.yml
+    - name: /opt/so/conf/filebeat/modules/modules.yml
    - source: salt://filebeat/etc/module_config.yml.jinja
    - template: jinja
    - defaults:
-        MODULES: {{ SO }}
+        MODULES: {{ MODULESENABLED }}
-thirdparty_module_conf:
-  file.managed:
+so_module_conf_remove:
+  file.absent:
+    - name: /opt/so/conf/filebeat/modules/securityonion.yml
+thirdyparty_module_conf_remove:
+  file.absent:
    - name: /opt/so/conf/filebeat/modules/thirdparty.yml
-    - source: salt://filebeat/etc/module_config.yml.jinja
-    - template: jinja
-    - defaults:
-        MODULES: {{ THIRDPARTY }}
so-filebeat:
  docker_container.running:
@@ -127,11 +127,11 @@ so-filebeat:
      - 0.0.0.0:514:514/udp
      - 0.0.0.0:514:514/tcp
      - 0.0.0.0:5066:5066/tcp
-{% for module in THIRDPARTY.modules.keys() %}
-{% for submodule in THIRDPARTY.modules[module] %}
-{% if THIRDPARTY.modules[module][submodule].enabled and THIRDPARTY.modules[module][submodule]["var.syslog_port"] is defined %}
-      - {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/tcp
-      - {{ THIRDPARTY.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}:{{ THIRDPARTY.modules[module][submodule]["var.syslog_port"] }}/udp
+{% for module in MODULESMERGED.modules.keys() %}
+{% for submodule in MODULESMERGED.modules[module] %}
+{% if MODULESMERGED.modules[module][submodule].enabled and MODULESMERGED.modules[module][submodule]["var.syslog_port"] is defined %}
+      - {{ MODULESMERGED.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}/tcp
+      - {{ MODULESMERGED.modules[module][submodule].get("var.syslog_host", "0.0.0.0") }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}:{{ MODULESMERGED.modules[module][submodule]["var.syslog_port"] }}/udp
{% endif %}
{% endfor %}
{% endfor %}

View File

@@ -1,10 +1,3 @@
-{% import_yaml 'filebeat/thirdpartydefaults.yaml' as TPDEFAULTS %}
-{% set THIRDPARTY = salt['pillar.get']('filebeat:third_party_filebeat', default=TPDEFAULTS.third_party_filebeat, merge=True) %}
-{% import_yaml 'filebeat/securityoniondefaults.yaml' as SODEFAULTS %}
-{% set SO = SODEFAULTS.securityonion_filebeat %}
-{#% set SO = salt['pillar.get']('filebeat:third_party_filebeat', default=SODEFAULTS.third_party_filebeat, merge=True) %#}
{% set role = grains.role %}
{% set FILEBEAT_EXTRA_HOSTS = [] %}
{% set mainint = salt['pillar.get']('host:mainint') %}

View File

@@ -0,0 +1,18 @@
{% import_yaml 'filebeat/thirdpartydefaults.yaml' as TPDEFAULTS %}
{% import_yaml 'filebeat/securityoniondefaults.yaml' as SODEFAULTS %}
{% set THIRDPARTY = salt['pillar.get']('filebeat:third_party_filebeat', default=TPDEFAULTS.third_party_filebeat, merge=True) %}
{% set SO = salt['pillar.get']('filebeat:securityonion_filebeat', default=SODEFAULTS.securityonion_filebeat, merge=True) %}
{% set MODULESMERGED = salt['defaults.merge'](SO, THIRDPARTY, in_place=False) %}
{% set MODULESENABLED = [] %}
{% for module in MODULESMERGED.modules.keys() %}
{% set ENABLEDFILESETS = {} %}
{% for fileset in MODULESMERGED.modules[module] %}
{% if MODULESMERGED.modules[module][fileset].get('enabled', False) %}
{% do ENABLEDFILESETS.update({'module': module, fileset: MODULESMERGED.modules[module][fileset]}) %}
{% endif %}
{% endfor %}
{% if ENABLEDFILESETS|length > 0 %}
{% do MODULESENABLED.append(ENABLEDFILESETS) %}
{% endif %}
{% endfor %}
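
To illustrate the shape of the data this template produces (hypothetical pillar values):

{# Given MODULESMERGED of:
     modules:
       cisco:
         asa: {enabled: true}
         ios: {enabled: false}
   MODULESENABLED renders as [{'module': 'cisco', 'asa': {'enabled': True}}]:
   disabled filesets are dropped and each list entry records its parent module. #}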

View File

@@ -1,7 +1,7 @@
filebeat:
  config:
    inputs:
-      - type: log
+      - type: filestream
        paths:
          - /nsm/mylogdir/mylog.log
        fields:

View File

@@ -74,12 +74,6 @@ third_party_filebeat:
      enabled: false
    amp:
      enabled: false
-  cyberark:
-    corepas:
-      enabled: false
-      var.input: udp
-      var.syslog_host: 0.0.0.0
-      var.syslog_port: 9527
  cylance:
    protect:
      enabled: false
@@ -259,8 +253,6 @@ third_party_filebeat:
      enabled: false
    anomalithreatstream:
      enabled: false
-  recordedfuture:
-    enabled: false
  zscaler:
    zia:
      enabled: false

View File

@@ -59,7 +59,7 @@ update() {
IFS=$'\r\n' GLOBIGNORE='*' command eval 'LINES=($(cat $1))'
for i in "${LINES[@]}"; do
-  RESPONSE=$({{ ELASTICCURL }} -X PUT "localhost:5601/api/saved_objects/config/7.17.3" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
+  RESPONSE=$({{ ELASTICCURL }} -X PUT "localhost:5601/api/saved_objects/config/8.3.2" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d " $i ")
  echo $RESPONSE; if [[ "$RESPONSE" != *"\"success\":true"* ]] && [[ "$RESPONSE" != *"updated_at"* ]] ; then RETURN_CODE=1;fi
done

View File

@@ -2,7 +2,7 @@
{% set HIGHLANDER = salt['pillar.get']('global:highlander', False) %}
{% if salt['pillar.get']('elasticsearch:auth:enabled', False) %}
-{% do KIBANACONFIG.kibana.config.elasticsearch.update({'username': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user'), 'password': salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass')}) %}
+{% do KIBANACONFIG.kibana.config.elasticsearch.update({'username': salt['pillar.get']('elasticsearch:auth:users:so_kibana_user:user'), 'password': salt['pillar.get']('elasticsearch:auth:users:so_kibana_user:pass')}) %}
{% else %}
{% do KIBANACONFIG.kibana.config.xpack.update({'security': {'authc': {'providers': {'anonymous': {'anonymous1': {'order': 0, 'credentials': 'elasticsearch_anonymous_user'}}}}}}) %}
{% endif %}

View File

@@ -28,7 +28,8 @@ kibana:
    security:
      showInsecureClusterWarning: False
    xpack:
-      ml:
-        enabled: False
      security:
-        secureCookies: True
+        secureCookies: true
+      reporting:
+        kibanaServer:
+          hostname: localhost

View File

@@ -1 +1 @@
-{"attributes": {"buildNum": 39457,"defaultIndex": "2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "7.17.3","id": "7.17.3","migrationVersion": {"config": "7.13.0"},"references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}
+{"attributes": {"buildNum": 39457,"defaultIndex": "2289a0c0-6970-11ea-a0cd-ffa0f6a1bc29","defaultRoute": "/app/dashboards#/view/a8411b30-6d03-11ea-b301-3d6c35840645","discover:sampleSize": 100,"theme:darkMode": true,"timepicker:timeDefaults": "{\n \"from\": \"now-24h\",\n \"to\": \"now\"\n}"},"coreMigrationVersion": "8.3.2","id": "8.3.2","migrationVersion": {"config": "7.13.0"},"references": [],"type": "config","updated_at": "2021-10-10T10:10:10.105Z","version": "WzI5NzUsMl0="}

View File

@@ -37,7 +37,7 @@ selfservice:
    ui_url: https://{{ WEBACCESS }}/login/
    default_browser_return_url: https://{{ WEBACCESS }}/
-  whitelisted_return_urls:
+  allowed_return_urls:
    - http://127.0.0.1
log:
@@ -59,7 +59,10 @@ hashers:
    cost: 12
identity:
-  default_schema_url: file:///kratos-conf/schema.json
+  default_schema_id: default
+  schemas:
+    - id: default
+      url: file:///kratos-conf/schema.json
courier:
  smtp:

View File

@@ -0,0 +1,9 @@
# Do not edit this file. Copy it to /opt/so/saltstack/local/salt/logstash/ and make changes there. It should be formatted as a list.
# logstash:
# dmz_nodes:
# - mydmznodehostname1
# - mydmznodehostname2
# - mydmznodehostname3
logstash:
dmz_nodes:

View File

@@ -0,0 +1,22 @@
{%- if grains['role'] == 'so-eval' -%}
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{%- set ES_USER = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:user', '') %}
{%- set ES_PASS = salt['pillar.get']('elasticsearch:auth:users:so_elastic_user:pass', '') %}
output {
if [module] =~ "kratos" and "import" not in [tags] {
elasticsearch {
pipeline => "kratos"
hosts => "{{ ES }}"
{% if salt['pillar.get']('elasticsearch:auth:enabled') is sameas true %}
user => "{{ ES_USER }}"
password => "{{ ES_PASS }}"
{% endif %}
index => "so-kratos"
ssl => true
ssl_certificate_verification => false
}
}
}

View File

@@ -130,6 +130,8 @@ http {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Proxy "";
      proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
    }
    error_page 500 502 503 504 /50x.html;
    location = /usr/share/nginx/html/50x.html {
@@ -331,6 +333,8 @@ http {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Proxy "";
      proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
    }
{%- endif %}
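
The added Upgrade and Connection headers forward the HTTP/1.1 upgrade handshake, which is what allows WebSocket connections to be proxied through nginx to the backend services.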

View File

@@ -1,27 +1,52 @@
{
-  "name": "Playbook",
-  "version": "3.0",
-  "domain": "mitre-enterprise",
-  "description": "Current Coverage of Playbook",
+  "name": "Playbook Coverage",
+  "versions": {
+    "attack": "11",
+    "navigator": "4.6.4",
+    "layer": "4.3"
+  },
+  "domain": "enterprise-attack",
+  "description": "",
  "filters": {
-    "stages": ["act"],
-    "platforms": [
-      "windows",
-      "linux",
-      "mac"
-    ]
+    "platforms": [
+      "Linux",
+      "macOS",
+      "Windows",
+      "Azure AD",
+      "Office 365",
+      "SaaS",
+      "IaaS",
+      "Google Workspace",
+      "PRE",
+      "Network",
+      "Containers"
+    ]
  },
  "sorting": 0,
-  "viewMode": 0,
+  "layout": {
+    "layout": "side",
+    "aggregateFunction": "average",
+    "showID": false,
+    "showName": true,
+    "showAggregateScores": false,
+    "countUnscored": false
+  },
  "hideDisabled": false,
  "techniques": [],
  "gradient": {
-    "colors": ["#ff6666", "#ffe766", "#8ec843"],
+    "colors": [
+      "#ff6666ff",
+      "#ffe766ff",
+      "#8ec843ff"
+    ],
    "minValue": 0,
    "maxValue": 100
  },
+  "legendItems": [],
  "metadata": [],
+  "links": [],
  "showTacticRowBackground": false,
  "tacticRowBackground": "#dddddd",
-  "selectTechniquesAcrossTactics": true
+  "selectTechniquesAcrossTactics": true,
+  "selectSubtechniquesWithParent": false
}

View File

@@ -1,58 +1,62 @@
{%- set URL_BASE = salt['pillar.get']('global:url_base', '') %}
{
-  "enterprise_attack_url": "assets/enterprise-attack.json",
-  "pre_attack_url": "assets/pre-attack.json",
-  "mobile_data_url": "assets/mobile-attack.json",
-  "taxii_server": {
-    "enabled": false,
-    "url": "https://cti-taxii.mitre.org/",
-    "collections": {
-      "enterprise_attack": "95ecc380-afe9-11e4-9b6c-751b66dd541e",
-      "pre_attack": "062767bd-02d2-4b72-84ba-56caef0f8658",
-      "mobile_attack": "2f669986-b40b-4423-b720-4396ca6a462b"
-    }
-  },
-  "domain": "mitre-enterprise",
+  "versions": [
+    {
+      "name": "ATT&CK v11",
+      "version": "11",
+      "domains": [
+        {
+          "name": "Enterprise",
+          "identifier": "enterprise-attack",
+          "data": ["assets/so/enterprise-attack.json"]
+        }
+      ]
+    }
+  ],
  "custom_context_menu_items": [ {"label": "view related plays","url": " https://{{URL_BASE}}/playbook/projects/detection-playbooks/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=cf_15&op%5Bcf_15%5D=%3D&f%5B%5D=&c%5B%5D=status&c%5B%5D=cf_10&c%5B%5D=cf_13&c%5B%5D=cf_18&c%5B%5D=cf_19&c%5B%5D=cf_1&c%5B%5D=updated_on&v%5Bcf_15%5D%5B%5D=~Technique_ID~"}],
  "default_layers": {
    "enabled": true,
-    "urls": [
-      "assets/playbook.json"
-    ]
+    "urls": ["assets/so/nav_layer_playbook.json"]
  },
  "comment_color": "yellow",
+  "link_color": "blue",
+  "banner": "",
  "features": [
+    {"name": "leave_site_dialog", "enabled": true, "description": "Disable to remove the dialog prompt when leaving site."},
    {"name": "tabs", "enabled": true, "description": "Disable to remove the ability to open new tabs."},
    {"name": "selecting_techniques", "enabled": true, "description": "Disable to remove the ability to select techniques."},
    {"name": "header", "enabled": true, "description": "Disable to remove the header containing 'MITRE ATT&CK Navigator' and the link to the help page. The help page can still be accessed from the new tab menu."},
+    {"name": "subtechniques", "enabled": true, "description": "Disable to remove all sub-technique features from the interface."},
    {"name": "selection_controls", "enabled": true, "description": "Disable to to disable all subfeatures", "subfeatures": [
      {"name": "search", "enabled": true, "description": "Disable to remove the technique search panel from the interface."},
      {"name": "multiselect", "enabled": true, "description": "Disable to remove the multiselect panel from interface."},
      {"name": "deselect_all", "enabled": true, "description": "Disable to remove the deselect all button from the interface."}
    ]},
-    {"name": "layer_controls", "enabled": true, "description": "Disable to to disable all subfeatures", "subfeatures": [
-      {"name": "layer_info", "enabled": true, "description": "Disable to remove the layer info (name, description and metadata) panel from the interface. Note that the layer can still be renamed in the tab."},
+    {"name": "layer_controls", "enabled": true, "description": "Disable to disable all subfeatures", "subfeatures": [
+      {"name": "layer_info", "enabled": true, "description": "Disable to remove the layer info (name, description and layer metadata) panel from the interface. Note that the layer can still be renamed in the tab."},
      {"name": "download_layer", "enabled": true, "description": "Disable to remove the button to download the layer."},
-      {"name": "export_render", "enabled": true, "description": "Disable to the remove the button to render the current layer."},
-      {"name": "export_excel", "enabled": true, "description": "Disable to the remove the button to export the current layer to MS Excel (.xlsx) format."},
-      {"name": "filters", "enabled": true, "description": "Disable to the remove the filters panel from interface."},
-      {"name": "sorting", "enabled": true, "description": "Disable to the remove the sorting button from the interface."},
-      {"name": "color_setup", "enabled": true, "description": "Disable to the remove the color setup panel from interface, containing customization controls for scoring gradient and tactic row color."},
-      {"name": "toggle_hide_disabled", "enabled": true, "description": "Disable to the remove the hide disabled techniques button from the interface."},
-      {"name": "toggle_view_mode", "enabled": true, "description": "Disable to the remove the toggle view mode button from interface."},
-      {"name": "legend", "enabled": true, "description": "Disable to the remove the legend panel from the interface."}
+      {"name": "export_render", "enabled": true, "description": "Disable to remove the button to render the current layer."},
+      {"name": "export_excel", "enabled": true, "description": "Disable to remove the button to export the current layer to MS Excel (.xlsx) format."},
+      {"name": "filters", "enabled": true, "description": "Disable to remove the filters panel from interface."},
+      {"name": "sorting", "enabled": true, "description": "Disable to remove the sorting button from the interface."},
+      {"name": "color_setup", "enabled": true, "description": "Disable to remove the color setup panel from interface, containing customization controls for scoring gradient and tactic row color."},
+      {"name": "toggle_hide_disabled", "enabled": true, "description": "Disable to remove the hide disabled techniques button from the interface."},
+      {"name": "layout_controls", "enabled": true, "description": "Disable to remove the ability to change the current matrix layout."},
+      {"name": "legend", "enabled": true, "description": "Disable to remove the legend panel from the interface."}
    ]},
-    {"name": "technique_controls", "enabled": true, "description": "Disable to to disable all subfeatures", "subfeatures": [
-      {"name": "disable_techniques", "enabled": true, "description": "Disable to the remove the ability to disable techniques."},
-      {"name": "manual_color", "enabled": true, "description": "Disable to the remove the ability to assign manual colors to techniques."},
-      {"name": "scoring", "enabled": true, "description": "Disable to the remove the ability to score techniques."},
-      {"name": "comments", "enabled": true, "description": "Disable to the remove the ability to add comments to techniques."},
+    {"name": "technique_controls", "enabled": true, "description": "Disable to disable all subfeatures", "subfeatures": [
+      {"name": "disable_techniques", "enabled": true, "description": "Disable to remove the ability to disable techniques."},
+      {"name": "manual_color", "enabled": true, "description": "Disable to remove the ability to assign manual colors to techniques."},
+      {"name": "scoring", "enabled": true, "description": "Disable to remove the ability to score techniques."},
+      {"name": "comments", "enabled": true, "description": "Disable to remove the ability to add comments to techniques."},
+      {"name": "comment_underline", "enabled": true, "description": "Disable to remove the comment underline effect on techniques."},
+      {"name": "links", "enabled": true, "description": "Disable to remove the ability to assign hyperlinks to techniques."},
+      {"name": "link_underline", "enabled": true, "description": "Disable to remove the hyperlink underline effect on techniques."},
+      {"name": "metadata", "enabled": true, "description": "Disable to remove the ability to add metadata to techniques."},
      {"name": "clear_annotations", "enabled": true, "description": "Disable to remove the button to clear all annotations on the selected techniques."}
    ]}
  ]

View File

@@ -50,7 +50,7 @@ nginxtmp:
navigatorconfig:
  file.managed:
-    - name: /opt/so/conf/navigator/navigator_config.json
+    - name: /opt/so/conf/navigator/config.json
    - source: salt://nginx/files/navigator_config.json
    - user: 939
    - group: 939
@@ -59,7 +59,7 @@ navigatorconfig:
navigatordefaultlayer:
  file.managed:
-    - name: /opt/so/conf/navigator/nav_layer_playbook.json
+    - name: /opt/so/conf/navigator/layers/nav_layer_playbook.json
    - source: salt://nginx/files/nav_layer_playbook.json
    - user: 939
    - group: 939
@@ -69,7 +69,7 @@ navigatordefaultlayer:
navigatorpreattack:
  file.managed:
-    - name: /opt/so/conf/navigator/pre-attack.json
+    - name: /opt/so/conf/navigator/layers/pre-attack.json
    - source: salt://nginx/files/pre-attack.json
    - user: 939
    - group: 939
@@ -78,7 +78,7 @@ navigatorpreattack:
navigatorenterpriseattack:
  file.managed:
-    - name: /opt/so/conf/navigator/enterprise-attack.json
+    - name: /opt/so/conf/navigator/layers/enterprise-attack.json
    - source: salt://nginx/files/enterprise-attack.json
    - user: 939
    - group: 939
@@ -99,10 +99,8 @@ so-nginx:
      - /etc/pki/managerssl.crt:/etc/pki/nginx/server.crt:ro
      - /etc/pki/managerssl.key:/etc/pki/nginx/server.key:ro
      # ATT&CK Navigator binds
-      - /opt/so/conf/navigator/navigator_config.json:/opt/socore/html/navigator/assets/config.json:ro
-      - /opt/so/conf/navigator/nav_layer_playbook.json:/opt/socore/html/navigator/assets/playbook.json:ro
-      - /opt/so/conf/navigator/enterprise-attack.json:/opt/socore/html/navigator/assets/enterprise-attack.json:ro
-      - /opt/so/conf/navigator/pre-attack.json:/opt/socore/html/navigator/assets/pre-attack.json:ro
+      - /opt/so/conf/navigator/layers/:/opt/socore/html/navigator/assets/so:ro
+      - /opt/so/conf/navigator/config.json:/opt/socore/html/navigator/assets/config.json:ro
{% endif %}
{% if ISAIRGAP is sameas true %}
      - /nsm/repo:/opt/socore/html/repo:ro

View File

@@ -42,6 +42,15 @@ query_updatwebhooks:
    - connection_user: root
    - connection_pass: {{ MYSQLPASS }}
query_updatename:
mysql_query.run:
- database: playbook
- query: "update custom_fields set name = 'Custom Filter' where id = 21;"
- connection_host: {{ MAINIP }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
query_updatepluginurls: query_updatepluginurls:
mysql_query.run: mysql_query.run:
- database: playbook - database: playbook

View File

@@ -42,7 +42,7 @@ gpgkey=file:///etc/pki/rpm-gpg/docker.pub
 [saltstack]
 name=SaltStack repo for RHEL/CentOS $releasever PY3
-baseurl=https://repo.securityonion.net/file/securityonion-repo/saltstack/
+baseurl=https://repo.securityonion.net/file/securityonion-repo/salt/
 enabled=1
 gpgcheck=1
 gpgkey=file:///etc/pki/rpm-gpg/SALTSTACK-GPG-KEY.pub

View File

@@ -42,7 +42,7 @@ gpgkey=https://repo.securityonion.net/file/securityonion-repo/keys/docker.pub
 [saltstack]
 name=SaltStack repo for RHEL/CentOS $releasever PY3
-baseurl=http://repocache.securityonion.net/file/securityonion-repo/saltstack/
+baseurl=http://repocache.securityonion.net/file/securityonion-repo/salt/
 enabled=1
 gpgcheck=1
 gpgkey=https://repo.securityonion.net/file/securityonion-repo/keys/SALTSTACK-GPG-KEY.pub

View File

@@ -7,7 +7,7 @@ saltstack.list:
   file.managed:
     - name: /etc/apt/sources.list.d/saltstack.list
     - contents:
-      - deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/{{grains.osrelease}}/amd64/salt/ {{grains.oscodename}} main
+      - deb https://repo.securityonion.net/file/securityonion-repo/ubuntu/{{grains.osrelease}}/amd64/salt3004.2/ {{grains.oscodename}} main
 apt_update:
   cmd.run:

View File

@@ -2,4 +2,4 @@
 # When updating the salt version, also update the version in securityonion-builds/images/iso-task/Dockerfile and saltify function in so-functions
 salt:
   master:
-    version: 3004.1
+    version: 3004.2

View File

@@ -2,6 +2,6 @@
 # When updating the salt version, also update the version in securityonion-builds/images/iso-task/Dockerfile and saltify function in so-functions
 salt:
   minion:
-    version: 3004.1
+    version: 3004.2
     check_threshold: 3600 # in seconds, threshold used for so-salt-minion-check. any value less than 600 seconds may cause a lot of salt-minion restarts since the job to touch the file occurs every 5-8 minutes by default
     service_start_delay: 30 # in seconds.

View File

@@ -4216,17 +4216,35 @@ install_centos_stable_deps() {
 install_centos_stable() {
     __PACKAGES=""
+    local cloud='salt-cloud'
+    local master='salt-master'
+    local minion='salt-minion'
+    local syndic='salt-syndic'
+    if echo "$STABLE_REV" | grep -q "archive";then # point release being applied
+        local ver=$(echo "$STABLE_REV"|awk -F/ '{print $2}') # strip archive/
+    elif echo "$STABLE_REV" | egrep -vq "archive|latest";then # latest or major version (3003, 3004, etc) being applied
+        local ver=$STABLE_REV
+    fi
+    if [ ! -z $ver ]; then
+        cloud+="-$ver"
+        master+="-$ver"
+        minion+="-$ver"
+        syndic+="-$ver"
+    fi
     if [ "$_INSTALL_CLOUD" -eq $BS_TRUE ];then
-        __PACKAGES="${__PACKAGES} salt-cloud"
+        __PACKAGES="${__PACKAGES} $cloud"
     fi
     if [ "$_INSTALL_MASTER" -eq $BS_TRUE ];then
-        __PACKAGES="${__PACKAGES} salt-master"
+        __PACKAGES="${__PACKAGES} $master"
     fi
     if [ "$_INSTALL_MINION" -eq $BS_TRUE ]; then
-        __PACKAGES="${__PACKAGES} salt-minion"
+        __PACKAGES="${__PACKAGES} $minion"
     fi
     if [ "$_INSTALL_SYNDIC" -eq $BS_TRUE ];then
-        __PACKAGES="${__PACKAGES} salt-syndic"
+        __PACKAGES="${__PACKAGES} $syndic"
     fi
     # shellcheck disable=SC2086

View File

@@ -0,0 +1,269 @@
# Security Onion Analyzers
Security Onion provides a means for performing data analysis on varying inputs. This input can be any data of interest sourced from event logs, such as hostnames, IP addresses, file hashes, or URLs. The analysis is conducted by one or more analyzers that understand that type of input. A set of analyzers comes with the default installation of Security Onion, and additional analyzers can be added to extend the analysis across other areas or data types.
## Supported Observable Types
The built-in analyzers support the following observable types:
| Name | Domain | Hash | IP | JA3 | Mail | Other | URI | URL | User Agent |
| ------------------------|--------|-------|-------|-------|-------|-------|-------|-------|------------|
| Alienvault OTX |&check; |&check;|&check;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|
| EmailRep |&cross; |&cross;|&cross;|&cross;|&check;|&cross;|&cross;|&cross;|&cross;|
| Greynoise |&cross; |&cross;|&check;|&cross;|&cross;|&cross;|&cross;|&cross;|&cross;|
| JA3er |&cross; |&cross;|&cross;|&check;|&cross;|&cross;|&cross;|&cross;|&cross;|
| LocalFile |&check; |&check;|&check;|&check;|&cross;|&check;|&cross;|&check;|&cross;|
| Malware Hash Registry |&cross; |&check;|&cross;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|
| Pulsedive |&check; |&check;|&check;|&cross;|&cross;|&cross;|&check;|&check;|&check;|
| Spamhaus |&cross; |&cross;|&check;|&cross;|&cross;|&cross;|&cross;|&cross;|&cross;|
| Urlhaus |&cross; |&cross;|&cross;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|
| Urlscan |&cross; |&cross;|&cross;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|
| Virustotal |&check; |&check;|&check;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|
| WhoisLookup |&check; |&cross;|&cross;|&cross;|&cross;|&cross;|&check;|&cross;|&cross;|
## Authentication
Many analyzers require authentication via an API key or similar. The table below illustrates which analyzers require authentication.
| Name | Authn Req'd |
|--------------------------|------------|
| [AlienVault OTX](https://otx.alienvault.com/api) | &check; |
| [EmailRep](https://emailrep.io/key) | &check; |
| [GreyNoise](https://www.greynoise.io/plans/community) | &check; |
| [JA3er](https://ja3er.com/) | &cross; |
| LocalFile | &cross; |
| [Malware Hash Registry](https://hash.cymru.com/docs_whois) | &cross; |
| [Pulsedive](https://pulsedive.com/api/) | &check; |
| [Spamhaus](https://www.spamhaus.org/dbl/) | &cross; |
| [Urlhaus](https://urlhaus.abuse.ch/) | &cross; |
| [Urlscan](https://urlscan.io/docs/api/) | &check; |
| [VirusTotal](https://developers.virustotal.com/reference/overview) | &check; |
| [WhoisLookup](https://github.com/meeb/whoisit) | &cross; |
## Developer Guide
### Python
Analyzers are Python modules and can consist of a single .py script, for simpler analyzers, or a more complex set of scripts organized within nested directories.
The Python language was chosen because of its wide adoption in the security industry, its ease of development and testing, and the abundance of developers with Python skills.
Specifically, analyzers must be compatible with Python 3.10.
For more information about Python, see the [Python Documentation](https://docs.python.org).
### Development
Custom analyzers should be developed outside of the Security Onion cluster, in a proper software development environment with version control or other backup mechanisms in place. The analyzer can be developed, unit tested, and integration tested without the need for a Security Onion installation. Once satisfied with the analyzer's functionality, copy the analyzer directory to the Security Onion manager node.
Developing an analyzer directly on a Security Onion manager node is strongly discouraged, as loss of source code (and time and effort) can occur should the manager node suffer a catastrophic failure with disk storage loss.
For best results, avoid long, complicated functions in favor of short, discrete functions. This has several benefits:
- Easier to troubleshoot
- Easier to maintain
- Easier to unit test
- Easier for other developers to review
### Linting
Source code should adhere to the [PEP 8 - Style Guide for Python Code](https://peps.python.org/pep-0008/). Developers can use the default configuration of `flake8` to validate conformance, or run the included `build.sh` inside the analyzers directory. Note that linting conformance is mandatory for analyzers that are contributed back to the Security Onion project.
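For a quick local check before committing, the linter can also be run against a single analyzer directory; `myanalyzer` below is a hypothetical placeholder:
```bash
# Lint one analyzer directory using the shared flake8 config in pytest.ini
# (a sketch; assumes the analyzers directory layout described later in this document)
cd salt/sensoroni/files/analyzers
flake8 ./myanalyzer --config=./pytest.ini
```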
### Testing
Python's [unittest](https://docs.python.org/3/library/unittest.html) library can be used for covering analyzer code with unit tests. Unit tests are encouraged for custom analyzers, and mandatory for public analyzers submitted back to the Security Onion project.
If you are new to unit testing, please see the included `urlhaus_test.py` as an example.
Unit tests should be named following the pattern `<scriptname>_test.py`.
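As an illustration, a minimal test for a hypothetical `myanalyzer` module exposing a `prepareResults` function might look like the following sketch:
```python
import unittest

from myanalyzer import prepareResults  # hypothetical analyzer module


class TestMyAnalyzerMethods(unittest.TestCase):
    def test_prepareResults_harmless(self):
        # This sketch assumes the analyzer maps a non-suspicious
        # response to the ok/harmless status and summary
        raw = {"suspicious": False}
        results = prepareResults(raw)
        self.assertEqual(results["status"], "ok")
        self.assertEqual(results["summary"], "harmless")


if __name__ == "__main__":
    unittest.main()
```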
### Analyzer Package Structure
Deployment of a custom analyzer entails copying the analyzer source directory and dependency wheel archives to the Security Onion manager node. The destination locations can be found inside the `securityonion` salt source directory tree. Using the [Saltstack](https://github.com/saltstack/salt) directory pattern allows developers to add their own analyzers with minimal additional effort when upgrading to newer versions of Security Onion. When the _sensoroni_ salt state executes, it will merge the default analyzers with any local analyzers and copy the merged analyzers into the `/opt/so/conf/sensoroni` directory.
Do not modify files in the `/opt/so/conf/sensoroni` directory! This is a generated directory and changes made inside will be automatically erased on a frequent interval.
On a Security Onion manager, custom analyzers should be placed inside the `/opt/so/saltstack/local/salt/sensoroni` directory, as described in the next section.
#### Directory Tree
From within the default saltstack directory, the following files and directories exist:
```
salt
|- sensoroni
   |- files
      |- analyzers
         |- urlhaus              <- Example of an existing analyzer
         |  |- source-packages   <- Contains wheel package bundles for this analyzer's dependencies
         |  |- site-packages     <- Auto-generated site-packages directory (or used for custom dependencies)
         |  |- requirements.txt  <- List of all dependencies needed for this analyzer
         |  |- urlhaus.py        <- Source code for the analyzer
         |  |- urlhaus_test.py   <- Unit tests for the analyzer source code
         |  |- urlhaus.json      <- Metadata for the analyzer
         |  |- __init__.py       <- Package initialization file, often empty
         |
         |- build.sh             <- Simple CI tool for validating linting and unit tests
         |- helpers.py           <- Common functions shared by many analyzers
         |- helpers_test.py      <- Unit tests for the shared source code
         |- pytest.ini           <- Configuration options for flake8 and pytest
         |- README.md            <- The file you are currently reading
```
Custom analyzers should conform to this same structure, but instead of being placed in the `/opt/so/saltstack/default` directory tree, they should be placed in the `/opt/so/saltstack/local` directory tree. This ensures future Security Onion upgrades will not overwrite customizations. Shared files like `build.sh` and `helpers.py` do not need to be duplicated. They can remain in the _default_ directory tree. Only new or modified files should exist in the _local_ directory tree.
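For example, a hypothetical custom analyzer named `myanalyzer` could be staged into the local tree like so (a sketch; the development path is a placeholder):
```bash
# Create the local analyzers tree if it does not already exist
mkdir -p /opt/so/saltstack/local/salt/sensoroni/files/analyzers
# Copy the custom analyzer directory from a development checkout
cp -r ~/dev/myanalyzer /opt/so/saltstack/local/salt/sensoroni/files/analyzers/
```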
#### Metadata
Each analyzer has certain metadata that helps describe the function of the analyzer, required inputs, artifact compatibility, optional configuration options, analyzer version, and other important details of the analyzer. This file is a static file and is not intended to be used for dynamic or custom configuration options. It should only be modified by the author of the analyzer.
The following example describes the urlhaus metadata content:
```
{
"name": "Urlhaus", <- Unique human-friendly name of this analyzer
"version": "0.1", <- The version of the analyzer
"author": "Security Onion Solutions", <- Author's name, and/or email or other contact information
"description": "This analyzer queries URLHaus...", <- A brief, concise description of the analyzer
"supportedTypes" : ["url"], <- List of types that must match the SOC observable types
"baseUrl": "https://urlhaus-api.abuse.ch/v1/url/" <- Optional hardcoded data used by the analyzer
}
```
The `supportedTypes` values should only contain the types that this analyzer can work with. In the case of the URLHaus analyzer, we know that it works with URLs, so adding "hash" to this list wouldn't make sense, since URLHaus doesn't provide information about file hashes. If an analyzer does not support a particular type, it will not show up in the analyzer results in SOC for that observable being analyzed. This is intentional, to eliminate unnecessary screen clutter in SOC. To find a list of available values for the `supportedTypes` field, log in to SOC and, inside of a Case, click the + button on the Observables tab. You will see a list of types, and each of those can be used in this metadata field when applicable to the analyzer.
#### Dependencies
Analyzers will often require the use of third-party packages. For example, if an analyzer needs to make a request to a remote server via HTTPS, then the `requests` package will likely be used. Each analyzer will contain a `requirements.txt` file, in which all third-party dependencies can be specified, following the Python [Requirements File Specification](https://pip.pypa.io/en/stable/reference/requirements-file-format/).
Additionally, to support airgapped users, the dependency packages themselves, and any transitive dependencies, should be placed inside the `source-packages` directory. To obtain the full hierarchy of dependencies, execute the following command:
```bash
pip download -r <my-analyzer-path>/requirements.txt -d <my-analyzer-path>/source-packages
```
### Analyzer Architecture
The Sensoroni Docker container is responsible for executing analyzers. Only the manager's Sensoroni container will process analyzer jobs. Other nodes in the grid, such as sensors and search nodes, will not be assigned analyzer jobs.
When the Sensoroni Docker container starts, the `/opt/so/conf/sensoroni/analyzer` directory is mapped into the container. The initialization of the Sensoroni Analyze module will scan that directory for any subdirectories. Valid subdirectories will be added as an available analyzer.
The analyzer itself will only run when a user in SOC enqueues an analyzer job, such as via the Cases -> Observables tab. When the Sensoroni node is ready to run the job, it will execute the Python command interpreter separately for each loaded analyzer. The command line resembles the following:
```bash
python -m urlhaus '{"artifactType":"url","value":"https://bigbadbotnet.invalid",...}'
```
It is up to each analyzer to determine whether the provided input is compatible with that analyzer. This determination is assisted by the analyzer metadata, as described earlier in this document, via the `supportedTypes` list.
Once the analyzer completes its work, it must terminate promptly. See the following sections for more details on the expected internal behavior of the analyzer.
#### Configuration
Analyzers may need dynamic configuration data, such as credentials or other secrets, in order to complete their function. Optional configuration files can provide this information, and are expected to reside in the analyzer's directory. Configuration files are typically written in YAML syntax for ease of modification.
Configuration files for analyzers included with Security Onion will be pillarized, meaning they derive their custom values from the Saltstack pillar data. For example, an analyzer that requires a user-supplied credential might contain a config file resembling the following, where Jinja templating syntax is used to extract Salt pillar data:
```yaml
username: {{ salt['pillar.get']('sensoroni:analyzers:myanalyzer:username', '') }}
password: {{ salt['pillar.get']('sensoroni:analyzers:myanalyzer:password', '') }}
```
Sensoroni will not provide any inputs to the analyzer during execution, other than the artifact input in JSON format. However, developers will likely need to test the analyzer outside of Sensoroni and without Jinja templating; therefore, an alternate config file should normally be supplied as the configuration argument during testing. Analyzers should allow for this additional command line argument, but by default should automatically read a configuration file stored in the analyzer's directory.
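For example, a hypothetical `myanalyzer` could be exercised against a static test config like so:
```bash
# Run the analyzer by hand with a plain (non-Jinja) test config file;
# the module and config file names are placeholders
python -m myanalyzer -c ./myanalyzer_test.yaml '{"artifactType":"ip","value":"192.0.2.1"}'
```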
#### Exit Code
If an analyzer determines it cannot or should not operate on the input then the analyzer should return an exit code of `126`.
If an analyzer does attempt to operate against the input, then the exit code should be `0`, regardless of the outcome. The outcome, be it an error, a confirmed threat detection, or perhaps an unknown result, should be noted in the output of the analyzer.
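This convention can be summarized with the following sketch, in which the module name and analysis logic are hypothetical placeholders:
```python
import json
import sys


def analyze(artifact):
    # Placeholder analysis; a real analyzer would query a service here
    return {"status": "info", "summary": "no_results"}


def main():
    artifact = json.loads(sys.argv[1])
    # Unsupported input types are refused with exit code 126
    if artifact.get("artifactType") not in ["url"]:
        sys.exit(126)
    try:
        results = analyze(artifact)
    except Exception as e:
        # Errors are reported in the output, not via a non-zero exit code
        results = {"status": "caution", "summary": "internal_failure", "error": str(e)}
    print(json.dumps(results))
    sys.exit(0)


if __name__ == "__main__":
    main()
```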
#### Output
The outcome of the analyzer is reflected in the analyzer's output to `stdout`. The output must be JSON formatted, and should contain the following fields.
`summary`: A very short summary of the outcome. This should be under 50 characters; otherwise it will be truncated when displayed in the Analyzer job list.
`status`: Can be one of the following status values, which most appropriately reflects the outcome:
- `ok`: The analyzer has concluded that the provided input is not a known threat.
- `info`: This analyzer provides informative data, but does not attempt to conclude the input is a threat.
- `caution`: The data provided is inconclusive. Analysts should review this information further. This can be used in error scenarios, such as if the analyzer fails to complete, perhaps due to a remote service being offline.
- `threat`: The analyzer has detected that the input is likely related to a threat.
`error`: [Optional] If the analyzer encounters an unrecoverable error, those details, useful for administrators to troubleshoot the problem, should be placed in this field.
Additional fields are allowed, and should contain data that is specific to the analyzer.
Below is an example of a _urlhaus_ analyzer output. Note that the urlhaus raw JSON is added to a custom field called "response".
```json
{
"response": {
"blacklists": {
"spamhaus_dbl": "not listed",
"surbl": "not listed"
},
"date_added": "2022-04-07 12:39:14 UTC",
"host": "abeibaba.com",
"id": "2135795",
"larted": "false",
"last_online": null,
"payloads": null,
"query_status": "ok",
"reporter": "switchcert",
"tags": [
"Flubot"
],
"takedown_time_seconds": null,
"threat": "malware_download",
"url": "https://abeibaba.com/ian/?redacted",
"url_status": "offline",
"urlhaus_reference": "https://urlhaus.abuse.ch/url/2135795/"
},
"status": "threat",
"summary": "malware_download"
}
```
Users in SOC will be able to view the entire JSON output, therefore it is important that sensitive information, such as credentials or other secrets, is excluded from the output.
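One way to honor this, shown here as a hedged sketch, is to strip known-sensitive keys from the raw response before including it in the results; the key names below are illustrative only:
```python
def scrubResponse(raw, sensitive_keys=("api_key", "key", "password", "token")):
    # Return a copy of the raw response without sensitive fields
    return {k: v for k, v in raw.items() if k not in sensitive_keys}
```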
#### Internationalization
Some of the built-in analyzers use snake_case summary values instead of human-friendly words or phrases. These are identifiers that the SOC UI will use to look up a localized translation for the user. The use of these identifiers is not required for custom analyzers. In fact, in order for an identifier to be properly localized, the translations must exist in the SOC product, which is out of scope for this development guide. That said, the following generic translations might be useful for custom analyzers:
| Identifier | English |
| ------------------ | -------------------------- |
| `malicious` | Malicious |
| `suspicious` | Suspicious |
| `harmless` | Harmless |
| `internal_failure` | Analyzer Internal Failure |
| `timeout` | Remote Host Timed Out |
#### Timeout
It is expected that analyzers will finish quickly, but there is a default timeout in place that will abort the analyzer if the timeout is exceeded. By default that timeout is 15 minutes (900000 milliseconds), but can be customized via the `sensoroni:analyze_timeout_ms` salt pillar.
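For example, the timeout could be raised to 30 minutes with a pillar entry like the following (a sketch assuming the standard pillar layout):
```yaml
sensoroni:
  analyze_timeout_ms: 1800000  # 30 minutes, in milliseconds
```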
## Contributing
Review the Security Onion project [contribution guidelines](https://github.com/Security-Onion-Solutions/securityonion/blob/master/CONTRIBUTING.md) if you are considering contributing an analyzer to the Security Onion project.
### Procedure
In order to make a custom analyzer into a permanent Security Onion analyzer, the following steps need to be taken:
1. Fork the [securityonion GitHub repository](https://github.com/Security-Onion-Solutions/securityonion)
2. Copy your custom analyzer directory to the forked project, under the `securityonion/salt/sensoroni/files/analyzers` directory.
3. Ensure the contribution requirements in the following section are met.
4. Submit a [pull request](https://github.com/Security-Onion-Solutions/securityonion/pulls) to merge your GitHub fork back into the `securityonion` _dev_ branch.
### Requirements
The following requirements must be satisfied in order for analyzer pull requests to be accepted into the Security Onion GitHub project:
- Analyzer contributions must not contain licensed dependencies or source code that is incompatible with the [GPLv2 licensing](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html).
- All source code must pass the `flake8` lint check. This ensures source code conforms to the same style guides as the other analyzers. The Security Onion project will automatically run the linter after each push to a `securityonion` repository fork, and again when submitting a pull request. Failed lint checks will result in the submitter being sent an automated email message.
- All source code must include accompanying unit test coverage. The Security Onion project will automatically run the unit tests after each push to a `securityonion` repository fork, and again when submitting a pull request. Failed unit tests, or insufficient unit test coverage, will result in the submitter being sent an automated email message.
- Documentation of the analyzer, its input requirements, conditions for operation, and other relevant information must be clearly written in an accompanying analyzer metadata file. This file is described in more detail earlier in this document.
- Source code must be well-written and be free of security defects that can put users or their data at unnecessary risk.

View File

@@ -0,0 +1,40 @@
#!/bin/bash

COMMAND=$1
SENSORONI_CONTAINER=${SENSORONI_CONTAINER:-so-sensoroni}

function ci() {
  HOME_DIR=$(dirname "$0")
  TARGET_DIR=${1:-.}
  PATH=$PATH:/usr/local/bin
  if ! which pytest &> /dev/null || ! which flake8 &> /dev/null ; then
    echo "Missing dependencies. Consider running the following command:"
    echo "  python -m pip install flake8 pytest pytest-cov"
    exit 1
  fi
  pip install pytest pytest-cov
  flake8 "$TARGET_DIR" "--config=${HOME_DIR}/pytest.ini"
  python3 -m pytest "--cov-config=${HOME_DIR}/pytest.ini" "--cov=$TARGET_DIR" --doctest-modules --cov-report=term --cov-fail-under=100 "$TARGET_DIR"
}

function download() {
  ANALYZERS=$1
  if [[ $ANALYZERS = "all" ]]; then
    ANALYZERS="*/"
  fi
  for ANALYZER in $ANALYZERS; do
    rm -fr $ANALYZER/site-packages
    mkdir -p $ANALYZER/source-packages
    rm -fr $ANALYZER/source-packages/*
    docker exec -it $SENSORONI_CONTAINER pip download -r /opt/sensoroni/analyzers/$ANALYZER/requirements.txt -d /opt/sensoroni/analyzers/$ANALYZER/source-packages
  done
}

if [[ "$COMMAND" == "download" ]]; then
  download "$2"
else
  ci
fi

View File

@@ -0,0 +1,17 @@
# EmailRep
## Description
Submit an email address to EmailRepIO for analysis.
## Configuration Requirements
``api_key`` - API key used for communication with the EmailRepIO API
This value should be set in the ``sensoroni`` pillar, like so:
```
sensoroni:
  analyzers:
    emailrep:
      api_key: $yourapikey
```

View File

@@ -0,0 +1,7 @@
{
"name": "EmailRep",
"version": "0.1",
"author": "Security Onion Solutions",
"description": "This analyzer queries the EmailRep API for email address reputation information",
"supportedTypes" : ["email", "mail"]
}

View File

@@ -0,0 +1,67 @@
import json
import os
import sys
import requests
import helpers
import argparse


def checkConfigRequirements(conf):
    if "api_key" not in conf:
        sys.exit(126)
    else:
        return True


def sendReq(conf, meta, email):
    url = conf['base_url'] + email
    headers = {"Key": conf['api_key']}
    response = requests.request('GET', url=url, headers=headers)
    return response.json()


def prepareResults(raw):
    if "suspicious" in raw:
        if raw['suspicious'] is True:
            status = "caution"
            summary = "suspicious"
        elif raw['suspicious'] is False:
            status = "ok"
            summary = "harmless"
    elif "status" in raw:
        if raw["reason"] == "invalid email":
            status = "caution"
            summary = "invalid_input"
        if "exceeded daily limit" in raw["reason"]:
            status = "caution"
            summary = "excessive_usage"
    else:
        status = "caution"
        summary = "internal_failure"
    results = {'response': raw, 'summary': summary, 'status': status}
    return results


def analyze(conf, input):
    checkConfigRequirements(conf)
    meta = helpers.loadMetadata(__file__)
    data = helpers.parseArtifact(input)
    helpers.checkSupportedType(meta, data["artifactType"])
    response = sendReq(conf, meta, data["value"])
    return prepareResults(response)


def main():
    dir = os.path.dirname(os.path.realpath(__file__))
    parser = argparse.ArgumentParser(description='Search EmailRep for a given artifact')
    parser.add_argument('artifact', help='the artifact represented in JSON format')
    parser.add_argument('-c', '--config', metavar="CONFIG_FILE", default=dir + "/emailrep.yaml", help='optional config file to use instead of the default config file')
    args = parser.parse_args()
    if args.artifact:
        results = analyze(helpers.loadConfig(args.config), args.artifact)
        print(json.dumps(results))


if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,2 @@
base_url: https://emailrep.io/
api_key: "{{ salt['pillar.get']('sensoroni:analyzers:emailrep:api_key', '') }}"

View File

@@ -0,0 +1,85 @@
from io import StringIO
import sys
from unittest.mock import patch, MagicMock
from emailrep import emailrep
import unittest


class TestEmailRepMethods(unittest.TestCase):
    def test_main_missing_input(self):
        with patch('sys.exit', new=MagicMock()) as sysmock:
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                sys.argv = ["cmd"]
                emailrep.main()
                self.assertEqual(mock_stderr.getvalue(), "usage: cmd [-h] [-c CONFIG_FILE] artifact\ncmd: error: the following arguments are required: artifact\n")
                sysmock.assert_called_once_with(2)

    def test_main_success(self):
        output = {"foo": "bar"}
        with patch('sys.stdout', new=StringIO()) as mock_stdout:
            with patch('emailrep.emailrep.analyze', new=MagicMock(return_value=output)) as mock:
                sys.argv = ["cmd", "input"]
                emailrep.main()
                expected = '{"foo": "bar"}\n'
                self.assertEqual(mock_stdout.getvalue(), expected)
                mock.assert_called_once()

    def test_checkConfigRequirements_not_present(self):
        conf = {"not_a_file_path": "blahblah"}
        with self.assertRaises(SystemExit) as cm:
            emailrep.checkConfigRequirements(conf)
        self.assertEqual(cm.exception.code, 126)

    def test_sendReq(self):
        with patch('requests.request', new=MagicMock(return_value=MagicMock())) as mock:
            meta = {}
            conf = {"base_url": "https://myurl/", "api_key": "abcd1234"}
            email = "test@abc.com"
            response = emailrep.sendReq(conf=conf, meta=meta, email=email)
            mock.assert_called_once_with("GET", headers={"Key": "abcd1234"}, url="https://myurl/test@abc.com")
            self.assertIsNotNone(response)

    def test_prepareResults_invalidEmail(self):
        raw = {"status": "fail", "reason": "invalid email"}
        results = emailrep.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "invalid_input")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_not_suspicious(self):
        raw = {"email": "notsus@domain.com", "reputation": "high", "suspicious": False, "references": 21, "details": {"blacklisted": False, "malicious_activity": False, "profiles": ["twitter"]}}
        results = emailrep.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "harmless")
        self.assertEqual(results["status"], "ok")

    def test_prepareResults_suspicious(self):
        raw = {"email": "sus@domain.com", "reputation": "none", "suspicious": True, "references": 0, "details": {"blacklisted": False, "malicious_activity": False, "profiles": []}}
        results = emailrep.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "suspicious")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_exceeded_limit(self):
        raw = {"status": "fail", "reason": "exceeded daily limit. please wait 24 hrs or visit emailrep.io/key for an api key."}
        results = emailrep.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "excessive_usage")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_error(self):
        raw = {}
        results = emailrep.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "internal_failure")
        self.assertEqual(results["status"], "caution")

    def test_analyze(self):
        output = {"email": "sus@domain.com", "reputation": "none", "suspicious": True, "references": 0, "details": {"blacklisted": False, "malicious_activity": False, "profiles": []}}
        artifactInput = '{"value":"sus@domain.com","artifactType":"email"}'
        conf = {"base_url": "myurl/", "api_key": "abcd1234"}
        with patch('emailrep.emailrep.sendReq', new=MagicMock(return_value=output)) as mock:
            results = emailrep.analyze(conf, artifactInput)
            self.assertEqual(results["summary"], "suspicious")
            mock.assert_called_once()

View File

@@ -0,0 +1,2 @@
requests>=2.27.1
pyyaml>=6.0

View File

@@ -0,0 +1,19 @@
# Greynoise
## Description
Submit an IP address to Greynoise for analysis.
## Configuration Requirements
``api_key`` - API key used for communication with the Greynoise API
``api_version`` - Version of Greynoise API. Default is ``community``
These values should be set in the ``sensoroni`` pillar, like so:
```
sensoroni:
  analyzers:
    greynoise:
      api_key: $yourapikey
      api_version: community
```

View File

@@ -0,0 +1,7 @@
{
"name": "Greynoise IP Analyzer",
"version": "0.1",
"author": "Security Onion Solutions",
"description": "This analyzer queries Greynoise for context around an IP address",
"supportedTypes" : ["ip"]
}

View File

@@ -0,0 +1,78 @@
import json
import os
import sys
import requests
import helpers
import argparse


def checkConfigRequirements(conf):
    if "api_key" not in conf or len(conf['api_key']) == 0:
        sys.exit(126)
    else:
        return True


def sendReq(conf, meta, ip):
    url = conf['base_url']
    if conf['api_version'] == 'community':
        url = url + 'v3/community/' + ip
    elif conf['api_version'] in ('investigate', 'automate'):
        url = url + 'v2/noise/context/' + ip
    headers = {"key": conf['api_key']}
    response = requests.request('GET', url=url, headers=headers)
    return response.json()


def prepareResults(raw):
    if "message" in raw:
        if "Success" in raw["message"]:
            if "classification" in raw:
                if "benign" in raw['classification']:
                    status = "ok"
                    summary = "harmless"
                elif "malicious" in raw['classification']:
                    status = "threat"
                    summary = "malicious"
                elif "unknown" in raw['classification']:
                    status = "caution"
                    summary = "suspicious"
        elif "IP not observed scanning the internet or contained in RIOT data set." in raw["message"]:
            status = "ok"
            summary = "no_results"
        elif "Request is not a valid routable IPv4 address" in raw["message"]:
            status = "caution"
            summary = "invalid_input"
        else:
            status = "info"
            summary = raw["message"]
    else:
        status = "caution"
        summary = "internal_failure"
    results = {'response': raw, 'summary': summary, 'status': status}
    return results


def analyze(conf, input):
    checkConfigRequirements(conf)
    meta = helpers.loadMetadata(__file__)
    data = helpers.parseArtifact(input)
    helpers.checkSupportedType(meta, data["artifactType"])
    response = sendReq(conf, meta, data["value"])
    return prepareResults(response)


def main():
    dir = os.path.dirname(os.path.realpath(__file__))
    parser = argparse.ArgumentParser(description='Search Greynoise for a given artifact')
    parser.add_argument('artifact', help='the artifact represented in JSON format')
    parser.add_argument('-c', '--config', metavar="CONFIG_FILE", default=dir + "/greynoise.yaml", help='optional config file to use instead of the default config file')
    args = parser.parse_args()
    if args.artifact:
        results = analyze(helpers.loadConfig(args.config), args.artifact)
        print(json.dumps(results))


if __name__ == "__main__":
    main()

View File

@@ -0,0 +1,3 @@
base_url: https://api.greynoise.io/
api_key: "{{ salt['pillar.get']('sensoroni:analyzers:greynoise:api_key', '') }}"
api_version: "{{ salt['pillar.get']('sensoroni:analyzers:greynoise:api_version', 'community') }}"

View File

@@ -0,0 +1,117 @@
from io import StringIO
import sys
from unittest.mock import patch, MagicMock
from greynoise import greynoise
import unittest


class TestGreynoiseMethods(unittest.TestCase):
    def test_main_missing_input(self):
        with patch('sys.exit', new=MagicMock()) as sysmock:
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                sys.argv = ["cmd"]
                greynoise.main()
                self.assertEqual(mock_stderr.getvalue(), "usage: cmd [-h] [-c CONFIG_FILE] artifact\ncmd: error: the following arguments are required: artifact\n")
                sysmock.assert_called_once_with(2)

    def test_main_success(self):
        output = {"foo": "bar"}
        with patch('sys.stdout', new=StringIO()) as mock_stdout:
            with patch('greynoise.greynoise.analyze', new=MagicMock(return_value=output)) as mock:
                sys.argv = ["cmd", "input"]
                greynoise.main()
                expected = '{"foo": "bar"}\n'
                self.assertEqual(mock_stdout.getvalue(), expected)
                mock.assert_called_once()

    def test_checkConfigRequirements_not_present(self):
        conf = {"not_a_file_path": "blahblah"}
        with self.assertRaises(SystemExit) as cm:
            greynoise.checkConfigRequirements(conf)
        self.assertEqual(cm.exception.code, 126)

    def test_sendReq_community(self):
        with patch('requests.request', new=MagicMock(return_value=MagicMock())) as mock:
            meta = {}
            conf = {"base_url": "https://myurl/", "api_key": "abcd1234", "api_version": "community"}
            ip = "192.168.1.1"
            response = greynoise.sendReq(conf=conf, meta=meta, ip=ip)
            mock.assert_called_once_with("GET", headers={'key': 'abcd1234'}, url="https://myurl/v3/community/192.168.1.1")
            self.assertIsNotNone(response)

    def test_sendReq_investigate(self):
        with patch('requests.request', new=MagicMock(return_value=MagicMock())) as mock:
            meta = {}
            conf = {"base_url": "https://myurl/", "api_key": "abcd1234", "api_version": "investigate"}
            ip = "192.168.1.1"
            response = greynoise.sendReq(conf=conf, meta=meta, ip=ip)
            mock.assert_called_once_with("GET", headers={'key': 'abcd1234'}, url="https://myurl/v2/noise/context/192.168.1.1")
            self.assertIsNotNone(response)

    def test_sendReq_automate(self):
        with patch('requests.request', new=MagicMock(return_value=MagicMock())) as mock:
            meta = {}
            conf = {"base_url": "https://myurl/", "api_key": "abcd1234", "api_version": "automate"}
            ip = "192.168.1.1"
            response = greynoise.sendReq(conf=conf, meta=meta, ip=ip)
            mock.assert_called_once_with("GET", headers={'key': 'abcd1234'}, url="https://myurl/v2/noise/context/192.168.1.1")
            self.assertIsNotNone(response)

    def test_prepareResults_invalidIP(self):
        raw = {"message": "Request is not a valid routable IPv4 address"}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "invalid_input")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_not_found(self):
        raw = {"ip": "192.190.1.1", "noise": "false", "riot": "false", "message": "IP not observed scanning the internet or contained in RIOT data set."}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "no_results")
        self.assertEqual(results["status"], "ok")

    def test_prepareResults_benign(self):
        raw = {"ip": "8.8.8.8", "noise": "false", "riot": "true", "classification": "benign", "name": "Google Public DNS", "link": "https://viz.gn.io", "last_seen": "2022-04-26", "message": "Success"}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "harmless")
        self.assertEqual(results["status"], "ok")

    def test_prepareResults_malicious(self):
        raw = {"ip": "121.142.87.218", "noise": "true", "riot": "false", "classification": "malicious", "name": "unknown", "link": "https://viz.gn.io", "last_seen": "2022-04-26", "message": "Success"}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "malicious")
        self.assertEqual(results["status"], "threat")

    def test_prepareResults_unknown(self):
        raw = {"ip": "221.4.62.149", "noise": "true", "riot": "false", "classification": "unknown", "name": "unknown", "link": "https://viz.gn.io", "last_seen": "2022-04-26", "message": "Success"}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "suspicious")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_unknown_message(self):
        raw = {"message": "unknown"}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "unknown")
        self.assertEqual(results["status"], "info")

    def test_prepareResults_error(self):
        raw = {}
        results = greynoise.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "internal_failure")
        self.assertEqual(results["status"], "caution")

    def test_analyze(self):
        output = {"ip": "221.4.62.149", "noise": "true", "riot": "false", "classification": "unknown", "name": "unknown", "link": "https://viz.gn.io", "last_seen": "2022-04-26", "message": "Success"}
        artifactInput = '{"value":"221.4.62.149","artifactType":"ip"}'
        conf = {"base_url": "myurl/", "api_key": "abcd1234", "api_version": "community"}
        with patch('greynoise.greynoise.sendReq', new=MagicMock(return_value=output)) as mock:
            results = greynoise.analyze(conf, artifactInput)
            self.assertEqual(results["summary"], "suspicious")
            mock.assert_called_once()

View File

@@ -0,0 +1,2 @@
requests>=2.27.1
pyyaml>=6.0

View File

@@ -0,0 +1,28 @@
import json
import os
import sys


def checkSupportedType(meta, artifact_type):
    if artifact_type not in meta['supportedTypes']:
        sys.exit(126)
    else:
        return True


def parseArtifact(artifact):
    data = json.loads(artifact)
    return data


def loadMetadata(file):
    dir = os.path.dirname(os.path.realpath(file))
    filename = os.path.realpath(file).rsplit('/', 1)[1].split('.')[0]
    with open(str(dir + "/" + filename + ".json"), "r") as metafile:
        return json.load(metafile)


def loadConfig(path):
    import yaml
    with open(str(path), "r") as conffile:
        return yaml.safe_load(conffile)

View File

@@ -0,0 +1,35 @@
from unittest.mock import patch, MagicMock
import helpers
import os
import unittest


class TestHelpersMethods(unittest.TestCase):
    def test_checkSupportedType(self):
        with patch('sys.exit', new=MagicMock()) as mock:
            meta = {"supportedTypes": ["ip", "foo"]}
            result = helpers.checkSupportedType(meta, "ip")
            self.assertTrue(result)
            mock.assert_not_called()
            result = helpers.checkSupportedType(meta, "bar")
            self.assertFalse(result)
            mock.assert_called_once_with(126)

    def test_loadMetadata(self):
        dir = os.path.dirname(os.path.realpath(__file__))
        input = dir + '/urlhaus/urlhaus.py'
        data = helpers.loadMetadata(input)
        self.assertEqual(data["name"], "Urlhaus")

    def test_loadConfig(self):
        dir = os.path.dirname(os.path.realpath(__file__))
        data = helpers.loadConfig(dir + "/virustotal/virustotal.yaml")
        self.assertEqual(data["base_url"], "https://www.virustotal.com/api/v3/search?query=")

    def test_parseArtifact(self):
        input = '{"value":"foo","artifactType":"bar"}'
        data = helpers.parseArtifact(input)
        self.assertEqual(data["artifactType"], "bar")
        self.assertEqual(data["value"], "foo")

View File

@@ -0,0 +1,7 @@
{
"name": "JA3er Hash Search",
"version": "0.1",
"author": "Security Onion Solutions",
"description": "This analyzer queries JA3er user agents and sightings",
"supportedTypes" : ["ja3"]
}

View File

@@ -0,0 +1,53 @@
import json
import os
import requests
import helpers
import argparse


def sendReq(conf, meta, hash):
    url = conf['base_url'] + hash
    response = requests.request('GET', url)
    return response.json()


def prepareResults(raw):
    if "error" in raw:
        if "Sorry" in raw["error"]:
            status = "ok"
            summary = "no_results"
        elif "Invalid hash" in raw["error"]:
            status = "caution"
            summary = "invalid_input"
        else:
            status = "caution"
            summary = "internal_failure"
    else:
        status = "info"
        summary = "suspicious"
    results = {'response': raw, 'summary': summary, 'status': status}
    return results


def analyze(conf, input):
    meta = helpers.loadMetadata(__file__)
    data = helpers.parseArtifact(input)
    helpers.checkSupportedType(meta, data["artifactType"])
    response = sendReq(conf, meta, data["value"])
    return prepareResults(response)


def main():
    dir = os.path.dirname(os.path.realpath(__file__))
    parser = argparse.ArgumentParser(description='Search JA3er for a given artifact')
    parser.add_argument('artifact', help='the artifact represented in JSON format')
    parser.add_argument('-c', '--config', metavar="CONFIG_FILE", default=dir + "/ja3er.yaml", help='optional config file to use instead of the default config file')
    args = parser.parse_args()
    if args.artifact:
        results = analyze(helpers.loadConfig(args.config), args.artifact)
        print(json.dumps(results))


if __name__ == "__main__":
    main()

View File

@@ -0,0 +1 @@
base_url: https://ja3er.com/search/

View File

@@ -0,0 +1,72 @@
from io import StringIO
import sys
from unittest.mock import patch, MagicMock
from ja3er import ja3er
import unittest


class TestJa3erMethods(unittest.TestCase):
    def test_main_missing_input(self):
        with patch('sys.exit', new=MagicMock()) as sysmock:
            with patch('sys.stderr', new=StringIO()) as mock_stderr:
                sys.argv = ["cmd"]
                ja3er.main()
                self.assertEqual(mock_stderr.getvalue(), "usage: cmd [-h] [-c CONFIG_FILE] artifact\ncmd: error: the following arguments are required: artifact\n")
                sysmock.assert_called_once_with(2)

    def test_main_success(self):
        output = {"foo": "bar"}
        with patch('sys.stdout', new=StringIO()) as mock_stdout:
            with patch('ja3er.ja3er.analyze', new=MagicMock(return_value=output)) as mock:
                sys.argv = ["cmd", "input"]
                ja3er.main()
                expected = '{"foo": "bar"}\n'
                self.assertEqual(mock_stdout.getvalue(), expected)
                mock.assert_called_once()

    def test_sendReq(self):
        with patch('requests.request', new=MagicMock(return_value=MagicMock())) as mock:
            meta = {}
            conf = {"base_url": "myurl/"}
            hash = "abcd1234"
            response = ja3er.sendReq(conf=conf, meta=meta, hash=hash)
            mock.assert_called_once_with("GET", "myurl/abcd1234")
            self.assertIsNotNone(response)

    def test_prepareResults_none(self):
        raw = {"error": "Sorry no values found"}
        results = ja3er.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "no_results")
        self.assertEqual(results["status"], "ok")

    def test_prepareResults_invalidHash(self):
        raw = {"error": "Invalid hash"}
        results = ja3er.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "invalid_input")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_internal_failure(self):
        raw = {"error": "unknown"}
        results = ja3er.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "internal_failure")
        self.assertEqual(results["status"], "caution")

    def test_prepareResults_info(self):
        raw = [{"User-Agent": "Blah/5.0", "Count": 24874, "Last_seen": "2022-04-08 16:18:38"}, {"Comment": "Brave browser v1.36.122\n\n", "Reported": "2022-03-28 20:26:42"}]
        results = ja3er.prepareResults(raw)
        self.assertEqual(results["response"], raw)
        self.assertEqual(results["summary"], "suspicious")
        self.assertEqual(results["status"], "info")

    def test_analyze(self):
        output = {"info": "Results found."}
        artifactInput = '{"value":"abcd1234","artifactType":"ja3"}'
        conf = {"base_url": "myurl/"}
        with patch('ja3er.ja3er.sendReq', new=MagicMock(return_value=output)) as mock:
            results = ja3er.analyze(conf, artifactInput)
            self.assertEqual(results["summary"], "suspicious")
            mock.assert_called_once()

View File

@@ -0,0 +1,2 @@
requests>=2.27.1
pyyaml>=6.0

Some files were not shown because too many files have changed in this diff.