Compare commits


269 Commits

Author SHA1 Message Date
Mike Reeves
afed0b70eb Merge pull request #3572 from Security-Onion-Solutions/dev
2.3.40
2021-03-22 14:43:34 -04:00
Jason Ertel
e9bd3888c4 Merge pull request #3571 from Security-Onion-Solutions/2340sigrtd
Verify ISO and update gpg
2021-03-22 10:03:42 -04:00
Mike Reeves
ea5624b4bf Update date 2021-03-22 10:02:04 -04:00
Mike Reeves
11cb843fb4 Verify ISO and update gpg 2021-03-22 09:59:48 -04:00
Mike Reeves
57664a3c8a Merge pull request #3570 from Security-Onion-Solutions/Update-Readme
Update README.md
2021-03-22 09:14:34 -04:00
Mike Reeves
71d4d7ee8f Update README.md 2021-03-22 09:03:47 -04:00
Mike Reeves
25c9e70658 Merge pull request #3564 from Security-Onion-Solutions/fix/dash
Fix Dashboard Placeholder
2021-03-20 16:10:07 -04:00
Mike Reeves
e06e023d8e Fix Dashboard Placeholder 2021-03-20 14:05:55 -04:00
Mike Reeves
4fe14dbfd8 Merge pull request #3558 from Security-Onion-Solutions/fix/https-playbook-alerter
Fix https Playbook Alerter
2021-03-19 16:39:35 -04:00
Josh Brower
2425355680 Fix https Playbook Alerter 2021-03-19 16:38:33 -04:00
Josh Patterson
30b948f6b8 Merge pull request #3557 from Security-Onion-Solutions/suri-eve-file-mode
prevent salt warning about file mode
2021-03-19 16:24:26 -04:00
m0duspwnens
e87fb013dc prevent salt warning - The 'file_mode' argument will be ignored. Please use 'mode' instead to set file permissions. 2021-03-19 16:21:18 -04:00
Mike Reeves
908a9c2c06 Merge pull request #3550 from Security-Onion-Solutions/issue/3493
fix docker-ce holds
2021-03-19 15:18:45 -04:00
m0duspwnens
d0f938a600 fix docker-ce holds 2021-03-19 15:16:58 -04:00
Mike Reeves
ee2a6f8be9 Merge pull request #3549 from Security-Onion-Solutions/saved_objects
Update saved objects and remove index patterns because this is now handled by Field Caps API
2021-03-19 14:32:55 -04:00
Wes Lambert
b481cf885b Update saved objects and remove index patterns because this is now handled by Field Caps API 2021-03-19 18:30:42 +00:00
Mike Reeves
890c0da81a Merge pull request #3546 from Security-Onion-Solutions/kilo
Update release notes for 2.3.40
2021-03-19 11:25:15 -04:00
Jason Ertel
e69f6270f9 Merge branch 'dev' into kilo 2021-03-19 11:15:47 -04:00
Jason Ertel
83a3488a06 Update changes.json to reflect 2.3.40 changes 2021-03-19 11:15:27 -04:00
Mike Reeves
de61886441 Merge pull request #3544 from Security-Onion-Solutions/feature/setup-kibana-space
Configure default Space in Kibana during setup
2021-03-19 09:02:18 -04:00
Josh Brower
9d533e5db0 Merge pull request #3542 from Security-Onion-Solutions/fix/fleet-custom-hostname
Fix Fleet Custom Hostname Reactor
2021-03-19 08:21:30 -04:00
Josh Brower
d020f1d1a1 Fix Fleet Custom Hostname Reactor 2021-03-19 08:15:47 -04:00
William Wernert
b595c6ddf7 Configure default Space in Kibana during setup 2021-03-18 16:00:13 -04:00
Mike Reeves
28999af493 Merge pull request #3539 from Security-Onion-Solutions/fix/postsoup
Fix/postsoup
2021-03-18 15:46:36 -04:00
Josh Brower
77b8aecfd9 add so-kibana-space-defaults 2021-03-18 15:40:12 -04:00
Mike Reeves
2e84af621e Add postloop for 2.3.40 2021-03-18 15:14:10 -04:00
William Wernert
6b2947ca6a Merge pull request #3535 from Security-Onion-Solutions/fix/cloud-var
Set is_cloud variable in the main shell process
2021-03-18 14:00:58 -04:00
Mike Reeves
2bd3a6418d Merge pull request #3536 from Security-Onion-Solutions/kilo
Refresh fieldcaps every 5 minutes
2021-03-18 13:57:24 -04:00
Jason Ertel
cc30abfe1b Refresh fieldcaps every 5 minutes 2021-03-18 13:48:57 -04:00
William Wernert
0edf419bcb Remove redundant message 2021-03-18 13:16:45 -04:00
William Wernert
360f0d4dfd Also print stdout message to log 2021-03-18 13:12:16 -04:00
William Wernert
27ff823bc0 [fix] Don't set is_cloud in a subshell 2021-03-18 13:09:46 -04:00
Mike Reeves
1f85506fb1 Merge pull request #3532 from Security-Onion-Solutions/fix/packaging
Also add python packaging lib package to common state
2021-03-18 11:30:56 -04:00
William Wernert
cb0fb93f77 Also add python packaging lib package to common state 2021-03-18 11:28:25 -04:00
William Wernert
fcf0417fbf Merge pull request #3528 from Security-Onion-Solutions/fix/default-no-proxy
Change proxy prompt to default to no
2021-03-18 09:57:03 -04:00
William Wernert
c910a2d2a0 Change proxy prompt to default to no 2021-03-18 09:52:11 -04:00
William Wernert
066a8598a6 Merge pull request #3523 from Security-Onion-Solutions/issue/3493
fix docker versions in setup
2021-03-18 09:31:35 -04:00
William Wernert
b5770964c4 Merge pull request #3522 from Security-Onion-Solutions/fix/install-network-manager
[fix] CentOS ami does not include NetworkManager, so install it
2021-03-18 09:10:41 -04:00
William Wernert
31725ac627 [fix] Indent 2021-03-18 09:09:29 -04:00
m0duspwnens
dbe54708ef fix docker versions in setup https://github.com/Security-Onion-Solutions/securityonion/issues/3493 2021-03-18 09:09:28 -04:00
William Wernert
163cb8f3ca [fix] Typo 2021-03-18 09:08:31 -04:00
William Wernert
4f104c860e [fix] CentOS ami does not include NetworkManager, so install it 2021-03-18 09:00:02 -04:00
Mike Reeves
db605adaf6 Merge pull request #3517 from Security-Onion-Solutions/fix/restarting-docker-message 2021-03-17 21:15:37 -04:00
Mike Reeves
308f10fbdd Merge pull request #3510 from Security-Onion-Solutions/kilo 2021-03-17 21:14:45 -04:00
William Wernert
6e3d951b01 [fix] Show message in terminal when restarting Docker to avoid confusion 2021-03-17 20:17:23 -04:00
Mike Reeves
9a2b5fa301 Merge pull request #3516 from Security-Onion-Solutions/add_suricata_eve_clean
https://github.com/Security-Onion-Solutions/securityonion/issues/3515
2021-03-17 18:50:23 -04:00
m0duspwnens
ec179f8e9b https://github.com/Security-Onion-Solutions/securityonion/issues/3515 2021-03-17 18:44:25 -04:00
Jason Ertel
bc002cb9fb Merge branch 'dev' into kilo 2021-03-17 18:29:52 -04:00
Jason Ertel
4e9f629231 Reformat inactiveTools list in JSON format 2021-03-17 18:25:05 -04:00
Mike Reeves
75f9138a40 Merge pull request #3514 from Security-Onion-Solutions/fix/accept-hostname-proxy
[fix] Also accept a hostname in the proxy URL
2021-03-17 17:51:59 -04:00
William Wernert
96ac742b69 [fix] Also accept a hostname in the proxy URL 2021-03-17 17:31:47 -04:00
Jason Ertel
42809083e8 Merge branch 'dev' into kilo 2021-03-17 17:14:29 -04:00
Mike Reeves
a3b7388aba Merge pull request #3511 from Security-Onion-Solutions/fix/elastic-license-agree
Make the Elastic license prompt case insensitive
2021-03-17 16:57:32 -04:00
William Wernert
7da027abc1 Make the Elastic license prompt case insensitive 2021-03-17 16:55:34 -04:00
Jason Ertel
4de809ecbd Automatically hide SOC tools that are not installed. Resolves #1643. 2021-03-17 16:13:50 -04:00
Josh Brower
8fd3f102f1 Merge pull request #3509 from Security-Onion-Solutions/fix/kibana-space-defaults
Add space defaults script
2021-03-17 15:55:11 -04:00
Josh Brower
7583593152 Add space defaults script 2021-03-17 15:47:36 -04:00
Jason Ertel
dc0d989942 Merge pull request #3504 from Security-Onion-Solutions/issue/3493
UPGRADE: docker-ce, docker-ce-cli, containerd to latest
2021-03-17 13:51:31 -04:00
William Wernert
46d346aa62 Merge pull request #3503 from Security-Onion-Solutions/foxtrot
Foxtrot
2021-03-17 12:07:40 -04:00
William Wernert
16d6e116fa Merge branch 'dev' into foxtrot
# Conflicts:
#	salt/idstools/init.sls
2021-03-17 11:52:54 -04:00
Mike Reeves
52b836d456 Merge pull request #3498 from Security-Onion-Solutions/fix/so-rule-apply
Fix so-rule apply - manually tested
2021-03-17 11:28:16 -04:00
William Wernert
8aac9d6bea Reorder states in sync_files.sls 2021-03-17 10:46:17 -04:00
William Wernert
99a37a56a9 [fix] Change the commands so-rule uses to apply changes 2021-03-17 10:36:43 -04:00
m0duspwnens
f63cc10602 https://github.com/Security-Onion-Solutions/securityonion/issues/3493 2021-03-17 10:26:52 -04:00
William Wernert
c0163108ab Merge branch 'dev' into foxtrot
# Conflicts:
#	salt/common/tools/sbin/soup
2021-03-17 10:23:51 -04:00
m0duspwnens
aa14dda155 https://github.com/Security-Onion-Solutions/securityonion/issues/3493 2021-03-17 10:20:20 -04:00
Mike Reeves
fbdb627ab7 Merge pull request #3488 from Security-Onion-Solutions/issue/3288
insert instead of append
2021-03-17 09:17:20 -04:00
m0duspwnens
68ce7a902d insert instead of append 2021-03-17 09:14:19 -04:00
Doug Burks
2ba130b44c Merge pull request #3487 from Security-Onion-Solutions/issue/3486
FEATURE: soup should provide some initial information and then prompt…
2021-03-17 09:02:29 -04:00
Doug Burks
d32c1de411 FEATURE: soup should provide some initial information and then prompt the user to continue #3486 2021-03-17 09:00:46 -04:00
Josh Brower
d21abd9693 Merge pull request #3482 from Security-Onion-Solutions/feature/revert-livequery-hunt
Temp revert Fleet Live Query to Hunt
2021-03-17 08:29:28 -04:00
Josh Brower
bba9913be1 Temp revert Fleet Live Query to Hunt 2021-03-17 08:25:25 -04:00
Jason Ertel
1b6f681ae1 Merge pull request #3477 from Security-Onion-Solutions/esheap
Esheap
2021-03-17 08:14:13 -04:00
Mike Reeves
137e1a699d Fix the math 2021-03-16 19:01:10 -04:00
Mike Reeves
2f3488b134 Merge pull request #3476 from Security-Onion-Solutions/issue/3288
Issue/3288
2021-03-16 18:56:07 -04:00
Mike Reeves
7719a26a96 Change ES Heap calculation 2021-03-16 18:53:41 -04:00
m0duspwnens
53c3b19a08 Merge remote-tracking branch 'remotes/origin/dev' into issue/3288 2021-03-16 16:46:32 -04:00
Doug Burks
065f1c2927 Merge pull request #3473 from Security-Onion-Solutions/fix/shorten-elastic-license-url
Shorten Elastic License URL to avoid line wrap
2021-03-16 16:43:38 -04:00
Doug Burks
388524ec4e Shorten Elastic License URL to avoid line wrap 2021-03-16 16:39:14 -04:00
m0duspwnens
38a497932c https://github.com/Security-Onion-Solutions/securityonion/issues/3288 2021-03-16 16:36:35 -04:00
weslambert
8d29f757b1 Merge pull request #3471 from Security-Onion-Solutions/kilo
Reverse Zeek index close/delete count for Curator
2021-03-16 14:34:46 -04:00
Josh Brower
b56434aea1 Merge pull request #3470 from Security-Onion-Solutions/feature/disable-features-ui
Feature/disable certain features in Kibana UI
2021-03-16 14:00:21 -04:00
Josh Brower
abd4f92088 Cleanup curl output 2021-03-16 13:53:28 -04:00
Josh Brower
c855e0a55a Disable certain Features within the default space 2021-03-16 13:48:13 -04:00
Wes Lambert
7a02150389 Reverse Zeek index close/delete count for Curator 2021-03-16 17:16:55 +00:00
weslambert
5fd483a99d Merge pull request #3466 from Security-Onion-Solutions/soup2340
Soup for 2.3.40
2021-03-16 13:03:33 -04:00
Mike Reeves
d92c1c11aa Merge pull request #3463 from Security-Onion-Solutions/kilo
Ignore TIME_WAIT when checking for Strelka frontend port reservation
2021-03-16 12:59:16 -04:00
Mike Reeves
71c6bb71c1 Merge remote-tracking branch 'remotes/origin/dev' into soup2340 2021-03-16 12:56:24 -04:00
Mike Reeves
e528d84ebe Update Elastic License Text 2021-03-16 12:56:06 -04:00
William Wernert
129db23062 Move interface message to later in setup 2021-03-16 12:34:44 -04:00
William Wernert
1e7aaf9ffb Collect manager info before showing message about copying ssh key 2021-03-16 12:32:37 -04:00
Mike Reeves
2851840e76 Fix Logging 2021-03-16 12:18:01 -04:00
Josh Brower
7b748128ea Merge pull request #3462 from Security-Onion-Solutions/delta
Fixes IP & Port mappings
2021-03-16 12:05:23 -04:00
Josh Brower
4d6cac4a2a Merge remote-tracking branch 'remotes/origin/dev' into delta 2021-03-16 11:57:17 -04:00
William Wernert
c8bbe078a6 Use more lines on proxy error message 2021-03-16 11:42:15 -04:00
William Wernert
6a48d7f478 Print curl error to populate variable 2021-03-16 11:34:36 -04:00
Wes Lambert
038c58f3d5 Ignore TIME_WAIT when checking for Strelka frontend port reservation 2021-03-16 14:51:16 +00:00
William Wernert
59c62393b5 Change back to validating proxy, show user error message from curl 2021-03-16 10:18:02 -04:00
Mike Reeves
00025e5c74 Fix Syntax Error 2021-03-16 09:34:53 -04:00
Josh Brower
71ae5b60ea Update Sigmac mappings and config for IPs and ports 2021-03-16 09:32:40 -04:00
Josh Brower
44c75122ed Update Sigmac mappings and config for IPs and ports 2021-03-16 09:05:35 -04:00
Mike Reeves
8d23518f90 Update Elastic Link 2021-03-15 17:50:06 -04:00
Mike Reeves
9a4c4448f3 Fix whiptail display 2021-03-15 17:45:44 -04:00
Mike Reeves
12501e0079 Add check license to its own logic 2021-03-15 17:41:45 -04:00
Mike Reeves
72759de97f Fix so-common syntax 2021-03-15 17:37:44 -04:00
Mike Reeves
67e0d450e4 Add Elastic License Prompts 2021-03-15 17:32:36 -04:00
Mike Reeves
05ec7dba21 Merge pull request #3452 from Security-Onion-Solutions/Telegraf-Fix
Turn off SSL Verification in Telegraf
2021-03-15 16:47:27 -04:00
Mike Reeves
674bb342ea Turn off SSL Verification in Telegraf 2021-03-15 16:39:43 -04:00
Josh Brower
5fe025318b Update Sigmac mappings and config for IPs and ports 2021-03-15 15:53:00 -04:00
William Wernert
086f2b3437 Change when prereq packages are installed to follow new order 2021-03-15 14:59:24 -04:00
Mike Reeves
c93aab7a85 Merge pull request #3448 from Security-Onion-Solutions/kilo
Allow for moving Strelka files to processed directory after scanning
2021-03-15 14:51:04 -04:00
William Wernert
efc0463201 Change when proxy + variables are set so strings are built correctly 2021-03-15 14:45:23 -04:00
William Wernert
55aee69a74 Merge branch 'dev' into foxtrot 2021-03-15 12:34:24 -04:00
William Wernert
6ae3a26cbe Revert all proxy changes on reinstall 2021-03-15 12:34:13 -04:00
Wes Lambert
f142b754dc Add Strelka files.processed directory so files will be moved from staging to processed 2021-03-15 15:43:31 +00:00
Wes Lambert
b6a785395d Add Strelka staging directory for state 2021-03-15 15:42:13 +00:00
Mike Reeves
ab75d0e563 soup for 2.3.40 2021-03-15 10:51:31 -04:00
Mike Reeves
79c7af9a31 soup for 2.3.40 2021-03-15 10:48:24 -04:00
Mike Reeves
d931e57fd8 Merge pull request #3428 from Security-Onion-Solutions/kilo 2021-03-12 17:03:48 -05:00
Doug Burks
cfdf9703ab Merge pull request #3427 from Security-Onion-Solutions/issue/3340
FEATURE: soup should output more guidance for distributed deployments at the end #3340
2021-03-12 15:27:26 -05:00
Doug Burks
da7adab566 FEATURE: soup should output more guidance for distributed deployments at the end #3340 2021-03-12 12:59:17 -05:00
William Wernert
f80dfda60b Only run initial installer progress to 98 to avoid sitting at 100 2021-03-12 11:39:44 -05:00
William Wernert
302d6e03be Merge branch 'dev' into foxtrot 2021-03-12 11:36:26 -05:00
Mike Reeves
4ac408ad38 Merge pull request #3423 from Security-Onion-Solutions/issue/3422
FIX: Improve Setup verbiage #3422
2021-03-12 11:04:25 -05:00
doug
edb88ac09a FIX: Improve Setup verbiage #3422 2021-03-12 10:54:44 -05:00
Jason Ertel
747f387936 Replace salt's http.wait_for_successful_query with so-common's wait_for_web_response due to issues with salt 2021-03-12 10:42:18 -05:00
Jason Ertel
8cddfeb47d Provide pillar for each client param 2021-03-12 07:42:10 -05:00
Doug Burks
555f9b5091 Merge pull request #3417 from Security-Onion-Solutions/issue/3413
FIX: SMTP should read SNMP on Kibana SNMP view #3413
2021-03-12 06:52:21 -05:00
doug
a5779a520c FIX: SMTP should read SNMP on Kibana SNMP view #3413 2021-03-12 06:48:57 -05:00
Jason Ertel
a7ea0808c3 Merge pull request #3399 from Security-Onion-Solutions/kilo
feature: Show job owner/submitter. Resolves #2775
2021-03-12 06:45:34 -05:00
Jason Ertel
462f76e2bb Remove client params block in favor in individual settings that will go into the pillar 2021-03-12 06:38:53 -05:00
Jason Ertel
b5cf9ae820 Merge branch 'dev' into kilo 2021-03-11 18:01:17 -05:00
Jason Ertel
80987dfd1d Support overrides of client params 2021-03-11 18:01:04 -05:00
William Wernert
6842204981 Ask for hostname earlier in setup 2021-03-11 16:55:06 -05:00
Doug Burks
ab1c84afca Merge pull request #3409 from Security-Onion-Solutions/issue/3408
FIX: Populate http.status_message field #3408
2021-03-11 16:45:53 -05:00
doug
adbc7436b6 FIX: Populate http.status_message field #3408 2021-03-11 16:42:20 -05:00
William Wernert
6d431c0bda Add more info to comment 2021-03-11 16:36:56 -05:00
William Wernert
b14b9e8e17 [fix] Fix dependency install progress bar 2021-03-11 16:34:54 -05:00
William Wernert
b35e65190e [fix] Fix dependency install progress bar 2021-03-11 16:30:14 -05:00
William Wernert
8e8bb1489b Redirect output of kill command 2021-03-11 16:13:52 -05:00
William Wernert
e2fc1b0b39 Redirect output of kill command 2021-03-11 16:06:49 -05:00
William Wernert
3306ffa792 Only collect proxy once, include manager in no_proxy value on minions 2021-03-11 16:03:43 -05:00
William Wernert
a86b2ab653 [fix] Remove additional collect_proxy call 2021-03-11 15:54:46 -05:00
William Wernert
5612fc10d4 [feat] Remove setup dependency on bc 2021-03-11 15:53:04 -05:00
Jason Ertel
286351f424 Merge branch 'dev' into kilo 2021-03-11 15:32:38 -05:00
Jason Ertel
908720592a Upgrade saved objects to 7.11.2 2021-03-11 15:32:22 -05:00
William Wernert
66da3e380f [fix] Set percentage value when needed 2021-03-11 15:25:38 -05:00
William Wernert
e60bc87ffa Install setup required packages later so that also uses the proxy 2021-03-11 15:20:39 -05:00
William Wernert
0d01f63e3b [fix] Confirm proxy password 2021-03-11 11:46:46 -05:00
Jason Ertel
79dd0d1809 Fix indentation 2021-03-11 11:13:14 -05:00
Mike Reeves
cdd95986a8 Merge pull request #3398 from Security-Onion-Solutions/issue/3397
FIX: Improve Suricata DHCP logging and parsing #3397
2021-03-11 11:07:53 -05:00
doug
b4ad7e7359 FIX: Improve Suricata DHCP logging and parsing #3397 2021-03-11 11:01:51 -05:00
William Wernert
0434ffac38 Merge branch 'dev' into foxtrot 2021-03-11 10:52:36 -05:00
William Wernert
506162bfcc Use auth for automated proxy test 2021-03-11 10:52:17 -05:00
Doug Burks
adb25d63d2 Merge pull request #3396 from Security-Onion-Solutions/issue/3295
FIX: Improve DHCP leases query in Hunt #3395
2021-03-11 08:22:48 -05:00
Doug Burks
85aaa71006 FIX: Improve DHCP leases query in Hunt #3395 2021-03-11 08:01:27 -05:00
William Wernert
750de6333d [fix] Remove last bad usage of cortexkey 2021-03-10 16:24:21 -05:00
William Wernert
9ffbb9d37e [fix] Use update so-cortex-user-enable with correct pillar
Fixes #3388
2021-03-10 16:17:10 -05:00
William Wernert
157badf448 [fix] Use correct pillar value for api key
Fixes #3388
2021-03-10 16:12:59 -05:00
Jason Ertel
eefa6bb949 feature: Show job owner/submitter. Resolves #2775 2021-03-10 14:44:21 -05:00
William Wernert
19ccd0c9a2 Merge branch 'dev' into foxtrot 2021-03-10 09:33:42 -05:00
Mike Reeves
6bbcc7a5e9 Merge pull request #3382 from Security-Onion-Solutions/kilo
Ensure MTU is defined for advanced sensor automation
2021-03-10 09:27:20 -05:00
Jason Ertel
3eb4a37c76 Expose zeek and suri pins for automation 2021-03-10 09:26:46 -05:00
Jason Ertel
180bba782e Expose zeek and suri pins for automation 2021-03-10 09:26:11 -05:00
Jason Ertel
b1531cc75e Merge pull request #3384 from Security-Onion-Solutions/Eval/Import-Fix
Update cert location for eval.import
2021-03-10 09:15:53 -05:00
Mike Reeves
18203513ab Update cert location for eval.import 2021-03-10 09:14:14 -05:00
Jason Ertel
46af6a5c84 Ensure MTU is defined for advanced sensor automation 2021-03-10 08:14:25 -05:00
Mike Reeves
2e74cb6abf Merge pull request #3377 from Security-Onion-Solutions/kilo 2021-03-09 21:40:43 -05:00
Jason Ertel
a496b03de7 Add missing MTU var for automation of advanced sensor 2021-03-09 20:52:34 -05:00
William Wernert
60f40163aa Merge branch 'dev' into foxtrot 2021-03-09 13:51:13 -05:00
Jason Ertel
46288802d1 Merge pull request #3368 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update 9101_output_osquery_livequery.conf.jinja
2021-03-09 13:16:17 -05:00
Mike Reeves
2e01330e1b Update 9101_output_osquery_livequery.conf.jinja 2021-03-09 13:15:04 -05:00
William Wernert
f0e089b6bf Merge branch 'dev' into foxtrot 2021-03-09 10:11:04 -05:00
Mike Reeves
734d25b1ac Merge pull request #3361 from Security-Onion-Solutions/nomorefeatures
Make saved objects less hacky
2021-03-09 10:05:23 -05:00
Mike Reeves
49258a13a3 Make saved objects less hacky 2021-03-09 10:03:29 -05:00
Josh Brower
00da549430 Merge pull request #3358 from Security-Onion-Solutions/delta
FEATURE: Initial support for viewing Osquery Live Query results in Hunt
2021-03-09 09:18:57 -05:00
Jason Ertel
b1777ff10f Merge pull request #3357 from Security-Onion-Solutions/nomorefeatures
SSL with Elastic Security
2021-03-08 21:22:30 -05:00
Mike Reeves
3967e581cf Merge pull request #3356 from Security-Onion-Solutions/kilo
fix: Sensors can temporarily show offline while processing large PCAP…
2021-03-08 19:14:54 -05:00
William Wernert
ba71b2fbc8 Change proxy Jinja logic (none and empty string are falsy) 2021-03-08 17:36:34 -05:00
Mike Reeves
1ecb079066 Fix Kibana Script for loading dashboards 2021-03-08 17:36:07 -05:00
William Wernert
f85f86ccdd [fix] Check for empty proxy string everywhere 2021-03-08 17:25:23 -05:00
William Wernert
8c4e66f7bb [fix] Print error to stderr 2021-03-08 15:52:21 -05:00
William Wernert
5ee6856a07 Strip the last substring following a hyphen for automated branches
Also don't show the user a stack trace on invalid version strings, just alert on the bad string and exit
2021-03-08 15:43:54 -05:00
William Wernert
ed4f8025be [fix] Also check for proxy to be empty string 2021-03-08 13:57:24 -05:00
Josh Brower
fe8788c09a Merge remote-tracking branch 'remotes/origin/dev' into delta 2021-03-08 12:56:47 -05:00
William Wernert
5c7d3656dd [fix] Don't try to create so_proxy during automated installs, just set it 2021-03-08 12:26:17 -05:00
Jason Ertel
84c152e233 fix: Sensors can temporarily show offline while processing large PCAP jobs. Resolves #3279. 2021-03-08 12:05:44 -05:00
Mike Reeves
bf4ac2a312 Fix some merge conflicts 2021-03-08 11:43:24 -05:00
William Wernert
368b04b24e Add back accidentally removed code 2021-03-08 09:04:17 -05:00
William Wernert
ca2766511b Revert "[wip] Change when proxy is set up so main ip is known"
This reverts commit 1ea3cb1c61.

# Conflicts:
#	setup/so-functions
2021-03-08 09:02:53 -05:00
William Wernert
06c584910c Merge branch 'dev' into foxtrot 2021-03-08 08:58:31 -05:00
Josh Brower
19b3c7bb07 Merge pull request #3339 from Security-Onion-Solutions/feature/live_query-hunt
Feature/live query hunt
2021-03-08 08:31:25 -05:00
William Wernert
49db2a016a Merge pull request #3341 from Security-Onion-Solutions/kilo
Kilo
2021-03-08 08:17:29 -05:00
Jason Ertel
94610307b3 Merge branch 'dev' into kilo 2021-03-08 07:56:48 -05:00
William Wernert
35ae9363f5 [fix] Log gateway error, and don't show whiptail msg on automated installs 2021-03-05 20:15:37 -05:00
William Wernert
9c49cef2de Merge branch 'feature/docker-prune-rework' into foxtrot 2021-03-05 14:18:57 -05:00
William Wernert
f537b3c7f7 Merge branch 'feature/setup-ssh-harden' into foxtrot 2021-03-05 14:18:35 -05:00
William Wernert
e5110dc3fc [fix] None -> none 2021-03-05 14:08:03 -05:00
William Wernert
50fcdb65a6 [fix] Modify the proxy automated test
* It makes more sense to test the proxy using a network install, not via the iso
2021-03-05 13:53:48 -05:00
William Wernert
32e7afdc5f Merge branch 'feature/setup' into foxtrot 2021-03-05 12:53:31 -05:00
William Wernert
245902326f [wip] Add automation support for proxy settings 2021-03-05 12:53:20 -05:00
Jason Ertel
7234353476 Merge pull request #3319 from Security-Onion-Solutions/foxtrot
fix: syntax error in reserved ports configuration #3308
2021-03-05 12:51:50 -05:00
William Wernert
ec04145d15 [fix] Set proxy for idstools container manually 2021-03-05 11:34:31 -05:00
Jason Ertel
61a7efeeab fix: syntax error in reserved ports configuration; ensure ports are reserved prior to setup 2021-03-05 10:54:01 -05:00
Josh Brower
548f67ca6f Initial support for Live Queries in Hunt 2021-03-04 18:21:13 -05:00
William Wernert
33b2bd33fe [fix] Also create config.json so containers use proxy 2021-03-04 17:12:10 -05:00
William Wernert
e0d0baafcc [fix] Permanently set proxy for yum using template 2021-03-04 16:40:32 -05:00
William Wernert
b3c7760ad4 [fix] Use correct variable in so-proxy.sh 2021-03-04 14:08:21 -05:00
Mike Reeves
39d4f077b4 Merge pull request #3290 from Security-Onion-Solutions/foxtrot
Foxtrot
2021-03-04 13:44:00 -05:00
William Wernert
a435ea77e8 [fix] Also add hostname to no_proxy list 2021-03-04 12:43:42 -05:00
William Wernert
2ee8c7ad1c [fix] Always pass $proxy_addr since we retry the surrounding function 2021-03-04 12:16:23 -05:00
William Wernert
ac0a4f4a13 Merge branch 'dev' into feature/setup 2021-03-04 12:11:17 -05:00
William Wernert
b265854644 [wip] Move proxy config to separate file 2021-03-04 12:10:42 -05:00
William Wernert
4339ded17f [wip][fix] Don't add logic to so-setup, create wrapper function in so-functions 2021-03-04 12:10:14 -05:00
William Wernert
d19ca943cc [fix][wip] Only setup proxy early on configure network setup 2021-03-04 11:57:16 -05:00
William Wernert
2e56252f54 [wip] Syntax fixes 2021-03-04 11:54:21 -05:00
William Wernert
13dc822197 [wip] Ask user if they want to re-enter the proxy 2021-03-04 11:53:08 -05:00
William Wernert
5a97341d33 [wip] Fix how collect_proxy function works on retry 2021-03-04 11:41:36 -05:00
William Wernert
7ee0fd6375 [wip] Specify setup log location to user when directing them to it 2021-03-04 11:31:22 -05:00
Mike Reeves
05c7bd5789 Merge pull request #3285 from Security-Onion-Solutions/elastic
Elastic
2021-03-04 10:57:06 -05:00
Mike Reeves
c2b347e4bb Security Enable for only nodes and heavy 2021-03-04 10:52:01 -05:00
Mike Reeves
a0a8d12526 Enable SSL and Features 2021-03-04 10:08:28 -05:00
Mike Reeves
8c474cc7df Merge pull request #3268 from Security-Onion-Solutions/issue/3254
FIX: Custom Kibana settings are not being applied properly on upgrades #3254
2021-03-04 08:39:50 -05:00
William Wernert
3d5cf128ae [wip] Test proxy before using it 2021-03-03 15:02:21 -05:00
Mike Reeves
49371a1d6a fix elastic output for ssl 2021-03-03 14:30:45 -05:00
William Wernert
1ea3cb1c61 [wip] Change when proxy is set up so main ip is known
* Also only restart docker if the command exists (i.e. docker is installed)
2021-03-03 14:20:26 -05:00
Mike Reeves
bf4249d28b fix elastalert verification 2021-03-03 14:16:10 -05:00
William Wernert
4ffa0fbc13 [wip] Fix proxy validation 2021-03-03 14:09:59 -05:00
Mike Reeves
e0538417f1 fix http.wait 2021-03-03 14:06:35 -05:00
doug
d39b3280c8 FIX: Custom Kibana settings are not being applied properly on upgrades #3254 2021-03-03 14:04:32 -05:00
Mike Reeves
6c7111cd0a turn off verification mode for ES 2021-03-03 13:42:04 -05:00
Mike Reeves
4de62c878c turn on elastic security 2021-03-03 12:51:29 -05:00
William Wernert
e951e9d9c5 [wip] Further proxy changes
* Remove unused docker.conf template
* Rename proxy variable to avoid name collision
* Reword address prompt to specify users should not include user:pass in their input
* Actually call the collect_proxy function
2021-03-03 12:19:14 -05:00
William Wernert
26b1da744c [wip] Reword proxy yesno prompt 2021-03-03 12:01:15 -05:00
William Wernert
83791d87c7 [wip][fix] Use passwordbox for proxy password 2021-03-03 11:58:45 -05:00
William Wernert
279a5b60b8 Soup indent fixes 2021-03-03 11:58:10 -05:00
Mike Reeves
4f34eca5b9 remove unused script 2021-03-03 10:32:23 -05:00
Mike Reeves
07b5cc3d1d Fix https for rw indicies script 2021-03-03 10:29:41 -05:00
Mike Reeves
d7451dcd75 Merge remote-tracking branch 'origin/foxtrot' into nomorefeatures 2021-03-03 10:04:38 -05:00
Mike Reeves
4f867e5375 Fix all scripts for ssl elastic 2021-03-03 10:02:23 -05:00
William Wernert
82018a206c [wip] Don't validate user+pass for proxy, use new variable 2021-03-03 09:56:14 -05:00
William Wernert
2b94fa366e [wip] Add auth inputs for proxy settings, fix some broken logic 2021-03-03 09:51:38 -05:00
William Wernert
de77d3ebc9 [wip] Initial work for setting up proxy on manager 2021-03-02 17:41:49 -05:00
William Wernert
4df53b3c70 Unify log_size_limit variable value in so-curator-closed-delete-delete 2021-03-02 17:38:17 -05:00
William Wernert
497938460a [fix] manager:log_size_limit is no longer used, remove generation 2021-03-02 16:47:49 -05:00
Mike Reeves
e0d9212e55 Make https default for all things 2021-03-02 14:01:05 -05:00
Mike Reeves
80574d3c20 Make https default for all things 2021-03-02 13:59:43 -05:00
Mike Reeves
bfd05a8cfc Change to https for elastic connections 2021-03-02 11:32:29 -05:00
Mike Reeves
3219f4cd12 Remove Features Option 2021-03-02 11:04:50 -05:00
William Wernert
a18dd869c4 Merge branch 'dev' into feature/setup 2021-03-02 10:23:33 -05:00
William Wernert
61611b8de2 Fix Elasticsearch disk space prompt
Resolves #3205
2021-03-02 10:23:04 -05:00
William Wernert
0db9991307 Reword/remove some comments 2021-03-02 10:20:33 -05:00
Jason Ertel
4014dbbc3d Revert "Move version to 2.3.31"
This reverts commit cf21200a36.
2021-03-02 10:14:45 -05:00
William Wernert
35f5c7fb4b Merge branch 'dev' into feature/docker-prune-rework 2021-03-02 09:48:41 -05:00
Jason Ertel
cf21200a36 Move version to 2.3.31 2021-03-02 09:11:49 -05:00
Mike Reeves
bff446543a Merge pull request #3215 from Security-Onion-Solutions/foxtrot
Foxtrot
2021-03-01 15:58:41 -05:00
Jason Ertel
53a45e1c97 Merge branch 'dev' into foxtrot 2021-03-01 15:54:41 -05:00
Jason Ertel
b37d5ae15f Enable advanced setup for some search/sensor installs 2021-03-01 15:54:29 -05:00
Mike Reeves
85204dbb14 Merge pull request #3210 from Security-Onion-Solutions/dev2340
Update VERSION
2021-03-01 15:28:45 -05:00
Mike Reeves
2c75cb74db Update VERSION 2021-03-01 15:17:38 -05:00
William Wernert
1834e07aad Merge branch 'dev' into feature/docker-prune-rework 2021-03-01 09:37:47 -05:00
William Wernert
33696398eb Add new so-docker-prune script
* Script will pull list of so- images and prune any older than most recent + last version
2021-02-26 18:06:07 -05:00
Josh Brower
b8137214e4 Initial Support - Live Query to Hunt 2021-02-26 08:08:09 -05:00
Josh Patterson
dc673eef77 Merge pull request #3148 from Security-Onion-Solutions/salt-3002.5
Salt 3002.5
2021-02-25 23:00:35 -05:00
Josh Patterson
18365ed87d Merge pull request #3140 from Security-Onion-Solutions/issue/3130
Issue/3130
2021-02-25 11:27:46 -05:00
Josh Patterson
81331264e7 Merge pull request #3117 from Security-Onion-Solutions/issue/3115
logfile is 1 word
2021-02-24 11:57:33 -05:00
Josh Patterson
a9066f491d Merge pull request #3116 from Security-Onion-Solutions/issue/3115
Issue/3115
2021-02-24 11:51:42 -05:00
Josh Patterson
988ad5f8fc Merge pull request #3086 from Security-Onion-Solutions/issue/3056
Issue/3056
2021-02-23 14:53:42 -05:00
William Wernert
d205fff3ba Run ssh-harden in setup per #1932 2021-02-19 13:45:23 -05:00
93 changed files with 2083 additions and 1572 deletions


@@ -1,6 +1,6 @@
-## Security Onion 2.3.30
+## Security Onion 2.3.40
 
-Security Onion 2.3.30 is here!
+Security Onion 2.3.40 is here!
 
 ## Screenshots


@@ -1,16 +1,16 @@
-### 2.3.30 ISO image built on 2021/03/01
+### 2.3.40 ISO image built on 2021/03/22
 
 ### Download and Verify
 
-2.3.30 ISO image:
+2.3.40 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.3.30.iso
+https://download.securityonion.net/file/securityonion/securityonion-2.3.40.iso
 
-MD5: 65202BA0F7661A5E27087F097B8E571E
+MD5: FB72C0675F262A714B287BB33CE82504
-SHA1: 14E842E39EDBB55A104263281CF25BF88A2E9D67
+SHA1: E8F5A9AA23990DF794611F9A178D88414F5DA81C
-SHA256: 210B37B9E3DFC827AFE2940E2C87B175ADA968EDD04298A5926F63D9269847B7
+SHA256: DB125D6E770F75C3FD35ABE3F8A8B21454B7A7618C2B446D11B6AC8574601070
 
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.30.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.40.iso.sig
 
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS
@@ -24,22 +24,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/ma
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.30.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/master/sigs/securityonion-2.3.40.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.3.30.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.3.40.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.3.30.iso.sig securityonion-2.3.30.iso
+gpg --verify securityonion-2.3.40.iso.sig securityonion-2.3.40.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Mon 01 Mar 2021 02:15:28 PM EST using RSA key ID FE507013
+gpg: Signature made Mon 22 Mar 2021 09:35:50 AM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.


@@ -1 +1 @@
-2.3.30
+2.3.40


@@ -1 +1 @@
-net.ipv4.ip_local_reserved_ports="55000,57314"
+net.ipv4.ip_local_reserved_ports=55000,57314
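The fix here is the quotes: sysctl values are not shell strings, so the quoted form is the "syntax error in reserved ports configuration" referenced in #3319 above. A minimal check after the change, assuming the stock sysctl tooling:
```
# Reload sysctl drop-in files, then read the key back
sysctl --system > /dev/null
sysctl net.ipv4.ip_local_reserved_ports
# expected: net.ipv4.ip_local_reserved_ports = 55000,57314
```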


@@ -90,7 +90,6 @@ commonpkgs:
       - ntpdate
       - jq
       - python3-docker
-      - docker-ce
       - curl
       - ca-certificates
       - software-properties-common
@@ -104,12 +103,15 @@ commonpkgs:
       - python3-dateutil
       - python3-m2crypto
       - python3-mysqldb
+      - python3-packaging
       - git
 
 heldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.2.13-2
-      - docker-ce: 5:19.03.14~3-0~ubuntu-bionic
+      - containerd.io: 1.4.4-1
+      - docker-ce: 5:20.10.5~3-0~ubuntu-bionic
+      - docker-ce-cli: 5:20.10.5~3-0~ubuntu-bionic
+      - docker-ce-rootless-extras: 5:20.10.5~3-0~ubuntu-bionic
     - hold: True
     - update_holds: True
@@ -135,6 +137,7 @@ commonpkgs:
       - python36-dateutil
       - python36-m2crypto
       - python36-mysql
+      - python36-packaging
       - yum-utils
       - device-mapper-persistent-data
       - lvm2
@@ -144,8 +147,10 @@ commonpkgs:
 heldpackages:
   pkg.installed:
     - pkgs:
-      - containerd.io: 1.2.13-3.2.el7
-      - docker-ce: 3:19.03.14-3.el7
+      - containerd.io: 1.4.4-3.1.el7
+      - docker-ce: 3:20.10.5-3.el7
+      - docker-ce-cli: 1:20.10.5-3.el7
+      - docker-ce-rootless-extras: 20.10.5-3.el7
     - hold: True
     - update_holds: True
 {% endif %}
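The held package pins move Docker from 19.03 to 20.10 and split out the CLI and rootless-extras packages, per the "fix docker-ce holds" commits above. A quick way to confirm the holds took effect on a running box (standard apt/yum tooling, not part of this diff; the yum check assumes the versionlock plugin that Salt uses for holds):
```
# Ubuntu: held packages are excluded from upgrades
apt-mark showhold

# CentOS 7: holds are implemented as version locks
yum versionlock list
```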


@@ -86,6 +86,19 @@ add_interface_bond0() {
   fi
 }
 
+check_airgap() {
+  # See if this is an airgap install
+  AIRGAP=$(cat /opt/so/saltstack/local/pillar/global.sls | grep airgap: | awk '{print $2}')
+  if [[ "$AIRGAP" == "True" ]]; then
+    is_airgap=0
+    UPDATE_DIR=/tmp/soagupdate/SecurityOnion
+    AGDOCKER=/tmp/soagupdate/docker
+    AGREPO=/tmp/soagupdate/Packages
+  else
+    is_airgap=1
+  fi
+}
+
 check_container() {
   docker ps | grep "$1:" > /dev/null 2>&1
   return $?
@@ -97,6 +110,46 @@ check_password() {
   return $?
 }
 
+check_elastic_license() {
+  [ -n "$TESTING" ] && return
+  # See if the user has already accepted the license
+  if [ ! -f /opt/so/state/yeselastic.txt ]; then
+    elastic_license
+  else
+    echo "Elastic License has already been accepted"
+  fi
+}
+
+elastic_license() {
+  read -r -d '' message <<- EOM
+	\n
+	Starting in Elastic Stack version 7.11, the Elastic Stack binaries are only available under the Elastic License:
+	https://securityonion.net/elastic-license
+
+	Please review the Elastic License:
+	https://www.elastic.co/licensing/elastic-license
+
+	Do you agree to the terms of the Elastic License?
+
+	If so, type AGREE to accept the Elastic License and continue. Otherwise, press Enter to exit this program without making any changes.
+	EOM
+
+  AGREED=$(whiptail --title "Security Onion Setup" --inputbox \
+    "$message" 20 75 3>&1 1>&2 2>&3)
+  if [ "${AGREED^^}" = 'AGREE' ]; then
+    mkdir -p /opt/so/state
+    touch /opt/so/state/yeselastic.txt
+  else
+    echo "Starting in 2.3.40 you must accept the Elastic license if you want to run Security Onion."
+    exit 1
+  fi
+}
+
 fail() {
   msg=$1
   echo "ERROR: $msg"
@@ -250,6 +303,12 @@ set_minionid() {
   MINIONID=$(lookup_grain id)
 }
 
+set_palette() {
+  if [ "$OS" == ubuntu ]; then
+    update-alternatives --set newt-palette /etc/newt/palette.original
+  fi
+}
+
 set_version() {
   CURRENTVERSION=0.0.0
   if [ -f /etc/soversion ]; then
@@ -340,6 +399,26 @@ valid_int() {
 
 # {% raw %}
+valid_proxy() {
+  local proxy=$1
+  local url_prefixes=( 'http://' 'https://' )
+  local has_prefix=false
+  for prefix in "${url_prefixes[@]}"; do
+    echo "$proxy" | grep -q "$prefix" && has_prefix=true && proxy=${proxy#"$prefix"} && break
+  done
+
+  local url_arr
+  mapfile -t url_arr <<< "$(echo "$proxy" | tr ":" "\n")"
+
+  local valid_url=true
+  if ! valid_ip4 "${url_arr[0]}" && ! valid_fqdn "${url_arr[0]}" && ! valid_hostname "${url_arr[0]}"; then
+    valid_url=false
+  fi
+
+  [[ $has_prefix == true ]] && [[ $valid_url == true ]] && return 0 || return 1
+}
+
 valid_string() {
   local str=$1
   local min_length=${2:-1}
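`valid_proxy` requires both an explicit http:// or https:// prefix and a host part that passes the existing IP/FQDN/hostname validators. A hedged usage sketch (addresses are made up; assumes so-common is sourced so valid_ip4/valid_fqdn/valid_hostname are available):
```
. /usr/sbin/so-common

valid_proxy "http://proxy.example.com:3128" && echo "accepted"   # prefix + FQDN: passes
valid_proxy "proxy.example.com:3128"        || echo "rejected"   # missing prefix: fails
valid_proxy "https://10.0.0.5:8080"         && echo "accepted"   # prefix + IPv4: passes
```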


@@ -30,7 +30,7 @@ fi
 
 USER=$1
-CORTEX_KEY=$(lookup_pillar cortexkey)
+CORTEX_KEY=$(lookup_pillar cortexorguserkey)
 CORTEX_API_URL="$(lookup_pillar url_base)/cortex/api"
 CORTEX_ORG_NAME=$(lookup_pillar cortexorgname)
 CORTEX_USER=$USER


@@ -30,7 +30,7 @@ fi
 
 USER=$1
-CORTEX_KEY=$(lookup_pillar cortexkey)
+CORTEX_KEY=$(lookup_pillar cortexorguserkey)
 CORTEX_API_URL="$(lookup_pillar url_base)/cortex/api"
 CORTEX_USER=$USER


@@ -0,0 +1,85 @@
+#!/usr/bin/env python3
+
+# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+import sys, argparse, re, docker
+
+from packaging.version import Version, InvalidVersion
+from itertools import groupby, chain
+
+
+def get_image_name(string) -> str:
+    return ':'.join(string.split(':')[:-1])
+
+
+def get_so_image_basename(string) -> str:
+    return get_image_name(string).split('/so-')[-1]
+
+
+def get_image_version(string) -> str:
+    ver = string.split(':')[-1]
+    if ver == 'latest':
+        # Version doesn't like "latest", so use a high semver
+        return '999999.9.9'
+    else:
+        try:
+            Version(ver)
+        except InvalidVersion:
+            # Strip the last substring following a hyphen for automated branches
+            ver = '-'.join(ver.split('-')[:-1])
+        return ver
+
+
+def main(quiet):
+    client = docker.from_env()
+    image_list = client.images.list(filters={ 'dangling': False })
+
+    # Map list of image objects to flattened list of tags (format: "name:version")
+    tag_list = list(chain.from_iterable(list(map(lambda x: x.attrs.get('RepoTags'), image_list))))
+    # Filter to only SO images (base name begins with "so-")
+    tag_list = list(filter(lambda x: re.match(r'^.*\/so-[^\/]*$', get_image_name(x)), tag_list))
+
+    # Group tags into lists by base name (sort by same projection first)
+    tag_list.sort(key=lambda x: get_so_image_basename(x))
+    grouped_tag_lists = [ list(it) for _, it in groupby(tag_list, lambda x: get_so_image_basename(x)) ]
+
+    no_prunable = True
+    for t_list in grouped_tag_lists:
+        try:
+            # Keep the 2 most current images
+            t_list.sort(key=lambda x: Version(get_image_version(x)), reverse=True)
+            if len(t_list) <= 2:
+                continue
+            else:
+                no_prunable = False
+                for tag in t_list[2:]:
+                    if not quiet: print(f'Removing image {tag}')
+                    client.images.remove(tag)
+        except InvalidVersion as e:
+            print(f'so-{get_so_image_basename(t_list[0])}: {e.args[0]}', file=sys.stderr)
+            exit(1)
+
+    if no_prunable and not quiet:
+        print('No Security Onion images to prune')
+
+
+if __name__ == "__main__":
+    main_parser = argparse.ArgumentParser(add_help=False)
+    main_parser.add_argument('-q', '--quiet', action='store_const', const=True, required=False)
+    args = main_parser.parse_args(sys.argv[1:])
+    main(args.quiet)
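Per the commit message above, the script keeps the two most recent versions of each so-* image and prunes the rest. A hedged usage sketch; -q is the only flag the parser defines, and the sample output line is illustrative:
```
# Verbose run: prints each tag it removes
so-docker-prune
# e.g. Removing image security-onion-solutions/so-kibana:2.3.21

# Quiet run, suitable for cron
so-docker-prune -q
```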


@@ -50,11 +50,7 @@ done
 if [ $SKIP -ne 1 ]; then
   # List indices
   echo
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -k -L https://{{ NODEIP }}:9200/_cat/indices?v
-{% else %}
-  curl -L {{ NODEIP }}:9200/_cat/indices?v
-{% endif %}
   echo
   # Inform user we are about to delete all data
   echo
@@ -93,18 +89,10 @@ fi
 
 # Delete data
 echo "Deleting data..."
-{% if grains['role'] in ['so-node','so-heavynode'] %}
 INDXS=$(curl -s -XGET -k -L https://{{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
-{% else %}
-INDXS=$(curl -s -XGET -L {{ NODEIP }}:9200/_cat/indices?v | egrep 'logstash|elastalert|so-' | awk '{ print $3 }')
-{% endif %}
 for INDX in ${INDXS}
 do
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -XDELETE -k -L https://"{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
-{% else %}
-  curl -XDELETE -L "{{ NODEIP }}:9200/${INDX}" > /dev/null 2>&1
-{% endif %}
 done
 
 #Start Logstash/Filebeat


@@ -21,6 +21,5 @@ THEHIVEESPORT=9400
 
 echo "Removing read only attributes for indices..."
 echo
-for p in $ESPORT $THEHIVEESPORT; do
-curl -XPUT -H "Content-Type: application/json" -L http://$IP:$p/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was any issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
-done
+curl -s -k -XPUT -H "Content-Type: application/json" -L https://$IP:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was any issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
+curl -XPUT -H "Content-Type: application/json" -L http://$IP:9400/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}' 2>&1 | if grep -q ack; then echo "Index settings updated..."; else echo "There was any issue updating the read-only attribute. Please ensure Elasticsearch is running.";fi;
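With the loop unrolled, the main Elasticsearch instance is reached over HTTPS on 9200 while TheHive's instance stays on HTTP on 9400. A hedged spot-check that the read-only block was cleared (standard Elasticsearch settings API; jq is already a dependency elsewhere in this compare):
```
# null/absent index.blocks.read_only_allow_delete means writes are allowed again
curl -s -k -L "https://$IP:9200/_all/_settings?filter_path=*.settings.index.blocks" | jq .
```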


@@ -19,15 +19,7 @@
 
 . /usr/sbin/so-common
 
 if [ "$1" == "" ]; then
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines"
-{% endif %}
 else
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_nodes/stats | jq .nodes | jq ".[] | .ingest.pipelines.\"$1\""
-{% endif %}
 fi


@@ -17,15 +17,7 @@
 {%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
 
 . /usr/sbin/so-common
 
 if [ "$1" == "" ]; then
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_ingest/pipeline/* | jq 'keys'
-{% endif %}
 else
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_ingest/pipeline/$1 | jq
-{% endif %}
 fi


@@ -17,15 +17,7 @@
 {%- set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
 
 . /usr/sbin/so-common
 
 if [ "$1" == "" ]; then
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_template/* | jq 'keys'
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_template/* | jq 'keys'
-{% endif %}
 else
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -s -k -L https://{{ NODEIP }}:9200/_template/$1 | jq
-{% else %}
-  curl -s -L {{ NODEIP }}:9200/_template/$1 | jq
-{% endif %}
 fi


@@ -30,11 +30,7 @@ echo -n "Waiting for ElasticSearch..."
 COUNT=0
 ELASTICSEARCH_CONNECTED="no"
 while [[ "$COUNT" -le 240 ]]; do
-{% if grains['role'] in ['so-node','so-heavynode'] %}
   curl -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
-{% else %}
-  curl --output /dev/null --silent --head --fail -L http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
-{% endif %}
   if [ $? -eq 0 ]; then
     ELASTICSEARCH_CONNECTED="yes"
     echo "connected!"
@@ -55,11 +51,7 @@ cd ${ELASTICSEARCH_TEMPLATES}
 
 echo "Loading templates..."
-{% if grains['role'] in ['so-node','so-heavynode'] %}
 for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl -k ${ELASTICSEARCH_AUTH} -s -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
-{% else %}
-for i in *; do TEMPLATE=$(echo $i | cut -d '-' -f2); echo "so-$TEMPLATE"; curl ${ELASTICSEARCH_AUTH} -s -XPUT -L http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_template/so-$TEMPLATE -H 'Content-Type: application/json' -d@$i 2>/dev/null; echo; done
-{% endif %}
 echo
 cd - >/dev/null


@@ -1,53 +0,0 @@
-#!/bin/bash
-
-# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
-
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
-
-. /usr/sbin/so-common
-. /usr/sbin/so-image-common
-
-local_salt_dir=/opt/so/saltstack/local
-
-cat << EOF
-This program will switch from the open source version of the Elastic Stack to the Features version licensed under the Elastic license.
-If you proceed, then we will download new Docker images and restart services.
-
-Please review the Elastic license:
-https://raw.githubusercontent.com/elastic/elasticsearch/master/licenses/ELASTIC-LICENSE.txt
-
-Please also note that, if you have a distributed deployment and continue with this change, Elastic traffic between nodes will change from encrypted to cleartext!
-(We expect to support Elastic Features Security at some point in the future.)
-
-Do you agree to the terms of the Elastic license and understand the note about encryption?
-
-If so, type AGREE to accept the Elastic license and continue. Otherwise, just press Enter to exit this program without making any changes.
-EOF
-
-read INPUT
-if [ "$INPUT" != "AGREE" ]; then
-  exit
-fi
-
-echo "Please wait while switching to Elastic Features."
-
-require_manager
-
-TRUSTED_CONTAINERS=( \
-  "so-elasticsearch" \
-  "so-filebeat" \
-  "so-kibana" \
-  "so-logstash" )
-
-update_docker_containers "features" "-features"
-
-# Modify global.sls to enable Features
-sed -i 's/features: False/features: True/' $local_salt_dir/pillar/global.sls


@@ -15,8 +15,4 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
-{% if grains['role'] in ['so-node','so-heavynode'] %}
 curl -X GET -k -L https://localhost:9200/_cat/indices?v
-{% else %}
-curl -X GET -L localhost:9200/_cat/indices?v
-{% endif %}


@@ -0,0 +1,13 @@
+. /usr/sbin/so-common
+
+wait_for_web_response "http://localhost:5601/app/kibana" "Elastic"
+
+## This hackery will be removed if using Elastic Auth ##
+# Let's snag a cookie from Kibana
+THECOOKIE=$(curl -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
+
+# Disable certain Features from showing up in the Kibana UI
+echo
+echo "Setting up default Space:"
+curl -b "sid=$THECOOKIE" -L -X PUT "localhost:5601/api/spaces/space/default" -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d' {"id":"default","name":"Default","disabledFeatures":["ml","enterpriseSearch","siem","logs","infrastructure","apm","uptime","monitoring","stackAlerts","actions","fleet"]} ' >> /opt/so/log/kibana/misc.log
+echo
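A hedged way to confirm the default Space picked up the disabled feature list, reusing the same cookie trick as the script above (the GET endpoint is the standard Kibana Spaces API):
```
THECOOKIE=$(curl -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
curl -s -b "sid=$THECOOKIE" -L "http://localhost:5601/api/spaces/space/default" | jq '.disabledFeatures'
```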

salt/common/tools/sbin/so-monitor-add: Normal file → Executable file (0 lines changed)

salt/common/tools/sbin/so-playbook-sigma-refresh: Normal file → Executable file (0 lines changed)

salt/common/tools/sbin/so-raid-status: Normal file → Executable file (0 lines changed)

salt/common/tools/sbin/so-rule: Normal file → Executable file (27 lines changed)

@@ -37,11 +37,9 @@ def print_err(string: str):
 
 def check_apply(args: dict, prompt: bool = True):
-    cmd_arr = ['salt-call', 'state.apply', 'idstools', 'queue=True']
     if args.apply:
-        print('Configuration updated. Applying idstools state...')
-        return subprocess.run(cmd_arr)
+        print('Configuration updated. Applying changes:')
+        return apply()
     else:
         if prompt:
             message = 'Configuration updated. Would you like to apply your changes now? (y/N) '
@@ -51,12 +49,24 @@ def check_apply(args: dict, prompt: bool = True):
             if answer.lower() in [ 'n', '' ]:
                 return 0
             else:
-                print('Applying idstools state...')
-                return subprocess.run(cmd_arr)
+                print('Applying changes:')
+                return apply()
         else:
             return 0
 
+
+def apply():
+    salt_cmd = ['salt-call', 'state.apply', '-l', 'quiet', 'idstools.sync_files', 'queue=True']
+    update_cmd = ['so-rule-update']
+    print('Syncing config files...')
+    cmd = subprocess.run(salt_cmd, stdout=subprocess.DEVNULL)
+    if cmd.returncode == 0:
+        print('Updating rules...')
+        return subprocess.run(update_cmd).returncode
+    else:
+        return cmd.returncode
+
 
 def find_minion_pillar() -> str:
     regex = '^.*_(manager|managersearch|standalone|import|eval)\.sls$'
@@ -442,10 +452,7 @@ def main():
         modify.print_help()
         sys.exit(0)
 
-    if isinstance(exit_code, subprocess.CompletedProcess):
-        sys.exit(exit_code.returncode)
-    else:
-        sys.exit(exit_code)
+    sys.exit(exit_code)
 
 if __name__ == '__main__':
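so-rule now applies only the idstools.sync_files state and then shells out to so-rule-update, instead of running the full idstools state. A hedged usage sketch; --apply matches args.apply above, while the subcommand and SID are illustrative:
```
# Apply immediately instead of answering the interactive prompt
so-rule disabled add 2027863 --apply
```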

salt/common/tools/sbin/so-suricata-testrule: Normal file → Executable file (0 lines changed)


@@ -19,13 +19,12 @@
 UPDATE_DIR=/tmp/sogh/securityonion
 INSTALLEDVERSION=$(cat /etc/soversion)
+POSTVERSION=$INSTALLEDVERSION
 INSTALLEDSALTVERSION=$(salt --versions-report | grep Salt: | awk {'print $2'})
 DEFAULT_SALT_DIR=/opt/so/saltstack/default
 BATCHSIZE=5
 SOUP_LOG=/root/soup.log
-exec 3>&1 1>${SOUP_LOG} 2>&1
 
 add_common() {
   cp $UPDATE_DIR/salt/common/tools/sbin/so-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
   cp $UPDATE_DIR/salt/common/tools/sbin/so-image-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
@@ -101,19 +100,6 @@ update_registry() {
   salt-call state.apply registry queue=True
 }
 
-check_airgap() {
-  # See if this is an airgap install
-  AIRGAP=$(cat /opt/so/saltstack/local/pillar/global.sls | grep airgap: | awk '{print $2}')
-  if [[ "$AIRGAP" == "True" ]]; then
-    is_airgap=0
-    UPDATE_DIR=/tmp/soagupdate/SecurityOnion
-    AGDOCKER=/tmp/soagupdate/docker
-    AGREPO=/tmp/soagupdate/Packages
-  else
-    is_airgap=1
-  fi
-}
-
 check_sudoers() {
   if grep -q "so-setup" /etc/sudoers; then
     echo "There is an entry for so-setup in the sudoers file, this can be safely deleted using \"visudo\"."
@@ -243,21 +229,9 @@ masterunlock() {
   fi
 }
 
-playbook() {
-  echo "Applying playbook settings"
-  if [[ "$INSTALLEDVERSION" =~ rc.1 ]]; then
-    salt-call state.apply playbook.OLD_db_init
-    rm -f /opt/so/rules/elastalert/playbook/*.yaml
-    so-playbook-ruleupdate >> /root/soup_playbook_rule_update.log 2>&1 &
-  fi
-  if [[ "$INSTALLEDVERSION" != 2.3.30 ]]; then
-    so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 &
-  fi
-}
-
-pillar_changes() {
+preupgrade_changes() {
   # This function is to add any new pillar items if needed.
-  echo "Checking to see if pillar changes are needed."
+  echo "Checking to see if changes are needed."
 
   [[ "$INSTALLEDVERSION" =~ rc.1 ]] && rc1_to_rc2
   [[ "$INSTALLEDVERSION" =~ rc.2 ]] && rc2_to_rc3
@@ -266,6 +240,34 @@ pillar_changes() {
[[ "$INSTALLEDVERSION" == 2.3.20 || "$INSTALLEDVERSION" == 2.3.21 ]] && up_2.3.2X_to_2.3.30 [[ "$INSTALLEDVERSION" == 2.3.20 || "$INSTALLEDVERSION" == 2.3.21 ]] && up_2.3.2X_to_2.3.30
} }
postupgrade_changes() {
# This function is to add any new pillar items if needed.
echo "Running post upgrade processes."
[[ "$POSTVERSION" =~ rc.1 ]] && post_rc1_to_rc2
[[ "$POSTVERSION" == 2.3.20 || "$POSTVERSION" == 2.3.21 ]] && post_2.3.2X_to_2.3.30
[[ "$POSTVERSION" == 2.3.30 ]] && post_2.3.30_to_2.3.40
}
post_rc1_to_2.3.21() {
salt-call state.apply playbook.OLD_db_init
rm -f /opt/so/rules/elastalert/playbook/*.yaml
so-playbook-ruleupdate >> /root/soup_playbook_rule_update.log 2>&1 &
POSTVERSION=2.3.21
}
post_2.3.2X_to_2.3.30() {
so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 &
POSTVERSION=2.3.30
}
post_2.3.30_to_2.3.40() {
so-playbook-sigma-refresh >> /root/soup_playbook_sigma_refresh.log 2>&1 &
so-kibana-space-defaults
POSTVERSION=2.3.40
}
rc1_to_rc2() {
  # Move the static file to global.sls
@@ -296,15 +298,14 @@ rc1_to_rc2() {
  done </tmp/nodes.txt
  # Add the nodes back using hostname
  while read p; do
    local NAME=$(echo $p | awk '{print $1}')
    local EHOSTNAME=$(echo $p | awk -F"_" '{print $1}')
    local IP=$(echo $p | awk '{print $2}')
    echo "Adding the new cross cluster config for $NAME"
    curl -XPUT http://localhost:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"'$NAME'": {"skip_unavailable": "true", "seeds": ["'$EHOSTNAME':9300"]}}}}}'
  done </tmp/nodes.txt
  INSTALLEDVERSION=rc.2
}
rc2_to_rc3() {
@@ -334,10 +335,10 @@ rc3_to_2.3.0() {
  fi
  {
    echo "redis_settings:"
    echo " redis_maxmemory: 827"
    echo "playbook:"
    echo " api_key: de6639318502476f2fa5aa06f43f51fb389a3d7f"
  } >> /opt/so/saltstack/local/pillar/global.sls
  sed -i 's/playbook:/playbook_db:/' /opt/so/saltstack/local/pillar/secrets.sls
@@ -385,7 +386,6 @@ up_2.3.0_to_2.3.20(){
  fi
  INSTALLEDVERSION=2.3.20
}

up_2.3.2X_to_2.3.30() {
@@ -395,11 +395,11 @@ up_2.3.2X_to_2.3.30() {
    sed -i -r "s/ (\{\{.*}})$/ '\1'/g" "$pillar"
  done
  # Change the IMAGEREPO
  sed -i "/ imagerepo: 'securityonion'/c\ imagerepo: 'security-onion-solutions'" /opt/so/saltstack/local/pillar/global.sls
  sed -i "/ imagerepo: securityonion/c\ imagerepo: 'security-onion-solutions'" /opt/so/saltstack/local/pillar/global.sls
  # Strelka rule repo pillar addition
  if [ $is_airgap -eq 0 ]; then
    # Add manager as default Strelka YARA rule repo
    sed -i "/^strelka:/a \\ repos: \n - https://$HOSTNAME/repo/rules/strelka" /opt/so/saltstack/local/pillar/global.sls;
@@ -410,16 +410,26 @@ up_2.3.2X_to_2.3.30() {
  check_log_size_limit
}

-space_check() {
+verify_upgradespace() {
-  # Check to see if there is enough space
  CURRENTSPACE=$(df -BG / | grep -v Avail | awk '{print $4}' | sed 's/.$//')
  if [ "$CURRENTSPACE" -lt "10" ]; then
-    echo "You are low on disk space. Upgrade will try and clean up space.";
-    clean_dockers
+    echo "You are low on disk space."
+    return 1
  else
-    echo "Plenty of space for upgrading"
+    return 0
  fi
+}
+
+upgrade_space() {
+  if ! verify_upgradespace; then
+    clean_dockers
+    if ! verify_upgradespace; then
+      echo "There is not enough space to perform the upgrade. Please free up space and try again"
+      exit 1
+    fi
+  else
+    echo "You have enough space for upgrade. Proceeding with soup."
+  fi
}
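verify_upgradespace keys off the available gigabytes on the root filesystem. A commented sketch of what each stage of that pipeline does, assuming a sample df output:

# df -BG / prints something like:
#   Filesystem     1G-blocks  Used Available Use% Mounted on
#   /dev/sda1           100G   60G       40G  60% /
# grep -v Avail drops the header row, awk takes column 4 ("40G"),
# and sed strips the trailing unit character:
df -BG / | grep -v Avail | awk '{print $4}' | sed 's/.$//'   # -> 40, compared against the 10 GB floor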
thehive_maint() {
@@ -427,16 +437,16 @@ thehive_maint() {
  COUNT=0
  THEHIVE_CONNECTED="no"
  while [[ "$COUNT" -le 240 ]]; do
    curl --output /dev/null --silent --head --fail -k "https://localhost/thehive/api/alert"
    if [ $? -eq 0 ]; then
      THEHIVE_CONNECTED="yes"
      echo "connected!"
      break
    else
      ((COUNT+=1))
      sleep 1
      echo -n "."
    fi
  done
  if [ "$THEHIVE_CONNECTED" == "yes" ]; then
    echo "Migrating thehive databases if needed."
@@ -471,83 +481,84 @@ update_version() {
}

upgrade_check() {
  # Let's make sure we actually need to update.
  NEWVERSION=$(cat $UPDATE_DIR/VERSION)
  if [ "$INSTALLEDVERSION" == "$NEWVERSION" ]; then
    echo "You are already running the latest version of Security Onion."
    exit 0
  fi
}

upgrade_check_salt() {
  NEWSALTVERSION=$(grep version: $UPDATE_DIR/salt/salt/master.defaults.yaml | awk {'print $2'})
  if [ "$INSTALLEDSALTVERSION" == "$NEWSALTVERSION" ]; then
    echo "You are already running the correct version of Salt for Security Onion."
  else
    UPGRADESALT=1
  fi
}
upgrade_salt() {
  SALTUPGRADED=True
  echo "Performing upgrade of Salt from $INSTALLEDSALTVERSION to $NEWSALTVERSION."
  echo ""
  # If CentOS
  if [ "$OS" == "centos" ]; then
    echo "Removing yum versionlock for Salt."
    echo ""
    yum versionlock delete "salt-*"
    echo "Updating Salt packages and restarting services."
    echo ""
    if [ $is_airgap -eq 0 ]; then
      sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -r -F -M -x python3 stable "$NEWSALTVERSION"
    else
      sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
    fi
    echo "Applying yum versionlock for Salt."
    echo ""
    yum versionlock add "salt-*"
  # Else do Ubuntu things
  elif [ "$OS" == "ubuntu" ]; then
    echo "Removing apt hold for Salt."
    echo ""
    apt-mark unhold "salt-common"
    apt-mark unhold "salt-master"
    apt-mark unhold "salt-minion"
    echo "Updating Salt packages and restarting services."
    echo ""
    sh $UPDATE_DIR/salt/salt/scripts/bootstrap-salt.sh -F -M -x python3 stable "$NEWSALTVERSION"
    echo "Applying apt hold for Salt."
    echo ""
    apt-mark hold "salt-common"
    apt-mark hold "salt-master"
    apt-mark hold "salt-minion"
  fi
}
verify_latest_update_script() {
  # Check to see if the update scripts match. If not run the new one.
  CURRENTSOUP=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/soup | awk '{print $1}')
  GITSOUP=$(md5sum $UPDATE_DIR/salt/common/tools/sbin/soup | awk '{print $1}')
  CURRENTCMN=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/so-common | awk '{print $1}')
  GITCMN=$(md5sum $UPDATE_DIR/salt/common/tools/sbin/so-common | awk '{print $1}')
  CURRENTIMGCMN=$(md5sum /opt/so/saltstack/default/salt/common/tools/sbin/so-image-common | awk '{print $1}')
  GITIMGCMN=$(md5sum $UPDATE_DIR/salt/common/tools/sbin/so-image-common | awk '{print $1}')
  if [[ "$CURRENTSOUP" == "$GITSOUP" && "$CURRENTCMN" == "$GITCMN" && "$CURRENTIMGCMN" == "$GITIMGCMN" ]]; then
    echo "This version of the soup script is up to date. Proceeding."
  else
    echo "You are not running the latest soup version. Updating soup and its components. Might take multiple runs to complete"
    cp $UPDATE_DIR/salt/common/tools/sbin/soup $DEFAULT_SALT_DIR/salt/common/tools/sbin/
    cp $UPDATE_DIR/salt/common/tools/sbin/so-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
    cp $UPDATE_DIR/salt/common/tools/sbin/so-image-common $DEFAULT_SALT_DIR/salt/common/tools/sbin/
    salt-call state.apply common queue=True
    echo ""
    echo "soup has been updated. Please run soup again."
    exit 0
  fi
}
main () {
+echo "### Preparing soup at `date` ###"
while getopts ":b" opt; do
  case "$opt" in
    b ) # process option b
@@ -557,9 +568,10 @@ while getopts ":b" opt; do
echo "Batch size must be a number greater than 0." echo "Batch size must be a number greater than 0."
exit 1 exit 1
fi fi
;; ;;
\? ) echo "Usage: cmd [-b]" \? )
;; echo "Usage: cmd [-b]"
;;
esac esac
done done
@@ -573,6 +585,8 @@ check_airgap
echo "Found that Security Onion $INSTALLEDVERSION is currently installed." echo "Found that Security Onion $INSTALLEDVERSION is currently installed."
echo "" echo ""
set_os set_os
set_palette
check_elastic_license
echo "" echo ""
if [ $is_airgap -eq 0 ]; then if [ $is_airgap -eq 0 ]; then
# Let's mount the ISO since this is airgap # Let's mount the ISO since this is airgap
@@ -599,7 +613,7 @@ fi
echo "Let's see if we need to update Security Onion." echo "Let's see if we need to update Security Onion."
upgrade_check upgrade_check
space_check upgrade_space
echo "Checking for Salt Master and Minion updates." echo "Checking for Salt Master and Minion updates."
upgrade_check_salt upgrade_check_salt
@@ -613,16 +627,6 @@ if [ $is_airgap -eq 0 ]; then
else
  update_registry
  update_docker_containers "soup"
-  FEATURESCHECK=$(lookup_pillar features elastic)
-  if [[ "$FEATURESCHECK" == "True" ]]; then
-    TRUSTED_CONTAINERS=(
-      "so-elasticsearch"
-      "so-filebeat"
-      "so-kibana"
-      "so-logstash"
-    )
-    update_docker_containers "features" "-features"
-  fi
fi
echo ""
echo "Stopping Salt Minion service."
@@ -659,8 +663,7 @@ else
echo "" echo ""
fi fi
echo "Making pillar changes." preupgrade_changes
pillar_changes
echo "" echo ""
if [ $is_airgap -eq 0 ]; then if [ $is_airgap -eq 0 ]; then
@@ -714,7 +717,7 @@ echo "Starting Salt Master service."
systemctl start salt-master
echo "Running a highstate. This could take several minutes."
salt-call state.highstate -l info queue=True
-playbook
+postupgrade_changes
unmount_update
thehive_maint
@@ -746,6 +749,39 @@ if [[ -n $lsl_msg ]]; then
  esac
fi

+NUM_MINIONS=$(ls /opt/so/saltstack/local/pillar/minions/*_*.sls | wc -l)
+if [ $NUM_MINIONS -gt 1 ]; then
+  cat << EOF
+This appears to be a distributed deployment. Other nodes should update themselves at the next Salt highstate (typically within 15 minutes). Do not manually restart anything until you know that all the search/heavy nodes in your deployment are updated. This is especially important if you are using true clustering for Elasticsearch.
+
+Each minion is on a random 15-minute check-in period, and things like network bandwidth can be a factor in how long the actual upgrade takes. If you have a heavy node on a slow link, it is going to take a while to get the containers to it. Depending on what changes happened between the versions, Elasticsearch might not be able to talk to said heavy node until the update is complete.
+
+If it looks like you're missing data after the upgrade, please avoid restarting services and instead make sure at least one search node has completed its upgrade. The best way to do this is to run 'sudo salt-call state.highstate' from a search node and make sure there are no errors. Typically if it works on one node it will work on the rest. Forward nodes are less complex and will update as they check in, so you can monitor those from the Grid section of SOC.
+
+For more information, please see https://docs.securityonion.net/en/2.3/soup.html#distributed-deployments.
+EOF
+fi
+
+echo "### soup has been served at `date` ###"
}
main "$@" | tee /dev/fd/3 cat << EOF
SOUP - Security Onion UPdater
Please review the following for more information about the update process and recent updates:
https://docs.securityonion.net/soup
https://blog.securityonion.net
Please note that soup only updates Security Onion components and does NOT update the underlying operating system (OS). When you installed Security Onion, there was an option to automatically update the OS packages. If you did not enable this option, then you will want to ensure that the OS is fully updated before running soup.
Press Enter to continue or Ctrl-C to cancel.
EOF
read input
main "$@" | tee -a $SOUP_LOG

@@ -4,12 +4,11 @@
{%- if grains['role'] in ['so-node', 'so-heavynode'] %}
{%- set ELASTICSEARCH_HOST = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- set ELASTICSEARCH_PORT = salt['pillar.get']('elasticsearch:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('elasticsearch:log_size_limit', '') -%}
{%- elif grains['role'] in ['so-eval', 'so-managersearch', 'so-standalone'] %}
{%- set ELASTICSEARCH_HOST = salt['pillar.get']('manager:mainip', '') -%}
{%- set ELASTICSEARCH_PORT = salt['pillar.get']('manager:es_port', '') -%}
-{%- set LOG_SIZE_LIMIT = salt['pillar.get']('manager:log_size_limit', '') -%}
{%- endif -%}
+{%- set LOG_SIZE_LIMIT = salt['pillar.get']('elasticsearch:log_size_limit', '') -%}
# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
#

@@ -12,11 +12,11 @@ client:
    - {{elasticsearch}}
  port: 9200
  url_prefix:
-  {% if grains['role'] in ['so-node', 'so-heavynode'] %} use_ssl: True{% else %} use_ssl: False{% endif %}
+  use_ssl: True
  certificate:
  client_cert:
  client_key:
-  {% if grains['role'] in ['so-node', 'so-heavynode'] %} ssl_no_validate: True{% else %} ssl_no_validate: False{% endif %}
+  ssl_no_validate: True
  http_auth:
  timeout: 30
  master_only: False
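Curator now always connects over TLS with certificate validation disabled, regardless of role. A quick smoke test of the endpoint it will use (localhost and port 9200 assumed from the client config above):

curl -k --silent "https://localhost:9200/_cat/indices?v" | head   # -k mirrors ssl_no_validate: True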

@@ -1,86 +1,9 @@
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}

-{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
-{% set MANAGER = salt['grains.get']('master') %}
-{% set OLDVERSIONS = ['2.0.0-rc.1','2.0.1-rc.1','2.0.2-rc.1','2.0.3-rc.1','2.1.0-rc.2','2.2.0-rc.3','2.3.0','2.3.1','2.3.2','2.3.10','2.3.20']%}
+prune_images:
+  cmd.run:
+    - name: so-docker-prune
{% for VERSION in OLDVERSIONS %}
remove_images_{{ VERSION }}:
docker_image.absent:
- force: True
- images:
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-acng:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-thehive-cortex:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-curator:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-domainstats:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elastalert:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-filebeat:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-fleet:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-fleet-launcher:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-freqserver:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-grafana:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-idstools:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-influxdb:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-kibana:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-kratos:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-logstash:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-minio:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-mysql:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-nginx:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-pcaptools:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-playbook:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-redis:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-soc:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-soctopus:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-steno:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-strelka-frontend:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-strelka-manager:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-strelka-backend:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-strelka-filestream:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-suricata:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-telegraf:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-thehive:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-thehive-es:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-wazuh:{{ VERSION }}'
- '{{ MANAGER }}:5000/{{ IMAGEREPO }}/so-zeek:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-acng:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-thehive-cortex:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-curator:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-domainstats:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-elastalert:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-elasticsearch:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-filebeat:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-fleet:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-fleet-launcher:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-freqserver:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-grafana:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-idstools:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-influxdb:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-kibana:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-kratos:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-logstash:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-minio:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-mysql:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-nginx:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-pcaptools:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-playbook:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-redis:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-soc:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-soctopus:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-steno:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-strelka-frontend:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-strelka-manager:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-strelka-backend:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-strelka-filestream:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-suricata:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-telegraf:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-thehive:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-thehive-es:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-wazuh:{{ VERSION }}'
- '{{ MANAGER }}:5000/securityonion/so-zeek:{{ VERSION }}'
{% endfor %}
{% else %}

View File

@@ -16,8 +16,8 @@ elastalert:
  #aws_region: us-east-1
  #profile: test
  #es_url_prefix: elasticsearch
-  #use_ssl: True
+  use_ssl: true
-  #verify_certs: True
+  verify_certs: false
  #es_send_get_body_as: GET
  #es_username: someusername
  #es_password: somepassword

@@ -4,6 +4,9 @@ from time import gmtime, strftime
import requests,json
from elastalert.alerts import Alerter
+import urllib3
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

class PlaybookESAlerter(Alerter):
    """
    Use matched data to create alerts in elasticsearch
@@ -17,7 +20,7 @@ class PlaybookESAlerter(Alerter):
        timestamp = strftime("%Y-%m-%d"'T'"%H:%M:%S", gmtime())
        headers = {"Content-Type": "application/json"}
        payload = {"rule": { "name": self.rule['play_title'],"case_template": self.rule['play_id'],"uuid": self.rule['play_id'],"category": self.rule['rule.category']},"event":{ "severity": self.rule['event.severity'],"module": self.rule['event.module'],"dataset": self.rule['event.dataset'],"severity_label": self.rule['sigma_level']},"kibana_pivot": self.rule['kibana_pivot'],"soc_pivot": self.rule['soc_pivot'],"play_url": self.rule['play_url'],"sigma_level": self.rule['sigma_level'],"event_data": match, "@timestamp": timestamp}
-        url = f"http://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
+        url = f"https://{self.rule['elasticsearch_host']}/so-playbook-alerts-{today}/_doc/"
        requests.post(url, data=json.dumps(payload), headers=headers, verify=False)

    def get_info(self):
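The alerter change amounts to posting the alert document over HTTPS while skipping certificate verification (verify=False, with urllib3 warnings silenced). The same request sketched with curl for manual testing; the host and the index date format are placeholders, not values from the alerter:

ES_HOST=localhost:9200   # stands in for self.rule['elasticsearch_host']
curl -k -XPOST "https://$ES_HOST/so-playbook-alerts-$(date +%Y-%m-%d)/_doc/" \
  -H 'Content-Type: application/json' \
  -d '{"rule":{"name":"example play"},"@timestamp":"2021-03-19T12:00:00"}'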

@@ -104,8 +104,9 @@ elastaconf:
wait_for_elasticsearch:
  module.run:
    - http.wait_for_successful_query:
-      - url: 'http://{{MANAGER}}:9200/_cat/indices/.kibana*'
+      - url: 'https://{{MANAGER}}:9200/_cat/indices/.kibana*'
      - wait_for: 180
+      - verify_ssl: False
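http.wait_for_successful_query polls until the URL answers; the equivalent one-off check from a shell, using Salt's http execution module (the manager hostname is a placeholder):

salt-call http.query "https://<manager>:9200/_cat/indices/.kibana*" verify_ssl=False status=True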
so-elastalert:
  docker_container.running:

@@ -1,6 +1,5 @@
{%- set NODE_ROUTE_TYPE = salt['pillar.get']('elasticsearch:node_route_type', 'hot') %}
{%- set NODEIP = salt['pillar.get']('elasticsearch:mainip') %}
-{%- set FEATURES = salt['pillar.get']('elastic:features', False) %}
{%- set TRUECLUSTER = salt['pillar.get']('elasticsearch:true_cluster', False) %}
{%- if TRUECLUSTER is sameas true %}
{%- set ESCLUSTERNAME = salt['pillar.get']('elasticsearch:true_cluster_name') %}
@@ -10,12 +9,6 @@
{%- set NODE_ROLES = salt['pillar.get']('elasticsearch:node_roles', ['data', 'ingest']) %}
cluster.name: "{{ ESCLUSTERNAME }}"
network.host: 0.0.0.0
-# minimum_master_nodes need to be explicitly set when bound on a public IP
-# set to 1 to allow single node clusters
-# Details: https://github.com/elastic/elasticsearch/pull/17288
-#discovery.zen.minimum_master_nodes: 1
-# This is a test -- if this is here, then the volume is mounted correctly.
path.logs: /var/log/elasticsearch
action.destructive_requires_name: true
transport.bind_host: 0.0.0.0
@@ -25,27 +18,23 @@ cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 95%
cluster.routing.allocation.disk.watermark.high: 98%
cluster.routing.allocation.disk.watermark.flood_stage: 98%
-{%- if FEATURES is sameas true %}
xpack.ml.enabled: false
-#xpack.security.enabled: false
-#xpack.security.transport.ssl.enabled: true
-#xpack.security.transport.ssl.verification_mode: none
-#xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
-#xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
-#xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/ca.crt" ]
-#{%- if grains['role'] in ['so-node','so-heavynode'] %}
-#xpack.security.http.ssl.enabled: true
-#xpack.security.http.ssl.client_authentication: none
-#xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
-#xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
-#xpack.security.http.ssl.certificate_authorities: /usr/share/elasticsearch/config/ca.crt
-#{%- endif %}
-#xpack.security.authc:
-# anonymous:
-# username: anonymous_user
-# roles: superuser
-# authz_exception: true
-{%- endif %}
+xpack.security.enabled: true
+xpack.security.transport.ssl.enabled: true
+xpack.security.transport.ssl.verification_mode: none
+xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
+xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
+xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/ca.crt" ]
+xpack.security.http.ssl.enabled: true
+xpack.security.http.ssl.client_authentication: none
+xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
+xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
+xpack.security.http.ssl.certificate_authorities: /usr/share/elasticsearch/config/ca.crt
+xpack.security.authc:
+  anonymous:
+    username: anonymous_user
+    roles: superuser
+    authz_exception: true
node.name: {{ grains.host }}
script.max_compilations_rate: 1000/1m
{%- if TRUECLUSTER is sameas true %}
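With anonymous requests mapped to the superuser role, the cluster should answer REST calls over HTTPS without credentials; a hedged smoke test against a local node:

curl -k "https://localhost:9200/_cluster/health?pretty"   # -k because the cert chains to the internal CA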

@@ -32,8 +32,6 @@
{ "rename": { "field": "category", "target_field": "event.category", "ignore_failure": true, "ignore_missing": true } }, { "rename": { "field": "category", "target_field": "event.category", "ignore_failure": true, "ignore_missing": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } }, { "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_failure": true, "ignore_missing": true } },
{ "lowercase": { "field": "event.dataset", "ignore_failure": true, "ignore_missing": true } }, { "lowercase": { "field": "event.dataset", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "destination.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "source.port", "type": "integer", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "log.id.uid", "type": "string", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "log.id.uid", "type": "string", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "agent.id", "type": "string", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "agent.id", "type": "string", "ignore_failure": true, "ignore_missing": true } },
{ "convert": { "field": "event.severity", "type": "integer", "ignore_failure": true, "ignore_missing": true } }, { "convert": { "field": "event.severity", "type": "integer", "ignore_failure": true, "ignore_missing": true } },

@@ -0,0 +1,70 @@
{
"description" : "http.status",
"processors" : [
{ "set": { "if": "ctx.http.status_code == 100", "field": "http.status_message", "value": "Continue" } },
{ "set": { "if": "ctx.http.status_code == 101", "field": "http.status_message", "value": "Switching Protocols" } },
{ "set": { "if": "ctx.http.status_code == 102", "field": "http.status_message", "value": "Processing" } },
{ "set": { "if": "ctx.http.status_code == 103", "field": "http.status_message", "value": "Early Hints" } },
{ "set": { "if": "ctx.http.status_code == 200", "field": "http.status_message", "value": "OK" } },
{ "set": { "if": "ctx.http.status_code == 201", "field": "http.status_message", "value": "Created" } },
{ "set": { "if": "ctx.http.status_code == 202", "field": "http.status_message", "value": "Accepted" } },
{ "set": { "if": "ctx.http.status_code == 203", "field": "http.status_message", "value": "Non-Authoritative Information" } },
{ "set": { "if": "ctx.http.status_code == 204", "field": "http.status_message", "value": "No Content" } },
{ "set": { "if": "ctx.http.status_code == 205", "field": "http.status_message", "value": "Reset Content" } },
{ "set": { "if": "ctx.http.status_code == 206", "field": "http.status_message", "value": "Partial Content" } },
{ "set": { "if": "ctx.http.status_code == 207", "field": "http.status_message", "value": "Multi-Status" } },
{ "set": { "if": "ctx.http.status_code == 208", "field": "http.status_message", "value": "Already Reported" } },
{ "set": { "if": "ctx.http.status_code == 226", "field": "http.status_message", "value": "IM Used" } },
{ "set": { "if": "ctx.http.status_code == 300", "field": "http.status_message", "value": "Multiple Choices" } },
{ "set": { "if": "ctx.http.status_code == 301", "field": "http.status_message", "value": "Moved Permanently" } },
{ "set": { "if": "ctx.http.status_code == 302", "field": "http.status_message", "value": "Found" } },
{ "set": { "if": "ctx.http.status_code == 303", "field": "http.status_message", "value": "See Other" } },
{ "set": { "if": "ctx.http.status_code == 304", "field": "http.status_message", "value": "Not Modified" } },
{ "set": { "if": "ctx.http.status_code == 305", "field": "http.status_message", "value": "Use Proxy" } },
{ "set": { "if": "ctx.http.status_code == 306", "field": "http.status_message", "value": "(Unused)" } },
{ "set": { "if": "ctx.http.status_code == 307", "field": "http.status_message", "value": "Temporary Redirect" } },
{ "set": { "if": "ctx.http.status_code == 308", "field": "http.status_message", "value": "Permanent Redirect" } },
{ "set": { "if": "ctx.http.status_code == 400", "field": "http.status_message", "value": "Bad Request" } },
{ "set": { "if": "ctx.http.status_code == 401", "field": "http.status_message", "value": "Unauthorized" } },
{ "set": { "if": "ctx.http.status_code == 402", "field": "http.status_message", "value": "Payment Required" } },
{ "set": { "if": "ctx.http.status_code == 403", "field": "http.status_message", "value": "Forbidden" } },
{ "set": { "if": "ctx.http.status_code == 404", "field": "http.status_message", "value": "Not Found" } },
{ "set": { "if": "ctx.http.status_code == 405", "field": "http.status_message", "value": "Method Not Allowed" } },
{ "set": { "if": "ctx.http.status_code == 406", "field": "http.status_message", "value": "Not Acceptable" } },
{ "set": { "if": "ctx.http.status_code == 407", "field": "http.status_message", "value": "Proxy Authentication Required" } },
{ "set": { "if": "ctx.http.status_code == 408", "field": "http.status_message", "value": "Request Timeout" } },
{ "set": { "if": "ctx.http.status_code == 409", "field": "http.status_message", "value": "Conflict" } },
{ "set": { "if": "ctx.http.status_code == 410", "field": "http.status_message", "value": "Gone" } },
{ "set": { "if": "ctx.http.status_code == 411", "field": "http.status_message", "value": "Length Required" } },
{ "set": { "if": "ctx.http.status_code == 412", "field": "http.status_message", "value": "Precondition Failed" } },
{ "set": { "if": "ctx.http.status_code == 413", "field": "http.status_message", "value": "Payload Too Large" } },
{ "set": { "if": "ctx.http.status_code == 414", "field": "http.status_message", "value": "URI Too Long" } },
{ "set": { "if": "ctx.http.status_code == 415", "field": "http.status_message", "value": "Unsupported Media Type" } },
{ "set": { "if": "ctx.http.status_code == 416", "field": "http.status_message", "value": "Range Not Satisfiable" } },
{ "set": { "if": "ctx.http.status_code == 417", "field": "http.status_message", "value": "Expectation Failed" } },
{ "set": { "if": "ctx.http.status_code == 421", "field": "http.status_message", "value": "Misdirected Request" } },
{ "set": { "if": "ctx.http.status_code == 422", "field": "http.status_message", "value": "Unprocessable Entity" } },
{ "set": { "if": "ctx.http.status_code == 423", "field": "http.status_message", "value": "Locked" } },
{ "set": { "if": "ctx.http.status_code == 424", "field": "http.status_message", "value": "Failed Dependency" } },
{ "set": { "if": "ctx.http.status_code == 425", "field": "http.status_message", "value": "Too Early" } },
{ "set": { "if": "ctx.http.status_code == 426", "field": "http.status_message", "value": "Upgrade Required" } },
{ "set": { "if": "ctx.http.status_code == 427", "field": "http.status_message", "value": "Unassigned" } },
{ "set": { "if": "ctx.http.status_code == 428", "field": "http.status_message", "value": "Precondition Required" } },
{ "set": { "if": "ctx.http.status_code == 429", "field": "http.status_message", "value": "Too Many Requests" } },
{ "set": { "if": "ctx.http.status_code == 430", "field": "http.status_message", "value": "Unassigned" } },
{ "set": { "if": "ctx.http.status_code == 431", "field": "http.status_message", "value": "Request Header Fields Too Large" } },
{ "set": { "if": "ctx.http.status_code == 451", "field": "http.status_message", "value": "Unavailable For Legal Reasons" } },
{ "set": { "if": "ctx.http.status_code == 500", "field": "http.status_message", "value": "Internal Server Error" } },
{ "set": { "if": "ctx.http.status_code == 501", "field": "http.status_message", "value": "Not Implemented" } },
{ "set": { "if": "ctx.http.status_code == 502", "field": "http.status_message", "value": "Bad Gateway" } },
{ "set": { "if": "ctx.http.status_code == 503", "field": "http.status_message", "value": "Service Unavailable" } },
{ "set": { "if": "ctx.http.status_code == 504", "field": "http.status_message", "value": "Gateway Timeout" } },
{ "set": { "if": "ctx.http.status_code == 505", "field": "http.status_message", "value": "HTTP Version Not Supported" } },
{ "set": { "if": "ctx.http.status_code == 506", "field": "http.status_message", "value": "Variant Also Negotiates" } },
{ "set": { "if": "ctx.http.status_code == 507", "field": "http.status_message", "value": "Insufficient Storage" } },
{ "set": { "if": "ctx.http.status_code == 508", "field": "http.status_message", "value": "Loop Detected" } },
{ "set": { "if": "ctx.http.status_code == 509", "field": "http.status_message", "value": "Unassigned" } },
{ "set": { "if": "ctx.http.status_code == 510", "field": "http.status_message", "value": "Not Extended" } },
{ "set": { "if": "ctx.http.status_code == 511", "field": "http.status_message", "value": "Network Authentication Required" } }
]
}
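Pipelines like this are easy to spot-check with the ingest simulate API. A sketch, assuming the pipeline is loaded under the id http.status (the pipeline loader shown later in this changeset uses the file name as the id):

curl -k -XPOST "https://localhost:9200/_ingest/pipeline/http.status/_simulate" \
  -H 'Content-Type: application/json' \
  -d '{"docs":[{"_source":{"http":{"status_code":404}}}]}'
# expected: the returned doc gains http.status_message: "Not Found"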

@@ -0,0 +1,16 @@
{
"description" : "osquery live query",
"processors" : [
{
"script": {
"lang": "painless",
"source": "def dict = ['columns': new HashMap()]; for (entry in ctx['rows'].entrySet()) { dict['columns'][entry.getKey()] = entry.getValue(); } ctx['result'] = dict; "
}
},
{ "remove": { "field": [ "rows" ], "ignore_missing": true, "ignore_failure": true } },
{ "rename": { "field": "distributed_query_execution_id", "target_field": "result.query_id", "ignore_missing": true } },
{ "rename": { "field": "computer_name", "target_field": "host.hostname", "ignore_missing": true } },
{ "pipeline": { "name": "osquery.normalize" } },
{ "pipeline": { "name": "common" } }
]
}

@@ -0,0 +1,14 @@
{
"description" : "osquery normalize",
"processors" : [
{ "rename": { "field": "result.columns.cmdline", "target_field": "process.command_line", "ignore_missing": true } },
{ "rename": { "field": "result.columns.cwd", "target_field": "process.working_directory", "ignore_missing": true } },
{ "rename": { "field": "result.columns.name", "target_field": "process.name", "ignore_missing": true } },
{ "rename": { "field": "result.columns.path", "target_field": "process.executable", "ignore_missing": true } },
{ "rename": { "field": "result.columns.pid", "target_field": "process.pid", "ignore_missing": true } },
{ "rename": { "field": "result.columns.parent", "target_field": "process.ppid", "ignore_missing": true } },
{ "rename": { "field": "result.columns.uid", "target_field": "user.id", "ignore_missing": true } },
{ "rename": { "field": "result.columns.username", "target_field": "user.name", "ignore_missing": true } },
{ "rename": { "field": "result.columns.gid", "target_field": "group.id", "ignore_missing": true } }
]
}

@@ -1,24 +1,19 @@
{
  "description" : "osquery",
  "processors" : [
-    { "json": { "field": "message", "target_field": "message2", "ignore_failure": true } },
+    { "json": { "field": "message", "target_field": "result", "ignore_failure": true } },
-    { "gsub": { "field": "message2.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } },
+    { "gsub": { "field": "result.columns.data", "pattern": "\\\\xC2\\\\xAE", "replacement": "", "ignore_missing": true } },
-    { "rename": { "if": "ctx.message2.columns?.eventid != null", "field": "message2.columns", "target_field": "winlog", "ignore_missing": true } },
+    { "rename": { "if": "ctx.result.columns?.eventid != null", "field": "result.columns", "target_field": "winlog", "ignore_missing": true } },
    { "json": { "field": "winlog.data", "target_field": "unparsed", "ignore_failure": true} },
    { "set": { "if": "!(ctx.unparsed?.EventData instanceof Map)", "field": "error.eventdata_parsing", "value": true, "ignore_failure": true } },
    { "rename": { "if": "!(ctx.error?.eventdata_parsing == true)", "field": "unparsed.EventData", "target_field": "winlog.event_data", "ignore_missing": true, "ignore_failure": true } },
    { "rename": { "field": "winlog.source", "target_field": "winlog.channel", "ignore_missing": true } },
    { "rename": { "field": "winlog.eventid", "target_field": "winlog.event_id", "ignore_missing": true } },
    { "pipeline": { "if": "ctx.winlog?.channel == 'Microsoft-Windows-Sysmon/Operational'", "name": "sysmon" } },
-    { "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational'", "name":"win.eventlogs" } },
+    { "pipeline": { "if": "ctx.winlog?.channel != 'Microsoft-Windows-Sysmon/Operational' && ctx.containsKey('winlog')", "name":"win.eventlogs" } },
-    {
-      "script": {
-        "lang": "painless",
-        "source": "def dict = ['result': new HashMap()]; for (entry in ctx['message2'].entrySet()) { dict['result'][entry.getKey()] = entry.getValue(); } ctx['osquery'] = dict; "
-      }
-    },
    { "set": { "field": "event.module", "value": "osquery", "override": false } },
-    { "set": { "field": "event.dataset", "value": "{{osquery.result.name}}", "override": false} },
+    { "set": { "field": "event.dataset", "value": "{{result.name}}", "override": false} },
+    { "pipeline": { "if": "!(ctx.containsKey('winlog'))", "name": "osquery.normalize" } },
    { "pipeline": { "name": "common" } }
  ]
}

@@ -1,13 +1,14 @@
{
  "description" : "suricata.dhcp",
  "processors" : [
    { "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_missing": true } },
    { "rename": { "field": "message2.app_proto", "target_field": "network.protocol", "ignore_missing": true } },
    { "rename": { "field": "message2.dhcp.assigned_ip", "target_field": "dhcp.assigned_ip", "ignore_missing": true } },
+    { "rename": { "field": "message2.dhcp.client_ip", "target_field": "client.address", "ignore_missing": true } },
    { "rename": { "field": "message2.dhcp.client_mac", "target_field": "host.mac", "ignore_missing": true } },
    { "rename": { "field": "message2.dhcp.dhcp_type", "target_field": "dhcp.message_types", "ignore_missing": true } },
-    { "rename": { "field": "message2.dhcp.assigned_ip", "target_field": "dhcp.assigned_ip", "ignore_missing": true } },
+    { "rename": { "field": "message2.dhcp.hostname", "target_field": "host.hostname", "ignore_missing": true } },
    { "rename": { "field": "message2.dhcp.type", "target_field": "dhcp.type", "ignore_missing": true } },
    { "rename": { "field": "message2.dhcp.id", "target_field": "dhcp.id", "ignore_missing": true } },
    { "pipeline": { "name": "common" } }
  ]

@@ -1,17 +1,18 @@
{
  "description" : "suricata.http",
  "processors" : [
    { "rename": { "field": "message2.proto", "target_field": "network.transport", "ignore_missing": true } },
    { "rename": { "field": "message2.app_proto", "target_field": "network.protocol", "ignore_missing": true } },
    { "rename": { "field": "message2.http.hostname", "target_field": "http.virtual_host", "ignore_missing": true } },
    { "rename": { "field": "message2.http.http_user_agent", "target_field": "http.useragent", "ignore_missing": true } },
    { "rename": { "field": "message2.http.url", "target_field": "http.uri", "ignore_missing": true } },
    { "rename": { "field": "message2.http.http_content_type", "target_field": "file.resp_mime_types", "ignore_missing": true } },
    { "rename": { "field": "message2.http.http_refer", "target_field": "http.referrer", "ignore_missing": true } },
    { "rename": { "field": "message2.http.http_method", "target_field": "http.method", "ignore_missing": true } },
    { "rename": { "field": "message2.http.protocol", "target_field": "http.version", "ignore_missing": true } },
    { "rename": { "field": "message2.http.status", "target_field": "http.status_code", "ignore_missing": true } },
    { "rename": { "field": "message2.http.length", "target_field": "http.request.body.length", "ignore_missing": true } },
+    { "pipeline": { "if": "ctx.http?.status_code != null", "name": "http.status" } },
    { "pipeline": { "name": "common" } }
  ]
}

@@ -27,11 +27,7 @@ echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 240 ]]; do
-  {% if grains['role'] in ['so-node','so-heavynode'] %}
  curl ${ELASTICSEARCH_AUTH} -k --output /dev/null --silent --head --fail -L https://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
-  {% else %}
-  curl ${ELASTICSEARCH_AUTH} --output /dev/null --silent --head --fail -L http://"$ELASTICSEARCH_HOST":"$ELASTICSEARCH_PORT"
-  {% endif %}
  if [ $? -eq 0 ]; then
    ELASTICSEARCH_CONNECTED="yes"
    echo "connected!"
@@ -51,11 +47,7 @@ fi
cd ${ELASTICSEARCH_INGEST_PIPELINES}
echo "Loading pipelines..."

-{% if grains['role'] in ['so-node','so-heavynode'] %}
for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -k -XPUT -L https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
-{% else %}
-for i in *; do echo $i; RESPONSE=$(curl ${ELASTICSEARCH_AUTH} -XPUT -L http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_ingest/pipeline/$i -H 'Content-Type: application/json' -d@$i 2>/dev/null); echo $RESPONSE; if [[ "$RESPONSE" == *"error"* ]]; then RETURN_CODE=1; fi; done
-{% endif %}

echo
cd - >/dev/null

@@ -1,17 +0,0 @@
keystore.path: /usr/share/elasticsearch/config/sokeys
keystore.password: changeit
keystore.algorithm: SunX509
truststore.path: /etc/pki/java/cacerts
truststore.password: changeit
truststore.algorithm: PKIX
protocols:
- TLSv1.2
ciphers:
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_256_GCM_SHA384
transport.encrypted: true
{%- if grains['role'] in ['so-node','so-heavynode'] %}
http.encrypted: true
{%- else %}
http.encrypted: false
{%- endif %}

@@ -18,17 +18,10 @@
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set MANAGER = salt['grains.get']('master') %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
{% set NODEIP = salt['pillar.get']('elasticsearch:mainip', '') -%}
{% set TRUECLUSTER = salt['pillar.get']('elasticsearch:true_cluster', False) %}
{% set MANAGERIP = salt['pillar.get']('global:managerip') %}
-{% if FEATURES is sameas true %}
-{% set FEATUREZ = "-features" %}
-{% else %}
-{% set FEATUREZ = '' %}
-{% endif %}
{% if grains['role'] in ['so-eval','so-managersearch', 'so-manager', 'so-standalone', 'so-import'] %}
{% set esclustername = salt['pillar.get']('manager:esclustername') %}
{% set esheap = salt['pillar.get']('manager:esheap') %}
@@ -147,14 +140,6 @@ esyml:
    - group: 939
    - template: jinja

-sotls:
-  file.managed:
-    - name: /opt/so/conf/elasticsearch/sotls.yml
-    - source: salt://elasticsearch/files/sotls.yml
-    - user: 930
-    - group: 939
-    - template: jinja

#sync templates to /opt/so/conf/elasticsearch/templates
{% for TEMPLATE in TEMPLATES %}
es_template_{{TEMPLATE.split('.')[0] | replace("/","_") }}:
@@ -186,7 +171,7 @@ eslogdir:
so-elasticsearch:
  docker_container.running:
-    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}{{ FEATUREZ }}
+    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-elasticsearch:{{ VERSION }}
    - hostname: elasticsearch
    - name: so-elasticsearch
    - user: elasticsearch
@@ -206,7 +191,7 @@ so-elasticsearch:
    {% if TRUECLUSTER is sameas false or (TRUECLUSTER is sameas true and not salt['pillar.get']('nodestab', {})) %}
    - discovery.type=single-node
    {% endif %}
-    - ES_JAVA_OPTS=-Xms{{ esheap }} -Xmx{{ esheap }}
+    - ES_JAVA_OPTS=-Xms{{ esheap }} -Xmx{{ esheap }} -Des.transport.cname_in_publish_address=true
    ulimits:
      - memlock=-1:-1
      - nofile=65536:65536
@@ -228,7 +213,6 @@ so-elasticsearch:
      - /etc/pki/elasticsearch.crt:/usr/share/elasticsearch/config/elasticsearch.crt:ro
      - /etc/pki/elasticsearch.key:/usr/share/elasticsearch/config/elasticsearch.key:ro
      - /etc/pki/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro
-      - /opt/so/conf/elasticsearch/sotls.yml:/usr/share/elasticsearch/config/sotls.yml:ro
    - watch:
      - file: cacertz
      - file: esyml

@@ -51,16 +51,29 @@
"match_mapping_type": "string", "match_mapping_type": "string",
"path_match": "*.ip", "path_match": "*.ip",
"mapping": { "mapping": {
"type": "ip" "type": "ip",
"fields" : {
"keyword" : {
"ignore_above" : 45,
"type" : "keyword"
}
}
} }
} }
}, },
{ {
"port": { "port": {
"match_mapping_type": "string",
"path_match": "*.port", "path_match": "*.port",
"mapping": { "mapping": {
"type": "integer" "type": "integer",
"fields" : {
"keyword" : {
"ignore_above" : 6,
"type" : "keyword"
}
}
} }
} }
}, },
@@ -365,6 +378,10 @@
"request":{ "request":{
"type":"object", "type":"object",
"dynamic": true "dynamic": true
},
"result":{
"type":"object",
"dynamic": true
}, },
"rfb":{ "rfb":{
"type":"object", "type":"object",

@@ -260,7 +260,8 @@ output.{{ type }}:
{%- if grains['role'] in ["so-eval", "so-import"] %} {%- if grains['role'] in ["so-eval", "so-import"] %}
output.elasticsearch: output.elasticsearch:
enabled: true enabled: true
hosts: ["{{ MANAGER }}:9200"] hosts: ["https://{{ MANAGER }}:9200"]
ssl.certificate_authorities: ["/usr/share/filebeat/intraca.crt"]
pipelines: pipelines:
- pipeline: "%{[module]}.%{[dataset]}" - pipeline: "%{[module]}.%{[dataset]}"
indices: indices:

@@ -13,7 +13,6 @@
# along with this program. If not, see <http://www.gnu.org/licenses/>.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls in allowed_states %}
{% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
{% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
{% set LOCALHOSTNAME = salt['grains.get']('host') %}
@@ -21,12 +20,6 @@
{% set LOCALHOSTIP = salt['grains.get']('ip_interfaces').get(MAININT)[0] %}
{% set MANAGER = salt['grains.get']('master') %}
{% set MANAGERIP = salt['pillar.get']('global:managerip', '') %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{%- if FEATURES is sameas true %}
-{% set FEATURES = "-features" %}
-{% else %}
-{% set FEATURES = '' %}
-{% endif %}

filebeatetcdir:
  file.directory:
    - name: /opt/so/conf/filebeat/etc
@@ -64,7 +57,7 @@ filebeatconfsync:
OUTPUT: {{ salt['pillar.get']('filebeat:config:output', {}) }} OUTPUT: {{ salt['pillar.get']('filebeat:config:output', {}) }}
so-filebeat: so-filebeat:
docker_container.running: docker_container.running:
- image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-filebeat:{{ VERSION }}{{ FEATURES }} - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-filebeat:{{ VERSION }}
- hostname: so-filebeat - hostname: so-filebeat
- user: root - user: root
- extra_hosts: {{ MANAGER }}:{{ MANAGERIP }},{{ LOCALHOSTNAME }}:{{ LOCALHOSTIP }} - extra_hosts: {{ MANAGER }}:{{ MANAGERIP }},{{ LOCALHOSTNAME }}:{{ LOCALHOSTIP }}

View File

@@ -26,15 +26,6 @@ iptables_fix_fwd:
     - position: 1
     - target: DOCKER-USER
-# Allow related/established sessions
-iptables_allow_established:
-  iptables.append:
-    - table: filter
-    - chain: INPUT
-    - jump: ACCEPT
-    - match: conntrack
-    - ctstate: 'RELATED,ESTABLISHED'
 # I like pings
 iptables_allow_pings:
   iptables.append:
@@ -77,17 +68,6 @@ enable_docker_user_fw_policy:
     - out-interface: docker0
     - position: 1
-enable_docker_user_established:
-  iptables.insert:
-    - table: filter
-    - chain: DOCKER-USER
-    - jump: ACCEPT
-    - in-interface: '!docker0'
-    - out-interface: docker0
-    - position: 1
-    - match: conntrack
-    - ctstate: 'RELATED,ESTABLISHED'
 {% set count = namespace(value=0) %}
 {% for chain, hg in assigned_hostgroups.chain.items() %}
 {% for hostgroup, portgroups in assigned_hostgroups.chain[chain].hostgroups.items() %}
@@ -120,6 +100,27 @@ enable_docker_user_established:
 {% endfor %}
 {% endfor %}
+# Allow related/established sessions
+iptables_allow_established:
+  iptables.insert:
+    - table: filter
+    - chain: INPUT
+    - jump: ACCEPT
+    - position: 1
+    - match: conntrack
+    - ctstate: 'RELATED,ESTABLISHED'
+enable_docker_user_established:
+  iptables.insert:
+    - table: filter
+    - chain: DOCKER-USER
+    - jump: ACCEPT
+    - in-interface: '!docker0'
+    - out-interface: docker0
+    - position: 1
+    - match: conntrack
+    - ctstate: 'RELATED,ESTABLISHED'
 # Block icmp timestamp response
 block_icmp_timestamp_reply:
   iptables.append:
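A minimal check that the reordering landed as intended, assuming default chain names; the conntrack ACCEPT rules should sit at position 1 so they are evaluated before any later DROP rules:

sudo iptables -L INPUT -n --line-numbers | head -5
sudo iptables -L DOCKER-USER -n --line-numbers | head -5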

View File

@@ -19,13 +19,12 @@
 {% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set ENGINE = salt['pillar.get']('global:mdengine', '') %}
+{% set proxy = salt['pillar.get']('manager:proxy') %}
+include:
+  - idstools.sync_files
 # IDSTools Setup
-idstoolsdir:
-  file.directory:
-    - name: /opt/so/conf/idstools/etc
-    - user: 939
-    - group: 939
-    - makedirs: True
 idstoolslogdir:
   file.directory:
@@ -34,14 +33,6 @@ idstoolslogdir:
     - group: 939
     - makedirs: True
-idstoolsetcsync:
-  file.recurse:
-    - name: /opt/so/conf/idstools/etc
-    - source: salt://idstools/etc
-    - user: 939
-    - group: 939
-    - template: jinja
 so-ruleupdatecron:
   cron.present:
     - name: /usr/sbin/so-rule-update > /opt/so/log/idstools/download.log 2>&1
@@ -49,28 +40,17 @@ so-ruleupdatecron:
     - minute: '1'
     - hour: '7'
-rulesdir:
-  file.directory:
-    - name: /opt/so/rules/nids
-    - user: 939
-    - group: 939
-    - makedirs: True
-# Don't show changes because all.rules can be large
-synclocalnidsrules:
-  file.recurse:
-    - name: /opt/so/rules/nids/
-    - source: salt://idstools/
-    - user: 939
-    - group: 939
-    - show_changes: False
-    - include_pat: 'E@.rules'
 so-idstools:
   docker_container.running:
     - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-idstools:{{ VERSION }}
     - hostname: so-idstools
     - user: socore
+    {% if proxy %}
+    - environment:
+      - http_proxy={{ proxy }}
+      - https_proxy={{ proxy }}
+      - no_proxy={{ salt['pillar.get']('manager:no_proxy') }}
+    {% endif %}
     - binds:
       - /opt/so/conf/idstools/etc:/opt/so/idstools/etc:ro
       - /opt/so/rules/nids:/opt/so/rules/nids:rw

View File

@@ -0,0 +1,46 @@
# Copyright 2014,2015,2016,2017,2018 Security Onion Solutions, LLC
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
idstoolsdir:
file.directory:
- name: /opt/so/conf/idstools/etc
- user: 939
- group: 939
- makedirs: True
idstoolsetcsync:
file.recurse:
- name: /opt/so/conf/idstools/etc
- source: salt://idstools/etc
- user: 939
- group: 939
- template: jinja
rulesdir:
file.directory:
- name: /opt/so/rules/nids
- user: 939
- group: 939
- makedirs: True
# Don't show changes because all.rules can be large
synclocalnidsrules:
file.recurse:
- name: /opt/so/rules/nids/
- source: salt://idstools/
- user: 939
- group: 939
- show_changes: False
- include_pat: 'E@.rules'

View File

@@ -1,53 +0,0 @@
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
# Wait for ElasticSearch to come up, so that we can query for version infromation
echo -n "Waiting for ElasticSearch..."
COUNT=0
ELASTICSEARCH_CONNECTED="no"
while [[ "$COUNT" -le 30 ]]; do
curl --output /dev/null --silent --head --fail -L http://{{ ES }}:9200
if [ $? -eq 0 ]; then
ELASTICSEARCH_CONNECTED="yes"
echo "connected!"
break
else
((COUNT+=1))
sleep 1
echo -n "."
fi
done
if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
echo
echo -e "Connection attempt timed out. Unable to connect to ElasticSearch. \nPlease try: \n -checking log(s) in /var/log/elasticsearch/\n -running 'sudo docker ps' \n -running 'sudo so-elastic-restart'"
echo
exit
fi
# Make sure Kibana is running
MAX_WAIT=240
# Check to see if Kibana is available
wait_step=0
until curl -s -XGET -L http://{{ ES }}:5601 > /dev/null ; do
wait_step=$(( ${wait_step} + 1 ))
echo "Waiting on Kibana...Attempt #$wait_step"
if [ ${wait_step} -gt ${MAX_WAIT} ]; then
echo "ERROR: Kibana not available for more than ${MAX_WAIT} seconds."
exit 5
fi
sleep 1s;
done
# Apply Kibana template
echo
echo "Applying Kibana template..."
curl -s -XPUT -L http://{{ ES }}:9200/_template/kibana \
-H 'Content-Type: application/json' \
-d'{"index_patterns" : ".kibana", "settings": { "number_of_shards" : 1, "number_of_replicas" : 0 }, "mappings" : { "search": {"properties": {"hits": {"type": "integer"}, "version": {"type": "integer"}}}}}'
echo
curl -s -XPUT -L "{{ ES }}:9200/.kibana/_settings" \
-H 'Content-Type: application/json' \
-d'{"index" : {"number_of_replicas" : 0}}'
echo

View File

@@ -3,6 +3,8 @@
 # {%- set FLEET_NODE = salt['pillar.get']('global:fleet_node', False) -%}
 # {%- set MANAGER = salt['pillar.get']('global:url_base', '') %}
+. /usr/sbin/so-common
 # Copy template file
 cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_objects.ndjson
@@ -14,5 +16,11 @@ cp /opt/so/conf/kibana/saved_objects.ndjson.template /opt/so/conf/kibana/saved_o
 # SOCtopus and Manager
 sed -i "s/PLACEHOLDER/{{ MANAGER }}/g" /opt/so/conf/kibana/saved_objects.ndjson
+wait_for_web_response "http://localhost:5601/app/kibana" "Elastic"
+## This hackery will be removed if using Elastic Auth ##
+# Let's snag a cookie from Kibana
+THECOOKIE=$(curl -c - -X GET http://localhost:5601/ | grep sid | awk '{print $7}')
 # Load saved objects
-curl -X POST "localhost:5601/api/saved_objects/_import?overwrite=true" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson > /dev/null 2>&1
+curl -b "sid=$THECOOKIE" -L -X POST "localhost:5601/api/saved_objects/_import?overwrite=true" -H "kbn-xsrf: true" --form file=@/opt/so/conf/kibana/saved_objects.ndjson >> /opt/so/log/kibana/misc.log

View File

@@ -1,11 +1,11 @@
 ---
 # Default Kibana configuration from kibana-docker.
 {%- set ES = salt['pillar.get']('manager:mainip', '') -%}
-{%- set FEATURES = salt['pillar.get']('elastic:features', False) %}
 server.name: kibana
 server.host: "0"
 server.basePath: /kibana
-elasticsearch.hosts: [ "http://{{ ES }}:9200" ]
+elasticsearch.hosts: [ "https://{{ ES }}:9200" ]
+elasticsearch.ssl.verificationMode: none
 #kibana.index: ".kibana"
 #elasticsearch.username: elastic
 #elasticsearch.password: changeme
@@ -14,3 +14,7 @@ elasticsearch.requestTimeout: 90000
 logging.dest: /var/log/kibana/kibana.log
 telemetry.enabled: false
 security.showInsecureClusterWarning: false
+xpack.security.authc.providers:
+  anonymous.anonymous1:
+    order: 0
+    credentials: "elasticsearch_anonymous_user"

File diff suppressed because one or more lines are too long

View File

@@ -4,12 +4,6 @@
 {% set VERSION = salt['pillar.get']('global:soversion', 'HH1.2.2') %}
 {% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{%- if FEATURES is sameas true %}
-  {% set FEATURES = "-features" %}
-{% else %}
-  {% set FEATURES = '' %}
-{% endif %}
 # Add ES Group
 kibanasearchgroup:
@@ -73,7 +67,7 @@ kibanabin:
 # Start the kibana docker
 so-kibana:
   docker_container.running:
-    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-kibana:{{ VERSION }}{{ FEATURES }}
+    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-kibana:{{ VERSION }}
     - hostname: kibana
     - user: kibana
     - environment:
@@ -100,21 +94,10 @@ kibanadashtemplate:
     - user: 932
     - group: 939
-wait_for_kibana:
-  module.run:
-    - http.wait_for_successful_query:
-      - url: "http://{{MANAGER}}:5601/api/saved_objects/_find?type=config"
-      - wait_for: 900
-    - onchanges:
-      - file: kibanadashtemplate
 so-kibana-config-load:
   cmd.run:
     - name: /usr/sbin/so-kibana-config-load
     - cwd: /opt/so
-    - onchanges:
-      - wait_for_kibana
 # Keep the setting correct
 #KibanaHappy:

View File

@@ -19,13 +19,6 @@
 {% set IMAGEREPO = salt['pillar.get']('global:imagerepo') %}
 {% set MANAGER = salt['grains.get']('master') %}
 {% set MANAGERIP = salt['pillar.get']('global:managerip') %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
-{%- if FEATURES is sameas true %}
-  {% set FEATURES = "-features" %}
-{% else %}
-  {% set FEATURES = '' %}
-{% endif %}
 # Logstash Section - Decide which pillar to use
 {% set lsheap = salt['pillar.get']('logstash_settings:lsheap', '') %}
@@ -146,7 +139,7 @@ lslogdir:
 so-logstash:
   docker_container.running:
-    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-logstash:{{ VERSION }}{{ FEATURES }}
+    - image: {{ MANAGER }}:5000/{{ IMAGEREPO }}/so-logstash:{{ VERSION }}
     - hostname: so-logstash
     - name: so-logstash
    - user: logstash

View File

@@ -0,0 +1,19 @@
{%- set MANAGER = salt['grains.get']('master') %}
{%- set THREADS = salt['pillar.get']('logstash_settings:ls_input_threads', '') %}
{% set BATCH = salt['pillar.get']('logstash_settings:ls_pipeline_batch_size', 125) %}
input {
redis {
host => '{{ MANAGER }}'
port => 6379
data_type => 'pattern_channel'
key => 'results_*'
type => 'live_query'
add_field => {
"module" => "osquery"
"dataset" => "live_query"
}
threads => {{ THREADS }}
batch_count => {{ BATCH }}
}
}
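With data_type => 'pattern_channel', this input PSUBSCRIBEs to results_*, so any message published to a matching channel is consumed. A hand-fed sketch (host, channel number, and payload are made up):

redis-cli -h manager PUBLISH results_42 '{"rows":[{"name":"osqueryd"}]}'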

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [module] =~ "zeek" and "import" not in [tags] {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-zeek"
      template => "/templates/so-zeek-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if "import" in [tags] {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-import"
      template => "/templates/so-import-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [event_type] == "sflow" {
     elasticsearch {
@@ -12,10 +11,8 @@ output {
       template_name => "so-flow"
      template => "/templates/so-flow-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [event_type] == "ids" and "import" not in [tags] {
     elasticsearch {
@@ -12,10 +11,8 @@ output {
       template_name => "so-ids"
      template => "/templates/so-ids-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [module] =~ "syslog" {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-syslog"
      template => "/templates/so-syslog-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,9 +3,8 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
-  if [module] =~ "osquery" {
+  if [module] =~ "osquery" and "live_query" not in [dataset] {
     elasticsearch {
       pipeline => "%{module}.%{dataset}"
       hosts => "{{ ES }}"
@@ -13,10 +12,8 @@ output {
       template_name => "so-osquery"
      template => "/templates/so-osquery-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -0,0 +1,41 @@
{%- if grains['role'] == 'so-eval' -%}
{%- set ES = salt['pillar.get']('manager:mainip', '') -%}
{%- else %}
{%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
{%- endif %}
{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
filter {
if [type] =~ "live_query" {
mutate {
rename => {
"[host][hostname]" => "computer_name"
}
}
prune {
blacklist_names => ["host"]
}
split {
field => "rows"
}
}
}
output {
if [type] =~ "live_query" {
elasticsearch {
pipeline => "osquery.live_query"
hosts => "{{ ES }}"
index => "so-osquery"
template_name => "so-osquery"
template => "/templates/so-osquery-template.json"
template_overwrite => true
ssl => true
ssl_certificate_verification => false
}
}
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [dataset] =~ "firewall" {
     elasticsearch {
@@ -12,10 +11,8 @@ output {
       template_name => "so-firewall"
      template => "/templates/so-firewall-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [module] =~ "suricata" and "import" not in [tags] {
     elasticsearch {
@@ -12,10 +11,8 @@ output {
       index => "so-ids"
      template_name => "so-ids"
      template => "/templates/so-ids-template.json"
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if "beat-ext" in [tags] and "import" not in [tags] {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-beats"
      template => "/templates/so-beats-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [module] =~ "ossec" {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-ossec"
      template => "/templates/so-ossec-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -3,7 +3,6 @@
 {%- else %}
 {%- set ES = salt['pillar.get']('elasticsearch:mainip', '') -%}
 {%- endif %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 output {
   if [module] =~ "strelka" {
     elasticsearch {
@@ -13,10 +12,8 @@ output {
       template_name => "so-strelka"
      template => "/templates/so-strelka-template.json"
      template_overwrite => true
-{%- if grains['role'] in ['so-node','so-heavynode'] %}
      ssl => true
      ssl_certificate_verification => false
-{%- endif %}
    }
  }
}

View File

@@ -25,8 +25,8 @@ events {
 http {
     log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                       '$status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for"';
     access_log  /var/log/nginx/access.log  main;
@@ -172,6 +172,8 @@ http {
     location / {
       auth_request /auth/sessions/whoami;
+      auth_request_set $userid $upstream_http_x_kratos_authenticated_identity_id;
+      proxy_set_header x-user-id $userid;
       proxy_pass http://{{ manager_ip }}:9822/;
       proxy_read_timeout 90;
       proxy_connect_timeout 90;
@@ -231,15 +233,15 @@ http {
     }
     {%- if airgap is sameas true %}
     location /repo/ {
        allow all;
        sendfile on;
        sendfile_max_chunk 1m;
        autoindex on;
        autoindex_exact_size off;
        autoindex_format html;
        autoindex_localtime on;
     }
     {%- endif %}
     location /grafana/ {

File diff suppressed because one or more lines are too long

View File

@@ -89,7 +89,7 @@ def run():
   # Update the Fleet host in the static pillar
   for line in fileinput.input(STATICFILE, inplace=True):
-    line = re.sub(r'fleet_custom_hostname:.*\n', f"fleet_custom_hostname: {CUSTOMHOSTNAME}", line.rstrip())
+    line = re.sub(r'fleet_custom_hostname:.*$', f"fleet_custom_hostname: {CUSTOMHOSTNAME}", line.rstrip())
     print(line)
   return {}
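The old pattern could never match: line.rstrip() removes the trailing newline, so a regex ending in \n finds nothing, while $ matches end-of-string. A self-contained illustration with a made-up value:

python3 - <<'EOF'
import re
s = 'fleet_custom_hostname: old'  # what line.rstrip() yields
print(re.sub(r'fleet_custom_hostname:.*\n', 'fleet_custom_hostname: new', s))  # unchanged
print(re.sub(r'fleet_custom_hostname:.*$', 'fleet_custom_hostname: new', s))   # replaced
EOF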

View File

@@ -1,52 +1,49 @@
 {
-  "title": "Security Onion 2.3.30 is here!",
+  "title": "Security Onion 2.3.40 is here!",
   "changes": [
-    { "summary": "Zeek is now at version 3.0.13." },
-    { "summary": "CyberChef is now at version 9.27.2." },
-    { "summary": "Elastic components are now at version 7.10.2. This is the last version that uses the Apache license." },
-    { "summary": "Suricata is now at version 6.0.1." },
-    { "summary": "Salt is now at version 3002.5." },
-    { "summary": "Suricata metadata parsing is now vastly improved." },
-    { "summary": "If you choose Suricata for metadata parsing, it will now extract files from the network and send them to Strelka. You can add additional mime types <a href='https://github.com/Security-Onion-Solutions/securityonion/blob/dev/salt/idstools/sorules/extraction.rules'>here</a>." },
-    { "summary": "It is now possible to filter Suricata events from being written to the logs. This is a new Suricata 6 feature. We have included some examples <a href='https://github.com/Security-Onion-Solutions/securityonion/blob/dev/salt/idstools/sorules/filters.rules'>here</a>." },
-    { "summary": "The Kratos docker container will now perform DNS lookups locally before reaching out to the network DNS provider." },
-    { "summary": "Network configuration is now more compatible with manually configured OpenVPN or Wireguard VPN interfaces." },
-    { "summary": "<code>so-sensor-clean</code> will no longer spawn multiple instances." },
-    { "summary": "Suricata eve.json logs will now be cleaned up after 7 days. This can be changed via the pillar setting." },
-    { "summary": "Fixed a security issue where the backup directory had improper file permissions." },
-    { "summary": "The automated backup script on the manager now backs up all keys along with the salt configurations. Backup retention is now set to 7 days." },
-    { "summary": "Strelka logs are now being rotated properly." },
-    { "summary": "Elastalert can now be customized via a pillar." },
-    { "summary": "Introduced new script <code>so-monitor-add</code> that allows the user to easily add interfaces to the bond for monitoring." },
-    { "summary": "Setup now validates all user input fields to give up-front feedback if an entered value is invalid." },
-    { "summary": "There have been several changes to improve install reliability. Many install steps have had their validation processes reworked to ensure that required tasks have been completed before moving on to the next step of the install." },
-    { "summary": "Users are now warned if they try to set <i>securityonion</i> as their hostname." },
-    { "summary": "The ISO should now identify xvda and nvme devices as install targets." },
-    { "summary": "At the end of the first stage of the ISO setup, the ISO device should properly unmount and eject." },
-    { "summary": "The text selection of choosing Suricata vs Zeek for metadata is now more descriptive." },
-    { "summary": "The logic for properly setting the <code>LOG_SIZE_LIMIT</code> variable has been improved." },
-    { "summary": "When installing on Ubuntu, Setup will now wait for cloud init to complete before trying to start the install of packages." },
-    { "summary": "The firewall state runs considerably faster now." },
-    { "summary": "ICMP timestamps are now disabled." },
-    { "summary": "Copyright dates on all Security Onion specific files have been updated." },
-    { "summary": "<code>so-tcpreplay</code> (and indirectly <code>so-test</code>) should now work properly." },
-    { "summary": "The Zeek packet loss script is now more accurate." },
-    { "summary": "Grafana now includes an estimated EPS graph for events ingested on the manager." },
-    { "summary": "Updated Elastalert to release 0.2.4-alt2 based on the <a href='https://github.com/jertel/elastalert'>jertel/elastalert</a> alt branch." },
-    { "summary": "Pivots from Alerts/Hunts to action links will properly URI encode values." },
-    { "summary": "Hunt timeline graph will properly scale the data point interval based on the search date range." },
-    { "summary": "Grid interface will properly show <i>Search</i> as the node type instead of <i>so-node</i>." },
-    { "summary": "Import node now supports airgap environments." },
-    { "summary": "The so-mysql container will now show <i>healthy</i> when viewing the docker ps output." },
-    { "summary": "The Soctopus configuration now uses private IPs instead of public IPs, allowing network communications to succeed within the grid." },
-    { "summary": "The Correlate action in Hunt now groups the OR filters together to ensure subsequent user-added filters are correctly ANDed to the entire OR group." },
-    { "summary": "Add support to <code>so-firewall</code> script to display existing port groups and host groups." },
-    { "summary": "TheHive initialization during Security Onion setup will now properly check for a running ES instance and will retry connectivity checks to TheHive before proceeding." },
-    { "summary": "Changes to the <i>.security</i> analyzer yields more accurate query results when using Playbook." },
-    { "summary": "Several Hunt queries have been updated." },
-    { "summary": "The pfSense firewall log parser has been updated to improve compatibility." },
-    { "summary": "Kibana dashboard hyperlinks have been updated for faster navigation." },
-    { "summary": "Added a new <code>so-rule</code> script to make it easier to disable, enable, and modify SIDs." },
-    { "summary": "ISO now gives the option to just configure the network during setup." }
+    { "summary": "FEATURE: Add option for HTTP Method Specification/POST to Hunt/Alerts Actions <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/2904\">#2904</a>" },
+    { "summary": "FEATURE: Add option to configure proxy for various tools used during setup + persist the proxy configuration <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/529\">#529</a>" },
+    { "summary": "FEATURE: Alerts/Hunt - Provide method for base64-encoding pivot value <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1749\">#1749</a>" },
+    { "summary": "FEATURE: Allow users to customize links in SOC <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1248\">#1248</a>" },
+    { "summary": "FEATURE: Display user who requested PCAP in SOC <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/2775\">#2775</a>" },
+    { "summary": "FEATURE: Make SOC browser app connection timeouts adjustable <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/2408\">#2408</a>" },
+    { "summary": "FEATURE: Move to FleetDM <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3483\">#3483</a>" },
+    { "summary": "FEATURE: Reduce field cache expiration from 1d to 5m, and expose value as a salt pillar <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3537\">#3537</a>" },
+    { "summary": "FEATURE: Refactor docker_clean salt state to use loop w/ inspection instead of hardcoded image list <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3113\">#3113</a>" },
+    { "summary": "FEATURE: Run so-ssh-harden during setup <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1932\">#1932</a>" },
+    { "summary": "FEATURE: SOC should only display links to tools that are enabled <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1643\">#1643</a>" },
+    { "summary": "FEATURE: Update Sigmac Osquery Field Mappings <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3137\">#3137</a>" },
+    { "summary": "FEATURE: User must accept the Elastic licence during setup <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3233\">#3233</a>" },
+    { "summary": "FEATURE: soup should output more guidance for distributed deployments at the end <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3340\">#3340</a>" },
+    { "summary": "FEATURE: soup should provide some initial information and then prompt the user to continue <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3486\">#3486</a>" },
+    { "summary": "FIX: Add cronjob for so-suricata-eve-clean script <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3515\">#3515</a>" },
+    { "summary": "FIX: Change Elasticsearch heap formula <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1686\">#1686</a>" },
+    { "summary": "FIX: Create a post install version loop in soup <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3102\">#3102</a>" },
+    { "summary": "FIX: Custom Kibana settings are not being applied properly on upgrades <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3254\">#3254</a>" },
+    { "summary": "FIX: Hunt query issues with quotes <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3320\">#3320</a>" },
+    { "summary": "FIX: IP Addresses don't work with .security <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3327\">#3327</a>" },
+    { "summary": "FIX: Improve DHCP leases query in Hunt <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3395\">#3395</a>" },
+    { "summary": "FIX: Improve Setup verbiage <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3422\">#3422</a>" },
+    { "summary": "FIX: Improve Suricata DHCP logging and parsing <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3397\">#3397</a>" },
+    { "summary": "FIX: Keep RELATED,ESTABLISHED rules at the top of iptables chains <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3288\">#3288</a>" },
+    { "summary": "FIX: Populate http.status_message field <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3408\">#3408</a>" },
+    { "summary": "FIX: Remove 'types removal' deprecation messages from elastic log. <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3345\">#3345</a>" },
+    { "summary": "FIX: Reword + fix formatting on ES data storage prompt <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3205\">#3205</a>" },
+    { "summary": "FIX: SMTP shoud read SNMP on Kibana SNMP view <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3413\">#3413</a>" },
+    { "summary": "FIX: Sensors can temporarily show offline while processing large PCAP jobs <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3279\">#3279</a>" },
+    { "summary": "FIX: Soup should log to the screen as well as to a file <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3467\">#3467</a>" },
+    { "summary": "FIX: Strelka port 57314 not immediately relinquished upon restart <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3457\">#3457</a>" },
+    { "summary": "FIX: Switch SOC to pull from fieldcaps API due to field caching changes in Kibana 7.11 <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3502\">#3502</a>" },
+    { "summary": "FIX: Syntax error in /etc/sysctl.d/99-reserved-ports.conf <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3308\">#3308</a>" },
+    { "summary": "FIX: Telegraf hardcoded to use https and is not aware of elasticsearch features <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/2061\">#2061</a>" },
+    { "summary": "FIX: Zeek Index Close and Delete Count for curator <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3274\">#3274</a>" },
+    { "summary": "FIX: so-cortex-user-add and so-cortex-user-enable use wrong pillar value for api key <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3388\">#3388</a>" },
+    { "summary": "FIX: so-rule does not completely apply change <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3289\">#3289</a>" },
+    { "summary": "FIX: soup should recheck disk space after it tries to clean up. <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3235\">#3235</a>" },
+    { "summary": "UPGRADE: Elastic 7.11.2 <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3389\">#3389</a>" },
+    { "summary": "UPGRADE: Suricata 6.0.2 <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3217\">#3217</a>" },
+    { "summary": "UPGRADE: Zeek 4 <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3216\">#3216</a>" },
+    { "summary": "UPGRADE: Zeek container to use Python 3 <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/1113\">#1113</a>" },
+    { "summary": "UPGRADE: docker-ce to latest <a href=\"https://github.com/Security-Onion-Solutions/securityonion/issues/3493\">#3493</a>" }
   ]
 }

View File

@@ -17,7 +17,7 @@
 { "name": "Connections", "description": "Connections grouped by destination country", "query": "event.dataset:conn | groupby destination.geo.country_name"},
 { "name": "Connections", "description": "Connections grouped by source country", "query": "event.dataset:conn | groupby source.geo.country_name"},
 { "name": "DCE_RPC", "description": "DCE_RPC grouped by operation", "query": "event.dataset:dce_rpc | groupby dce_rpc.operation"},
-{ "name": "DHCP", "description": "DHCP leases", "query": "event.dataset:dhcp | groupby host.hostname host.domain"},
+{ "name": "DHCP", "description": "DHCP leases", "query": "event.dataset:dhcp | groupby host.hostname client.address"},
 { "name": "DHCP", "description": "DHCP grouped by message type", "query": "event.dataset:dhcp | groupby dhcp.message_types"},
 { "name": "DNP3", "description": "DNP3 grouped by reply", "query": "event.dataset:dnp3 | groupby dnp3.fc_reply"},
 { "name": "DNS", "description": "DNS queries grouped by port", "query": "event.dataset:dns | groupby dns.query.name destination.port"},
@@ -42,6 +42,7 @@
 { "name": "MYSQL", "description": "MYSQL grouped by command", "query": "event.dataset:mysql | groupby mysql.command"},
 { "name": "NOTICE", "description": "Zeek notice logs grouped by note and message", "query": "event.dataset:notice | groupby notice.note notice.message"},
 { "name": "NTLM", "description": "NTLM grouped by computer name", "query": "event.dataset:ntlm | groupby ntlm.server.dns.name"},
+{ "name": "Osquery Live Queries", "description": "Osquery Live Query results grouped by computer name", "query": "event.dataset:live_query | groupby host.hostname"},
 { "name": "PE", "description": "PE files list", "query": "event.dataset:pe | groupby file.machine file.os file.subsystem"},
 { "name": "RADIUS", "description": "RADIUS grouped by username", "query": "event.dataset:radius | groupby user.name.keyword"},
 { "name": "RDP", "description": "RDP grouped by client name", "query": "event.dataset:rdp | groupby client.name"},

View File

@@ -1,14 +1,23 @@
 {%- set MANAGERIP = salt['pillar.get']('global:managerip', '') %}
 {%- set SENSORONIKEY = salt['pillar.get']('global:sensoronikey', '') %}
 {%- set THEHIVEKEY = salt['pillar.get']('global:hivekey', '') %}
-{%- set FEATURES = salt['pillar.get']('elastic:features', False) %}
+{%- set PLAYBOOK = salt['pillar.get']('manager:playbook', '0') %}
+{%- set THEHIVE = salt['pillar.get']('manager:thehive', '0') %}
+{%- set OSQUERY = salt['pillar.get']('manager:osquery', '0') %}
+{%- set GRAFANA = salt['pillar.get']('manager:grafana', '0') %}
 {%- set ISAIRGAP = salt['pillar.get']('global:airgap', 'False') %}
+{%- set API_TIMEOUT = salt['pillar.get']('sensoroni:api_timeout_ms', 0) %}
+{%- set WEBSOCKET_TIMEOUT = salt['pillar.get']('sensoroni:websocket_timeout_ms', 0) %}
+{%- set TIP_TIMEOUT = salt['pillar.get']('sensoroni:tip_timeout_ms', 0) %}
+{%- set CACHE_EXPIRATION = salt['pillar.get']('sensoroni:cache_expiration_ms', 0) %}
+{%- set ES_FIELDCAPS_CACHE = salt['pillar.get']('sensoroni:es_fieldcaps_cache_ms', '300000') %}
 {%- import_json "soc/files/soc/alerts.queries.json" as alerts_queries %}
 {%- import_json "soc/files/soc/alerts.actions.json" as alerts_actions %}
 {%- import_json "soc/files/soc/alerts.eventfields.json" as alerts_eventfields %}
 {%- import_json "soc/files/soc/hunt.queries.json" as hunt_queries %}
 {%- import_json "soc/files/soc/hunt.actions.json" as hunt_actions %}
 {%- import_json "soc/files/soc/hunt.eventfields.json" as hunt_eventfields %}
+{%- import_json "soc/files/soc/tools.json" as tools %}
 {%- set DNET = salt['pillar.get']('global:dockernet', '172.17.0.0') %}
 {
@@ -31,7 +40,7 @@
     "hostUrl": "http://{{ MANAGERIP }}:4434/"
   },
   "elastic": {
-    "hostUrl": "http://{{ MANAGERIP }}:9200",
+    "hostUrl": "https://{{ MANAGERIP }}:9200",
 {%- if salt['pillar.get']('nodestab', {}) %}
     "remoteHostUrls": [
 {%- for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
@@ -41,11 +50,12 @@
 {%- endif %}
     "username": "",
     "password": "",
+    "cacheMs": {{ ES_FIELDCAPS_CACHE }},
     "verifyCert": false
   },
   "sostatus": {
     "refreshIntervalMs": 30000,
-    "offlineThresholdMs": 60000
+    "offlineThresholdMs": 900000
   },
 {% if THEHIVEKEY != '' %}
   "thehive": {
@@ -67,6 +77,26 @@
   "docsUrl": "https://docs.securityonion.net/en/2.3/",
   "cheatsheetUrl": "https://github.com/Security-Onion-Solutions/securityonion-docs/raw/2.3/images/cheat-sheet/Security-Onion-Cheat-Sheet.pdf",
 {%- endif %}
+  "apiTimeoutMs": {{ API_TIMEOUT }},
+  "webSocketTimeoutMs": {{ WEBSOCKET_TIMEOUT }},
+  "tipTimeoutMs": {{ TIP_TIMEOUT }},
+  "cacheExpirationMs": {{ CACHE_EXPIRATION }},
+  "inactiveTools": [
+{%- if PLAYBOOK == 0 %}
+    "toolPlaybook",
+{%- endif %}
+{%- if THEHIVE == 0 %}
+    "toolTheHive",
+{%- endif %}
+{%- if OSQUERY == 0 %}
+    "toolFleet",
+{%- endif %}
+{%- if GRAFANA == 0 %}
+    "toolGrafana",
+{%- endif %}
+    "toolUnused"
+  ],
+  "tools": {{ tools | json }},
   "hunt": {
     "advanced": true,
     "groupItemsPerPage": 10,

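Since es_fieldcaps_cache_ms defaults to 300000 (5 minutes) above, grids that want a slower refresh could override it in the sensoroni pillar; a sketch (pillar file placement is an assumption, only the key name comes from the template):

sensoroni:
  es_fieldcaps_cache_ms: 600000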
View File

@@ -0,0 +1,9 @@
[
{ "name": "toolKibana", "description": "toolKibanaHelp", "icon": "fa-external-link-alt", "target": "so-kibana", "link": "/kibana/" },
{ "name": "toolGrafana", "description": "toolGrafanaHelp", "icon": "fa-external-link-alt", "target": "so-grafana", "link": "/grafana/d/so_overview" },
{ "name": "toolCyberchef", "description": "toolCyberchefHelp", "icon": "fa-external-link-alt", "target": "so-cyberchef", "link": "/cyberchef/" },
{ "name": "toolPlaybook", "description": "toolPlaybookHelp", "icon": "fa-external-link-alt", "target": "so-playbook", "link": "/playbook/projects/detection-playbooks/issues/" },
{ "name": "toolFleet", "description": "toolFleetHelp", "icon": "fa-external-link-alt", "target": "so-fleet", "link": "/fleet/" },
{ "name": "toolTheHive", "description": "toolTheHiveHelp", "icon": "fa-external-link-alt", "target": "so-thehive", "link": "/thehive/" },
{ "name": "toolNavigator", "description": "toolNavigatorHelp", "icon": "fa-external-link-alt", "target": "so-navigator", "link": "/navigator/" }
]

View File

@@ -6,7 +6,7 @@
 [es]
-es_url = http://{{MANAGER}}:9200
+es_url = https://{{MANAGER}}:9200
 es_ip = {{MANAGER}}
 es_user = YOURESUSER
 es_pass = YOURESPASS

View File

@@ -19,7 +19,8 @@ files:
     - '/nsm/strelka/unprocessed/*'
   delete: false
   gatekeeper: true
+  processed: '/nsm/strelka/processed'
 response:
   report: 5s
   delta: 5s
-staging: '/nsm/strelka/processed'
+staging: '/nsm/strelka/staging'

View File

@@ -86,6 +86,13 @@ strelkaprocessed:
     - group: 939
     - makedirs: True
+strelkastaging:
+  file.directory:
+    - name: /nsm/strelka/staging
+    - user: 939
+    - group: 939
+    - makedirs: True
 strelkaunprocessed:
   file.directory:
     - name: /nsm/strelka/unprocessed
@@ -96,7 +103,7 @@ strelkaunprocessed:
 # Check to see if Strelka frontend port is available
 strelkaportavailable:
   cmd.run:
-    - name: netstat -utanp | grep ":57314" | grep -qv docker && PROCESS=$(netstat -utanp | grep ":57314" | uniq) && echo "Another process ($PROCESS) appears to be using port 57314. Please terminate this process, or reboot to ensure a clean state so that Strelka can start properly." && exit 1 || exit 0
+    - name: netstat -utanp | grep ":57314" | grep -qvE 'docker|TIME_WAIT' && PROCESS=$(netstat -utanp | grep ":57314" | uniq) && echo "Another process ($PROCESS) appears to be using port 57314. Please terminate this process, or reboot to ensure a clean state so that Strelka can start properly." && exit 1 || exit 0
 strelka_coordinator:
   docker_container.running:
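The tightened guard above now ignores sockets held by docker or lingering in TIME_WAIT; run by hand, the same pipeline looks like this:

netstat -utanp | grep ":57314" | grep -qvE 'docker|TIME_WAIT' && echo "port 57314 still busy" || echo "ok to start Strelka"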

View File

@@ -179,6 +179,26 @@ disable_so-suricata_so-status.conf:
     - month: '*'
     - dayweek: '*'
+so-suricata-eve-clean:
+  file.managed:
+    - name: /usr/sbin/so-suricata-eve-clean
+    - user: root
+    - group: root
+    - mode: 755
+    - template: jinja
+    - source: salt://suricata/cron/so-suricata-eve-clean
+# Add eve clean cron
+clean_suricata_eve_files:
+  cron.present:
+    - name: /usr/sbin/so-suricata-eve-clean > /dev/null 2>&1
+    - user: root
+    - minute: '*/5'
+    - hour: '*'
+    - daymonth: '*'
+    - month: '*'
+    - dayweek: '*'
 {% else %}
 {{sls}}_state_not_allowed:

View File

@@ -61,7 +61,7 @@ suricata:
       - sip
       - dhcp:
           enabled: "yes"
-          # extended: "no"
+          extended: "yes"
       - ssh
       #- stats:
       #    totals: "yes"

View File

@@ -618,11 +618,8 @@
 # # Read stats from one or more Elasticsearch servers or clusters
 {% if grains['role'] in ['so-manager', 'so-eval', 'so-managersearch', 'so-standalone'] %}
 [[inputs.elasticsearch]]
-#   ## specify a list of one or more Elasticsearch servers
-#   # you can add username and password to your url to use basic authentication:
-#   # servers = ["http://user:pass@localhost:9200"]
-  servers = ["http://{{ MANAGER }}:9200"]
+  servers = ["https://{{ MANAGER }}:9200"]
+  insecure_skip_verify = true
 {% elif grains['role'] in ['so-node', 'so-hotnode', 'so-warmnode', 'so-heavynode'] %}
 [[inputs.elasticsearch]]
   servers = ["https://{{ NODEIP }}:9200"]

View File

@@ -1,7 +1,6 @@
 #!/bin/bash
 {% set ES = salt['pillar.get']('manager:mainip', '') %}
 {% set MANAGER = salt['grains.get']('master') %}
-{% set FEATURES = salt['pillar.get']('elastic:features', False) %}
 {% set TRUECLUSTER = salt['pillar.get']('elasticsearch:true_cluster', False) %}
 # Wait for ElasticSearch to come up, so that we can query for version infromation
@@ -9,7 +8,7 @@ echo -n "Waiting for ElasticSearch..."
 COUNT=0
 ELASTICSEARCH_CONNECTED="no"
 while [[ "$COUNT" -le 30 ]]; do
-  curl --output /dev/null --silent --head --fail -L http://{{ ES }}:9200
+  curl -k --output /dev/null --silent --head --fail -L https://{{ ES }}:9200
   if [ $? -eq 0 ]; then
     ELASTICSEARCH_CONNECTED="yes"
     echo "connected!"
@@ -29,7 +28,7 @@ if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
 fi
 echo "Applying cross cluster search config..."
-curl -s -XPUT -L http://{{ ES }}:9200/_cluster/settings \
+curl -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings \
   -H 'Content-Type: application/json' \
   -d "{\"persistent\": {\"search\": {\"remote\": {\"{{ MANAGER }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"
@@ -37,7 +36,7 @@ echo "Applying cross cluster search config..."
 {%- if TRUECLUSTER is sameas false %}
 {%- if salt['pillar.get']('nodestab', {}) %}
 {%- for SN, SNDATA in salt['pillar.get']('nodestab', {}).items() %}
-curl -XPUT -L http://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SN.split('_')|first }}:9300"]}}}}}'
+curl -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings -H'Content-Type: application/json' -d '{"persistent": {"search": {"remote": {"{{ SN }}": {"skip_unavailable": "true", "seeds": ["{{ SN.split('_')|first }}:9300"]}}}}}'
 {%- endfor %}
 {%- endif %}
 {%- endif %}
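A possible follow-up check, not part of the script itself: ask Elasticsearch which remote clusters it now knows about and confirm the seeds registered:

curl -s -k https://localhost:9200/_remote/info?pretty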

View File

@@ -6,7 +6,7 @@ echo -n "Waiting for ElasticSearch..."
 COUNT=0
 ELASTICSEARCH_CONNECTED="no"
 while [[ "$COUNT" -le 30 ]]; do
-  curl --output /dev/null --silent --head --fail -L http://{{ ES }}:9200
+  curl -k --output /dev/null --silent --head --fail -L https://{{ ES }}:9200
   if [ $? -eq 0 ]; then
     ELASTICSEARCH_CONNECTED="yes"
     echo "connected!"
@@ -26,6 +26,6 @@ if [ "$ELASTICSEARCH_CONNECTED" == "no" ]; then
 fi
 echo "Applying cross cluster search config..."
-curl -s -XPUT -L http://{{ ES }}:9200/_cluster/settings \
+curl -s -k -XPUT -L https://{{ ES }}:9200/_cluster/settings \
   -H 'Content-Type: application/json' \
   -d "{\"persistent\": {\"search\": {\"remote\": {\"{{ grains.host }}\": {\"seeds\": [\"127.0.0.1:9300\"]}}}}}"

View File

@@ -1,3 +1,4 @@
+{% set proxy = salt['pillar.get']('manager:proxy') -%}
 [main]
 cachedir=/var/cache/yum/$basearch/$releasever
 keepcache=0
@@ -11,7 +12,8 @@ installonly_limit={{ salt['pillar.get']('yum:config:installonly_limit', 2) }}
 bugtracker_url=http://bugs.centos.org/set_project.php?project_id=23&ref=http://bugs.centos.org/bug_report_page.php?category=yum
 distroverpkg=centos-release
 clean_requirements_on_remove=1
-{% if (grains['role'] not in ['so-eval','so-managersearch', 'so-manager', 'so-standalone']) and salt['pillar.get']('global:managerupdate', '0') -%}
+{% if (grains['role'] not in ['so-eval','so-managersearch', 'so-manager', 'so-standalone']) and salt['pillar.get']('global:managerupdate', '0') %}
 proxy=http://{{ salt['pillar.get']('yum:config:proxy', salt['config.get']('master')) }}:3142
+{% elif proxy -%}
+proxy={{ proxy }}
 {% endif %}
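For illustration, when manager:proxy is set in the pillar (URL borrowed from the test configs later in this changeset), the rendered yum.conf on a non-manager node would gain a line roughly like:

proxy=http://onionuser:0n10nus3r@10.66.166.30:3128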

View File

@@ -55,7 +55,7 @@ MSRVIP=10.66.166.42
 # NODE_ES_HEAP_SIZE=
 # NODE_LS_HEAP_SIZE=
 NODESETUP=NODEBASIC
-NSMSETUP=BASIC
+NSMSETUP=ADVANCED
 NODEUPDATES=MANAGER
 # OINKCODE=
 # OSQUERY=1

View File

@@ -50,12 +50,12 @@ MNIC=eth0
 # MSEARCH=
 MSRV=distributed-manager
 MSRVIP=10.66.166.42
-# MTU=
+MTU=1500
 # NIDS=Suricata
 # NODE_ES_HEAP_SIZE=
 # NODE_LS_HEAP_SIZE=
 # NODESETUP=NODEBASIC
-NSMSETUP=BASIC
+NSMSETUP=ADVANCED
 NODEUPDATES=MANAGER
 # OINKCODE=
 # OSQUERY=1
@@ -71,8 +71,10 @@ PATCHSCHEDULENAME=auto
 SOREMOTEPASS1=onionuser
 SOREMOTEPASS2=onionuser
 # STRELKA=1
+SURIPINS=(2 3)
 # THEHIVE=1
 # WAZUH=1
 # WEBUSER=onionuser@somewhere.invalid
 # WEBPASSWD1=0n10nus3r
 # WEBPASSWD2=0n10nus3r
+ZEEKPINS=(0 1)

View File

@@ -55,7 +55,7 @@ MSRVIP=10.66.166.66
 # NODE_ES_HEAP_SIZE=
 # NODE_LS_HEAP_SIZE=
 NODESETUP=NODEBASIC
-NSMSETUP=BASIC
+NSMSETUP=ADVANCED
 NODEUPDATES=MANAGER
 # OINKCODE=
 # OSQUERY=1

View File

@@ -27,7 +27,7 @@ BASICZEEK=2
 BASICSURI=2
 # BLOGS=
 BNICS=ens19
-ZEEKVERSION=ZEEK
+ZEEKVERSION=SURICATA
 # CURCLOSEDAYS=
 # EVALADVANCED=BASIC
 # GRAFANA=1
@@ -50,12 +50,12 @@ MNIC=ens18
 # MSEARCH=
 MSRV=distributed-manager
 MSRVIP=10.66.166.66
-# MTU=
+MTU=1500
 # NIDS=Suricata
 # NODE_ES_HEAP_SIZE=
 # NODE_LS_HEAP_SIZE=
 # NODESETUP=NODEBASIC
-NSMSETUP=BASIC
+NSMSETUP=ADVANCED
 NODEUPDATES=MANAGER
 # OINKCODE=
 # OSQUERY=1
@@ -71,8 +71,10 @@ PATCHSCHEDULENAME=auto
 SOREMOTEPASS1=onionuser
 SOREMOTEPASS2=onionuser
 # STRELKA=1
+SURIPINS=(2 3)
 # THEHIVE=1
 # WAZUH=1
 # WEBUSER=onionuser@somewhere.invalid
 # WEBPASSWD1=0n10nus3r
 # WEBPASSWD2=0n10nus3r
+ZEEKPINS=(0 1)

View File

@@ -0,0 +1,78 @@
+#!/bin/bash
+# Copyright 2014,2015,2016,2017,2018,2019,2020,2021 Security Onion Solutions, LLC
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+TESTING=true
+# address_type=DHCP
+ADMINUSER=onionuser
+ADMINPASS1=onionuser
+ADMINPASS2=onionuser
+ALLOW_CIDR=0.0.0.0/0
+ALLOW_ROLE=a
+BASICZEEK=2
+BASICSURI=2
+# BLOGS=
+BNICS=eth1
+ZEEKVERSION=ZEEK
+# CURCLOSEDAYS=
+# EVALADVANCED=BASIC
+GRAFANA=1
+# HELIXAPIKEY=
+HNMANAGER=10.0.0.0/8,192.168.0.0/16,172.16.0.0/12
+HNSENSOR=inherit
+HOSTNAME=standalone
+install_type=STANDALONE
+# LSINPUTBATCHCOUNT=
+# LSINPUTTHREADS=
+# LSPIPELINEBATCH=
+# LSPIPELINEWORKERS=
+MANAGERADV=BASIC
+MANAGERUPDATES=1
+# MDNS=
+# MGATEWAY=
+# MIP=
+# MMASK=
+MNIC=eth0
+# MSEARCH=
+# MSRV=
+# MTU=
+NIDS=Suricata
+# NODE_ES_HEAP_SIZE=
+# NODE_LS_HEAP_SIZE=
+NODESETUP=NODEBASIC
+NSMSETUP=BASIC
+NODEUPDATES=MANAGER
+# OINKCODE=
+OSQUERY=1
+# PATCHSCHEDULEDAYS=
+# PATCHSCHEDULEHOURS=
+PATCHSCHEDULENAME=auto
+PLAYBOOK=1
+so_proxy=http://onionuser:0n10nus3r@10.66.166.30:3128
+# REDIRECTHOST=
+REDIRECTINFO=IP
+RULESETUP=ETOPEN
+# SHARDCOUNT=
+# SKIP_REBOOT=
+SOREMOTEPASS1=onionuser
+SOREMOTEPASS2=onionuser
+STRELKA=1
+THEHIVE=1
+WAZUH=1
+WEBUSER=onionuser@somewhere.invalid
+WEBPASSWD1=0n10nus3r
+WEBPASSWD2=0n10nus3r
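
The new file above is an unattended answer file: TESTING=true short-circuits the whiptail prompts (each whiptail_* helper returns immediately when it is set), and every variable pre-answers one setup question, including the new so_proxy value with credentials embedded in the URL. A sketch of the assumed mechanism, with a hypothetical file name:

    # Hypothetical: variables sourced from an answer file stand in for interactive input
    source ./standalone-proxy-answers.conf
    [[ -n $TESTING ]] && echo "prompts suppressed; using preseeded values"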

View File

@@ -1,2 +0,0 @@
-[Service]
-ExecStart=/usr/bin/dockerd -H fd:// --registry-mirror "$proxy_addr"

View File

@@ -535,6 +535,56 @@ collect_patch_schedule_name_import() {
   done
 }
+collect_proxy() {
+  [[ -n $TESTING ]] && return
+  collect_proxy_details
+  while ! proxy_validate; do
+    if whiptail_invalid_proxy; then
+      collect_proxy_details no_ask
+    else
+      so_proxy=""
+      break
+    fi
+  done
+}
+collect_proxy_details() {
+  local ask=${1:-true}
+  local use_proxy
+  if [[ $ask != true ]]; then
+    use_proxy=0
+  else
+    whiptail_proxy_ask
+    use_proxy=$?
+  fi
+  if [[ $use_proxy == 0 ]]; then
+    whiptail_proxy_addr "$proxy_addr"
+    while ! valid_proxy "$proxy_addr"; do
+      whiptail_invalid_input
+      whiptail_proxy_addr "$proxy_addr"
+    done
+    if whiptail_proxy_auth_ask; then
+      whiptail_proxy_auth_user "$proxy_user"
+      whiptail_proxy_auth_pass "$proxy_pass"
+      local url_prefixes=( 'http://' 'https://' )
+      for prefix in "${url_prefixes[@]}"; do
+        if echo "$proxy_addr" | grep -q "$prefix"; then
+          local proxy=${proxy_addr#"$prefix"}
+          so_proxy="${prefix}${proxy_user}:${proxy_pass}@${proxy}"
+          break
+        fi
+      done
+    else
+      so_proxy="$proxy_addr"
+    fi
+    export proxy
+  fi
+}
 collect_redirect_host() {
   whiptail_set_redirect_host "$HOSTNAME"
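
collect_proxy_details splices optional credentials into the proxy URL by stripping the scheme prefix, then re-prefixing it around user:pass@. The same string surgery in miniature, with sample values that are not from setup:

    proxy_addr='https://your.proxy.com:1234'; proxy_user='user'; proxy_pass='pass'
    prefix='https://'
    proxy=${proxy_addr#"$prefix"}                             # -> your.proxy.com:1234
    so_proxy="${prefix}${proxy_user}:${proxy_pass}@${proxy}"
    echo "$so_proxy"                                          # -> https://user:pass@your.proxy.com:1234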
@@ -691,10 +741,10 @@ check_requirements() {
     else
       req_storage=100
     fi
-    if (( $(echo "$free_space_root < $req_storage" | bc -l) )); then
+    if [[ $free_space_root -lt $req_storage ]]; then
       whiptail_storage_requirements "/" "${free_space_root} GB" "${req_storage} GB"
     fi
-    if (( $(echo "$free_space_nsm < $req_storage" | bc -l) )); then
+    if [[ $free_space_nsm -lt $req_storage ]]; then
       whiptail_storage_requirements "/nsm" "${free_space_nsm} GB" "${req_storage} GB"
     fi
   else
@@ -703,7 +753,7 @@ check_requirements() {
     else
       req_storage=200
     fi
-    if (( $(echo "$free_space_root < $req_storage" | bc -l) )); then
+    if [[ $free_space_root -lt $req_storage ]]; then
       whiptail_storage_requirements "/" "${free_space_root} GB" "${req_storage} GB"
     fi
   fi
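
The storage checks above drop the bc dependency: the old code piped a floating-point comparison through bc, while the new [[ ... -lt ... ]] test does integer comparison natively in bash (bc is also removed from the prerequisite packages later in this changeset). A quick illustration:

    free_space_root=42; req_storage=100
    if [[ $free_space_root -lt $req_storage ]]; then
        echo "/ has ${free_space_root} GB free but ${req_storage} GB is required"
    fi

The trade-off is that [[ -lt ]] only accepts whole numbers, so a value such as 99.5 would now raise a bash error instead of being compared as a float.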
@@ -743,12 +793,14 @@ check_sos_appliance() {
 compare_main_nic_ip() {
   if ! [[ $MNIC =~ ^(tun|wg|vpn).*$ ]]; then
     if [[ "$MAINIP" != "$MNIC_IP" ]]; then
+      error "[ERROR] Main gateway ($MAINIP) does not match ip address of managament NIC ($MNIC_IP)."
       read -r -d '' message <<- EOM
         The IP being routed by Linux is not the IP address assigned to the management interface ($MNIC).
         This is not a supported configuration, please remediate and rerun setup.
       EOM
-      whiptail --title "Security Onion Setup" --msgbox "$message" 10 75
+      [[ -n $TESTING ]] || whiptail --title "Security Onion Setup" --msgbox "$message" 10 75
       kill -SIGINT "$(ps --pid $$ -oppid=)"; exit 1
     fi
   else
@@ -897,7 +949,7 @@ create_repo() {
 }
 detect_cloud() {
-  echo "Testing if setup is running on a cloud instance..." >> "$setup_log" 2>&1
+  echo "Testing if setup is running on a cloud instance..." | tee -a "$setup_log"
   if ( curl --fail -s -m 5 http://169.254.169.254/latest/meta-data/instance-id > /dev/null ) || ( dmidecode -s bios-vendor | grep -q Google > /dev/null); then export is_cloud="true"; fi
 }
@@ -939,36 +991,29 @@ detect_os() {
 }
-installer_prereq_packages() {
+installer_progress_loop() {
+  local i=0
+  while true; do
+    [[ $i -lt 98 ]] && ((i++))
+    set_progress_str "$i" 'Checking that all required packages are installed and enabled...' nolog
+    [[ $i -gt 0 ]] && sleep 5s
+  done
+}
+installer_prereq_packages() {
   if [ "$OS" == centos ]; then
-    # Print message to stdout so the user knows setup is doing something
-    echo "Installing required packages to run installer..."
-    # Install bind-utils so the host command exists
     if [[ ! $is_iso ]]; then
-      if ! command -v host > /dev/null 2>&1; then
-        yum -y install bind-utils >> "$setup_log" 2>&1
+      if ! yum versionlock > /dev/null 2>&1; then
+        yum -y install yum-plugin-versionlock >> "$setup_log" 2>&1
       fi
       if ! command -v nmcli > /dev/null 2>&1; then
-        {
-          yum -y install NetworkManager;
-          systemctl enable NetworkManager;
-          systemctl start NetworkManager;
-        } >> "$setup_log" 2<&1
+        yum -y install NetworkManager >> "$setup_log" 2>&1
       fi
-      if ! command -v bc > /dev/null 2>&1; then
-        yum -y install bc >> "$setup_log" 2>&1
-      fi
-      if ! yum versionlock > /dev/null 2>&1; then
-        yum -y install yum-plugin-versionlock >> "$setup_log" 2>&1
-      fi
-    else
-      logCmd "systemctl enable NetworkManager"
-      logCmd "systemctl start NetworkManager"
     fi
+    logCmd "systemctl enable NetworkManager"
+    logCmd "systemctl start NetworkManager"
   elif [ "$OS" == ubuntu ]; then
     # Print message to stdout so the user knows setup is doing something
-    echo "Installing required packages to run installer..."
     retry 50 10 "apt-get update" >> "$setup_log" 2>&1 || exit 1
     # Install network manager so we can do interface stuff
     if ! command -v nmcli > /dev/null 2>&1; then
@@ -978,7 +1023,7 @@ installer_prereq_packages() {
       systemctl start NetworkManager
     } >> "$setup_log" 2<&1
     fi
-    retry 50 10 "apt-get -y install bc curl" >> "$setup_log" 2>&1 || exit 1
+    retry 50 10 "apt-get -y install curl" >> "$setup_log" 2>&1 || exit 1
   fi
 }
@@ -1002,11 +1047,11 @@ disable_ipv6() {
     sysctl -w net.ipv6.conf.all.disable_ipv6=1
     sysctl -w net.ipv6.conf.default.disable_ipv6=1
   } >> "$setup_log" 2>&1
   {
     echo "net.ipv6.conf.all.disable_ipv6 = 1"
     echo "net.ipv6.conf.default.disable_ipv6 = 1"
     echo "net.ipv6.conf.lo.disable_ipv6 = 1"
   } >> /etc/sysctl.conf
 }
 #disable_misc_network_features() {
@@ -1044,10 +1089,11 @@ docker_install() {
       yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo;
     fi
     if [[ ! $is_iso ]]; then
-      yum -y install docker-ce-19.03.14-3.el7 containerd.io-1.2.13-3.2.el7;
+      yum -y install docker-ce-20.10.5-3.el7 containerd.io-1.4.4-3.1.el7;
     fi
-    yum versionlock docker-ce-19.03.14-3.el7;
-    yum versionlock containerd.io-1.2.13-3.2.el7
+    yum versionlock docker-ce-20.10.5-3.el7;
+    yum versionlock docker-ce-cli-20.10.5-3.el7;
+    yum versionlock containerd.io-1.4.4-3.1.el7
   } >> "$setup_log" 2>&1
   else
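
The Docker pin moves from 19.03 to 20.10 and now also locks docker-ce-cli. With the versionlock plugin (installed earlier in installer_prereq_packages), the general workflow looks like this:

    yum -y install yum-plugin-versionlock
    yum versionlock docker-ce-20.10.5-3.el7              # freeze the package at this version
    yum versionlock list                                 # show current locks
    yum versionlock delete docker-ce-20.10.5-3.el7       # release a lock if needed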
@@ -1201,8 +1247,13 @@ es_heapsize() {
     # https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
     ES_HEAP_SIZE="25000m"
   else
-    # Set heap size to 25% of available memory
-    ES_HEAP_SIZE=$(( total_mem / 4 ))"m"
+    # Set heap size to 33% of available memory
+    ES_HEAP_SIZE=$(( total_mem / 3 ))
+    if [ "$ES_HEAP_SIZE" -ge 25001 ] ; then
+      ES_HEAP_SIZE="25000m"
+    else
+      ES_HEAP_SIZE=$ES_HEAP_SIZE"m"
+    fi
   fi
   export ES_HEAP_SIZE
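
The heap change raises the default from 25% to 33% of memory while keeping the 25000m ceiling (heaps above roughly 32 GB lose compressed ordinary object pointers, so Elasticsearch guidance caps well below that). Worked through by hand, with total_mem in MB:

    total_mem=24000  ->  24000 / 3 = 8000            ->  ES_HEAP_SIZE=8000m
    total_mem=96000  ->  96000 / 3 = 32000 >= 25001  ->  ES_HEAP_SIZE=25000m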
@@ -1385,6 +1436,8 @@ install_cleanup() {
     info "Removing so-setup permission entry from sudoers file"
     sed -i '/so-setup/d' /etc/sudoers
   fi
+  so-ssh-harden -q
 }
 import_registry_docker() {
@@ -1432,6 +1485,8 @@ manager_pillar() {
     "manager:"\
     " mainip: '$MAINIP'"\
     " mainint: '$MNIC'"\
+    " proxy: '$so_proxy'"\
+    " no_proxy: '$no_proxy_string'"\
     " esheap: '$ES_HEAP_SIZE'"\
     " esclustername: '{{ grains.host }}'"\
     " freq: 0"\
@@ -1446,7 +1501,6 @@ manager_pillar() {
   printf '%s\n'\
     " elastalert: 1"\
     " es_port: $node_es_port"\
-    " log_size_limit: $log_size_limit"\
     " cur_close_days: $CURCLOSEDAYS"\
     " grafana: $GRAFANA"\
     " osquery: $OSQUERY"\
@@ -1512,7 +1566,6 @@ manager_global() {
     " hnmanager: '$HNMANAGER'"\
     " ntpserver: '$NTPSERVER'"\
     " dockernet: '$DOCKERNET'"\
-    " proxy: '$PROXY'"\
     " mdengine: '$ZEEKVERSION'"\
     " ids: '$NIDS'"\
     " url_base: '$REDIRECTIT'"\
@@ -1642,8 +1695,8 @@ manager_global() {
     " so-zeek:"\
     " shards: 5"\
     " warm: 7"\
-    " close: 365"\
-    " delete: 45"\
+    " close: 45"\
+    " delete: 365"\
     "minio:"\
     " access_key: '$ACCESS_KEY'"\
     " access_secret: '$ACCESS_SECRET'"\
@@ -1695,7 +1748,6 @@ network_init() {
 network_init_whiptail() {
   case "$setup_type" in
     'iso')
-      collect_hostname
       whiptail_management_nic
       whiptail_dhcp_or_static
@@ -1709,7 +1761,6 @@ network_init_whiptail() {
     'network')
       whiptail_network_notice
       whiptail_dhcp_warn
-      collect_hostname
       whiptail_management_nic
       ;;
   esac
@@ -1777,6 +1828,22 @@ print_salt_state_apply() {
   echo "Applying $state Salt state"
 }
+proxy_validate() {
+  local test_url="https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS"
+  proxy_test_err=$(curl -sS "$test_url" --proxy "$so_proxy" 2>&1)
+  local ret=$?
+  if [[ $ret != 0 ]]; then
+    error "Could not reach $test_url using proxy $so_proxy"
+    error "Received error: $proxy_test_err"
+    if [[ -n $TESTING ]]; then
+      error "Exiting setup"
+      kill -SIGINT "$(ps --pid $$ -oppid=)"; exit 1
+    fi
+  fi
+  return $ret
+}
 reserve_group_ids() {
   # This is a hack to fix CentOS from taking group IDs that we need
   groupadd -g 928 kratos
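
proxy_validate tests the proxy by fetching the project's KEYS file through it; curl's --proxy flag accepts the same user:pass@host form that collect_proxy_details builds. The same check, runnable by hand with an illustrative proxy value:

    curl -sS https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/master/KEYS \
        --proxy "http://onionuser:0n10nus3r@10.66.166.30:3128" > /dev/null && echo "proxy OK"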
@@ -1792,6 +1859,16 @@ reserve_group_ids() {
   groupadd -g 946 cyberchef
 }
+reserve_ports() {
+  # These are also set via salt but need to be set pre-install to avoid conflicts before salt runs
+  if ! sysctl net.ipv4.ip_local_reserved_ports | grep 55000 | grep 57314; then
+    echo "Reserving ephemeral ports used by Security Onion components to avoid collisions"
+    sysctl -w net.ipv4.ip_local_reserved_ports="55000,57314"
+  else
+    echo "Ephemeral ports already reserved"
+  fi
+}
 reinstall_init() {
   info "Putting system in state to run setup again"
@@ -1860,6 +1937,24 @@ reinstall_init() {
   } >> "$setup_log" 2>&1
 }
+reset_proxy() {
+  [[ -f /etc/profile.d/so-proxy.sh ]] && rm -f /etc/profile.d/so-proxy.sh
+  [[ -f /etc/systemd/system/docker.service.d/http-proxy.conf ]] && rm -f /etc/systemd/system/docker.service.d/http-proxy.conf
+  systemctl daemon-reload
+  command -v docker &> /dev/null && echo "Restarting Docker..." | tee -a "$setup_log" && systemctl restart docker
+  [[ -f /root/.docker/config.json ]] && rm -f /root/.docker/config.json
+  [[ -f /etc/gitconfig ]] && rm -f /etc/gitconfig
+  if [[ $OS == 'centos' ]]; then
+    sed -i "/proxy=/d" /etc/yum.conf
+  else
+    [[ -f /etc/apt/apt.conf.d/00-proxy.conf ]] && rm -f /etc/apt/apt.conf.d/00-proxy.conf
+  fi
+}
 backup_dir() {
   dir=$1
   backup_suffix=$2
@@ -1953,6 +2048,7 @@ saltify() {
       python36-dateutil\
       python36-m2crypto\
       python36-mysql\
+      python36-packaging\
       yum-utils\
       device-mapper-persistent-data\
       lvm2\
@@ -2041,9 +2137,9 @@ saltify() {
     retry 50 10 "apt-get -y install salt-minion=3002.5+ds-1 salt-common=3002.5+ds-1" >> "$setup_log" 2>&1 || exit 1
     retry 50 10 "apt-mark hold salt-minion salt-common" >> "$setup_log" 2>&1 || exit 1
     if [[ $OSVER != 'xenial' ]]; then
-      retry 50 10 "apt-get -y install python3-pip python3-dateutil python3-m2crypto python3-mysqldb" >> "$setup_log" 2>&1 || exit 1
+      retry 50 10 "apt-get -y install python3-pip python3-dateutil python3-m2crypto python3-mysqldb python3-packaging" >> "$setup_log" 2>&1 || exit 1
     else
-      retry 50 10 "apt-get -y install python-pip python-dateutil python-m2crypto python-mysqldb" >> "$setup_log" 2>&1 || exit 1
+      retry 50 10 "apt-get -y install python-pip python-dateutil python-m2crypto python-mysqldb python-packaging" >> "$setup_log" 2>&1 || exit 1
     fi
   fi
 }
@@ -2185,7 +2281,70 @@ set_main_ip() {
 # Add /usr/sbin to everyone's path
 set_path() {
-  echo "complete -cf sudo" > /etc/profile.d/securityonion.sh
+  echo "complete -cf sudo" >> /etc/profile.d/securityonion.sh
+}
+
+set_proxy() {
+  # Don't proxy localhost, local ip, and management ip
+  no_proxy_string="localhost, 127.0.0.1, ${MAINIP}, ${HOSTNAME}"
+  if [[ -n $MSRV ]] && [[ -n $MSRVIP ]];then
+    no_proxy_string="${no_proxy_string}, ${MSRVIP}, ${MSRV}"
+  fi
+  # Set proxy environment variables used by curl, wget, docker, and others
+  {
+    echo "export use_proxy=on"
+    echo "export http_proxy=\"${so_proxy}\""
+    echo "export https_proxy=\"\$http_proxy\""
+    echo "export ftp_proxy=\"\$http_proxy\""
+    echo "export no_proxy=\"${no_proxy_string}\""
+  } > /etc/profile.d/so-proxy.sh
+  source /etc/profile.d/so-proxy.sh
+  [[ -d '/etc/systemd/system/docker.service.d' ]] || mkdir -p /etc/systemd/system/docker.service.d
+  # Create proxy config for dockerd
+  printf '%s\n'\
+    "[Service]"\
+    "Environment=\"HTTP_PROXY=${so_proxy}\""\
+    "Environment=\"HTTPS_PROXY=${so_proxy}\""\
+    "Environment=\"NO_PROXY=${no_proxy_string}\"" > /etc/systemd/system/docker.service.d/http-proxy.conf
+  systemctl daemon-reload
+  command -v docker &> /dev/null && systemctl restart docker
+  # Create config.json for docker containers
+  [[ -d /root/.docker ]] || mkdir /root/.docker
+  printf '%s\n'\
+    "{"\
+    " \"proxies\":"\
+    " {"\
+    " \"default\":"\
+    " {"\
+    " \"httpProxy\":\"${so_proxy}\","\
+    " \"httpsProxy\":\"${so_proxy}\","\
+    " \"ftpProxy\":\"${so_proxy}\","\
+    " \"noProxy\":\"${no_proxy_string}\""\
+    " }"\
+    " }"\
+    "}" > /root/.docker/config.json
+  # Set proxy for package manager
+  if [ "$OS" = 'centos' ]; then
+    echo "proxy=$so_proxy" >> /etc/yum.conf
+  else
+    # Set it up so the updates roll through the manager
+    printf '%s\n'\
+      "Acquire::http::Proxy \"$so_proxy\";"\
+      "Acquire::https::Proxy \"$so_proxy\";" > /etc/apt/apt.conf.d/00-proxy.conf
+  fi
+  # Set global git proxy
+  printf '%s\n'\
+    "[http]"\
+    " proxy = ${so_proxy}" > /etc/gitconfig
 }
 setup_salt_master_dirs() {
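
set_proxy fans the single so_proxy value out to every consumer: login shells via profile.d, the Docker daemon via a systemd drop-in, containers via /root/.docker/config.json, yum or apt, and git. After it runs, the daemon-side settings can be verified with standard commands:

    systemctl show docker --property=Environment   # should list HTTP_PROXY/HTTPS_PROXY/NO_PROXY
    docker info | grep -i proxy                    # dockerd reports its effective proxy settings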
@@ -2216,6 +2375,7 @@ set_progress_str() {
   local percentage_input=$1
   progress_bar_text=$2
   export progress_bar_text
+  local nolog=$2
   if (( "$percentage_input" >= "$percentage" )); then
     percentage="$percentage_input"
@@ -2225,12 +2385,14 @@ set_progress_str() {
   echo -e "$percentage_str"
-  info "Progressing ($percentage%): $progress_bar_text"
-  printf '%s\n' \
-    '----'\
-    "$percentage% - ${progress_bar_text^^}"\
-    "----" >> "$setup_log" 2>&1
+  if [[ -z $nolog ]]; then
+    info "Progressing ($percentage%): $progress_bar_text"
+    # printf '%s\n' \
+    # '----'\
+    # "$percentage% - ${progress_bar_text^^}"\
+    # "----" >> "$setup_log" 2>&1
+  fi
 }
 set_ssh_cmds() {

View File

@@ -27,6 +27,8 @@ original_args=("$@")
 cd "$(dirname "$0")" || exit 255
+echo "Getting started..."
 # Source the generic function libraries that are also used by the product after
 # setup. These functions are intended to be reusable outside of the setup process.
 source ../salt/common/tools/sbin/so-common
@@ -93,12 +95,23 @@ if ! [ -f $install_opt_file ]; then
   analyze_system
 fi
+# Set up handler for setup to exit early (use `kill -SIGUSR1 "$setup_proc"; exit 1` in child scripts)
+trap 'catch $LINENO' SIGUSR1
+setup_proc="$$"
+catch() {
+  info "Fatal error occurred at $1 in so-setup, failing setup."
+  grep --color=never "ERROR" "$setup_log" > "$error_log"
+  whiptail_setup_failed
+  exit 1
+}
 automated=no
-function progress() {
-  local title='Security Onion Install'
+progress() {
+  local title='Security Onion Setup'
+  local msg=${1:-'Please wait while installing...'}
   if [ $automated == no ]; then
-    whiptail --title "$title" --gauge 'Please wait while installing...' 6 60 0 # append to text
+    whiptail --title "$title" --gauge "$msg" 6 70 0 # append to text
   else
     cat >> $setup_log 2>&1
   fi
@@ -154,12 +167,9 @@ set_ssh_cmds $automated
 local_sbin="$(pwd)/../salt/common/tools/sbin"
 export PATH=$PATH:$local_sbin
-installer_prereq_packages && detect_cloud
 set_network_dev_status_list
+set_palette >> $setup_log 2>&1
-if [ "$OS" == ubuntu ]; then
-  update-alternatives --set newt-palette /etc/newt/palette.original >> $setup_log 2>&1
-fi
 # Kernel messages can overwrite whiptail screen #812
 # https://github.com/Security-Onion-Solutions/securityonion/issues/812
@@ -192,19 +202,24 @@ if ! [[ -f $install_opt_file ]]; then
   if [[ $setup_type == 'iso' ]] && [ "$automated" == no ]; then
     whiptail_first_menu_iso
     if [[ $option == "CONFIGURENETWORK" ]]; then
+      collect_hostname
       network_init_whiptail
       whiptail_management_interface_setup
       network_init
       printf '%s\n' \
        "MNIC=$MNIC" \
        "HOSTNAME=$HOSTNAME" > "$net_init_file"
+      set_main_ip >> $setup_log 2>&1
+      compare_main_nic_ip
+      reset_proxy
+      collect_proxy
+      [[ -n "$so_proxy" ]] && set_proxy >> $setup_log 2>&1
       whiptail_net_setup_complete
     else
       whiptail_install_type true
     fi
+  else
+    whiptail_install_type
   fi
-  whiptail_install_type
 else
   source $install_opt_file
 fi
@@ -257,6 +272,10 @@ if [[ ( $is_manager || $is_import ) && $is_iso ]]; then
   fi
 fi
+if [[ $is_manager || $is_import ]]; then
+  check_elastic_license
+fi
 if ! [[ -f $install_opt_file ]]; then
   if [[ $is_manager && $is_sensor ]]; then
     check_requirements "standalone"
@@ -273,25 +292,31 @@ if ! [[ -f $install_opt_file ]]; then
   [[ -f $net_init_file ]] && whiptail_net_reinit && reinit_networking=true
   if [[ $reinit_networking ]] || ! [[ -f $net_init_file ]]; then
+    collect_hostname
     network_init_whiptail
   else
     source "$net_init_file"
   fi
-  if [[ $is_minion ]]; then
-    collect_mngr_hostname
-  fi
-  if [[ $is_minion ]] || [[ $reinit_networking ]] || [[ $is_iso ]] && ! [[ -f $net_init_file ]]; then
-    whiptail_management_interface_setup
-  fi
   if [[ $reinit_networking ]] || ! [[ -f $net_init_file ]]; then
     network_init
   fi
-  if [[ -n "$TURBO" ]]; then
-    use_turbo_proxy
+  set_main_ip >> $setup_log 2>&1
+  compare_main_nic_ip
+  if [[ $is_minion ]]; then
+    collect_mngr_hostname
+  fi
+  reset_proxy
+  if [[ -z $is_airgap ]]; then
+    collect_proxy
+    [[ -n "$so_proxy" ]] && set_proxy >> $setup_log 2>&1
+  fi
+  if [[ $is_minion ]] || [[ $reinit_networking ]] || [[ $is_iso ]] && ! [[ -f $net_init_file ]]; then
+    whiptail_management_interface_setup
   fi
   if [[ $is_minion ]]; then
@@ -310,6 +335,7 @@ if ! [[ -f $install_opt_file ]]; then
      "HOSTNAME=$HOSTNAME" \
      "MSRV=$MSRV" \
      "MSRVIP=$MSRVIP" > "$install_opt_file"
+    [[ -n $so_proxy ]] && echo "so_proxy=$so_proxy" >> "$install_opt_file"
     download_repo_tarball
     exec bash /root/manager_setup/securityonion/setup/so-setup "${original_args[@]}"
   fi
@@ -323,6 +349,22 @@ else
   rm -rf $install_opt_file >> "$setup_log" 2>&1
 fi
+percentage=0
+{
+  installer_progress_loop & # Run progress bar to 98 in ~8 minutes while waiting for package installs
+  progress_bg_proc=$!
+  installer_prereq_packages
+  install_success=$?
+  kill -9 "$progress_bg_proc"
+  wait "$progress_bg_proc" &> /dev/null # Kill just sends signal, redirect output of wait to catch stdout
+  if [[ $install_success -gt 0 ]]; then
+    echo "Could not install packages required for setup, exiting now." >> "$setup_log" 2>&1
+    kill -SIGUSR1 "$setup_proc"; exit 1
+  fi
+} | progress '...'
+detect_cloud
 short_name=$(echo "$HOSTNAME" | awk -F. '{print $1}')
 MINION_ID=$(echo "${short_name}_${install_type}" | tr '[:upper:]' '[:lower:]')
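
The block above runs installer_prereq_packages under a fake-progress gauge: a background loop ticks the bar toward 98% while the real work happens, then is killed and reaped. The underlying job-control pattern in miniature:

    slow_task() { sleep 30; }                      # stand-in for the package installs
    ticker() { while true; do echo tick; sleep 5; done; }
    ticker & tick_pid=$!
    slow_task
    kill -9 "$tick_pid"
    wait "$tick_pid" 2>/dev/null                   # reap so the shell prints no 'Killed' notice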
@@ -336,14 +378,14 @@ minion_type=$(get_minion_type)
 set_default_log_size >> $setup_log 2>&1
 if [[ $is_helix ]]; then
   RULESETUP=${RULESETUP:-ETOPEN}
   NSMSETUP=${NSMSETUP:-BASIC}
   HNSENSOR=${HNSENSOR:-inherit}
   MANAGERUPDATES=${MANAGERUPDATES:-0}
 fi
 if [[ $is_helix || ( $is_manager && $is_node ) ]]; then
   RULESETUP=${RULESETUP:-ETOPEN}
   NSMSETUP=${NSMSETUP:-BASIC}
 fi
@@ -363,7 +405,7 @@ fi
 if [[ $is_import ]]; then
   PATCHSCHEDULENAME=${PATCHSCHEDULENAME:-auto}
   MTU=${MTU:-1500}
   RULESETUP=${RULESETUP:-ETOPEN}
   NSMSETUP=${NSMSETUP:-BASIC}
   HNSENSOR=${HNSENSOR:-inherit}
   MANAGERUPDATES=${MANAGERUPDATES:-0}
@@ -527,21 +569,10 @@ whiptail_make_changes
 # From here on changes will be made.
 echo "1" > /root/accept_changes
-# Set up handler for setup to exit early (use `kill -SIGUSR1 "$(ps --pid $$ -oppid=)"; exit 1` in child scripts)
-trap 'catch $LINENO' SIGUSR1
-catch() {
-  info "Fatal error occurred at $1 in so-setup, failing setup."
-  grep --color=never "ERROR" "$setup_log" > "$error_log"
-  whiptail_setup_failed
-  exit
-}
 # This block sets REDIRECTIT which is used by a function outside the below subshell
-set_main_ip >> $setup_log 2>&1
-compare_main_nic_ip
 set_redirect >> $setup_log 2>&1
 # Begin install
 {
   # Set initial percentage to 0
@@ -550,6 +581,8 @@ set_redirect >> $setup_log 2>&1
   # Show initial progress message
   set_progress_str 0 'Running initial configuration steps'
+  reserve_ports
   set_path
   if [[ $is_reinstall ]]; then
@@ -766,6 +799,9 @@ set_redirect >> $setup_log 2>&1
     set_progress_str 70 "$(print_salt_state_apply 'kibana')"
     salt-call state.apply -l info kibana >> $setup_log 2>&1
+    set_progress_str 70 "Setting up default Space in Kibana"
+    so-kibana-space-defaults >> $setup_log 2>&1
   fi
   if [[ "$PLAYBOOK" = 1 ]]; then

View File

@@ -215,7 +215,7 @@ whiptail_create_web_user() {
   [ -n "$TESTING" ] && return
   WEBUSER=$(whiptail --title "Security Onion Install" --inputbox \
-    "Please enter an email address to create an administrator account for the web interface: \nThis will also be used for TheHive, Cortex, and Fleet." 10 60 "$1" 3>&1 1>&2 2>&3)
+    "Please enter an email address to create an administrator account for the web interface.\n\nThis will also be used for TheHive, Cortex, and Fleet." 12 60 "$1" 3>&1 1>&2 2>&3)
   local exitstatus=$?
   whiptail_check_exitstatus $exitstatus
@@ -376,7 +376,7 @@ whiptail_dockernet_check(){
   [ -n "$TESTING" ] && return
   whiptail --title "Security Onion Setup" --yesno \
-    "Do you want to keep the default Docker IP range? \n \n(Choose yes if you don't know what this means)" 10 75
+    "Do you want to keep the default Docker IP range?\n\nIf you are unsure, please accept the default option of Yes." 10 75
 }
@@ -588,8 +588,21 @@ whiptail_invalid_input() { # TODO: This should accept a list of arguments to spe
 }
+whiptail_invalid_proxy() {
+  [ -n "$TESTING" ] && return
+  local message
+  read -r -d '' message <<- EOM
+    Could not reach test url using proxy ${proxy_addr}.
+    Error was: ${proxy_test_err}
+  EOM
+  whiptail --title "Security Onion Setup" --yesno "$message" --yes-button "Enter Again" --no-button "Skip" 11 60
+}
 whiptail_invalid_string() {
   [ -n "$TESTING" ] && return
   whiptail --title "Security Onion Setup" --msgbox "Invalid input, please try again.\n\nThe $1 cannot contain spaces." 9 45
@@ -632,10 +645,22 @@ whiptail_log_size_limit() {
   [ -n "$TESTING" ] && return
+  case $install_type in
+    STANDALONE | EVAL | HEAVYNODE)
+      percentage=50
+      ;;
+    *)
+      percentage=80
+      ;;
+  esac
-  log_size_limit=$(whiptail --title "Security Onion Setup" --inputbox \
-    "Please specify the amount of disk space (in GB) you would like to allocate for Elasticsearch data storage: \n\
-    By default, this is set to 80% of the disk space allotted for /nsm." 10 75 "$1" 3>&1 1>&2 2>&3)
+  read -r -d '' message <<- EOM
+    Please specify the amount of disk space (in GB) you would like to allocate for Elasticsearch data storage.
+    By default, this is set to ${percentage}% of the disk space allotted for /nsm.
+  EOM
+  log_size_limit=$(whiptail --title "Security Onion Setup" --inputbox "$message" 11 75 "$1" 3>&1 1>&2 2>&3)
   local exitstatus=$?
   whiptail_check_exitstatus $exitstatus
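
The rewritten prompt builds its message with read -r -d '' and a <<- heredoc: -d '' makes read consume the entire heredoc (read stops at a NUL byte, which never arrives), and <<- strips leading tabs so the text can stay indented with the code. Standalone:

    read -r -d '' message <<- EOM
    	line one of the dialog text
    	line two, indented with tabs that <<- removes
    EOM
    echo "$message"

One subtlety: read -d '' returns non-zero when it hits end of input, so under set -e this line would need an || true guard.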
@@ -1117,7 +1142,7 @@ whiptail_patch_schedule() {
   [ -n "$TESTING" ] && return
   patch_schedule=$(whiptail --title "Security Onion Setup" --radiolist \
-    "Choose OS patch schedule: \nThis will NOT update Security Onion related tools such as Zeek, Elasticsearch, Kibana, SaltStack, etc." 15 75 5 \
+    "Choose OS patch schedule.\n\nThis schedule will update the operating system packages but will NOT update Security Onion related tools such as Zeek, Elasticsearch, Kibana, SaltStack, etc." 20 75 5 \
     "Automatic" "Updates installed every 8 hours if available" ON \
     "Manual" "Updates will be installed manually" OFF \
     "Import Schedule" "Import named schedule on following screen" OFF \
@@ -1204,6 +1229,58 @@ whiptail_patch_schedule_select_hours() {
 }
+whiptail_proxy_ask() {
+  [ -n "$TESTING" ] && return
+  whiptail --title "Security Onion Setup" --yesno "Do you want to set a proxy server for this installation?" 7 60 --defaultno
+}
+whiptail_proxy_addr() {
+  [ -n "$TESTING" ] && return
+  local message
+  read -r -d '' message <<- EOM
+    Please input the proxy server you wish to use, including the URL prefix (ex: https://your.proxy.com:1234).
+    If your proxy requires a username and password do not include them in your input. Setup will ask for those values next.
+  EOM
+  proxy_addr=$(whiptail --title "Security Onion Setup" --inputbox "$message" 13 60 "$1" 3>&1 1>&2 2>&3)
+  local exitstatus=$?
+  whiptail_check_exitstatus $exitstatus
+}
+whiptail_proxy_auth_ask() {
+  [ -n "$TESTING" ] && return
+  whiptail --title "Security Onion Setup" --yesno "Does your proxy require authentication?" 7 60
+}
+whiptail_proxy_auth_user() {
+  [ -n "$TESTING" ] && return
+  proxy_user=$(whiptail --title "Security Onion Setup" --inputbox "Please input the proxy user:" 8 60 "$1" 3>&1 1>&2 2>&3)
+  local exitstatus=$?
+  whiptail_check_exitstatus $exitstatus
+}
+whiptail_proxy_auth_pass() {
+  local arg=$1
+  [ -n "$TESTING" ] && return
+  if [[ $arg != 'confirm' ]]; then
+    proxy_pass=$(whiptail --title "Security Onion Setup" --passwordbox "Please input the proxy password:" 8 60 3>&1 1>&2 2>&3)
+  else
+    proxy_pass_confirm=$(whiptail --title "Security Onion Setup" --passwordbox "Please confirm the proxy password:" 8 60 3>&1 1>&2 2>&3)
+  fi
+  local exitstatus=$?
+  whiptail_check_exitstatus $exitstatus
+}
 whiptail_requirements_error() {
   local requirement_needed=$1
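
All of these prompt helpers lean on whiptail's two output channels: a yes/no answer comes back as the exit status, while typed input arrives on stderr, hence the 3>&1 1>&2 2>&3 descriptor swap that lets $( ... ) capture it. The pattern in isolation:

    if whiptail --title "Demo" --yesno "Proceed?" 7 40; then echo "answered Yes"; fi
    name=$(whiptail --title "Demo" --inputbox "Name:" 8 40 "default" 3>&1 1>&2 2>&3)
    echo "you typed: $name"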
@@ -1306,8 +1383,8 @@ whiptail_set_redirect() {
   [ -n "$TESTING" ] && return
   REDIRECTINFO=$(whiptail --title "Security Onion Setup" --radiolist \
-    "Choose the access method for the web interface: \nNOTE: For security reasons, we use strict cookie enforcement" 20 75 4 \
+    "How would you like to access the web interface?\n\nSecurity Onion uses strict cookie enforcement, so whatever you choose here will be the only way that you can access the web interface.\n\nIf you choose something other than IP address, then you'll need to ensure that you can resolve the name via DNS or hosts entry. If you are unsure, please select IP." 20 75 4 \
-    "IP" "Use IP to access the web interface" ON \
+    "IP" "Use IP address to access the web interface" ON \
     "HOSTNAME" "Use hostname to access the web interface" OFF \
     "OTHER" "Use a different name like a FQDN or Load Balancer" OFF 3>&1 1>&2 2>&3 )
   local exitstatus=$?

Binary file not shown.