Compare commits


191 Commits

Author SHA1 Message Date
m0duspwnens
d91dd0dd3c watch some values 2024-04-29 17:14:00 -04:00
m0duspwnens
a0388fd568 engines config for valueWatch 2024-04-29 14:02:10 -04:00
m0duspwnens
05244cfd75 watch files change engine 2024-04-24 13:19:39 -04:00
m0duspwnens
6c5e0579cf logging changes. ensure salt master has pillarWatch engine 2024-04-19 09:32:32 -04:00
m0duspwnens
1f6eb9cdc3 match keys better. go through files reverse first found is prio 2024-04-18 13:50:37 -04:00
m0duspwnens
610dd2c08d improve it 2024-04-18 11:11:14 -04:00
m0duspwnens
506bbd314d more comments, better logging 2024-04-18 10:26:10 -04:00
m0duspwnens
4caa6a10b5 watch a pillar in files and take action 2024-04-17 18:09:04 -04:00
m0duspwnens
4b79623ce3 watch pillar files for changes and do something 2024-04-16 16:51:35 -04:00
m0duspwnens
c4994a208b restart salt minion if a manager and signing policies change 2024-04-15 11:37:21 -04:00
m0duspwnens
bb983d4ba2 just broker as default process 2024-04-12 16:16:03 -04:00
m0duspwnens
c014508519 need /opt/so/conf/ca/cacerts on receiver for kafka to run 2024-04-12 13:50:25 -04:00
reyesj2
fcfbb1e857 Merge kaffytaffy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:50:56 -04:00
reyesj2
911ee579a9 Typo
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:16:20 -04:00
reyesj2
a6ff92b099 Note to remove so-kafka-clusterid. Update soup and setup to generate needed kafka pillar values
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 12:11:18 -04:00
m0duspwnens
d73ba7dd3e order kafka pillar assignment 2024-04-12 11:55:26 -04:00
m0duspwnens
04ddcd5c93 add receiver managersearch and standalone to kafka.nodes pillar 2024-04-12 11:52:57 -04:00
reyesj2
af29ae1968 Merge kaffytaffy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 11:43:46 -04:00
reyesj2
fbd3cff90d Make global.pipeline use GLOBALMERGED value
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-12 11:21:19 -04:00
m0duspwnens
0ed9894b7e create kratos local pillar dirs during setup 2024-04-12 11:19:46 -04:00
m0duspwnens
a54a72c269 move kafka_cluster_id to kafka:cluster_id 2024-04-12 11:19:20 -04:00
m0duspwnens
f514e5e9bb add kafka to receiver 2024-04-11 16:23:05 -04:00
reyesj2
3955587372 Use global.pipeline for redis / kafka states
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 16:20:09 -04:00
reyesj2
6b28dc72e8 Update annotation for global.pipeline
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:38:33 -04:00
reyesj2
ca7253a589 Run kafka-clusterid script when pillar values are missing
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:38:03 -04:00
reyesj2
af53dcda1b Remove references to kafkanode
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-11 15:32:00 -04:00
m0duspwnens
d3bd56b131 disable logstash and redis if kafka enabled 2024-04-10 14:13:27 -04:00
m0duspwnens
e9e61ea2d8 Merge remote-tracking branch 'origin/2.4/dev' into kaffytaffy 2024-04-10 13:14:13 -04:00
m0duspwnens
86b984001d annotations and enable/disable from ui 2024-04-10 10:39:06 -04:00
m0duspwnens
fa7f8104c8 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-09 11:13:02 -04:00
m0duspwnens
bd5fe43285 jinja config files 2024-04-09 11:07:53 -04:00
m0duspwnens
d38051e806 fix client and server properties formatting 2024-04-09 10:36:37 -04:00
m0duspwnens
daa5342986 items not keys in for loop 2024-04-09 10:22:05 -04:00
m0duspwnens
c48436ccbf fix dict update 2024-04-09 10:19:17 -04:00
m0duspwnens
7aa00faa6c fix var 2024-04-09 09:31:54 -04:00
m0duspwnens
6217a7b9a9 add defaults and jijafy kafka config 2024-04-09 09:27:21 -04:00
reyesj2
d67ebabc95 Remove logstash output to kafka pipeline. Add additional topics for searchnodes to ingest and add partition/offset info to event
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-08 16:38:03 -04:00
Josh Brower
b9474b9352 Merge pull request #12766 from Security-Onion-Solutions/2.4/sigma-pipeline
Ship Defender logs + more
2024-04-08 16:35:24 -04:00
DefensiveDepth
376efab40c Ship Defender logs 2024-04-08 14:01:38 -04:00
reyesj2
65274e89d7 Add client_id to logstash pipeline. To identify which searchnode is pulling messages
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 15:38:00 -04:00
coreyogburn
acf29a6c9c Merge pull request #12760 from Security-Onion-Solutions/cogburn/detection-author-remap
Detection Author as a Keyword instead of Text
2024-04-05 11:39:53 -06:00
reyesj2
721e04f793 initial logstash input from kafka over ssl
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 13:37:14 -04:00
Corey Ogburn
00cea6fb80 Detection Author as a Keyword instead of Text
With Quick Actions added to Detections, as many fields should be usable as possible.
2024-04-05 11:22:47 -06:00
reyesj2
433309ef1a Generate kafka cluster id if it doesn't exist
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-05 09:35:12 -04:00
Mike Reeves
cbc95d0b30 Merge pull request #12759 from Security-Onion-Solutions/TOoSmOotH-patch-2
Update so-log-check
2024-04-05 08:17:50 -04:00
Mike Reeves
21f86be8ee Update so-log-check 2024-04-05 08:03:42 -04:00
Josh Brower
8e38c3763e Merge pull request #12756 from Security-Onion-Solutions/2.4/detections-defaults
Use list not string
2024-04-04 17:00:38 -04:00
DefensiveDepth
ca807bd6bd Use list not string 2024-04-04 16:58:39 -04:00
reyesj2
735cfb4c29 Autogenerate kafka topics when a message it sent to non-existing topic
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:45:58 -04:00
reyesj2
6202090836 Merge remote-tracking branch 'origin/kaffytaffy' into reyesj2/kafka 2024-04-04 16:27:06 -04:00
reyesj2
436cbc1f06 Add kafka signing_policy for client/server auth. Add kafka-client cert on manager so manager can interact with kafka using its own cert
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:21:29 -04:00
reyesj2
40b08d737c Generate kafka keystore on changes to kafka.key
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-04 16:16:53 -04:00
m0duspwnens
4c5b42b898 restart container on server config changes 2024-04-04 15:47:01 -04:00
m0duspwnens
7a6b72ebac add so-kafka to manager for firewall 2024-04-04 15:46:11 -04:00
Josh Brower
f72cbd5f23 Merge pull request #12755 from Security-Onion-Solutions/2.4/detections-defaults
2.4/detections defaults
2024-04-04 11:33:59 -04:00
Josh Brower
1d7e47f589 Merge pull request #12682 from Security-Onion-Solutions/2.4/soup-playbook
2.4/soup playbook
2024-04-04 11:28:09 -04:00
DefensiveDepth
49d5fa95a2 Detections tweaks 2024-04-04 11:26:44 -04:00
Jason Ertel
204f44449a Merge pull request #12754 from Security-Onion-Solutions/jertel/ana
skip telemetry summary in airgap mode
2024-04-04 10:39:07 -04:00
Jason Ertel
6046848ee7 skip telemetry summary in airgap mode 2024-04-04 10:25:32 -04:00
Doug Burks
b0aee238b1 Merge pull request #12753 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add dashboards specific to Elastic Agent #12746
2024-04-04 09:35:21 -04:00
Doug Burks
d8ac3f1292 FEATURE: Add dashboards specific to Elastic Agent #12746 2024-04-04 09:30:05 -04:00
Mike Reeves
8788b34c8a Merge pull request #12752 from Security-Onion-Solutions/updates23
Allow 2.3 to update
2024-04-04 09:25:41 -04:00
Mike Reeves
784ec54795 2.3 updates 2024-04-04 09:24:17 -04:00
Mike Reeves
54fce4bf8f 2.3 updates 2024-04-04 09:21:16 -04:00
Mike Reeves
c4ebe25bab Attempt to fix 2.3 when main repo changes 2024-04-04 09:18:37 -04:00
Doug Burks
7b4e207329 Merge pull request #12751 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module sigma #12743
2024-04-04 09:13:53 -04:00
Doug Burks
5ec3b834fb FEATURE: Add Events table columns for event.module sigma #12743 2024-04-04 09:11:41 -04:00
Mike Reeves
7668fa1396 Attempt to fix 2.3 when main repo changes 2024-04-04 09:03:29 -04:00
Mike Reeves
470b0e4bf6 Attempt to fix 2.3 when main repo changes 2024-04-04 08:55:13 -04:00
Mike Reeves
d3f163bf9e Attempt to fix 2.3 when main repo changes 2024-04-04 08:54:04 -04:00
Mike Reeves
4b31632dfc Attempt to fix 2.3 when main repo changes 2024-04-04 08:52:37 -04:00
DefensiveDepth
c2f7f7e3a5 Remove dup line 2024-04-04 08:52:30 -04:00
DefensiveDepth
07cb0c7d46 Merge remote-tracking branch 'origin/2.4/dev' into 2.4/soup-playbook 2024-04-04 08:51:09 -04:00
Mike Reeves
14c824143b Attempt to fix 2.3 when main repo changes 2024-04-04 08:48:44 -04:00
Jason Ertel
c75c411426 Merge pull request #12749 from Security-Onion-Solutions/jertel/ana
Clarify annotation description re: Airgap
2024-04-04 07:53:18 -04:00
Jason Ertel
a7fab380b4 clarify telemetry annotation 2024-04-04 07:51:23 -04:00
Jason Ertel
a9517e1291 clarify telemetry annotation 2024-04-04 07:49:30 -04:00
Josh Brower
1017838cfc Merge pull request #12748 from Security-Onion-Solutions/2.4/exclude-elastalert
Exclude Elastalert EQL errors
2024-04-04 06:57:22 -04:00
DefensiveDepth
1d221a574b Exclude Elastalert EQL errors 2024-04-04 06:48:25 -04:00
Jason Ertel
a35bfc4822 Merge pull request #12747 from Security-Onion-Solutions/jertel/ana
do not prompt about telemetry on airgap installs
2024-04-03 21:50:38 -04:00
Jason Ertel
7c64fc8c05 do not prompt about telemetry on airgap installs 2024-04-03 18:08:42 -04:00
DefensiveDepth
f66cca96ce YARA casing 2024-04-03 16:17:29 -04:00
Mike Reeves
12da7db22c Attempt to fix 2.3 when main repo changes 2024-04-03 15:38:23 -04:00
m0duspwnens
1b8584d4bb allow manager to manager on kafka ports 2024-04-03 15:36:35 -04:00
Mike Reeves
9c59f42c16 Attempt to fix 2.3 when main repo changes 2024-04-03 15:23:09 -04:00
coreyogburn
fb5eea8284 Merge pull request #12744 from Security-Onion-Solutions/cogburn/detection-state
Update SOC Config with State File Paths
2024-04-03 13:19:26 -06:00
Mike Reeves
9db9af27ae Attempt to fix 2.3 when main repo changes 2024-04-03 15:14:50 -04:00
Corey Ogburn
0f50a265cf Update SOC Config with State File Paths
Each detection engine is getting a state file to help manage the timer over restarts. By default, the files will go in soc's config folder inside a fingerprints folder.
2024-04-03 13:12:18 -06:00
Jason Ertel
3e05c04aa1 Merge pull request #12731 from Security-Onion-Solutions/jertel/ana
SOC Telemetry
2024-04-03 14:51:41 -04:00
Jason Ertel
8f8896c505 fix link 2024-04-03 14:45:39 -04:00
Jason Ertel
941a841da0 fix link 2024-04-03 14:41:57 -04:00
reyesj2
13105c4ab3 Generate certs for use with elasticfleet kafka output policy
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-03 14:34:07 -04:00
reyesj2
dc27bbb01d Set kafka heap size. To be later configured from SOC
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-03 14:30:52 -04:00
Jason Ertel
2b8a051525 fix link 2024-04-03 14:30:09 -04:00
Mike Reeves
1c7cc8dd3b Merge pull request #12741 from Security-Onion-Solutions/metrics
Change code to allow for non root
2024-04-03 12:56:17 -04:00
Doug Burks
58d081eed1 Merge pull request #12742 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module kratos #12740
2024-04-03 12:48:24 -04:00
Doug Burks
9078b2bad2 FEATURE: Add Events table columns for event.module kratos #12740 2024-04-03 12:46:29 -04:00
Mike Reeves
8889c974b8 Change code to allow for non root 2024-04-03 12:38:59 -04:00
Doug Burks
f615a73120 Merge pull request #12739 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add dashboard for SOC Login Failures #12738
2024-04-03 12:01:08 -04:00
Doug Burks
66844af1c2 FEATURE: Add dashboard for SOC Login Failures #12738 2024-04-03 11:54:53 -04:00
Mike Reeves
a0b7d89eb6 Merge pull request #12734 from Security-Onion-Solutions/metrics
Add Elastic Agent Status Metrics
2024-04-03 11:12:53 -04:00
Mike Reeves
c31e459c2b Change metrics reporting order 2024-04-03 11:06:00 -04:00
m0duspwnens
b863060df1 kafka broker and listener on 0.0.0.0 2024-04-03 11:05:24 -04:00
weslambert
d96d696c35 Merge pull request #12735 from Security-Onion-Solutions/feature/cef
Add cef
2024-04-03 10:49:44 -04:00
Wes
105eadf111 Add cef 2024-04-03 14:40:41 +00:00
Jason Ertel
ca57c20691 suppress soup update output for cleaner console 2024-04-03 10:31:24 -04:00
Jason Ertel
c4767bfdc8 suppress soup update output for cleaner console 2024-04-03 10:28:43 -04:00
Mike Reeves
0de1f76139 add agent count to reposync 2024-04-03 10:26:59 -04:00
Jason Ertel
5f4a0fdfad suppress soup update output for cleaner console 2024-04-03 10:26:48 -04:00
m0duspwnens
18f95e867f port 9093 for kafka docker 2024-04-03 10:24:53 -04:00
m0duspwnens
ed6137a76a allow sensor and searchnode to connect to manager kafka ports 2024-04-03 10:24:10 -04:00
m0duspwnens
c3f02a698e add kafka nodes as extra hosts for the container 2024-04-03 10:23:36 -04:00
m0duspwnens
db106f8ca1 listen on 0.0.0.0 for CONTROLLER 2024-04-03 10:22:47 -04:00
Jason Ertel
c712529cf6 suppress soup update output for cleaner console 2024-04-03 10:21:35 -04:00
Mike Reeves
976ddd3982 add agentstatus to telegraf 2024-04-03 10:06:08 -04:00
Mike Reeves
64748b98ad add agentstatus to telegraf 2024-04-03 09:56:12 -04:00
Mike Reeves
3335612365 add agentstatus to telegraf 2024-04-03 09:54:16 -04:00
Mike Reeves
513273c8c3 add agentstatus to telegraf 2024-04-03 09:43:55 -04:00
Mike Reeves
0dfde3c9f2 add agentstatus to telegraf 2024-04-03 09:40:14 -04:00
Mike Reeves
0efdcfcb52 add agentstatus to telegraf 2024-04-03 09:36:02 -04:00
Josh Brower
fbdcc53fe0 Merge pull request #12732 from Security-Onion-Solutions/2.4/detections-defaults
Feature - auto-enabled Sigma rules
2024-04-03 09:01:09 -04:00
m0duspwnens
8e47cc73a5 kafka.nodes pillar to lf 2024-04-03 08:54:17 -04:00
m0duspwnens
639bf05081 add so-manager to kafka.nodes pillar 2024-04-03 08:52:26 -04:00
Jason Ertel
c1b5ef0891 ensure so-yaml.py is updated during soup 2024-04-03 08:44:40 -04:00
DefensiveDepth
a8f25150f6 Feature - auto-enabled Sigma rules 2024-04-03 08:21:50 -04:00
Jason Ertel
1ee2a6d37b Improve wording for Airgap annotation 2024-04-03 08:21:30 -04:00
Mike Reeves
f64d9224fb Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into metrics 2024-04-02 17:22:20 -04:00
m0duspwnens
4e142e0212 put alphabetical 2024-04-02 16:47:35 -04:00
m0duspwnens
c9bf1c86c6 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 16:40:47 -04:00
reyesj2
82830c8173 Fix typos and fix error related to elasticsearch saltstate being called from logstash state. Logstash will be removed from kafkanodes in future
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:37:39 -04:00
reyesj2
7f5741c43b Fix kafka storage setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:36:22 -04:00
reyesj2
643d4831c1 CRLF -> LF
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:35:14 -04:00
reyesj2
b032eed22a Update kafka to use manager docker registry
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:34:06 -04:00
reyesj2
1b49c8540e Fix kafka keystore script
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 16:32:15 -04:00
m0duspwnens
f7534a0ae3 make manager download so-kafka container 2024-04-02 16:01:12 -04:00
Jason Ertel
b6187ab769 Improve wording for Airgap annotation 2024-04-02 15:54:39 -04:00
m0duspwnens
780ad9eb10 add kafka to manager nodes 2024-04-02 15:50:25 -04:00
Mike Reeves
283939b18a Gather metrics from elastic agent to influx 2024-04-02 15:36:01 -04:00
m0duspwnens
e25bc8efe4 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 13:36:47 -04:00
Jason Ertel
3b112e20e3 fix syntax error 2024-04-02 12:32:33 -04:00
reyesj2
26abe90671 Removed duplicate kafka setup
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-04-02 12:19:46 -04:00
Doug Burks
23a6c4adb6 Merge pull request #12725 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module strelka #12716
2024-04-02 10:54:15 -04:00
Doug Burks
2f03cbf115 FEATURE: Add Events table columns for event.module strelka #12716 2024-04-02 10:42:20 -04:00
Doug Burks
a678a5a416 Merge pull request #12724 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module strelka #12716
2024-04-02 10:15:20 -04:00
Doug Burks
b2b54ccf60 FEATURE: Add Events table columns for event.module strelka #12716 2024-04-02 10:11:16 -04:00
Doug Burks
55e71c867c Merge pull request #12723 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module playbook #12703
2024-04-02 10:04:21 -04:00
Doug Burks
6c2437f8ef FEATURE: Add Events table columns for event.module playbook #12703 2024-04-02 09:55:56 -04:00
Doug Burks
261f2cbaf7 Merge pull request #12722 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module strelka #12716
2024-04-02 09:43:15 -04:00
Jason Ertel
f083558666 break out into sep func 2024-04-02 09:42:43 -04:00
Doug Burks
505eeea66a Update defaults.yaml 2024-04-02 09:39:54 -04:00
Josh Brower
1001aa665d Merge pull request #12720 from Security-Onion-Solutions/2.4/detections-defaults
Add default columns
2024-04-02 09:21:06 -04:00
DefensiveDepth
7f488422b0 Add default columns 2024-04-02 09:13:27 -04:00
Jason Ertel
f17d8d3369 analytics 2024-04-01 10:59:44 -04:00
Jason Ertel
ff777560ac limit col size 2024-04-01 10:35:15 -04:00
Jason Ertel
2c68fd6311 limit col size 2024-04-01 10:32:54 -04:00
Jason Ertel
c1bf710e46 limit col size 2024-04-01 10:32:25 -04:00
Jason Ertel
9d2b40f366 Merge branch '2.4/dev' into jertel/ana 2024-04-01 09:50:38 -04:00
Jason Ertel
3aea2dec85 analytics 2024-04-01 09:50:18 -04:00
coreyogburn
65f6b7022c Merge pull request #12702 from Security-Onion-Solutions/cogburn/yaml-fix
Correct YAML
2024-03-29 15:59:34 -06:00
Corey Ogburn
e5a3a54aea Proper YAML 2024-03-29 14:31:43 -06:00
Doug Burks
be88dbe181 Merge pull request #12700 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add individual dashboards for Zeek SSL and Suricata SSL logs…
2024-03-29 15:41:14 -04:00
Doug Burks
b64ed5535e FEATURE: Add individual dashboards for Zeek SSL and Suricata SSL logs #12699 2024-03-29 15:29:38 -04:00
Doug Burks
5be56703e9 Merge pull request #12698 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for zeek ssl and suricata ssl #12697
2024-03-29 14:46:39 -04:00
Doug Burks
0c7ba62867 FEATURE: Add Events table columns for zeek ssl and suricata ssl #12697 2024-03-29 14:44:29 -04:00
coreyogburn
d9d851040c Merge pull request #12696 from Security-Onion-Solutions/cogburn/manual-sync
New Settings for Manual Sync in Detections
2024-03-29 12:43:08 -06:00
Corey Ogburn
e747a4e3fe New Settings for Manual Sync in Detections 2024-03-29 12:25:03 -06:00
Doug Burks
cc2164221c Merge pull request #12695 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add process.command_line to Process Info and Process Ancestry dashboards #12694
2024-03-29 13:04:09 -04:00
Doug Burks
102c3271d1 FEATURE: Add process.command_line to Process Info and Process Ancestry dashboards #12694 2024-03-29 12:04:47 -04:00
DefensiveDepth
32b8649c77 Add more error checking 2024-03-28 14:31:02 -04:00
DefensiveDepth
9c5ba92589 Check if container is running first 2024-03-28 13:23:40 -04:00
DefensiveDepth
d2c9e0ea4a Cleanup 2024-03-28 13:04:48 -04:00
Jason Ertel
2928b71616 Merge pull request #12683 from Security-Onion-Solutions/jertel/lc
disregard errors in removed applications that occurred before th…
2024-03-28 09:48:26 -04:00
Jason Ertel
216b8c01bf disregard errors that in removed applications that occurred before the upgrade 2024-03-28 09:31:39 -04:00
DefensiveDepth
ce0c9f846d Remove containers from so-status 2024-03-27 16:13:52 -04:00
DefensiveDepth
ba262ee01a Check to see if Playbook is enabled 2024-03-27 15:43:25 -04:00
DefensiveDepth
b571eeb8e6 Initial cut of .70 soup changes 2024-03-27 14:58:16 -04:00
Mike Reeves
7fe377f899 Merge pull request #12674 from Security-Onion-Solutions/ipv6fix
Fix Input Validation to allow for IPv6
2024-03-27 09:48:01 -04:00
Mike Reeves
d57f773072 Fix regex to allow ipv6 in bpfs 2024-03-27 09:36:42 -04:00
Doug Burks
389357ad2b Merge pull request #12667 from Security-Onion-Solutions/dougburks-patch-1
FEATURE: Add Events table columns for event.module elastic_agent #12666
2024-03-26 16:11:46 -04:00
Doug Burks
e2caf4668e FEATURE: Add Events table columns for event.module elastic_agent #12666 2024-03-26 16:08:41 -04:00
Josh Brower
63a58efba4 Merge pull request #12656 from Security-Onion-Solutions/2.4/detections-fixes
Add bindings for sigma repos
2024-03-26 09:33:38 -04:00
DefensiveDepth
bbcd3116f7 Fixes 2024-03-26 09:31:46 -04:00
Josh Brower
9c12aa261e Merge pull request #12660 from Security-Onion-Solutions/kilo
Initial cut to remove Playbook and deps
2024-03-26 08:31:11 -04:00
DefensiveDepth
cc0f4847ba Casing and validation 2024-03-26 08:10:57 -04:00
Doug Burks
923b80ba60 Merge pull request #12663 from Security-Onion-Solutions/feature/improve-soc-dashboards
FEATURE: Include additional groupby fields in Dashboards relating to sankey diagrams #12657
2024-03-26 07:52:54 -04:00
DefensiveDepth
7c4ea8a58e Add Detections SOC Config 2024-03-26 07:39:39 -04:00
Doug Burks
20bd9a9701 FEATURE: Include additional groupby fields in Dashboards relating to sankey diagrams #12657 2024-03-26 07:39:24 -04:00
DefensiveDepth
49fa800b2b Add bindings for sigma repos 2024-03-25 14:45:50 -04:00
reyesj2
446f1ffdf5 merge 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-03-25 13:55:48 -04:00
reyesj2
8cf29682bb Update to merge in 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:41:23 -05:00
reyesj2
86dc7cc804 Kafka init
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:34:25 -05:00
73 changed files with 2440 additions and 134 deletions

View File

@@ -19,4 +19,4 @@ role:
receiver:
standalone:
searchnode:
sensor:

pillar/kafka/nodes.sls (new file, 30 lines)
View File

@@ -0,0 +1,30 @@
{% set current_kafkanodes = salt.saltutil.runner('mine.get', tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-receiver', fun='network.ip_addrs', tgt_type='compound') %}
{% set pillar_kafkanodes = salt['pillar.get']('kafka:nodes', default={}, merge=True) %}
{% set existing_ids = [] %}
{% for node in pillar_kafkanodes.values() %}
{% if node.get('id') %}
{% do existing_ids.append(node['nodeid']) %}
{% endif %}
{% endfor %}
{% set all_possible_ids = range(1, 256)|list %}
{% set available_ids = [] %}
{% for id in all_possible_ids %}
{% if id not in existing_ids %}
{% do available_ids.append(id) %}
{% endif %}
{% endfor %}
{% set final_nodes = pillar_kafkanodes.copy() %}
{% for minionid, ip in current_kafkanodes.items() %}
{% set hostname = minionid.split('_')[0] %}
{% if hostname not in final_nodes %}
{% set new_id = available_ids.pop(0) %}
{% do final_nodes.update({hostname: {'nodeid': new_id, 'ip': ip[0]}}) %}
{% endif %}
{% endfor %}
kafka:
nodes: {{ final_nodes|tojson }}
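This pillar walks the Salt mine for manager, managersearch, standalone, and receiver minions and assigns each host a free broker ID between 1 and 255, keeping any IDs already recorded in the existing kafka:nodes pillar. A minimal sketch of the rendered result, with hypothetical hostnames and addresses (the template actually emits the mapping as a single JSON line via tojson; it is expanded here for readability):

# Hypothetical rendered kafka:nodes pillar for a grid with one manager and one receiver;
# hostnames, IPs, and the assigned nodeid values are illustrative only.
kafka:
  nodes:
    so-manager:
      nodeid: 1
      ip: 192.168.1.10
    so-receiver:
      nodeid: 2
      ip: 192.168.1.20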

View File

@@ -61,6 +61,9 @@ base:
- backup.adv_backup
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
- stig.soc_stig
'*_sensor':
@@ -176,6 +179,9 @@ base:
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- stig.soc_stig
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
'*_heavynode':
- elasticsearch.auth
@@ -232,6 +238,9 @@ base:
- redis.adv_redis
- minions.{{ grains.id }}
- minions.adv_{{ grains.id }}
- kafka.nodes
- kafka.soc_kafka
- kafka.adv_kafka
'*_import':
- secrets

View File

@@ -101,7 +101,8 @@
'utility',
'schedule',
'docker_clean',
'stig',
'kafka'
],
'so-managersearch': [
'salt.master',
@@ -122,7 +123,8 @@
'utility',
'schedule',
'docker_clean',
'stig',
'kafka'
],
'so-searchnode': [
'ssl',
@@ -156,7 +158,8 @@
'schedule',
'tcpreplay',
'docker_clean',
'stig',
'kafka'
],
'so-sensor': [
'ssl',
@@ -187,7 +190,9 @@
'telegraf',
'firewall',
'schedule',
'docker_clean',
'kafka',
'elasticsearch.ca'
],
'so-desktop': [
'ssl',

View File

@@ -70,3 +70,17 @@ x509_signing_policies:
- authorityKeyIdentifier: keyid,issuer:always
- days_valid: 820
- copypath: /etc/pki/issued_certs/
kafka:
- minions: '*'
- signing_private_key: /etc/pki/ca.key
- signing_cert: /etc/pki/ca.crt
- C: US
- ST: Utah
- L: Salt Lake City
- basicConstraints: "critical CA:false"
- keyUsage: "digitalSignature, keyEncipherment"
- subjectKeyIdentifier: hash
- authorityKeyIdentifier: keyid,issuer:always
- extendedKeyUsage: "serverAuth, clientAuth"
- days_valid: 820
- copypath: /etc/pki/issued_certs/
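The new signing policy only defines how the CA signs Kafka certificates; the key and certificate themselves are requested from the ssl state, which salt/kafka/config.sls includes but which is not part of this diff. A rough sketch of what such a request could look like with Salt's x509 state (the state ID, paths, and CN below are illustrative, not taken from this change set):

# Hypothetical example of requesting a certificate under the new "kafka" signing policy.
kafka_crt:
  x509.certificate_managed:
    - name: /etc/pki/kafka.crt
    - ca_server: {{ GLOBALS.manager }}
    - signing_policy: kafka
    - private_key: /etc/pki/kafka.key
    - CN: {{ GLOBALS.hostname }}
    - days_remaining: 0
    - backup: True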

View File

@@ -1,9 +1,11 @@
{% if '2.4' in salt['cp.get_file_str']('/etc/soversion') %}
{% import_yaml '/opt/so/saltstack/local/pillar/global/soc_global.sls' as SOC_GLOBAL %}
{% if SOC_GLOBAL.global.airgap %}
{% set UPDATE_DIR='/tmp/soagupdate/SecurityOnion' %}
{% else %}
{% set UPDATE_DIR='/tmp/sogh/securityonion' %}
{% endif %}
remove_common_soup:
file.absent:
@@ -68,3 +70,19 @@ copy_so-firewall_sbin:
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-firewall
- force: True
- preserve: True
copy_so-yaml_sbin:
file.copy:
- name: /usr/sbin/so-yaml.py
- source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-yaml.py
- force: True
- preserve: True
{% else %}
fix_23_soup_sbin:
cmd.run:
- name: curl -s -f -o /usr/sbin/soup https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.3/main/salt/common/tools/sbin/soup
fix_23_soup_salt:
cmd.run:
- name: curl -s -f -o /opt/so/saltstack/defalt/salt/common/tools/sbin/soup https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.3/main/salt/common/tools/sbin/soup
{% endif %}

View File

@@ -248,6 +248,14 @@ get_random_value() {
head -c 5000 /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w $length | head -n 1
}
get_agent_count() {
if [ -f /opt/so/log/agents/agentstatus.log ]; then
AGENTCOUNT=$(cat /opt/so/log/agents/agentstatus.log | grep -wF active | awk '{print $2}')
else
AGENTCOUNT=0
fi
}
gpg_rpm_import() {
if [[ $is_oracle ]]; then
if [[ "$WHATWOULDYOUSAYYAHDOHERE" == "setup" ]]; then
@@ -329,7 +337,7 @@ lookup_salt_value() {
local="" local=""
fi fi
salt-call --no-color ${kind}.get ${group}${key} --out=${output} ${local} salt-call -lerror --no-color ${kind}.get ${group}${key} --out=${output} ${local}
} }
lookup_pillar() { lookup_pillar() {
@@ -570,8 +578,9 @@ sync_options() {
set_version
set_os
salt_minion_count
get_agent_count
echo "$VERSION/$OS/$(uname -r)/$MINIONCOUNT:$AGENTCOUNT/$(read_feat)"
}
systemctl_func() {

View File

@@ -50,6 +50,7 @@ container_list() {
"so-idh" "so-idh"
"so-idstools" "so-idstools"
"so-influxdb" "so-influxdb"
"so-kafka"
"so-kibana" "so-kibana"
"so-kratos" "so-kratos"
"so-logstash" "so-logstash"
@@ -64,7 +65,7 @@ container_list() {
"so-strelka-manager" "so-strelka-manager"
"so-suricata" "so-suricata"
"so-telegraf" "so-telegraf"
"so-zeek" "so-zeek"
) )
else else
TRUSTED_CONTAINERS=( TRUSTED_CONTAINERS=(

View File

@@ -198,6 +198,8 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|req.LocalMeta.host.ip" # known issue in GH EXCLUDED_ERRORS="$EXCLUDED_ERRORS|req.LocalMeta.host.ip" # known issue in GH
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|sendmail" # zeek EXCLUDED_ERRORS="$EXCLUDED_ERRORS|sendmail" # zeek
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|stats.log" EXCLUDED_ERRORS="$EXCLUDED_ERRORS|stats.log"
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Unknown column" # Elastalert errors from running EQL queries
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|parsing_exception" # Elastalert EQL parsing issue. Temp.
EXCLUDED_ERRORS="$EXCLUDED_ERRORS|context deadline exceeded" EXCLUDED_ERRORS="$EXCLUDED_ERRORS|context deadline exceeded"
fi fi
@@ -207,6 +209,9 @@ RESULT=0
CONTAINER_IDS=$(docker ps -q)
exclude_container so-kibana # kibana error logs are too verbose with large varieties of errors most of which are temporary
exclude_container so-idstools # ignore due to known issues and noisy logging
exclude_container so-playbook # Playbook is removed as of 2.4.70, disregard output in stopped containers
exclude_container so-mysql # MySQL is removed as of 2.4.70, disregard output in stopped containers
exclude_container so-soctopus # Soctopus is removed as of 2.4.70, disregard output in stopped containers
for container_id in $CONTAINER_IDS; do
container_name=$(docker ps --format json | jq ". | select(.ID==\"$container_id\")|.Names")
@@ -224,10 +229,13 @@ exclude_log "kibana.log" # kibana error logs are too verbose with large variet
exclude_log "spool" # disregard zeek analyze logs as this is data specific exclude_log "spool" # disregard zeek analyze logs as this is data specific
exclude_log "import" # disregard imported test data the contains error strings exclude_log "import" # disregard imported test data the contains error strings
exclude_log "update.log" # ignore playbook updates due to several known issues exclude_log "update.log" # ignore playbook updates due to several known issues
exclude_log "playbook.log" # ignore due to several playbook known issues
exclude_log "cron-cluster-delete.log" # ignore since Curator has been removed exclude_log "cron-cluster-delete.log" # ignore since Curator has been removed
exclude_log "cron-close.log" # ignore since Curator has been removed exclude_log "cron-close.log" # ignore since Curator has been removed
exclude_log "curator.log" # ignore since Curator has been removed exclude_log "curator.log" # ignore since Curator has been removed
exclude_log "playbook.log" # Playbook is removed as of 2.4.70, logs may still be on disk
exclude_log "mysqld.log" # MySQL is removed as of 2.4.70, logs may still be on disk
exclude_log "soctopus.log" # Soctopus is removed as of 2.4.70, logs may still be on disk
exclude_log "agentstatus.log" # ignore this log since it tracks agents in error state
for log_file in $(cat /tmp/log_check_files); do for log_file in $(cat /tmp/log_check_files); do
status "Checking log file $log_file" status "Checking log file $log_file"

View File

@@ -185,3 +185,11 @@ docker:
custom_bind_mounts: []
extra_hosts: []
extra_env: []
'so-kafka':
final_octet: 88
port_bindings:
- 0.0.0.0:9092:9092
- 0.0.0.0:9093:9093
custom_bind_mounts: []
extra_hosts: []
extra_env: []

View File

@@ -65,3 +65,4 @@ docker:
so-steno: *dockerOptions
so-suricata: *dockerOptions
so-zeek: *dockerOptions
so-kafka: *dockerOptions

View File

@@ -0,0 +1,29 @@
{
"package": {
"name": "winlog",
"version": ""
},
"name": "windows-defender",
"namespace": "default",
"description": "Windows Defender - Operational logs",
"policy_id": "endpoints-initial",
"inputs": {
"winlogs-winlog": {
"enabled": true,
"streams": {
"winlog.winlog": {
"enabled": true,
"vars": {
"channel": "Microsoft-Windows-Windows Defender/Operational",
"data_stream.dataset": "winlog.winlog",
"preserve_original_event": false,
"providers": [],
"ignore_older": "72h",
"language": 0,
"tags": [] }
}
}
}
},
"force": true
}

View File

@@ -16,6 +16,9 @@
"paths": [ "paths": [
"/var/log/auth.log*", "/var/log/auth.log*",
"/var/log/secure*" "/var/log/secure*"
],
"tags": [
"so-grid-node"
]
}
},
@@ -25,6 +28,9 @@
"paths": [ "paths": [
"/var/log/messages*", "/var/log/messages*",
"/var/log/syslog*" "/var/log/syslog*"
],
"tags": [
"so-grid-node"
]
}
}

View File

@@ -16,6 +16,9 @@
"paths": [ "paths": [
"/var/log/auth.log*", "/var/log/auth.log*",
"/var/log/secure*" "/var/log/secure*"
],
"tags": [
"so-grid-node"
]
}
},
@@ -25,6 +28,9 @@
"paths": [ "paths": [
"/var/log/messages*", "/var/log/messages*",
"/var/log/syslog*" "/var/log/syslog*"
],
"tags": [
"so-grid-node"
]
}
}

View File

@@ -4,7 +4,7 @@
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
# Move our new CA over so Elastic and Logstash can use SSL with the internal CA

View File

@@ -2402,6 +2402,50 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-cef_x_log:
index_sorting: False
index_template:
index_patterns:
- "logs-cef.log-*"
template:
settings:
index:
lifecycle:
name: so-logs-cef.log-logs
number_of_replicas: 0
composed_of:
- "logs-cef.log@package"
- "logs-cef.log@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-checkpoint_x_firewall:
index_sorting: False
index_template:

View File

@@ -83,6 +83,7 @@
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } }, { "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
{ "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } }, { "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } },
{ "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } }, { "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } } { "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } }
], ],
"on_failure": [ "on_failure": [

View File

@@ -366,6 +366,7 @@ elasticsearch:
so-logs-azure_x_signinlogs: *indexSettings
so-logs-azure_x_springcloudlogs: *indexSettings
so-logs-barracuda_x_waf: *indexSettings
so-logs-cef_x_log: *indexSettings
so-logs-cisco_asa_x_log: *indexSettings
so-logs-cisco_ftd_x_log: *indexSettings
so-logs-cisco_ios_x_log: *indexSettings

View File

@@ -30,7 +30,8 @@
"type": "keyword" "type": "keyword"
}, },
"author": { "author": {
"type": "text" "ignore_above": 1024,
"type": "keyword"
}, },
"description": { "description": {
"type": "text" "type": "text"

View File

@@ -27,6 +27,7 @@
'so-elastic-fleet',
'so-elastic-fleet-package-registry',
'so-influxdb',
'so-kafka',
'so-kibana',
'so-kratos',
'so-logstash',
@@ -80,6 +81,7 @@
{% set NODE_CONTAINERS = [
'so-logstash',
'so-redis',
'so-kafka'
] %}
{% elif GLOBALS.role == 'so-idh' %}

View File

@@ -90,6 +90,11 @@ firewall:
tcp:
- 8086
udp: []
kafka:
tcp:
- 9092
- 9093
udp: []
kibana:
tcp:
- 5601
@@ -364,6 +369,7 @@ firewall:
- elastic_agent_update
- localrules
- sensoroni
- kafka
fleet:
portgroups:
- elasticsearch_rest
@@ -399,6 +405,7 @@ firewall:
- docker_registry
- influxdb
- sensoroni
- kafka
searchnode:
portgroups:
- redis
@@ -412,6 +419,7 @@ firewall:
- elastic_agent_data
- elastic_agent_update
- sensoroni
- kafka
heavynode:
portgroups:
- redis
@@ -1275,14 +1283,17 @@ firewall:
- beats_5044
- beats_5644
- elastic_agent_data
- kafka
searchnode:
portgroups:
- redis
- beats_5644
- kafka
managersearch:
portgroups:
- redis
- beats_5644
- kafka
self:
portgroups:
- redis

View File

@@ -115,6 +115,9 @@ firewall:
influxdb:
tcp: *tcpsettings
udp: *udpsettings
kafka:
tcp: *tcpsettings
udp: *udpsettings
kibana:
tcp: *tcpsettings
udp: *udpsettings
@@ -932,7 +935,6 @@ firewall:
portgroups: *portgroupshost
customhostgroup9:
portgroups: *portgroupshost
idh: idh:
chain: chain:
DOCKER-USER: DOCKER-USER:

View File

@@ -1,2 +1,3 @@
global:
pcapengine: STENO
pipeline: REDIS

View File

@@ -28,7 +28,7 @@ global:
description: Used for handling of authentication cookies.
global: True
airgap:
description: Airgapped systems do not have network connectivity to the internet. This setting represents how this grid was configured during initial setup. While it is technically possible to manually switch systems between airgap and non-airgap, there are some nuances and additional steps involved. For that reason this setting is marked read-only. Contact your support representative for guidance if there is a need to change this setting.
global: True
readonly: True
imagerepo:
@@ -36,9 +36,10 @@ global:
global: True
advanced: True
pipeline:
description: Sets which pipeline technology for events to use. Currently only Redis is fully supported. Kafka is experimental and requires a Security Onion Pro license.
regex: ^(REDIS|KAFKA)$
regexFailureMessage: You must enter either REDIS or KAFKA.
global: True
readonly: True
advanced: True
repo_host:
description: Specify the host where operating system packages will be served from.

salt/kafka/config.sls (new file, 106 lines)
View File

@@ -0,0 +1,106 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_ips_logstash = [] %}
{% set kafka_ips_kraft = [] %}
{% set kafkanodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set kafka_ip = GLOBALS.node_ip %}
{# Create list for kafka <-> logstash/searchnode communications #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_logstash.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% set kafka_server_list = "','".join(kafka_ips_logstash) %}
{# Create a list for kraft controller <-> kraft controller communications. Used for Kafka metadata management #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_kraft.append(node_data['nodeid'] ~ "@" ~ node_data['ip'] ~ ":9093") %}
{% endfor %}
{% set kraft_server_list = "','".join(kafka_ips_kraft) %}
include:
- ssl
kafka_group:
group.present:
- name: kafka
- gid: 960
kafka:
user.present:
- uid: 960
- gid: 960
{# Future tools to query kafka directly / show consumer groups
kafka_sbin_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin
- user: 960
- group: 960
- file_mode: 755 #}
kafka_sbin_jinja_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin_jinja
- user: 960
- group: 960
- file_mode: 755
- template: jinja
- defaults:
GLOBALS: {{ GLOBALS }}
kakfa_log_dir:
file.directory:
- name: /opt/so/log/kafka
- user: 960
- group: 960
- makedirs: True
kafka_data_dir:
file.directory:
- name: /nsm/kafka/data
- user: 960
- group: 960
- makedirs: True
kafka_generate_keystore:
cmd.run:
- name: "/usr/sbin/so-kafka-generate-keystore"
- onchanges:
- x509: /etc/pki/kafka.key
kafka_keystore_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.jks
- mode: 640
- user: 960
- group: 939
{% for sc in ['server', 'client'] %}
kafka_kraft_{{sc}}_properties:
file.managed:
- source: salt://kafka/etc/{{sc}}.properties.jinja
- name: /opt/so/conf/kafka/{{sc}}.properties
- template: jinja
- user: 960
- group: 960
- makedirs: True
- show_changes: False
{% endfor %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

salt/kafka/defaults.yaml (new file, 39 lines)
View File

@@ -0,0 +1,39 @@
kafka:
enabled: False
config:
server:
advertised_x_listeners:
auto_x_create_x_topics_x_enable: true
controller_x_listener_x_names: CONTROLLER
controller_x_quorum_x_voters:
inter_x_broker_x_listener_x_name: BROKER
listeners: BROKER://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener_x_security_x_protocol_x_map: CONTROLLER:SSL,BROKER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
log_x_segment_x_bytes: 1073741824
node_x_id:
num_x_io_x_threads: 8
num_x_network_x_threads: 3
num_x_partitions: 1
num_x_recovery_x_threads_x_per_x_data_x_dir: 1
offsets_x_topic_x_replication_x_factor: 1
process_x_roles: broker
socket_x_receive_x_buffer_x_bytes: 102400
socket_x_request_x_max_x_bytes: 104857600
socket_x_send_x_buffer_x_bytes: 102400
ssl_x_keystore_x_location: /etc/pki/kafka.jks
ssl_x_keystore_x_password: changeit
ssl_x_keystore_x_type: JKS
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
ssl_x_truststore_x_password: changeit
transaction_x_state_x_log_x_min_x_isr: 1
transaction_x_state_x_log_x_replication_x_factor: 1
client:
security_x_protocol: SSL
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
ssl_x_truststore_x_password: changeit
ssl_x_keystore_x_location: /etc/pki/kafka.jks
ssl_x_keystore_x_type: JKS
ssl_x_keystore_x_password: changeit

salt/kafka/disabled.sls (new file, 16 lines)
View File

@@ -0,0 +1,16 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
include:
- kafka.sostatus
so-kafka:
docker_container.absent:
- force: True
so-kafka_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$

salt/kafka/enabled.sls (new file, 64 lines)
View File

@@ -0,0 +1,64 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% set KAFKANODES = salt['pillar.get']('kafka:nodes', {}) %}
include:
- elasticsearch.ca
- kafka.sostatus
- kafka.config
- kafka.storage
so-kafka:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }}
- hostname: so-kafka
- name: so-kafka
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-kafka'].ip }}
- user: kafka
- environment:
- KAFKA_HEAP_OPTS=-Xmx2G -Xms1G
- extra_hosts:
{% for node in KAFKANODES %}
- {{ node }}:{{ KAFKANODES[node].ip }}
{% endfor %}
{% if DOCKER.containers['so-kafka'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-kafka'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- port_bindings:
{% for BINDING in DOCKER.containers['so-kafka'].port_bindings %}
- {{ BINDING }}
{% endfor %}
- binds:
- /etc/pki/kafka.jks:/etc/pki/kafka.jks
- /opt/so/conf/ca/cacerts:/etc/pki/java/sos/cacerts
- /nsm/kafka/data/:/nsm/kafka/data/:rw
- /opt/so/conf/kafka/server.properties:/kafka/config/kraft/server.properties
- /opt/so/conf/kafka/client.properties:/kafka/config/kraft/client.properties
- watch:
{% for sc in ['server', 'client'] %}
- file: kafka_kraft_{{sc}}_properties
{% endfor %}
delete_so-kafka_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED -%}
{{ KAFKAMERGED.config.client | yaml(False) | replace("_x_", ".") }}

View File

@@ -0,0 +1,7 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'kafka/map.jinja' import KAFKAMERGED -%}
{{ KAFKAMERGED.config.server | yaml(False) | replace("_x_", ".") }}
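Rendered with the defaults above, the template emits plain key: value lines with the _x_ markers turned back into dots (Java's Properties loader accepts ":" as well as "=" as the separator). A hypothetical excerpt for a broker with node ID 1 at 192.168.1.10:

# Hypothetical excerpt of the rendered /opt/so/conf/kafka/server.properties;
# node.id and advertised.listeners are filled in by map.jinja, the rest comes from defaults.yaml.
node.id: 1
process.roles: broker
advertised.listeners: BROKER://192.168.1.10:9092
listeners: BROKER://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener.security.protocol.map: CONTROLLER:SSL,BROKER:SSL
controller.listener.names: CONTROLLER
log.dirs: /nsm/kafka/data
ssl.keystore.location: /etc/pki/kafka.jks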

salt/kafka/init.sls (new file, 14 lines)
View File

@@ -0,0 +1,14 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
include:
{% if GLOBALS.pipeline == "KAFKA" and KAFKAMERGED.enabled %}
- kafka.enabled
{% else %}
- kafka.disabled
{% endif %}
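Both switches have to agree before the broker is started: the grid-wide pipeline must be set to KAFKA and the Kafka component itself must be enabled, otherwise kafka.disabled is applied. A hypothetical pillar override that satisfies init.sls:

# Hypothetical pillar values required for kafka.enabled to be included.
global:
  pipeline: KAFKA
kafka:
  enabled: True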

salt/kafka/map.jinja (new file, 20 lines)
View File

@@ -0,0 +1,20 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% set KAFKAMERGED = salt['pillar.get']('kafka', KAFKADEFAULTS.kafka, merge=True) %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% do KAFKAMERGED.config.server.update({ 'node_x_id': salt['pillar.get']('kafka:nodes:' ~ GLOBALS.hostname ~ ':nodeid')}) %}
{% do KAFKAMERGED.config.server.update({'advertised_x_listeners': 'BROKER://' ~ GLOBALS.node_ip ~ ':9092'}) %}
{% set nodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set combined = [] %}
{% for hostname, data in nodes.items() %}
{% do combined.append(data.nodeid ~ "@" ~ hostname ~ ":9093") %}
{% endfor %}
{% set kraft_controller_quorum_voters = ','.join(combined) %}
{% do KAFKAMERGED.config.server.update({'controller_x_quorum_x_voters': kraft_controller_quorum_voters}) %}
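Continuing the two-node example sketched for pillar/kafka/nodes.sls above, map.jinja would fill in the per-host and cluster-wide values roughly as follows (illustrative values only):

# Hypothetical values merged into KAFKAMERGED.config.server on host "so-manager".
node_x_id: 1                                                        # -> node.id
advertised_x_listeners: BROKER://192.168.1.10:9092                  # -> advertised.listeners
controller_x_quorum_x_voters: 1@so-manager:9093,2@so-receiver:9093  # -> controller.quorum.voters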

salt/kafka/soc_kafka.yaml (new file, 170 lines)
View File

@@ -0,0 +1,170 @@
kafka:
enabled:
description: Enable or disable Kafka.
helpLink: kafka.html
cluster_id:
description: The ID of the Kafka cluster.
readonly: True
advanced: True
sensitive: True
helpLink: kafka.html
config:
server:
advertised_x_listeners:
description: Specify the list of listeners (hostname and port) that Kafka brokers provide to clients for communication.
title: advertised.listeners
helpLink: kafka.html
auto_x_create_x_topics_x_enable:
description: Enable the auto creation of topics.
title: auto.create.topics.enable
forcedType: bool
helpLink: kafka.html
controller_x_listener_x_names:
description: Set listeners used by the controller in a comma-separated list.
title: controller.listener.names
helpLink: kafka.html
controller_x_quorum_x_voters:
description: A comma-separated list of ID and endpoint information mapped for a set of voters.
title: controller.quorum.voters
helpLink: kafka.html
inter_x_broker_x_listener_x_name:
description: The name of the listener used for inter-broker communication.
title: inter.broker.listener.name
helpLink: kafka.html
listeners:
description: Set of URIs that are listened on and the listener names, in a comma-separated list.
helpLink: kafka.html
listener_x_security_x_protocol_x_map:
description: Comma-separated mapping of listener names and security protocols.
title: listener.security.protocol.map
helpLink: kafka.html
log_x_dirs:
description: Where Kafka logs are stored within the Docker container.
title: log.dirs
helpLink: kafka.html
log_x_retention_x_check_x_interval_x_ms:
description: Frequency at which log files are checked if they are qualified for deletion.
title: log.retention.check.interval.ms
helpLink: kafka.html
log_x_retention_x_hours:
description: How long, in hours, a log file is kept.
title: log.retention.hours
forcedType: int
helpLink: kafka.html
log_x_segment_x_bytes:
description: The maximum allowable size for a log file.
title: log.segment.bytes
forcedType: int
helpLink: kafka.html
node_x_id:
description: The node ID corresponds to the roles performed by this process whenever process.roles is populated.
title: node.id
forcedType: int
readonly: True
helpLink: kafka.html
num_x_io_x_threads:
description: The number of threads used by Kafka.
title: num.io.threads
forcedType: int
helpLink: kafka.html
num_x_network_x_threads:
description: The number of threads used for network communication.
title: num.network.threads
forcedType: int
helpLink: kafka.html
num_x_partitions:
description: The number of log partitions assigned per topic.
title: num.partitions
forcedType: int
helpLink: kafka.html
num_x_recovery_x_threads_x_per_x_data_x_dir:
description: The number of threads used for log recovery at startup and flushing at shutdown. This number of threads is used per data directory.
title: num.recovery.threads.per.data.dir
forcedType: int
helpLink: kafka.html
offsets_x_topic_x_replication_x_factor:
description: The offsets topic replication factor.
title: offsets.topic.replication.factor
forcedType: int
helpLink: kafka.html
process_x_roles:
description: The roles the process performs. Use a comma-separated list if there are multiple.
title: process.roles
helpLink: kafka.html
socket_x_receive_x_buffer_x_bytes:
description: Size, in bytes of the SO_RCVBUF buffer. A value of -1 will use the OS default.
title: socket.receive.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
socket_x_request_x_max_x_bytes:
description: The maximum bytes allowed for a request to the socket.
title: socket.request.max.bytes
forcedType: int
helpLink: kafka.html
socket_x_send_x_buffer_x_bytes:
description: Size, in bytes of the SO_SNDBUF buffer. A value of -1 will use the OS default.
title: socket.send.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html
transaction_x_state_x_log_x_min_x_isr:
description: Overrides min.insync.replicas for the transaction topic. When a producer configures acks to "all" (or "-1"), this setting determines the minimum number of replicas required to acknowledge a write as successful. Failure to meet this minimum triggers an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used in conjunction, min.insync.replicas and acks enable stronger durability guarantees. For instance, creating a topic with a replication factor of 3, setting min.insync.replicas to 2, and using acks of "all" ensures that the producer raises an exception if a majority of replicas fail to receive a write.
title: transaction.state.log.min.isr
forcedType: int
helpLink: kafka.html
transaction_x_state_x_log_x_replication_x_factor:
description: Set the replication factor higher for the transaction topic to ensure availability. Internal topic creation will not proceed until the cluster size satisfies this replication factor prerequisite.
title: transaction.state.log.replication.factor
forcedType: int
helpLink: kafka.html
client:
security_x_protocol:
description: 'Broker communication protocol. Options are: SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT'
title: security.protocol
regex: ^(SASL_SSL|PLAINTEXT|SSL|SASL_PLAINTEXT)
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html

21
salt/kafka/sostatus.sls Normal file
View File

@@ -0,0 +1,21 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
append_so-kafka_so-status.conf:
file.append:
- name: /opt/so/conf/so-status/so-status.conf
- text: so-kafka
- unless: grep -q so-kafka /opt/so/conf/so-status/so-status.conf
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
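The append state above is guarded by the unless check, so it only fires when the so-kafka entry is missing from so-status.conf. A minimal shell sketch of the same check-then-append logic, using the same path and token as the state:

grep -q so-kafka /opt/so/conf/so-status/so-status.conf || echo so-kafka >> /opt/so/conf/so-status/so-status.conf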

38
salt/kafka/storage.sls Normal file
View File

@@ -0,0 +1,38 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id', default=None) %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
{% if kafka_cluster_id is none %}
generate_kafka_cluster_id:
cmd.run:
- name: /usr/sbin/so-kafka-clusterid
{% endif %}
{% endif %}
{# Initialize kafka storage if it doesn't already exist. Just looking for meta.properties in /nsm/kafka/data #}
{% if not salt['file.file_exists']('/nsm/kafka/data/meta.properties') %}
kafka_storage_init:
cmd.run:
- name: |
docker run -v /nsm/kafka/data:/nsm/kafka/data -v /opt/so/conf/kafka/server.properties:/kafka/config/kraft/newserver.properties --name so-kafkainit --user root --entrypoint /kafka/bin/kafka-storage.sh {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} format -t {{ kafka_cluster_id }} -c /kafka/config/kraft/newserver.properties
kafka_rm_kafkainit:
cmd.run:
- name: |
docker rm so-kafkainit
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
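The storage init state only runs when /nsm/kafka/data/meta.properties is absent, so reapplying the state will not reformat existing Kafka data. A quick manual sketch of the same precondition check:

if [ -f /nsm/kafka/data/meta.properties ]; then
  echo "Kafka storage already formatted; kafka_storage_init will be skipped"
fi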

View File

@@ -0,0 +1,13 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
# Generate a new keystore
docker run -v /etc/pki/kafka.p12:/etc/pki/kafka.p12 --name so-kafka-keystore --user root --entrypoint keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} -importkeystore -srckeystore /etc/pki/kafka.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore /etc/pki/kafka.jks -deststoretype JKS -deststorepass changeit -noprompt
docker cp so-kafka-keystore:/etc/pki/kafka.jks /etc/pki/kafka.jks
docker rm so-kafka-keystore

View File

@@ -75,9 +75,10 @@ so-logstash:
{% else %} {% else %}
- /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro - /etc/pki/tls/certs/intca.crt:/usr/share/filebeat/ca.crt:ro
{% endif %} {% endif %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode'] %} {% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode' ] %}
- /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro - /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
- /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro - /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
- /etc/pki/kafka-logstash.p12:/usr/share/logstash/kafka-logstash.p12:ro
{% endif %} {% endif %}
{% if GLOBALS.role == 'so-eval' %} {% if GLOBALS.role == 'so-eval' %}
- /nsm/zeek:/nsm/zeek:ro - /nsm/zeek:/nsm/zeek:ro

View File

@@ -4,9 +4,10 @@
# Elastic License 2.0. # Elastic License 2.0.
{% from 'logstash/map.jinja' import LOGSTASH_MERGED %} {% from 'logstash/map.jinja' import LOGSTASH_MERGED %}
{% from 'kafka/map.jinja' import KAFKAMERGED %}
include: include:
{% if LOGSTASH_MERGED.enabled %} {% if LOGSTASH_MERGED.enabled and not KAFKAMERGED.enabled %}
- logstash.enabled - logstash.enabled
{% else %} {% else %}
- logstash.disabled - logstash.disabled

View File

@@ -0,0 +1,35 @@
{% set kafka_brokers = salt['pillar.get']('logstash:nodes:receiver', {}) %}
{% set kafka_on_mngr = salt ['pillar.get']('logstash:nodes:manager', {}) %}
{% set broker_ips = [] %}
{% for node, node_data in kafka_brokers.items() %}
{% do broker_ips.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% for node, node_data in kafka_on_mngr.items() %}
{% do broker_ips.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% set bootstrap_servers = "','".join(broker_ips) %}
input {
kafka {
codec => json
topics => ['default-logs', 'kratos-logs', 'soc-logs', 'strelka-logs', 'suricata-logs', 'zeek-logs']
group_id => 'searchnodes'
client_id => '{{ GLOBALS.hostname }}'
security_protocol => 'SSL'
bootstrap_servers => '{{ bootstrap_servers }}'
ssl_keystore_location => '/usr/share/logstash/kafka-logstash.p12'
ssl_keystore_password => 'changeit'
ssl_keystore_type => 'PKCS12'
ssl_truststore_location => '/etc/pki/ca-trust/extracted/java/cacerts'
ssl_truststore_password => 'changeit'
decorate_events => true
tags => [ "elastic-agent", "input-{{ GLOBALS.hostname}}", "kafka" ]
}
}
filter {
if ![metadata] {
mutate {
rename => { "@metadata" => "metadata" }
}
}
}

View File

@@ -27,6 +27,15 @@ repo_log_dir:
- user - user
- group - group
agents_log_dir:
file.directory:
- name: /opt/so/log/agents
- user: root
- group: root
- recurse:
- user
- group
yara_log_dir: yara_log_dir:
file.directory: file.directory:
- name: /opt/so/log/yarasync - name: /opt/so/log/yarasync
@@ -101,6 +110,17 @@ so-repo-sync:
- hour: '{{ MANAGERMERGED.reposync.hour }}' - hour: '{{ MANAGERMERGED.reposync.hour }}'
- minute: '{{ MANAGERMERGED.reposync.minute }}' - minute: '{{ MANAGERMERGED.reposync.minute }}'
so_fleetagent_status:
cron.present:
- name: /usr/sbin/so-elasticagent-status > /opt/so/log/agents/agentstatus.log 2>&1
- identifier: so_fleetagent_status
- user: root
- minute: '*/5'
- hour: '*'
- daymonth: '*'
- month: '*'
- dayweek: '*'
socore_own_saltstack: socore_own_saltstack:
file.directory: file.directory:
- name: /opt/so/saltstack - name: /opt/so/saltstack

View File

@@ -0,0 +1,10 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
curl -s -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agent_status" | jq .

View File

@@ -0,0 +1,29 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
### THIS SCRIPT AND SALT STATE REFERENCES TO THIS SCRIPT ARE TO BE REMOVED ONCE INITIAL TESTING IS DONE - THESE VALUES WILL BE GENERATED IN SETUP AND SOUP
local_salt_dir=/opt/so/saltstack/local
if [[ -f /usr/sbin/so-common ]]; then
source /usr/sbin/so-common
else
source $(dirname $0)/../../../common/tools/sbin/so-common
fi
if ! grep -q "^ cluster_id:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
kafka_cluster_id=$(get_random_value 22)
echo 'kafka: ' > $local_salt_dir/pillar/kafka/soc_kafka.sls
echo ' cluster_id: '$kafka_cluster_id >> $local_salt_dir/pillar/kafka/soc_kafka.sls
if ! grep -q "^ kafkapass:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
kafkapass=$(get_random_value)
echo ' kafkapass: '$kafkapass >> $local_salt_dir/pillar/kafka/soc_kafka.sls
fi

View File

@@ -17,13 +17,16 @@ def showUsage(args):
print('Usage: {} <COMMAND> <YAML_FILE> [ARGS...]'.format(sys.argv[0])) print('Usage: {} <COMMAND> <YAML_FILE> [ARGS...]'.format(sys.argv[0]))
print(' General commands:') print(' General commands:')
print(' append - Append a list item to a yaml key, if it exists and is a list. Requires KEY and LISTITEM args.') print(' append - Append a list item to a yaml key, if it exists and is a list. Requires KEY and LISTITEM args.')
print(' add - Add a new key and set its value. Fails if key already exists. Requires KEY and VALUE args.')
print(' remove - Removes a yaml key, if it exists. Requires KEY arg.') print(' remove - Removes a yaml key, if it exists. Requires KEY arg.')
print('    replace - Replaces (or adds) a new key and sets its value. Requires KEY and VALUE args.')
print(' help - Prints this usage information.') print(' help - Prints this usage information.')
print('') print('')
print(' Where:') print(' Where:')
print(' YAML_FILE - Path to the file that will be modified. Ex: /opt/so/conf/service/conf.yaml') print(' YAML_FILE - Path to the file that will be modified. Ex: /opt/so/conf/service/conf.yaml')
print(' KEY - YAML key, does not support \' or " characters at this time. Ex: level1.level2') print(' KEY - YAML key, does not support \' or " characters at this time. Ex: level1.level2')
print(' LISTITEM - Item to add to the list.') print(' VALUE - Value to set for a given key')
print(' LISTITEM - Item to append to a given key\'s list value')
sys.exit(1) sys.exit(1)
@@ -37,6 +40,7 @@ def writeYaml(filename, content):
file = open(filename, "w") file = open(filename, "w")
return yaml.dump(content, file) return yaml.dump(content, file)
def appendItem(content, key, listItem): def appendItem(content, key, listItem):
pieces = key.split(".", 1) pieces = key.split(".", 1)
if len(pieces) > 1: if len(pieces) > 1:
@@ -51,6 +55,30 @@ def appendItem(content, key, listItem):
print("The key provided does not exist. No action was taken on the file.") print("The key provided does not exist. No action was taken on the file.")
return 1 return 1
def convertType(value):
if len(value) > 0 and (not value.startswith("0") or len(value) == 1):
if "." in value:
try:
value = float(value)
return value
except ValueError:
pass
try:
value = int(value)
return value
except ValueError:
pass
lowered_value = value.lower()
if lowered_value == "false":
return False
elif lowered_value == "true":
return True
return value
def append(args): def append(args):
if len(args) != 3: if len(args) != 3:
print('Missing filename, key arg, or list item to append', file=sys.stderr) print('Missing filename, key arg, or list item to append', file=sys.stderr)
@@ -62,11 +90,41 @@ def append(args):
listItem = args[2] listItem = args[2]
content = loadYaml(filename) content = loadYaml(filename)
appendItem(content, key, listItem) appendItem(content, key, convertType(listItem))
writeYaml(filename, content) writeYaml(filename, content)
return 0 return 0
def addKey(content, key, value):
pieces = key.split(".", 1)
if len(pieces) > 1:
if not pieces[0] in content:
content[pieces[0]] = {}
addKey(content[pieces[0]], pieces[1], value)
elif key in content:
raise KeyError("key already exists")
else:
content[key] = value
def add(args):
if len(args) != 3:
print('Missing filename, key arg, and/or value', file=sys.stderr)
showUsage(None)
return
filename = args[0]
key = args[1]
value = args[2]
content = loadYaml(filename)
addKey(content, key, convertType(value))
writeYaml(filename, content)
return 0
def removeKey(content, key): def removeKey(content, key):
pieces = key.split(".", 1) pieces = key.split(".", 1)
if len(pieces) > 1: if len(pieces) > 1:
@@ -91,6 +149,24 @@ def remove(args):
return 0 return 0
def replace(args):
if len(args) != 3:
print('Missing filename, key arg, and/or value', file=sys.stderr)
showUsage(None)
return
filename = args[0]
key = args[1]
value = args[2]
content = loadYaml(filename)
removeKey(content, key)
addKey(content, key, convertType(value))
writeYaml(filename, content)
return 0
def main(): def main():
args = sys.argv[1:] args = sys.argv[1:]
@@ -100,8 +176,10 @@ def main():
commands = { commands = {
"help": showUsage, "help": showUsage,
"add": add,
"append": append, "append": append,
"remove": remove, "remove": remove,
"replace": replace,
} }
code = 1 code = 1
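Per the usage text above, the new add and replace commands take a file, a dotted key, and a value; values are passed through convertType, so numerics and booleans are stored natively in the YAML. A hypothetical invocation (file path and keys are examples only, not files shipped by this change):

so-yaml.py add /opt/so/conf/service/conf.yaml level1.newkey 123       # stored as the integer 123
so-yaml.py replace /opt/so/conf/service/conf.yaml level1.newkey true  # key replaced with boolean true
so-yaml.py append /opt/so/conf/service/conf.yaml level1.list item1    # appends to an existing list value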

View File

@@ -42,6 +42,14 @@ class TestRemove(unittest.TestCase):
sysmock.assert_called() sysmock.assert_called()
self.assertIn(mock_stdout.getvalue(), "Usage:") self.assertIn(mock_stdout.getvalue(), "Usage:")
def test_remove_missing_arg(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stdout:
sys.argv = ["cmd", "help"]
soyaml.remove(["file"])
sysmock.assert_called()
self.assertIn(mock_stdout.getvalue(), "Missing filename or key arg\n")
def test_remove(self): def test_remove(self):
filename = "/tmp/so-yaml_test-remove.yaml" filename = "/tmp/so-yaml_test-remove.yaml"
file = open(filename, "w") file = open(filename, "w")
@@ -106,6 +114,14 @@ class TestRemove(unittest.TestCase):
sysmock.assert_called_once_with(1) sysmock.assert_called_once_with(1)
self.assertIn(mock_stdout.getvalue(), "Missing filename or key arg\n") self.assertIn(mock_stdout.getvalue(), "Missing filename or key arg\n")
def test_append_missing_arg(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stdout:
sys.argv = ["cmd", "help"]
soyaml.append(["file", "key"])
sysmock.assert_called()
self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, or list item to append\n")
def test_append(self): def test_append(self):
filename = "/tmp/so-yaml_test-remove.yaml" filename = "/tmp/so-yaml_test-remove.yaml"
file = open(filename, "w") file = open(filename, "w")
@@ -201,3 +217,146 @@ class TestRemove(unittest.TestCase):
soyaml.main() soyaml.main()
sysmock.assert_called() sysmock.assert_called()
self.assertEqual(mock_stdout.getvalue(), "The existing value for the given key is not a list. No action was taken on the file.\n") self.assertEqual(mock_stdout.getvalue(), "The existing value for the given key is not a list. No action was taken on the file.\n")
def test_add_key(self):
content = {}
soyaml.addKey(content, "foo", 123)
self.assertEqual(content, {"foo": 123})
try:
soyaml.addKey(content, "foo", "bar")
self.fail("expected key error since key already exists")
except KeyError:
pass
try:
soyaml.addKey(content, "foo.bar", 123)
self.fail("expected type error since key parent value is not a map")
except TypeError:
pass
content = {}
soyaml.addKey(content, "foo", "bar")
self.assertEqual(content, {"foo": "bar"})
soyaml.addKey(content, "badda.badda", "boom")
self.assertEqual(content, {"foo": "bar", "badda": {"badda": "boom"}})
def test_add_missing_arg(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stdout:
sys.argv = ["cmd", "help"]
soyaml.add(["file", "key"])
sysmock.assert_called()
self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, and/or value\n")
def test_add(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: abc }, key2: false, key3: [a,b,c]}")
file.close()
soyaml.add([filename, "key4", "d"])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2: abc\nkey2: false\nkey3:\n- a\n- b\n- c\nkey4: d\n"
self.assertEqual(actual, expected)
def test_add_nested(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: [a,b,c] }, key2: false, key3: [e,f,g]}")
file.close()
soyaml.add([filename, "key1.child3", "d"])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2:\n - a\n - b\n - c\n child3: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
self.assertEqual(actual, expected)
def test_add_nested_deep(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: { deep1: 45 } }, key2: false, key3: [e,f,g]}")
file.close()
soyaml.add([filename, "key1.child2.deep2", "d"])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2:\n deep1: 45\n deep2: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
self.assertEqual(actual, expected)
def test_replace_missing_arg(self):
with patch('sys.exit', new=MagicMock()) as sysmock:
with patch('sys.stderr', new=StringIO()) as mock_stdout:
sys.argv = ["cmd", "help"]
soyaml.replace(["file", "key"])
sysmock.assert_called()
self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, and/or value\n")
def test_replace(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: abc }, key2: false, key3: [a,b,c]}")
file.close()
soyaml.replace([filename, "key2", True])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2: abc\nkey2: true\nkey3:\n- a\n- b\n- c\n"
self.assertEqual(actual, expected)
def test_replace_nested(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: [a,b,c] }, key2: false, key3: [e,f,g]}")
file.close()
soyaml.replace([filename, "key1.child2", "d"])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
self.assertEqual(actual, expected)
def test_replace_nested_deep(self):
filename = "/tmp/so-yaml_test-add.yaml"
file = open(filename, "w")
file.write("{key1: { child1: 123, child2: { deep1: 45 } }, key2: false, key3: [e,f,g]}")
file.close()
soyaml.replace([filename, "key1.child2.deep1", 46])
file = open(filename, "r")
actual = file.read()
file.close()
expected = "key1:\n child1: 123\n child2:\n deep1: 46\nkey2: false\nkey3:\n- e\n- f\n- g\n"
self.assertEqual(actual, expected)
def test_convert(self):
self.assertEqual(soyaml.convertType("foo"), "foo")
self.assertEqual(soyaml.convertType("foo.bar"), "foo.bar")
self.assertEqual(soyaml.convertType("123"), 123)
self.assertEqual(soyaml.convertType("0"), 0)
self.assertEqual(soyaml.convertType("00"), "00")
self.assertEqual(soyaml.convertType("0123"), "0123")
self.assertEqual(soyaml.convertType("123.456"), 123.456)
self.assertEqual(soyaml.convertType("0123.456"), "0123.456")
self.assertEqual(soyaml.convertType("true"), True)
self.assertEqual(soyaml.convertType("TRUE"), True)
self.assertEqual(soyaml.convertType("false"), False)
self.assertEqual(soyaml.convertType("FALSE"), False)
self.assertEqual(soyaml.convertType(""), "")

View File

@@ -229,7 +229,7 @@ check_local_mods() {
# {% endraw %} # {% endraw %}
check_pillar_items() { check_pillar_items() {
local pillar_output=$(salt-call pillar.items --out=json) local pillar_output=$(salt-call pillar.items -lerror --out=json)
cond=$(jq '.local | has("_errors")' <<< "$pillar_output") cond=$(jq '.local | has("_errors")' <<< "$pillar_output")
if [[ "$cond" == "true" ]]; then if [[ "$cond" == "true" ]]; then
@@ -357,6 +357,7 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.4.30 ]] && up_to_2.4.40 [[ "$INSTALLEDVERSION" == 2.4.30 ]] && up_to_2.4.40
[[ "$INSTALLEDVERSION" == 2.4.40 ]] && up_to_2.4.50 [[ "$INSTALLEDVERSION" == 2.4.40 ]] && up_to_2.4.50
[[ "$INSTALLEDVERSION" == 2.4.50 ]] && up_to_2.4.60 [[ "$INSTALLEDVERSION" == 2.4.50 ]] && up_to_2.4.60
[[ "$INSTALLEDVERSION" == 2.4.60 ]] && up_to_2.4.70
true true
} }
@@ -373,6 +374,7 @@ postupgrade_changes() {
[[ "$POSTVERSION" == 2.4.30 ]] && post_to_2.4.40 [[ "$POSTVERSION" == 2.4.30 ]] && post_to_2.4.40
[[ "$POSTVERSION" == 2.4.40 ]] && post_to_2.4.50 [[ "$POSTVERSION" == 2.4.40 ]] && post_to_2.4.50
[[ "$POSTVERSION" == 2.4.50 ]] && post_to_2.4.60 [[ "$POSTVERSION" == 2.4.50 ]] && post_to_2.4.60
[[ "$POSTVERSION" == 2.4.60 ]] && post_to_2.4.70
true true
} }
@@ -435,6 +437,29 @@ post_to_2.4.60() {
POSTVERSION=2.4.60 POSTVERSION=2.4.60
} }
post_to_2.4.70() {
# Global pipeline changes to REDIS or KAFKA
echo "Removing global.pipeline pillar configuration"
sed -i '/pipeline:/d' /opt/so/saltstack/local/pillar/global/soc_global.sls
# Kafka configuration
mkdir -p /opt/so/saltstack/local/pillar/kafka
touch /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
touch /opt/so/saltstack/local/pillar/kafka/adv_kafka.sls
echo 'kafka: ' > /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
if ! grep -q "^ cluster_id:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
kafka_cluster_id=$(get_random_value 22)
echo ' cluster_id: '$kafka_cluster_id >> $local_salt_dir/pillar/kafka/soc_kafka.sls
if ! grep -q "^ certpass:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
kafkapass=$(get_random_value)
echo ' certpass: '$kafkapass >> $local_salt_dir/pillar/kafka/soc_kafka.sls
fi
POSTVERSION=2.4.70
}
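Assuming the greps above find no existing entries, the pillar written by this migration should look roughly like the following (random values shown as placeholders):

cat /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
# kafka:
#   cluster_id: <22-character random value>
#   certpass: <random value>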
repo_sync() { repo_sync() {
echo "Sync the local repo." echo "Sync the local repo."
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync." su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -574,6 +599,116 @@ up_to_2.4.60() {
INSTALLEDVERSION=2.4.60 INSTALLEDVERSION=2.4.60
} }
up_to_2.4.70() {
playbook_migration
toggle_telemetry
INSTALLEDVERSION=2.4.70
}
toggle_telemetry() {
if [[ -z $UNATTENDED && $is_airgap -ne 0 ]]; then
cat << ASSIST_EOF
--------------- SOC Telemetry ---------------
The Security Onion development team could use your help! Enabling SOC
Telemetry will help the team understand which UI features are being
used and enables informed prioritization of future development.
Adjust this setting at any time via the SOC Configuration screen.
Documentation: https://docs.securityonion.net/en/2.4/telemetry.html
ASSIST_EOF
echo -n "Continue the upgrade with SOC Telemetry enabled [Y/n]? "
read -r input
input=$(echo "${input,,}" | xargs echo -n)
echo ""
if [[ ${#input} -eq 0 || "$input" == "yes" || "$input" == "y" || "$input" == "yy" ]]; then
echo "Thank you for helping improve Security Onion!"
else
if so-yaml.py replace /opt/so/saltstack/local/pillar/soc/soc_soc.sls soc.telemetryEnabled false; then
echo "Disabled SOC Telemetry."
else
fail "Failed to disable SOC Telemetry; aborting."
fi
fi
echo ""
fi
}
playbook_migration() {
# Start SOC Detections migration
mkdir -p /nsm/backup/detections-migration/{suricata,sigma/rules,elastalert}
# Remove cronjobs
crontab -l | grep -v 'so-playbook-sync_cron' | crontab -
crontab -l | grep -v 'so-playbook-ruleupdate_cron' | crontab -
if grep -A 1 'playbook:' /opt/so/saltstack/local/pillar/minions/* | grep -q 'enabled: True'; then
# Check for active Elastalert rules
active_rules_count=$(find /opt/so/rules/elastalert/playbook/ -type f -name "*.yaml" | wc -l)
if [[ "$active_rules_count" -gt 0 ]]; then
# Prompt the user to AGREE if active Elastalert rules found
echo
echo "$active_rules_count Active Elastalert/Playbook rules found."
echo "In preparation for the new Detections module, they will be backed up and then disabled."
echo
echo "If you would like to proceed, then type AGREE and press ENTER."
echo
# Read user input
read INPUT
if [ "${INPUT^^}" != 'AGREE' ]; then fail "SOUP canceled."; fi
echo "Backing up the Elastalert rules..."
rsync -av --stats /opt/so/rules/elastalert/playbook/*.yaml /nsm/backup/detections-migration/elastalert/
# Verify that rsync completed successfully
if [[ $? -eq 0 ]]; then
# Delete the Elastalert rules
rm -f /opt/so/rules/elastalert/playbook/*.yaml
echo "Active Elastalert rules have been backed up."
else
fail "Error: rsync failed to copy the files. Active Elastalert rules have not been backed up."
fi
fi
echo
echo "Exporting Sigma rules from Playbook..."
MYSQLPW=$(awk '/mysql:/ {print $2}' /opt/so/saltstack/local/pillar/secrets.sls)
docker exec so-mysql sh -c "exec mysql -uroot -p${MYSQLPW} -D playbook -sN -e \"SELECT id, value FROM custom_values WHERE value LIKE '%View Sigma%'\"" | while read -r id value; do
echo -e "$value" > "/nsm/backup/detections-migration/sigma/rules/$id.yaml"
done || fail "Failed to export Sigma rules..."
echo
echo "Exporting Sigma Filters from Playbook..."
docker exec so-mysql sh -c "exec mysql -uroot -p${MYSQLPW} -D playbook -sN -e \"SELECT issues.subject as title, custom_values.value as filter FROM issues JOIN custom_values ON issues.id = custom_values.customized_id WHERE custom_values.value LIKE '%sofilter%'\"" > /nsm/backup/detections-migration/sigma/custom-filters.txt || fail "Failed to export Custom Sigma Filters."
echo
echo "Backing up Playbook database..."
docker exec so-mysql sh -c "mysqldump -uroot -p${MYSQLPW} --databases playbook > /tmp/playbook-dump" || fail "Failed to dump Playbook database."
docker cp so-mysql:/tmp/playbook-dump /nsm/backup/detections-migration/sigma/playbook-dump.sql || fail "Failed to backup Playbook database."
fi
echo
echo "Stopping Playbook services & cleaning up..."
for container in so-playbook so-mysql so-soctopus; do
if [ -n "$(docker ps -q -f name=^${container}$)" ]; then
docker stop $container
fi
done
sed -i '/so-playbook\|so-soctopus\|so-mysql/d' /opt/so/conf/so-status/so-status.conf
rm -f /usr/sbin/so-playbook-* /usr/sbin/so-soctopus-* /usr/sbin/so-mysql-*
echo
echo "Playbook Migration is complete...."
}
determine_elastic_agent_upgrade() { determine_elastic_agent_upgrade() {
if [[ $is_airgap -eq 0 ]]; then if [[ $is_airgap -eq 0 ]]; then
update_elastic_agent_airgap update_elastic_agent_airgap
@@ -756,7 +891,7 @@ verify_latest_update_script() {
else else
echo "You are not running the latest soup version. Updating soup and its components. This might take multiple runs to complete." echo "You are not running the latest soup version. Updating soup and its components. This might take multiple runs to complete."
salt-call state.apply common.soup_scripts queue=True -linfo --file-root=$UPDATE_DIR/salt --local salt-call state.apply common.soup_scripts queue=True -lerror --file-root=$UPDATE_DIR/salt --local --out-file=/dev/null
# Verify that soup scripts updated as expected # Verify that soup scripts updated as expected
get_soup_script_hashes get_soup_script_hashes
@@ -837,7 +972,6 @@ main() {
echo "### Preparing soup at $(date) ###" echo "### Preparing soup at $(date) ###"
echo "" echo ""
set_os set_os
check_salt_master_status 1 || fail "Could not talk to salt master: Please run 'systemctl status salt-master' to ensure the salt-master service is running and check the log at /opt/so/log/salt/master." check_salt_master_status 1 || fail "Could not talk to salt master: Please run 'systemctl status salt-master' to ensure the salt-master service is running and check the log at /opt/so/log/salt/master."

View File

@@ -4,9 +4,10 @@
# Elastic License 2.0. # Elastic License 2.0.
{% from 'redis/map.jinja' import REDISMERGED %} {% from 'redis/map.jinja' import REDISMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
include: include:
{% if REDISMERGED.enabled %} {% if GLOBALS.pipeline == "REDIS" and REDISMERGED.enabled %}
- redis.enabled - redis.enabled
{% else %} {% else %}
- redis.disabled - redis.disabled

View File

@@ -0,0 +1,125 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# -*- coding: utf-8 -*-
import logging
import re
log = logging.getLogger(__name__)
# will need this in future versions of this engine
#import salt.client
#local = salt.client.LocalClient()
def start(fpa, interval=10):
log.info("pillarWatch engine: ##### checking watched pillars for changes #####")
# try to open the file that stores the previous runs data
# if the file doesn't exist, create a blank one
try:
# maybe change this location
dataFile = open("/opt/so/state/pillarWatch.txt", "r+")
except FileNotFoundError:
log.warn("pillarWatch engine: No previous pillarWatch data saved")
dataFile = open("/opt/so/state/pillarWatch.txt", "w+")
df = dataFile.read()
for i in fpa:
log.trace("pillarWatch engine: files: %s" % i['files'])
log.trace("pillarWatch engine: pillar: %s" % i['pillar'])
log.trace("pillarWatch engine: actions: %s" % i['actions'])
pillarFiles = i['files']
pillar = i['pillar']
actions = i['actions']
# these are the keys that we are going to look for as we traverse the pillarFiles
patterns = pillar.split(".")
# check the pillar files in reversed order to replicate the same hierarchy as the pillar top file
for pillarFile in reversed(pillarFiles):
currentPillarValue = ''
previousPillarValue = ''
# this var is used to track how many times the pattern has been found in the pillar file so that we can access the proper index later
patternFound = 0
with open(pillarFile, "r") as file:
log.debug("pillarWatch engine: checking file: %s" % pillarFile)
for line in file:
log.trace("pillarWatch engine: inspecting line: %s in file: %s" % (line, file))
log.trace("pillarWatch engine: looking for: %s" % patterns[patternFound])
# since we are looping line by line through a pillar file, the next line will check if each line matches the progression of keys through the pillar
# ex. if we are looking for the value of global.pipeline, then this will loop through the pillar file until 'global' is found, then it will look
# for pipeline. once pipeline is found, it will record the value
if re.search('^' + patterns[patternFound] + ':', line.strip()):
# strip the newline because it makes the logs u-g-l-y
log.debug("pillarWatch engine: found: %s" % line.strip('\n'))
patternFound += 1
# we have found the final key in the pillar that we are looking for, get the previous value then the current value
if patternFound == len(patterns):
# at this point, df is equal to the contents of the pillarWatch file that is used to track the previous values of the pillars
previousPillarValue = 'PREVIOUSPILLARVALUENOTSAVEDINDATAFILE'
# check the contents of the dataFile that stores the previousPillarValue(s).
# find if the pillar we are checking for changes has previously been saved. if so, grab its prior value
for l in df.splitlines():
if pillar in l:
previousPillarValue = str(l.split(":")[1].strip())
currentPillarValue = str(line.split(":")[1]).strip()
log.debug("pillarWatch engine: %s currentPillarValue: %s" % (pillar, currentPillarValue))
log.debug("pillarWatch engine: %s previousPillarValue: %s" % (pillar, previousPillarValue))
# if the pillar we are checking for changes has been defined in the dataFile,
# replace the previousPillarValue with the currentPillarValue. if it isn't in there, append it.
if pillar in df:
df = re.sub(r"\b{}\b.*".format(pillar), pillar + ': ' + currentPillarValue, df)
else:
df += pillar + ': ' + currentPillarValue + '\n'
log.trace("pillarWatch engine: df: %s" % df)
# we have found the pillar so we don't need to loop through the file anymore
break
# if the key and value were found in the first file, then we don't want to look in
# any more files since we use the first file as the source of truth.
if patternFound == len(patterns):
break
# if the pillar value changed, then we find what actions we should take
log.debug("pillarWatch engine: checking if currentPillarValue != previousPillarValue")
if currentPillarValue != previousPillarValue:
log.info("pillarWatch engine: currentPillarValue != previousPillarValue: %s != %s" % (currentPillarValue, previousPillarValue))
# check if the previous pillar value is defined in the pillar from -> to actions
if previousPillarValue in actions['from']:
# check if the new / current pillar value is defined under to
if currentPillarValue in actions['from'][previousPillarValue]['to']:
ACTIONS=actions['from'][previousPillarValue]['to'][currentPillarValue]
# if the new / current pillar value isn't defined under to, is there a wildcard defined
elif '*' in actions['from'][previousPillarValue]['to']:
ACTIONS=actions['from'][previousPillarValue]['to']['*']
# no action was defined for us to take when we see the pillar change
else:
ACTIONS=['NO DEFINED ACTION FOR US TO TAKE']
# if the previous pillar wasn't defined in the actions from, is there a wildcard defined for the pillar that we are changing from
elif '*' in actions['from']:
# is the new pillar value defined for the wildcard match
if currentPillarValue in actions['from']['*']['to']:
ACTIONS=actions['from']['*']['to'][currentPillarValue]
# if the new pillar doesn't have an action, was a wildcard defined
elif '*' in actions['from']['*']['to']:
# need more logic here for to and from
ACTIONS=actions['from']['*']['to']['*']
else:
ACTIONS=['NO DEFINED ACTION FOR US TO TAKE']
# a match for the previous pillar wasn't defined in the action in either the form of a direct match or wildcard
else:
ACTIONS=['NO DEFINED ACTION FOR US TO TAKE']
log.debug("pillarWatch engine: all defined actions: %s" % actions['from'])
log.debug("pillarWatch engine: ACTIONS: %s chosen based on previousPillarValue: %s switching to currentPillarValue: %s" % (ACTIONS, previousPillarValue, currentPillarValue))
for action in ACTIONS:
log.info("pillarWatch engine: action: %s" % action)
if action != 'NO DEFINED ACTION FOR US TO TAKE':
for saltModule, args in action.items():
log.debug("pillarWatch engine: saltModule: %s" % saltModule)
log.debug("pillarWatch engine: args: %s" % args)
#__salt__[saltModule](**args)
actionReturn = __salt__[saltModule](**args)
log.info("pillarWatch engine: actionReturn: %s" % actionReturn)
dataFile.seek(0)
dataFile.write(df)
dataFile.truncate()
dataFile.close()
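The engine persists the last observed value of each watched pillar in /opt/so/state/pillarWatch.txt, one "pillar: value" line per entry, rewriting that line whenever the value changes. A hypothetical state file after the pipeline has been switched to Kafka (illustrative only):

cat /opt/so/state/pillarWatch.txt
# global.pipeline: KAFKA
# idstools.config.ruleset: <current ruleset>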

View File

@@ -0,0 +1,120 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# -*- coding: utf-8 -*-
import logging
import re
log = logging.getLogger(__name__)
# will need this in future versions of this engine
import salt.client
local = salt.client.LocalClient()
def start(fpa, interval=10):
def getValue(content, key):
pieces = key.split(".", 1)
if len(pieces) > 1:
return getValue(content[pieces[0]], pieces[1])
else:
#log.info("ck: %s" % content[key])
return content[key]
#content.pop(key, None)
log.info("valWatch engine: ##### checking watched values for changes #####")
# try to open the file that stores the previous runs data
# if the file doesn't exist, create a blank one
try:
# maybe change this location
dataFile = open("/opt/so/state/valWatch.txt", "r+")
except FileNotFoundError:
log.warn("valWatch engine: No previous valWatch data saved")
dataFile = open("/opt/so/state/valWatch.txt", "w+")
df = dataFile.read()
for i in fpa:
log.info("valWatch engine: i: %s" % i)
log.trace("valWatch engine: map: %s" % i['map'])
log.trace("valWatch engine: value: %s" % i['value'])
log.trace("valWatch engine: targets: %s" % i['targets'])
log.trace("valWatch engine: actions: %s" % i['actions'])
mapFile = i['map']
value = i['value']
targets = i['targets']
# target type
ttype = i['ttype']
actions = i['actions']
# these are the keys that we are going to look for as we traverse the pillarFiles
patterns = value.split(".")
mainDict = patterns.pop(0)
# patterns = value.split(".")
for target in targets:
# tell targets to render mapfile and return value split
mapRender = local.cmd(target, fun='jinja.load_map', arg=[mapFile, mainDict], tgt_type=ttype)
currentValue = ''
previousValue = ''
# this var is used to track how many times the pattern has been found in the pillar file so that we can access the proper index later
patternFound = 0
#with open(pillarFile, "r") as file:
# log.debug("pillarWatch engine: checking file: %s" % pillarFile)
mapRenderKeys = list(mapRender.keys())
if len(mapRenderKeys) > 0:
log.info(mapRenderKeys)
log.info("valWatch engine: mapRender: %s" % mapRender)
minion = mapRenderKeys[0]
currentValue = getValue(mapRender[minion],value.split('.', 1)[1])
log.info("valWatch engine: currentValue: %s: %s: %s" % (minion, value, currentValue))
for l in df.splitlines():
if value in l:
previousValue = str(l.split(":")[1].strip())
log.info("valWatch engine: previousValue: %s: %s: %s" % (minion, value, previousValue))
'''
for key in mapRender[minion]:
log.info("pillarWatch engine: inspecting key: %s in mainDict: %s" % (key, mainDict))
log.info("pillarWatch engine: looking for: %s" % patterns[patternFound])
# since we are looping line by line through a pillar file, the next line will check if each line matches the progression of keys through the pillar
# ex. if we are looking for the value of global.pipeline, then this will loop through the pillar file until 'global' is found, then it will look
# for pipeline. once pipeline is found, it will record the value
#if re.search(patterns[patternFound], key):
if patterns[patternFound] == key:
# strip the newline because it makes the logs u-g-l-y
log.info("pillarWatch engine: found: %s" % key)
patternFound += 1
# we have found the final key in the pillar that we are looking for, get the previous value then the current value
if patternFound == len(patterns):
# at this point, df is equal to the contents of the pillarWatch file that is used to tract the previous values of the pillars
previousPillarValue = 'PREVIOUSPILLARVALUENOTSAVEDINDATAFILE'
# check the contents of the dataFile that stores the previousPillarValue(s).
# find if the pillar we are checking for changes has previously been saved. if so, grab it's prior value
for l in df.splitlines():
if value in l:
previousPillarValue = str(l.split(":")[1].strip())
currentPillarValue = mapRender[minion][key]
log.info("pillarWatch engine: %s currentPillarValue: %s" % (value, currentPillarValue))
log.info("pillarWatch engine: %s previousPillarValue: %s" % (value, previousPillarValue))
# if the pillar we are checking for changes has been defined in the dataFile,
# replace the previousPillarValue with the currentPillarValue. if it isn't in there, append it.
if value in df:
df = re.sub(r"\b{}\b.*".format(pillar), pillar + ': ' + currentPillarValue, df)
else:
df += value + ': ' + currentPillarValue + '\n'
log.info("pillarWatch engine: df: %s" % df)
# we have found the pillar so we dont need to loop through the file anymore
break
# if key and value was found in the first file, then we don't want to look in
# any more files since we use the first file as the source of truth.
if patternFound == len(patterns):
break
'''
dataFile.seek(0)
dataFile.write(df)
dataFile.truncate()
dataFile.close()

View File

@@ -0,0 +1,141 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# -*- coding: utf-8 -*-
import logging
import re
from pathlib import Path
import os
import glob
import json
from time import sleep
import sys
log = logging.getLogger(__name__)
import salt.client
local = salt.client.LocalClient()
def start(watched, interval=10):
# this 20 second sleep allows enough time for the minion to reconnect during testing of the script when the salt-master is restarted
sleep(20)
log.info("valueWatch engine: started")
# this dict will be used to store the files that we are watching and their modification times for the current iteration through a loop
fileModTimesCurrent = {}
# same as fileModTimesCurrent, but stores the previous values through the loop.
# the combination of these two variables is used to determine if a file has changed.
fileModTimesPrevious = {}
#
currentValues = {}
def getValue(content, key):
pieces = key.split(".", 1)
if len(pieces) > 1:
return getValue(content[pieces[0]], pieces[1])
else:
#log.info("ck: %s" % content[key])
return content[key]
#content.pop(key, None)
def updateModTimesCurrent(files):
# this dict will be used to store the files that we are watching and their modification times for the current iteration through a loop
fileModTimesCurrent.clear()
for f in files:
#log.warn(f)
fileName = Path(f).name
filePath = Path(f).parent
if '*' in fileName:
#log.info(fileName)
#log.info(filePath)
slsFiles = glob.glob(f)
for slsFile in slsFiles:
#log.info(slsFile)
fileModTimesCurrent.update({slsFile: os.path.getmtime(slsFile)})
else:
fileModTimesCurrent.update({f: os.path.getmtime(f)})
def compareFileModTimes():
ret = []
for f in fileModTimesCurrent:
log.info(f)
if f in fileModTimesPrevious:
log.info("valueWatch engine: fileModTimesCurrent: %s" % fileModTimesCurrent[f])
log.info("valueWatch engine: fileModTimesPrevious: %s" % fileModTimesPrevious[f])
if fileModTimesCurrent[f] != fileModTimesPrevious[f]:
log.error("valueWatch engine: fileModTimesCurrent[f] != fileModTimesPrevious[f]")
log.error("valueWatch engine: " + str(fileModTimesCurrent[f]) + " != " + str(fileModTimesPrevious[f]))
ret.append(f)
return ret
# this will set the current value of 'value' from engines.conf and save it to the currentValues dict
def updateCurrentValues():
for target in targets:
log.info("valueWatch engine: refreshing pillars on %s" % target)
refreshPillar = local.cmd(target, fun='saltutil.refresh_pillar', tgt_type=ttype)
log.info("valueWatch engine: pillar refresh results: %s" % refreshPillar)
# check if the result was True for the pillar refresh
# will need to add a recheck in case the minion was just temporarily unavailable
try:
if next(iter(refreshPillar.values())):
sleep(5)
# render the map file for the variable passed in from value.
mapRender = local.cmd(target, fun='jinja.load_map', arg=[mapFile, mainDict], tgt_type=ttype)
log.info("mR: %s" % mapRender)
currentValue = ''
previousValue = ''
mapRenderKeys = list(mapRender.keys())
if len(mapRenderKeys) > 0:
log.info(mapRenderKeys)
log.info("valueWatch engine: mapRender: %s" % mapRender)
minion = mapRenderKeys[0]
# if not isinstance(mapRender[minion], bool):
currentValue = getValue(mapRender[minion],value.split('.', 1)[1])
log.info("valueWatch engine: currentValue: %s: %s: %s" % (minion, value, currentValue))
currentValues.update({value: {minion: currentValue}})
# we have rendered the value so we don't need to have any more target render it
break
except StopIteration:
log.info("valueWatch engine: target %s did not respond or does not exist" % target)
log.info("valueWatch engine: currentValues: %s" % currentValues)
# run the main loop
while True:
log.info("valueWatch engine: checking watched files for changes")
for v in watched:
value = v['value']
files = v['files']
mapFile = v['map']
targets = v['targets']
ttype = v['ttype']
actions = v['actions']
patterns = value.split(".")
mainDict = patterns.pop(0)
log.info("valueWatch engine: value: %s" % value)
# the call to this function will update fileModTimesCurrent
updateModTimesCurrent(files)
#log.trace("valueWatch engine: fileModTimesCurrent: %s" % fileModTimesCurrent)
#log.trace("valueWatch engine: fileModTimesPrevious: %s" % fileModTimesPrevious)
# compare with the previous checks file modification times
modFilesDiff = compareFileModTimes()
# if there were changes in the pillar files, then we need to have the minion render the map file to determine if the value changed
if modFilesDiff:
log.info("valueWatch engine: change in files detected, updating currentValues: %s" % modFilesDiff)
updateCurrentValues()
elif value not in currentValues:
log.info("valueWatch engine: %s not in currentValues, updating currentValues." % value)
updateCurrentValues()
else:
log.info("valueWatch engine: no files changed, no update for currentValues")
# save this iteration's values to previous so we can compare next run
fileModTimesPrevious.update(fileModTimesCurrent)
sleep(interval)

View File

@@ -4,3 +4,127 @@ engines_dirs:
engines: engines:
- checkmine: - checkmine:
interval: 60 interval: 60
- valueWatch:
watched:
- value: GLOBALMERGED.pipeline
files:
- /opt/so/saltstack/local/pillar/global/soc_global.sls
- /opt/so/saltstack/local/pillar/global/adv_global.sls
map: global/map.jinja
targets:
- G@role:so-notanodetype
- G@role:so-manager
- G@role:so-searchnode
ttype: compound
actions:
from:
'*':
to:
KAFKA:
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled True
KAFKA:
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False
# - value: FIREWALL_MERGED
# files:
# - /opt/so/saltstack/local/pillar/firewall/soc_firewall.sls
# - /opt/so/saltstack/local/pillar/firewall/adv_firewall.sls
# - /opt/so/saltstack/local/pillar/minions/*.sls
# map: firewall/map.jinja
# targets:
# - so-*
# ttype: compound
# actions:
# from:
# '*':
# to:
# '*':
# - cmd.run:
# cmd: date
interval: 10
- pillarWatch:
fpa:
# these files will be checked in reversed order to replicate the same hierarchy as the pillar top file
- files:
- /opt/so/saltstack/local/pillar/global/soc_global.sls
- /opt/so/saltstack/local/pillar/global/adv_global.sls
pillar: global.pipeline
actions:
from:
'*':
to:
KAFKA:
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled True
# - cmd.run:
# cmd: salt-call saltutil.kill_all_jobs
# - cmd.run:
# cmd: salt-call state.highstate &
KAFKA:
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False
# - cmd.run:
# cmd: salt-call saltutil.kill_all_jobs
# - cmd.run:
# cmd: salt-call state.highstate &
- files:
- /opt/so/saltstack/local/pillar/idstools/soc_idstools.sls
- /opt/so/saltstack/local/pillar/idstools/adv_idstools.sls
pillar: idstools.config.ruleset
actions:
from:
'*':
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-rule-update
interval: 10
- valWatch:
fpa:
- value: GLOBALMERGED.pipeline
map: global/map.jinja
targets:
- so-manager
- so-managersearch
- so-standalone
actions:
from:
'*':
to:
KAFKA:
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled True
# - cmd.run:
# cmd: salt-call saltutil.kill_all_jobs
# - cmd.run:
# cmd: salt-call state.highstate &
KAFKA:
to:
'*':
- cmd.run:
cmd: /usr/sbin/so-yaml.py replace /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.enabled False
# - cmd.run:
# cmd: salt-call saltutil.kill_all_jobs
# - cmd.run:
# cmd: salt-call state.highstate &
# - files:
# - /opt/so/saltstack/local/pillar/idstools/soc_idstools.sls
# - /opt/so/saltstack/local/pillar/idstools/adv_idstools.sls
# pillar: idstools.config.ruleset
# actions:
# from:
# '*':
# to:
# '*':
# - cmd.run:
# cmd: /usr/sbin/so-rule-update
interval: 10

View File

@@ -27,6 +27,11 @@ checkmine_engine:
- source: salt://salt/engines/master/checkmine.py - source: salt://salt/engines/master/checkmine.py
- makedirs: True - makedirs: True
pillarWatch_engine:
file.managed:
- name: /etc/salt/engines/pillarWatch.py
- source: salt://salt/engines/master/pillarWatch.py
engines_config: engines_config:
file.managed: file.managed:
- name: /etc/salt/master.d/engines.conf - name: /etc/salt/master.d/engines.conf

View File

@@ -13,6 +13,9 @@ include:
- systemd.reload - systemd.reload
- repo.client - repo.client
- salt.mine_functions - salt.mine_functions
{% if GLOBALS.role in GLOBALS.manager_roles %}
- ca
{% endif %}
{% if INSTALLEDSALTVERSION|string != SALTVERSION|string %} {% if INSTALLEDSALTVERSION|string != SALTVERSION|string %}
@@ -98,5 +101,8 @@ salt_minion_service:
- file: mine_functions - file: mine_functions
{% if INSTALLEDSALTVERSION|string == SALTVERSION|string %} {% if INSTALLEDSALTVERSION|string == SALTVERSION|string %}
- file: set_log_levels - file: set_log_levels
{% endif %}
{% if GLOBALS.role in GLOBALS.manager_roles %}
- file: /etc/salt/minion.d/signing_policies.conf
{% endif %} {% endif %}
- order: last - order: last

View File

@@ -9,7 +9,14 @@
include: include:
- manager.sync_es_users - manager.sync_es_users
socdirtest: sigmarepodir:
file.directory:
- name: /opt/so/conf/sigma/repos
- user: 939
- group: 939
- makedirs: True
socdirelastaertrules:
file.directory: file.directory:
- name: /opt/so/rules/elastalert/rules - name: /opt/so/rules/elastalert/rules
- user: 939 - user: 939
@@ -45,6 +52,15 @@ socsaltdir:
- mode: 770 - mode: 770
- makedirs: True - makedirs: True
socanalytics:
file.managed:
- name: /opt/so/conf/soc/analytics.js
- source: salt://soc/files/soc/analytics.js
- user: 939
- group: 939
- mode: 600
- show_changes: False
socconfig: socconfig:
file.managed: file.managed:
- name: /opt/so/conf/soc/soc.json - name: /opt/so/conf/soc/soc.json

View File

@@ -1,5 +1,6 @@
soc: soc:
enabled: False enabled: False
telemetryEnabled: true
config: config:
logFilename: /opt/sensoroni/logs/sensoroni-server.log logFilename: /opt/sensoroni/logs/sensoroni-server.log
logLevel: info logLevel: info
@@ -70,13 +71,13 @@ soc:
icon: fa-person-running icon: fa-person-running
target: '' target: ''
links: links:
- '/#/hunt?q=(process.entity_id:"{:process.entity_id}") | groupby event.dataset | groupby -sankey event.dataset event.action | groupby event.action | groupby process.name | groupby host.name user.name | groupby source.ip source.port destination.ip destination.port | groupby dns.question.name | groupby dns.answers.data | groupby file.path | groupby registry.path | groupby dll.path' - '/#/hunt?q=(process.entity_id:"{:process.entity_id}") | groupby event.dataset | groupby -sankey event.dataset event.action | groupby event.action | groupby process.name | groupby process.command_line | groupby host.name user.name | groupby source.ip source.port destination.ip destination.port | groupby dns.question.name | groupby dns.answers.data | groupby file.path | groupby registry.path | groupby dll.path'
- name: actionProcessAncestors - name: actionProcessAncestors
description: actionProcessAncestorsHelp description: actionProcessAncestorsHelp
icon: fa-people-roof icon: fa-people-roof
target: '' target: ''
links: links:
- '/#/hunt?q=(process.entity_id:"{:process.entity_id}" OR process.entity_id:"{:process.Ext.ancestry|processAncestors}") | groupby event.dataset | groupby -sankey event.dataset event.action | groupby event.action | groupby process.parent.name | groupby -sankey process.parent.name process.name | groupby process.name | groupby host.name user.name | groupby source.ip source.port destination.ip destination.port | groupby dns.question.name | groupby dns.answers.data | groupby file.path | groupby registry.path | groupby dll.path' - '/#/hunt?q=(process.entity_id:"{:process.entity_id}" OR process.entity_id:"{:process.Ext.ancestry|processAncestors}") | groupby event.dataset | groupby -sankey event.dataset event.action | groupby event.action | groupby process.parent.name | groupby -sankey process.parent.name process.name | groupby process.name | groupby process.command_line | groupby host.name user.name | groupby source.ip source.port destination.ip destination.port | groupby dns.question.name | groupby dns.answers.data | groupby file.path | groupby registry.path | groupby dll.path'
eventFields: eventFields:
default: default:
- soc_timestamp - soc_timestamp
@@ -87,12 +88,13 @@ soc:
- log.id.uid - log.id.uid
- network.community_id - network.community_id
- event.dataset - event.dataset
':kratos:audit': ':kratos:':
- soc_timestamp - soc_timestamp
- http_request.headers.x-real-ip - http_request.headers.x-real-ip
- identity_id - identity_id
- http_request.headers.user-agent - http_request.headers.user-agent
- event.dataset - event.dataset
- msg
'::conn': '::conn':
- soc_timestamp - soc_timestamp
- source.ip - source.ip
@@ -409,7 +411,7 @@ soc:
- source.port - source.port
- destination.ip - destination.ip
- destination.port - destination.port
- smtp.from - smtp.mail_from
- smtp.recipient_to - smtp.recipient_to
- smtp.subject - smtp.subject
- smtp.useragent - smtp.useragent
@@ -457,7 +459,7 @@ soc:
- ssh.server - ssh.server
- log.id.uid - log.id.uid
- event.dataset - event.dataset
'::ssl': ':suricata:ssl':
- soc_timestamp - soc_timestamp
- source.ip - source.ip
- source.port - source.port
@@ -465,10 +467,30 @@ soc:
- destination.port - destination.port
- ssl.server_name - ssl.server_name
- ssl.certificate.subject - ssl.certificate.subject
- ssl.version
- log.id.uid
- event.dataset
':zeek:ssl':
- soc_timestamp
- source.ip
- source.port
- destination.ip
- destination.port
- ssl.server_name
- ssl.validation_status - ssl.validation_status
- ssl.version - ssl.version
- log.id.uid - log.id.uid
- event.dataset - event.dataset
'::ssl':
- soc_timestamp
- source.ip
- source.port
- destination.ip
- destination.port
- ssl.server_name
- ssl.version
- log.id.uid
- event.dataset
':zeek:syslog': ':zeek:syslog':
- soc_timestamp - soc_timestamp
- source.ip - source.ip
@@ -541,7 +563,7 @@ soc:
- process.executable - process.executable
- user.name - user.name
- event.dataset - event.dataset
':strelka:file': ':strelka:':
- soc_timestamp - soc_timestamp
- file.name - file.name
- file.size - file.size
@@ -550,6 +572,15 @@ soc:
- file.mime_type - file.mime_type
- log.id.fuid - log.id.fuid
- event.dataset - event.dataset
':strelka:file':
- soc_timestamp
- file.name
- file.size
- hash.md5
- file.source
- file.mime_type
- log.id.fuid
- event.dataset
':suricata:': ':suricata:':
- soc_timestamp - soc_timestamp
- source.ip - source.ip
@@ -1166,6 +1197,42 @@ soc:
- system.auth.sudo.command - system.auth.sudo.command
- event.dataset - event.dataset
- message - message
':opencanary:':
- soc_timestamp
- source.ip
- source.port
- logdata.HOSTNAME
- destination.port
- logdata.PATH
- logdata.USERNAME
- logdata.USERAGENT
- event.dataset
':elastic_agent:':
- soc_timestamp
- event.dataset
- message
':playbook:':
- soc_timestamp
- rule.name
- event.severity_label
- event_data.event.dataset
- event_data.source.ip
- event_data.source.port
- event_data.destination.host
- event_data.destination.port
- event_data.process.executable
- event_data.process.pid
':sigma:':
- soc_timestamp
- rule.name
- event.severity_label
- event_data.event.dataset
- event_data.source.ip
- event_data.source.port
- event_data.destination.host
- event_data.destination.port
- event_data.process.executable
- event_data.process.pid
server: server:
bindAddress: 0.0.0.0:9822 bindAddress: 0.0.0.0:9822
baseUrl: / baseUrl: /
@@ -1182,13 +1249,19 @@ soc:
elastalertengine: elastalertengine:
allowRegex: '' allowRegex: ''
autoUpdateEnabled: true autoUpdateEnabled: true
autoEnabledSigmaRules:
- core+critical
- securityonion-resources+critical
- securityonion-resources+high
communityRulesImportFrequencySeconds: 86400 communityRulesImportFrequencySeconds: 86400
denyRegex: '' denyRegex: ''
elastAlertRulesFolder: /opt/sensoroni/elastalert elastAlertRulesFolder: /opt/sensoroni/elastalert
reposFolder: /opt/sensoroni/sigma/repos
rulesFingerprintFile: /opt/sensoroni/fingerprints/sigma.fingerprint rulesFingerprintFile: /opt/sensoroni/fingerprints/sigma.fingerprint
stateFilePath: /opt/so/conf/soc/fingerprints/elastalertengine.state
rulesRepos: rulesRepos:
- repo: https://github.com/Security-Onion-Solutions/securityonion-resources - repo: https://github.com/Security-Onion-Solutions/securityonion-resources
license: DRL license: Elastic-2.0
folder: sigma/stable folder: sigma/stable
sigmaRulePackages: sigmaRulePackages:
- core - core
@@ -1246,6 +1319,7 @@ soc:
- repo: https://github.com/Security-Onion-Solutions/securityonion-yara - repo: https://github.com/Security-Onion-Solutions/securityonion-yara
license: DRL license: DRL
yaraRulesFolder: /opt/sensoroni/yara/rules yaraRulesFolder: /opt/sensoroni/yara/rules
stateFilePath: /opt/so/conf/soc/fingerprints/strelkaengine.state
suricataengine: suricataengine:
allowRegex: '' allowRegex: ''
autoUpdateEnabled: true autoUpdateEnabled: true
@@ -1253,6 +1327,7 @@ soc:
communityRulesFile: /nsm/rules/suricata/emerging-all.rules communityRulesFile: /nsm/rules/suricata/emerging-all.rules
denyRegex: '' denyRegex: ''
rulesFingerprintFile: /opt/sensoroni/fingerprints/emerging-all.fingerprint rulesFingerprintFile: /opt/sensoroni/fingerprints/emerging-all.fingerprint
stateFilePath: /opt/so/conf/soc/fingerprints/suricataengine.state
client: client:
enableReverseLookup: false enableReverseLookup: false
docsUrl: /docs/ docsUrl: /docs/
@@ -1596,202 +1671,223 @@ soc:
queries: queries:
- name: Overview - name: Overview
description: Overview of all events description: Overview of all events
query: '* | groupby -sankey event.dataset event.category* | groupby -pie event.category | groupby -bar event.module* | groupby event.dataset | groupby event.module* | groupby event.category | groupby observer.name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: '* | groupby event.category | groupby -sankey event.category event.module | groupby event.module | groupby -sankey event.module event.dataset | groupby event.dataset | groupby observer.name | groupby host.name | groupby source.ip | groupby destination.ip | groupby destination.port'
- name: SOC Auth - name: SOC Logins
description: SOC (Security Onion Console) authentication logs description: SOC (Security Onion Console) logins
query: 'event.dataset:kratos.audit AND msg:*authenticated* | groupby -sankey http_request.headers.x-real-ip identity_id | groupby http_request.headers.x-real-ip | groupby identity_id | groupby http_request.headers.user-agent' query: 'event.dataset:kratos.audit AND msg:*authenticated* | groupby http_request.headers.x-real-ip | groupby -sankey http_request.headers.x-real-ip identity_id | groupby identity_id | groupby http_request.headers.user-agent'
- name: Elastalerts - name: SOC Login Failures
description: Elastalert logs description: SOC (Security Onion Console) login failures
query: '_index: "*:elastalert*" | groupby rule_name | groupby alert_info.type' query: 'event.dataset:kratos.audit AND msg:*Encountered*self-service*login*error* | groupby http_request.headers.x-real-ip | groupby -sankey http_request.headers.x-real-ip http_request.headers.user-agent | groupby http_request.headers.user-agent'
- name: Alerts - name: Alerts
description: Overview of all alerts description: Overview of all alerts
query: 'tags:alert | groupby event.module* | groupby rule.name | groupby event.severity | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:alert | groupby event.module* | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby rule.name | groupby event.severity | groupby destination_geo.organization_name'
- name: NIDS Alerts - name: NIDS Alerts
description: NIDS (Network Intrusion Detection System) alerts description: NIDS (Network Intrusion Detection System) alerts
query: 'event.category:network AND tags:alert | groupby rule.category | groupby -sankey source.ip destination.ip | groupby rule.name | groupby rule.uuid | groupby rule.gid | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'event.category:network AND tags:alert | groupby rule.category | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby rule.name | groupby rule.uuid | groupby rule.gid | groupby destination_geo.organization_name'
- name: Sysmon Overview - name: Elastic Agent Overview
description: Overview of all Sysmon data types description: Overview of all events from Elastic Agents
query: 'event.dataset:windows.sysmon_operational | groupby -sankey event.action host.name | groupby -sankey host.name user.name | groupby host.name | groupby event.category event.action | groupby user.name | groupby dns.question.name | groupby process.executable | groupby winlog.event_data.TargetObject | groupby file.name | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'event.module:endpoint | groupby event.dataset | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name'
- name: Elastic Agent API Events
description: API (Application Programming Interface) events from Elastic Agents
query: 'event.dataset:endpoint.events.api | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name | groupby process.Ext.api.name'
- name: Elastic Agent File Events
description: File events from Elastic Agents
query: 'event.dataset:endpoint.events.file | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name | groupby event.action | groupby file.path'
- name: Elastic Agent Library Events
description: Library events from Elastic Agents
query: 'event.dataset:endpoint.events.library | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name | groupby event.action | groupby dll.path | groupby dll.code_signature.status | groupby dll.code_signature.subject_name'
- name: Elastic Agent Network Events
description: Network events from Elastic Agents
query: 'event.dataset:endpoint.events.network | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name | groupby event.action | groupby source.ip | groupby destination.ip | groupby destination.port'
- name: Elastic Agent Process Events
description: Process events from Elastic Agents
query: 'event.dataset:endpoint.events.process | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.parent.name | groupby process.parent.name | groupby -sankey process.parent.name process.name | groupby process.name | groupby event.action | groupby process.working_directory'
- name: Elastic Agent Registry Events
description: Registry events from Elastic Agents
query: 'event.dataset:endpoint.events.registry | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.name | groupby process.name | groupby event.action | groupby registry.path'
- name: Elastic Agent Security Events
description: Security events from Elastic Agents
query: 'event.dataset:endpoint.events.security | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby -sankey user.name process.executable | groupby process.executable | groupby event.action | groupby event.outcome'
- name: Host Overview - name: Host Overview
description: Overview of all host data types description: Overview of all host data types
query: '((event.category:registry OR event.category:host OR event.category:process OR event.category:driver OR event.category:configuration) OR (event.category:file AND _exists_:process.executable) OR (event.category:network AND _exists_:host.name)) | groupby event.dataset* event.category* event.action* | groupby event.type | groupby host.name | groupby user.name | groupby file.name | groupby process.executable' query: '((event.category:registry OR event.category:host OR event.category:process OR event.category:driver OR event.category:configuration) OR (event.category:file AND _exists_:process.executable) OR (event.category:network AND _exists_:host.name)) | groupby event.dataset* event.category* event.action* | groupby event.type | groupby -sankey event.type host.name | groupby host.name | groupby user.name | groupby file.name | groupby process.executable'
- name: Host Registry Changes - name: Host Registry Changes
description: Windows Registry changes description: Windows Registry changes
query: 'event.category: registry | groupby -sankey event.action host.name | groupby event.dataset event.action | groupby host.name | groupby process.executable | groupby registry.path | groupby process.executable registry.path' query: 'event.category: registry | groupby event.action | groupby -sankey event.action host.name | groupby host.name | groupby event.dataset event.action | groupby process.executable | groupby registry.path | groupby process.executable registry.path'
- name: Host DNS & Process Mappings - name: Host DNS & Process Mappings
description: DNS queries mapped to originating processes description: DNS queries mapped to originating processes
query: 'event.category: network AND _exists_:process.executable AND (_exists_:dns.question.name OR _exists_:dns.answers.data) | groupby -sankey host.name dns.question.name | groupby event.dataset event.type | groupby host.name | groupby process.executable | groupby dns.question.name | groupby dns.answers.data' query: 'event.category: network AND _exists_:process.executable AND (_exists_:dns.question.name OR _exists_:dns.answers.data) | groupby host.name | groupby -sankey host.name dns.question.name | groupby dns.question.name | groupby event.dataset event.type | groupby process.executable | groupby dns.answers.data'
- name: Host Process Activity - name: Host Process Activity
description: Process activity captured on an endpoint description: Process activity captured on an endpoint
query: 'event.category:process | groupby -sankey host.name user.name* | groupby event.dataset event.action | groupby host.name | groupby user.name | groupby process.working_directory | groupby process.executable | groupby process.command_line | groupby process.parent.executable | groupby process.parent.command_line | groupby -sankey process.parent.executable process.executable | table soc_timestamp host.name user.name process.parent.name process.name event.action process.working_directory event.dataset' query: 'event.category:process | groupby host.name | groupby -sankey host.name user.name* | groupby user.name | groupby event.dataset event.action | groupby process.working_directory | groupby process.executable | groupby process.command_line | groupby process.parent.executable | groupby process.parent.command_line | groupby -sankey process.parent.executable process.executable | table soc_timestamp host.name user.name process.parent.name process.name event.action process.working_directory event.dataset'
- name: Host File Activity - name: Host File Activity
description: File activity captured on an endpoint description: File activity captured on an endpoint
query: 'event.category: file AND _exists_:process.executable | groupby -sankey host.name process.executable | groupby host.name | groupby event.dataset event.action event.type | groupby file.name | groupby process.executable' query: 'event.category: file AND _exists_:process.executable | groupby host.name | groupby -sankey host.name process.executable | groupby process.executable | groupby event.dataset event.action event.type | groupby file.name'
- name: Host Network & Process Mappings - name: Host Network & Process Mappings
description: Network activity mapped to originating processes description: Network activity mapped to originating processes
query: 'event.category: network AND _exists_:process.executable | groupby -sankey event.action host.name | groupby -sankey host.name user.name | groupby event.dataset* event.type* event.action* | groupby host.name | groupby user.name | groupby dns.question.name | groupby process.executable | groupby winlog.event_data.TargetObject | groupby process.name | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'event.category: network AND _exists_:process.executable | groupby event.action | groupby -sankey event.action host.name | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby event.dataset* event.type* event.action* | groupby dns.question.name | groupby process.executable | groupby process.name | groupby source.ip | groupby destination.ip | groupby destination.port'
- name: Host API Events - name: Sysmon Overview
description: API (Application Programming Interface) events from endpoints description: Overview of all Sysmon data types
query: 'event.dataset:endpoint.events.api | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby process.name | groupby process.Ext.api.name' query: 'event.dataset:windows.sysmon_operational | groupby event.action | groupby -sankey event.action host.name | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby event.category event.action | groupby dns.question.name | groupby process.executable | groupby file.name | groupby source.ip | groupby destination.ip | groupby destination.port'
- name: Host Library Events
description: Library events from endpoints
query: 'event.dataset:endpoint.events.library | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby process.name | groupby event.action | groupby dll.path | groupby dll.code_signature.status | groupby dll.code_signature.subject_name'
- name: Host Security Events
description: Security events from endpoints
query: 'event.dataset:endpoint.events.security | groupby host.name | groupby -sankey host.name user.name | groupby user.name | groupby process.executable | groupby event.action | groupby event.outcome'
- name: Strelka - name: Strelka
description: Strelka file analysis description: Strelka file analysis
query: 'event.module:strelka | groupby file.mime_type | groupby -sankey file.mime_type file.source | groupby file.source | groupby file.name' query: 'event.module:strelka | groupby file.mime_type | groupby -sankey file.mime_type file.source | groupby file.source | groupby -sankey file.source file.name | groupby file.name'
- name: Zeek Notice - name: Zeek Notice
description: Zeek notice logs description: Zeek notice logs
query: 'event.dataset:zeek.notice | groupby -sankey notice.note destination.ip | groupby notice.note | groupby notice.message | groupby notice.sub_message | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'event.dataset:zeek.notice | groupby notice.note | groupby -sankey notice.note source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby notice.message | groupby notice.sub_message | groupby source_geo.organization_name | groupby destination_geo.organization_name'
- name: Connections and Metadata with community_id - name: Connections and Metadata with Community ID
description: Network connections that include community_id description: Network connections that include network.community_id
query: '_exists_:network.community_id | groupby event.module* | groupby -sankey event.module* event.dataset | groupby event.dataset | groupby source.ip source.port destination.ip destination.port | groupby network.protocol | groupby source_geo.organization_name source.geo.country_name | groupby destination_geo.organization_name destination.geo.country_name | groupby rule.name rule.category event.severity_label | groupby dns.query.name | groupby http.virtual_host http.uri | groupby notice.note notice.message notice.sub_message | groupby source.ip host.hostname user.name event.action event.type process.executable process.pid' query: '_exists_:network.community_id | groupby event.module* | groupby -sankey event.module* event.dataset | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port | groupby network.protocol | groupby source_geo.organization_name | groupby source.geo.country_name | groupby destination_geo.organization_name | groupby destination.geo.country_name | groupby rule.name rule.category event.severity_label | groupby dns.query.name | groupby http.virtual_host http.uri | groupby notice.note notice.message notice.sub_message | groupby source.ip host.hostname user.name event.action event.type process.executable process.pid'
- name: Connections seen by Zeek or Suricata - name: Connections seen by Zeek or Suricata
description: Network connections logged by Zeek or Suricata description: Network connections logged by Zeek or Suricata
query: 'tags:conn | groupby source.ip | groupby destination.ip | groupby destination.port | groupby -sankey destination.port network.protocol | groupby network.protocol | groupby network.transport | groupby connection.history | groupby connection.state | groupby connection.state_description | groupby source.geo.country_name | groupby destination.geo.country_name | groupby client.ip_bytes | groupby server.ip_bytes | groupby client.oui' query: 'tags:conn | groupby source.ip | groupby destination.ip | groupby destination.port | groupby -sankey destination.port network.protocol | groupby network.protocol | groupby network.transport | groupby connection.history | groupby connection.state | groupby connection.state_description | groupby source.geo.country_name | groupby destination.geo.country_name | groupby client.ip_bytes | groupby server.ip_bytes | groupby client.oui'
- name: DCE_RPC - name: DCE_RPC
description: DCE_RPC (Distributed Computing Environment / Remote Procedure Calls) network metadata description: DCE_RPC (Distributed Computing Environment / Remote Procedure Calls) network metadata
query: 'tags:dce_rpc | groupby -sankey dce_rpc.endpoint dce_rpc.operation | groupby dce_rpc.endpoint | groupby dce_rpc.operation | groupby dce_rpc.named_pipe | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:dce_rpc | groupby dce_rpc.endpoint | groupby -sankey dce_rpc.endpoint dce_rpc.operation | groupby dce_rpc.operation | groupby -sankey dce_rpc.operation dce_rpc.named_pipe | groupby dce_rpc.named_pipe | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: DHCP - name: DHCP
description: DHCP (Dynamic Host Configuration Protocol) leases description: DHCP (Dynamic Host Configuration Protocol) leases
query: 'tags:dhcp | groupby host.hostname | groupby dhcp.message_types | groupby -sankey client.address server.address | groupby client.address | groupby server.address | groupby host.domain' query: 'tags:dhcp | groupby host.hostname | groupby -sankey host.hostname client.address | groupby client.address | groupby -sankey client.address server.address | groupby server.address | groupby dhcp.message_types | groupby host.domain'
- name: DNS - name: DNS
description: DNS (Domain Name System) queries description: DNS (Domain Name System) queries
query: 'tags:dns | groupby dns.query.name | groupby dns.highest_registered_domain | groupby dns.parent_domain | groupby -sankey source.ip destination.ip | groupby dns.answers.name | groupby dns.query.type_name | groupby dns.response.code_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:dns | groupby dns.query.name | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby dns.highest_registered_domain | groupby dns.parent_domain | groupby dns.response.code_name | groupby dns.answers.name | groupby dns.query.type_name | groupby dns.response.code_name | groupby destination_geo.organization_name'
- name: DPD - name: DPD
description: DPD (Dynamic Protocol Detection) errors description: DPD (Dynamic Protocol Detection) errors
query: 'tags:dpd | groupby error.reason | groupby network.protocol | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:dpd | groupby error.reason | groupby -sankey error.reason source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby network.protocol | groupby destination_geo.organization_name'
- name: Files - name: Files
description: Files seen in network traffic description: Files seen in network traffic
query: 'tags:file | groupby file.mime_type | groupby -sankey file.mime_type file.source | groupby file.source | groupby file.bytes.total | groupby source.ip | groupby destination.ip | groupby destination_geo.organization_name' query: 'tags:file | groupby file.mime_type | groupby -sankey file.mime_type file.source | groupby file.source | groupby file.bytes.total | groupby source.ip | groupby destination.ip | groupby destination_geo.organization_name'
- name: FTP - name: FTP
description: FTP (File Transfer Protocol) network metadata description: FTP (File Transfer Protocol) network metadata
query: 'tags:ftp | groupby -sankey ftp.command destination.ip | groupby ftp.command | groupby ftp.argument | groupby ftp.user | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:ftp | groupby ftp.command | groupby -sankey ftp.command source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby ftp.argument | groupby ftp.user'
- name: HTTP - name: HTTP
description: HTTP (Hyper Text Transport Protocol) network metadata description: HTTP (Hyper Text Transport Protocol) network metadata
query: 'tags:http | groupby http.method | groupby -sankey http.method http.virtual_host | groupby http.virtual_host | groupby http.uri | groupby http.useragent | groupby http.status_code | groupby http.status_message | groupby file.resp_mime_types | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:http | groupby http.method | groupby -sankey http.method http.virtual_host | groupby http.virtual_host | groupby http.uri | groupby http.useragent | groupby http.status_code | groupby http.status_message | groupby file.resp_mime_types | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: Intel - name: Intel
description: Zeek Intel framework hits description: Zeek Intel framework hits
query: 'tags:intel | groupby intel.indicator | groupby -sankey source.ip intel.indicator | groupby intel.indicator_type | groupby intel.seen_where | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:intel | groupby intel.indicator | groupby -sankey intel.indicator source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby intel.indicator_type | groupby intel.seen_where'
- name: IRC - name: IRC
description: IRC (Internet Relay Chat) network metadata description: IRC (Internet Relay Chat) network metadata
query: 'tags:irc | groupby irc.command.type | groupby -sankey irc.command.type irc.username | groupby irc.username | groupby irc.nickname | groupby irc.command.value | groupby irc.command.info | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:irc | groupby irc.command.type | groupby -sankey irc.command.type irc.username | groupby irc.username | groupby irc.nickname | groupby irc.command.value | groupby irc.command.info | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: Kerberos - name: Kerberos
description: Kerberos network metadata description: Kerberos network metadata
query: 'tags:kerberos | groupby kerberos.service | groupby -sankey kerberos.service destination.ip | groupby kerberos.client | groupby kerberos.request_type | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:kerberos | groupby kerberos.service | groupby -sankey kerberos.service source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby kerberos.client | groupby kerberos.request_type'
- name: MySQL - name: MySQL
description: MySQL network metadata description: MySQL network metadata
query: 'tags:mysql | groupby mysql.command | groupby -sankey mysql.command destination.ip | groupby mysql.argument | groupby mysql.success | groupby mysql.response | groupby mysql.rows | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:mysql | groupby mysql.command | groupby -sankey mysql.command source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby mysql.argument | groupby mysql.success | groupby mysql.response | groupby mysql.rows'
- name: NTLM - name: NTLM
description: NTLM (New Technology LAN Manager) network metadata description: NTLM (New Technology LAN Manager) network metadata
query: 'tags:ntlm | groupby ntlm.server.dns.name | groupby ntlm.server.nb.name | groupby -sankey source.ip destination.ip | groupby ntlm.server.tree.name | groupby ntlm.success | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:ntlm | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby ntlm.server.dns.name | groupby ntlm.server.nb.name | groupby ntlm.server.tree.name | groupby ntlm.success | groupby source.ip | groupby destination.ip'
- name: PE - name: PE
description: PE (Portable Executable) files transferred via network traffic description: PE (Portable Executable) files transferred via network traffic
query: 'tags:pe | groupby file.machine | groupby -sankey file.machine file.os | groupby file.os | groupby file.subsystem | groupby file.section_names | groupby file.is_exe | groupby file.is_64bit' query: 'tags:pe | groupby file.machine | groupby -sankey file.machine file.os | groupby file.os | groupby -sankey file.os file.subsystem | groupby file.subsystem | groupby file.section_names | groupby file.is_exe | groupby file.is_64bit'
- name: RADIUS - name: RADIUS
description: RADIUS (Remote Authentication Dial-In User Service) network metadata description: RADIUS (Remote Authentication Dial-In User Service) network metadata
query: 'tags:radius | groupby -sankey user.name destination.ip | groupby user.name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:radius | groupby user.name | groupby -sankey user.name source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: RDP - name: RDP
description: RDP (Remote Desktop Protocol) network metadata description: RDP (Remote Desktop Protocol) network metadata
query: 'tags:rdp | groupby client.name | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:rdp | groupby client.name | groupby -sankey client.name source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: RFB - name: RFB
description: RFB (Remote Frame Buffer) network metadata description: RFB (Remote Frame Buffer) network metadata
query: 'tags:rfb | groupby rfb.desktop.name | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:rfb | groupby rfb.desktop.name | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: Signatures - name: Signatures
description: Zeek signatures description: Zeek signatures
query: 'event.dataset:zeek.signatures | groupby signature_id' query: 'event.dataset:zeek.signatures | groupby signature_id'
- name: SIP - name: SIP
description: SIP (Session Initiation Protocol) network metadata description: SIP (Session Initiation Protocol) network metadata
query: 'tags:sip | groupby client.user_agent | groupby sip.method | groupby sip.uri | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:sip | groupby sip.method | groupby -sankey sip.method source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby client.user_agent | groupby sip.method | groupby sip.uri'
- name: SMB_Files - name: SMB_Files
description: Files transferred via SMB (Server Message Block) description: Files transferred via SMB (Server Message Block)
query: 'tags:smb_files | groupby file.action | groupby file.path | groupby file.name | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:smb_files | groupby file.action | groupby -sankey file.action source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby file.path | groupby file.name'
- name: SMB_Mapping - name: SMB_Mapping
description: SMB (Server Message Block) mapping network metadata description: SMB (Server Message Block) mapping network metadata
query: 'tags:smb_mapping | groupby smb.share_type | groupby smb.path | groupby smb.service | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:smb_mapping | groupby smb.share_type | groupby -sankey smb.share_type smb.path | groupby smb.path | groupby -sankey smb.path smb.service | groupby smb.service | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: SMTP - name: SMTP
description: SMTP (Simple Mail Transfer Protocol) network metadata description: SMTP (Simple Mail Transfer Protocol) network metadata
query: 'tags:smtp | groupby smtp.from | groupby smtp.recipient_to | groupby -sankey source.ip destination.ip | groupby smtp.subject | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:smtp | groupby smtp.mail_from | groupby -sankey smtp.mail_from smtp.recipient_to | groupby smtp.recipient_to | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby smtp.subject | groupby destination_geo.organization_name'
- name: SNMP - name: SNMP
description: SNMP (Simple Network Management Protocol) network metadata description: SNMP (Simple Network Management Protocol) network metadata
query: 'tags:snmp | groupby snmp.community | groupby snmp.version | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:snmp | groupby snmp.community | groupby -sankey snmp.community source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby snmp.version'
- name: Software - name: Software
description: Software seen by Zeek via network traffic description: Software seen by Zeek via network traffic
query: 'tags:software | groupby -sankey software.type source.ip | groupby software.type | groupby software.name | groupby source.ip' query: 'tags:software | groupby software.type | groupby -sankey software.type source.ip | groupby source.ip | groupby software.name'
- name: SSH - name: SSH
description: SSH (Secure Shell) connections seen by Zeek description: SSH (Secure Shell) connections seen by Zeek
query: 'tags:ssh | groupby ssh.client | groupby ssh.server | groupby -sankey source.ip destination.ip | groupby ssh.direction | groupby ssh.version | groupby ssh.hassh_version | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'tags:ssh | groupby ssh.client | groupby -sankey ssh.client source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby ssh.server | groupby ssh.version | groupby ssh.hassh_version | groupby ssh.direction | groupby source_geo.organization_name | groupby destination_geo.organization_name'
- name: SSL - name: SSL
description: SSL/TLS network metadata description: SSL/TLS network metadata
query: 'tags:ssl | groupby ssl.version | groupby ssl.validation_status | groupby -sankey source.ip ssl.server_name | groupby ssl.server_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby ssl.certificate.issuer | groupby ssl.certificate.subject' query: 'tags:ssl | groupby ssl.version | groupby -sankey ssl.version ssl.server_name | groupby ssl.server_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: SSL - Suricata
description: SSL/TLS network metadata from Suricata
query: 'event.dataset:suricata.ssl | groupby ssl.version | groupby -sankey ssl.version ssl.server_name | groupby ssl.server_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby ssl.certificate.issuer | groupby ssl.certificate.subject'
- name: SSL - Zeek
description: SSL/TLS network metadata from Zeek
query: 'event.dataset:zeek.ssl | groupby ssl.version | groupby ssl.validation_status | groupby -sankey ssl.validation_status ssl.server_name | groupby ssl.server_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: STUN - name: STUN
description: STUN (Session Traversal Utilities for NAT) network metadata description: STUN (Session Traversal Utilities for NAT) network metadata
query: 'tags:stun* | groupby -sankey source.ip destination.ip | groupby destination.geo.country_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby event.dataset' query: 'tags:stun* | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination.geo.country_name | groupby event.dataset'
- name: Syslog - name: Syslog
description: Syslog logs description: Syslog logs
query: 'tags:syslog | groupby syslog.severity_label | groupby syslog.facility_label | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby network.protocol' query: 'tags:syslog | groupby syslog.severity_label | groupby syslog.facility_label | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby network.protocol | groupby event.dataset'
- name: TDS - name: TDS
description: TDS (Tabular Data Stream) network metadata description: TDS (Tabular Data Stream) network metadata
query: 'tags:tds* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby tds.command | groupby tds.header_type | groupby tds.procedure_name | groupby source.ip | groupby destination.ip | groupby destination.port | groupby tds.query' query: 'tags:tds* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby tds.command | groupby tds.header_type | groupby tds.procedure_name | groupby tds.query'
- name: Tunnel - name: Tunnel
description: Tunnels seen by Zeek description: Tunnels seen by Zeek
query: 'tags:tunnel | groupby -sankey source.ip destination.ip | groupby tunnel.type | groupby event.action | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination.geo.country_name' query: 'tags:tunnel | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby tunnel.type | groupby event.action | groupby destination.geo.country_name'
- name: Weird - name: Weird
description: Weird network traffic seen by Zeek description: Weird network traffic seen by Zeek
query: 'event.dataset:zeek.weird | groupby -sankey weird.name destination.ip | groupby weird.name | groupby weird.additional_info | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name' query: 'event.dataset:zeek.weird | groupby weird.name | groupby -sankey weird.name source.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name'
- name: WireGuard - name: WireGuard
description: WireGuard VPN network metadata description: WireGuard VPN network metadata
query: 'tags:wireguard | groupby -sankey source.ip destination.ip | groupby destination.geo.country_name | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:wireguard | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination.geo.country_name'
- name: x509 - name: x509
description: x.509 certificates seen by Zeek description: x.509 certificates seen by Zeek
query: 'tags:x509 | groupby -sankey x509.certificate.key.length x509.san_dns | groupby x509.certificate.key.length | groupby x509.san_dns | groupby x509.certificate.key.type | groupby x509.certificate.subject | groupby x509.certificate.issuer' query: 'tags:x509 | groupby x509.certificate.key.length | groupby -sankey x509.certificate.key.length x509.san_dns | groupby x509.san_dns | groupby x509.certificate.key.type | groupby x509.certificate.subject | groupby x509.certificate.issuer'
- name: ICS Overview - name: ICS Overview
description: Overview of ICS (Industrial Control Systems) network metadata description: Overview of ICS (Industrial Control Systems) network metadata
query: 'tags:ics | groupby event.dataset | groupby -sankey source.ip destination.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby source.mac | groupby destination.mac' query: 'tags:ics | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby source.mac | groupby destination.mac'
- name: ICS BACnet - name: ICS BACnet
description: BACnet (Building Automation and Control Networks) network metadata description: BACnet (Building Automation and Control Networks) network metadata
query: 'tags:bacnet* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:bacnet* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS BSAP - name: ICS BSAP
description: BSAP (Bristol Standard Asynchronous Protocol) network metadata description: BSAP (Bristol Standard Asynchronous Protocol) network metadata
query: 'tags:bsap* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:bsap* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS CIP - name: ICS CIP
description: CIP (Common Industrial Protocol) network metadata description: CIP (Common Industrial Protocol) network metadata
query: 'tags:cip* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:cip* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS COTP - name: ICS COTP
description: COTP (Connection Oriented Transport Protocol) network metadata description: COTP (Connection Oriented Transport Protocol) network metadata
query: 'tags:cotp* | groupby -sankey source.ip destination.ip | groupby cotp.pdu.name | groupby cotp.pdu.code | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:cotp* | groupby cotp.pdu.name | groupby -sankey cotp.pdu.name source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby cotp.pdu.code'
- name: ICS DNP3 - name: ICS DNP3
description: DNP3 (Distributed Network Protocol) network metadata description: DNP3 (Distributed Network Protocol) network metadata
query: 'tags:dnp3* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby dnp3.function_code | groupby dnp3.object_type | groupby dnp3.fc_request | groupby dnp3.fc_reply | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:dnp3* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby dnp3.function_code | groupby dnp3.object_type | groupby dnp3.fc_request | groupby dnp3.fc_reply'
- name: ICS ECAT - name: ICS ECAT
description: ECAT (Ethernet for Control Automation Technology) network metadata description: ECAT (Ethernet for Control Automation Technology) network metadata
query: 'tags:ecat* | groupby -sankey event.dataset source.mac destination.mac | groupby event.dataset | groupby source.mac | groupby destination.mac | groupby ecat.command | groupby ecat.register.type' query: 'tags:ecat* | groupby event.dataset | groupby -sankey event.dataset ecat.command | groupby ecat.command | groupby -sankey ecat.command source.mac | groupby source.mac | groupby -sankey source.mac destination.mac | groupby destination.mac | groupby ecat.register.type'
- name: ICS ENIP - name: ICS ENIP
description: ENIP (Ethernet Industrial Protocol) network metadata description: ENIP (Ethernet Industrial Protocol) network metadata
query: 'tags:enip* | groupby -sankey source.ip destination.ip | groupby enip.command | groupby enip.status_code | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:enip* | groupby enip.command | groupby -sankey enip.command source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby enip.status_code'
- name: ICS Modbus - name: ICS Modbus
description: Modbus network metadata description: Modbus network metadata
query: 'tags:modbus* | groupby -sankey event.dataset modbus.function | groupby event.dataset | groupby modbus.function | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:modbus* | groupby event.dataset | groupby -sankey event.dataset modbus.function | groupby modbus.function | groupby -sankey modbus.function source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS OPC UA - name: ICS OPC UA
description: OPC UA (Unified Architecture) network metadata description: OPC UA (Unified Architecture) network metadata
query: 'tags:opcua* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:opcua* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS Profinet - name: ICS Profinet
description: Profinet (Process Field Network) network metadata description: Profinet (Process Field Network) network metadata
query: 'tags:profinet* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:profinet* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: ICS S7 - name: ICS S7
description: S7 (Siemens) network metadata description: S7 (Siemens) network metadata
query: 'tags:s7* | groupby -sankey event.dataset source.ip destination.ip | groupby event.dataset | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'tags:s7* | groupby event.dataset | groupby -sankey event.dataset source.ip | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port'
- name: Firewall - name: Firewall
description: Firewall logs description: Firewall logs
query: 'observer.type:firewall | groupby -sankey event.action observer.ingress.interface.name | groupby event.action | groupby observer.ingress.interface.name | groupby network.type | groupby network.transport | groupby source.ip | groupby destination.ip | groupby destination.port' query: 'observer.type:firewall | groupby event.action | groupby -sankey event.action observer.ingress.interface.name | groupby observer.ingress.interface.name | groupby network.type | groupby network.transport | groupby source.ip | groupby destination.ip | groupby destination.port'
- name: Firewall Auth - name: Firewall Auth
description: Firewall authentication logs description: Firewall authentication logs
query: 'observer.type:firewall AND event.category:authentication | groupby user.name | groupby -sankey user.name source.ip | groupby source.ip | table soc_timestamp user.name source.ip message' query: 'observer.type:firewall AND event.category:authentication | groupby user.name | groupby -sankey user.name source.ip | groupby source.ip | table soc_timestamp user.name source.ip message'
- name: VLAN - name: VLAN
description: VLAN (Virtual Local Area Network) tagged logs description: VLAN (Virtual Local Area Network) tagged logs
query: '* AND _exists_:network.vlan.id | groupby network.vlan.id | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby event.dataset | groupby event.module | groupby observer.name | groupby source.geo.country_name | groupby destination.geo.country_name' query: '* AND _exists_:network.vlan.id | groupby network.vlan.id | groupby -sankey network.vlan.id source.ip | groupby source.ip | groupby destination.ip | groupby destination.port | groupby event.dataset | groupby event.module | groupby observer.name | groupby source.geo.country_name | groupby destination.geo.country_name'
- name: GeoIP - Destination Countries - name: GeoIP - Destination Countries
description: GeoIP tagged logs visualized by destination countries description: GeoIP tagged logs visualized by destination countries
query: '* AND _exists_:destination.geo.country_name | groupby destination.geo.country_name | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby event.dataset | groupby event.module' query: '* AND _exists_:destination.geo.country_name | groupby destination.geo.country_name | groupby source.ip | groupby -sankey source.ip destination.ip | groupby destination.ip | groupby destination.port | groupby destination_geo.organization_name | groupby event.dataset | groupby event.module'
@@ -1835,11 +1931,33 @@ soc:
- soc_timestamp - soc_timestamp
- rule.name - rule.name
- event.severity_label - event.severity_label
- event_data.event.module - event_data.event.dataset
- event_data.event.category - event_data.source.ip
- event_data.source.port
- event_data.destination.host
- event_data.destination.port
- event_data.process.executable - event_data.process.executable
- event_data.process.pid - event_data.process.pid
- event_data.winlog.computer_name ':sigma:':
- soc_timestamp
- rule.name
- event.severity_label
- event_data.event.dataset
- event_data.source.ip
- event_data.source.port
- event_data.destination.host
- event_data.destination.port
- event_data.process.executable
- event_data.process.pid
':strelka:':
- soc_timestamp
- file.name
- file.size
- hash.md5
- file.source
- file.mime_type
- log.id.fuid
- event.dataset
queryBaseFilter: tags:alert queryBaseFilter: tags:alert
queryToggleFilters: queryToggleFilters:
- name: acknowledged - name: acknowledged
@@ -1978,6 +2096,13 @@ soc:
mostRecentlyUsedLimit: 5 mostRecentlyUsedLimit: 5
safeStringMaxLength: 100 safeStringMaxLength: 100
queryBaseFilter: '_index:"*:so-detection" AND so_kind:detection' queryBaseFilter: '_index:"*:so-detection" AND so_kind:detection'
presets:
manualSync:
customEnabled: false
labels:
- Suricata
- Strelka
- ElastAlert
eventFields: eventFields:
default: default:
- so_detection.title - so_detection.title
@@ -1985,6 +2110,7 @@ soc:
- so_detection.severity - so_detection.severity
- so_detection.language - so_detection.language
- so_detection.ruleset - so_detection.ruleset
- soc_timestamp
queries: queries:
- name: "All Detections" - name: "All Detections"
query: "_id:*" query: "_id:*"
@@ -1996,12 +2122,14 @@ soc:
query: "so_detection.isEnabled:false" query: "so_detection.isEnabled:false"
- name: "Detection Type - Suricata (NIDS)" - name: "Detection Type - Suricata (NIDS)"
query: "so_detection.language:suricata" query: "so_detection.language:suricata"
- name: "Detection Type - Sigma - All" - name: "Detection Type - Sigma (Elastalert) - All"
query: "so_detection.language:sigma" query: "so_detection.language:sigma"
- name: "Detection Type - Sigma - Windows" - name: "Detection Type - Sigma (Elastalert) - Windows"
query: 'so_detection.language:sigma AND so_detection.content: "*product: windows*"' query: 'so_detection.language:sigma AND so_detection.content: "*product: windows*"'
- name: "Detection Type - Yara (Strelka)" - name: "Detection Type - YARA (Strelka)"
query: "so_detection.language:yara" query: "so_detection.language:yara"
- name: "Security Onion - Grid Detections"
query: "so_detection.ruleset:securityonion-resources"
detection: detection:
presets: presets:
severity: severity:


@@ -8,6 +8,7 @@
{% from 'vars/globals.map.jinja' import GLOBALS %} {% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %} {% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'soc/merged.map.jinja' import DOCKER_EXTRA_HOSTS %} {% from 'soc/merged.map.jinja' import DOCKER_EXTRA_HOSTS %}
{% from 'soc/merged.map.jinja' import SOCMERGED %}
include: include:
- soc.config - soc.config
@@ -24,12 +25,16 @@ so-soc:
- binds: - binds:
- /nsm/rules:/nsm/rules:rw - /nsm/rules:/nsm/rules:rw
- /opt/so/conf/strelka:/opt/sensoroni/yara:rw - /opt/so/conf/strelka:/opt/sensoroni/yara:rw
- /opt/so/conf/sigma:/opt/sensoroni/sigma:rw
- /opt/so/rules/elastalert/rules:/opt/sensoroni/elastalert:rw - /opt/so/rules/elastalert/rules:/opt/sensoroni/elastalert:rw
- /opt/so/conf/soc/fingerprints:/opt/sensoroni/fingerprints:rw - /opt/so/conf/soc/fingerprints:/opt/sensoroni/fingerprints:rw
- /nsm/soc/jobs:/opt/sensoroni/jobs:rw - /nsm/soc/jobs:/opt/sensoroni/jobs:rw
- /nsm/soc/uploads:/nsm/soc/uploads:rw - /nsm/soc/uploads:/nsm/soc/uploads:rw
- /opt/so/log/soc/:/opt/sensoroni/logs/:rw - /opt/so/log/soc/:/opt/sensoroni/logs/:rw
- /opt/so/conf/soc/soc.json:/opt/sensoroni/sensoroni.json:ro - /opt/so/conf/soc/soc.json:/opt/sensoroni/sensoroni.json:ro
{% if SOCMERGED.telemetryEnabled and not GLOBALS.airgap %}
- /opt/so/conf/soc/analytics.js:/opt/sensoroni/html/js/analytics.js:ro
{% endif %}
- /opt/so/conf/soc/motd.md:/opt/sensoroni/html/motd.md:ro - /opt/so/conf/soc/motd.md:/opt/sensoroni/html/motd.md:ro
- /opt/so/conf/soc/banner.md:/opt/sensoroni/html/login/banner.md:ro - /opt/so/conf/soc/banner.md:/opt/sensoroni/html/login/banner.md:ro
- /opt/so/conf/soc/sigma_so_pipeline.yaml:/opt/sensoroni/sigma_so_pipeline.yaml:ro - /opt/so/conf/soc/sigma_so_pipeline.yaml:/opt/sensoroni/sigma_so_pipeline.yaml:ro
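The new Jinja guard above only changes one thing at render time: when SOCMERGED.telemetryEnabled is true and the grid is not airgapped, the container gets one extra read-only bind. A minimal sketch of that portion of the rendered bind list, under those assumptions:

- /opt/so/conf/soc/soc.json:/opt/sensoroni/sensoroni.json:ro
- /opt/so/conf/soc/analytics.js:/opt/sensoroni/html/js/analytics.js:ro
- /opt/so/conf/soc/motd.md:/opt/sensoroni/html/motd.md:ro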
@@ -66,6 +71,7 @@ so-soc:
- file: socdatadir - file: socdatadir
- file: soclogdir - file: soclogdir
- file: socconfig - file: socconfig
- file: socanalytics
- file: socmotd - file: socmotd
- file: socbanner - file: socbanner
- file: soccustom - file: soccustom


@@ -0,0 +1,5 @@
(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-TM46SL7T');


@@ -12,6 +12,10 @@ To see all the latest features and fixes in this version of Security Onion, clic
Want the best hardware for your enterprise deployment? Check out our [enterprise appliances](https://securityonionsolutions.com/hardware/)! Want the best hardware for your enterprise deployment? Check out our [enterprise appliances](https://securityonionsolutions.com/hardware/)!
## Premium Support
Experiencing difficulties and need priority support or remote assistance? We offer a [premium support plan](https://securityonionsolutions.com/support/) to assist corporate, educational, and government organizations.
## Customize This Space ## Customize This Space
Make this area your own by customizing the content in the [Config](/#/config?s=soc.files.soc.motd__md) interface. Make this area your own by customizing the content in the [Config](/#/config?s=soc.files.soc.motd__md) interface.


@@ -79,3 +79,12 @@ transformations:
- type: logsource - type: logsource
product: windows product: windows
category: driver_load category: driver_load
- id: linux_security_add-fields
type: add_condition
conditions:
event.module: 'system'
event.dataset: 'system.auth'
rule_conditions:
- type: logsource
product: linux
service: auth
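A hedged illustration of what this new linux_security_add-fields transformation does: any Sigma rule whose logsource declares product: linux and service: auth (the rule below is hypothetical, shown only for shape) picks up the event.module: system and event.dataset: system.auth conditions when it is converted for ElastAlert, so the generated query stays scoped to system.auth events:

title: Hypothetical Linux Auth Example
logsource:
  product: linux
  service: auth
detection:
  selection:
    user.name: root
  condition: selection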


@@ -30,6 +30,11 @@
{# since cases is not a valid soc config item and only used for the map files, remove it from being placed in the config #} {# since cases is not a valid soc config item and only used for the map files, remove it from being placed in the config #}
{% do SOCMERGED.config.server.modules.pop('cases') %} {% do SOCMERGED.config.server.modules.pop('cases') %}
{# do not automatically enable Sigma rules if install is Eval or Import #}
{% if grains['role'] in ['so-eval', 'so-import'] %}
{% do SOCMERGED.config.server.modules.elastalertengine.update({'autoEnabledSigmaRules': []}) %}
{% endif %}
{# remove these modules if detections is disabled #} {# remove these modules if detections is disabled #}
{% if not SOCMERGED.config.server.client.detectionsEnabled %} {% if not SOCMERGED.config.server.client.detectionsEnabled %}
{% do SOCMERGED.config.server.modules.pop('elastalertengine') %} {% do SOCMERGED.config.server.modules.pop('elastalertengine') %}
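The eval/import guard added above runs before soc.json is rendered, so on an so-eval or so-import node the effective ElastAlert engine settings would look like the sketch below (other keys omitted); community rules are still imported, they just are not auto-enabled:

elastalertengine:
  autoEnabledSigmaRules: []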


@@ -2,6 +2,11 @@ soc:
enabled: enabled:
description: You can enable or disable SOC. description: You can enable or disable SOC.
advanced: True advanced: True
telemetryEnabled:
title: SOC Telemetry
description: When this setting is enabled and the grid is not in airgap mode, SOC will provide feature usage data to the Security Onion development team via Google Analytics. This data helps Security Onion developers determine which product features are being used and can also provide insight into improving the user interface. When changing this setting, wait for the grid to fully synchronize and then perform a hard browser refresh on SOC, to force the browser cache to update and reflect the new setting.
global: True
helpLink: telemetry.html
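Since this is a global boolean setting, the corresponding pillar value is a single key under soc; a minimal sketch (normally toggled from the SOC Config interface rather than edited by hand):

soc:
  telemetryEnabled: false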
files: files:
soc: soc:
banner__md: banner__md:
@@ -78,14 +83,42 @@ soc:
advanced: True advanced: True
modules: modules:
elastalertengine: elastalertengine:
sigmaRulePackages: allowRegex:
description: 'Defines the Sigma Community Ruleset you want to run. One of these (core | core+ | core++ | all ) as well as an optional Add-on (emerging_threats_addon). WARNING! Changing the ruleset will remove all existing Sigma rules of the previous ruleset and their associated overrides. This removal cannot be undone. (future use, not yet complete)' description: 'Regex used to filter imported Sigma rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: False
autoUpdateEnabled:
description: 'Set to true to enable automatic Internet-connected updates of the Sigma Community Ruleset. If this is an Airgap system, this setting will be overridden and set to false. (future use, not yet complete)'
global: True global: True
advanced: True advanced: True
helpLink: sigma.html
autoEnabledSigmaRules:
description: 'Sigma rules to automatically enable on initial import. Format is $Ruleset+$Level - for example, for the core community ruleset and critical level rules: core+critical'
global: True
advanced: True
helpLink: sigma.html
denyRegex:
description: 'Regex used to filter imported Sigma rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: True
helpLink: sigma.html
communityRulesImportFrequencySeconds:
description: 'How often to check for new Sigma rules (in seconds). This applies to both Community Rule Packages and any configured Git repos.'
global: True
advanced: True
helpLink: sigma.html
rulesRepos:
description: 'Custom Git repos to pull Sigma rules from. License field is required, folder is optional.'
global: True
advanced: True
forcedType: "[]{}"
helpLink: sigma.html
sigmaRulePackages:
description: 'Defines the Sigma Community Ruleset you want to run. One of these (core | core+ | core++ | all ) as well as an optional Add-on (emerging_threats_addon). WARNING! Changing the ruleset will remove all existing Sigma rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
global: True
advanced: False
helpLink: sigma.html
autoUpdateEnabled:
description: 'Set to true to enable automatic Internet-connected updates of the Sigma Community Ruleset. If this is an Airgap system, this setting will be overridden and set to false.'
global: True
advanced: True
helpLink: sigma.html
elastic:
index:
description: Comma-separated list of indices or index patterns (wildcard "*" supported) that SOC will search for records.
@@ -148,10 +181,51 @@ soc:
global: True
advanced: True
strelkaengine:
autoUpdateEnabled:
description: 'Set to true to enable automatic Internet-connected updates of the Yara rulesets. If this is an Airgap system, this setting will be overridden and set to false. (future use, not yet complete)'
allowRegex:
description: 'Regex used to filter imported Yara rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: True
helpLink: yara.html
autoUpdateEnabled:
description: 'Set to true to enable automatic Internet-connected updates of the Yara rulesets. If this is an Airgap system, this setting will be overridden and set to false.'
global: True
advanced: True
denyRegex:
description: 'Regex used to filter imported Yara rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: True
helpLink: yara.html
communityRulesImportFrequencySeconds:
description: 'How often to check for new Yara rules (in seconds). This applies to both Community Rules and any configured Git repos.'
global: True
advanced: True
helpLink: yara.html
rulesRepos:
description: 'Custom Git repos to pull Yara rules from. License field is required'
global: True
advanced: True
forcedType: "[]{}"
helpLink: yara.html
suricataengine:
allowRegex:
description: 'Regex used to filter imported Suricata rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: True
helpLink: suricata.html
autoUpdateEnabled:
description: 'Set to true to enable automatic Internet-connected updates of the Suricata rulesets. If this is an Airgap system, this setting will be overridden and set to false.'
global: True
advanced: True
denyRegex:
description: 'Regex used to filter imported Suricata rules. Deny regex takes precedence over the Allow regex setting.'
global: True
advanced: True
helpLink: suricata.html
communityRulesImportFrequencySeconds:
description: 'How often to check for new Suricata rules (in seconds).'
global: True
advanced: True
helpLink: suricata.html
client:
enableReverseLookup:
description: Set to true to enable reverse DNS lookups for IP addresses in the SOC UI.
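
The new elastalertengine, strelkaengine, and suricataengine settings above are exposed in the SOC Config UI; for a scripted override, the hedged sketch below appends values to the advanced soc pillar. The file path and the soc:config:server:modules nesting are assumptions based on the adv_<component>.sls convention and the SOCMERGED structure seen elsewhere in this changeset.

# Enable the critical-level core community Sigma rules on import and check
# for new rules once a day (values illustrative only).
cat <<'EOF' >> /opt/so/saltstack/local/pillar/soc/adv_soc.sls
soc:
  config:
    server:
      modules:
        elastalertengine:
          autoEnabledSigmaRules:
            - core+critical
          communityRulesImportFrequencySeconds: 86400
EOF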

View File

@@ -664,6 +664,230 @@ elastickeyperms:
{%- endif %}
{% if grains['role'] in ['so-manager', 'so-receiver', 'so-searchnode'] %}
kafka_key:
x509.private_key_managed:
- name: /etc/pki/kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka.key') -%}
- prereq:
- x509: /etc/pki/kafka.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_crt:
x509.certificate_managed:
- name: /etc/pki/kafka.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka.key -in /etc/pki/kafka.crt -export -out /etc/pki/kafka.p12 -nodes -passout pass:changeit"
- onchanges:
- x509: /etc/pki/kafka.key
elasticfleet_kafka_key:
x509.private_key_managed:
- name: /etc/pki/elasticfleet-kafka.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/elasticfleet-kafka.key') -%}
- prereq:
- x509: elasticfleet_kafka_crt
{%- endif %}
- retry:
attempts: 5
interval: 30
elasticfleet_kafka_crt:
x509.certificate_managed:
- name: /etc/pki/elasticfleet-kafka.crt
- ca_server: {{ ca_server }}
- signing_policy: kafka
- private_key: /etc/pki/elasticfleet-kafka.key
- CN: {{ GLOBALS.hostname }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs8 -in /etc/pki/elasticfleet-kafka.key -topk8 -out /etc/pki/elasticfleet-kafka.p8 -nocrypt"
- onchanges:
- x509: elasticfleet_kafka_key
kafka_logstash_key:
x509.private_key_managed:
- name: /etc/pki/kafka-logstash.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka-logstash.key') -%}
- prereq:
- x509: /etc/pki/kafka-logstash.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_logstash_crt:
x509.certificate_managed:
- name: /etc/pki/kafka-logstash.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka-logstash.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
cmd.run:
- name: "/usr/bin/openssl pkcs12 -inkey /etc/pki/kafka-logstash.key -in /etc/pki/kafka-logstash.crt -export -out /etc/pki/kafka-logstash.p12 -nodes -passout pass:changeit"
- onchanges:
- x509: /etc/pki/kafka-logstash.key
{% if grains['role'] in ['so-manager', 'so-managersearch', 'so-standalone', 'so-receiver'] %}
kafka_client_key:
x509.private_key_managed:
- name: /etc/pki/kafka-client.key
- keysize: 4096
- backup: True
- new: True
{% if salt['file.file_exists']('/etc/pki/kafka-client.key') -%}
- prereq:
- x509: /etc/pki/kafka-client.crt
{%- endif %}
- retry:
attempts: 5
interval: 30
kafka_client_crt:
x509.certificate_managed:
- name: /etc/pki/kafka-client.crt
- ca_server: {{ ca_server }}
- subjectAltName: DNS:{{ GLOBALS.hostname }}, IP:{{ GLOBALS.node_ip }}
- signing_policy: kafka
- private_key: /etc/pki/kafka-client.key
- CN: {{ GLOBALS.hostname }}
- days_remaining: 0
- days_valid: 820
- backup: True
- timeout: 30
- retry:
attempts: 5
interval: 30
kafka_client_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-client.key
- mode: 640
- user: 960
- group: 939
kafka_client_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-client.crt
- mode: 640
- user: 960
- group: 939
{% endif %}
kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.key
- mode: 640
- user: 960
- group: 939
kafka_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.crt
- mode: 640
- user: 960
- group: 939
kafka_logstash_key_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.key
- mode: 640
- user: 960
- group: 939
kafka_logstash_crt_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.crt
- mode: 640
- user: 960
- group: 939
kafka_logstash_pkcs12_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka-logstash.p12
- mode: 640
- user: 960
- group: 931
kafka_pkcs8_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.p8
- mode: 640
- user: 960
- group: 939
kafka_pkcs12_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.p12
- mode: 640
- user: 960
- group: 939
elasticfleet_kafka_cert_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.crt
- mode: 640
- user: 960
- group: 939
elasticfleet_kafka_key_perms:
file.managed:
- replace: False
- name: /etc/pki/elasticfleet-kafka.key
- mode: 640
- user: 960
- group: 939
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
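
A quick, hedged sanity check of the Kafka TLS material generated by the states above (the changeit pass phrase comes from the cmd.run export step):

openssl x509 -in /etc/pki/kafka.crt -noout -subject -enddate    # CN/SAN and expiry
openssl rsa -in /etc/pki/kafka.key -check -noout                # private key integrity
openssl pkcs12 -info -in /etc/pki/kafka.p12 -passin pass:changeit -noout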

View File

@@ -71,3 +71,20 @@ fleet_crt:
fbcertdir:
file.absent:
- name: /opt/so/conf/filebeat/etc/pki
kafka_crt:
file.absent:
- name: /etc/pki/kafka.crt
kafka_key:
file.absent:
- name: /etc/pki/kafka.key
kafka_logstash_crt:
file.absent:
- name: /etc/pki/kafka-logstash.crt
kafka_logstash_key:
file.absent:
- name: /etc/pki/kafka-logstash.key
kafka_logstash_keystore:
file.absent:
- name: /etc/pki/kafka-logstash.p12

View File

@@ -145,7 +145,7 @@ suricata:
address-groups:
HOME_NET:
description: List of hosts or networks.
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regex: ^(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\/([0-9]|[1-2][0-9]|3[0-2]))?$|^((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?))|:))|(([0-9A-Fa-f]{1,4}:){5}((:[0-9A-Fa-f]{1,4}){1,2}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){4}((:[0-9A-Fa-f]{1,4}){1,3}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){3}((:[0-9A-Fa-f]{1,4}){1,4}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){2}((:[0-9A-Fa-f]{1,4}){1,5}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){1}((:[0-9A-Fa-f]{1,4}){1,6}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(:((:[0-9A-Fa-f]{1,4}){1,7}|:)))(\/([0-9]|[1-9][0-9]|1[0-1][0-9]|12[0-8]))?$
regexFailureMessage: You must enter a valid IP address or CIDR.
helpLink: suricata.html
EXTERNAL_NET:

View File

@@ -11,6 +11,7 @@ telegraf:
quiet: 'false'
scripts:
eval:
- agentstatus.sh
- checkfiles.sh
- influxdbsize.sh
- lasthighstate.sh
@@ -23,6 +24,7 @@ telegraf:
- zeekcaptureloss.sh
- zeekloss.sh
standalone:
- agentstatus.sh
- checkfiles.sh
- eps.sh
- influxdbsize.sh
@@ -38,6 +40,7 @@ telegraf:
- zeekloss.sh
- features.sh
manager:
- agentstatus.sh
- influxdbsize.sh
- lasthighstate.sh
- os.sh
@@ -46,6 +49,7 @@ telegraf:
- sostatus.sh
- features.sh
managersearch:
- agentstatus.sh
- eps.sh
- influxdbsize.sh
- lasthighstate.sh

View File

@@ -56,6 +56,7 @@ so-telegraf:
- /opt/so/log/raid:/var/log/raid:ro
- /opt/so/log/sostatus:/var/log/sostatus:ro
- /opt/so/log/salt:/var/log/salt:ro
- /opt/so/log/agents:/var/log/agents:ro
{% if DOCKER.containers['so-telegraf'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-telegraf'].custom_bind_mounts %}
- {{ BIND }}

View File

@@ -0,0 +1,34 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
# if this script isn't already running
if [[ ! "`pidof -x $(basename $0) -o %PPID`" ]]; then
LOGFILE=/var/log/agents/agentstatus.log
# Check to see if the file is there yet so we don't break install verification since there is a 5 minute delay for this file to show up
if [ -f $LOGFILE ]; then
ONLINE=$(cat $LOGFILE | grep -wF online | awk '{print $2}' | tr -d ',')
ERROR=$(cat $LOGFILE | grep -wF error | awk '{print $2}' | tr -d ',')
INACTIVE=$(cat $LOGFILE | grep -wF inactive | awk '{print $2}' | tr -d ',')
OFFLINE=$(cat $LOGFILE | grep -wF offline | awk '{print $2}' | tr -d ',')
UPDATING=$(cat $LOGFILE | grep -wF updating | awk '{print $2}' | tr -d ',')
UNENROLLED=$(cat $LOGFILE | grep -wF unenrolled | awk '{print $2}' | tr -d ',')
OTHER=$(cat $LOGFILE | grep -wF other | awk '{print $2}' | tr -d ',')
EVENTS=$(cat $LOGFILE | grep -wF events | awk '{print $2}' | tr -d ',')
TOTAL=$(cat $LOGFILE | grep -wF total | awk '{print $2}' | tr -d ',')
ALL=$(cat $LOGFILE | grep -wF all | awk '{print $2}' | tr -d ',')
ACTIVE=$(cat $LOGFILE | grep -wF active | awk '{print $2}')
echo "agentstatus online=$ONLINE,error=$ERROR,inactive=$INACTIVE,offline=$OFFLINE,updating=$UPDATING,unenrolled=$UNENROLLED,other=$OTHER,events=$EVENTS,total=$TOTAL,all=$ALL,active=$ACTIVE"
fi
fi
exit 0
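
The script emits a single influx-style line for the Telegraf exec input. A minimal sketch of the parsing, using a fabricated sample log (the real agentstatus.log format is assumed here, not shown in this diff):

printf 'online 10,\noffline 2,\nactive 12\n' > /tmp/agentstatus.sample
ONLINE=$(grep -wF online /tmp/agentstatus.sample | awk '{print $2}' | tr -d ',')
OFFLINE=$(grep -wF offline /tmp/agentstatus.sample | awk '{print $2}' | tr -d ',')
echo "agentstatus online=$ONLINE,offline=$OFFLINE"   # -> agentstatus online=10,offline=2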

View File

@@ -106,6 +106,7 @@ base:
- utility
- elasticfleet
- stig
- kafka
'*_standalone and G@saltversion:{{saltversion}}':
- match: compound
@@ -235,6 +236,7 @@ base:
- logstash
- redis
- elasticfleet.install_agent_grid
- kafka
'*_idh and G@saltversion:{{saltversion}}':
- match: compound

View File

@@ -23,7 +23,7 @@
'manager_ip': INIT.PILLAR.global.managerip,
'md_engine': INIT.PILLAR.global.mdengine,
'pcap_engine': GLOBALMERGED.pcapengine,
'pipeline': INIT.PILLAR.global.pipeline,
'pipeline': GLOBALMERGED.pipeline,
'so_version': INIT.PILLAR.global.soversion,
'so_docker_gateway': DOCKER.gateway,
'so_docker_range': DOCKER.range,

View File

@@ -24,7 +24,7 @@ zeek:
advanced: False
helpLink: zeek.html
multiline: True
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regex: ^(((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\/([0-9]|[1-2][0-9]|3[0-2]))?$|^((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?))|:))|(([0-9A-Fa-f]{1,4}:){5}((:[0-9A-Fa-f]{1,4}){1,2}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){4}((:[0-9A-Fa-f]{1,4}){1,3}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){3}((:[0-9A-Fa-f]{1,4}){1,4}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){2}((:[0-9A-Fa-f]{1,4}){1,5}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(([0-9A-Fa-f]{1,4}:){1}((:[0-9A-Fa-f]{1,4}){1,6}|:((25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)\.){3}(25[0-5]|(2[0-4]|1[0-9])[0-9]|0?[0-9][0-9]?)|:))|(:((:[0-9A-Fa-f]{1,4}){1,7}|:)))(\/([0-9]|[1-9][0-9]|1[0-1][0-9]|12[0-8]))?$
regexFailureMessage: You must enter a valid IP address or CIDR.
node:
lb_procs:

View File

@@ -803,6 +803,7 @@ create_manager_pillars() {
patch_pillar
nginx_pillar
kibana_pillar
kafka_pillar
}
create_repo() {
@@ -1191,6 +1192,18 @@ kibana_pillar() {
logCmd "touch $kibana_pillar_file" logCmd "touch $kibana_pillar_file"
} }
kafka_pillar() {
KAFKACLUSTERID=$(get_random_value 22)
KAFKAPASS=$(get_random_value)
logCmd "mkdir -p $local_salt_dir/pillar/kakfa"
logCmd "touch $adv_kafka_pillar_file"
logCmd "touch $kafka_pillar_file"
printf '%s\n'\
"kafka:"\
" cluster_id: $KAFKACLUSTERID"\
" certpass: $KAFKAPASS" > $kafka_pillar_file
}
logrotate_pillar() {
logCmd "mkdir -p $local_salt_dir/pillar/logrotate"
logCmd "touch $adv_logrotate_pillar_file"
@@ -1258,6 +1271,10 @@ soc_pillar() {
" server:"\ " server:"\
" srvKey: '$SOCSRVKEY'"\ " srvKey: '$SOCSRVKEY'"\
"" > "$soc_pillar_file" "" > "$soc_pillar_file"
if [[ $telemetry -ne 0 ]]; then
echo " telemetryEnabled: false" >> $soc_pillar_file
fi
}
telegraf_pillar() {
@@ -1321,7 +1338,6 @@ create_global() {
# Continue adding other details
echo " imagerepo: '$IMAGEREPO'" >> $global_pillar_file
echo " pipeline: 'redis'" >> $global_pillar_file
echo " repo_host: '$HOSTNAME'" >> $global_pillar_file
echo " influxdb_host: '$HOSTNAME'" >> $global_pillar_file
echo " registry_host: '$HOSTNAME'" >> $global_pillar_file
@@ -1387,7 +1403,7 @@ make_some_dirs() {
mkdir -p $local_salt_dir/salt/firewall/portgroups
mkdir -p $local_salt_dir/salt/firewall/ports
for THEDIR in bpf pcap elasticsearch ntp firewall redis backup influxdb strelka sensoroni soc docker zeek suricata nginx telegraf logstash soc manager kratos idstools idh elastalert stig global;do
for THEDIR in bpf pcap elasticsearch ntp firewall redis backup influxdb strelka sensoroni soc docker zeek suricata nginx telegraf logstash soc manager kratos idstools idh elastalert stig global kafka;do
mkdir -p $local_salt_dir/pillar/$THEDIR
touch $local_salt_dir/pillar/$THEDIR/adv_$THEDIR.sls
touch $local_salt_dir/pillar/$THEDIR/soc_$THEDIR.sls
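
Once kafka_pillar() has run on the manager, the new values should resolve through Salt. A hedged sketch; the key names come from kafka_pillar() above, while the /opt/so/saltstack/local path assumes the usual value of $local_salt_dir:

salt-call pillar.get kafka:cluster_id
salt-call pillar.get kafka:certpass
ls /opt/so/saltstack/local/pillar/kafka/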

View File

@@ -447,6 +447,7 @@ if ! [[ -f $install_opt_file ]]; then
get_redirect
# Does the user want to allow access to the UI?
collect_so_allow
[[ ! $is_airgap ]] && whiptail_accept_telemetry
whiptail_end_settings
elif [[ $is_standalone ]]; then
waitforstate=true
@@ -468,6 +469,7 @@ if ! [[ -f $install_opt_file ]]; then
collect_webuser_inputs
get_redirect
collect_so_allow
[[ ! $is_airgap ]] && whiptail_accept_telemetry
whiptail_end_settings
elif [[ $is_manager ]]; then
info "Setting up as node type manager"
@@ -488,6 +490,7 @@ if ! [[ -f $install_opt_file ]]; then
collect_webuser_inputs
get_redirect
collect_so_allow
[[ ! $is_airgap ]] && whiptail_accept_telemetry
whiptail_end_settings
elif [[ $is_managersearch ]]; then
info "Setting up as node type managersearch"
@@ -508,6 +511,7 @@ if ! [[ -f $install_opt_file ]]; then
collect_webuser_inputs
get_redirect
collect_so_allow
[[ ! $is_airgap ]] && whiptail_accept_telemetry
whiptail_end_settings
elif [[ $is_sensor ]]; then
info "Setting up as node type sensor"
@@ -597,6 +601,7 @@ if ! [[ -f $install_opt_file ]]; then
collect_webuser_inputs
get_redirect
collect_so_allow
[[ ! $is_airgap ]] && whiptail_accept_telemetry
whiptail_end_settings
elif [[ $is_receiver ]]; then
@@ -619,6 +624,16 @@ if ! [[ -f $install_opt_file ]]; then
set_minion_info
whiptail_end_settings
elif [[ $is_kafka ]]; then
info "Setting up as node type Kafka broker"
#check_requirements "kafka"
networking_needful
collect_mngr_hostname
add_mngr_ip_to_hosts
check_manager_connection
set_minion_info
whiptail_end_settings
fi
if [[ $waitforstate ]]; then

View File

@@ -178,6 +178,12 @@ export redis_pillar_file
adv_redis_pillar_file="$local_salt_dir/pillar/redis/adv_redis.sls"
export adv_redis_pillar_file
kafka_pillar_file="$local_salt_dir/pillar/kafka/soc_kafka.sls"
export kafka_pillar_file
adv_kafka_pillar_file="$local_salt_dir/pillar/kafka/adv_kafka.sls"
export adv_kafka_pillar_file
idh_pillar_file="$local_salt_dir/pillar/idh/soc_idh.sls"
export idh_pillar_file

View File

@@ -144,6 +144,26 @@ whiptail_cancel() {
exit 1
}
whiptail_accept_telemetry() {
[ -n "$TESTING" ] && return
read -r -d '' message <<- EOM
The Security Onion development team could use your help! Enabling SOC
Telemetry will help the team understand which UI features are being
used and enables informed prioritization of future development.
Adjust this setting at any time via the SOC Configuration screen.
Documentation: https://docs.securityonion.net/en/2.4/telemetry.html
Enable SOC Telemetry to help improve future releases?
EOM
whiptail --title "$whiptail_title" --yesno "$message" 15 75
telemetry=$?
}
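
whiptail --yesno exits 0 when the user selects Yes, so later checks such as [[ $telemetry -eq 0 ]] mean telemetry was accepted. A standalone illustration of the same convention:

if whiptail --title "Example" --yesno "Enable SOC Telemetry?" 8 40; then
    telemetry=0   # Yes selected
else
    telemetry=1   # No selected
fi
echo "telemetry=$telemetry"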
whiptail_check_exitstatus() {
case $1 in
1)
@@ -431,6 +451,14 @@ whiptail_end_settings() {
done
fi
if [[ ! $is_airgap ]]; then
if [[ $telemetry -eq 0 ]]; then
__append_end_msg "SOC Telemetry: enabled"
else
__append_end_msg "SOC Telemetry: disabled"
fi
fi
# ADVANCED
if [[ $MANAGERADV == 'ADVANCED' ]]; then
__append_end_msg "Advanced Manager Settings:"
@@ -646,7 +674,7 @@ whiptail_install_type_dist_existing() {
Note: Heavy nodes (HEAVYNODE) are NOT recommended for most users.
EOM
install_type=$(whiptail --title "$whiptail_title" --menu "$node_msg" 19 75 6 \
install_type=$(whiptail --title "$whiptail_title" --menu "$node_msg" 19 75 7 \
"SENSOR" "Create a forward only sensor " \
"SEARCHNODE" "Add a search node with parsing " \
"FLEET" "Dedicated Elastic Fleet Node " \