Compare commits

...

443 Commits

Author SHA1 Message Date
m0duspwnens
d91dd0dd3c watch some values 2024-04-29 17:14:00 -04:00
m0duspwnens
a0388fd568 engines config for valueWatch 2024-04-29 14:02:10 -04:00
m0duspwnens
05244cfd75 watch files change engine 2024-04-24 13:19:39 -04:00
m0duspwnens
6c5e0579cf logging changes. ensure salt master has pillarWatch engine 2024-04-19 09:32:32 -04:00
m0duspwnens
1f6eb9cdc3 match keys better. go through files reverse first found is prio 2024-04-18 13:50:37 -04:00
m0duspwnens
610dd2c08d improve it 2024-04-18 11:11:14 -04:00
m0duspwnens
506bbd314d more comments, better logging 2024-04-18 10:26:10 -04:00
m0duspwnens
4caa6a10b5 watch a pillar in files and take action 2024-04-17 18:09:04 -04:00
m0duspwnens
4b79623ce3 watch pillar files for changes and do something 2024-04-16 16:51:35 -04:00
m0duspwnens
c4994a208b restart salt minion if a manager and signing policies change 2024-04-15 11:37:21 -04:00
m0duspwnens
bb983d4ba2 just broker as default process 2024-04-12 16:16:03 -04:00
m0duspwnens
c014508519 need /opt/so/conf/ca/cacerts on receiver for kafka to run 2024-04-12 13:50:25 -04:00
reyesj2
fcfbb1e857 Merge kaffytaffy (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-12 12:50:56 -04:00
reyesj2
911ee579a9 Typo (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-12 12:16:20 -04:00
reyesj2
a6ff92b099 Note to remove so-kafka-clusterid. Update soup and setup to generate needed kafka pillar values (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-12 12:11:18 -04:00
m0duspwnens
d73ba7dd3e order kafka pillar assignment 2024-04-12 11:55:26 -04:00
m0duspwnens
04ddcd5c93 add receiver managersearch and standalone to kafka.nodes pillar 2024-04-12 11:52:57 -04:00
reyesj2
af29ae1968 Merge kaffytaffy (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-12 11:43:46 -04:00
reyesj2
fbd3cff90d Make global.pipeline use GLOBALMERGED value (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-12 11:21:19 -04:00
m0duspwnens
0ed9894b7e create kratos local pillar dirs during setup 2024-04-12 11:19:46 -04:00
m0duspwnens
a54a72c269 move kafka_cluster_id to kafka:cluster_id 2024-04-12 11:19:20 -04:00
m0duspwnens
f514e5e9bb add kafka to receiver 2024-04-11 16:23:05 -04:00
reyesj2
3955587372 Use global.pipeline for redis / kafka states (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-11 16:20:09 -04:00
reyesj2
6b28dc72e8 Update annotation for global.pipeline (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-11 15:38:33 -04:00
reyesj2
ca7253a589 Run kafka-clusterid script when pillar values are missing (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-11 15:38:03 -04:00
reyesj2
af53dcda1b Remove references to kafkanode (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-11 15:32:00 -04:00
m0duspwnens
d3bd56b131 disable logstash and redis if kafka enabled 2024-04-10 14:13:27 -04:00
m0duspwnens
e9e61ea2d8 Merge remote-tracking branch 'origin/2.4/dev' into kaffytaffy 2024-04-10 13:14:13 -04:00
m0duspwnens
86b984001d annotations and enable/disable from ui 2024-04-10 10:39:06 -04:00
m0duspwnens
fa7f8104c8 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-09 11:13:02 -04:00
m0duspwnens
bd5fe43285 jinja config files 2024-04-09 11:07:53 -04:00
m0duspwnens
d38051e806 fix client and server properties formatting 2024-04-09 10:36:37 -04:00
m0duspwnens
daa5342986 items not keys in for loop 2024-04-09 10:22:05 -04:00
m0duspwnens
c48436ccbf fix dict update 2024-04-09 10:19:17 -04:00
m0duspwnens
7aa00faa6c fix var 2024-04-09 09:31:54 -04:00
m0duspwnens
6217a7b9a9 add defaults and jijafy kafka config 2024-04-09 09:27:21 -04:00
reyesj2
d67ebabc95 Remove logstash output to kafka pipeline. Add additional topics for searchnodes to ingest and add partition/offset info to event (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-08 16:38:03 -04:00
Josh Brower
b9474b9352 Merge pull request #12766 from Security-Onion-Solutions/2.4/sigma-pipeline (Ship Defender logs + more) 2024-04-08 16:35:24 -04:00
DefensiveDepth
376efab40c Ship Defender logs 2024-04-08 14:01:38 -04:00
reyesj2
65274e89d7 Add client_id to logstash pipeline. To identify which searchnode is pulling messages (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-05 15:38:00 -04:00
coreyogburn
acf29a6c9c Merge pull request #12760 from Security-Onion-Solutions/cogburn/detection-author-remap (Detection Author as a Keyword instead of Text) 2024-04-05 11:39:53 -06:00
reyesj2
721e04f793 initial logstash input from kafka over ssl (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-05 13:37:14 -04:00
Corey Ogburn
00cea6fb80 Detection Author as a Keyword instead of Text (With Quick Actions added to Detections, as many fields should be usable as possible.) 2024-04-05 11:22:47 -06:00
reyesj2
433309ef1a Generate kafka cluster id if it doesn't exist (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-05 09:35:12 -04:00
Mike Reeves
cbc95d0b30 Merge pull request #12759 from Security-Onion-Solutions/TOoSmOotH-patch-2 (Update so-log-check) 2024-04-05 08:17:50 -04:00
Mike Reeves
21f86be8ee Update so-log-check 2024-04-05 08:03:42 -04:00
Josh Brower
8e38c3763e Merge pull request #12756 from Security-Onion-Solutions/2.4/detections-defaults (Use list not string) 2024-04-04 17:00:38 -04:00
DefensiveDepth
ca807bd6bd Use list not string 2024-04-04 16:58:39 -04:00
reyesj2
735cfb4c29 Autogenerate kafka topics when a message it sent to non-existing topic (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-04 16:45:58 -04:00
reyesj2
6202090836 Merge remote-tracking branch 'origin/kaffytaffy' into reyesj2/kafka 2024-04-04 16:27:06 -04:00
reyesj2
436cbc1f06 Add kafka signing_policy for client/server auth. Add kafka-client cert on manager so manager can interact with kafka using its own cert (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-04 16:21:29 -04:00
reyesj2
40b08d737c Generate kafka keystore on changes to kafka.key (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-04 16:16:53 -04:00
m0duspwnens
4c5b42b898 restart container on server config changes 2024-04-04 15:47:01 -04:00
m0duspwnens
7a6b72ebac add so-kafka to manager for firewall 2024-04-04 15:46:11 -04:00
Josh Brower
f72cbd5f23 Merge pull request #12755 from Security-Onion-Solutions/2.4/detections-defaults (2.4/detections defaults) 2024-04-04 11:33:59 -04:00
Josh Brower
1d7e47f589 Merge pull request #12682 from Security-Onion-Solutions/2.4/soup-playbook (2.4/soup playbook) 2024-04-04 11:28:09 -04:00
DefensiveDepth
49d5fa95a2 Detections tweaks 2024-04-04 11:26:44 -04:00
Jason Ertel
204f44449a Merge pull request #12754 from Security-Onion-Solutions/jertel/ana (skip telemetry summary in airgap mode) 2024-04-04 10:39:07 -04:00
Jason Ertel
6046848ee7 skip telemetry summary in airgap mode 2024-04-04 10:25:32 -04:00
Doug Burks
b0aee238b1 Merge pull request #12753 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add dashboards specific to Elastic Agent #12746) 2024-04-04 09:35:21 -04:00
Doug Burks
d8ac3f1292 FEATURE: Add dashboards specific to Elastic Agent #12746 2024-04-04 09:30:05 -04:00
Mike Reeves
8788b34c8a Merge pull request #12752 from Security-Onion-Solutions/updates23 (Allow 2.3 to update) 2024-04-04 09:25:41 -04:00
Mike Reeves
784ec54795 2.3 updates 2024-04-04 09:24:17 -04:00
Mike Reeves
54fce4bf8f 2.3 updates 2024-04-04 09:21:16 -04:00
Mike Reeves
c4ebe25bab Attempt to fix 2.3 when main repo changes 2024-04-04 09:18:37 -04:00
Doug Burks
7b4e207329 Merge pull request #12751 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module sigma #12743) 2024-04-04 09:13:53 -04:00
Doug Burks
5ec3b834fb FEATURE: Add Events table columns for event.module sigma #12743 2024-04-04 09:11:41 -04:00
Mike Reeves
7668fa1396 Attempt to fix 2.3 when main repo changes 2024-04-04 09:03:29 -04:00
Mike Reeves
470b0e4bf6 Attempt to fix 2.3 when main repo changes 2024-04-04 08:55:13 -04:00
Mike Reeves
d3f163bf9e Attempt to fix 2.3 when main repo changes 2024-04-04 08:54:04 -04:00
Mike Reeves
4b31632dfc Attempt to fix 2.3 when main repo changes 2024-04-04 08:52:37 -04:00
DefensiveDepth
c2f7f7e3a5 Remove dup line 2024-04-04 08:52:30 -04:00
DefensiveDepth
07cb0c7d46 Merge remote-tracking branch 'origin/2.4/dev' into 2.4/soup-playbook 2024-04-04 08:51:09 -04:00
Mike Reeves
14c824143b Attempt to fix 2.3 when main repo changes 2024-04-04 08:48:44 -04:00
Jason Ertel
c75c411426 Merge pull request #12749 from Security-Onion-Solutions/jertel/ana (Clarify annotation description re: Airgap) 2024-04-04 07:53:18 -04:00
Jason Ertel
a7fab380b4 clarify telemetry annotation 2024-04-04 07:51:23 -04:00
Jason Ertel
a9517e1291 clarify telemetry annotation 2024-04-04 07:49:30 -04:00
Josh Brower
1017838cfc Merge pull request #12748 from Security-Onion-Solutions/2.4/exclude-elastalert (Exclude Elastalert EQL errors) 2024-04-04 06:57:22 -04:00
DefensiveDepth
1d221a574b Exclude Elastalert EQL errors 2024-04-04 06:48:25 -04:00
Jason Ertel
a35bfc4822 Merge pull request #12747 from Security-Onion-Solutions/jertel/ana (do not prompt about telemetry on airgap installs) 2024-04-03 21:50:38 -04:00
Jason Ertel
7c64fc8c05 do not prompt about telemetry on airgap installs 2024-04-03 18:08:42 -04:00
DefensiveDepth
f66cca96ce YARA casing 2024-04-03 16:17:29 -04:00
Mike Reeves
12da7db22c Attempt to fix 2.3 when main repo changes 2024-04-03 15:38:23 -04:00
m0duspwnens
1b8584d4bb allow manager to manager on kafka ports 2024-04-03 15:36:35 -04:00
Mike Reeves
9c59f42c16 Attempt to fix 2.3 when main repo changes 2024-04-03 15:23:09 -04:00
coreyogburn
fb5eea8284 Merge pull request #12744 from Security-Onion-Solutions/cogburn/detection-state (Update SOC Config with State File Paths) 2024-04-03 13:19:26 -06:00
Mike Reeves
9db9af27ae Attempt to fix 2.3 when main repo changes 2024-04-03 15:14:50 -04:00
Corey Ogburn
0f50a265cf Update SOC Config with State File Paths (Each detection engine is getting a state file to help manage the timer over restarts. By default, the files will go in soc's config folder inside a fingerprints folder.) 2024-04-03 13:12:18 -06:00
Jason Ertel
3e05c04aa1 Merge pull request #12731 from Security-Onion-Solutions/jertel/ana (SOC Telemetry) 2024-04-03 14:51:41 -04:00
Jason Ertel
8f8896c505 fix link 2024-04-03 14:45:39 -04:00
Jason Ertel
941a841da0 fix link 2024-04-03 14:41:57 -04:00
reyesj2
13105c4ab3 Generate certs for use with elasticfleet kafka output policy (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-03 14:34:07 -04:00
reyesj2
dc27bbb01d Set kafka heap size. To be later configured from SOC (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-03 14:30:52 -04:00
Jason Ertel
2b8a051525 fix link 2024-04-03 14:30:09 -04:00
Mike Reeves
1c7cc8dd3b Merge pull request #12741 from Security-Onion-Solutions/metrics (Change code to allow for non root) 2024-04-03 12:56:17 -04:00
Doug Burks
58d081eed1 Merge pull request #12742 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module kratos #12740) 2024-04-03 12:48:24 -04:00
Doug Burks
9078b2bad2 FEATURE: Add Events table columns for event.module kratos #12740 2024-04-03 12:46:29 -04:00
Mike Reeves
8889c974b8 Change code to allow for non root 2024-04-03 12:38:59 -04:00
Doug Burks
f615a73120 Merge pull request #12739 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add dashboard for SOC Login Failures #12738) 2024-04-03 12:01:08 -04:00
Doug Burks
66844af1c2 FEATURE: Add dashboard for SOC Login Failures #12738 2024-04-03 11:54:53 -04:00
Mike Reeves
a0b7d89eb6 Merge pull request #12734 from Security-Onion-Solutions/metrics (Add Elastic Agent Status Metrics) 2024-04-03 11:12:53 -04:00
Mike Reeves
c31e459c2b Change metrics reporting order 2024-04-03 11:06:00 -04:00
m0duspwnens
b863060df1 kafka broker and listener on 0.0.0.0 2024-04-03 11:05:24 -04:00
weslambert
d96d696c35 Merge pull request #12735 from Security-Onion-Solutions/feature/cef (Add cef) 2024-04-03 10:49:44 -04:00
Wes
105eadf111 Add cef 2024-04-03 14:40:41 +00:00
Jason Ertel
ca57c20691 suppress soup update output for cleaner console 2024-04-03 10:31:24 -04:00
Jason Ertel
c4767bfdc8 suppress soup update output for cleaner console 2024-04-03 10:28:43 -04:00
Mike Reeves
0de1f76139 add agent count to reposync 2024-04-03 10:26:59 -04:00
Jason Ertel
5f4a0fdfad suppress soup update output for cleaner console 2024-04-03 10:26:48 -04:00
m0duspwnens
18f95e867f port 9093 for kafka docker 2024-04-03 10:24:53 -04:00
m0duspwnens
ed6137a76a allow sensor and searchnode to connect to manager kafka ports 2024-04-03 10:24:10 -04:00
m0duspwnens
c3f02a698e add kafka nodes as extra hosts for the container 2024-04-03 10:23:36 -04:00
m0duspwnens
db106f8ca1 listen on 0.0.0.0 for CONTROLLER 2024-04-03 10:22:47 -04:00
Jason Ertel
c712529cf6 suppress soup update output for cleaner console 2024-04-03 10:21:35 -04:00
Mike Reeves
976ddd3982 add agentstatus to telegraf 2024-04-03 10:06:08 -04:00
Mike Reeves
64748b98ad add agentstatus to telegraf 2024-04-03 09:56:12 -04:00
Mike Reeves
3335612365 add agentstatus to telegraf 2024-04-03 09:54:16 -04:00
Mike Reeves
513273c8c3 add agentstatus to telegraf 2024-04-03 09:43:55 -04:00
Mike Reeves
0dfde3c9f2 add agentstatus to telegraf 2024-04-03 09:40:14 -04:00
Mike Reeves
0efdcfcb52 add agentstatus to telegraf 2024-04-03 09:36:02 -04:00
Josh Brower
fbdcc53fe0 Merge pull request #12732 from Security-Onion-Solutions/2.4/detections-defaults (Feature - auto-enabled Sigma rules) 2024-04-03 09:01:09 -04:00
m0duspwnens
8e47cc73a5 kafka.nodes pillar to lf 2024-04-03 08:54:17 -04:00
m0duspwnens
639bf05081 add so-manager to kafka.nodes pillar 2024-04-03 08:52:26 -04:00
Jason Ertel
c1b5ef0891 ensure so-yaml.py is updated during soup 2024-04-03 08:44:40 -04:00
DefensiveDepth
a8f25150f6 Feature - auto-enabled Sigma rules 2024-04-03 08:21:50 -04:00
Jason Ertel
1ee2a6d37b Improve wording for Airgap annotation 2024-04-03 08:21:30 -04:00
Mike Reeves
f64d9224fb Merge branch '2.4/dev' of github.com:Security-Onion-Solutions/securityonion into metrics 2024-04-02 17:22:20 -04:00
m0duspwnens
4e142e0212 put alphabetical 2024-04-02 16:47:35 -04:00
m0duspwnens
c9bf1c86c6 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 16:40:47 -04:00
reyesj2
82830c8173 Fix typos and fix error related to elasticsearch saltstate being called from logstash state. Logstash will be removed from kafkanodes in future (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 16:37:39 -04:00
reyesj2
7f5741c43b Fix kafka storage setup (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 16:36:22 -04:00
reyesj2
643d4831c1 CRLF -> LF (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 16:35:14 -04:00
reyesj2
b032eed22a Update kafka to use manager docker registry (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 16:34:06 -04:00
reyesj2
1b49c8540e Fix kafka keystore script (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 16:32:15 -04:00
m0duspwnens
f7534a0ae3 make manager download so-kafka container 2024-04-02 16:01:12 -04:00
Jason Ertel
b6187ab769 Improve wording for Airgap annotation 2024-04-02 15:54:39 -04:00
m0duspwnens
780ad9eb10 add kafka to manager nodes 2024-04-02 15:50:25 -04:00
Mike Reeves
283939b18a Gather metrics from elastic agent to influx 2024-04-02 15:36:01 -04:00
m0duspwnens
e25bc8efe4 Merge remote-tracking branch 'origin/reyesj2/kafka' into kaffytaffy 2024-04-02 13:36:47 -04:00
Jason Ertel
3b112e20e3 fix syntax error 2024-04-02 12:32:33 -04:00
reyesj2
26abe90671 Removed duplicate kafka setup (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-04-02 12:19:46 -04:00
Doug Burks
23a6c4adb6 Merge pull request #12725 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module strelka #12716) 2024-04-02 10:54:15 -04:00
Doug Burks
2f03cbf115 FEATURE: Add Events table columns for event.module strelka #12716 2024-04-02 10:42:20 -04:00
Doug Burks
a678a5a416 Merge pull request #12724 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module strelka #12716) 2024-04-02 10:15:20 -04:00
Doug Burks
b2b54ccf60 FEATURE: Add Events table columns for event.module strelka #12716 2024-04-02 10:11:16 -04:00
Doug Burks
55e71c867c Merge pull request #12723 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module playbook #12703) 2024-04-02 10:04:21 -04:00
Doug Burks
6c2437f8ef FEATURE: Add Events table columns for event.module playbook #12703 2024-04-02 09:55:56 -04:00
Doug Burks
261f2cbaf7 Merge pull request #12722 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module strelka #12716) 2024-04-02 09:43:15 -04:00
Jason Ertel
f083558666 break out into sep func 2024-04-02 09:42:43 -04:00
Doug Burks
505eeea66a Update defaults.yaml 2024-04-02 09:39:54 -04:00
Josh Brower
1001aa665d Merge pull request #12720 from Security-Onion-Solutions/2.4/detections-defaults (Add default columns) 2024-04-02 09:21:06 -04:00
DefensiveDepth
7f488422b0 Add default columns 2024-04-02 09:13:27 -04:00
Jason Ertel
f17d8d3369 analytics 2024-04-01 10:59:44 -04:00
Jason Ertel
ff777560ac limit col size 2024-04-01 10:35:15 -04:00
Jason Ertel
2c68fd6311 limit col size 2024-04-01 10:32:54 -04:00
Jason Ertel
c1bf710e46 limit col size 2024-04-01 10:32:25 -04:00
Jason Ertel
9d2b40f366 Merge branch '2.4/dev' into jertel/ana 2024-04-01 09:50:38 -04:00
Jason Ertel
3aea2dec85 analytics 2024-04-01 09:50:18 -04:00
coreyogburn
65f6b7022c Merge pull request #12702 from Security-Onion-Solutions/cogburn/yaml-fix (Correct YAML) 2024-03-29 15:59:34 -06:00
Corey Ogburn
e5a3a54aea Proper YAML 2024-03-29 14:31:43 -06:00
Doug Burks
be88dbe181 Merge pull request #12700 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add individual dashboards for Zeek SSL and Suricata SSL logs…) 2024-03-29 15:41:14 -04:00
Doug Burks
b64ed5535e FEATURE: Add individual dashboards for Zeek SSL and Suricata SSL logs #12699 2024-03-29 15:29:38 -04:00
Doug Burks
5be56703e9 Merge pull request #12698 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for zeek ssl and suricata ssl #12697) 2024-03-29 14:46:39 -04:00
Doug Burks
0c7ba62867 FEATURE: Add Events table columns for zeek ssl and suricata ssl #12697 2024-03-29 14:44:29 -04:00
coreyogburn
d9d851040c Merge pull request #12696 from Security-Onion-Solutions/cogburn/manual-sync (New Settings for Manual Sync in Detections) 2024-03-29 12:43:08 -06:00
Corey Ogburn
e747a4e3fe New Settings for Manual Sync in Detections 2024-03-29 12:25:03 -06:00
Doug Burks
cc2164221c Merge pull request #12695 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add process.command_line to Process Info and Process Ancestry dashboards #12694) 2024-03-29 13:04:09 -04:00
Doug Burks
102c3271d1 FEATURE: Add process.command_line to Process Info and Process Ancestry dashboards #12694 2024-03-29 12:04:47 -04:00
DefensiveDepth
32b8649c77 Add more error checking 2024-03-28 14:31:02 -04:00
DefensiveDepth
9c5ba92589 Check if container is running first 2024-03-28 13:23:40 -04:00
DefensiveDepth
d2c9e0ea4a Cleanup 2024-03-28 13:04:48 -04:00
Jason Ertel
2928b71616 Merge pull request #12683 from Security-Onion-Solutions/jertel/lc (disregard errors in removed applications that occurred before th…) 2024-03-28 09:48:26 -04:00
Jason Ertel
216b8c01bf disregard errors that in removed applications that occurred before the upgrade 2024-03-28 09:31:39 -04:00
DefensiveDepth
ce0c9f846d Remove containers from so-status 2024-03-27 16:13:52 -04:00
DefensiveDepth
ba262ee01a Check to see if Playbook is enabled 2024-03-27 15:43:25 -04:00
DefensiveDepth
b571eeb8e6 Initial cut of .70 soup changes 2024-03-27 14:58:16 -04:00
Mike Reeves
7fe377f899 Merge pull request #12674 from Security-Onion-Solutions/ipv6fix (Fix Input Validation to allow for IPv6) 2024-03-27 09:48:01 -04:00
Mike Reeves
d57f773072 Fix regex to allow ipv6 in bpfs 2024-03-27 09:36:42 -04:00
Doug Burks
389357ad2b Merge pull request #12667 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events table columns for event.module elastic_agent #12666) 2024-03-26 16:11:46 -04:00
Doug Burks
e2caf4668e FEATURE: Add Events table columns for event.module elastic_agent #12666 2024-03-26 16:08:41 -04:00
Josh Brower
63a58efba4 Merge pull request #12656 from Security-Onion-Solutions/2.4/detections-fixes (Add bindings for sigma repos) 2024-03-26 09:33:38 -04:00
DefensiveDepth
bbcd3116f7 Fixes 2024-03-26 09:31:46 -04:00
Josh Brower
9c12aa261e Merge pull request #12660 from Security-Onion-Solutions/kilo (Initial cut to remove Playbook and deps) 2024-03-26 08:31:11 -04:00
DefensiveDepth
cc0f4847ba Casing and validation 2024-03-26 08:10:57 -04:00
Doug Burks
923b80ba60 Merge pull request #12663 from Security-Onion-Solutions/feature/improve-soc-dashboards (FEATURE: Include additional groupby fields in Dashboards relating to sankey diagrams #12657) 2024-03-26 07:52:54 -04:00
DefensiveDepth
7c4ea8a58e Add Detections SOC Config 2024-03-26 07:39:39 -04:00
Doug Burks
20bd9a9701 FEATURE: Include additional groupby fields in Dashboards relating to sankey diagrams #12657 2024-03-26 07:39:24 -04:00
Josh Brower
f0cb30a649 Merge pull request #12659 from Security-Onion-Solutions/2.4/remove-playbook (Remove Playbook ref) 2024-03-25 21:12:22 -04:00
DefensiveDepth
94ee761207 Remove Playbook ref 2024-03-25 21:11:47 -04:00
Josh Brower
0a5dc411d0 Merge pull request #12658 from Security-Onion-Solutions/2.4/remove-playbook (Initial cut to remove Playbook and deps) 2024-03-25 19:45:51 -04:00
DefensiveDepth
d7ecad4333 Initial cut to remove Playbook and deps 2024-03-25 19:42:31 -04:00
DefensiveDepth
49fa800b2b Add bindings for sigma repos 2024-03-25 14:45:50 -04:00
reyesj2
446f1ffdf5 merge 2.4/dev (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-03-25 13:55:48 -04:00
weslambert
57553bc1e5 Merge pull request #12652 from Security-Onion-Solutions/feature/pfsense_suricata (FEATURE: pfSense Suricata logs) 2024-03-25 10:10:13 -04:00
weslambert
df058b3f4a Merge branch '2.4/dev' into feature/pfsense_suricata 2024-03-25 10:08:03 -04:00
Wes
5e21da443f Minor verbiage updates 2024-03-25 13:58:32 +00:00
Josh Patterson
7898277a9b Merge pull request #12651 from Security-Onion-Solutions/issue/12637 (Allow for additional af-packet tuning options for Suricata) 2024-03-25 09:37:52 -04:00
m0duspwnens
029d8a0e8f handle yes/no on checksum-checks 2024-03-25 09:30:41 -04:00
Josh Brower
b8d33ab983 Merge pull request #12639 from Security-Onion-Solutions/2.4/enable-detections (Enable Detections) 2024-03-25 09:30:01 -04:00
weslambert
e124791d5d Merge pull request #12650 from Security-Onion-Solutions/fix/soc_template (FIX: http.response.status_code) 2024-03-25 09:29:19 -04:00
coreyogburn
8ae30d0a77 Merge pull request #12640 from Security-Onion-Solutions/cogburn/sigma-repo-support (Update ElastAlert Config with Default Repos) 2024-03-22 14:24:18 -06:00
m0duspwnens
81f3d69eb9 remove mmap-locked. 2024-03-22 15:55:59 -04:00
Corey Ogburn
237946e916 Specify Folder in Rule Repo 2024-03-22 13:52:20 -06:00
Corey Ogburn
3d04d37030 Update ElastAlert Config with Default Repos 2024-03-22 13:52:20 -06:00
m0duspwnens
bb0da2a5c5 add additional suricata af-packet config items 2024-03-22 14:34:14 -04:00
Doug Burks
d6ce3851ec Merge pull request #12644 from Security-Onion-Solutions/dougburks-patch-1 (FIX: Specify that static IP address is recommended #12643) 2024-03-22 13:47:33 -04:00
Doug Burks
9c6f3f4808 FIX: Specify that static IP address is recommended #12643 2024-03-22 13:41:44 -04:00
Doug Burks
1ab56033a2 Merge pull request #12642 from Security-Onion-Solutions/fix/add-event.dataset (FEATURE: Add event.dataset to all Events column layouts #12641) 2024-03-22 13:22:57 -04:00
Doug Burks
a78a304d4f FEATURE: Add event.dataset to all Events column layouts #12641 2024-03-22 13:19:31 -04:00
DefensiveDepth
5ca9ec4b17 Enable Detections 2024-03-22 10:12:26 -04:00
weslambert
4e1543b6a8 Get only code 2024-03-22 09:56:21 -04:00
Jason Ertel
0e7d08b957 Merge pull request #12638 from Security-Onion-Solutions/jertel/logs (disregard benign telegraf error) 2024-03-22 09:53:52 -04:00
Jason Ertel
f889a089bf disregard benign telegraf error 2024-03-22 09:48:27 -04:00
Doug Burks
2b019ec8fe Merge pull request #12634 from Security-Onion-Solutions/dougburks-patch-1 (FEATURE: Add Events column layout for event.module system #12628) 2024-03-22 05:52:23 -04:00
Wes
5934829e0d Include pfsense config 2024-03-21 20:08:33 +00:00
Wes
486a633dfe Add pfsense Suricata config 2024-03-21 20:07:59 +00:00
weslambert
77ac342786 Merge pull request #12632 from Security-Onion-Solutions/fix/remove_temp_yara (Remove temp YARA) 2024-03-21 10:11:32 -04:00
weslambert
8429a364dc Remove Strelka rules watch 2024-03-21 10:09:36 -04:00
weslambert
1568f57096 Remove Strelka config 2024-03-21 10:07:27 -04:00
weslambert
f431e9ae08 Remove Strelka config 2024-03-21 10:06:25 -04:00
Josh Brower
4b03d088c3 Merge pull request #12611 from Security-Onion-Solutions/2.4/enable-detections (Change Detections defaults) 2024-03-21 08:04:03 -04:00
DefensiveDepth
4a33234c34 Default update to 24 hours 2024-03-21 07:26:19 -04:00
Doug Burks
778997bed4 FEATURE: Add Events column layout for event.module system #12628 2024-03-20 17:07:37 -04:00
Doug Burks
655d3e349c Merge pull request #12627 from Security-Onion-Solutions/dougburks-patch-1 (FIX: Annotations for BPF and Suricata PCAP #12626) 2024-03-20 16:11:33 -04:00
Doug Burks
f3b921342e FIX: Annotations for BPF and Suricata PCAP #12626 2024-03-20 16:06:25 -04:00
Doug Burks
fff4d20e39 Update soc_suricata.yaml 2024-03-20 16:03:45 -04:00
Doug Burks
d2fb067110 FIX: Annotations for BPF and Suricata PCAP #12626 2024-03-20 15:57:32 -04:00
Doug Burks
876690a9f6 FIX: Annotations for BPF and Suricata PCAP #12626 2024-03-20 15:49:46 -04:00
Jason Ertel
4c2f2759d4 Merge pull request #12601 from Security-Onion-Solutions/jertel/suripcap (reschedule close/lock jobs) 2024-03-20 12:11:15 -04:00
Mike Reeves
dd603934bc Merge pull request #12619 from Security-Onion-Solutions/TOoSmOotH-patch-1 (Update VERSION) 2024-03-20 11:06:05 -04:00
Mike Reeves
d4d17e1835 Update VERSION 2024-03-20 11:04:40 -04:00
Mike Reeves
7779a95341 Merge pull request #12617 from Security-Onion-Solutions/2.4/main (fix merges) 2024-03-20 10:53:09 -04:00
Mike Reeves
68ea2836dd Merge pull request #12615 from Security-Onion-Solutions/2.4.60 (2.4.260) 2024-03-20 10:43:08 -04:00
Mike Reeves
bb3bbd749c 2.4.260 2024-03-20 10:20:04 -04:00
DefensiveDepth
d84af803a6 Enable Autoupdates 2024-03-20 08:48:31 -04:00
DefensiveDepth
020eb47026 Change Detections defaults 2024-03-19 13:53:37 -04:00
Wes
c6df805556 Add SOC template 2024-03-18 14:53:36 +00:00
Jason Ertel
47d447eadd Merge branch '2.4/dev' into jertel/suripcap 2024-03-18 07:34:43 -04:00
Jason Ertel
af5b3feb96 re-schedule lock jobs 2024-03-18 07:34:18 -04:00
Mike Reeves
4237210f0b Merge pull request #12587 from Security-Onion-Solutions/TOoSmOotH-patch-10 (Update soc_suricata.yaml) 2024-03-14 11:37:35 -04:00
Mike Reeves
fd835f6394 Update soc_suricata.yaml 2024-03-14 11:36:45 -04:00
Mike Reeves
284e0d8435 Update soc_suricata.yaml 2024-03-14 11:33:47 -04:00
Jason Ertel
09bff01d79 Merge pull request #12584 from Security-Onion-Solutions/jertel/suripcap (handle airgap when detections not enabled) 2024-03-13 21:35:06 -04:00
Jason Ertel
844cfe55cd handle airgap when detections not enabled 2024-03-13 20:52:17 -04:00
Jason Ertel
927fe9039d handle airgap when detections not enabled 2024-03-13 20:50:03 -04:00
Jason Ertel
cc1356c823 Merge pull request #12581 from Security-Onion-Solutions/jertel/suripcap (removed unused property) 2024-03-13 14:20:22 -04:00
Jason Ertel
275a678fa1 removed unused property 2024-03-13 13:49:44 -04:00
Josh Patterson
3d33c99f53 Merge pull request #12579 from Security-Onion-Solutions/m0duspwnens-patch-1-dontshowchanges (Update init.sls) 2024-03-13 11:26:20 -04:00
Josh Patterson
b9702d02db Update init.sls 2024-03-13 11:24:26 -04:00
Josh Patterson
292ab0e378 Merge pull request #12577 from Security-Onion-Solutions/jppsocerino (remove modules if detections disabled) 2024-03-13 10:30:00 -04:00
m0duspwnens
1a829190ac remove modules if detections disabled 2024-03-13 09:46:44 -04:00
Josh Brower
dc3eace718 Merge pull request #12576 from Security-Onion-Solutions/2.4/regenpackages (Gen packages post-SOUP) 2024-03-13 07:53:08 -04:00
DefensiveDepth
06013e2c6f Gen packages post-SOUP 2024-03-13 07:23:43 -04:00
Mike Reeves
603483148d Merge pull request #12567 from Security-Onion-Solutions/TOoSmOotH-patch-9 (Update so-saltstack-update to use 2.4/main) 2024-03-12 10:20:41 -04:00
Mike Reeves
3e0fb3f8bb Update so-saltstack-update 2024-03-12 10:18:27 -04:00
Mike Reeves
5deebe07d8 Merge pull request #12564 from Security-Onion-Solutions/TOoSmOotH-patch-8 (Update soc_suricata.yaml) 2024-03-12 09:24:56 -04:00
Josh Brower
197791f8ed Merge pull request #12565 from Security-Onion-Solutions/2.4/detections-defaults (2.4/detections defaults) 2024-03-12 06:17:30 -04:00
Mike Reeves
72acb11925 Update soc_suricata.yaml 2024-03-11 19:04:51 -04:00
DefensiveDepth
0f41f07dc9 Merge remote-tracking branch 'origin/2.4/dev' into 2.4/detections-defaults 2024-03-11 16:41:26 -04:00
Josh Brower
47ab1f5b95 Merge pull request #12563 from Security-Onion-Solutions/kilo (Add yara update back) 2024-03-11 16:39:31 -04:00
Josh Patterson
b7f058a8ca Merge pull request #12561 from Security-Onion-Solutions/jppnocap (transitional pcap) 2024-03-11 15:57:16 -04:00
DefensiveDepth
61a183b7fc Add regex defaults 2024-03-11 15:55:39 -04:00
m0duspwnens
ba32b3e6e9 fix bpf for transition 2024-03-11 14:07:45 -04:00
Jason Ertel
8c54a19698 Merge pull request #12560 from Security-Onion-Solutions/jertel/email (auto-convert email addresses to lowercase during setup) 2024-03-11 14:06:52 -04:00
Jason Ertel
cd28c00d67 auto-convert email addresses to lowercase during setup 2024-03-11 13:47:31 -04:00
Jason Ertel
b5d8df7fb2 auto-convert email addresses to lowercase during setup 2024-03-11 13:45:57 -04:00
m0duspwnens
907cf9f992 transition pcap 2024-03-11 12:20:28 -04:00
Josh Patterson
4355d5b659 Merge pull request #12544 from Security-Onion-Solutions/jertel/status (pcap improvements) 2024-03-11 10:29:33 -04:00
Jorge Reyes
2ca96c7f4c Merge pull request #12555 from Security-Onion-Solutions/reyesj2-patch-osc (Create local salt directory) 2024-03-11 09:40:20 -04:00
reyesj2
a8403c63c7 Create local salt dir for stig (Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>) 2024-03-11 09:35:54 -04:00
weslambert
34d5954e16 Fix indent 2024-03-11 09:12:05 -04:00
Jorge Reyes
f4725bf6d4 Merge pull request #12553 from Security-Onion-Solutions/reyesj2-patch-osc (Run scan against default scap security guide so that resulting score is accurate) 2024-03-11 07:52:07 -04:00
Doug Burks
b622cf8d23 Merge pull request #12545 from Security-Onion-Solutions/dougburks-patch-1 (Update soc_pcap.yaml) 2024-03-08 16:45:29 -05:00
Doug Burks
a892352b61 Update soc_pcap.yaml 2024-03-08 16:43:29 -05:00
Jason Ertel
a55e04e64a pcap improvements 2024-03-08 15:48:53 -05:00
Josh Brower
4a9e8265ce Merge remote-tracking branch 'origin/2.4/dev' into kilo 2024-03-08 14:48:04 -05:00
coreyogburn
68ba9a89cf Merge pull request #12542 from Security-Onion-Solutions/cogburn/yara-license (Updated RulesRepo for New Strelka Structure) 2024-03-08 11:42:49 -07:00
Corey Ogburn
6f05c3976b Updated RulesRepo for New Strelka Structure 2024-03-08 11:29:46 -07:00
Doug Burks
b6b6fc45e7 Merge pull request #12527 from Security-Onion-Solutions/TOoSmOotH-patch-7 (Fix Space Free for Steno) 2024-03-08 12:40:15 -05:00
Doug Burks
e1b27a930e Merge pull request #12540 from Security-Onion-Solutions/dougburks-patch-1 (FIX: Update SOC annotations for Stenographer PCAP #12539) 2024-03-08 12:32:15 -05:00
Doug Burks
6680e023e4 Update soc_pcap.yaml 2024-03-08 12:16:59 -05:00
Wes
e8ae609012 Add Strelka rules watch back 2024-03-08 16:27:17 +00:00
Wes
fc66a54902 Add Strelka download and update scripts back 2024-03-08 16:26:14 +00:00
Wes
4e32935991 Add Strelka config back 2024-03-08 16:24:37 +00:00
Josh Patterson
7ec887a327 Merge pull request #12537 from Security-Onion-Solutions/issue/12535
allow managersearch to receiver redis and 5644
2024-03-08 10:13:27 -05:00
m0duspwnens
3eb6fe2df9 allow managersearch to receiver redis and 5644 2024-03-08 09:52:12 -05:00
Jason Ertel
6d06aa8ed6 Merge pull request #12526 from Security-Onion-Solutions/jertel/status
unswap files
2024-03-07 14:49:17 -05:00
Mike Reeves
06257b9c4a Update so-minion 2024-03-07 14:32:46 -05:00
Jason Ertel
40574982e4 unswap files 2024-03-07 14:25:43 -05:00
Jason Ertel
e2567dcf8d Merge pull request #12521 from Security-Onion-Solutions/jertel/status
gracefully handle status check failure on ubuntu
2024-03-07 13:29:48 -05:00
Jason Ertel
fffef9b621 gracefully handle status check failure on ubuntu 2024-03-07 12:31:51 -05:00
weslambert
1633527695 Merge pull request #12519 from Security-Onion-Solutions/fix/error_message_system_syslog
Add error.message mapping for system.syslog
2024-03-07 10:47:33 -05:00
Wes
005930f7fd Add error.message mapping for system.syslog 2024-03-07 15:41:23 +00:00
Mike Reeves
b5f1733e97 Merge pull request #12513 from Security-Onion-Solutions/newsuripcap
Change Factoring for so-minion pcap disk space
2024-03-07 10:14:34 -05:00
m0duspwnens
70f3ce0536 change how maxfiles is calculated 2024-03-06 17:32:06 -05:00
reyesj2
17a75d5bd2 Run stig post remediate scan against default ol9 scap-security-guide.
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-03-06 17:19:01 -05:00
m0duspwnens
583227290f fix max-files calc 2024-03-06 15:18:22 -05:00
m0duspwnens
cf232534ca move suricata.pcap to suricata.config.outputs.pcap-log 2024-03-06 14:42:07 -05:00
Mike Reeves
7f1e786e3d Consolidate PCAP settings 2024-03-06 12:56:09 -05:00
Mike Reeves
9a413a2e31 Fix location of repo 2024-03-06 12:42:22 -05:00
Jason Ertel
8f36a8a4b6 Merge pull request #12514 from Security-Onion-Solutions/jertel/annotations
detections annotations
2024-03-06 11:10:21 -05:00
Jason Ertel
1cbac11fae detections annotations 2024-03-06 11:08:03 -05:00
Mike Reeves
ad12093429 Fix percent calc 2024-03-06 11:05:06 -05:00
Jason Ertel
167aff24f6 detections annotations 2024-03-06 11:03:52 -05:00
Josh Brower
9e671621db Merge pull request #12510 from Security-Onion-Solutions/2.4/excludedetections
Add Exclusion toggle
2024-03-06 10:56:29 -05:00
Mike Reeves
4dfa1a5626 Move Suricata around 2024-03-06 10:35:10 -05:00
Mike Reeves
f836d6a61d Update so-minion 2024-03-06 10:06:17 -05:00
Mike Reeves
a63fca727c Update soc_suricata.yaml 2024-03-06 10:02:06 -05:00
Mike Reeves
f58c104d89 Update so-minion 2024-03-06 09:51:56 -05:00
Jason Ertel
5acefb5d18 Merge pull request #12511 from Security-Onion-Solutions/jertel/annotations
PCAP annotations
2024-03-06 08:40:24 -05:00
Jason Ertel
0f12297f50 add new pcap annotations 2024-03-06 08:19:42 -05:00
Jason Ertel
12653eec8c add new pcap annotations 2024-03-06 08:14:33 -05:00
Josh Brower
1b47537a3f Add Exclusion toggle 2024-03-06 07:16:50 -05:00
Josh Patterson
73b45cfaf8 Merge pull request #12508 from Security-Onion-Solutions/jppsensoroni
fix pcapspace function
2024-03-05 17:53:28 -05:00
Josh Patterson
eaef076eba Update so-minion 2024-03-05 17:52:24 -05:00
Josh Patterson
ac9db8a392 Merge branch '2.4/dev' into jppsensoroni 2024-03-05 17:51:32 -05:00
m0duspwnens
5687fdcf57 fix pcapspace function 2024-03-05 17:46:43 -05:00
Jason Ertel
d5b08142a0 Merge pull request #12507 from Security-Onion-Solutions/jertel/annotations
fix oinkcodes with leading zeros
2024-03-05 16:44:56 -05:00
Jason Ertel
4b5f00cef4 fix oinkcodes with leading zeros 2024-03-05 16:42:20 -05:00
weslambert
185a160df0 Merge pull request #12500 from Security-Onion-Solutions/feature/additional_integrations_5
Additional Integrations #5
2024-03-05 16:12:05 -05:00
Mike Reeves
b9707fc8ea Merge pull request #12502 from Security-Onion-Solutions/TOoSmOotH-patch-5
Update so-minion
2024-03-05 15:10:02 -05:00
Mike Reeves
a686d46322 Update so-minion 2024-03-05 15:09:02 -05:00
Mike Reeves
6eb608c3f5 Update so-minion 2024-03-05 15:05:03 -05:00
weslambert
b9ebe6c40b Update VERSION 2024-03-05 12:58:34 -05:00
Josh Patterson
781f96a74e Merge pull request #12497 from Security-Onion-Solutions/jppsensoroni
fix sensoroni for non sensor
2024-03-05 10:36:12 -05:00
m0duspwnens
c0d19e11b9 fix } placement 2024-03-05 10:07:32 -05:00
m0duspwnens
1a58aa61a0 only import pcap and suricata if sensor 2024-03-05 09:54:40 -05:00
m0duspwnens
08f2b8251b add GLOBALS.is_sensor 2024-03-05 09:53:35 -05:00
weslambert
bed42208b1 Add journald integration 2024-03-05 09:49:55 -05:00
weslambert
2a7e5b096f Change version for foxtrot 2024-03-05 09:48:59 -05:00
weslambert
d8e8933ea0 Add AWS Security Hub template 2024-03-05 09:25:41 -05:00
weslambert
d85ac39e28 Add AWS Inspector template 2024-03-05 09:23:17 -05:00
weslambert
1514f1291e Add AWS GuardDuty template 2024-03-05 09:21:48 -05:00
weslambert
b64d61065a Add AWS Cloudfront template 2024-03-05 09:19:43 -05:00
Mike Reeves
58d222284e Merge pull request #12271 from Security-Onion-Solutions/suripcap
Suricata PCAP
2024-03-04 17:27:38 -05:00
Mike Reeves
fe238755e9 Fix df 2024-03-04 16:52:51 -05:00
Mike Reeves
018e099111 Modify setup 2024-03-04 14:53:15 -05:00
Josh Brower
9fd1653914 Merge pull request #12487 from Security-Onion-Solutions/2.4/elastic-agent-fim
Fix FIM
2024-03-04 07:41:36 -05:00
Josh Brower
f28f269bb1 Fix FIM 2024-03-04 07:38:32 -05:00
Josh Brower
f3dce66f03 Merge pull request #12482 from Security-Onion-Solutions/2.4/sigma-pipeline
2.4/sigma pipeline
2024-03-01 15:29:13 -05:00
Josh Brower
d832158cc5 Drop Hashes field 2024-03-01 15:26:02 -05:00
Josh Brower
b017157d21 Add antivirus mapping 2024-03-01 14:04:56 -05:00
Jorge Reyes
d911b7bfc4 Merge pull request #12469 from Security-Onion-Solutions/reyesj2-patch-4
FIX: EA installers not downloadable from SOC & fix logging
2024-02-29 16:21:44 -05:00
reyesj2
53761d4dba FIX: EA installers not downloadable from SOC + fix stg logging
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2024-02-29 16:15:26 -05:00
Mike Reeves
1fe8f3d9e4 Merge pull request #12405 from Security-Onion-Solutions/repochange
Manage the repo files
2024-02-29 14:01:48 -05:00
Josh Brower
aa3b917368 Merge pull request #12456 from Security-Onion-Solutions/feature/detections-airgap
Feature/detections airgap
2024-02-28 09:41:13 -05:00
Josh Brower
e2dd0f8cf1 Only update rule files if AG 2024-02-28 09:39:23 -05:00
weslambert
d1e55d5ab7 Merge pull request #12450 from Security-Onion-Solutions/fix/suricata_max_age
Roll Suricata logs daily to prevent alerts from being deleted when not meeting size threshold
2024-02-27 17:28:07 -05:00
weslambert
df3943b465 Daily rollover 2024-02-27 17:24:27 -05:00
Josh Patterson
d5fc6ddd2c Merge pull request #12449 from Security-Onion-Solutions/issue/12391
Issue/12391
2024-02-27 15:38:33 -05:00
m0duspwnens
fcc0f9d14f redo classifications 2024-02-27 13:20:58 -05:00
Josh Brower
59af547838 Fix download location 2024-02-27 09:49:54 -05:00
Josh Brower
a817bae1e5 Merge pull request #12437 from Security-Onion-Solutions/feature/detections-airgap
Airgap Support - Detections module
2024-02-26 16:47:26 -05:00
Josh Brower
c6baa4be1b Airgap Support - Detections module 2024-02-26 16:19:32 -05:00
m0duspwnens
8b7f7933bd suricata container watch classification.config 2024-02-26 15:29:13 -05:00
m0duspwnens
466dac30bb soup for classifications 2024-02-26 12:15:17 -05:00
Doug Burks
52580fb8c4 Merge pull request #12434 from Security-Onion-Solutions/feature/improve-endpoint-columns
Add multiple endpoint features
2024-02-26 12:05:30 -05:00
weslambert
acf7dbdabe Merge pull request #12432 from Security-Onion-Solutions/fix/endpoint_diag_template
Update pattern for endpoint diagnostic template
2024-02-26 12:01:29 -05:00
weslambert
1d099f97d2 Update pattern for endpoint diagnostic template 2024-02-26 11:27:56 -05:00
Doug Burks
f8424f3dad Update defaults.yaml 2024-02-26 11:22:09 -05:00
m0duspwnens
9a7e2153ee add classification.config 2024-02-26 11:01:53 -05:00
Doug Burks
c8a95a8706 FEATURE: Add new endpoint dashboards #12428 2024-02-26 09:59:07 -05:00
Doug Burks
4df21148fc FEATURE: Add default columns for endpoint.events datasets #12425 2024-02-26 09:40:51 -05:00
Doug Burks
ca249312ba FEATURE: Add new SOC action for Process Info #12421 2024-02-26 09:38:14 -05:00
Josh Brower
66b815d4b2 Merge pull request #12431 from Security-Onion-Solutions/feature/brower-detections
Add Detection AutoUpdate config
2024-02-26 08:43:33 -05:00
Josh Brower
a6bb7216f9 Add Detection AutoUpdate config 2024-02-26 08:18:42 -05:00
Josh Brower
77cb5748f6 Merge pull request #12430 from Security-Onion-Solutions/feature/sigma-pipeline
Feature/sigma pipeline
2024-02-26 08:00:00 -05:00
Doug Burks
d6cb8ab928 update events_x_process in defaults.yaml 2024-02-23 17:09:40 -05:00
Doug Burks
daf96d7934 fix new eventFields in merged.map.jinja 2024-02-23 17:07:48 -05:00
Doug Burks
58f4fb87d0 fix new eventFields in soc_soc.yaml 2024-02-23 17:06:29 -05:00
Doug Burks
b7ef1e8af1 add more endpoint.events.x fields to soc_soc.yaml 2024-02-23 15:38:53 -05:00
Doug Burks
7da0ccf5a6 add more endpoint.events.x entries to merged.map.jinja 2024-02-23 15:35:53 -05:00
Doug Burks
65cdc1dc86 Merge pull request #12423 from Security-Onion-Solutions/jppfiec
convert _x_ to . for soc ui to config
2024-02-23 15:22:16 -05:00
m0duspwnens
573d565976 convert _x_ to . for soc ui to config 2024-02-23 15:03:44 -05:00
Doug Burks
b8baca417b add endpoint_x_events_x_process to defaults.yaml 2024-02-23 14:03:04 -05:00
Josh Brower
d04aa06455 Fix source.ip 2024-02-22 14:01:02 -05:00
Mike Reeves
1824d7b36d Merge pull request #12416 from Security-Onion-Solutions/TOoSmOotH-patch-2
Fix Loss Calculation for Stenographer
2024-02-22 12:52:36 -05:00
Mike Reeves
e7914fc5a1 Update stenoloss.sh 2024-02-22 12:49:06 -05:00
Mike Reeves
759b2ff59e Manage the repos 2024-02-22 10:03:51 -05:00
Josh Brower
c886e72793 Imphash mappings 2024-02-22 08:59:33 -05:00
Josh Brower
0a9022ba6a Add hash mappings 2024-02-21 17:07:08 -05:00
Josh Patterson
d2f7946377 Merge pull request #12411 from Security-Onion-Solutions/issue/12382
nest under policy
2024-02-21 16:28:04 -05:00
coreyogburn
eb3432fb8b Merge pull request #12412 from Security-Onion-Solutions/kilo
Initial Support for Detections Module
2024-02-21 14:08:11 -07:00
Josh Brower
927ea0c9ec Update VERSION 2024-02-21 15:56:12 -05:00
m0duspwnens
162785575c nest under policy 2024-02-21 15:28:24 -05:00
Jason Ertel
152e7937db Merge pull request #12408 from Security-Onion-Solutions/jertel/24template
add missing template
2024-02-21 13:24:34 -05:00
Jason Ertel
25570e6ec2 add missing template 2024-02-21 13:18:39 -05:00
Josh Brower
1952f0f232 Merge remote-tracking branch 'origin/2.4/dev' into kilo 2024-02-21 13:11:49 -05:00
Mike Reeves
9ca0f586ae Manage the repos 2024-02-21 11:45:02 -05:00
Jason Ertel
29778438f0 Merge pull request #12396 from Security-Onion-Solutions/jertel/glm
add lock threads
2024-02-21 07:18:05 -05:00
Jason Ertel
6c6a362fcc add lock threads 2024-02-20 19:14:18 -05:00
Mike Reeves
89010dacab Merge pull request #12348 from Security-Onion-Solutions/TOoSmOotH-patch-4
Update soup
2024-02-20 12:10:09 -05:00
Jason Ertel
78d41c5342 Merge pull request #12386 from Security-Onion-Solutions/jertel/corricon
replace correlate icon to avoid confusion with searcheng.in
2024-02-20 10:39:38 -05:00
Jason Ertel
4b314c8715 replace correlate icon to avoid confusion with searcheng.in 2024-02-20 10:30:09 -05:00
Mike Reeves
ed0773604c Merge pull request #12385 from Security-Onion-Solutions/TOoSmOotH-patch-1
Update VERSION
2024-02-20 10:14:45 -05:00
Mike Reeves
07fcfab7ec Update VERSION 2024-02-20 10:14:11 -05:00
Josh Brower
ffb3cc87b7 Default ruleset; Descriptions 2024-02-16 11:55:10 -05:00
Josh Brower
e4dcb4a8dd Merge remote-tracking branch 'origin/cogburn/detection_playbooks' into kilo 2024-02-15 17:50:37 -05:00
Corey Ogburn
c64f37ab67 sigmaRulePackages is now a string array 2024-02-15 10:34:07 -07:00
Josh Brower
686304f24a Merge remote-tracking branch 'origin/2.4/dev' into kilo 2024-02-15 09:47:51 -05:00
Corey Ogburn
a5db9f87dd Merge branch 'kilo' into cogburn/detection_playbooks 2024-02-13 14:08:44 -07:00
Corey Ogburn
f321e734eb Added so-detection mapping in elasticsearch 2024-02-13 14:05:27 -07:00
Corey Ogburn
8800b7e878 WIP: Detections Changes
Removed some strelka/yara rules from salt.

Removed yara scripts for downloading and updating rules. This will be managed by SOC.

Added a new compile_yara.py script.

Added the strelka repos folder.
2024-02-13 14:05:27 -07:00
Corey Ogburn
031ee078c5 socsigmarepo
Need write permissions on the /opt/so/rules dir so I can clone the sigma repo there.
2024-02-13 14:05:27 -07:00
Corey Ogburn
c933627a71 Merge branch 'kilo' of github.com:security-onion-solutions/securityonion into kilo 2024-02-13 12:53:29 -07:00
Corey Ogburn
0d297274c8 DetectionComment Mapping Defined 2024-02-13 12:53:18 -07:00
Josh Brower
0c6c6ba2d5 Various UI tweaks 2024-02-13 13:38:43 -05:00
Josh Brower
ea80469c2d Detection Default queries 2024-02-12 19:39:55 -05:00
Josh Brower
5102269440 Update defaults 2024-02-12 16:44:54 -05:00
Mike Reeves
5a4e11b2f8 Update soup
Remove a function that isn't used any more
2024-02-12 16:09:47 -05:00
Corey Ogburn
64f6d0fba9 Updated Detection's ES Mappings
Detection's now have a License field and the Comment model is defined now.
2024-02-09 14:20:07 -07:00
Corey Ogburn
29174566f3 WIP: Updated Detection Mappings, Changed Engine to Language
Detection mappings updated to include the removal of Note and the addition of Tags, Ruleset, and Language.

SOC defaults updated to use language based queries rather than engine and show the language column instead of the engine column in results.
2024-02-08 09:44:56 -07:00
Josh Brower
81a3e95914 Fixup sigma pipelines 2024-02-07 16:42:16 -05:00
Josh Brower
7e3187c0b8 Fixup sigma pipelines 2024-02-07 15:35:31 -05:00
Josh Brower
b7b501d289 Add Sigma pipelines 2024-02-07 15:02:52 -05:00
Josh Brower
378c99ae88 Fix bindings 2024-02-02 18:27:49 -05:00
Corey Ogburn
8f81c9eb68 Updating config for Detection(s) 2024-02-02 11:49:58 -07:00
Josh Brower
fe196b5661 Add SOC Config for Detections 2024-02-01 12:22:50 -05:00
Josh Brower
49b5788ac1 add bindings 2024-02-01 07:21:49 -05:00
Josh Brower
881d6b313e Update VERSION - kilo 2024-01-31 17:04:11 -05:00
Josh Brower
db057b4dfa Merge pull request #12296 from Security-Onion-Solutions/cogburn/detection_playbooks
Cogburn/detection playbooks
2024-01-31 16:48:51 -05:00
Corey Ogburn
585147d1de Added so-detection mapping in elasticsearch 2024-01-31 10:39:47 -07:00
Mike Reeves
0d01d09d2e fix pcap paths 2024-01-31 09:15:35 -05:00
Mike Reeves
00289c201e fix pcap paths 2024-01-31 08:58:57 -05:00
Corey Ogburn
858166bcae WIP: Detections Changes
Removed some strelka/yara rules from salt.

Removed yara scripts for downloading and updating rules. This will be managed by SOC.

Added a new compile_yara.py script.

Added the strelka repos folder.
2024-01-30 15:43:51 -07:00
m0duspwnens
4be1214bab pcap engine logic for sensoroni 2024-01-30 16:53:57 -05:00
Corey Ogburn
0fa4d92f8f socsigmarepo
Need write permissions on the /opt/so/rules dir so I can clone the sigma repo there.
2024-01-30 14:49:05 -07:00
m0duspwnens
8a25748e33 grammar 2024-01-30 16:06:24 -05:00
m0duspwnens
8b503e2ffa telegraf dont run stenoloss script if suricata is pcap engine 2024-01-30 15:58:11 -05:00
m0duspwnens
f32cb1f115 fix find to work with steno and suri pcap 2024-01-30 15:48:10 -05:00
m0duspwnens
8ed66ea468 disable stenographer if suricata is pcap engine 2024-01-30 15:22:32 -05:00
m0duspwnens
0522dc180a map pcap dir to container. enable pcap-log in map 2024-01-30 13:39:35 -05:00
m0duspwnens
37dcb84a09 add missing comma 2024-01-30 10:50:01 -05:00
m0duspwnens
d118ff4728 add GLOBALS.pcap_engine 2024-01-29 16:54:08 -05:00
Mike Reeves
88d2ddba8b add placeholder for telegraf 2024-01-29 15:53:54 -05:00
Mike Reeves
ab551a747d Threads placeholder logic 2024-01-29 15:44:57 -05:00
Mike Reeves
88c01a22d6 Add annotation logic 2024-01-29 15:27:28 -05:00
Mike Reeves
0c969312e2 Add Globals 2024-01-29 15:22:20 -05:00
Mike Reeves
5b05aec96a Target sspecific minion 2024-01-29 14:56:51 -05:00
Mike Reeves
1a2245a1ed Add so-minion modifications 2024-01-29 13:44:53 -05:00
Mike Reeves
762a3bea17 Defaults and Annotations 2024-01-25 09:59:26 -05:00
reyesj2
8cf29682bb Update to merge in 2.4/dev
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:41:23 -05:00
reyesj2
86dc7cc804 Kafka init
Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
2023-11-29 13:34:25 -05:00
170 changed files with 4567 additions and 3563 deletions

.github/DISCUSSION_TEMPLATE/2-4.yml (new file)

@@ -0,0 +1,190 @@
body:
  - type: markdown
    attributes:
      value: |
        ⚠️ This category is solely for conversations related to Security Onion 2.4 ⚠️
        If your organization needs more immediate, enterprise grade professional support, with one-on-one virtual meetings and screensharing, contact us via our website: https://securityonion.com/support
  - type: dropdown
    attributes:
      label: Version
      description: Which version of Security Onion 2.4.x are you asking about?
      options:
        -
        - 2.4 Pre-release (Beta, Release Candidate)
        - 2.4.10
        - 2.4.20
        - 2.4.30
        - 2.4.40
        - 2.4.50
        - 2.4.60
        - 2.4.70
        - 2.4.80
        - 2.4.90
        - 2.4.100
        - Other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Installation Method
      description: How did you install Security Onion?
      options:
        -
        - Security Onion ISO image
        - Network installation on Red Hat derivative like Oracle, Rocky, Alma, etc.
        - Network installation on Ubuntu
        - Network installation on Debian
        - Other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Description
      description: >
        Is this discussion about installation, configuration, upgrading, or other?
      options:
        -
        - installation
        - configuration
        - upgrading
        - other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Installation Type
      description: >
        When you installed, did you choose Import, Eval, Standalone, Distributed, or something else?
      options:
        -
        - Import
        - Eval
        - Standalone
        - Distributed
        - other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Location
      description: >
        Is this deployment in the cloud, on-prem with Internet access, or airgap?
      options:
        -
        - cloud
        - on-prem with Internet access
        - airgap
        - other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Hardware Specs
      description: >
        Does your hardware meet or exceed the minimum requirements for your installation type as shown at https://docs.securityonion.net/en/2.4/hardware.html?
      options:
        -
        - Meets minimum requirements
        - Exceeds minimum requirements
        - Does not meet minimum requirements
        - other (please provide detail below)
    validations:
      required: true
  - type: input
    attributes:
      label: CPU
      description: How many CPU cores do you have?
    validations:
      required: true
  - type: input
    attributes:
      label: RAM
      description: How much RAM do you have?
    validations:
      required: true
  - type: input
    attributes:
      label: Storage for /
      description: How much storage do you have for the / partition?
    validations:
      required: true
  - type: input
    attributes:
      label: Storage for /nsm
      description: How much storage do you have for the /nsm partition?
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Network Traffic Collection
      description: >
        Are you collecting network traffic from a tap or span port?
      options:
        -
        - tap
        - span port
        - other (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Network Traffic Speeds
      description: >
        How much network traffic are you monitoring?
      options:
        -
        - Less than 1Gbps
        - 1Gbps to 10Gbps
        - more than 10Gbps
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Status
      description: >
        Does SOC Grid show all services on all nodes as running OK?
      options:
        -
        - Yes, all services on all nodes are running OK
        - No, one or more services are failed (please provide detail below)
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Salt Status
      description: >
        Do you get any failures when you run "sudo salt-call state.highstate"?
      options:
        -
        - Yes, there are salt failures (please provide detail below)
        - No, there are no failures
    validations:
      required: true
  - type: dropdown
    attributes:
      label: Logs
      description: >
        Are there any additional clues in /opt/so/log/?
      options:
        -
        - Yes, there are additional clues in /opt/so/log/ (please provide detail below)
        - No, there are no additional clues
    validations:
      required: true
  - type: textarea
    attributes:
      label: Detail
      description: Please read our discussion guidelines at https://github.com/Security-Onion-Solutions/securityonion/discussions/1720 and then provide detailed information to help us help you.
      placeholder: |-
        STOP! Before typing, please read our discussion guidelines at https://github.com/Security-Onion-Solutions/securityonion/discussions/1720 in their entirety!
        If your organization needs more immediate, enterprise grade professional support, with one-on-one virtual meetings and screensharing, contact us via our website: https://securityonion.com/support
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Guidelines
      options:
        - label: I have read the discussion guidelines at https://github.com/Security-Onion-Solutions/securityonion/discussions/1720 and assert that I have followed the guidelines.
          required: true

.github/workflows/close-threads.yml (new file)

@@ -0,0 +1,32 @@
name: 'Close Threads'
on:
  schedule:
    - cron: '50 1 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
  discussions: write
concurrency:
  group: lock-threads
jobs:
  close-threads:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v5
        with:
          days-before-issue-stale: -1
          days-before-issue-close: 60
          stale-issue-message: "This issue is stale because it has been inactive for an extended period. Stale issues convey that the issue, while important to someone, is not critical enough for the author, or other community members to work on, sponsor, or otherwise shepherd the issue through to a resolution."
          close-issue-message: "This issue was closed because it has been stale for an extended period. It will be automatically locked in 30 days, after which no further commenting will be available."
          days-before-pr-stale: 45
          days-before-pr-close: 60
          stale-pr-message: "This PR is stale because it has been inactive for an extended period. The longer a PR remains stale the more out of date with the main branch it becomes."
          close-pr-message: "This PR was closed because it has been stale for an extended period. It will be automatically locked in 30 days. If there is still a commitment to finishing this PR re-open it before it is locked."

.github/workflows/lock-threads.yml (new file)

@@ -0,0 +1,25 @@
name: 'Lock Threads'
on:
  schedule:
    - cron: '50 2 * * *'
  workflow_dispatch:
permissions:
  issues: write
  pull-requests: write
  discussions: write
concurrency:
  group: lock-threads
jobs:
  lock-threads:
    runs-on: ubuntu-latest
    steps:
      - uses: jertel/lock-threads@main
        with:
          include-discussion-currently-open: true
          discussion-inactive-days: 90
          issue-inactive-days: 30
          pr-inactive-days: 30


@@ -1,17 +1,17 @@
@@ -1,17 +1,17 @@
-### 2.4.50-20240220 ISO image released on 2024/02/20
+### 2.4.60-20240320 ISO image released on 2024/03/20
 ### Download and Verify
-2.4.50-20240220 ISO image:
+2.4.60-20240320 ISO image:
-https://download.securityonion.net/file/securityonion/securityonion-2.4.50-20240220.iso
+https://download.securityonion.net/file/securityonion/securityonion-2.4.60-20240320.iso
-MD5: BCA6476EF1BF79773D8EFB11700FDE8E
+MD5: 178DD42D06B2F32F3870E0C27219821E
-SHA1: 9FF0A304AA368BCD2EF2BE89AD47E65650241927
+SHA1: 73EDCD50817A7F6003FE405CF1808A30D034F89D
-SHA256: 49D7695EFFF6F3C4840079BF564F3191B585639816ADE98672A38017F25E9570
+SHA256: DD334B8D7088A7B78160C253B680D645E25984BA5CCAB5CC5C327CA72137FC06
 Signature for ISO image:
-https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.50-20240220.iso.sig
+https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.60-20240320.iso.sig
 Signing key:
 https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.4/main/KEYS
@@ -25,22 +25,22 @@ wget https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.
 Download the signature file for the ISO:
 ```
-wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.50-20240220.iso.sig
+wget https://github.com/Security-Onion-Solutions/securityonion/raw/2.4/main/sigs/securityonion-2.4.60-20240320.iso.sig
 ```
 Download the ISO image:
 ```
-wget https://download.securityonion.net/file/securityonion/securityonion-2.4.50-20240220.iso
+wget https://download.securityonion.net/file/securityonion/securityonion-2.4.60-20240320.iso
 ```
 Verify the downloaded ISO image using the signature file:
 ```
-gpg --verify securityonion-2.4.50-20240220.iso.sig securityonion-2.4.50-20240220.iso
+gpg --verify securityonion-2.4.60-20240320.iso.sig securityonion-2.4.60-20240320.iso
 ```
 The output should show "Good signature" and the Primary key fingerprint should match what's shown below:
 ```
-gpg: Signature made Fri 16 Feb 2024 11:36:25 AM EST using RSA key ID FE507013
+gpg: Signature made Tue 19 Mar 2024 03:17:58 PM EDT using RSA key ID FE507013
 gpg: Good signature from "Security Onion Solutions, LLC <info@securityonionsolutions.com>"
 gpg: WARNING: This key is not certified with a trusted signature!
 gpg: There is no indication that the signature belongs to the owner.


@@ -1 +1 @@
-2.4.50
+2.4.70

pillar/kafka/nodes.sls (new file)

@@ -0,0 +1,30 @@
{% set current_kafkanodes = salt.saltutil.runner('mine.get', tgt='G@role:so-manager or G@role:so-managersearch or G@role:so-standalone or G@role:so-receiver', fun='network.ip_addrs', tgt_type='compound') %}
{% set pillar_kafkanodes = salt['pillar.get']('kafka:nodes', default={}, merge=True) %}
{% set existing_ids = [] %}
{% for node in pillar_kafkanodes.values() %}
{% if node.get('nodeid') %}
{% do existing_ids.append(node['nodeid']) %}
{% endif %}
{% endfor %}
{% set all_possible_ids = range(1, 256)|list %}
{% set available_ids = [] %}
{% for id in all_possible_ids %}
{% if id not in existing_ids %}
{% do available_ids.append(id) %}
{% endif %}
{% endfor %}
{% set final_nodes = pillar_kafkanodes.copy() %}
{% for minionid, ip in current_kafkanodes.items() %}
{% set hostname = minionid.split('_')[0] %}
{% if hostname not in final_nodes %}
{% set new_id = available_ids.pop(0) %}
{% do final_nodes.update({hostname: {'nodeid': new_id, 'ip': ip[0]}}) %}
{% endif %}
{% endfor %}
kafka:
  nodes: {{ final_nodes|tojson }}
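The jinja above merges the kafka nodes already recorded in the pillar with the nodes discovered via the salt mine, assigning each new host the lowest unused broker id in the 1-255 range while leaving previously assigned ids untouched. A minimal Python sketch of that assignment logic (function and variable names are illustrative, not from the Salt code):

```python
def assign_kafka_node_ids(existing_nodes, discovered):
    """existing_nodes: {hostname: {'nodeid': int, 'ip': str}} from the pillar.
    discovered: {minion_id: [ip, ...]} from the salt mine (minion ids look
    like 'hostname_role')."""
    # Collect ids already in use so they are never reassigned.
    used = {n['nodeid'] for n in existing_nodes.values() if n.get('nodeid')}
    # Candidate ids, lowest first, excluding anything already taken.
    available = [i for i in range(1, 256) if i not in used]
    final = dict(existing_nodes)
    for minion_id, ips in discovered.items():
        hostname = minion_id.split('_')[0]
        if hostname not in final:
            # New node: take the lowest free id and its first mine-reported IP.
            final[hostname] = {'nodeid': available.pop(0), 'ip': ips[0]}
    return final
```

Because existing entries are copied before new ones are assigned, re-running the pillar render is idempotent for nodes that already have an id.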


@@ -43,8 +43,6 @@ base:
     - soc.soc_soc
     - soc.adv_soc
     - soc.license
-    - soctopus.soc_soctopus
-    - soctopus.adv_soctopus
     - kibana.soc_kibana
     - kibana.adv_kibana
     - kratos.soc_kratos
@@ -61,10 +59,11 @@ base:
     - elastalert.adv_elastalert
     - backup.soc_backup
     - backup.adv_backup
-    - soctopus.soc_soctopus
-    - soctopus.adv_soctopus
     - minions.{{ grains.id }}
     - minions.adv_{{ grains.id }}
+    - kafka.nodes
+    - kafka.soc_kafka
+    - kafka.adv_kafka
     - stig.soc_stig
   '*_sensor':
@@ -108,8 +107,6 @@ base:
     - soc.soc_soc
     - soc.adv_soc
     - soc.license
-    - soctopus.soc_soctopus
-    - soctopus.adv_soctopus
     - kibana.soc_kibana
     - kibana.adv_kibana
     - strelka.soc_strelka
@@ -165,8 +162,6 @@ base:
     - soc.soc_soc
     - soc.adv_soc
     - soc.license
-    - soctopus.soc_soctopus
-    - soctopus.adv_soctopus
     - kibana.soc_kibana
     - kibana.adv_kibana
     - strelka.soc_strelka
@@ -184,6 +179,9 @@ base:
     - minions.{{ grains.id }}
     - minions.adv_{{ grains.id }}
     - stig.soc_stig
+    - kafka.nodes
+    - kafka.soc_kafka
+    - kafka.adv_kafka
   '*_heavynode':
     - elasticsearch.auth
@@ -240,6 +238,9 @@ base:
     - redis.adv_redis
     - minions.{{ grains.id }}
     - minions.adv_{{ grains.id }}
+    - kafka.nodes
+    - kafka.soc_kafka
+    - kafka.adv_kafka
   '*_import':
     - secrets
@@ -262,8 +263,6 @@ base:
     - soc.soc_soc
     - soc.adv_soc
     - soc.license
-    - soctopus.soc_soctopus
-    - soctopus.adv_soctopus
     - kibana.soc_kibana
     - kibana.adv_kibana
     - backup.soc_backup


@@ -34,7 +34,6 @@
         'suricata',
         'utility',
         'schedule',
-        'soctopus',
         'tcpreplay',
         'docker_clean'
     ],
@@ -101,9 +100,9 @@
         'suricata.manager',
         'utility',
         'schedule',
-        'soctopus',
         'docker_clean',
-        'stig'
+        'stig',
+        'kafka'
     ],
     'so-managersearch': [
         'salt.master',
@@ -123,9 +122,9 @@
         'suricata.manager',
         'utility',
         'schedule',
-        'soctopus',
         'docker_clean',
-        'stig'
+        'stig',
+        'kafka'
     ],
     'so-searchnode': [
         'ssl',
@@ -157,10 +156,10 @@
         'healthcheck',
         'utility',
         'schedule',
-        'soctopus',
         'tcpreplay',
         'docker_clean',
-        'stig'
+        'stig',
+        'kafka'
     ],
     'so-sensor': [
         'ssl',
@@ -191,7 +190,9 @@
         'telegraf',
         'firewall',
         'schedule',
-        'docker_clean'
+        'docker_clean',
+        'kafka',
+        'elasticsearch.ca'
     ],
     'so-desktop': [
         'ssl',
@@ -200,10 +201,6 @@
     ],
 }, grain='role') %}
-{% if grains.role in ['so-eval', 'so-manager', 'so-managersearch', 'so-standalone'] %}
-{% do allowed_states.append('mysql') %}
-{% endif %}
 {%- if grains.role in ['so-sensor', 'so-eval', 'so-standalone', 'so-heavynode'] %}
 {% do allowed_states.append('zeek') %}
 {%- endif %}
@@ -229,10 +226,6 @@
 {% do allowed_states.append('elastalert') %}
 {% endif %}
-{% if grains.role in ['so-eval', 'so-manager', 'so-standalone', 'so-managersearch'] %}
-{% do allowed_states.append('playbook') %}
-{% endif %}
 {% if grains.role in ['so-manager', 'so-standalone', 'so-searchnode', 'so-managersearch', 'so-heavynode', 'so-receiver'] %}
 {% do allowed_states.append('logstash') %}
 {% endif %}

View File

@@ -1,7 +1,10 @@
+{% from 'vars/globals.map.jinja' import GLOBALS %}
+{% if GLOBALS.pcap_engine == "TRANSITION" %}
+{% set PCAPBPF = ["ip and host 255.255.255.1 and port 1"] %}
+{% else %}
{% import_yaml 'bpf/defaults.yaml' as BPFDEFAULTS %}
{% set BPFMERGED = salt['pillar.get']('bpf', BPFDEFAULTS.bpf, merge=True) %}
{% import 'bpf/macros.jinja' as MACROS %}
{{ MACROS.remove_comments(BPFMERGED, 'pcap') }}
{% set PCAPBPF = BPFMERGED.pcap %}
+{% endif %}

View File

@@ -1,6 +1,6 @@
bpf:
  pcap:
-    description: List of BPF filters to apply to PCAP.
+    description: List of BPF filters to apply to Stenographer.
    multiline: True
    forcedType: "[]string"
    helpLink: bpf.html

View File

@@ -70,3 +70,17 @@ x509_signing_policies:
    - authorityKeyIdentifier: keyid,issuer:always
    - days_valid: 820
    - copypath: /etc/pki/issued_certs/
+  kafka:
+    - minions: '*'
+    - signing_private_key: /etc/pki/ca.key
+    - signing_cert: /etc/pki/ca.crt
+    - C: US
+    - ST: Utah
+    - L: Salt Lake City
+    - basicConstraints: "critical CA:false"
+    - keyUsage: "digitalSignature, keyEncipherment"
+    - subjectKeyIdentifier: hash
+    - authorityKeyIdentifier: keyid,issuer:always
+    - extendedKeyUsage: "serverAuth, clientAuth"
+    - days_valid: 820
+    - copypath: /etc/pki/issued_certs/

View File

@@ -1,3 +1,5 @@
+{% if '2.4' in salt['cp.get_file_str']('/etc/soversion') %}
{% import_yaml '/opt/so/saltstack/local/pillar/global/soc_global.sls' as SOC_GLOBAL %}
{% if SOC_GLOBAL.global.airgap %}
{% set UPDATE_DIR='/tmp/soagupdate/SecurityOnion' %}
@@ -68,3 +70,19 @@ copy_so-firewall_sbin:
    - source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-firewall
    - force: True
    - preserve: True
+copy_so-yaml_sbin:
+  file.copy:
+    - name: /usr/sbin/so-yaml.py
+    - source: {{UPDATE_DIR}}/salt/manager/tools/sbin/so-yaml.py
+    - force: True
+    - preserve: True
+{% else %}
+fix_23_soup_sbin:
+  cmd.run:
+    - name: curl -s -f -o /usr/sbin/soup https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.3/main/salt/common/tools/sbin/soup
+fix_23_soup_salt:
+  cmd.run:
+    - name: curl -s -f -o /opt/so/saltstack/default/salt/common/tools/sbin/soup https://raw.githubusercontent.com/Security-Onion-Solutions/securityonion/2.3/main/salt/common/tools/sbin/soup
+{% endif %}

View File

@@ -248,6 +248,14 @@ get_random_value() {
  head -c 5000 /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w $length | head -n 1
}
+get_agent_count() {
+  if [ -f /opt/so/log/agents/agentstatus.log ]; then
+    AGENTCOUNT=$(cat /opt/so/log/agents/agentstatus.log | grep -wF active | awk '{print $2}')
+  else
+    AGENTCOUNT=0
+  fi
+}
gpg_rpm_import() {
  if [[ $is_oracle ]]; then
    if [[ "$WHATWOULDYOUSAYYAHDOHERE" == "setup" ]]; then
@@ -329,7 +337,7 @@ lookup_salt_value() {
    local=""
  fi
-  salt-call --no-color ${kind}.get ${group}${key} --out=${output} ${local}
+  salt-call -lerror --no-color ${kind}.get ${group}${key} --out=${output} ${local}
}
lookup_pillar() {
@@ -570,8 +578,9 @@ sync_options() {
  set_version
  set_os
  salt_minion_count
+  get_agent_count
-  echo "$VERSION/$OS/$(uname -r)/$MINIONCOUNT/$(read_feat)"
+  echo "$VERSION/$OS/$(uname -r)/$MINIONCOUNT:$AGENTCOUNT/$(read_feat)"
}
systemctl_func() {
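The `get_agent_count` helper added above extracts the count with `grep -wF active | awk '{print $2}'`. A minimal Python sketch of that parse, assuming (as the diff implies, not verified here) a log line in which `active` appears as a whole word and the count is the second whitespace-separated field:

```python
# Mirror of the grep -wF active | awk '{print $2}' pipeline from the hunk
# above: find the line containing the whole word "active", return field 2.
# The agentstatus.log format here is an assumption taken from the diff.
def agent_count(log_text: str) -> int:
    for line in log_text.splitlines():
        fields = line.split()
        if "active" in fields:
            return int(fields[1])
    return 0  # matches the shell branch taken when the log file is absent
```

As in the shell version, a missing or non-matching log yields 0.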

View File

@@ -47,12 +47,16 @@ def check_for_fps():
            fps = 1
    except FileNotFoundError:
        fn = '/proc/sys/crypto/' + feat_full + '_enabled'
-        with open(fn, 'r') as f:
-            contents = f.read()
-            if '1' in contents:
-                fps = 1
+        try:
+            with open(fn, 'r') as f:
+                contents = f.read()
+                if '1' in contents:
+                    fps = 1
+        except:
+            # Unknown, so assume 0
+            fps = 0
-    with open('/opt/so/log/sostatus/lks_enabled', 'w') as f:
+    with open('/opt/so/log/sostatus/fps_enabled', 'w') as f:
        f.write(str(fps))
def check_for_lks():
@@ -76,7 +80,7 @@ def check_for_lks():
                lks = 1
            if lks:
                break
-    with open('/opt/so/log/sostatus/fps_enabled', 'w') as f:
+    with open('/opt/so/log/sostatus/lks_enabled', 'w') as f:
        f.write(str(lks))
def fail(msg):
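The hunk above fixes two swapped output filenames: the FIPS check was writing `lks_enabled` and the LKS check `fps_enabled`. A small sketch of the corrected write-out, with a temp directory standing in for `/opt/so/log/sostatus`:

```python
import os
import tempfile

def write_flag(base_dir: str, name: str, value: int) -> str:
    # Each check writes its own status file (fps -> fps_enabled,
    # lks -> lks_enabled), mirroring the corrected code in the diff above.
    path = os.path.join(base_dir, f"{name}_enabled")
    with open(path, "w") as f:
        f.write(str(value))
    return path

base = tempfile.mkdtemp()
fps_path = write_flag(base, "fps", 1)
lks_path = write_flag(base, "lks", 0)
```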

View File

@@ -50,16 +50,14 @@ container_list() {
    "so-idh"
    "so-idstools"
    "so-influxdb"
+    "so-kafka"
    "so-kibana"
    "so-kratos"
    "so-logstash"
-    "so-mysql"
    "so-nginx"
    "so-pcaptools"
-    "so-playbook"
    "so-redis"
    "so-soc"
-    "so-soctopus"
    "so-steno"
    "so-strelka-backend"
    "so-strelka-filestream"

View File

@@ -49,10 +49,6 @@ if [ "$CONTINUE" == "y" ]; then
    sed -i "s|$OLD_IP|$NEW_IP|g" $file
  done
-  echo "Granting MySQL root user permissions on $NEW_IP"
-  docker exec -i so-mysql mysql --user=root --password=$(lookup_pillar_secret 'mysql') -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'$NEW_IP' IDENTIFIED BY '$(lookup_pillar_secret 'mysql')' WITH GRANT OPTION;" &> /dev/null
-  echo "Removing MySQL root user from $OLD_IP"
-  docker exec -i so-mysql mysql --user=root --password=$(lookup_pillar_secret 'mysql') -e "DROP USER 'root'@'$OLD_IP';" &> /dev/null
  echo "Updating Kibana dashboards"
  salt-call state.apply kibana.so_savedobjects_defaults -l info queue=True

View File

@@ -122,6 +122,7 @@ if [[ $EXCLUDE_STARTUP_ERRORS == 'Y' ]]; then
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|error while communicating" # Elasticsearch MS -> HN "sensor" temporarily unavailable
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|tls handshake error" # Docker registry container when new node comes onlines
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Unable to get license information" # Logstash trying to contact ES before it's ready
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|process already finished" # Telegraf script finished just as the auto kill timeout kicked in
fi
if [[ $EXCLUDE_FALSE_POSITIVE_ERRORS == 'Y' ]]; then
@@ -154,15 +155,11 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|fail\\(error\\)" # redis/python generic stack line, rely on other lines for actual error
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|urlerror" # idstools connection timeout
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|timeouterror" # idstools connection timeout
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|forbidden" # playbook
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|_ml" # Elastic ML errors
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|context canceled" # elastic agent during shutdown
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|exited with code 128" # soctopus errors during forced restart by highstate
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|geoip databases update" # airgap can't update GeoIP DB
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|filenotfounderror" # bug in 2.4.10 filecheck salt state caused duplicate cronjobs
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|salt-minion-check" # bug in early 2.4 place Jinja script in non-jinja salt dir causing cron output errors
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|generating elastalert config" # playbook expected error
-  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|activerecord" # playbook expected error
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|monitoring.metrics" # known issue with elastic agent casting the field incorrectly if an integer value shows up before a float
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|repodownload.conf" # known issue with reposync on pre-2.4.20
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|missing versions record" # stenographer corrupt index
@@ -201,6 +198,8 @@ if [[ $EXCLUDE_KNOWN_ERRORS == 'Y' ]]; then
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|req.LocalMeta.host.ip" # known issue in GH
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|sendmail" # zeek
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|stats.log"
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|Unknown column" # Elastalert errors from running EQL queries
+  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|parsing_exception" # Elastalert EQL parsing issue. Temp.
  EXCLUDED_ERRORS="$EXCLUDED_ERRORS|context deadline exceeded"
fi
@@ -210,7 +209,9 @@ RESULT=0
CONTAINER_IDS=$(docker ps -q)
exclude_container so-kibana # kibana error logs are too verbose with large varieties of errors most of which are temporary
exclude_container so-idstools # ignore due to known issues and noisy logging
-exclude_container so-playbook # ignore due to several playbook known issues
+exclude_container so-playbook # Playbook is removed as of 2.4.70, disregard output in stopped containers
+exclude_container so-mysql # MySQL is removed as of 2.4.70, disregard output in stopped containers
+exclude_container so-soctopus # Soctopus is removed as of 2.4.70, disregard output in stopped containers
for container_id in $CONTAINER_IDS; do
  container_name=$(docker ps --format json | jq ". | select(.ID==\"$container_id\")|.Names")
@@ -228,10 +229,13 @@ exclude_log "kibana.log" # kibana error logs are too verbose with large variet
exclude_log "spool" # disregard zeek analyze logs as this is data specific
exclude_log "import" # disregard imported test data the contains error strings
exclude_log "update.log" # ignore playbook updates due to several known issues
-exclude_log "playbook.log" # ignore due to several playbook known issues
exclude_log "cron-cluster-delete.log" # ignore since Curator has been removed
exclude_log "cron-close.log" # ignore since Curator has been removed
exclude_log "curator.log" # ignore since Curator has been removed
+exclude_log "playbook.log" # Playbook is removed as of 2.4.70, logs may still be on disk
+exclude_log "mysqld.log" # MySQL is removed as of 2.4.70, logs may still be on disk
+exclude_log "soctopus.log" # Soctopus is removed as of 2.4.70, logs may still be on disk
+exclude_log "agentstatus.log" # ignore this log since it tracks agents in error state
for log_file in $(cat /tmp/log_check_files); do
  status "Checking log file $log_file"

View File

@@ -67,13 +67,6 @@ docker:
      custom_bind_mounts: []
      extra_hosts: []
      extra_env: []
-    'so-mysql':
-      final_octet: 30
-      port_bindings:
-        - 0.0.0.0:3306:3306
-      custom_bind_mounts: []
-      extra_hosts: []
-      extra_env: []
    'so-nginx':
      final_octet: 31
      port_bindings:
@@ -91,13 +84,6 @@ docker:
      custom_bind_mounts: []
      extra_hosts: []
      extra_env: []
-    'so-playbook':
-      final_octet: 32
-      port_bindings:
-        - 0.0.0.0:3000:3000
-      custom_bind_mounts: []
-      extra_hosts: []
-      extra_env: []
    'so-redis':
      final_octet: 33
      port_bindings:
@@ -118,13 +104,6 @@ docker:
      custom_bind_mounts: []
      extra_hosts: []
      extra_env: []
-    'so-soctopus':
-      final_octet: 35
-      port_bindings:
-        - 0.0.0.0:7000:7000
-      custom_bind_mounts: []
-      extra_hosts: []
-      extra_env: []
    'so-strelka-backend':
      final_octet: 36
      custom_bind_mounts: []
@@ -206,3 +185,11 @@ docker:
      custom_bind_mounts: []
      extra_hosts: []
      extra_env: []
+    'so-kafka':
+      final_octet: 88
+      port_bindings:
+        - 0.0.0.0:9092:9092
+        - 0.0.0.0:9093:9093
+      custom_bind_mounts: []
+      extra_hosts: []
+      extra_env: []

View File

@@ -46,14 +46,11 @@ docker:
  so-kibana: *dockerOptions
  so-kratos: *dockerOptions
  so-logstash: *dockerOptions
-  so-mysql: *dockerOptions
  so-nginx: *dockerOptions
  so-nginx-fleet-node: *dockerOptions
-  so-playbook: *dockerOptions
  so-redis: *dockerOptions
  so-sensoroni: *dockerOptions
  so-soc: *dockerOptions
-  so-soctopus: *dockerOptions
  so-strelka-backend: *dockerOptions
  so-strelka-filestream: *dockerOptions
  so-strelka-frontend: *dockerOptions
@@ -68,3 +65,4 @@ docker:
  so-steno: *dockerOptions
  so-suricata: *dockerOptions
  so-zeek: *dockerOptions
+  so-kafka: *dockerOptions

View File

@@ -65,6 +65,7 @@ elasticfleet:
  - http_endpoint
  - httpjson
  - iis
+  - journald
  - juniper
  - juniper_srx
  - kafka_log

View File

@@ -0,0 +1,29 @@
{
"package": {
"name": "winlog",
"version": ""
},
"name": "windows-defender",
"namespace": "default",
"description": "Windows Defender - Operational logs",
"policy_id": "endpoints-initial",
"inputs": {
"winlogs-winlog": {
"enabled": true,
"streams": {
"winlog.winlog": {
"enabled": true,
"vars": {
"channel": "Microsoft-Windows-Windows Defender/Operational",
"data_stream.dataset": "winlog.winlog",
"preserve_original_event": false,
"providers": [],
"ignore_older": "72h",
"language": 0,
"tags": []
}
}
}
}
},
"force": true
}

View File

@@ -16,6 +16,9 @@
        "paths": [
          "/var/log/auth.log*",
          "/var/log/secure*"
+        ],
+        "tags": [
+          "so-grid-node"
        ]
      }
    },
@@ -25,6 +28,9 @@
        "paths": [
          "/var/log/messages*",
          "/var/log/syslog*"
+        ],
+        "tags": [
+          "so-grid-node"
        ]
      }
    }

View File

@@ -16,6 +16,9 @@
        "paths": [
          "/var/log/auth.log*",
          "/var/log/secure*"
+        ],
+        "tags": [
+          "so-grid-node"
        ]
      }
    },
@@ -25,6 +28,9 @@
        "paths": [
          "/var/log/messages*",
          "/var/log/syslog*"
+        ],
+        "tags": [
+          "so-grid-node"
        ]
      }
    }

View File

@@ -46,7 +46,7 @@ do
done
printf "\n### Stripping out unused components"
-find /nsm/elastic-agent-workspace/elastic-agent-*/data/elastic-agent-*/components -maxdepth 1 -regex '.*fleet.*\|.*packet.*\|.*apm.*\|.*audit.*\|.*heart.*\|.*cloud.*' -delete
+find /nsm/elastic-agent-workspace/elastic-agent-*/data/elastic-agent-*/components -maxdepth 1 -regex '.*fleet.*\|.*packet.*\|.*apm.*\|.*heart.*\|.*cloud.*' -delete
printf "\n### Tarring everything up again"
for OS in "${OSARCH[@]}"

View File

@@ -4,7 +4,7 @@
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
-{% if sls.split('.')[0] in allowed_states %}
+{% if sls.split('.')[0] in allowed_states or sls in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
# Move our new CA over so Elastic and Logstash can use SSL with the internal CA

View File

@@ -198,6 +198,142 @@ elasticsearch:
sort:
field: '@timestamp'
order: desc
so-detection:
index_sorting: false
index_template:
composed_of:
- detection-mappings
- detection-settings
index_patterns:
- so-detection*
priority: 500
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
mapping:
total_fields:
limit: 1500
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 30s
sort:
field: '@timestamp'
order: desc
so-logs-soc:
close: 30
delete: 365
index_sorting: false
index_template:
composed_of:
- agent-mappings
- dtc-agent-mappings
- base-mappings
- dtc-base-mappings
- client-mappings
- dtc-client-mappings
- container-mappings
- destination-mappings
- dtc-destination-mappings
- pb-override-destination-mappings
- dll-mappings
- dns-mappings
- dtc-dns-mappings
- ecs-mappings
- dtc-ecs-mappings
- error-mappings
- event-mappings
- dtc-event-mappings
- file-mappings
- dtc-file-mappings
- group-mappings
- host-mappings
- dtc-host-mappings
- http-mappings
- dtc-http-mappings
- log-mappings
- network-mappings
- dtc-network-mappings
- observer-mappings
- dtc-observer-mappings
- organization-mappings
- package-mappings
- process-mappings
- dtc-process-mappings
- related-mappings
- rule-mappings
- dtc-rule-mappings
- server-mappings
- service-mappings
- dtc-service-mappings
- source-mappings
- dtc-source-mappings
- pb-override-source-mappings
- threat-mappings
- tls-mappings
- url-mappings
- user_agent-mappings
- dtc-user_agent-mappings
- common-settings
- common-dynamic-mappings
data_stream: {}
index_patterns:
- logs-soc-so*
priority: 500
template:
mappings:
date_detection: false
dynamic_templates:
- strings_as_keyword:
mapping:
ignore_above: 1024
type: keyword
match_mapping_type: string
settings:
index:
lifecycle:
name: so-soc-logs
mapping:
total_fields:
limit: 5000
number_of_replicas: 0
number_of_shards: 1
refresh_interval: 30s
sort:
field: '@timestamp'
order: desc
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
warm: 7
so-common:
close: 30
delete: 365
@@ -1078,6 +1214,50 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_cloudfront_logs:
index_sorting: False
index_template:
index_patterns:
- "logs-aws.cloudfront_logs-*"
template:
settings:
index:
lifecycle:
name: so-logs-aws.cloudfront_logs-logs
number_of_replicas: 0
composed_of:
- "logs-aws.cloudfront_logs@package"
- "logs-aws.cloudfront_logs@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_cloudtrail:
index_sorting: false
index_template:
@@ -1298,6 +1478,94 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_guardduty:
index_sorting: False
index_template:
index_patterns:
- "logs-aws.guardduty-*"
template:
settings:
index:
lifecycle:
name: so-logs-aws.guardduty-logs
number_of_replicas: 0
composed_of:
- "logs-aws.guardduty@package"
- "logs-aws.guardduty@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_inspector:
index_sorting: False
index_template:
index_patterns:
- "logs-aws.inspector-*"
template:
settings:
index:
lifecycle:
name: so-logs-aws.inspector-logs
number_of_replicas: 0
composed_of:
- "logs-aws.inspector@package"
- "logs-aws.inspector@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_route53_public_logs:
index_sorting: false
index_template:
@@ -1430,6 +1698,94 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_securityhub_findings:
index_sorting: False
index_template:
index_patterns:
- "logs-aws.securityhub_findings-*"
template:
settings:
index:
lifecycle:
name: so-logs-aws.securityhub_findings-logs
number_of_replicas: 0
composed_of:
- "logs-aws.securityhub_findings@package"
- "logs-aws.securityhub_findings@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_securityhub_insights:
index_sorting: False
index_template:
index_patterns:
- "logs-aws.securityhub_insights-*"
template:
settings:
index:
lifecycle:
name: so-logs-aws.securityhub_insights-logs
number_of_replicas: 0
composed_of:
- "logs-aws.securityhub_insights@package"
- "logs-aws.securityhub_insights@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-aws_x_vpcflow:
index_sorting: false
index_template:
@@ -2046,6 +2402,50 @@ elasticsearch:
set_priority:
priority: 50
min_age: 30d
so-logs-cef_x_log:
index_sorting: False
index_template:
index_patterns:
- "logs-cef.log-*"
template:
settings:
index:
lifecycle:
name: so-logs-cef.log-logs
number_of_replicas: 0
composed_of:
- "logs-cef.log@package"
- "logs-cef.log@custom"
- "so-fleet_globals-1"
- "so-fleet_agent_id_verification-1"
priority: 501
data_stream:
hidden: false
allow_custom_routing: false
policy:
phases:
cold:
actions:
set_priority:
priority: 0
min_age: 30d
delete:
actions:
delete: {}
min_age: 365d
hot:
actions:
rollover:
max_age: 30d
max_primary_shard_size: 50gb
set_priority:
priority: 100
min_age: 0ms
warm:
actions:
set_priority:
priority: 50
min_age: 30d
so-logs-checkpoint_x_firewall:
index_sorting: False
index_template:
@@ -3897,7 +4297,7 @@ elasticsearch:
allow_custom_routing: false
hidden: false
index_patterns:
- - logs-endpoint.diagnostic.collection-*
+ - .logs-endpoint.diagnostic.collection-*
priority: 501
template:
settings:
@@ -10568,7 +10968,7 @@ elasticsearch:
hot:
actions:
rollover:
- max_age: 30d
+ max_age: 1d
max_primary_shard_size: 50gb
set_priority:
priority: 100

View File

@@ -61,6 +61,7 @@
{ "set": { "if": "ctx.event?.dataset != null && !ctx.event.dataset.contains('.')", "field": "event.dataset", "value": "{{event.module}}.{{event.dataset}}" } },
{ "split": { "if": "ctx.event?.dataset != null && ctx.event.dataset.contains('.')", "field": "event.dataset", "separator": "\\.", "target_field": "dataset_tag_temp" } },
{ "append": { "if": "ctx.dataset_tag_temp != null", "field": "tags", "value": "{{dataset_tag_temp.1}}" } },
+{ "grok": { "if": "ctx.http?.response?.status_code != null", "field": "http.response.status_code", "patterns": ["%{NUMBER:http.response.status_code:long} %{GREEDYDATA}"]} },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "dataset_tag_temp", "event.dataset_temp" ], "ignore_missing": true, "ignore_failure": true } }
{%- endraw %}
{%- if HIGHLANDER %}
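The grok processor added above coerces a string `http.response.status_code` such as `"404 Not Found"` to a long. A rough Python stand-in for that pattern (`%{NUMBER:...:long} %{GREEDYDATA}` simplified to a regex; this is an illustration, not the Elasticsearch implementation):

```python
import re

def coerce_status_code(value):
    # Simplified equivalent of the grok pattern: take the leading number,
    # drop any trailing reason phrase; leave non-matching values untouched.
    m = re.match(r"(\d+)(?:\s.*)?$", str(value))
    return int(m.group(1)) if m else value
```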

View File

@@ -83,6 +83,7 @@
{ "date": { "if": "ctx.event?.module == 'system'", "field": "event.created", "target_field": "@timestamp", "formats": ["yyyy-MM-dd'T'HH:mm:ss.SSSSSS'Z'"] } },
{ "community_id":{ "if": "ctx.event?.dataset == 'endpoint.events.network'", "ignore_failure":true } },
{ "set": { "if": "ctx.event?.module == 'fim'", "override": true, "field": "event.module", "value": "file_integrity" } },
+{ "rename": { "if": "ctx.winlog?.provider_name == 'Microsoft-Windows-Windows Defender'", "ignore_missing": true, "field": "winlog.event_data.Threat Name", "target_field": "winlog.event_data.threat_name" } },
{ "remove": { "field": [ "message2", "type", "fields", "category", "module", "dataset", "event.dataset_temp", "dataset_tag_temp", "module_temp" ], "ignore_missing": true, "ignore_failure": true } }
],
"on_failure": [

View File

@@ -0,0 +1,389 @@
{
"description": "Pipeline for pfSense",
"processors": [
{
"set": {
"field": "ecs.version",
"value": "8.10.0"
}
},
{
"set": {
"field": "observer.vendor",
"value": "netgate"
}
},
{
"set": {
"field": "observer.type",
"value": "firewall"
}
},
{
"rename": {
"field": "message",
"target_field": "event.original"
}
},
{
"set": {
"field": "event.kind",
"value": "event"
}
},
{
"set": {
"field": "event.timezone",
"value": "{{_tmp.tz_offset}}",
"if": "ctx._tmp?.tz_offset != null && ctx._tmp?.tz_offset != 'local'"
}
},
{
"grok": {
"description": "Parse syslog header",
"field": "event.original",
"patterns": [
"^(%{ECS_SYSLOG_PRI})?%{TIMESTAMP} %{GREEDYDATA:message}"
],
"pattern_definitions": {
"ECS_SYSLOG_PRI": "<%{NONNEGINT:log.syslog.priority:long}>(\\d )?",
"TIMESTAMP": "(?:%{BSD_TIMESTAMP_FORMAT}|%{SYSLOG_TIMESTAMP_FORMAT})",
"BSD_TIMESTAMP_FORMAT": "%{SYSLOGTIMESTAMP:_tmp.timestamp}(%{SPACE}%{BSD_PROCNAME}|%{SPACE}%{OBSERVER}%{SPACE}%{BSD_PROCNAME})(\\[%{POSINT:process.pid:long}\\])?:",
"BSD_PROCNAME": "(?:\\b%{NAME:process.name}|\\(%{NAME:process.name}\\))",
"NAME": "[[[:alnum:]]_-]+",
"SYSLOG_TIMESTAMP_FORMAT": "%{TIMESTAMP_ISO8601:_tmp.timestamp8601}%{SPACE}%{OBSERVER}%{SPACE}%{PROCESS}%{SPACE}(%{POSINT:process.pid:long}|-) - (-|%{META})",
"TIMESTAMP_ISO8601": "%{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE:event.timezone}?",
"OBSERVER": "(?:%{IP:observer.ip}|%{HOSTNAME:observer.name})",
"PROCESS": "(\\(%{DATA:process.name}\\)|(?:%{UNIXPATH}*/)?%{BASEPATH:process.name})",
"BASEPATH": "[[[:alnum:]]_%!$@:.,+~-]+",
"META": "\\[[^\\]]*\\]"
}
}
},
{
"date": {
"if": "ctx._tmp.timestamp8601 != null",
"field": "_tmp.timestamp8601",
"target_field": "@timestamp",
"formats": [
"ISO8601"
]
}
},
{
"date": {
"if": "ctx.event?.timezone != null && ctx._tmp?.timestamp != null",
"field": "_tmp.timestamp",
"target_field": "@timestamp",
"formats": [
"MMM d HH:mm:ss",
"MMM d HH:mm:ss",
"MMM dd HH:mm:ss"
],
"timezone": "{{ event.timezone }}"
}
},
{
"grok": {
"description": "Set Event Provider",
"field": "process.name",
"patterns": [
"^%{HYPHENATED_WORDS:event.provider}"
],
"pattern_definitions": {
"HYPHENATED_WORDS": "\\b[A-Za-z0-9_]+(-[A-Za-z_]+)*\\b"
}
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-firewall",
"if": "ctx.event.provider == 'filterlog'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-openvpn",
"if": "ctx.event.provider == 'openvpn'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-ipsec",
"if": "ctx.event.provider == 'charon'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-dhcp",
"if": "[\"dhcpd\", \"dhclient\", \"dhcp6c\"].contains(ctx.event.provider)"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-unbound",
"if": "ctx.event.provider == 'unbound'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-haproxy",
"if": "ctx.event.provider == 'haproxy'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-php-fpm",
"if": "ctx.event.provider == 'php-fpm'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-squid",
"if": "ctx.event.provider == 'squid'"
}
},
{
"pipeline": {
"name": "logs-pfsense.log-1.16.0-suricata",
"if": "ctx.event.provider == 'suricata'"
}
},
{
"drop": {
"if": "![\"filterlog\", \"openvpn\", \"charon\", \"dhcpd\", \"dhclient\", \"dhcp6c\", \"unbound\", \"haproxy\", \"php-fpm\", \"squid\", \"suricata\"].contains(ctx.event?.provider)"
}
},
{
"append": {
"field": "event.category",
"value": "network",
"if": "ctx.network != null"
}
},
{
"convert": {
"field": "source.address",
"target_field": "source.ip",
"type": "ip",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"convert": {
"field": "destination.address",
"target_field": "destination.ip",
"type": "ip",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"set": {
"field": "network.type",
"value": "ipv6",
"if": "ctx.source?.ip != null && ctx.source.ip.contains(\":\")"
}
},
{
"set": {
"field": "network.type",
"value": "ipv4",
"if": "ctx.source?.ip != null && ctx.source.ip.contains(\".\")"
}
},
{
"geoip": {
"field": "source.ip",
"target_field": "source.geo",
"ignore_missing": true
}
},
{
"geoip": {
"field": "destination.ip",
"target_field": "destination.geo",
"ignore_missing": true
}
},
{
"geoip": {
"ignore_missing": true,
"database_file": "GeoLite2-ASN.mmdb",
"field": "source.ip",
"target_field": "source.as",
"properties": [
"asn",
"organization_name"
]
}
},
{
"geoip": {
"database_file": "GeoLite2-ASN.mmdb",
"field": "destination.ip",
"target_field": "destination.as",
"properties": [
"asn",
"organization_name"
],
"ignore_missing": true
}
},
{
"rename": {
"field": "source.as.asn",
"target_field": "source.as.number",
"ignore_missing": true
}
},
{
"rename": {
"field": "source.as.organization_name",
"target_field": "source.as.organization.name",
"ignore_missing": true
}
},
{
"rename": {
"field": "destination.as.asn",
"target_field": "destination.as.number",
"ignore_missing": true
}
},
{
"rename": {
"field": "destination.as.organization_name",
"target_field": "destination.as.organization.name",
"ignore_missing": true
}
},
{
"community_id": {
"target_field": "network.community_id",
"ignore_failure": true
}
},
{
"grok": {
"field": "observer.ingress.interface.name",
"patterns": [
"%{DATA}.%{NONNEGINT:observer.ingress.vlan.id}"
],
"ignore_missing": true,
"ignore_failure": true
}
},
{
"set": {
"field": "network.vlan.id",
"copy_from": "observer.ingress.vlan.id",
"ignore_empty_value": true
}
},
{
"append": {
"field": "related.ip",
"value": "{{destination.ip}}",
"allow_duplicates": false,
"if": "ctx.destination?.ip != null"
}
},
{
"append": {
"field": "related.ip",
"value": "{{source.ip}}",
"allow_duplicates": false,
"if": "ctx.source?.ip != null"
}
},
{
"append": {
"field": "related.ip",
"value": "{{source.nat.ip}}",
"allow_duplicates": false,
"if": "ctx.source?.nat?.ip != null"
}
},
{
"append": {
"field": "related.hosts",
"value": "{{destination.domain}}",
"if": "ctx.destination?.domain != null"
}
},
{
"append": {
"field": "related.user",
"value": "{{user.name}}",
"if": "ctx.user?.name != null"
}
},
{
"set": {
"field": "network.direction",
"value": "{{network.direction}}bound",
"if": "ctx.network?.direction != null && ctx.network?.direction =~ /^(in|out)$/"
}
},
{
"remove": {
"field": [
"_tmp"
],
"ignore_failure": true
}
},
{
"script": {
"lang": "painless",
"description": "This script processor iterates over the whole document to remove fields with null values.",
"source": "void handleMap(Map map) {\n for (def x : map.values()) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n map.values().removeIf(v -> v == null || (v instanceof String && v == \"-\"));\n}\nvoid handleList(List list) {\n for (def x : list) {\n if (x instanceof Map) {\n handleMap(x);\n } else if (x instanceof List) {\n handleList(x);\n }\n }\n}\nhandleMap(ctx);\n"
}
},
{
"remove": {
"field": "event.original",
"if": "ctx.tags == null || !(ctx.tags.contains('preserve_original_event'))",
"ignore_failure": true,
"ignore_missing": true
}
},
{
"pipeline": {
"name": "logs-pfsense.log@custom",
"ignore_missing_pipeline": true
}
}
],
"on_failure": [
{
"remove": {
"field": [
"_tmp"
],
"ignore_failure": true
}
},
{
"set": {
"field": "event.kind",
"value": "pipeline_error"
}
},
{
"append": {
"field": "error.message",
"value": "{{{ _ingest.on_failure_message }}}"
}
}
],
"_meta": {
"managed_by": "fleet",
"managed": true,
"package": {
"name": "pfsense"
}
}
}
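The final script processor in the pipeline above recursively strips null values and `"-"` placeholders from the document. A rough Python equivalent of that painless logic (illustrative only, not part of the package):

```python
# Python sketch mirroring the pipeline's painless cleanup script:
# recursively walk the document and drop map entries whose value is
# null (None) or the literal "-" placeholder.
def handle_map(m: dict) -> None:
    for v in m.values():
        if isinstance(v, dict):
            handle_map(v)
        elif isinstance(v, list):
            handle_list(v)
    # removal happens only at map level, matching the painless removeIf
    for k in [k for k, v in m.items() if v is None or v == "-"]:
        del m[k]

def handle_list(lst: list) -> None:
    for v in lst:
        if isinstance(v, dict):
            handle_map(v)
        elif isinstance(v, list):
            handle_list(v)

doc = {"a": None, "b": "-", "c": {"d": None, "e": 1}, "f": [{"g": "-"}]}
handle_map(doc)
# doc is now {"c": {"e": 1}, "f": [{}]}
```

Note that, like the painless script, this never removes list elements themselves, so an emptied map inside a list survives as `{}`.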

View File

@@ -0,0 +1,31 @@
{
"description": "Pipeline for parsing pfSense Suricata logs.",
"processors": [
{
"pipeline": {
"name": "suricata.common"
}
}
],
"on_failure": [
{
"set": {
"field": "event.kind",
"value": "pipeline_error"
}
},
{
"append": {
"field": "error.message",
"value": "{{{ _ingest.on_failure_message }}}"
}
}
],
"_meta": {
"managed_by": "fleet",
"managed": true,
"package": {
"name": "pfsense"
}
}
}

View File

@@ -13,7 +13,6 @@
{ "rename": { "field": "message2.vlan", "target_field": "network.vlan.id", "ignore_failure": true } },
{ "rename": { "field": "message2.community_id", "target_field": "network.community_id", "ignore_missing": true } },
{ "rename": { "field": "message2.xff", "target_field": "xff.ip", "ignore_missing": true } },
{ "lowercase": { "field": "network.transport", "ignore_failure": true } },
{ "set": { "field": "event.dataset", "value": "{{ message2.event_type }}" } },
{ "set": { "field": "observer.name", "value": "{{agent.name}}" } },
{ "set": { "field": "event.ingested", "value": "{{@timestamp}}" } },

View File

@@ -95,6 +95,7 @@ elasticsearch:
description: The order to sort by. Must set index_sorting to True.
global: True
helpLink: elasticsearch.html
policy:
phases:
hot:
max_age:
@@ -365,6 +366,7 @@ elasticsearch:
so-logs-azure_x_signinlogs: *indexSettings
so-logs-azure_x_springcloudlogs: *indexSettings
so-logs-barracuda_x_waf: *indexSettings
so-logs-cef_x_log: *indexSettings
so-logs-cisco_asa_x_log: *indexSettings
so-logs-cisco_ftd_x_log: *indexSettings
so-logs-cisco_ios_x_log: *indexSettings

View File

@@ -0,0 +1,22 @@
{
"template": {
"mappings": {
"properties": {
"error": {
"properties": {
"message": {
"type": "match_only_text"
}
}
}
}
}
},
"_meta": {
"package": {
"name": "system"
},
"managed_by": "fleet",
"managed": true
}
}

View File

@@ -0,0 +1,139 @@
{
"template": {
"mappings": {
"properties": {
"so_audit_doc_id": {
"ignore_above": 1024,
"type": "keyword"
},
"@timestamp": {
"type": "date"
},
"so_kind": {
"ignore_above": 1024,
"type": "keyword"
},
"so_operation": {
"ignore_above": 1024,
"type": "keyword"
},
"so_detection": {
"properties": {
"publicId": {
"type": "text"
},
"title": {
"type": "text"
},
"severity": {
"ignore_above": 1024,
"type": "keyword"
},
"author": {
"ignore_above": 1024,
"type": "keyword"
},
"description": {
"type": "text"
},
"content": {
"type": "text"
},
"isEnabled": {
"type": "boolean"
},
"isReporting": {
"type": "boolean"
},
"isCommunity": {
"type": "boolean"
},
"tags": {
"type": "text"
},
"ruleset": {
"ignore_above": 1024,
"type": "keyword"
},
"engine": {
"ignore_above": 1024,
"type": "keyword"
},
"language": {
"ignore_above": 1024,
"type": "keyword"
},
"license": {
"ignore_above": 1024,
"type": "keyword"
},
"overrides": {
"properties": {
"type": {
"ignore_above": 1024,
"type": "keyword"
},
"isEnabled": {
"type": "boolean"
},
"createdAt": {
"type": "date"
},
"updatedAt": {
"type": "date"
},
"regex": {
"type": "text"
},
"value": {
"type": "text"
},
"thresholdType": {
"ignore_above": 1024,
"type": "keyword"
},
"track": {
"ignore_above": 1024,
"type": "keyword"
},
"ip": {
"type": "text"
},
"count": {
"type": "long"
},
"seconds": {
"type": "long"
},
"customFilter": {
"type": "text"
}
}
}
}
},
"so_detectioncomment": {
"properties": {
"createTime": {
"type": "date"
},
"detectionId": {
"ignore_above": 1024,
"type": "keyword"
},
"value": {
"type": "text"
},
"userId": {
"ignore_above": 1024,
"type": "keyword"
}
}
}
}
}
},
"_meta": {
"ecs_version": "1.12.2"
}
}

View File

@@ -0,0 +1,7 @@
{
"template": {},
"version": 1,
"_meta": {
"description": "default settings for common Security Onion Detections indices"
}
}

View File

@@ -9,11 +9,9 @@
'so-influxdb',
'so-kibana',
'so-kratos',
'so-mysql',
'so-nginx',
'so-redis',
'so-soc',
'so-soctopus',
'so-strelka-coordinator',
'so-strelka-gatekeeper',
'so-strelka-frontend',
@@ -29,14 +27,13 @@
'so-elastic-fleet',
'so-elastic-fleet-package-registry',
'so-influxdb',
'so-kafka',
'so-kibana',
'so-kratos',
'so-logstash',
'so-mysql',
'so-nginx',
'so-redis',
'so-soc',
'so-soctopus',
'so-strelka-coordinator',
'so-strelka-gatekeeper',
'so-strelka-frontend',
@@ -84,6 +81,7 @@
{% set NODE_CONTAINERS = [
'so-logstash',
'so-redis',
'so-kafka'
] %}
{% elif GLOBALS.role == 'so-idh' %}

View File

@@ -90,6 +90,11 @@ firewall:
tcp:
- 8086
udp: []
kafka:
tcp:
- 9092
- 9093
udp: []
kibana:
tcp:
- 5601
@@ -98,19 +103,11 @@ firewall:
tcp:
- 7788
udp: []
mysql:
tcp:
- 3306
udp: []
nginx:
tcp:
- 80
- 443
udp: []
playbook:
tcp:
- 3000
udp: []
redis:
tcp:
- 6379
@@ -178,8 +175,6 @@ firewall:
hostgroups:
eval:
portgroups:
- playbook
- mysql
- kibana
- redis
- influxdb
@@ -363,8 +358,6 @@ firewall:
hostgroups:
manager:
portgroups:
- playbook
- mysql
- kibana
- redis
- influxdb
@@ -376,6 +369,7 @@ firewall:
- elastic_agent_update
- localrules
- sensoroni
- kafka
fleet:
portgroups:
- elasticsearch_rest
@@ -411,6 +405,7 @@ firewall:
- docker_registry
- influxdb
- sensoroni
- kafka
searchnode:
portgroups:
- redis
@@ -424,6 +419,7 @@ firewall:
- elastic_agent_data
- elastic_agent_update
- sensoroni
- kafka
heavynode:
portgroups:
- redis
@@ -559,8 +555,6 @@ firewall:
hostgroups:
managersearch:
portgroups:
- playbook
- mysql
- kibana
- redis
- influxdb
@@ -756,8 +750,6 @@ firewall:
- all
standalone:
portgroups:
- playbook
- mysql
- kibana
- redis
- influxdb
@@ -1291,10 +1283,17 @@ firewall:
- beats_5044
- beats_5644
- elastic_agent_data
- kafka
searchnode:
portgroups:
- redis
- beats_5644
- kafka
managersearch:
portgroups:
- redis
- beats_5644
- kafka
self:
portgroups:
- redis

View File

@@ -115,21 +115,18 @@ firewall:
influxdb:
tcp: *tcpsettings
udp: *udpsettings
kafka:
tcp: *tcpsettings
udp: *udpsettings
kibana:
tcp: *tcpsettings
udp: *udpsettings
localrules:
tcp: *tcpsettings
udp: *udpsettings
mysql:
tcp: *tcpsettings
udp: *udpsettings
nginx:
tcp: *tcpsettings
udp: *udpsettings
playbook:
tcp: *tcpsettings
udp: *udpsettings
redis:
tcp: *tcpsettings
udp: *udpsettings
@@ -938,7 +935,6 @@ firewall:
portgroups: *portgroupshost
customhostgroup9:
portgroups: *portgroupshost
idh: idh:
chain: chain:
DOCKER-USER: DOCKER-USER:

View File

@@ -0,0 +1,3 @@
global:
pcapengine: STENO
pipeline: REDIS

salt/global/map.jinja Normal file
View File

@@ -0,0 +1,2 @@
{% import_yaml 'global/defaults.yaml' as GLOBALDEFAULTS %}
{% set GLOBALMERGED = salt['pillar.get']('global', GLOBALDEFAULTS.global, merge=True) %}
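The `merge=True` argument to `pillar.get` lays pillar data over the YAML defaults recursively, so any key the operator does not set falls back to `global/defaults.yaml`. A minimal Python sketch of those (assumed) merge semantics:

```python
# Hypothetical stand-in for salt['pillar.get']('global', defaults, merge=True):
# pillar keys override defaults, recursing into nested dicts.
def pillar_merge(defaults: dict, pillar: dict) -> dict:
    merged = dict(defaults)
    for key, value in pillar.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = pillar_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"pcapengine": "STENO", "pipeline": "REDIS"}  # from global/defaults.yaml
pillar = {"pipeline": "KAFKA"}                           # operator override
print(pillar_merge(defaults, pillar))
# {'pcapengine': 'STENO', 'pipeline': 'KAFKA'}
```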

View File

@@ -10,10 +10,15 @@ global:
regex: ^(([0-9]{1,3}\.){3}[0-9]{1,3}(\/([0-9]|[1-2][0-9]|3[0-2]))?)?$
regexFailureMessage: You must enter a valid IP address or CIDR.
mdengine:
description: What engine to use for meta data generation. Options are ZEEK and SURICATA.
description: Which engine to use for meta data generation. Options are ZEEK and SURICATA.
regex: ^(ZEEK|SURICATA)$
regexFailureMessage: You must enter either ZEEK or SURICATA.
global: True
pcapengine:
description: Which engine to use for generating pcap. Options are STENO, SURICATA or TRANSITION.
regex: ^(STENO|SURICATA|TRANSITION)$
regexFailureMessage: You must enter either STENO, SURICATA or TRANSITION.
global: True
ids:
description: Which IDS engine to use. Currently only Suricata is supported.
global: True
@@ -23,7 +28,7 @@ global:
description: Used for handling of authentication cookies.
global: True
airgap:
description: Sets airgap mode.
description: Airgapped systems do not have network connectivity to the internet. This setting represents how this grid was configured during initial setup. While it is technically possible to manually switch systems between airgap and non-airgap, there are some nuances and additional steps involved. For that reason this setting is marked read-only. Contact your support representative for guidance if there is a need to change this setting.
global: True
readonly: True
imagerepo:
@@ -31,9 +36,10 @@ global:
global: True
advanced: True
pipeline:
description: Sets which pipeline technology for events to use. Currently only Redis is supported.
description: Sets which pipeline technology for events to use. Currently only Redis is fully supported. Kafka is experimental and requires a Security Onion Pro license.
regex: ^(REDIS|KAFKA)$
regexFailureMessage: You must enter either REDIS or KAFKA.
global: True
readonly: True
advanced: True
repo_host:
description: Specify the host where operating system packages will be served from.

View File

@@ -6,9 +6,10 @@ idstools:
description: Enter your registration code or oinkcode for paid NIDS rulesets.
title: Registration Code
global: True
forcedType: string
helpLink: rules.html
ruleset:
description: Defines the ruleset you want to run. Options are ETOPEN or ETPRO.
description: 'Defines the ruleset you want to run. Options are ETOPEN or ETPRO. WARNING! Changing the ruleset will remove all existing Suricata rules of the previous ruleset and their associated overrides. This removal cannot be undone.'
global: True
regex: ETPRO\b|ETOPEN\b
helpLink: rules.html

salt/kafka/config.sls Normal file
View File

@@ -0,0 +1,106 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_ips_logstash = [] %}
{% set kafka_ips_kraft = [] %}
{% set kafkanodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set kafka_ip = GLOBALS.node_ip %}
{# Create list for kafka <-> logstash/searchnode communications #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_logstash.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% set kafka_server_list = "','".join(kafka_ips_logstash) %}
{# Create a list for kraft controller <-> kraft controller communications. Used for Kafka metadata management #}
{% for node, node_data in kafkanodes.items() %}
{% do kafka_ips_kraft.append(node_data['nodeid'] ~ "@" ~ node_data['ip'] ~ ":9093") %}
{% endfor %}
{% set kraft_server_list = "','".join(kafka_ips_kraft) %}
include:
- ssl
kafka_group:
group.present:
- name: kafka
- gid: 960
kafka:
user.present:
- uid: 960
- gid: 960
{# Future tools to query kafka directly / show consumer groups
kafka_sbin_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin
- user: 960
- group: 960
- file_mode: 755 #}
kafka_sbin_jinja_tools:
file.recurse:
- name: /usr/sbin
- source: salt://kafka/tools/sbin_jinja
- user: 960
- group: 960
- file_mode: 755
- template: jinja
- defaults:
GLOBALS: {{ GLOBALS }}
kafka_log_dir:
file.directory:
- name: /opt/so/log/kafka
- user: 960
- group: 960
- makedirs: True
kafka_data_dir:
file.directory:
- name: /nsm/kafka/data
- user: 960
- group: 960
- makedirs: True
kafka_generate_keystore:
cmd.run:
- name: "/usr/sbin/so-kafka-generate-keystore"
- onchanges:
- x509: /etc/pki/kafka.key
kafka_keystore_perms:
file.managed:
- replace: False
- name: /etc/pki/kafka.jks
- mode: 640
- user: 960
- group: 939
{% for sc in ['server', 'client'] %}
kafka_kraft_{{sc}}_properties:
file.managed:
- source: salt://kafka/etc/{{sc}}.properties.jinja
- name: /opt/so/conf/kafka/{{sc}}.properties
- template: jinja
- user: 960
- group: 960
- makedirs: True
- show_changes: False
{% endfor %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}
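The broker list built at the top of this state joins each node's `ip:9092` with `','`; that unusual separator suggests the value is meant to land inside an already-quoted list in a downstream template (an assumption based on this file alone). A quick sketch of the construction:

```python
# Sketch of config.sls's kafka_server_list construction. The nodes dict
# mirrors the kafka:nodes pillar shape; hostnames and IPs are made up.
kafkanodes = {
    "manager": {"ip": "10.0.0.10", "nodeid": 1},
    "receiver1": {"ip": "10.0.0.20", "nodeid": 2},
}
kafka_ips_logstash = [data["ip"] + ":9092" for data in kafkanodes.values()]
kafka_server_list = "','".join(kafka_ips_logstash)
print(f"['{kafka_server_list}']")
# ['10.0.0.10:9092','10.0.0.20:9092']
```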

salt/kafka/defaults.yaml Normal file
View File

@@ -0,0 +1,39 @@
kafka:
enabled: False
config:
server:
advertised_x_listeners:
auto_x_create_x_topics_x_enable: true
controller_x_listener_x_names: CONTROLLER
controller_x_quorum_x_voters:
inter_x_broker_x_listener_x_name: BROKER
listeners: BROKER://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener_x_security_x_protocol_x_map: CONTROLLER:SSL,BROKER:SSL
log_x_dirs: /nsm/kafka/data
log_x_retention_x_check_x_interval_x_ms: 300000
log_x_retention_x_hours: 168
log_x_segment_x_bytes: 1073741824
node_x_id:
num_x_io_x_threads: 8
num_x_network_x_threads: 3
num_x_partitions: 1
num_x_recovery_x_threads_x_per_x_data_x_dir: 1
offsets_x_topic_x_replication_x_factor: 1
process_x_roles: broker
socket_x_receive_x_buffer_x_bytes: 102400
socket_x_request_x_max_x_bytes: 104857600
socket_x_send_x_buffer_x_bytes: 102400
ssl_x_keystore_x_location: /etc/pki/kafka.jks
ssl_x_keystore_x_password: changeit
ssl_x_keystore_x_type: JKS
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
ssl_x_truststore_x_password: changeit
transaction_x_state_x_log_x_min_x_isr: 1
transaction_x_state_x_log_x_replication_x_factor: 1
client:
security_x_protocol: SSL
ssl_x_truststore_x_location: /etc/pki/java/sos/cacerts
ssl_x_truststore_x_password: changeit
ssl_x_keystore_x_location: /etc/pki/kafka.jks
ssl_x_keystore_x_type: JKS
ssl_x_keystore_x_password: changeit
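The `_x_` in these keys stands in for `.`, apparently so the pillar/SOC keys stay dot-free; the `*.properties.jinja` templates elsewhere in this changeset render the merged dict as YAML and swap `_x_` back with `replace("_x_", ".")`. A minimal Python illustration of that translation (sample keys taken from the defaults above):

```python
# Illustrative only: turn the pillar-safe "_x_" keys above into the
# dotted property names Kafka expects in server.properties.
server = {
    "log_x_dirs": "/nsm/kafka/data",
    "process_x_roles": "broker",
    "listener_x_security_x_protocol_x_map": "CONTROLLER:SSL,BROKER:SSL",
}
lines = [f"{key.replace('_x_', '.')}: {value}" for key, value in server.items()]
print("\n".join(lines))
# log.dirs: /nsm/kafka/data
# process.roles: broker
# listener.security.protocol.map: CONTROLLER:SSL,BROKER:SSL
```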

View File

@@ -1,14 +1,16 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'mysql/map.jinja' import MYSQLMERGED %}
include:
{% if MYSQLMERGED.enabled %}
- kafka.sostatus
- mysql.enabled
{% else %}
so-kafka:
- mysql.disabled
docker_container.absent:
{% endif %}
- force: True
so-kafka_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$

salt/kafka/enabled.sls Normal file
View File

@@ -0,0 +1,64 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% set KAFKANODES = salt['pillar.get']('kafka:nodes', {}) %}
include:
- elasticsearch.ca
- kafka.sostatus
- kafka.config
- kafka.storage
so-kafka:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }}
- hostname: so-kafka
- name: so-kafka
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-kafka'].ip }}
- user: kafka
- environment:
- KAFKA_HEAP_OPTS=-Xmx2G -Xms1G
- extra_hosts:
{% for node in KAFKANODES %}
- {{ node }}:{{ KAFKANODES[node].ip }}
{% endfor %}
{% if DOCKER.containers['so-kafka'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-kafka'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- port_bindings:
{% for BINDING in DOCKER.containers['so-kafka'].port_bindings %}
- {{ BINDING }}
{% endfor %}
- binds:
- /etc/pki/kafka.jks:/etc/pki/kafka.jks
- /opt/so/conf/ca/cacerts:/etc/pki/java/sos/cacerts
- /nsm/kafka/data/:/nsm/kafka/data/:rw
- /opt/so/conf/kafka/server.properties:/kafka/config/kraft/server.properties
- /opt/so/conf/kafka/client.properties:/kafka/config/kraft/client.properties
- watch:
{% for sc in ['server', 'client'] %}
- file: kafka_kraft_{{sc}}_properties
{% endfor %}
delete_so-kafka_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-kafka$
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -3,5 +3,5 @@
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'mysql/defaults.yaml' as MYSQLDEFAULTS with context %}
{% from 'kafka/map.jinja' import KAFKAMERGED -%}
{% set MYSQLMERGED = salt['pillar.get']('mysql', MYSQLDEFAULTS.mysql, merge=True) %}
{{ KAFKAMERGED.config.client | yaml(False) | replace("_x_", ".") }}

View File

@@ -3,5 +3,5 @@
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'soctopus/defaults.yaml' as SOCTOPUSDEFAULTS %}
{% from 'kafka/map.jinja' import KAFKAMERGED -%}
{% set SOCTOPUSMERGED = salt['pillar.get']('soctopus', SOCTOPUSDEFAULTS.soctopus, merge=True) %}
{{ KAFKAMERGED.config.server | yaml(False) | replace("_x_", ".") }}

View File

@@ -3,12 +3,12 @@
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'kafka/map.jinja' import KAFKAMERGED %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% from 'playbook/map.jinja' import PLAYBOOKMERGED %}
include:
{% if PLAYBOOKMERGED.enabled %}
{% if GLOBALS.pipeline == "KAFKA" and KAFKAMERGED.enabled %}
- playbook.enabled
- kafka.enabled
{% else %}
- playbook.disabled
- kafka.disabled
{% endif %}

salt/kafka/map.jinja Normal file
View File

@@ -0,0 +1,20 @@
{# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% import_yaml 'kafka/defaults.yaml' as KAFKADEFAULTS %}
{% set KAFKAMERGED = salt['pillar.get']('kafka', KAFKADEFAULTS.kafka, merge=True) %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% do KAFKAMERGED.config.server.update({ 'node_x_id': salt['pillar.get']('kafka:nodes:' ~ GLOBALS.hostname ~ ':nodeid')}) %}
{% do KAFKAMERGED.config.server.update({'advertised_x_listeners': 'BROKER://' ~ GLOBALS.node_ip ~ ':9092'}) %}
{% set nodes = salt['pillar.get']('kafka:nodes', {}) %}
{% set combined = [] %}
{% for hostname, data in nodes.items() %}
{% do combined.append(data.nodeid ~ "@" ~ hostname ~ ":9093") %}
{% endfor %}
{% set kraft_controller_quorum_voters = ','.join(combined) %}
{% do KAFKAMERGED.config.server.update({'controller_x_quorum_x_voters': kraft_controller_quorum_voters}) %}
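The loop above assembles `controller.quorum.voters` as comma-joined `nodeid@hostname:9093` entries. Sketched in Python, with an invented `kafka:nodes` pillar for illustration:

```python
# Mirror of map.jinja's quorum-voter string, using made-up node data.
nodes = {
    "manager": {"ip": "10.0.0.10", "nodeid": 1},
    "receiver1": {"ip": "10.0.0.20", "nodeid": 2},
}
combined = [f"{data['nodeid']}@{hostname}:9093" for hostname, data in nodes.items()]
kraft_controller_quorum_voters = ",".join(combined)
print(kraft_controller_quorum_voters)
# 1@manager:9093,2@receiver1:9093
```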

salt/kafka/soc_kafka.yaml Normal file
View File

@@ -0,0 +1,170 @@
kafka:
enabled:
description: Enable or disable Kafka.
helpLink: kafka.html
cluster_id:
description: The ID of the Kafka cluster.
readonly: True
advanced: True
sensitive: True
helpLink: kafka.html
config:
server:
advertised_x_listeners:
description: Specify the list of listeners (hostname and port) that Kafka brokers provide to clients for communication.
title: advertised.listeners
helpLink: kafka.html
auto_x_create_x_topics_x_enable:
description: Enable the auto creation of topics.
title: auto.create.topics.enable
forcedType: bool
helpLink: kafka.html
controller_x_listener_x_names:
description: Set listeners used by the controller in a comma-separated list.
title: controller.listener.names
helpLink: kafka.html
controller_x_quorum_x_voters:
description: A comma-separated list of ID and endpoint information mapped for a set of voters.
title: controller.quorum.voters
helpLink: kafka.html
inter_x_broker_x_listener_x_name:
description: The name of the listener used for inter-broker communication.
title: inter.broker.listener.name
helpLink: kafka.html
listeners:
description: Set of URIs to listen on and their listener names, in a comma-separated list.
helpLink: kafka.html
listener_x_security_x_protocol_x_map:
description: Comma-separated mapping of listener names to security protocols.
title: listener.security.protocol.map
helpLink: kafka.html
log_x_dirs:
description: Where Kafka logs are stored within the Docker container.
title: log.dirs
helpLink: kafka.html
log_x_retention_x_check_x_interval_x_ms:
description: Frequency at which log files are checked if they are qualified for deletion.
title: log.retention.check.interval.ms
helpLink: kafka.html
log_x_retention_x_hours:
description: How long, in hours, a log file is kept.
title: log.retention.hours
forcedType: int
helpLink: kafka.html
log_x_segment_x_bytes:
description: The maximum allowable size for a log file.
title: log.segment.bytes
forcedType: int
helpLink: kafka.html
node_x_id:
description: The node ID associated with the roles this process performs when process.roles is populated.
title: node.id
forcedType: int
readonly: True
helpLink: kafka.html
num_x_io_x_threads:
description: The number of threads used by Kafka.
title: num.io.threads
forcedType: int
helpLink: kafka.html
num_x_network_x_threads:
description: The number of threads used for network communication.
title: num.network.threads
forcedType: int
helpLink: kafka.html
num_x_partitions:
description: The number of log partitions assigned per topic.
title: num.partitions
forcedType: int
helpLink: kafka.html
num_x_recovery_x_threads_x_per_x_data_x_dir:
description: The number of threads used for log recovery at startup and flushing at shutdown. This number of threads is used per data directory.
title: num.recovery.threads.per.data.dir
forcedType: int
helpLink: kafka.html
offsets_x_topic_x_replication_x_factor:
description: The offsets topic replication factor.
title: offsets.topic.replication.factor
forcedType: int
helpLink: kafka.html
process_x_roles:
description: The roles the process performs. Use a comma-separated list if multiple.
title: process.roles
helpLink: kafka.html
socket_x_receive_x_buffer_x_bytes:
description: Size, in bytes, of the SO_RCVBUF buffer. A value of -1 will use the OS default.
title: socket.receive.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
socket_x_request_x_max_x_bytes:
description: The maximum bytes allowed for a request to the socket.
title: socket.request.max.bytes
forcedType: int
helpLink: kafka.html
socket_x_send_x_buffer_x_bytes:
description: Size, in bytes, of the SO_SNDBUF buffer. A value of -1 will use the OS default.
title: socket.send.buffer.bytes
#forcedType: int - soc needs to allow -1 as an int before we can use this
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html
transaction_x_state_x_log_x_min_x_isr:
description: Overrides min.insync.replicas for the transaction topic. When a producer configures acks to "all" (or "-1"), this setting determines the minimum number of replicas required to acknowledge a write as successful. Failure to meet this minimum triggers an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used in conjunction, min.insync.replicas and acks enable stronger durability guarantees. For instance, creating a topic with a replication factor of 3, setting min.insync.replicas to 2, and using acks of "all" ensures that the producer raises an exception if a majority of replicas fail to receive a write.
title: transaction.state.log.min.isr
forcedType: int
helpLink: kafka.html
transaction_x_state_x_log_x_replication_x_factor:
description: Set the replication factor higher for the transaction topic to ensure availability. Internal topic creation will not proceed until the cluster size satisfies this replication factor prerequisite.
title: transaction.state.log.replication.factor
forcedType: int
helpLink: kafka.html
client:
security_x_protocol:
description: 'Broker communication protocol. Options are: SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT'
title: security.protocol
regex: ^(SASL_SSL|PLAINTEXT|SSL|SASL_PLAINTEXT)$
helpLink: kafka.html
ssl_x_keystore_x_location:
description: The key store file location within the Docker container.
title: ssl.keystore.location
helpLink: kafka.html
ssl_x_keystore_x_password:
description: The key store file password. Invalid for PEM format.
title: ssl.keystore.password
sensitive: True
helpLink: kafka.html
ssl_x_keystore_x_type:
description: The key store file format.
title: ssl.keystore.type
regex: ^(JKS|PKCS12|PEM)$
helpLink: kafka.html
ssl_x_truststore_x_location:
description: The trust store file location within the Docker container.
title: ssl.truststore.location
helpLink: kafka.html
ssl_x_truststore_x_password:
description: The trust store file password. If null, the trust store file is still used, but integrity checking is disabled. Invalid for PEM format.
title: ssl.truststore.password
sensitive: True
helpLink: kafka.html
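Several of the settings above constrain input with a `regex` field. A small sketch of how such a pattern behaves when it is anchored (the assumption here, not something this diff confirms, is that the value is validated as a whole string):

```python
import re

# Anchored pattern, as used for ssl.keystore.type above.
KEYSTORE_TYPE = re.compile(r"^(JKS|PKCS12|PEM)$")

def is_valid_keystore_type(value: str) -> bool:
    # re.match anchors at the start; the trailing $ rejects extra characters.
    return KEYSTORE_TYPE.match(value) is not None
```

Without the trailing `$`, a value such as `PEMX` would pass, since `re.match` only requires a match at the start of the string.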


@@ -6,11 +6,11 @@
 {% from 'allowed_states.map.jinja' import allowed_states %}
 {% if sls.split('.')[0] in allowed_states %}
-append_so-mysql_so-status.conf:
+append_so-kafka_so-status.conf:
   file.append:
     - name: /opt/so/conf/so-status/so-status.conf
-    - text: so-mysql
-    - unless: grep -q so-mysql /opt/so/conf/so-status/so-status.conf
+    - text: so-kafka
+    - unless: grep -q so-kafka /opt/so/conf/so-status/so-status.conf
 {% else %}

salt/kafka/storage.sls (new file)

@@ -0,0 +1,38 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set kafka_cluster_id = salt['pillar.get']('kafka:cluster_id', default=None) %}
{% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone'] %}
{% if kafka_cluster_id is none %}
generate_kafka_cluster_id:
  cmd.run:
    - name: /usr/sbin/so-kafka-clusterid
{% endif %}
{% endif %}

{# Initialize kafka storage if it doesn't already exist. Just looking for meta.properties in /nsm/kafka/data #}
{% if not salt['file.file_exists']('/nsm/kafka/data/meta.properties') %}
kafka_storage_init:
  cmd.run:
    - name: |
        docker run -v /nsm/kafka/data:/nsm/kafka/data -v /opt/so/conf/kafka/server.properties:/kafka/config/kraft/newserver.properties --name so-kafkainit --user root --entrypoint /kafka/bin/kafka-storage.sh {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} format -t {{ kafka_cluster_id }} -c /kafka/config/kraft/newserver.properties

kafka_rm_kafkainit:
  cmd.run:
    - name: |
        docker rm so-kafkainit
{% endif %}

{% else %}

{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed

{% endif %}


@@ -0,0 +1,13 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
# Generate a new keystore
docker run -v /etc/pki/kafka.p12:/etc/pki/kafka.p12 --name so-kafka-keystore --user root --entrypoint keytool {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-kafka:{{ GLOBALS.so_version }} -importkeystore -srckeystore /etc/pki/kafka.p12 -srcstoretype PKCS12 -srcstorepass changeit -destkeystore /etc/pki/kafka.jks -deststoretype JKS -deststorepass changeit -noprompt
docker cp so-kafka-keystore:/etc/pki/kafka.jks /etc/pki/kafka.jks
docker rm so-kafka-keystore


@@ -78,6 +78,7 @@ so-logstash:
 {% if GLOBALS.role in ['so-manager', 'so-managersearch', 'so-standalone', 'so-import', 'so-heavynode', 'so-searchnode' ] %}
   - /opt/so/conf/ca/cacerts:/etc/pki/ca-trust/extracted/java/cacerts:ro
   - /opt/so/conf/ca/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro
+  - /etc/pki/kafka-logstash.p12:/usr/share/logstash/kafka-logstash.p12:ro
 {% endif %}
 {% if GLOBALS.role == 'so-eval' %}
   - /nsm/zeek:/nsm/zeek:ro


@@ -4,9 +4,10 @@
 # Elastic License 2.0.
 {% from 'logstash/map.jinja' import LOGSTASH_MERGED %}
+{% from 'kafka/map.jinja' import KAFKAMERGED %}
 include:
-{% if LOGSTASH_MERGED.enabled %}
+{% if LOGSTASH_MERGED.enabled and not KAFKAMERGED.enabled %}
   - logstash.enabled
 {% else %}
   - logstash.disabled


@@ -0,0 +1,35 @@
{% set kafka_brokers = salt['pillar.get']('logstash:nodes:receiver', {}) %}
{% set kafka_on_mngr = salt['pillar.get']('logstash:nodes:manager', {}) %}
{% set broker_ips = [] %}
{% for node, node_data in kafka_brokers.items() %}
{% do broker_ips.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% for node, node_data in kafka_on_mngr.items() %}
{% do broker_ips.append(node_data['ip'] + ":9092") %}
{% endfor %}
{% set bootstrap_servers = "','".join(broker_ips) %}

input {
  kafka {
    codec => json
    topics => ['default-logs', 'kratos-logs', 'soc-logs', 'strelka-logs', 'suricata-logs', 'zeek-logs']
    group_id => 'searchnodes'
    client_id => '{{ GLOBALS.hostname }}'
    security_protocol => 'SSL'
    bootstrap_servers => '{{ bootstrap_servers }}'
    ssl_keystore_location => '/usr/share/logstash/kafka-logstash.p12'
    ssl_keystore_password => 'changeit'
    ssl_keystore_type => 'PKCS12'
    ssl_truststore_location => '/etc/pki/ca-trust/extracted/java/cacerts'
    ssl_truststore_password => 'changeit'
    decorate_events => true
    tags => [ "elastic-agent", "input-{{ GLOBALS.hostname }}", "kafka" ]
  }
}

filter {
  if ![metadata] {
    mutate {
      rename => { "@metadata" => "metadata" }
    }
  }
}
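The `bootstrap_servers` line above relies on joining the broker list with `','` so that, combined with the single quotes the template places around the whole expression, every address ends up individually quoted. A quick sketch of that string manipulation (the IPs are made up):

```python
# Collect broker addresses the way the Jinja loops above do.
broker_ips = []
for node_data in ({"ip": "10.0.0.5"}, {"ip": "10.0.0.6"}):
    broker_ips.append(node_data["ip"] + ":9092")

# Join with "','" BETWEEN entries; the template supplies the outer quotes.
bootstrap_servers = "','".join(broker_ips)
rendered = "'" + bootstrap_servers + "'"
print(rendered)  # '10.0.0.5:9092','10.0.0.6:9092'
```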


@@ -0,0 +1,2 @@
https://repo.securityonion.net/file/so-repo/prod/2.4/oracle/9
https://repo-alt.securityonion.net/prod/2.4/oracle/9


@@ -0,0 +1,13 @@
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=True
skip_if_unavailable=False
cachedir=/opt/so/conf/reposync/cache
keepcache=0
[securityonionsync]
name=Security Onion Repo repo
mirrorlist=file:///opt/so/conf/reposync/mirror.txt
enabled=1
gpgcheck=1


@@ -27,6 +27,15 @@ repo_log_dir:
     - user
     - group
+agents_log_dir:
+  file.directory:
+    - name: /opt/so/log/agents
+    - user: root
+    - group: root
+    - recurse:
+      - user
+      - group
 yara_log_dir:
   file.directory:
     - name: /opt/so/log/yarasync
@@ -75,6 +84,20 @@ yara_update_scripts:
     - defaults:
         EXCLUDEDRULES: {{ STRELKAMERGED.rules.excluded }}
+so-repo-file:
+  file.managed:
+    - name: /opt/so/conf/reposync/repodownload.conf
+    - source: salt://manager/files/repodownload.conf
+    - user: socore
+    - group: socore
+so-repo-mirrorlist:
+  file.managed:
+    - name: /opt/so/conf/reposync/mirror.txt
+    - source: salt://manager/files/mirror.txt
+    - user: socore
+    - group: socore
 so-repo-sync:
 {% if MANAGERMERGED.reposync.enabled %}
   cron.present:
@@ -87,6 +110,17 @@ so-repo-sync:
     - hour: '{{ MANAGERMERGED.reposync.hour }}'
     - minute: '{{ MANAGERMERGED.reposync.minute }}'
+so_fleetagent_status:
+  cron.present:
+    - name: /usr/sbin/so-elasticagent-status > /opt/so/log/agents/agentstatus.log 2>&1
+    - identifier: so_fleetagent_status
+    - user: root
+    - minute: '*/5'
+    - hour: '*'
+    - daymonth: '*'
+    - month: '*'
+    - dayweek: '*'
 socore_own_saltstack:
   file.directory:
     - name: /opt/so/saltstack
@@ -103,55 +137,6 @@ rules_dir:
     - group: socore
     - makedirs: True
-{% if STRELKAMERGED.rules.enabled %}
-strelkarepos:
-  file.managed:
-    - name: /opt/so/conf/strelka/repos.txt
-    - source: salt://strelka/rules/repos.txt.jinja
-    - template: jinja
-    - defaults:
-        STRELKAREPOS: {{ STRELKAMERGED.rules.repos }}
-    - makedirs: True
-strelka-yara-update:
-{% if MANAGERMERGED.reposync.enabled and not GLOBALS.airgap %}
-  cron.present:
-{% else %}
-  cron.absent:
-{% endif %}
-    - user: socore
-    - name: '/usr/sbin/so-yara-update >> /opt/so/log/yarasync/yara-update.log 2>&1'
-    - identifier: strelka-yara-update
-    - hour: '7'
-    - minute: '1'
-strelka-yara-download:
-{% if MANAGERMERGED.reposync.enabled and not GLOBALS.airgap %}
-  cron.present:
-{% else %}
-  cron.absent:
-{% endif %}
-    - user: socore
-    - name: '/usr/sbin/so-yara-download >> /opt/so/log/yarasync/yara-download.log 2>&1'
-    - identifier: strelka-yara-download
-    - hour: '7'
-    - minute: '1'
-{% if not GLOBALS.airgap %}
-update_yara_rules:
-  cmd.run:
-    - name: /usr/sbin/so-yara-update
-    - onchanges:
-      - file: yara_update_scripts
-download_yara_rules:
-  cmd.run:
-    - name: /usr/sbin/so-yara-download
-    - onchanges:
-      - file: yara_update_scripts
-{% endif %}
-{% endif %}
 {% else %}
 {{sls}}_state_not_allowed:


@@ -20,10 +20,6 @@ manager:
     description: String of hosts to ignore the proxy settings for.
     global: True
     helpLink: proxy.html
-  playbook:
-    description: Enable playbook 1=enabled 0=disabled.
-    global: True
-    helpLink: playbook.html
   proxy:
     description: Proxy server to use for updates.
     global: True


@@ -5,8 +5,6 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.
 . /usr/sbin/so-common
-/usr/sbin/so-restart playbook $1
+curl -s -K /opt/so/conf/elasticsearch/curl.config -L "http://localhost:5601/api/fleet/agent_status" | jq .


@@ -0,0 +1,29 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
### THIS SCRIPT AND SALT STATE REFERENCES TO THIS SCRIPT ARE TO BE REMOVED ONCE INITIAL TESTING IS DONE - THESE VALUES WILL BE GENERATED IN SETUP AND SOUP
local_salt_dir=/opt/so/saltstack/local
if [[ -f /usr/sbin/so-common ]]; then
  source /usr/sbin/so-common
else
  source $(dirname $0)/../../../common/tools/sbin/so-common
fi
if ! grep -q "^ cluster_id:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
  kafka_cluster_id=$(get_random_value 22)
  echo 'kafka: ' > $local_salt_dir/pillar/kafka/soc_kafka.sls
  echo ' cluster_id: '$kafka_cluster_id >> $local_salt_dir/pillar/kafka/soc_kafka.sls
  if ! grep -q "^ kafkapass:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
    kafkapass=$(get_random_value)
    echo ' kafkapass: '$kafkapass >> $local_salt_dir/pillar/kafka/soc_kafka.sls
  fi


@@ -79,6 +79,32 @@ function getinstallinfo() {
   source <(echo $INSTALLVARS)
 }
+function pcapspace() {
+  if [[ "$OPERATION" == "setup" ]]; then
+    # Use 25% for PCAP
+    PCAP_PERCENTAGE=1
+    DFREEPERCENT=21
+    local SPACESIZE=$(df -k /nsm | tail -1 | awk '{print $2}' | tr -d \n)
+  else
+    local NSMSIZE=$(salt "$MINION_ID" disk.usage --out=json | jq -r '.[]."/nsm"."1K-blocks" ')
+    local ROOTSIZE=$(salt "$MINION_ID" disk.usage --out=json | jq -r '.[]."/"."1K-blocks" ')
+    if [[ "$NSMSIZE" == "null" ]]; then
+      # Looks like there is no dedicated nsm partition. Using root
+      local SPACESIZE=$ROOTSIZE
+    else
+      local SPACESIZE=$NSMSIZE
+    fi
+  fi
+  local s=$(( $SPACESIZE / 1000000 ))
+  local s1=$(( $s / 4 * $PCAP_PERCENTAGE ))
+  MAX_PCAP_SPACE=$s1
+}
 function testMinion() {
   # Always run on the host, since this is going to be the manager of a distributed grid, or an eval/standalone.
   # Distributed managers must run this in order for the sensor nodes to have access to the so-tcpreplay image.
@@ -244,6 +270,10 @@ function add_sensor_to_minion() {
   echo "    lb_procs: '$CORECOUNT'" >> $PILLARFILE
   echo "suricata:" >> $PILLARFILE
   echo "  enabled: True " >> $PILLARFILE
+  if [[ $is_pcaplimit ]]; then
+    echo "  pcap:" >> $PILLARFILE
+    echo "    maxsize: $MAX_PCAP_SPACE" >> $PILLARFILE
+  fi
   echo "  config:" >> $PILLARFILE
   echo "    af-packet:" >> $PILLARFILE
   echo "      threads: '$CORECOUNT'" >> $PILLARFILE
@@ -251,17 +281,11 @@ function add_sensor_to_minion() {
   echo "  enabled: True" >> $PILLARFILE
   if [[ $is_pcaplimit ]]; then
     echo "  config:" >> $PILLARFILE
-    echo "    diskfreepercentage: 60" >> $PILLARFILE
+    echo "    diskfreepercentage: $DFREEPERCENT" >> $PILLARFILE
   fi
   echo " " >> $PILLARFILE
 }
-function add_playbook_to_minion() {
-  printf '%s\n'\
-    "playbook:"\
-    "  enabled: True"\
-    " " >> $PILLARFILE
-}
 function add_elastalert_to_minion() {
   printf '%s\n'\
@@ -323,13 +347,6 @@ function add_nginx_to_minion() {
     " " >> $PILLARFILE
 }
-function add_soctopus_to_minion() {
-  printf '%s\n'\
-    "soctopus:"\
-    "  enabled: True"\
-    " " >> $PILLARFILE
-}
 function add_soc_to_minion() {
   printf '%s\n'\
     "soc:"\
@@ -344,13 +361,6 @@ function add_registry_to_minion() {
     " " >> $PILLARFILE
 }
-function add_mysql_to_minion() {
-  printf '%s\n'\
-    "mysql:"\
-    "  enabled: True"\
-    " " >> $PILLARFILE
-}
 function add_kratos_to_minion() {
   printf '%s\n'\
     "kratos:"\
@@ -422,19 +432,17 @@ function updateMine() {
 function createEVAL() {
   is_pcaplimit=true
+  pcapspace
   add_elasticsearch_to_minion
   add_sensor_to_minion
   add_strelka_to_minion
-  add_playbook_to_minion
   add_elastalert_to_minion
   add_kibana_to_minion
   add_telegraf_to_minion
   add_influxdb_to_minion
   add_nginx_to_minion
-  add_soctopus_to_minion
   add_soc_to_minion
   add_registry_to_minion
-  add_mysql_to_minion
   add_kratos_to_minion
   add_idstools_to_minion
   add_elastic_fleet_package_registry_to_minion
@@ -442,21 +450,19 @@ function createEVAL() {
 function createSTANDALONE() {
   is_pcaplimit=true
+  pcapspace
   add_elasticsearch_to_minion
   add_logstash_to_minion
   add_sensor_to_minion
   add_strelka_to_minion
-  add_playbook_to_minion
   add_elastalert_to_minion
   add_kibana_to_minion
   add_redis_to_minion
   add_telegraf_to_minion
   add_influxdb_to_minion
   add_nginx_to_minion
-  add_soctopus_to_minion
   add_soc_to_minion
   add_registry_to_minion
-  add_mysql_to_minion
   add_kratos_to_minion
   add_idstools_to_minion
   add_elastic_fleet_package_registry_to_minion
@@ -465,17 +471,14 @@ function createSTANDALONE() {
 function createMANAGER() {
   add_elasticsearch_to_minion
   add_logstash_to_minion
-  add_playbook_to_minion
   add_elastalert_to_minion
   add_kibana_to_minion
   add_redis_to_minion
   add_telegraf_to_minion
   add_influxdb_to_minion
   add_nginx_to_minion
-  add_soctopus_to_minion
   add_soc_to_minion
   add_registry_to_minion
-  add_mysql_to_minion
   add_kratos_to_minion
   add_idstools_to_minion
   add_elastic_fleet_package_registry_to_minion
@@ -484,17 +487,14 @@ function createMANAGER() {
 function createMANAGERSEARCH() {
   add_elasticsearch_to_minion
   add_logstash_to_minion
-  add_playbook_to_minion
   add_elastalert_to_minion
   add_kibana_to_minion
   add_redis_to_minion
   add_telegraf_to_minion
   add_influxdb_to_minion
   add_nginx_to_minion
-  add_soctopus_to_minion
   add_soc_to_minion
   add_registry_to_minion
-  add_mysql_to_minion
   add_kratos_to_minion
   add_idstools_to_minion
   add_elastic_fleet_package_registry_to_minion
@@ -531,6 +531,9 @@ function createIDH() {
 function createHEAVYNODE() {
   is_pcaplimit=true
+  PCAP_PERCENTAGE=1
+  DFREEPERCENT=21
+  pcapspace
   add_elasticsearch_to_minion
   add_elastic_agent_to_minion
   add_logstash_to_minion
@@ -541,6 +544,10 @@ function createHEAVYNODE() {
 }
 function createSENSOR() {
+  is_pcaplimit=true
+  DFREEPERCENT=10
+  PCAP_PERCENTAGE=3
+  pcapspace
   add_sensor_to_minion
   add_strelka_to_minion
   add_telegraf_to_minion

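The `pcapspace` function above works in integer arithmetic: 1K-blocks are scaled down to gigabytes, then a quarter of the volume is multiplied by `PCAP_PERCENTAGE` (1 for setup and heavy nodes, 3 for sensors). A sketch of the same calculation:

```python
def max_pcap_space(space_kb: int, pcap_percentage: int) -> int:
    # Mirrors the shell: s=$(( SPACESIZE / 1000000 )); s1=$(( s / 4 * PCAP_PERCENTAGE ))
    s = space_kb // 1000000
    return (s // 4) * pcap_percentage

# A volume reporting 1,000,000,000 1K-blocks (roughly 1 TB):
print(max_pcap_space(1_000_000_000, 1))  # 250 (25% for setup/heavy nodes)
print(max_pcap_space(1_000_000_000, 3))  # 750 (75% for sensors)
```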

@@ -47,7 +47,7 @@ got_root(){
 got_root
 if [ $# -ne 1 ] ; then
-  BRANCH=master
+  BRANCH=2.4/main
 else
   BRANCH=$1
 fi


@@ -17,13 +17,16 @@ def showUsage(args):
     print('Usage: {} <COMMAND> <YAML_FILE> [ARGS...]'.format(sys.argv[0]))
     print('  General commands:')
     print('    append - Append a list item to a yaml key, if it exists and is a list. Requires KEY and LISTITEM args.')
+    print('    add - Add a new key and set its value. Fails if key already exists. Requires KEY and VALUE args.')
     print('    remove - Removes a yaml key, if it exists. Requires KEY arg.')
+    print('    replace - Replaces (or adds) a new key and set its value. Requires KEY and VALUE args.')
     print('    help - Prints this usage information.')
     print('')
     print('  Where:')
     print('    YAML_FILE - Path to the file that will be modified. Ex: /opt/so/conf/service/conf.yaml')
     print('    KEY - YAML key, does not support \' or " characters at this time. Ex: level1.level2')
-    print('    LISTITEM - Item to add to the list.')
+    print('    VALUE - Value to set for a given key')
+    print('    LISTITEM - Item to append to a given key\'s list value')
     sys.exit(1)
@@ -37,6 +40,7 @@ def writeYaml(filename, content):
     file = open(filename, "w")
     return yaml.dump(content, file)
 def appendItem(content, key, listItem):
     pieces = key.split(".", 1)
     if len(pieces) > 1:
@@ -51,6 +55,30 @@ def appendItem(content, key, listItem):
         print("The key provided does not exist. No action was taken on the file.")
         return 1
+def convertType(value):
+    if len(value) > 0 and (not value.startswith("0") or len(value) == 1):
+        if "." in value:
+            try:
+                value = float(value)
+                return value
+            except ValueError:
+                pass
+        try:
+            value = int(value)
+            return value
+        except ValueError:
+            pass
+    lowered_value = value.lower()
+    if lowered_value == "false":
+        return False
+    elif lowered_value == "true":
+        return True
+    return value
 def append(args):
     if len(args) != 3:
         print('Missing filename, key arg, or list item to append', file=sys.stderr)
@@ -62,11 +90,41 @@ def append(args):
     listItem = args[2]
     content = loadYaml(filename)
-    appendItem(content, key, listItem)
+    appendItem(content, key, convertType(listItem))
     writeYaml(filename, content)
     return 0
+def addKey(content, key, value):
+    pieces = key.split(".", 1)
+    if len(pieces) > 1:
+        if not pieces[0] in content:
+            content[pieces[0]] = {}
+        addKey(content[pieces[0]], pieces[1], value)
+    elif key in content:
+        raise KeyError("key already exists")
+    else:
+        content[key] = value
+def add(args):
+    if len(args) != 3:
+        print('Missing filename, key arg, and/or value', file=sys.stderr)
+        showUsage(None)
+        return
+    filename = args[0]
+    key = args[1]
+    value = args[2]
+    content = loadYaml(filename)
+    addKey(content, key, convertType(value))
+    writeYaml(filename, content)
+    return 0
 def removeKey(content, key):
     pieces = key.split(".", 1)
     if len(pieces) > 1:
@@ -91,6 +149,24 @@ def remove(args):
     return 0
+def replace(args):
+    if len(args) != 3:
+        print('Missing filename, key arg, and/or value', file=sys.stderr)
+        showUsage(None)
+        return
+    filename = args[0]
+    key = args[1]
+    value = args[2]
+    content = loadYaml(filename)
+    removeKey(content, key)
+    addKey(content, key, convertType(value))
+    writeYaml(filename, content)
+    return 0
 def main():
     args = sys.argv[1:]
@@ -100,8 +176,10 @@ def main():
     commands = {
         "help": showUsage,
+        "add": add,
         "append": append,
         "remove": remove,
+        "replace": replace,
     }
     code = 1

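The `convertType` helper introduced in this diff decides how a CLI string argument is stored in YAML. A standalone restatement of its rules (same logic, renamed `convert_type` here for illustration): numeric strings become ints or floats unless they carry a leading zero (so values like `"0123"` stay strings), and `"true"`/`"false"` become booleans regardless of case.

```python
def convert_type(value: str):
    # Numeric conversion is skipped for multi-character strings with a
    # leading zero, preserving zero-padded identifiers as strings.
    if len(value) > 0 and (not value.startswith("0") or len(value) == 1):
        if "." in value:
            try:
                return float(value)
            except ValueError:
                pass
        try:
            return int(value)
        except ValueError:
            pass
    lowered = value.lower()
    if lowered == "false":
        return False
    if lowered == "true":
        return True
    return value

print(convert_type("123"), convert_type("0123"), convert_type("TRUE"))  # 123 0123 True
```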

@@ -42,6 +42,14 @@ class TestRemove(unittest.TestCase):
             sysmock.assert_called()
             self.assertIn(mock_stdout.getvalue(), "Usage:")
+    def test_remove_missing_arg(self):
+        with patch('sys.exit', new=MagicMock()) as sysmock:
+            with patch('sys.stderr', new=StringIO()) as mock_stdout:
+                sys.argv = ["cmd", "help"]
+                soyaml.remove(["file"])
+                sysmock.assert_called()
+                self.assertIn(mock_stdout.getvalue(), "Missing filename or key arg\n")
+
     def test_remove(self):
         filename = "/tmp/so-yaml_test-remove.yaml"
         file = open(filename, "w")
@@ -106,6 +114,14 @@ class TestRemove(unittest.TestCase):
             sysmock.assert_called_once_with(1)
             self.assertIn(mock_stdout.getvalue(), "Missing filename or key arg\n")
+    def test_append_missing_arg(self):
+        with patch('sys.exit', new=MagicMock()) as sysmock:
+            with patch('sys.stderr', new=StringIO()) as mock_stdout:
+                sys.argv = ["cmd", "help"]
+                soyaml.append(["file", "key"])
+                sysmock.assert_called()
+                self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, or list item to append\n")
+
     def test_append(self):
         filename = "/tmp/so-yaml_test-remove.yaml"
         file = open(filename, "w")
@@ -201,3 +217,146 @@ class TestRemove(unittest.TestCase):
             soyaml.main()
             sysmock.assert_called()
             self.assertEqual(mock_stdout.getvalue(), "The existing value for the given key is not a list. No action was taken on the file.\n")
+
+    def test_add_key(self):
+        content = {}
+        soyaml.addKey(content, "foo", 123)
+        self.assertEqual(content, {"foo": 123})
+        try:
+            soyaml.addKey(content, "foo", "bar")
+            self.assertFail("expected key error since key already exists")
+        except KeyError:
+            pass
+        try:
+            soyaml.addKey(content, "foo.bar", 123)
+            self.assertFail("expected type error since key parent value is not a map")
+        except TypeError:
+            pass
+        content = {}
+        soyaml.addKey(content, "foo", "bar")
+        self.assertEqual(content, {"foo": "bar"})
+        soyaml.addKey(content, "badda.badda", "boom")
+        self.assertEqual(content, {"foo": "bar", "badda": {"badda": "boom"}})
+
+    def test_add_missing_arg(self):
+        with patch('sys.exit', new=MagicMock()) as sysmock:
+            with patch('sys.stderr', new=StringIO()) as mock_stdout:
+                sys.argv = ["cmd", "help"]
+                soyaml.add(["file", "key"])
+                sysmock.assert_called()
+                self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, and/or value\n")
+
+    def test_add(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: abc }, key2: false, key3: [a,b,c]}")
+        file.close()
+        soyaml.add([filename, "key4", "d"])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2: abc\nkey2: false\nkey3:\n- a\n- b\n- c\nkey4: d\n"
+        self.assertEqual(actual, expected)
+
+    def test_add_nested(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: [a,b,c] }, key2: false, key3: [e,f,g]}")
+        file.close()
+        soyaml.add([filename, "key1.child3", "d"])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2:\n  - a\n  - b\n  - c\n  child3: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
+        self.assertEqual(actual, expected)
+
+    def test_add_nested_deep(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: { deep1: 45 } }, key2: false, key3: [e,f,g]}")
+        file.close()
+        soyaml.add([filename, "key1.child2.deep2", "d"])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2:\n    deep1: 45\n    deep2: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
+        self.assertEqual(actual, expected)
+
+    def test_replace_missing_arg(self):
+        with patch('sys.exit', new=MagicMock()) as sysmock:
+            with patch('sys.stderr', new=StringIO()) as mock_stdout:
+                sys.argv = ["cmd", "help"]
+                soyaml.replace(["file", "key"])
+                sysmock.assert_called()
+                self.assertIn(mock_stdout.getvalue(), "Missing filename, key arg, and/or value\n")
+
+    def test_replace(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: abc }, key2: false, key3: [a,b,c]}")
+        file.close()
+        soyaml.replace([filename, "key2", True])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2: abc\nkey2: true\nkey3:\n- a\n- b\n- c\n"
+        self.assertEqual(actual, expected)
+
+    def test_replace_nested(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: [a,b,c] }, key2: false, key3: [e,f,g]}")
+        file.close()
+        soyaml.replace([filename, "key1.child2", "d"])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2: d\nkey2: false\nkey3:\n- e\n- f\n- g\n"
+        self.assertEqual(actual, expected)
+
+    def test_replace_nested_deep(self):
+        filename = "/tmp/so-yaml_test-add.yaml"
+        file = open(filename, "w")
+        file.write("{key1: { child1: 123, child2: { deep1: 45 } }, key2: false, key3: [e,f,g]}")
+        file.close()
+        soyaml.replace([filename, "key1.child2.deep1", 46])
+        file = open(filename, "r")
+        actual = file.read()
+        file.close()
+        expected = "key1:\n  child1: 123\n  child2:\n    deep1: 46\nkey2: false\nkey3:\n- e\n- f\n- g\n"
+        self.assertEqual(actual, expected)
+
+    def test_convert(self):
+        self.assertEqual(soyaml.convertType("foo"), "foo")
+        self.assertEqual(soyaml.convertType("foo.bar"), "foo.bar")
+        self.assertEqual(soyaml.convertType("123"), 123)
+        self.assertEqual(soyaml.convertType("0"), 0)
+        self.assertEqual(soyaml.convertType("00"), "00")
+        self.assertEqual(soyaml.convertType("0123"), "0123")
+        self.assertEqual(soyaml.convertType("123.456"), 123.456)
+        self.assertEqual(soyaml.convertType("0123.456"), "0123.456")
+        self.assertEqual(soyaml.convertType("true"), True)
+        self.assertEqual(soyaml.convertType("TRUE"), True)
+        self.assertEqual(soyaml.convertType("false"), False)
+        self.assertEqual(soyaml.convertType("FALSE"), False)
+        self.assertEqual(soyaml.convertType(""), "")


@@ -229,7 +229,7 @@ check_local_mods() {
 # {% endraw %}
 check_pillar_items() {
-  local pillar_output=$(salt-call pillar.items --out=json)
+  local pillar_output=$(salt-call pillar.items -lerror --out=json)
   cond=$(jq '.local | has("_errors")' <<< "$pillar_output")
   if [[ "$cond" == "true" ]]; then
@@ -247,67 +247,6 @@ check_sudoers() {
fi fi
} }
check_log_size_limit() {
local num_minion_pillars
num_minion_pillars=$(find /opt/so/saltstack/local/pillar/minions/ -type f | wc -l)
if [[ $num_minion_pillars -gt 1 ]]; then
if find /opt/so/saltstack/local/pillar/minions/ -type f | grep -q "_heavynode"; then
lsl_msg='distributed'
fi
else
local minion_id
minion_id=$(lookup_salt_value "id" "" "grains" "" "local")
local minion_arr
IFS='_' read -ra minion_arr <<< "$minion_id"
local node_type="${minion_arr[0]}"
local current_limit
# since it is possible for the salt-master service to be stopped when this is run, we need to check the pillar values locally
# we need to combine default local and default pillars before doing this so we can define --pillar-root in salt-call
local epoch_date=$(date +%s%N)
mkdir -vp /opt/so/saltstack/soup_tmp_${epoch_date}/
cp -r /opt/so/saltstack/default/pillar/ /opt/so/saltstack/soup_tmp_${epoch_date}/
# use \cp here to overwrite any pillar files from default with those in local for the tmp directory
\cp -r /opt/so/saltstack/local/pillar/ /opt/so/saltstack/soup_tmp_${epoch_date}/
current_limit=$(salt-call pillar.get elasticsearch:log_size_limit --local --pillar-root=/opt/so/saltstack/soup_tmp_${epoch_date}/pillar --out=newline_values_only)
rm -rf /opt/so/saltstack/soup_tmp_${epoch_date}/
local percent
case $node_type in
'standalone' | 'eval')
percent=50
;;
*)
percent=80
;;
esac
local disk_dir="/"
if [ -d /nsm ]; then
disk_dir="/nsm"
fi
local disk_size_1k
disk_size_1k=$(df $disk_dir | grep -v "^Filesystem" | awk '{print $2}')
local ratio="1048576"
local disk_size_gb
disk_size_gb=$( echo "$disk_size_1k" "$ratio" | awk '{print($1/$2)}' )
local new_limit
new_limit=$( echo "$disk_size_gb" "$percent" | awk '{printf("%.0f", $1 * ($2/100))}')
if [[ $current_limit != "$new_limit" ]]; then
lsl_msg='single-node'
lsl_details=( "$current_limit" "$new_limit" "$minion_id" )
fi
fi
}
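The removed check_log_size_limit function above derives a recommended elasticsearch:log_size_limit from the disk size reported by df and a node-type percentage. A minimal sketch of just that arithmetic, with hypothetical values standing in for the real `df` output and minion type:

```shell
# Sketch of the limit arithmetic from check_log_size_limit (assumed values).
disk_size_1k=209715200   # hypothetical /nsm size from `df`, in 1K blocks (200 GB)
percent=50               # 50 for standalone/eval nodes, 80 for everything else
ratio=1048576            # 1K blocks per GB
disk_size_gb=$(echo "$disk_size_1k" "$ratio" | awk '{print($1/$2)}')
new_limit=$(echo "$disk_size_gb" "$percent" | awk '{printf("%.0f", $1 * ($2/100))}')
echo "$new_limit"        # a 200 GB disk at 50% yields 100
```

If the computed value differs from the pillar's current limit, the real script records a "single-node" message so soup can prompt the user.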
check_os_updates() {
# Check to see if there are OS updates
echo "Checking for OS updates."
@@ -417,6 +356,8 @@ preupgrade_changes() {
[[ "$INSTALLEDVERSION" == 2.4.20 ]] && up_to_2.4.30
[[ "$INSTALLEDVERSION" == 2.4.30 ]] && up_to_2.4.40
[[ "$INSTALLEDVERSION" == 2.4.40 ]] && up_to_2.4.50
[[ "$INSTALLEDVERSION" == 2.4.50 ]] && up_to_2.4.60
[[ "$INSTALLEDVERSION" == 2.4.60 ]] && up_to_2.4.70
true
}
@@ -432,6 +373,8 @@ postupgrade_changes() {
[[ "$POSTVERSION" == 2.4.20 ]] && post_to_2.4.30
[[ "$POSTVERSION" == 2.4.30 ]] && post_to_2.4.40
[[ "$POSTVERSION" == 2.4.40 ]] && post_to_2.4.50
[[ "$POSTVERSION" == 2.4.50 ]] && post_to_2.4.60
[[ "$POSTVERSION" == 2.4.60 ]] && post_to_2.4.70
true
}
@@ -488,6 +431,35 @@ post_to_2.4.50() {
POSTVERSION=2.4.50
}
post_to_2.4.60() {
echo "Regenerating Elastic Agent Installers..."
so-elastic-agent-gen-installers
POSTVERSION=2.4.60
}
post_to_2.4.70() {
# Global pipeline changes to REDIS or KAFKA
echo "Removing global.pipeline pillar configuration"
sed -i '/pipeline:/d' /opt/so/saltstack/local/pillar/global/soc_global.sls
# Kafka configuration
mkdir -p /opt/so/saltstack/local/pillar/kafka
touch /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
touch /opt/so/saltstack/local/pillar/kafka/adv_kafka.sls
echo 'kafka: ' > /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
if ! grep -q "^ cluster_id:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
  kafka_cluster_id=$(get_random_value 22)
  echo ' cluster_id: '$kafka_cluster_id >> $local_salt_dir/pillar/kafka/soc_kafka.sls
fi
if ! grep -q "^ certpass:" $local_salt_dir/pillar/kafka/soc_kafka.sls; then
  kafkapass=$(get_random_value)
  echo ' certpass: '$kafkapass >> $local_salt_dir/pillar/kafka/soc_kafka.sls
fi
POSTVERSION=2.4.70
}
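The kafka pillar generation above relies on a grep-guarded append so that re-running soup does not duplicate or regenerate values. A hedged sketch of that idempotency pattern, using a temporary file and a placeholder value rather than the real pillar path and get_random_value helper:

```shell
# Sketch of the grep-guarded append used for the kafka pillar (hypothetical
# temp file and placeholder ID): a key is appended only when absent, so a
# second run leaves the first value untouched.
pillar=$(mktemp)
echo 'kafka: ' > "$pillar"
if ! grep -q "^  cluster_id:" "$pillar"; then
  echo '  cluster_id: abc123' >> "$pillar"    # first run: key absent, append
fi
if ! grep -q "^  cluster_id:" "$pillar"; then
  echo '  cluster_id: otherid' >> "$pillar"   # second run: guard blocks rewrite
fi
grep -c 'cluster_id' "$pillar"                # prints 1
rm -f "$pillar"
```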
repo_sync() {
echo "Sync the local repo."
su socore -c '/usr/sbin/so-repo-sync' || fail "Unable to complete so-repo-sync."
@@ -591,6 +563,8 @@ up_to_2.4.40() {
up_to_2.4.50() {
echo "Creating additional pillars.."
mkdir -p /opt/so/saltstack/local/pillar/stig/
mkdir -p /opt/so/saltstack/local/salt/stig/
chown socore:socore /opt/so/saltstack/local/salt/stig/
touch /opt/so/saltstack/local/pillar/stig/adv_stig.sls
touch /opt/so/saltstack/local/pillar/stig/soc_stig.sls
@@ -617,6 +591,124 @@ up_to_2.4.50() {
INSTALLEDVERSION=2.4.50
}
up_to_2.4.60() {
echo "Creating directory to store Suricata classification.config"
mkdir -vp /opt/so/saltstack/local/salt/suricata/classification
chown socore:socore /opt/so/saltstack/local/salt/suricata/classification
INSTALLEDVERSION=2.4.60
}
up_to_2.4.70() {
playbook_migration
toggle_telemetry
INSTALLEDVERSION=2.4.70
}
toggle_telemetry() {
if [[ -z $UNATTENDED && $is_airgap -ne 0 ]]; then
cat << ASSIST_EOF
--------------- SOC Telemetry ---------------
The Security Onion development team could use your help! Enabling SOC
Telemetry will help the team understand which UI features are being
used and enables informed prioritization of future development.
Adjust this setting at any time via the SOC Configuration screen.
Documentation: https://docs.securityonion.net/en/2.4/telemetry.html
ASSIST_EOF
echo -n "Continue the upgrade with SOC Telemetry enabled [Y/n]? "
read -r input
input=$(echo "${input,,}" | xargs echo -n)
echo ""
if [[ ${#input} -eq 0 || "$input" == "yes" || "$input" == "y" || "$input" == "yy" ]]; then
echo "Thank you for helping improve Security Onion!"
else
if so-yaml.py replace /opt/so/saltstack/local/pillar/soc/soc_soc.sls soc.telemetryEnabled false; then
echo "Disabled SOC Telemetry."
else
fail "Failed to disable SOC Telemetry; aborting."
fi
fi
echo ""
fi
}
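toggle_telemetry normalizes the user's answer before matching it, via Bash case conversion and xargs-based trimming. A minimal sketch of just that normalization, with a hypothetical reply:

```shell
# Sketch of the prompt normalization in toggle_telemetry: ${input,,}
# lowercases the reply (Bash 4+), and `xargs echo -n` trims surrounding
# whitespace, so "  YeS  " matches the accepted yes/y answers.
input="  YeS  "
input=$(echo "${input,,}" | xargs echo -n)
echo "$input"    # yes
if [[ ${#input} -eq 0 || "$input" == "yes" || "$input" == "y" ]]; then
  echo "telemetry stays enabled"
fi
```

An empty reply (just pressing ENTER) also passes the check, which is why the prompt defaults to yes.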
playbook_migration() {
# Start SOC Detections migration
mkdir -p /nsm/backup/detections-migration/{suricata,sigma/rules,elastalert}
# Remove cronjobs
crontab -l | grep -v 'so-playbook-sync_cron' | crontab -
crontab -l | grep -v 'so-playbook-ruleupdate_cron' | crontab -
if grep -A 1 'playbook:' /opt/so/saltstack/local/pillar/minions/* | grep -q 'enabled: True'; then
# Check for active Elastalert rules
active_rules_count=$(find /opt/so/rules/elastalert/playbook/ -type f -name "*.yaml" | wc -l)
if [[ "$active_rules_count" -gt 0 ]]; then
# Prompt the user to AGREE if active Elastalert rules found
echo
echo "$active_rules_count Active Elastalert/Playbook rules found."
echo "In preparation for the new Detections module, they will be backed up and then disabled."
echo
echo "If you would like to proceed, then type AGREE and press ENTER."
echo
# Read user input
read INPUT
if [ "${INPUT^^}" != 'AGREE' ]; then fail "SOUP canceled."; fi
echo "Backing up the Elastalert rules..."
rsync -av --stats /opt/so/rules/elastalert/playbook/*.yaml /nsm/backup/detections-migration/elastalert/
# Verify that rsync completed successfully
if [[ $? -eq 0 ]]; then
# Delete the Elastalert rules
rm -f /opt/so/rules/elastalert/playbook/*.yaml
echo "Active Elastalert rules have been backed up."
else
fail "Error: rsync failed to copy the files. Active Elastalert rules have not been backed up."
fi
fi
echo
echo "Exporting Sigma rules from Playbook..."
MYSQLPW=$(awk '/mysql:/ {print $2}' /opt/so/saltstack/local/pillar/secrets.sls)
docker exec so-mysql sh -c "exec mysql -uroot -p${MYSQLPW} -D playbook -sN -e \"SELECT id, value FROM custom_values WHERE value LIKE '%View Sigma%'\"" | while read -r id value; do
echo -e "$value" > "/nsm/backup/detections-migration/sigma/rules/$id.yaml"
done || fail "Failed to export Sigma rules..."
echo
echo "Exporting Sigma Filters from Playbook..."
docker exec so-mysql sh -c "exec mysql -uroot -p${MYSQLPW} -D playbook -sN -e \"SELECT issues.subject as title, custom_values.value as filter FROM issues JOIN custom_values ON issues.id = custom_values.customized_id WHERE custom_values.value LIKE '%sofilter%'\"" > /nsm/backup/detections-migration/sigma/custom-filters.txt || fail "Failed to export Custom Sigma Filters."
echo
echo "Backing up Playbook database..."
docker exec so-mysql sh -c "mysqldump -uroot -p${MYSQLPW} --databases playbook > /tmp/playbook-dump" || fail "Failed to dump Playbook database."
docker cp so-mysql:/tmp/playbook-dump /nsm/backup/detections-migration/sigma/playbook-dump.sql || fail "Failed to backup Playbook database."
fi
echo
echo "Stopping Playbook services & cleaning up..."
for container in so-playbook so-mysql so-soctopus; do
if [ -n "$(docker ps -q -f name=^${container}$)" ]; then
docker stop $container
fi
done
sed -i '/so-playbook\|so-soctopus\|so-mysql/d' /opt/so/conf/so-status/so-status.conf
rm -f /usr/sbin/so-playbook-* /usr/sbin/so-soctopus-* /usr/sbin/so-mysql-*
echo
echo "Playbook Migration is complete...."
}
determine_elastic_agent_upgrade() {
if [[ $is_airgap -eq 0 ]]; then
update_elastic_agent_airgap
@@ -664,6 +756,10 @@ update_airgap_rules() {
if [ -d /nsm/repo/rules/sigma ]; then
rsync -av $UPDATE_DIR/agrules/sigma/* /nsm/repo/rules/sigma/
fi
# SOC Detections Airgap
rsync -av $UPDATE_DIR/agrules/detect-sigma/* /nsm/rules/detect-sigma/
rsync -av $UPDATE_DIR/agrules/detect-yara/* /nsm/rules/detect-yara/
}
update_airgap_repo() {
@@ -795,7 +891,7 @@ verify_latest_update_script() {
else
echo "You are not running the latest soup version. Updating soup and its components. This might take multiple runs to complete."
-  salt-call state.apply common.soup_scripts queue=True -linfo --file-root=$UPDATE_DIR/salt --local
+  salt-call state.apply common.soup_scripts queue=True -lerror --file-root=$UPDATE_DIR/salt --local --out-file=/dev/null
# Verify that soup scripts updated as expected
get_soup_script_hashes
@@ -876,7 +972,6 @@ main() {
echo "### Preparing soup at $(date) ###"
echo ""
set_os
check_salt_master_status 1 || fail "Could not talk to salt master: Please run 'systemctl status salt-master' to ensure the salt-master service is running and check the log at /opt/so/log/salt/master."

salt/manager/tools/sbin_jinja/so-yara-update Executable file → Normal file
View File

View File

@@ -1,89 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% set MYSQLPASS = salt['pillar.get']('secrets:mysql') %}
# MySQL Setup
mysqlpkgs:
pkg.removed:
- skip_suggestions: False
- pkgs:
{% if grains['os_family'] != 'RedHat' %}
- python3-mysqldb
{% else %}
- python3-mysqlclient
{% endif %}
mysqletcdir:
file.directory:
- name: /opt/so/conf/mysql/etc
- user: 939
- group: 939
- makedirs: True
mysqlpiddir:
file.directory:
- name: /opt/so/conf/mysql/pid
- user: 939
- group: 939
- makedirs: True
mysqlcnf:
file.managed:
- name: /opt/so/conf/mysql/etc/my.cnf
- source: salt://mysql/etc/my.cnf
- user: 939
- group: 939
mysqlpass:
file.managed:
- name: /opt/so/conf/mysql/etc/mypass
- source: salt://mysql/etc/mypass
- user: 939
- group: 939
- template: jinja
- defaults:
MYSQLPASS: {{ MYSQLPASS }}
mysqllogdir:
file.directory:
- name: /opt/so/log/mysql
- user: 939
- group: 939
- makedirs: True
mysqldatadir:
file.directory:
- name: /nsm/mysql
- user: 939
- group: 939
- makedirs: True
mysql_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://mysql/tools/sbin
- user: 939
- group: 939
- file_mode: 755
#mysql_sbin_jinja:
# file.recurse:
# - name: /usr/sbin
# - source: salt://mysql/tools/sbin_jinja
# - user: 939
# - group: 939
# - file_mode: 755
# - template: jinja
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,2 +0,0 @@
mysql:
enabled: False

View File

@@ -1,27 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- mysql.sostatus
so-mysql:
docker_container.absent:
- force: True
so-mysql_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-mysql$
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,84 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set MYSQLPASS = salt['pillar.get']('secrets:mysql') %}
include:
- mysql.config
- mysql.sostatus
{% if MYSQLPASS == None %}
mysql_password_none:
test.configurable_test_state:
- changes: False
- result: False
- comment: "MySQL Password Error - Not Starting MySQL"
{% else %}
so-mysql:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-mysql:{{ GLOBALS.so_version }}
- hostname: so-mysql
- user: socore
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-mysql'].ip }}
- extra_hosts:
- {{ GLOBALS.manager }}:{{ GLOBALS.manager_ip }}
{% if DOCKER.containers['so-mysql'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-mysql'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- port_bindings:
{% for BINDING in DOCKER.containers['so-mysql'].port_bindings %}
- {{ BINDING }}
{% endfor %}
- environment:
- MYSQL_ROOT_HOST={{ GLOBALS.so_docker_gateway }}
- MYSQL_ROOT_PASSWORD=/etc/mypass
{% if DOCKER.containers['so-mysql'].extra_env %}
{% for XTRAENV in DOCKER.containers['so-mysql'].extra_env %}
- {{ XTRAENV }}
{% endfor %}
{% endif %}
- binds:
- /opt/so/conf/mysql/etc/my.cnf:/etc/my.cnf:ro
- /opt/so/conf/mysql/etc/mypass:/etc/mypass
- /nsm/mysql:/var/lib/mysql:rw
- /opt/so/log/mysql:/var/log/mysql:rw
{% if DOCKER.containers['so-mysql'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-mysql'].custom_bind_mounts %}
- {{ BIND }}
{% endfor %}
{% endif %}
- cap_add:
- SYS_NICE
- watch:
- file: mysqlcnf
- file: mysqlpass
- require:
- file: mysqlcnf
- file: mysqlpass
{% endif %}
delete_so-mysql_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-mysql$
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,32 +0,0 @@
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html
[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
host_cache_size=0
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
secure-file-priv=/var/lib/mysql-files
user=socore
log-error=/var/log/mysql/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
# Switch back to the native password module so that playbook can connect
authentication_policy=mysql_native_password

View File

@@ -1 +0,0 @@
{{ MYSQLPASS }}

View File

@@ -1,4 +0,0 @@
mysql:
enabled:
description: You can enable or disable MySQL.
advanced: True

View File

@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
/usr/sbin/so-restart mysql $1

View File

@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
/usr/sbin/so-start mysql $1

View File

@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
/usr/sbin/so-stop mysql $1

View File

@@ -277,38 +277,11 @@ http {
proxy_set_header X-Forwarded-Proto $scheme;
}
location /playbook/ {
auth_request /auth/sessions/whoami;
proxy_pass http://{{ GLOBALS.manager }}:3000/playbook/;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /soctopus/ {
auth_request /auth/sessions/whoami;
proxy_pass http://{{ GLOBALS.manager }}:7000/;
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
proxy_set_header X-Forwarded-Proto $scheme;
}
location /kibana/app/soc/ {
rewrite ^/kibana/app/soc/(.*) /soc/$1 permanent;
}
location /kibana/app/soctopus/ {
rewrite ^/kibana/app/soctopus/(.*) /soctopus/$1 permanent;
}
location /sensoroniagents/ {
if ($http_authorization = "") {

View File

@@ -3,5 +3,11 @@
https://securityonion.net/license; you may not use this file except in compliance with the
Elastic License 2.0. #}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% import_yaml 'pcap/defaults.yaml' as PCAPDEFAULTS %}
{% set PCAPMERGED = salt['pillar.get']('pcap', PCAPDEFAULTS.pcap, merge=True) %}
{# disable stenographer if the pcap engine is set to SURICATA #}
{% if GLOBALS.pcap_engine == "SURICATA" %}
{% do PCAPMERGED.update({'enabled': False}) %}
{% endif %}

View File

@@ -72,13 +72,6 @@ stenoca:
- user: 941
- group: 939
pcapdir:
file.directory:
- name: /nsm/pcap
- user: 941
- group: 941
- makedirs: True
pcaptmpdir:
file.directory:
- name: /nsm/pcaptmp

View File

@@ -15,3 +15,12 @@ include:
{% else %}
- pcap.disabled
{% endif %}
# This directory needs to exist regardless of whether STENO is enabled or not, in order for
# Sensoroni to be able to look at old steno PCAP data
pcapdir:
file.directory:
- name: /nsm/pcap
- user: 941
- group: 941
- makedirs: True

View File

@@ -4,32 +4,32 @@ pcap:
helpLink: stenographer.html
config:
  maxdirectoryfiles:
-    description: The maximum number of packet/index files to create before deleting old files.
+    description: By default, Stenographer limits the number of files in the pcap directory to 30000 to avoid limitations with the ext3 filesystem. However, if you're using the ext4 or xfs filesystems, then it is safe to increase this value. So if you have a large amount of storage and find that you only have 3 weeks' worth of PCAP on disk while still having plenty of free space, then you may want to increase this default setting.
    helpLink: stenographer.html
  diskfreepercentage:
-    description: The disk space percent to always keep free for PCAP
+    description: Stenographer will purge old PCAP on a regular basis to keep the disk free percentage at this level. If you have a distributed deployment with dedicated forward nodes, then the default value of 10 should be reasonable since Stenographer should be the main consumer of disk space in the /nsm partition. However, if you have systems that run both Stenographer and Elasticsearch at the same time (like eval and standalone installations), then you'll want to make sure that this value is no lower than 21 so that you avoid Elasticsearch hitting its watermark setting at 80% disk usage. If you have an older standalone installation, then you may need to manually change this value to 21.
    helpLink: stenographer.html
  blocks:
-    description: The number of 1MB packet blocks used by AF_PACKET to store packets in memory, per thread. You shouldn't need to change this.
+    description: The number of 1MB packet blocks used by Stenographer and AF_PACKET to store packets in memory, per thread. You shouldn't need to change this.
    advanced: True
    helpLink: stenographer.html
  preallocate_file_mb:
-    description: File size to pre-allocate for individual PCAP files. You shouldn't need to change this.
+    description: File size to pre-allocate for individual Stenographer PCAP files. You shouldn't need to change this.
    advanced: True
    helpLink: stenographer.html
  aiops:
-    description: The max number of async writes to allow at once.
+    description: The max number of async writes to allow for Stenographer at once.
    advanced: True
    helpLink: stenographer.html
  pin_to_cpu:
-    description: Enable CPU pinning for PCAP.
+    description: Enable CPU pinning for Stenographer PCAP.
    advanced: True
    helpLink: stenographer.html
  cpus_to_pin_to:
-    description: CPU to pin PCAP to. Currently only a single CPU is supported.
+    description: CPU to pin Stenographer PCAP to. Currently only a single CPU is supported.
    advanced: True
    helpLink: stenographer.html
  disks:
-    description: List of disks to use for PCAP. This is currently not used.
+    description: List of disks to use for Stenographer PCAP. This is currently not used.
    advanced: True
    helpLink: stenographer.html

View File

@@ -1,19 +0,0 @@
{% from 'vars/globals.map.jinja' import GLOBALS %}
# This state will create the SecOps Automation user within Playbook
include:
- playbook
wait_for_playbook:
cmd.run:
- name: until nc -z {{ GLOBALS.manager }} 3000; do sleep 1; done
- timeout: 300
create_user:
cmd.script:
- source: salt://playbook/files/automation_user_create.sh
- cwd: /root
- template: jinja
- onchanges:
- cmd: wait_for_playbook

View File

@@ -1,120 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set MYSQLPASS = salt['pillar.get']('secrets:mysql') %}
{% set PLAYBOOKPASS = salt['pillar.get']('secrets:playbook_db') %}
include:
- mysql
create_playbookdbuser:
mysql_user.present:
- name: playbookdbuser
- password: {{ PLAYBOOKPASS }}
- host: "{{ DOCKER.range.split('/')[0] }}/255.255.255.0"
- connection_host: {{ GLOBALS.manager }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
query_playbookdbuser_grants:
mysql_query.run:
- database: playbook
- query: "GRANT ALL ON playbook.* TO 'playbookdbuser'@'{{ DOCKER.range.split('/')[0] }}/255.255.255.0';"
- connection_host: {{ GLOBALS.manager }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
query_updatwebhooks:
mysql_query.run:
- database: playbook
- query: "update webhooks set url = 'http://{{ GLOBALS.manager_ip}}:7000/playbook/webhook' where project_id = 1"
- connection_host: {{ GLOBALS.manager }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
query_updatename:
mysql_query.run:
- database: playbook
- query: "update custom_fields set name = 'Custom Filter' where id = 21;"
- connection_host: {{ GLOBALS.manager }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
query_updatepluginurls:
mysql_query.run:
- database: playbook
- query: |-
update settings set value =
"--- !ruby/hash:ActiveSupport::HashWithIndifferentAccess
project: '1'
convert_url: http://{{ GLOBALS.manager }}:7000/playbook/sigmac
create_url: http://{{ GLOBALS.manager }}:7000/playbook/play"
where id = 43
- connection_host: {{ GLOBALS.manager }}
- connection_port: 3306
- connection_user: root
- connection_pass: {{ MYSQLPASS }}
playbook_sbin:
file.recurse:
- name: /usr/sbin
- source: salt://playbook/tools/sbin
- user: 939
- group: 939
- file_mode: 755
#playbook_sbin_jinja:
# file.recurse:
# - name: /usr/sbin
# - source: salt://playbook/tools/sbin_jinja
# - user: 939
# - group: 939
# - file_mode: 755
# - template: jinja
playbooklogdir:
file.directory:
- name: /opt/so/log/playbook
- dir_mode: 775
- user: 939
- group: 939
- makedirs: True
playbookfilesdir:
file.directory:
- name: /opt/so/conf/playbook/redmine-files
- dir_mode: 775
- user: 939
- group: 939
- makedirs: True
{% if 'idh' in salt['cmd.shell']("ls /opt/so/saltstack/local/pillar/minions/|awk -F'_' {'print $2'}|awk -F'.' {'print $1'}").split() %}
idh-plays:
file.recurse:
- name: /opt/so/conf/soctopus/sigma-import
- source: salt://idh/plays
- makedirs: True
cmd.run:
- name: so-playbook-import True
- onchanges:
- file: /opt/so/conf/soctopus/sigma-import
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,14 +0,0 @@
# This state will import the initial default playbook database.
# If there is an existing playbook database, it will be overwritten - no backups are made.
include:
- mysql
salt://playbook/files/playbook_db_init.sh:
cmd.script:
- cwd: /root
- template: jinja
'sleep 5':
cmd.run

View File

@@ -1,2 +0,0 @@
playbook:
enabled: False

View File

@@ -1,37 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
include:
- playbook.sostatus
so-playbook:
docker_container.absent:
- force: True
so-playbook_so-status.disabled:
file.comment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-playbook$
so-playbook-sync_cron:
cron.absent:
- identifier: so-playbook-sync_cron
- user: root
so-playbook-ruleupdate_cron:
cron.absent:
- identifier: so-playbook-ruleupdate_cron
- user: root
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,93 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
{% from 'docker/docker.map.jinja' import DOCKER %}
{% from 'vars/globals.map.jinja' import GLOBALS %}
{% set PLAYBOOKPASS = salt['pillar.get']('secrets:playbook_db') %}
include:
- playbook.config
- playbook.sostatus
{% if PLAYBOOKPASS == None %}
playbook_password_none:
test.configurable_test_state:
- changes: False
- result: False
- comment: "Playbook MySQL Password Error - Not Starting Playbook"
{% else %}
so-playbook:
docker_container.running:
- image: {{ GLOBALS.registry_host }}:5000/{{ GLOBALS.image_repo }}/so-playbook:{{ GLOBALS.so_version }}
- hostname: playbook
- name: so-playbook
- networks:
- sobridge:
- ipv4_address: {{ DOCKER.containers['so-playbook'].ip }}
- binds:
- /opt/so/conf/playbook/redmine-files:/usr/src/redmine/files:rw
- /opt/so/log/playbook:/playbook/log:rw
{% if DOCKER.containers['so-playbook'].custom_bind_mounts %}
{% for BIND in DOCKER.containers['so-playbook'].custom_bind_mounts %}
- {{ BIND }}
{% endfor %}
{% endif %}
- extra_hosts:
- {{ GLOBALS.manager }}:{{ GLOBALS.manager_ip }}
{% if DOCKER.containers['so-playbook'].extra_hosts %}
{% for XTRAHOST in DOCKER.containers['so-playbook'].extra_hosts %}
- {{ XTRAHOST }}
{% endfor %}
{% endif %}
- environment:
- REDMINE_DB_MYSQL={{ GLOBALS.manager }}
- REDMINE_DB_DATABASE=playbook
- REDMINE_DB_USERNAME=playbookdbuser
- REDMINE_DB_PASSWORD={{ PLAYBOOKPASS }}
{% if DOCKER.containers['so-playbook'].extra_env %}
{% for XTRAENV in DOCKER.containers['so-playbook'].extra_env %}
- {{ XTRAENV }}
{% endfor %}
{% endif %}
- port_bindings:
{% for BINDING in DOCKER.containers['so-playbook'].port_bindings %}
- {{ BINDING }}
{% endfor %}
delete_so-playbook_so-status.disabled:
file.uncomment:
- name: /opt/so/conf/so-status/so-status.conf
- regex: ^so-playbook$
so-playbook-sync_cron:
cron.present:
- name: /usr/sbin/so-playbook-sync > /opt/so/log/playbook/sync.log 2>&1
- identifier: so-playbook-sync_cron
- user: root
- minute: '*/5'
so-playbook-ruleupdate_cron:
cron.present:
- name: /usr/sbin/so-playbook-ruleupdate > /opt/so/log/playbook/update.log 2>&1
- identifier: so-playbook-ruleupdate_cron
- user: root
- minute: '1'
- hour: '6'
{% endif %}
{% else %}
{{sls}}_state_not_allowed:
test.fail_without_changes:
- name: {{sls}}_state_not_allowed
{% endif %}

View File

@@ -1,49 +0,0 @@
#!/bin/bash
# {%- set admin_pass = salt['pillar.get']('secrets:playbook_admin', None) -%}
# {%- set automation_pass = salt['pillar.get']('secrets:playbook_automation', None) %}

local_salt_dir=/opt/so/saltstack/local
try_count=1
max_tries=6
interval=10

# Poll until the so-playbook container is running, up to max_tries attempts
while [[ $try_count -le $max_tries ]]; do
  if docker top "so-playbook" &>/dev/null; then
    automation_group=6

    # Create user and retrieve api_key and user_id from response
    mapfile -t automation_res < <(
      curl -s --location --request POST 'http://127.0.0.1:3000/playbook/users.json' --user "admin:{{ admin_pass }}" --header 'Content-Type: application/json' --data '{
        "user" : {
          "login" : "automation",
          "password": "{{ automation_pass }}",
          "firstname": "SecOps",
          "lastname": "Automation",
          "mail": "automation2@localhost.local"
        }
      }' | jq -r '.user.api_key, .user.id'
    )
    automation_api_key=${automation_res[0]}
    automation_user_id=${automation_res[1]}

    # Add user_id from newly created user to Automation group
    curl -s --location --request POST "http://127.0.0.1:3000/playbook/groups/${automation_group}/users.json" \
      --user "admin:{{ admin_pass }}" \
      --header 'Content-Type: application/json' \
      --data "{
        \"user_id\" : ${automation_user_id}
      }"

    # Update the Automation API key in the secrets pillar
    so-yaml.py remove $local_salt_dir/pillar/secrets.sls secrets.playbook_automation_api_key
    printf '%s\n' \
      "  playbook_automation_api_key: $automation_api_key" >> $local_salt_dir/pillar/secrets.sls
    exit 0
  fi
  ((try_count++))
  sleep "${interval}s"
done

# Timeout exceeded, exit with non-zero exit code
exit 1


@@ -1,17 +0,0 @@
#!/bin/bash
# {%- set MYSQLPASS = salt['pillar.get']('secrets:mysql', None) -%}
# {%- set admin_pass = salt['pillar.get']('secrets:playbook_admin', None) %}
. /usr/sbin/so-common
default_salt_dir=/opt/so/saltstack/default
# Generate salt + hash for admin user
admin_salt=$(get_random_value 32)
admin_stage1_hash=$(echo -n '{{ admin_pass }}' | sha1sum | awk '{print $1}')
admin_hash=$(echo -n "${admin_salt}${admin_stage1_hash}" | sha1sum | awk '{print $1}')
sed -i "s/ADMIN_HASH/${admin_hash}/g" $default_salt_dir/salt/playbook/files/playbook_db_init.sql
sed -i "s/ADMIN_SALT/${admin_salt}/g" $default_salt_dir/salt/playbook/files/playbook_db_init.sql
# Copy file to destination + execute SQL
docker cp $default_salt_dir/salt/playbook/files/playbook_db_init.sql so-mysql:/tmp/playbook_db_init.sql
docker exec so-mysql /bin/bash -c "/usr/bin/mysql -b -uroot -p{{MYSQLPASS}} < /tmp/playbook_db_init.sql"
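
The two `sha1sum` lines above implement a Redmine-style salted password: `sha1(salt + sha1(password))`, with the inner hash computed first and then hashed again together with the random salt. A minimal Python sketch of the same computation (the sample salt and password below are illustrative, not values from this repo):

```python
import hashlib


def admin_hash(password: str, salt: str) -> str:
    """Mirror the shell pipeline: sha1(salt + sha1(password))."""
    stage1 = hashlib.sha1(password.encode()).hexdigest()
    return hashlib.sha1((salt + stage1).encode()).hexdigest()


# Illustrative values only -- not real credentials
print(admin_hash("example-password", "0123456789abcdef"))
```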

File diff suppressed because one or more lines are too long


@@ -1,2 +0,0 @@
{% import_yaml 'playbook/defaults.yaml' as PLAYBOOKDEFAULTS %}
{% set PLAYBOOKMERGED = salt['pillar.get']('playbook', PLAYBOOKDEFAULTS.playbook, merge=True) %}
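
The `merge=True` above makes `pillar.get` recursively overlay the operator's pillar values onto the shipped `playbook/defaults.yaml`, rather than replacing the whole dict when any `playbook` key is set. A rough sketch of that merge behavior (simplified; Salt's real implementation also handles lists and configurable merge strategies, and the example keys besides `enabled` are illustrative):

```python
def merge_defaults(defaults: dict, override: dict) -> dict:
    """Recursively overlay override onto defaults, as pillar.get(..., merge=True) does for dicts."""
    merged = dict(defaults)
    for key, val in override.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_defaults(merged[key], val)  # descend into nested dicts
        else:
            merged[key] = val  # scalar (or type mismatch): pillar wins
    return merged


defaults = {"playbook": {"enabled": False, "loglevel": "info"}}
pillar = {"playbook": {"enabled": True}}
print(merge_defaults(defaults, pillar))
# {'playbook': {'enabled': True, 'loglevel': 'info'}}
```

Setting only `playbook:enabled` in local pillar thus keeps every other default intact.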


@@ -1,4 +0,0 @@
playbook:
  enabled:
    description: You can enable or disable Playbook.
    helpLink: playbook.html


@@ -1,21 +0,0 @@
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
{% from 'allowed_states.map.jinja' import allowed_states %}
{% if sls.split('.')[0] in allowed_states %}
append_so-playbook_so-status.conf:
  file.append:
    - name: /opt/so/conf/so-status/so-status.conf
    - text: so-playbook
    - unless: grep -q so-playbook /opt/so/conf/so-status/so-status.conf
{% else %}
{{sls}}_state_not_allowed:
  test.fail_without_changes:
    - name: {{sls}}_state_not_allowed
{% endif %}


@@ -1,14 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
ENABLEPLAY=${1:-False}
docker exec so-soctopus /usr/local/bin/python -c "import playbook; print(playbook.play_import($ENABLEPLAY))"


@@ -1,22 +0,0 @@
#!/bin/bash
#
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
salt-call state.apply playbook.db_init,playbook queue=True
/usr/sbin/so-soctopus-restart
salt-call state.apply playbook,playbook.automation_user_create queue=True
/usr/sbin/so-soctopus-restart
echo "Importing Plays - NOTE: this will continue after installation finishes and could take an hour or more. Rebooting while the import is in progress will delay playbook imports."
sleep 5
so-playbook-ruleupdate >> /root/setup_playbook_rule_update.log 2>&1 &


@@ -1,12 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
docker exec so-soctopus python3 playbook_bulk-update.py


@@ -1,29 +0,0 @@
#!/bin/bash
# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
# https://securityonion.net/license; you may not use this file except in compliance with the
# Elastic License 2.0.
. /usr/sbin/so-common
if ! [ -f /opt/so/state/playbook_regen_plays ] || [ "$1" = "--force" ]; then
  echo "Refreshing Sigma & regenerating plays... "

  # Regenerate ElastAlert & update Plays
  docker exec so-soctopus python3 playbook_play-update.py

  # Delete current ElastAlert rules (-f so an empty directory is not an error)
  rm -f /opt/so/rules/elastalert/playbook/*.yaml

  # Regenerate ElastAlert rules
  so-playbook-sync

  # Create state file
  touch /opt/so/state/playbook_regen_plays
else
  printf "\nState file found, exiting...\nRerun with --force to override.\n"
fi

Some files were not shown because too many files have changed in this diff.