Added Kafka reset to SOC UI. Changing an active broker to a controller can leave topics unavailable, which otherwise requires manual intervention to resolve. This option runs a reset so the cluster can start from a clean slate and be configured to the desired state before re-enabling Kafka as the global pipeline.
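For example, submitting the reset from SOC writes the confirmation value into the Kafka pillar; a minimal sketch of the resulting soc_kafka.sls (SOC writes this value itself, it is not meant to be hand-edited):

    kafka:
      reset_kafka: 'YES_RESET_KAFKA'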

Signed-off-by: reyesj2 <94730068+reyesj2@users.noreply.github.com>
Author: reyesj2
Date: 2024-06-11 11:21:13 -04:00
parent 08557ae287
commit ca7b89c308
5 changed files with 31 additions and 5 deletions


@@ -3,6 +3,7 @@ kafka:
   cluster_id:
   kafka_pass:
   kafka_controllers:
+  reset_kafka:
   config:
     broker:
       advertised_x_listeners:


@@ -3,9 +3,6 @@
 # https://securityonion.net/license; you may not use this file except in compliance with the
 # Elastic License 2.0.
 
-include:
-  - kafka.sostatus
-
 so-kafka:
   docker_container.absent:
     - force: True


@@ -0,0 +1,9 @@
+# Copyright Security Onion Solutions LLC and/or licensed to Security Onion Solutions LLC under one
+# or more contributor license agreements. Licensed under the Elastic License 2.0 as shown at
+# https://securityonion.net/license; you may not use this file except in compliance with the
+# Elastic License 2.0.
+
+wipe_kafka_data:
+  file.absent:
+    - name: /nsm/kafka/data/
+    - force: True
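The state above can also be applied manually; a sketch, assuming it is run from the manager with the same targeting the SOC reaction uses (see the last file in this commit):

    salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' state.apply kafka.disabled,kafka.reset_kafka

Note that file.absent removes /nsm/kafka/data/ recursively, so any Kafka log segments not yet ingested into Elasticsearch are lost.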


@@ -1,6 +1,6 @@
 kafka:
   enabled:
-    description: Enable or disable Kafka. Recommended to have the desired configuration staged prior to enabling Kafka. Join all receiver nodes that will be converted to Kafka nodes to the grid, configure kafka_controllers with the hostnames of the nodes you want to act as controllers, and configure the default_replication_factor to the desired value for your redundancy needs.
+    description: Enable or disable Kafka. Recommended to have the desired configuration staged prior to enabling Kafka. Configure kafka_controllers with the hostnames of the nodes you want to act as controllers, join all receiver nodes that will be converted to Kafka nodes to the grid, and configure the default_replication_factor to the desired value for your redundancy needs.
     helpLink: kafka.html
   cluster_id:
     description: The ID of the Kafka cluster.
@@ -13,9 +13,12 @@ kafka:
     sensitive: True
     helpLink: kafka.html
   kafka_controllers:
-    description: A comma-separated list of Security Onion grid members that should act as controllers for this Kafka cluster. By default, the grid manager will use a 'combined' role where it acts as both a broker and a controller. Keep the total number of Kafka controllers odd, and do not assign ALL of your Kafka nodes as controllers or this Kafka cluster will not start.
+    description: A comma-separated list of Security Onion hosts that will act as Kafka controllers. These hosts are responsible for managing the Kafka cluster. WARNING - The hostnames of receiver nodes intended to be controllers should be added here BEFORE they join the Security Onion grid or BEFORE Kafka is enabled. This ensures data is not lost by converting a data broker to a controller. Failure to do so may result in topics becoming unavailable and requiring manual intervention or a Kafka data reset to repair.
     forcedType: "string"
     helpLink: kafka.html
+  reset_kafka:
+    description: Disable and reset the Kafka cluster. This removes all Kafka data, including logs that may not yet have been ingested into Elasticsearch, and reverts the grid to using REDIS as the global pipeline. This is useful when testing different Kafka configurations, such as rearranging Kafka brokers and controllers, allowing you to reset the cluster rather than manually fix any issues that arise from attempting to reassign a Kafka broker as a controller. Enter 'YES_RESET_KAFKA' and submit to disable and reset Kafka. Make any required configuration changes and re-enable Kafka when ready.
+    helpLink: kafka.html
   config:
     broker:
       advertised_x_listeners:
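As an illustration of the kafka_controllers setting described above, a sketch of the resulting pillar value (the hostnames are hypothetical):

    kafka:
      kafka_controllers: 'receiver-01,receiver-02,receiver-03'

Per the description, keep the controller count odd and leave at least one Kafka node acting as a broker, or the cluster will not start.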


@@ -75,4 +75,20 @@ engines:
                   cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' state.apply kafka
               - cmd.run:
                   cmd: salt-call state.apply elasticfleet
+  - files:
+      - /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls
+      - /opt/so/saltstack/local/pillar/kafka/adv_kafka.sls
+    pillar: kafka.reset_kafka
+    default: ''
+    actions:
+      from:
+        '*':
+          to:
+            'YES_RESET_KAFKA':
+              - cmd.run:
+                  cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' saltutil.kill_all_jobs
+              - cmd.run:
+                  cmd: salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' state.apply kafka.disabled,kafka.reset_kafka
+              - cmd.run:
+                  cmd: /usr/sbin/so-yaml.py remove /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.reset_kafka
+    interval: 10
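Taken together, the reaction amounts to a three-step reset; an equivalent manual sequence from the manager (a sketch, using the same commands the reaction runs):

    # Stop any in-flight Salt jobs on the Kafka-related nodes
    salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' saltutil.kill_all_jobs
    # Tear down the Kafka containers and wipe /nsm/kafka/data/
    salt -C 'G@role:so-standalone or G@role:so-manager or G@role:so-managersearch or G@role:so-receiver' state.apply kafka.disabled,kafka.reset_kafka
    # Clear the trigger so the watcher (polling every 10 seconds) does not fire again
    /usr/sbin/so-yaml.py remove /opt/so/saltstack/local/pillar/kafka/soc_kafka.sls kafka.reset_kafka

After the final step, kafka.reset_kafka falls back to its default of '' until an operator submits 'YES_RESET_KAFKA' again.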