Merge branch 'main' of https://github.com/Yamato-Security/hayabusa into feature/463-statistics-add-channel

This commit is contained in:
garigariganzy
2022-08-30 21:54:28 +09:00
65 changed files with 4130 additions and 2885 deletions

.gitignore vendored

@@ -5,4 +5,6 @@
.DS_Store
test_*
.env
/logs
*.csv
hayabusa*


@@ -1,20 +1,99 @@
# Changes
## v1.4 [2022/XX/XX]
## v1.6.0 [2022/XX/XX]
**New Features:**
- XXX
**Enhancements:**
- Display the top five detected rules for each level in the results summary. (#667) (@hitenkoku)
- Added the `--no-summary` option to not output the results summary. (#672) (@hitenkoku)
- Made the results summary more compact. (#675) (@hitenkoku)
**Bug Fixes:**
- Fixed a crash that occurred when the logon summary option was specified. (#674) (@hitenkoku)
## v1.5.1 [2022/08/20]
**Enhancements:**
- Re-released v1.5.1 with an added profile that outputs a CSV format importable into Timesketch. (#668) (@YamatoSecurity)
## v1.5.1 [2022/08/19]
**Bug Fixes:**
- Critical, medium and low level alerts were not being output in color. (#663) (@fukusuket)
- Hayabusa would crash when a nonexistent evtx file was specified with `-f`. (#664) (@fukusuket)
## v1.5.0 [2022/08/18]
**New Features:**
- Output can be customized with the `config/profiles.yaml` and `config/default_profile.yaml` configuration files. (#165) (@hitenkoku)
- Added support for the `null` keyword, which checks that a target field does not exist in the record. (#643) (@hitenkoku)
**Enhancements:**
- Removed `./` from the rule path output of the rule update feature. (#642) (@hitenkoku)
- Added output aliases for MITRE ATT&CK related tags and other tags. (#637) (@hitenkoku)
- Added commas to the numbers in the results summary to make them easier to read. (#649) (@hitenkoku)
- Grouped the `-h` option menu to make it easier to use. (#651) (@YamatoSecurity and @hitenkoku)
- Added percentages to the detection counts in the results summary. (#658) (@hitenkoku)
**Bug Fixes:**
- Fixed a miscalculation of the number of undetected events caused by aggregation condition rule detection. (#640) (@hitenkoku)
- Fixed a race condition bug where some events (around 0.01%) were not detected. (#639 #660) (@fukusuket)
## v1.4.3 [2022/08/03]
**Bug Fixes:**
- Fixed an error that occurred in environments without the VC redistributable package installed. (#635) (@fukusuket)
## v1.4.2 [2022/07/24]
**Enhancements:**
- The repository to update can now be specified with the `--rules` option when using the `--update-rules` option. (#615) (@hitenkoku)
- Faster speed through improved parallel processing. (#479) (@kazuminn)
- Changed `RulePath` to `RuleFile` when the `--output` option is used. `RuleFile` outputs only the file name in order to reduce the size of the output file. (#623) (@hitenkoku)
**Bug Fixes:**
- Fixed a config folder loading error when running hayabusa with the `cargo run` command. (#618) (@hitenkoku)
## v1.4.1 [2022/06/30]
**Enhancements:**
- When there is no corresponding `details` entry in the rule or in `./rules/config/default_details.txt`, all field information is output to the `Details` column. (#606) (@hitenkoku)
- Added the `--deep-scan` option. Without this option, only the event IDs specified in `config/target_event_ids.txt` are scanned. With this option, all event IDs are scanned. (#608) (@hitenkoku)
- Moved `channel_abbreviations.txt`, `statistics_event_info.txt` and `target_event_IDs.txt` from the `config` directory to the `rules/config` directory so that they can be updated with `-U, --update-rules`.
## v1.4.0 [2022/06/26]
**New Features:**
- Added the `--target-file-ext` option. You can specify extensions other than evtx; however, the file contents must still be in the evtx file format. (#586) (@hitenkoku)
- Added the `--exclude-status` option. You can filter rules out of loading based on their `status` field. (#596) (@hitenkoku)
**Enhancements:**
- When a rule has no `details` field, the default output configured in `rules/config/default_details.txt` is used. (#359) (@hitenkoku)
- Updated the Clap crate package. (#413) (@hitenkoku)
- When no options are specified, the same output as `--help` is displayed. (#387) (@hitenkoku)
- hayabusa.exe can now be run from a directory other than the current working directory. (#592) (@hitenkoku)
- Output the size of the file specified with the `output` option. (#595) (@hitenkoku)
**Bug Fixes:**
- XXX
- Fixed an issue where long colored output caused an error and termination. (#603) (@hitenkoku)
- Test rules under `rules/tools/sigmac/testfiles` were included in the `Excluded rules` total, so they are now ignored. (#602) (@hitenkoku)
## v1.3.2 [2022/06/13]
@@ -35,6 +114,7 @@
- Changed the time display format of the `--rfc-3339` option. (#574) (@hitenkoku)
- Changed the `-R / --display-record-id` option to `-R / --hide-record-id`. Record IDs are now output by default and hidden when the `-R` option is specified. (#579) (@hitenkoku)
- Added a message that is displayed when rules are loaded. (#583) (@hitenkoku)
- Test yml files under `rules/tools/sigmac/testfiles` are no longer loaded. (#602) (@hitenkoku)
**Bug Fixes:**
@@ -97,7 +177,7 @@
**New Features:**
- Added the `-C / --config` option to specify the detection rule config directory. (Useful for live analysis on Windows.) (@hitenkoku)
- Support for `|equalsfield`, which checks whether the values of two fields in a rule match. (@hach1yon)
- Added the `-p / --pivot-keywords-list` option to output information such as attacked machine names and suspicious user names as a pivot keyword list. (@kazuminn)
- Added the `-F / --full-data` option to output all field information, not only the fields specified in a rule's `details`. (@hach1yon)
@@ -128,7 +208,7 @@
- A single rule can be specified with the `-r / --rules` option. (Useful for testing rules!) (@kazuminn)
- Rule update option (`-u / --update-rules`): update to the latest rules in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) repository. (@hitenkoku)
- Live analysis option (`-l / --live-analysis`): easily perform live analysis on Windows machines without specifying the Windows event log directory. (@hitenkoku)
**Enhancements:**


@@ -1,20 +1,99 @@
# Changes
## v1.4 [2022/XX/XX]
## v1.6.0 [2022/XX/XX]
**New Features:**
- XXX
**Enhancements:**
- Added top alerts to results summary. (#667) (@hitenkoku)
- Added `--no-summary` option to not display the results summary. (#672) (@hitenkoku)
- Made the results summary more compact. (#675) (@hitenkoku)
**Bug Fixes:**
- Hayabusa would crash with `-L` option (logon summary option). (#674) (@hitenkoku)
## v1.5.1 [2022/08/20]
**Enhancements:**
- Re-released v1.5.1 with an updated output profile that is compatible with Timesketch. (#668) (@YamatoSecurity)
## v1.5.1 [2022/08/19]
**Bug Fixes:**
- Critical, medium and low level alerts were not being displayed in color. (#663) (@fukusuket)
- Hayabusa would crash when an evtx file specified with `-f` did not exist. (#664) (@fukusuket)
## v1.5.0 [2022/08/18]
**New Features:**
- Customizable output of fields defined at `config/profiles.yaml` and `config/default_profile.yaml`. (#165) (@hitenkoku)
- Implemented the `null` keyword for rule detection. It is used to check that a target field does not exist in the record. (#643) (@hitenkoku)
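As a rough illustration (a sketch with assumed names, not Hayabusa's actual implementation), the `null` keyword boils down to testing that a key is absent from the parsed event record:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a `null` field check: the condition matches
// only when the target field is absent from the event record.
fn matches_null(record: &HashMap<String, String>, field: &str) -> bool {
    !record.contains_key(field)
}

fn main() {
    let mut record = HashMap::new();
    record.insert("Channel".to_string(), "Security".to_string());

    // A "ParentImage: null" condition matches: the field does not exist.
    assert!(matches_null(&record, "ParentImage"));
    // A "Channel: null" condition does not match: the field exists.
    assert!(!matches_null(&record, "Channel"));
    println!("null keyword sketch ok");
}
```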
**Enhancements:**
- Trimmed `./` from the rule path when updating. (#642) (@hitenkoku)
- Added new output aliases for MITRE ATT&CK tags and other tags. (#637) (@hitenkoku)
- Organized the menu output when `-h` is used. (#651) (@YamatoSecurity and @hitenkoku)
- Added commas to summary numbers to make them easier to read. (#649) (@hitenkoku)
- Added output percentage of detections in Result Summary. (#658) (@hitenkoku)
**Bug Fixes:**
- Fixed miscalculation of Data Reduction due to aggregation condition rule detection. (#640) (@hitenkoku)
- Fixed a race condition bug where a few events (around 0.01%) would not be detected. (#639 #660) (@fukusuket)
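The comma grouping added to the summary numbers in #649 can be sketched in plain Rust (the real code presumably uses the `num-format` crate added in Cargo.toml; this standalone helper is only an illustration):

```rust
// Illustrative thousands-separator formatting, like the summary numbers
// in #649. Not the actual Hayabusa code.
fn group_thousands(n: u64) -> String {
    let digits = n.to_string();
    let mut out = String::new();
    for (i, c) in digits.chars().enumerate() {
        // Insert a comma whenever a multiple of three digits remains.
        let remaining = digits.len() - i;
        if i > 0 && remaining % 3 == 0 {
            out.push(',');
        }
        out.push(c);
    }
    out
}

fn main() {
    assert_eq!(group_thousands(1234567), "1,234,567");
    assert_eq!(group_thousands(512), "512");
    println!("{}", group_thousands(4130)); // prints 4,130
}
```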
## v1.4.3 [2022/08/03]
**Bug Fixes:**
- Hayabusa would not run on Windows 11 when the VC redistribute package was not installed but now everything is compiled statically. (#635) (@fukusuket)
## v1.4.2 [2022/07/24]
**Enhancements:**
- You can now update rules to a custom directory by combining the `--update-rules` and `--rules` options. (#615) (@hitenkoku)
- Improved speed with parallel processing by up to 20% with large files. (#479) (@kazuminn)
- When saving files with `-o`, the `.yml` detection rule path column changed from `RulePath` to `RuleFile` and only the rule file name will be saved in order to decrease file size. (#623) (@hitenkoku)
**Bug Fixes:**
- Fixed a runtime error when hayabusa is run from a different path than the current directory. (#618) (@hitenkoku)
## v1.4.1 [2022/06/30]
**Enhancements:**
- When no `details` field is defined in a rule nor in `./rules/config/default_details.txt`, all fields will be outputted to the `details` column. (#606) (@hitenkoku)
- Added the `-D, --deep-scan` option. By default, events are now filtered by the Event IDs that detection rules are defined for in `./rules/config/target_event_IDs.txt`. This should improve performance by 25~55% while still detecting almost everything. If you want to do a thorough scan on all events, you can disable the event ID filter with `-D, --deep-scan`. (#608) (@hitenkoku)
- `channel_abbreviations.txt`, `statistics_event_info.txt` and `target_event_IDs.txt` have been moved from the `config` directory to the `rules/config` directory in order to provide updates with `-U, --update-rules`.
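The event ID filtering described above can be sketched as an allowlist lookup that is bypassed when deep scan is enabled (function and variable names here are assumptions for illustration, not the actual Hayabusa code):

```rust
use std::collections::HashSet;

// Sketch of the default event ID filter: only IDs listed in
// target_event_IDs.txt are scanned unless --deep-scan is given.
fn should_scan(event_id: u32, target_ids: &HashSet<u32>, deep_scan: bool) -> bool {
    deep_scan || target_ids.contains(&event_id)
}

fn main() {
    // A tiny stand-in for the IDs loaded from target_event_IDs.txt.
    let target_ids: HashSet<u32> = [4624, 4625, 1102].iter().copied().collect();

    assert!(should_scan(4624, &target_ids, false)); // listed ID is scanned
    assert!(!should_scan(9999, &target_ids, false)); // unlisted ID is skipped
    assert!(should_scan(9999, &target_ids, true)); // --deep-scan scans everything
    println!("deep-scan filter sketch ok");
}
```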
## v1.4.0 [2022/06/26]
**New Features:**
- Added `--target-file-ext` option. You can specify additional file extensions to scan in addition to the default `.evtx` files. For example, `--target-file-ext evtx_data` or multiple extensions with `--target-file-ext evtx1 evtx2`. (#586) (@hitenkoku)
- Added `--exclude-status` option: You can ignore rules based on their `status`. (#596) (@hitenkoku)
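A minimal sketch of the status-based exclusion above (assumed names, not the actual implementation): rules whose `status` value is in the excluded set are skipped at load time.

```rust
// Sketch of --exclude-status: a rule is excluded from loading when its
// `status` field matches one of the user-supplied status values.
fn is_excluded(rule_status: &str, excluded: &[&str]) -> bool {
    excluded.contains(&rule_status)
}

fn main() {
    // e.g. hayabusa --exclude-status experimental deprecated
    let excluded = ["experimental", "deprecated"];

    assert!(is_excluded("experimental", &excluded));
    assert!(!is_excluded("stable", &excluded));
    println!("exclude-status sketch ok");
}
```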
**Enhancements:**
- Added default details output based on `rules/config/default_details.txt` when no `details` field in a rule is specified. (i.e. Sigma rules) (#359) (@hitenkoku)
- Updated clap crate package to version 3. (#413) (@hitenkoku)
- Updated the default usage and help menu. (#387) (@hitenkoku)
- Hayabusa can be run from any directory, not just from the current directory. (#592) (@hitenkoku)
- Added saved file size output when `output` is specified. (#595) (@hitenkoku)
**Bug Fixes:**
- XXX
- Fixed output error and program termination when long output is displayed with color. (#603) (@hitenkoku)
- Ignore loading yml files in `rules/tools/sigmac/testfiles` to fix `Excluded rules` count. (#602) (@hitenkoku)
## v1.3.2 [2022/06/13]
@@ -99,7 +178,7 @@
**New Features:**
- Specify config directory (`-C / --config`): When specifying a different rules directory, the rules config directory will still be the default `rules/config`, so this option is useful when you want to test rules and their config files in a different directory. (@hitenkoku)
- `|equalsfield` aggregator: In order to write rules that compare if two fields are equal or not. (@hach1yon)
- Pivot keyword list generator feature (`-p / --pivot-keywords-list`): Will generate a list of keywords to grep for to quickly identify compromised machines, suspicious usernames, files, etc... (@kazuminn)
- `-F / --full-data` option: Will output all field information in addition to the fields defined in the rules `details`. (@hach1yon)
@@ -130,7 +209,7 @@
- Can specify a single rule with the `-r / --rules` option. (Great for testing rules!) (@kazuminn)
- Rule update option (`-u / --update-rules`): Update to the latest rules in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) repository. (@hitenkoku)
- Live analysis option (`-l / --live-analysis`): Can easily perform live analysis on Windows machines without specifying the Windows event log directory. (@hitenkoku)
**Enhancements:**

Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -1,19 +1,19 @@
[package]
name = "hayabusa"
version = "1.4.0-dev"
version = "1.6.0-dev"
authors = ["Yamato Security @SecurityYamato"]
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
itertools = "*"
dashmap = "*"
clap = { version = "3.*", features = ["derive", "cargo"]}
evtx = { git = "https://github.com/Yamato-Security/hayabusa-evtx.git" , rev = "158d496" , features = ["fast-alloc"]}
evtx = { git = "https://github.com/Yamato-Security/hayabusa-evtx.git" , features = ["fast-alloc"]}
quick-xml = {version = "0.*", features = ["serialize"] }
serde = { version = "1.*", features = ["derive"] }
serde_json = { version = "1.0"}
serde_derive = "1.*"
regex = "1.5.*"
regex = "1"
csv = "1.1.*"
base64 = "*"
flate2 = "1.*"
@@ -37,10 +37,17 @@ bytesize = "1.*"
hyper = "0.14.*"
lock_api = "0.4.*"
crossbeam-utils = "0.8.*"
num-format = "*"
comfy-table = "6.*"
[build-dependencies]
static_vcruntime = "2.*"
[dev-dependencies]
rand = "0.8.*"
[target.'cfg(windows)'.dependencies]
is_elevated = "0.1.*"
static_vcruntime = "2.*"
[target.'cfg(unix)'.dependencies] #Mac and Linux
openssl = { version = "*", features = ["vendored"] } #vendored is needed to compile statically.

README-1.5.1-Japanese.pdf Normal file

Binary file not shown.

README-1.5.1.pdf Normal file

Binary file not shown.


@@ -1,16 +1,16 @@
<div align="center">
<p>
<img alt="Hayabusa Logo" src="hayabusa-logo.png" width="50%">
<img alt="Hayabusa Logo" src="logo.png" width="50%">
</p>
[<a href="README.md">English</a>] | [<b>日本語</b>]
</div>
---
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85DownLoads
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85Downloads
[tag-2]: https://img.shields.io/github/stars/Yamato-Security/hayabusa?style=plastic&label=GitHub%F0%9F%A6%85Stars
[tag-3]: https://img.shields.io/github/v/release/Yamato-Security/hayabusa?display_name=tag&label=latest-version&style=plastic
[tag-4]: https://img.shields.io/badge/Black%20Hat%20Arsenal-Asia%202022-blue
[tag-4]: https://github.com/toolswatch/badges/blob/master/arsenal/asia/2022.svg
[tag-5]: https://rust-reportcard.xuri.me/badge/github.com/Yamato-Security/hayabusa
[tag-6]: https://img.shields.io/badge/Maintenance%20Level-Actively%20Developed-brightgreen.svg
[tag-7]: https://img.shields.io/badge/Twitter-00acee?logo=twitter&logoColor=white
@@ -21,14 +21,14 @@
# About Hayabusa
Hayabusa is a **Windows event log fast forensics timeline generator** and **threat hunting tool** created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan. Hayabusa means ["peregrine falcon"](https://en.wikipedia.org/wiki/Peregrine_falcon) in Japanese and was chosen because the peregrine falcon is the fastest animal in the world, great at hunting, and highly trainable. It is developed in [Rust](https://www.rust-lang.org/) and supports multithreading in order to be as fast as possible. We also provide a [tool](https://github.com/Yamato-Security/hayabusa-rules/tree/main/tools/sigmac) to convert [Sigma](https://github.com/SigmaHQ/Sigma) rules into Hayabusa rule format. Like Sigma, Hayabusa detection rules are written in YML, so they are highly customizable and extensible. It can be run on a live system for live analysis, or on logs collected from multiple systems for offline analysis. (At the moment, it does not support real-time alerting or periodic scans.) The output is consolidated into a single CSV timeline for easy analysis in Excel, [Timeline Explorer](https://ericzimmerman.github.io/#!index.md), [Elastic Stack](doc/ElasticStackImport/ElasticStackImport-Japanese.md), etc.
Hayabusa is a **Windows event log fast forensics timeline generator** and **threat hunting tool** created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan. Hayabusa means ["peregrine falcon"](https://en.wikipedia.org/wiki/Peregrine_falcon) in Japanese and was chosen because the peregrine falcon is the fastest animal in the world, great at hunting, and highly trainable. It is developed in [Rust](https://www.rust-lang.org/) and supports multithreading in order to be as fast as possible. We also provide a [tool](https://github.com/Yamato-Security/hayabusa-rules/tree/main/tools/sigmac) to convert [Sigma](https://github.com/SigmaHQ/Sigma) rules into Hayabusa rule format. Like Sigma, Hayabusa detection rules are written in YML, so they are highly customizable and extensible. It can be run on a live system for live analysis, or on logs collected from multiple systems for offline analysis. It can also be combined with [Velociraptor](https://docs.velociraptor.app/) and the [Hayabusa artifact](https://docs.velociraptor.app/exchange/artifacts/pages/windows.eventlogs.hayabusa/) for enterprise-wide threat hunting and incident response. The output is consolidated into a single CSV timeline for easy analysis in Excel, [Timeline Explorer](https://ericzimmerman.github.io/#!index.md), [Elastic Stack](doc/ElasticStackImport/ElasticStackImport-Japanese.md), [Timesketch](https://timesketch.org/), etc.
## Table of Contents
- [About Hayabusa](#hayabusa-について)
- [Table of Contents](#目次)
- [Main Goals](#主な目的)
- [Threat Hunting](#スレット脅威ハンティング)
- [Threat Hunting and Enterprise-wide DFIR](#スレット脅威ハンティングと企業向けの広範囲なdfir)
- [Fast Forensics Timeline Generation](#フォレンジックタイムラインの高速生成)
- [Screenshots](#スクリーンショット)
- [Startup](#起動画面)
@@ -39,9 +39,9 @@ Hayabusa is a tool created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan
- [Analysis in Timeline Explorer](#timeline-explorerでの解析)
- [Filtering on critical alerts and grouping by computer](#criticalアラートのフィルタリングとコンピュータごとのグルーピング)
- [Analysis with Elastic Stack dashboards](#elastic-stackダッシュボードでの解析)
- [Analysis in Timesketch](#timesketchでの解析)
- [Sample Timeline Results](#タイムラインのサンプル結果)
- [Features](#特徴機能)
- [Planned Features](#予定されている機能)
- [Downloads](#ダウンロード)
- [Git Cloning](#gitクローン)
- [Advanced: Compiling from Source (Optional)](#アドバンス-ソースコードからのコンパイル任意)
@@ -49,26 +49,39 @@ Hayabusa is a tool created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan
- [Cross-compiling 32-bit Windows Binaries](#32ビットwindowsバイナリのクロスコンパイル)
- [Notes on Compiling on macOS](#macosでのコンパイルの注意点)
- [Notes on Compiling on Linux](#linuxでのコンパイルの注意点)
- [Cross-compiling Linux MUSL Binaries](#linuxのmuslバイナリのクロスコンパイル)
- [Notes on Compiling on Linux](#linuxでのコンパイルの注意点-1)
- [Running Hayabusa](#hayabusaの実行)
- [Caution: Anti-Virus/EDR False Positives](#注意-アンチウィルスedrの誤検知)
- [Caution: Anti-Virus/EDR False Positives and Slow First Runs](#注意-アンチウィルスedrの誤検知と遅い初回実行)
- [Windows](#windows)
- [Linux](#linux)
- [macOS](#macos)
- [Usage](#使用方法)
- [Main Commands](#主なコマンド)
- [Command Line Options](#コマンドラインオプション)
- [Usage Examples](#使用例)
- [Creating Pivot Keywords](#ピボットキーワードの作成)
- [Logon Summary](#ログオン情報の要約)
- [Testing Hayabusa on Sample evtx Files](#サンプルevtxファイルでhayabusaをテストする)
- [Hayabusa Output](#hayabusaの出力)
- [Profiles](#プロファイル)
- [1. `minimal` Profile Output](#1-minimalプロファイルの出力)
- [2. `standard` Profile Output](#2-standardプロファイルの出力)
- [3. `verbose` Profile Output](#3-verboseプロファイルの出力)
- [4. `verbose-all-field-info` Profile Output](#4-verbose-all-field-infoプロファイルの出力)
- [5. `verbose-details-and-all-field-info` Profile Output](#5-verbose-details-and-all-field-infoプロファイルの出力)
- [6. `timesketch` Profile Output](#6-timesketchプロファイルの出力)
- [Profile Comparison](#プロファイルの比較)
- [Profile Field Aliases](#profile-field-aliases)
- [Level Abbreviations](#levelの省略)
- [MITRE ATT&CK Tactics Abbreviations](#mitre-attck戦術の省略)
- [Channel Abbreviations](#channel情報の省略)
- [Progress Bar](#プログレスバー)
- [Color Output](#標準出力へのカラー設定)
- [Event Frequency Timeline](#イベント頻度タイムライン)
- [Most Detected Dates Output](#最多検知日の出力)
- [Most Detected Computers Output](#最多検知端末名の出力)
- [Results Summary](#結果のサマリ)
- [Event Frequency Timeline](#イベント頻度タイムライン)
- [Most Detections Output](#最多検知の出力)
- [Most Detected Computers Output](#最多検知端末名の出力)
- [Hayabusa Rules](#hayabusaルール)
- [Hayabusa v.s. Converted Sigma Rules](#hayabusa-vs-変換されたsigmaルール)
- [Detection Rule Tuning](#検知ルールのチューニング)
@@ -87,9 +100,11 @@ Hayabusa is a tool created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan
## Main Goals
### Threat Hunting
### Threat Hunting and Enterprise-wide DFIR
Hayabusa currently has over 2,300 Sigma rules and over 130 Hayabusa detection rules, with more rules added regularly. The ultimate goal is to be able to push out Hayabusa agents to all Windows endpoints and have them send back alerts to a central server for incident response and periodic threat hunting.
Hayabusa currently has over 2,600 Sigma rules and over 130 Hayabusa detection rules, with more rules added regularly.
By using the [Hayabusa artifact](https://docs.velociraptor.app/exchange/artifacts/pages/windows.eventlogs.hayabusa/) for [Velociraptor](https://docs.velociraptor.app/), you can use Hayabusa for free not only for enterprise-wide threat hunting but also for DFIR (digital forensics and incident response). By combining these two open source tools, you can essentially reproduce a SIEM retroactively even in environments where no SIEM is configured. You can learn how in [this](https://www.youtube.com/watch?v=Q1IoGX--814) video by [Eric Capuano](https://twitter.com/eric_capuano).
The ultimate goal is to be able to push out Hayabusa agents to all Windows endpoints and have them send back alerts to a central server for incident response and periodic threat hunting.
### Fast Forensics Timeline Generation
@@ -97,29 +112,29 @@ Windows event logs are
Windows event log analysis has traditionally been a very long and tedious process because 1) the data format is hard to analyze and 2) the majority of the data is noise that is not useful for investigation. Hayabusa's main goal is to extract only the useful data and present it in a readable format usable not only by professionally trained analysts but by any Windows system administrator.
It is not meant to be a replacement for deep-dive analysis tools like [Evtx Explorer](https://ericzimmerman.github.io/#!index.md) or [Event Log Explorer](https://eventlogxp.com/), but is intended to let analysts do 80% of their work in 20% of the time.
Compared to traditional Windows event log analysis, Hayabusa aims to let analysts do 80% of their work in 20% of the time.
# Screenshots
## Startup
![Hayabusa startup screen](/screenshots/Hayabusa-Startup.png)
![Hayabusa startup screen](screenshots/Hayabusa-Startup.png)
## Terminal Output
![Hayabusa terminal output](/screenshots/Hayabusa-Results.png)
![Hayabusa terminal output](screenshots/Hayabusa-Results.png)
## Event Frequency Timeline (`-V` option)
![Hayabusa event frequency timeline](/screenshots/HayabusaEventFrequencyTimeline.png)
![Hayabusa event frequency timeline](screenshots/HayabusaEventFrequencyTimeline.png)
## Results Summary
![Hayabusa results summary](/screenshots/HayabusaResultsSummary.png)
![Hayabusa results summary](screenshots/HayabusaResultsSummary.png)
## Analysis in Excel
![Hayabusa analysis in Excel](/screenshots/ExcelScreenshot.png)
![Hayabusa analysis in Excel](screenshots/ExcelScreenshot.png)
## Analysis in Timeline Explorer
@@ -136,6 +151,10 @@ Windows event logs are
![Elastic Stack Dashboard 2](doc/ElasticStackImport/18-HayabusaDashboard-2.png)
## Analysis in Timesketch
![Timesketch](screenshots/TimesketchAnalysis.png)
# Sample Timeline Results
Sample CSV timeline results can be viewed [here](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results).
@@ -144,6 +163,8 @@ How to analyze CSV timelines in Excel and Timeline Explorer is explained
How to import CSV timelines into Elastic Stack is explained [here](doc/ElasticStackImport/ElasticStackImport-Japanese.md).
How to import CSV timelines into Timesketch is explained [here](doc/TimesketchImport/TimesketchImport-Japanese.md).
# Features
* Cross-platform support: Windows, Linux, macOS.
@@ -160,11 +181,7 @@ How to import CSV timelines into Elastic Stack is explained
* Creation of a pivot keyword list to quickly identify suspicious users, files, etc. from event logs.
* Output of all field information for detailed investigation.
* Summary of successful and failed user logons.
# Planned Features
* Enterprise-wide threat hunting on all endpoints.
* MITRE ATT&CK heatmap generation.
* Enterprise-wide threat hunting and DFIR on all endpoints in combination with [Velociraptor](https://docs.velociraptor.app/).
# Downloads
@@ -185,7 +202,7 @@ git clone https://github.com/Yamato-Security/hayabusa.git --recursive
You can sync the `rules` folder and update to the latest Hayabusa rules with the `git pull --recurse-submodules` command or with the following command:
```bash
hayabusa-1.3.2-win-x64.exe -u
hayabusa-1.5.1-win-x64.exe -u
```
If the update fails, rename the `rules` folder and try again.
@@ -200,7 +217,6 @@ hayabusa-1.3.2-win-x64.exe -u
If you have Rust installed, you can compile from source with the following commands:
```bash
cargo clean
cargo build --release
```
@@ -256,31 +272,55 @@ Fedora-based distros:
sudo yum install openssl-devel
```
## Cross-compiling Linux MUSL Binaries
First, install the target on a Linux OS:
```bash
rustup install stable-x86_64-unknown-linux-musl
rustup target add x86_64-unknown-linux-musl
```
Compile as follows:
```
cargo build --release --target=x86_64-unknown-linux-musl
```
The MUSL binary is created in the `./target/x86_64-unknown-linux-musl/release/` directory.
MUSL binaries are about 15% slower than GNU binaries.
## Notes on Compiling on Linux
# Running Hayabusa
## Caution: Anti-Virus/EDR False Positives
## Caution: Anti-Virus/EDR False Positives and Slow First Runs
Hayabusa may be blocked by anti-virus or EDR products when you run it, when you download the `.yml` rules, or because rules contain suspicious PowerShell commands or keywords like `mimikatz` in their detection logic.
These are false positives, so you will need to configure your security products to allow Hayabusa.
If you are worried about malware infection, please check the source code and compile the binaries yourself.
The first run after booting a Windows PC may take a long time. This is caused by Windows Defender's real-time scanning. You can avoid this by disabling real-time scanning or excluding the Hayabusa directory from anti-virus scans, but please consider the security risks before changing your settings.
## Windows
Run the 32-bit or 64-bit Windows binary from the Hayabusa root directory in Command Prompt or Windows Terminal.
Example: `hayabusa-1.3.2-windows-x64.exe`
Example: `hayabusa-1.5.1-windows-x64.exe`
## Linux
First, you need to make the binary executable:
```bash
chmod +x ./hayabusa-1.3.2-linux-x64-gnu
chmod +x ./hayabusa-1.5.1-linux-x64-gnu
```
Then run it from the Hayabusa root directory:
```bash
./hayabusa-1.3.2-linux-x64-gnu
./hayabusa-1.5.1-linux-x64-gnu
```
## macOS
@@ -288,159 +328,185 @@ chmod +x ./hayabusa-1.3.2-mac-intel
First, you need to make the binary executable from Terminal or iTerm2:
```bash
chmod +x ./hayabusa-1.3.2-mac-intel
chmod +x ./hayabusa-1.5.1-mac-intel
```
Then try running it from the Hayabusa root directory:
```bash
./hayabusa-1.3.2-mac-intel
./hayabusa-1.5.1-mac-intel
```
On the latest version of macOS, you may receive the following security warning:
![Mac Error 1 JP](/screenshots/MacOS-RunError-1-JP.png)
![Mac Error 1 JP](screenshots/MacOS-RunError-1-JP.png)
Open "Security & Privacy" from System Preferences and click "Allow Anyway" on the "General" tab.
![Mac Error 2 JP](/screenshots/MacOS-RunError-2-JP.png)
![Mac Error 2 JP](screenshots/MacOS-RunError-2-JP.png)
Then try running it again from Terminal:
```bash
./hayabusa-1.3.2-mac-intel
./hayabusa-1.5.1-mac-intel
```
The following warning will appear; click "Open".
![Mac Error 3 JP](/screenshots/MacOS-RunError-3-JP.png)
![Mac Error 3 JP](screenshots/MacOS-RunError-3-JP.png)
Now you should be able to run Hayabusa.
# Usage
## Main Commands
* Default: create a fast forensics timeline.
* `--level-tuning`: custom tuning of alert `level`s.
* `-L, --logon-summary`: output a summary of logon events.
* `-p, --pivot-keywords-list`: create a list of suspicious keywords to pivot on.
* `-s, --statistics`: output event totals and percentages based on event ID.
* `--set-default-profile`: change the default profile.
* `-u, --update`: sync to the latest rules in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) GitHub repository.
## Command Line Options
```
USAGE:
    hayabusa.exe -f file.evtx [OPTIONS] / hayabusa.exe -d evtx-directory [OPTIONS]
    hayabusa.exe <INPUT> [OTHER-ACTIONS] [OPTIONS]
OPTIONS:
    --European-time                          Output timestamps in European format (ex: 22-02-2022 22:00:00.123 +02:00)
    --RFC-2822                               Output timestamps in RFC 2822 format (ex: Fri, 22 Feb 2022 22:00:00 -0600)
    --RFC-3339                               Output timestamps in RFC 3339 format (ex: 2022-02-22 22:00:00.123456-06:00)
    --US-military-time                       Output timestamps in US military (24-hour) format (ex: 02-22-2022 22:00:00.123 -06:00)
    --US-time                                Output timestamps in US format (ex: 02-22-2022 10:00:00.123 PM -06:00)
    --target-file-ext <EVTX_FILE_EXT>...     Specify additional target file extensions other than evtx (ex1: evtx_data ex2: evtx1 evtx2)
    --all-tags                               Output all tag information in the rules to the output CSV file
    -c, --config <RULE_CONFIG_DIRECTORY>     Rule config folder (default: ./rules/config)
    --contributors                           Print the list of contributors
    -d, --directory <DIRECTORY>              Path to a directory of .evtx files
    -D, --enable-deprecated-rules            Enable rules marked as deprecated
    --end-timeline <END_TIMELINE>            End time of the event logs to load (ex: "2022-02-22 23:59:59 +09:00")
    -f, --filepath <FILE_PATH>               Analyze a single .evtx file
    -F, --full-data                          Output all field information
    -h, --help                               Print help information
    -l, --live-analysis                      Analyze the local C:\Windows\System32\winevt\Logs folder
    -L, --logon-summary                      Print a summary of successful and failed logons
    --level-tuning <LEVEL_TUNING_FILE>       Tune rule levels (default: ./rules/config/level_tuning.txt)
    -m, --min-level <LEVEL>                  Minimum level for rules to output results (default: informational)
    -n, --enable-noisy-rules                 Enable rules marked as noisy
    --no_color                               Disable color output
    -o, --output <CSV_TIMELINE>              Save the timeline in CSV format (ex: results.csv)
    -p, --pivot-keywords-list                Create a list of pivot keywords
    -q, --quiet                              Quiet mode: do not display the launch banner
    -Q, --quiet-errors                       Quiet errors mode: do not save error logs
    -r, --rules <RULE_DIRECTORY/RULE_FILE>   Rule file or directory of rule files (default: ./rules)
    -R, --hide-record-id                     Do not display event record IDs
    -s, --statistics                         Print event ID statistics
    --start-timeline <START_TIMELINE>        Start time of the event logs to load (ex: "2020-02-22 00:00:00 +09:00")
    -t, --thread-number <NUMBER>             Number of threads (default: optimal number for performance)
    -u, --update-rules                       Update the rules folder to the latest version of the hayabusa-rules GitHub repository
    -U, --UTC                                Output timestamps in UTC format (default: local time)
    -v, --verbose                            Output verbose information
    -V, --visualize-timeline                 Output the event frequency timeline
    --version                                Print version information
INPUT:
    -d, --directory <DIRECTORY>              Path to a directory of .evtx files
    -f, --file <FILE>                        Analyze a single .evtx file
    -l, --live-analysis                      Analyze the local C:\Windows\System32\winevt\Logs folder
ADVANCED:
    -c, --rules-config <DIRECTORY>           Rule config folder (default: ./rules/config)
    -Q, --quiet-errors                       Quiet errors mode: do not save error logs
    -r, --rules <DIRECTORY/FILE>             Rule file or directory of rule files (default: ./rules)
    -t, --thread-number <NUMBER>             Number of threads (default: optimal number for performance)
    --target-file-ext <EVTX_FILE_EXT>...     Specify additional target file extensions other than evtx (ex1: evtx_data ex2: evtx1 evtx2)
OUTPUT:
    -o, --output <FILE>                      Save the timeline in CSV format (ex: results.csv)
    -P, --profile <PROFILE>                  Specify the output profile to use (minimal, standard, verbose, verbose-all-field-info, verbose-details-and-all-field-info)
DISPLAY-SETTINGS:
    --no-color                               Disable color output
    --no-summary                             Do not print the results summary
    -q, --quiet                              Quiet mode: do not display the launch banner
    -v, --verbose                            Output verbose information
    -V, --visualize-timeline                 Output the event frequency timeline
FILTERING:
    -D, --deep-scan                          Scan all event IDs (slower)
    --enable-deprecated-rules                Enable rules marked as deprecated
    --exclude-status <STATUS>...             Do not load rules with these status values (ex: experimental) (ex: stable test)
    -m, --min-level <LEVEL>                  Minimum level for rules to output results (default: informational)
    -n, --enable-noisy-rules                 Enable rules marked as noisy
    --timeline-end <DATE>                    End time of the event logs to load (ex: "2022-02-22 23:59:59 +09:00")
    --timeline-start <DATE>                  Start time of the event logs to load (ex: "2020-02-22 00:00:00 +09:00")
OTHER-ACTIONS:
    --contributors                           Print the list of contributors
    -L, --logon-summary                      Print a summary of successful and failed logons
    --level-tuning [<FILE>]                  Tune rule levels (default: ./rules/config/level_tuning.txt)
    -p, --pivot-keywords-list                Create a list of pivot keywords
    -s, --statistics                         Print event ID statistics
    --set-default-profile <PROFILE>          Set the default output profile
    -u, --update-rules                       Update the rules folder to the latest version of the hayabusa-rules GitHub repository
TIME-FORMAT:
    --European-time                          Output timestamps in European format (ex: 22-02-2022 22:00:00.123 +02:00)
    --RFC-2822                               Output timestamps in RFC 2822 format (ex: Fri, 22 Feb 2022 22:00:00 -0600)
    --RFC-3339                               Output timestamps in RFC 3339 format (ex: 2022-02-22 22:00:00.123456-06:00)
    --US-military-time                       Output timestamps in US military (24-hour) format (ex: 02-22-2022 22:00:00.123 -06:00)
    --US-time                                Output timestamps in US format (ex: 02-22-2022 10:00:00.123 PM -06:00)
    -U, --UTC                                Output timestamps in UTC format (default: local time)
```
## Usage Examples
* Run Hayabusa against one Windows event log file:
```bash
hayabusa-1.3.2-win-x64.exe -f eventlog.evtx
hayabusa-1.5.1-win-x64.exe -f eventlog.evtx
```
* Run Hayabusa against the sample-evtx directory, which contains multiple Windows event log files:
* Run Hayabusa against the sample-evtx directory, which contains multiple Windows event log files, with the `verbose` profile:
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -P verbose
```
* Export to a single CSV file including all field information for further analysis in Excel, Timeline Explorer, Elastic Stack, etc.:
* Export to a single CSV file including all field information for further analysis in Excel, Timeline Explorer, Elastic Stack, etc. (Note: the output file size will be very large with the `verbose-details-and-all-field-info` profile!):
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -o results.csv -F
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -o results.csv -P verbose-details-and-all-field-info
```
* Run only Hayabusa rules (by default, all rules under `-r .\rules` are used):
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
```
* Run Hayabusa rules only against logs that are enabled by default on Windows:
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
```
* Run Hayabusa rules only against Sysmon logs:
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
```
* Run only Sigma rules:
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
```
* Enable deprecated rules (rules whose `status` is `deprecated`) and noisy rules (rules whose IDs are listed in `.\rules\config\noisy_rules.txt`):
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx --enable-deprecated-rules --enable-noisy-rules -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx --enable-deprecated-rules --enable-noisy-rules -o results.csv
```
* Run only rules that analyze logon information and output in the UTC time zone:
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
```
* Run on a live Windows machine (requires Administrator privileges) and only detect alerts (potentially malicious behavior):
```bash
hayabusa-1.3.2-win-x64.exe -l -m low
hayabusa-1.5.1-win-x64.exe -l -m low
```
* Create a list of pivot keywords from critical alerts (the results are written per category to `keywords-Ip Address.txt`, `keywords-Users.txt`, etc.):
```bash
hayabusa-1.3.2-win-x64.exe -l -m critical -p -o keywords
hayabusa-1.5.1-win-x64.exe -l -m critical -p -o keywords
```
* Print event ID statistics:
```bash
hayabusa-1.3.2-win-x64.exe -f Security.evtx -s
hayabusa-1.5.1-win-x64.exe -f Security.evtx -s
```
* Print a logon summary:
```bash
hayabusa-1.5.1-win-x64.exe -L -f Security.evtx
```
* 詳細なメッセージを出力します(処理に時間がかかるファイル、パースエラー等を特定するのに便利):
* 詳細なメッセージを出力す(処理に時間がかかるファイル、パースエラー等を特定するのに便利):
```bash
hayabusa-1.3.2-win-x64.exe -d .\hayabusa-sample-evtx -v
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -v
```
* Verbose出力の例:
@@ -458,6 +524,12 @@ Checking target evtx FilePath: "./hayabusa-sample-evtx/YamatoSecurity/T1218.004_
5 / 509 [=>------------------------------------------------------------------------------------------------------------------------------------------] 0.98 % 1s
```
* 結果を[Timesketch](https://timesketch.org/)にインポートできるCSV形式に保存する:
```bash
hayabusa-1.5.1-win-x64.exe -d ../hayabusa-sample-evtx --RFC-3339 -o timesketch-import.csv -P timesketch -U
```
* エラーログの出力をさせないようにする:
デフォルトでは、Hayabusaはエラーメッセージをエラーログに保存します。
エラーメッセージを保存したくない場合は、`-Q`を追加してください。
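  例えば、エラーログを保存せずにスキャンする場合は次のようになります(ファイルパスは一例です):
  ```bash
  hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -Q -o results.csv
  ```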
@@ -465,7 +537,7 @@ Checking target evtx FilePath: "./hayabusa-sample-evtx/YamatoSecurity/T1218.004_
## ピボットキーワードの作成
`-p`もしくは`--pivot-keywords-list`オプションを使うことで不審なユーザやホスト名、プロセスなどを一覧で出力することができ、イベントログから素早く特定することができます。
ピボットキーワードのカスタマイズは`config/pivot_keywords.txt`を変更することで行うことができます。以下はデフォルトの設定になります:
ピボットキーワードのカスタマイズは`./config/pivot_keywords.txt`を変更することで行うことができます。以下はデフォルトの設定になります:
```
Users.SubjectUserName
@@ -494,29 +566,85 @@ Hayabusaをテストしたり、新しいルールを作成したりするため
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
```
> ※ 以下の例でHayabusaを試したい方は、上記コマンドをhayabusaのルートフォルダから実行してください。
# Hayabusaの出力
## プロファイル
Hayabusaの結果を標準出力に表示しているとき(デフォルト)は、以下の情報を表示します:
Hayabusaの`config/profiles.yaml`設定ファイルでは、6つのプロファイルが定義されています:
* `Timestamp`: デフォルトでは`YYYY-MM-DD HH:mm:ss.sss +hh:mm`形式になっています。イベントログの`<Event><System><TimeCreated SystemTime>`フィールドから来ています。デフォルトのタイムゾーンはローカルのタイムゾーンになりますが、`--utc` オプションで UTC に変更することができます。
* `Computer`: イベントログの`<Event><System><Computer>`フィールドから来ています。
* `Channel`: ログ名です。イベントログの`<Event><System><EventID>`フィールドから来ています。
* `Event ID`: イベントログの`<Event><System><EventID>`フィールドから来ています。
* `Level`: YML検知ルールの`level`フィールドから来ています。(例:`informational`, `low`, `medium`, `high`, `critical`) デフォルトでは、すべてのレベルのアラートとイベントが出力されますが、`-m`オプションで最低のレベルを指定することができます。例えば`-m high`オプションを付けると、`high``critical`アラートしか出力されません。
* `Title`: YML検知ルールの`title`フィールドから来ています。
* `RecordID`: イベントレコードIDです。`<Event><System><EventRecordID>`フィールドから来ています。`-R`もしくは`--hide-record-id`オプションを付けると表示されません。
* `Details`: YML検知ルールの`details`フィールドから来ていますが、このフィールドはHayabusaルールにしかありません。このフィールドはアラートとイベントに関する追加情報を提供し、ログのフィールドから有用なデータを抽出することができます。イベントキーのマッピングが間違っている場合、もしくはフィールドが存在しない場合で抽出ができなかった箇所は`n/a` (not available)と記載されます。YML検知ルールに`details`フィールドが存在しない時のdetailsのメッセージを`./rules/config/default_details.txt`で設定できます。`default_details.txt`では`Provider Name``EventID``details`の組み合わせで設定することができます。
1. `minimal`
2. `standard` (デフォルト)
3. `verbose`
4. `verbose-all-field-info`
5. `verbose-details-and-all-field-info`
6. `timesketch`
CSVファイルとして保存する場合、以下の列が追加されます:
このファイルを編集することで、簡単に独自のプロファイルをカスタマイズしたり、追加したりすることができます。
`--set-default-profile <profile>`オプションでデフォルトのプロファイルを変更することもできます。
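独自プロファイルを追加する場合のイメージは次のとおりです(キー名と書式はあくまで仮の例で、実際の書式は同梱の`config/profiles.yaml`を参照してください):

```yaml
# 仮の例: 出力したいフィールドエイリアスを並べた独自プロファイル
my-minimal: '%Timestamp%,%Computer%,%EventID%,%Level%,%RuleTitle%,%Details%'
```

追加したプロファイルは`-P my-minimal`のように指定できる想定です。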
* `MitreAttack`: MITRE ATT&CKの戦術。
* `Rule Path`: アラートまたはイベントを生成した検知ルールへのパス。
* `File Path`: アラートまたはイベントを起こしたevtxファイルへのパス。
### 1. `minimal`プロファイルの出力
`-F`もしくは`--full-data`オプションを指定した場合、全てのフィールド情報が`RecordInformation`カラムに出力されます。
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%RuleTitle%`, `%Details%`
### 2. `standard`プロファイルの出力
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%RecordID%`, `%RuleTitle%`, `%Details%`
### 3. `verbose`プロファイルの出力
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`
### 4. `verbose-all-field-info`プロファイルの出力
最小限の`details`情報を出力する代わりに、イベントにあるすべての`EventData`フィールド情報が出力されます。
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%AllFieldInfo%`, `%RuleFile%`, `%EvtxFile%`
### 5. `verbose-details-and-all-field-info`プロファイルの出力
`verbose`プロファイルで出力される情報とイベントにあるすべての`EventData`フィールド情報が出力されます。
(注意: 出力ファイルサイズは2倍になります)
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`, `%AllFieldInfo%`
### 6. `timesketch`プロファイルの出力
[Timesketch](https://timesketch.org/)にインポートできる`verbose`プロファイル。
`%Timestamp%`, `hayabusa`, `%RuleTitle%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`
### プロファイルの比較
以下のベンチマークは、2018年製のマックブックプロ上で7.5GBのEVTXデータに対して実施されました。
| プロファイル | 処理時間 | 結果のファイルサイズ |
| :---: | :---: | :---: |
| minimal | 16分18秒 | 690 MB |
| standard | 16分23秒 | 710 MB |
| verbose | 17分 | 990 MB |
| timesketch | 17分 | 1015 MB |
| verbose-all-field-info | 16分50秒 | 1.6 GB |
| verbose-details-and-all-field-info | 17分12秒 | 2.1 GB |
### Profile Field Aliases
| エイリアス名 | Hayabusaの出力情報 |
| :--- | :--- |
|%Timestamp% | デフォルトでは`YYYY-MM-DD HH:mm:ss.sss +hh:mm`形式になっている。イベントログの`<Event><System><TimeCreated SystemTime>`フィールドから来ている。デフォルトのタイムゾーンはローカルのタイムゾーンになるが、`--UTC`オプションでUTCに変更することができる。 |
|%Computer% | イベントログの`<Event><System><Computer>`フィールド。 |
|%Channel% | ログ名。イベントログの`<Event><System><Channel>`フィールド。 |
|%EventID% | イベントログの`<Event><System><EventID>`フィールド。 |
|%Level% | YML検知ルールの`level`フィールド。(例:`informational`、`low`、`medium`、`high`、`critical`) |
|%MitreTactics% | MITRE ATT&CKの[戦術](https://attack.mitre.org/tactics/enterprise/)(例: Initial Access、Lateral Movement等々) |
|%MitreTags% | MITRE ATT&CKの戦術以外の情報。attack.g(グループ)、attack.t(技術)、attack.s(ソフトウェア)の情報を出力する。 |
|%OtherTags% | YML検知ルールの`tags`フィールドから`MitreTactics``MitreTags`以外のキーワードを出力する。|
|%RecordID% | `<Event><System><EventRecordID>`フィールドのイベントレコードID。 |
|%RuleTitle% | YML検知ルールの`title`フィールド。 |
|%Details% | YML検知ルールの`details`フィールドから来ているが、このフィールドはHayabusaルールにしかない。このフィールドはアラートとイベントに関する追加情報を提供し、ログのフィールドから有用なデータを抽出することができる。イベントキーのマッピングが間違っている場合、もしくはフィールドが存在しない場合で抽出ができなかった箇所は`n/a` (not available)と記載される。YML検知ルールに`details`フィールドが存在しない時のdetailsのメッセージは`./rules/config/default_details.txt`で設定でき、`default_details.txt`では`Provider Name`と`EventID`と`details`の組み合わせで設定することができる。`default_details.txt`やYML検知ルールに対応するルールが記載されていない場合はすべてのフィールド情報を出力する。 |
|%AllFieldInfo% | すべてのフィールド情報。 |
|%RuleFile% | アラートまたはイベントを生成した検知ルールのファイル名。 |
|%EvtxFile% | アラートまたはイベントを起こしたevtxファイルへのパス。 |
これらのエイリアスは、出力プロファイルで使用することができます。また、他の[イベントキーエイリアス](https://github.com/Yamato-Security/hayabusa-rules/blob/main/README-Japanese.md#%E3%82%A4%E3%83%99%E3%83%B3%E3%83%88%E3%82%AD%E3%83%BC%E3%82%A8%E3%82%A4%E3%83%AA%E3%82%A2%E3%82%B9)を定義し、他のフィールドを出力することもできます。
## Levelの省略
簡潔に出力するためにLevelを以下のように省略して出力しています。
@@ -530,7 +658,7 @@ CSVファイルとして保存する場合、以下の列が追加されます:
## MITRE ATT&CK戦術の省略
簡潔に出力するためにMITRE ATT&CKの戦術を以下のように省略しています。
`config/output_tag.txt`の設定ファイルで自由に編集できます。
`./config/output_tag.txt`の設定ファイルで自由に編集できます。
検知したデータの戦術を全て出力したい場合は、`--all-tags`オプションをつけてください。
* `Recon` : Reconnaissance (偵察)
@@ -551,7 +679,7 @@ CSVファイルとして保存する場合、以下の列が追加されます:
## Channel情報の省略
簡潔に出力するためにChannelの表示を以下のように省略しています。
`config/channel_abbreviations.txt`の設定ファイルで自由に編集できます。
`./rules/config/channel_abbreviations.txt`の設定ファイルで自由に編集できます。
* `App` : `Application`
* `AppLocker` : `Microsoft-Windows-AppLocker/*`
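`channel_abbreviations.txt`は、概ね次のようにチャンネル名と略称の対応を1行ずつ定義するイメージです(以下の書式と値はあくまで一例で、実際の書式は同梱のファイルを参照してください):

```
Application,App
Microsoft-Windows-AppLocker/EXE and DLL,AppLocker
```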
@@ -594,16 +722,18 @@ Hayabusaの結果は`level`毎に文字色が変わります。
形式は`level名,(6桁のRGBのカラーhex)`です。
カラー出力をしないようにしたい場合は`--no-color`オプションをご利用ください。
## イベント頻度タイムライン
## 結果のサマリ
### イベント頻度タイムライン
`-V`または`--visualize-timeline`オプションを使うことで、検知したイベントの数が5以上の時、頻度のタイムライン(スパークライン)を画面に出力します。
マーカーの数は最大10個です。デフォルトのCommand PromptとPowerShell Promptでは文字化けがでるので、Windows TerminalやiTerm2等のターミナルをご利用ください。
## 最多検知日の出力
### 最多検知日の出力
各レベルで最も検知された日付を画面に出力します。
## 最多検知端末名の出力
### 最多検知端末名の出力
各レベルで検知されたユニークなイベントが多い端末名上位5つを画面に出力します。
@@ -654,14 +784,14 @@ Hayabusaルールは、Windowsのイベントログ解析専用に設計され
ファイアウォールやIDSと同様に、シグネチャベースのツールは、環境に合わせて調整が必要になるため、特定のルールを永続的または一時的に除外する必要がある場合があります。
ルールID(例: `4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6`) を `rules/config/exclude_rules.txt`に追加すると、不要なルールや利用できないルールを無視することができます。
ルールID(例: `4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6`) を `./rules/config/exclude_rules.txt`に追加すると、不要なルールや利用できないルールを無視することができます。
ルールIDを `rules/config/noisy_rules.txt`に追加して、デフォルトでルールを無視することもできますが、`-n`または `--enable-noisy-rules`オプションを指定してルールを使用することもできます。
ルールIDを `./rules/config/noisy_rules.txt`に追加して、デフォルトでルールを無視することもできますが、`-n`または `--enable-noisy-rules`オプションを指定してルールを使用することもできます。
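例えば、不要なルールを無視したい場合は、そのルールIDを次のように1行に1つずつ`./rules/config/exclude_rules.txt`に追記します(IDは本文中の例をそのまま使っています):

```
4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6
```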
## 検知レベルのlevelチューニング
Hayabusaルール、Sigmaルールはそれぞれの作者が検知した際のリスクレベルを決めています。
ユーザが独自のリスクレベルに設定するには`./rules/config/level_tuning.txt`に変換情報を書き、`hayabusa-1.3.2-win-x64.exe --level-tuning`を実行することでルールファイルが書き換えられます。
ユーザが独自のリスクレベルに設定するには`./rules/config/level_tuning.txt`に変換情報を書き、`hayabusa-1.5.1-win-x64.exe --level-tuning`を実行することでルールファイルが書き換えられます。
ルールファイルが直接書き換えられることに注意して使用してください。
`./rules/config/level_tuning.txt`の例:
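書式のイメージは次のとおりです(`id,new_level`のCSV形式で、以下のルールIDと新しいレベルの組はあくまで仮の値です):

```
id,new_level
00000000-0000-0000-0000-000000000000,informational
```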
@@ -674,12 +804,9 @@ id,new_level
## イベントIDフィルタリング
`config/target_eventids.txt`にイベントID番号を追加することで、イベントIDでフィルタリングすることができます。
これはパフォーマンスを向上させるので、特定のIDだけを検索したい場合に推奨されます。
すべてのルールの`EventID`フィールドと実際のスキャン結果で見られるIDから作成したIDフィルタリストのサンプルを[`config/target_eventids_sample.txt`](https://github.com/Yamato-Security/hayabusa/blob/main/config/target_eventids_sample.txt)で提供しています。
最高のパフォーマンスを得たい場合はこのリストを使用してください。ただし、検出漏れの可能性が若干あることにご注意ください。
デフォルトではパフォーマンスを上げるために、検知ルールでイベントIDが定義されていないイベントを無視しています。
`./rules/config/target_event_IDs.txt`で定義されたIDがスキャンされます。
すべてのイベントをスキャンしたい場合は、`-D, --deep-scan`オプションを使用してください。
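例えば、イベントIDフィルタを無効にしてすべてのイベントをスキャンするには次のようにします(ファイルパスは一例です):

```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -D -o results.csv
```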
# その他のWindowsイベントログ解析ツールおよび関連リソース
@@ -687,7 +814,7 @@ id,new_level
* [APT-Hunter](https://github.com/ahmedkhlief/APT-Hunter) - Pythonで開発された攻撃検知ツール。
* [Awesome Event IDs](https://github.com/stuhli/awesome-event-ids) - フォレンジック調査とインシデント対応に役立つイベントIDのリソース。
* [Chainsaw](https://github.com/countercept/chainsaw) - Rustで開発された同様のSigmaベースの攻撃検知ツール。
* [Chainsaw](https://github.com/countercept/chainsaw) - Rustで開発されたSigmaベースの攻撃検知ツール。
* [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI) - [Eric Conrad](https://twitter.com/eric_conrad) によってPowershellで開発された攻撃検知ツール。
* [Epagneul](https://github.com/jurelou/epagneul) - Windowsイベントログの可視化ツール。
* [EventList](https://github.com/miriamxyra/EventList/) - [Miriam Wiesner](https://github.com/miriamxyra)によるセキュリティベースラインの有効なイベントIDをMITRE ATT&CKにマッピングするPowerShellツール。
@@ -728,6 +855,7 @@ Windows機での悪性な活動を検知する為には、デフォルトのロ
## 英語
* 2022/06/19 [VelociraptorチュートリアルとHayabusaの統合方法](https://www.youtube.com/watch?v=Q1IoGX--814) by [Eric Capuano](https://twitter.com/eric_capuano)
* 2022/01/24 [Hayabusa結果をneo4jで可視化する方法](https://www.youtube.com/watch?v=7sQqz2ek-ko) by Matthew Seyer ([@forensic_matt](https://twitter.com/forensic_matt))
## 日本語
375
README.md
@@ -1,16 +1,16 @@
<div align="center">
<p>
<img alt="Hayabusa Logo" src="hayabusa-logo.png" width="50%">
<img alt="Hayabusa Logo" src="logo.png" width="50%">
</p>
[ <b>English</b> ] | [<a href="README-Japanese.md">日本語</a>]
</div>
---
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85DownLoads
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85Downloads
[tag-2]: https://img.shields.io/github/stars/Yamato-Security/hayabusa?style=plastic&label=GitHub%F0%9F%A6%85Stars
[tag-3]: https://img.shields.io/github/v/release/Yamato-Security/hayabusa?display_name=tag&label=latest-version&style=plastic
[tag-4]: https://img.shields.io/badge/Black%20Hat%20Arsenal-Asia%202022-blue
[tag-4]: https://github.com/toolswatch/badges/blob/master/arsenal/asia/2022.svg
[tag-5]: https://rust-reportcard.xuri.me/badge/github.com/Yamato-Security/hayabusa
[tag-6]: https://img.shields.io/badge/Maintenance%20Level-Actively%20Developed-brightgreen.svg
[tag-7]: https://img.shields.io/badge/Twitter-00acee?logo=twitter&logoColor=white
@@ -20,14 +20,14 @@
# About Hayabusa
Hayabusa is a **Windows event log fast forensics timeline generator** and **threat hunting tool** created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan. Hayabusa means ["peregrine falcon"](https://en.wikipedia.org/wiki/Peregrine_falcon") in Japanese and was chosen as peregrine falcons are the fastest animal in the world, great at hunting and highly trainable. It is written in [Rust](https://www.rust-lang.org/) and supports multi-threading in order to be as fast as possible. We have provided a [tool](https://github.com/Yamato-Security/hayabusa-rules/tree/main/tools/sigmac) to convert [sigma](https://github.com/SigmaHQ/sigma) rules into hayabusa rule format. The hayabusa detection rules are based on sigma rules, written in YML in order to be as easily customizable and extensible as possible. It can be run either on running systems for live analysis or by gathering logs from multiple systems for offline analysis. (At the moment, it does not support real-time alerting or periodic scans.) The output will be consolidated into a single CSV timeline for easy analysis in Excel, [Timeline Explorer](https://ericzimmerman.github.io/#!index.md), or [Elastic Stack](doc/ElasticStackImport/ElasticStackImport-English.md).
Hayabusa is a **Windows event log fast forensics timeline generator** and **threat hunting tool** created by the [Yamato Security](https://yamatosecurity.connpass.com/) group in Japan. Hayabusa means ["peregrine falcon"](https://en.wikipedia.org/wiki/Peregrine_falcon) in Japanese and was chosen as peregrine falcons are the fastest animal in the world, great at hunting and highly trainable. It is written in [Rust](https://www.rust-lang.org/) and supports multi-threading in order to be as fast as possible. We have provided a [tool](https://github.com/Yamato-Security/hayabusa-rules/tree/main/tools/sigmac) to convert [Sigma](https://github.com/SigmaHQ/sigma) rules into Hayabusa rule format. The Sigma-compatible Hayabusa detection rules are written in YML in order to be as easily customizable and extensible as possible. Hayabusa can be run either on single running systems for live analysis, by gathering logs from single or multiple systems for offline analysis, or by running the [Hayabusa artifact](https://docs.velociraptor.app/exchange/artifacts/pages/windows.eventlogs.hayabusa/) with [Velociraptor](https://docs.velociraptor.app/) for enterprise-wide threat hunting and incident response. The output will be consolidated into a single CSV timeline for easy analysis in Excel, [Timeline Explorer](https://ericzimmerman.github.io/#!index.md), [Elastic Stack](doc/ElasticStackImport/ElasticStackImport-English.md), [Timesketch](https://timesketch.org/), etc...
## Table of Contents
- [About Hayabusa](#about-hayabusa)
- [Table of Contents](#table-of-contents)
- [Main Goals](#main-goals)
- [Threat Hunting](#threat-hunting)
- [Threat Hunting and Enterprise-wide DFIR](#threat-hunting-and-enterprise-wide-dfir)
- [Fast Forensics Timeline Generation](#fast-forensics-timeline-generation)
- [Screenshots](#screenshots)
- [Startup](#startup)
@@ -38,9 +38,9 @@ Hayabusa is a **Windows event log fast forensics timeline generator** and **thre
- [Analysis in Timeline Explorer](#analysis-in-timeline-explorer)
- [Critical Alert Filtering and Computer Grouping in Timeline Explorer](#critical-alert-filtering-and-computer-grouping-in-timeline-explorer)
- [Analysis with the Elastic Stack Dashboard](#analysis-with-the-elastic-stack-dashboard)
- [Analysis in Timesketch](#analysis-in-timesketch)
- [Analyzing Sample Timeline Results](#analyzing-sample-timeline-results)
- [Features](#features)
- [Planned Features](#planned-features)
- [Downloads](#downloads)
- [Git cloning](#git-cloning)
- [Advanced: Compiling From Source (Optional)](#advanced-compiling-from-source-optional)
@@ -48,26 +48,38 @@ Hayabusa is a **Windows event log fast forensics timeline generator** and **thre
- [Cross-compiling 32-bit Windows Binaries](#cross-compiling-32-bit-windows-binaries)
- [macOS Compiling Notes](#macos-compiling-notes)
- [Linux Compiling Notes](#linux-compiling-notes)
- [Cross-compiling Linux MUSL Binaries](#cross-compiling-linux-musl-binaries)
- [Running Hayabusa](#running-hayabusa)
- [Caution: Anti-Virus/EDR Warnings](#caution-anti-virusedr-warnings)
- [Caution: Anti-Virus/EDR Warnings and Slow Runtimes](#caution-anti-virusedr-warnings-and-slow-runtimes)
- [Windows](#windows)
- [Linux](#linux)
- [macOS](#macos)
- [Usage](#usage)
- [Main commands](#main-commands)
- [Command Line Options](#command-line-options)
- [Usage Examples](#usage-examples)
- [Pivot Keyword Generator](#pivot-keyword-generator)
- [Logon Summary Generator](#logon-summary-generator)
- [Testing Hayabusa on Sample Evtx Files](#testing-hayabusa-on-sample-evtx-files)
- [Hayabusa Output](#hayabusa-output)
- [Profiles](#profiles)
- [1. `minimal` profile output](#1-minimal-profile-output)
- [2. `standard` profile output](#2-standard-profile-output)
- [3. `verbose` profile output](#3-verbose-profile-output)
- [4. `verbose-all-field-info` profile output](#4-verbose-all-field-info-profile-output)
- [5. `verbose-details-and-all-field-info` profile output](#5-verbose-details-and-all-field-info-profile-output)
- [6. `timesketch` profile output](#6-timesketch-profile-output)
- [Profile Comparison](#profile-comparison)
- [Profile Field Aliases](#profile-field-aliases)
  - [Level Abbreviations](#level-abbrevations)
- [MITRE ATT&CK Tactics Abbreviations](#mitre-attck-tactics-abbreviations)
- [Channel Abbreviations](#channel-abbreviations)
- [Progress Bar](#progress-bar)
- [Color Output](#color-output)
- [Event Fequency Timeline](#event-fequency-timeline)
- [Dates with most total detections](#dates-with-most-total-detections)
- [Top 5 computers with most unique detections](#top-5-computers-with-most-unique-detections)
- [Results Summary](#results-summary-1)
  - [Event Frequency Timeline](#event-fequency-timeline)
- [Dates with most total detections](#dates-with-most-total-detections)
- [Top 5 computers with most unique detections](#top-5-computers-with-most-unique-detections)
- [Hayabusa Rules](#hayabusa-rules)
- [Hayabusa v.s. Converted Sigma Rules](#hayabusa-vs-converted-sigma-rules)
- [Detection Rule Tuning](#detection-rule-tuning)
@@ -86,36 +98,36 @@ Hayabusa is a **Windows event log fast forensics timeline generator** and **thre
## Main Goals
### Threat Hunting
### Threat Hunting and Enterprise-wide DFIR
Hayabusa currently has over 2300 sigma rules and over 130 hayabusa rules with more rules being added regularly. The ultimate goal is to be able to push out hayabusa agents to all Windows endpoints after an incident or for periodic threat hunting and have them alert back to a central server.
Hayabusa currently has over 2600 Sigma rules and over 130 Hayabusa built-in detection rules with more rules being added regularly. It can be used for enterprise-wide proactive threat hunting as well as DFIR (Digital Forensics and Incident Response) for free with [Velociraptor](https://docs.velociraptor.app/)'s [Hayabusa artifact](https://docs.velociraptor.app/exchange/artifacts/pages/windows.eventlogs.hayabusa/). By combining these two open-source tools, you can essentially retroactively reproduce a SIEM when there is no SIEM setup in the environment. You can learn about how to do this by watching [Eric Capuano](https://twitter.com/eric_capuano)'s Velociraptor walkthrough [here](https://www.youtube.com/watch?v=Q1IoGX--814).
### Fast Forensics Timeline Generation
Windows event log analysis has traditionally been a very long and tedious process because Windows event logs are 1) in a data format that is hard to analyze and 2) the majority of data is noise and not useful for investigations. Hayabusa's main goal is to extract out only useful data and present it in an easy-to-read format that is usable not only by professionally trained analysts but any Windows system administrator.
Hayabusa is not intended to be a replacement for tools like [Evtx Explorer](https://ericzimmerman.github.io/#!index.md) or [Event Log Explorer](https://eventlogxp.com/) for more deep-dive analysis but is intended for letting analysts get 80% of their work done in 20% of the time.
Windows event log analysis has traditionally been a very long and tedious process because Windows event logs are 1) in a data format that is hard to analyze and 2) the majority of data is noise and not useful for investigations. Hayabusa's goal is to extract out only useful data and present it in a format that is as concise and easy to read as possible, usable not only by professionally trained analysts but any Windows system administrator.
Hayabusa hopes to let analysts get 80% of their work done in 20% of the time when compared to traditional Windows event log analysis.
# Screenshots
## Startup
![Hayabusa Startup](/screenshots/Hayabusa-Startup.png)
![Hayabusa Startup](screenshots/Hayabusa-Startup.png)
## Terminal Output
![Hayabusa terminal output](/screenshots/Hayabusa-Results.png)
![Hayabusa terminal output](screenshots/Hayabusa-Results.png)
## Event Fequency Timeline (`-V` option)
![Hayabusa Event Frequency Timeline](/screenshots/HayabusaEventFrequencyTimeline.png)
![Hayabusa Event Frequency Timeline](screenshots/HayabusaEventFrequencyTimeline.png)
## Results Summary
![Hayabusa results summary](/screenshots/HayabusaResultsSummary.png)
![Hayabusa results summary](screenshots/HayabusaResultsSummary.png)
## Analysis in Excel
![Hayabusa analysis in Excel](/screenshots/ExcelScreenshot.png)
![Hayabusa analysis in Excel](screenshots/ExcelScreenshot.png)
## Analysis in Timeline Explorer
@@ -131,6 +143,10 @@ Hayabusa is not intended to be a replacement for tools like [Evtx Explorer](http
![Elastic Stack Dashboard 2](doc/ElasticStackImport/18-HayabusaDashboard-2.png)
## Analysis in Timesketch
![Timesketch](screenshots/TimesketchAnalysis.png)
# Analyzing Sample Timeline Results
You can check out a sample CSV timeline [here](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results).
@@ -139,6 +155,8 @@ You can learn how to analyze CSV timelines in Excel and Timeline Explorer [here]
You can learn how to import CSV files into Elastic Stack [here](doc/ElasticStackImport/ElasticStackImport-English.md).
You can learn how to import CSV files into Timesketch [here](doc/TimesketchImport/TimesketchImport-English.md).
# Features
* Cross-platform support: Windows, Linux, macOS.
@@ -155,15 +173,11 @@ You can learn how to import CSV files into Elastic Stack [here](doc/ElasticStack
* Create a list of unique pivot keywords to quickly identify abnormal users, hostnames, processes, etc... as well as correlate events.
* Output all fields for more thorough investigations.
* Successful and failed logon summary.
# Planned Features
* Enterprise-wide hunting on all endpoints.
* MITRE ATT&CK heatmap generation.
* Enterprise-wide threat hunting and DFIR on all endpoints with [Velociraptor](https://docs.velociraptor.app/).
# Downloads
Please download the latest stable version of hayabusa with compiled binaries or the source code from the [Releases](https://github.com/Yamato-Security/hayabusa/releases) page.
Please download the latest stable version of Hayabusa with compiled binaries or compile the source code from the [Releases](https://github.com/Yamato-Security/hayabusa/releases) page.
# Git cloning
@@ -180,7 +194,7 @@ Note: If you forget to use --recursive option, the `rules` folder, which is mana
You can sync the `rules` folder and get latest Hayabusa rules with `git pull --recurse-submodules` or use the following command:
```bash
hayabusa-1.3.2-win-x64.exe -u
hayabusa-1.5.1-win-x64.exe -u
```
If the update fails, you may need to rename the `rules` folder and try again.
@@ -188,14 +202,13 @@ If the update fails, you may need to rename the `rules` folder and try again.
>> Caution: When updating, rules and config files in the `rules` folder are replaced with the latest rules and config files in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) repository.
>> Any changes you make to existing files will be overwritten, so we recommend that you make backups of any files that you edit before updating.
>> If you are performing level tuning with `--level-tuning`, please re-tune your rule files after each update.
>> If you add new rules inside of the `rules` folder, they will **not** be overwritten or deleted when updating.
>> If you add **new** rules inside of the `rules` folder, they will **not** be overwritten or deleted when updating.
# Advanced: Compiling From Source (Optional)
If you have Rust installed, you can compile from source with the following command:
```bash
cargo clean
cargo build --release
```
@@ -207,7 +220,7 @@ Be sure to periodically update Rust with:
rustup update stable
```
The compiled binary will be outputted in the `target/release` folder.
The compiled binary will be outputted in the `./target/release` folder.
## Updating Rust Packages
@@ -254,31 +267,52 @@ Fedora-based distros:
sudo yum install openssl-devel
```
## Cross-compiling Linux MUSL Binaries
On a Linux OS, first install the target.
```bash
rustup install stable-x86_64-unknown-linux-musl
rustup target add x86_64-unknown-linux-musl
```
Compile with:
```
cargo build --release --target=x86_64-unknown-linux-musl
```
The MUSL binary will be created in the `./target/x86_64-unknown-linux-musl/release/` directory.
MUSL binaries are about 15% slower than the GNU binaries.
# Running Hayabusa
## Caution: Anti-Virus/EDR Warnings
## Caution: Anti-Virus/EDR Warnings and Slow Runtimes
You may receive an alert from anti-virus or EDR products when trying to run hayabusa or even just when downloading the `.yml` rules as there will be keywords like `mimikatz` and suspicious PowerShell commands in the detection signature.
These are false positives, so you will need to configure exclusions in your security products to allow hayabusa to run.
If you are worried about malware or supply chain attacks, please check the hayabusa source code and compile the binaries yourself.
You may experience slow runtimes, especially on the first run after a reboot, due to the real-time protection of Windows Defender. You can avoid this by temporarily turning real-time protection off or adding an exclusion to the hayabusa runtime directory. (Please take into consideration the security risks before doing these.)
## Windows
In Command Prompt or Windows Terminal, just run the 32-bit or 64-bit Windows binary from the hayabusa root directory.
Example: `hayabusa-1.3.2-windows-x64.exe`
In a Command/PowerShell Prompt or Windows Terminal, just run the appropriate 32-bit or 64-bit Windows binary.
Example: `hayabusa-1.5.1-windows-x64.exe`
## Linux
You first need to make the binary executable.
```bash
chmod +x ./hayabusa-1.3.2-linux-x64-gnu
chmod +x ./hayabusa-1.5.1-linux-x64-gnu
```
Then run it from the Hayabusa root directory:
```bash
./hayabusa-1.3.2-linux-x64-gnu
./hayabusa-1.5.1-linux-x64-gnu
```
## macOS
@@ -286,159 +320,186 @@ Then run it from the Hayabusa root directory:
From Terminal or iTerm2, you first need to make the binary executable.
```bash
chmod +x ./hayabusa-1.3.2-mac-intel
chmod +x ./hayabusa-1.5.1-mac-intel
```
Then, try to run it from the Hayabusa root directory:
```bash
./hayabusa-1.3.2-mac-intel
./hayabusa-1.5.1-mac-intel
```
On the latest version of macOS, you may receive the following security error when you try to run it:
![Mac Error 1 EN](/screenshots/MacOS-RunError-1-EN.png)
![Mac Error 1 EN](screenshots/MacOS-RunError-1-EN.png)
Click "Cancel" and then from System Preferences, open "Security & Privacy" and from the General tab, click "Allow Anyway".
![Mac Error 2 EN](/screenshots/MacOS-RunError-2-EN.png)
![Mac Error 2 EN](screenshots/MacOS-RunError-2-EN.png)
After that, try to run it again.
```bash
./hayabusa-1.3.2-mac-intel
./hayabusa-1.5.1-mac-intel
```
The following warning will pop up, so please click "Open".
![Mac Error 3 EN](/screenshots/MacOS-RunError-3-EN.png)
![Mac Error 3 EN](screenshots/MacOS-RunError-3-EN.png)
You should now be able to run hayabusa.
# Usage
## Main commands
* default: Create a fast forensics timeline.
* `--level-tuning`: Custom tune the alerts' `level`.
* `-L, --logon-summary`: Print a summary of logon events.
* `-p, --pivot-keywords-list`: Print a list of suspicious keywords to pivot on.
* `-s, --statistics`: Print metrics of the count and percentage of events based on Event ID.
* `--set-default-profile`: Change the default profile.
* `-u, --update`: Sync the rules to the latest rules in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) GitHub repository.
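For example, to change the default output profile to `verbose` (a hypothetical invocation following the naming of the examples below):

```bash
hayabusa-1.5.1-win-x64.exe --set-default-profile verbose
```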
## Command Line Options
```
USAGE:
hayabusa.exe -f file.evtx [OPTIONS] / hayabusa.exe -d evtx-directory [OPTIONS]
hayabusa.exe <INPUT> [OTHER-ACTIONS] [OPTIONS]
OPTIONS:
--European-time Output timestamp in European time format (ex: 22-02-2022 22:00:00.123 +02:00)
--RFC-2822 Output timestamp in RFC 2822 format (ex: Fri, 22 Feb 2022 22:00:00 -0600)
--RFC-3339 Output timestamp in RFC 3339 format (ex: 2022-02-22 22:00:00.123456-06:00)
--US-military-time Output timestamp in US military time format (ex: 02-22-2022 22:00:00.123 -06:00)
--US-time Output timestamp in US time format (ex: 02-22-2022 10:00:00.123 PM -06:00)
--target-file-ext <EVTX_FILE_EXT>... Specify additional target file extensions (ex: evtx_data) (ex: evtx1 evtx2)
--all-tags Output all tags when saving to a CSV file
-c, --config <RULE_CONFIG_DIRECTORY> Specify custom rule config folder (default: ./rules/config)
--contributors Print the list of contributors
-d, --directory <DIRECTORY> Directory of multiple .evtx files
-D, --enable-deprecated-rules Enable rules marked as deprecated
--end-timeline <END_TIMELINE> End time of the event logs to load (ex: "2022-02-22 23:59:59 +09:00")
-f, --filepath <FILE_PATH> File path to one .evtx file
-F, --full-data Print all field information
-h, --help Print help information
-l, --live-analysis Analyze the local C:\Windows\System32\winevt\Logs folder
-L, --logon-summary Print a summary of successful and failed logons
--level-tuning <LEVEL_TUNING_FILE> Tune alert levels (default: ./rules/config/level_tuning.txt)
-m, --min-level <LEVEL> Minimum level for rules (default: informational)
-n, --enable-noisy-rules Enable rules marked as noisy
--no-color Disable color output
-o, --output <CSV_TIMELINE> Save the timeline in CSV format (ex: results.csv)
-p, --pivot-keywords-list Create a list of pivot keywords
-q, --quiet Quiet mode: do not display the launch banner
-Q, --quiet-errors Quiet errors mode: do not save error logs
-r, --rules <RULE_DIRECTORY/RULE_FILE> Specify a rule directory or file (default: ./rules)
-R, --hide-record-ID Do not display EventRecordID numbers
-s, --statistics Print statistics of event IDs
--start-timeline <START_TIMELINE> Start time of the event logs to load (ex: "2020-02-22 00:00:00 +09:00")
-t, --thread-number <NUMBER> Thread number (default: optimal number for performance)
-u, --update-rules Update to the latest rules in the hayabusa-rules github repository
-U, --UTC Output time in UTC format (default: local time)
-v, --verbose Output verbose information
-V, --visualize-timeline Output event frequency timeline
--version Print version information
INPUT:
-d, --directory <DIRECTORY> Directory of multiple .evtx files
-f, --file <FILE> File path to one .evtx file
-l, --live-analysis Analyze the local C:\Windows\System32\winevt\Logs folder
ADVANCED:
-c, --rules-config <DIRECTORY> Specify custom rule config directory (default: ./rules/config)
-Q, --quiet-errors Quiet errors mode: do not save error logs
-r, --rules <DIRECTORY/FILE> Specify a custom rule directory or file (default: ./rules)
-t, --thread-number <NUMBER> Thread number (default: optimal number for performance)
--target-file-ext <EVTX_FILE_EXT>... Specify additional target file extensions (ex: evtx_data) (ex: evtx1 evtx2)
OUTPUT:
-o, --output <FILE> Save the timeline in CSV format (ex: results.csv)
-P, --profile <PROFILE> Specify output profile (minimal, standard, verbose, verbose-all-field-info, verbose-details-and-all-field-info)
DISPLAY-SETTINGS:
--no-color Disable color output
--no-summary Do not display result summary
-q, --quiet Quiet mode: do not display the launch banner
-v, --verbose Output verbose information
-V, --visualize-timeline Output event frequency timeline
FILTERING:
-D, --deep-scan Disable event ID filter to scan all events (slower)
--enable-deprecated-rules Enable rules marked as deprecated
--exclude-status <STATUS>... Ignore rules according to status (ex: experimental) (ex: stable test)
-m, --min-level <LEVEL> Minimum level for rules (default: informational)
-n, --enable-noisy-rules Enable rules marked as noisy
--timeline-end <DATE> End time of the event logs to load (ex: "2022-02-22 23:59:59 +09:00")
--timeline-start <DATE> Start time of the event logs to load (ex: "2020-02-22 00:00:00 +09:00")
OTHER-ACTIONS:
--contributors Print the list of contributors
-L, --logon-summary Print a summary of successful and failed logons
--level-tuning [<FILE>] Tune alert levels (default: ./rules/config/level_tuning.txt)
-p, --pivot-keywords-list Create a list of pivot keywords
-s, --statistics Print statistics of event IDs
--set-default-profile <PROFILE> Set default output profile
-u, --update-rules Update to the latest rules in the hayabusa-rules github repository
TIME-FORMAT:
--European-time Output timestamp in European time format (ex: 22-02-2022 22:00:00.123 +02:00)
--RFC-2822 Output timestamp in RFC 2822 format (ex: Fri, 22 Feb 2022 22:00:00 -0600)
--RFC-3339 Output timestamp in RFC 3339 format (ex: 2022-02-22 22:00:00.123456-06:00)
--US-military-time Output timestamp in US military time format (ex: 02-22-2022 22:00:00.123 -06:00)
--US-time Output timestamp in US time format (ex: 02-22-2022 10:00:00.123 PM -06:00)
-U, --UTC Output time in UTC format (default: local time)
```
## Usage Examples
* Run hayabusa against one Windows event log file with default standard profile:
```bash
hayabusa-1.5.1-win-x64.exe -f eventlog.evtx
```
* Run hayabusa against the sample-evtx directory with multiple Windows event log files with the verbose profile:
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -P verbose
```
* Export to a single CSV file for further analysis with Excel, Timeline Explorer, Elastic Stack, etc... and include all field information (Warning: your file output size will become much larger with the `verbose-details-and-all-field-info` profile!):
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -o results.csv -P verbose-details-and-all-field-info
```
* Only run hayabusa rules (the default is to run all the rules in `-r .\rules`):
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
```
* Only run hayabusa rules for logs that are enabled by default on Windows:
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
```
* Only run hayabusa rules for sysmon logs:
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
```
* Only run sigma rules:
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
```
* Enable deprecated rules (those with `status` marked as `deprecated`) and noisy rules (those whose rule ID is listed in `.\rules\config\noisy_rules.txt`):
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx --enable-noisy-rules --enable-deprecated-rules -o results.csv
```
* Only run rules to analyze logons and output in the UTC timezone:
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
```
* Run on a live Windows machine (requires Administrator privileges) and only detect alerts (potentially malicious behavior):
```bash
hayabusa-1.5.1-win-x64.exe -l -m low
```
* Create a list of pivot keywords from critical alerts and save the results. (Results will be saved to `keywords-Ip Addresses.txt`, `keywords-Users.txt`, etc...):
```bash
hayabusa-1.5.1-win-x64.exe -l -m critical -p -o keywords
```
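The generated keyword files are plain text with one keyword per line, so they can be merged and deduplicated with standard tools. A minimal sketch, assuming two hypothetical keyword files from separate runs (the file names are illustrative, not ones hayabusa itself creates):

```bash
# Create two illustrative pivot keyword lists from separate runs.
printf 'admin\nbackup_svc\n' > keywords-Users-run1.txt
printf 'admin\nguest\n'      > keywords-Users-run2.txt

# sort -u merges both lists and keeps each unique keyword once,
# giving a single combined pivot list.
sort -u keywords-Users-run1.txt keywords-Users-run2.txt > keywords-Users-all.txt
cat keywords-Users-all.txt
```

The combined list can then be grepped against other evidence sources to correlate activity.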
* Print Event ID statistics:
```bash
hayabusa-1.5.1-win-x64.exe -f Security.evtx -s
```
* Print logon summary:
```bash
hayabusa-1.5.1-win-x64.exe -L -f Security.evtx
```
* Print verbose information (useful for determining which files take long to process, parsing errors, etc...):
```bash
hayabusa-1.5.1-win-x64.exe -d .\hayabusa-sample-evtx -v
```
* Verbose output example:
```
Checking target evtx FilePath: "./hayabusa-sample-evtx/YamatoSecurity/T1218.004_
5 / 509 [=>------------------------------------------------------------------------------------------------------------------------------------------] 0.98 % 1s
```
* Output to a CSV format compatible to import into [Timesketch](https://timesketch.org/):
```bash
hayabusa-1.5.1-win-x64.exe -d ../hayabusa-sample-evtx --RFC-3339 -o timesketch-import.csv -P timesketch -U
```
* Quiet error mode:
By default, hayabusa will save error messages to error log files.
If you do not want to save error messages, please add `-Q`.
## Pivot Keyword Generator
You can use the `-p` or `--pivot-keywords-list` option to create a list of unique pivot keywords to quickly identify abnormal users, hostnames, processes, etc... as well as correlate events. You can customize what keywords you want to search for by editing `./config/pivot_keywords.txt`.
This is the default setting:
```
```
You can download the sample evtx files to a new `hayabusa-sample-evtx` sub-directory:
```bash
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
```
> Note: You need to run the binary from the Hayabusa root directory.
# Hayabusa Output
## Profiles
Hayabusa has 6 pre-defined profiles to use in `config/profiles.yaml`:
1. `minimal`
2. `standard` (default)
3. `verbose`
4. `verbose-all-field-info`
5. `verbose-details-and-all-field-info`
6. `timesketch`
You can easily customize or add your own profiles by editing this file.
You can also easily change the default profile with `--set-default-profile <profile>`.
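For example, a small custom profile could be added to `config/profiles.yaml` like this (a hypothetical sketch using only the aliases documented in this README; the profile name `triage` is not one of the predefined profiles):

```yaml
# Hypothetical custom profile: timestamp, computer, rule title, and details only.
triage:
  Timestamp: "%Timestamp%"
  Computer: "%Computer%"
  RuleTitle: "%RuleTitle%"
  Details: "%Details%"
```

It could then be selected with `-P triage`, or made the default with `--set-default-profile triage`.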
### 1. `minimal` profile output
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%RuleTitle%`, `%Details%`
### 2. `standard` profile output
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%RecordID%`, `%RuleTitle%`, `%Details%`
### 3. `verbose` profile output
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`
### 4. `verbose-all-field-info` profile output
Instead of outputting the minimal `details` information, all field information in the `EventData` section will be outputted.
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%AllFieldInfo%`, `%RuleFile%`, `%EvtxFile%`
### 5. `verbose-details-and-all-field-info` profile output
`verbose` profile plus all field information. (Warning: this will usually double the output file size!)
`%Timestamp%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%RuleTitle%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`, `%AllFieldInfo%`
### 6. `timesketch` profile output
The `verbose` profile, modified to be compatible with importing into [Timesketch](https://timesketch.org/).
`%Timestamp%`, `hayabusa`, `%RuleTitle%`, `%Computer%`, `%Channel%`, `%EventID%`, `%Level%`, `%MitreTactics%`, `%MitreTags%`, `%OtherTags%`, `%RecordID%`, `%Details%`, `%RuleFile%`, `%EvtxFile%`
### Profile Comparison
The following benchmarks were conducted on a 2018 MBP with 7.5GB of evtx data.
| Profile | Processing Time | Output Filesize |
| :---: | :---: | :---: |
| minimal | 16 minutes 18 seconds | 690 MB |
| standard | 16 minutes 23 seconds | 710 MB |
| verbose | 17 minutes | 990 MB |
| timesketch | 17 minutes | 1015 MB |
| verbose-all-field-info | 16 minutes 50 seconds | 1.6 GB |
| verbose-details-and-all-field-info | 17 minutes 12 seconds | 2.1 GB |
### Profile Field Aliases
| Alias name | Hayabusa output information|
| :--- | :--- |
|%Timestamp% | Default is `YYYY-MM-DD HH:mm:ss.sss +hh:mm` format. `<Event><System><TimeCreated SystemTime>` field in the event log. The default timezone will be the local timezone but you can change the timezone to UTC with the `--UTC` option. |
|%Computer% | The `<Event><System><Computer>` field. |
|%Channel% | The name of the log. `<Event><System><Channel>` field. |
|%EventID% | The `<Event><System><EventID>` field. |
|%Level% | The `level` field in the YML detection rule. (`informational`, `low`, `medium`, `high`, `critical`) |
|%MitreTactics% | MITRE ATT&CK [tactics](https://attack.mitre.org/tactics/enterprise/) (Ex: Initial Access, Lateral Movement, etc...). |
|%MitreTags% | MITRE ATT&CK Group ID, Technique ID and Software ID. |
|%OtherTags% | Any keyword in the `tags` field in a YML detection rule which is not included in `MitreTactics` or `MitreTags`. |
|%RecordID% | The Event Record ID from `<Event><System><EventRecordID>` field. |
|%RuleTitle% | The `title` field in the YML detection rule. |
|%Details% | The `details` field in the YML detection rule, however, only hayabusa rules have this field. This field gives extra information about the alert or event and can extract useful data from the fields in event logs. For example, usernames, command line information, process information, etc... When a placeholder points to a field that does not exist or there is an incorrect alias mapping, it will be outputted as `n/a` (not available). If the `details` field is not specified (i.e. sigma rules), default `details` messages to extract fields defined in `./rules/config/default_details.txt` will be outputted. You can add more default `details` messages by adding the `Provider Name`, `EventID` and `details` message you want to output in `default_details.txt`. When no `details` field is defined in a rule nor in `default_details.txt`, all fields will be outputted to the `details` column. |
|%AllFieldInfo% | All field information. |
|%RuleFile% | The filename of the detection rule that generated the alert or event. |
|%EvtxFile% | The evtx filename that caused the alert or event. |
You can use these aliases in your output profiles, as well as define other [event key aliases](https://github.com/Yamato-Security/hayabusa-rules/blob/main/README.md#eventkey-aliases) to output other fields.
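As described above, each `default_details.txt` entry ties a `Provider Name` and `EventID` to a `details` template. A hypothetical entry (the alias `%TargetUserName%` and the exact field layout are illustrative; check the shipped file for the real syntax) might look like:

```txt
Microsoft-Windows-Security-Auditing,4624,User: %TargetUserName%
```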
## Level Abbreviations
In order to save space, we use the following abbreviations when displaying the alert `level`.
## MITRE ATT&CK Tactics Abbreviations
In order to save space, we use the following abbreviations when displaying MITRE ATT&CK tactic tags.
You can freely edit these abbreviations in the `./config/output_tag.txt` configuration file.
If you want to output all the tags defined in a rule, please specify the `--all-tags` option.
* `Recon` : Reconnaissance
## Channel Abbreviations
In order to save space, we use the following abbreviations when displaying Channel.
You can freely edit these abbreviations in the `./rules/config/channel_abbreviations.txt` configuration file.
* `App` : `Application`
* `AppLocker` : `Microsoft-Windows-AppLocker/*`
The alerts will be outputted in color based on the alert `level`.
You can change the default colors in the config file at `./config/level_color.txt` in the format of `level,(RGB 6-digit ColorHex)`.
If you want to disable color output, you can use the `--no-color` option.
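Following the `level,(RGB 6-digit ColorHex)` format, the file consists of lines like these (the hex values shown are illustrative, not the shipped defaults):

```txt
critical,ff0000
high,ff8800
medium,ffff00
low,00ff00
informational,ffffff
```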
## Results Summary
### Event Frequency Timeline
If you add `-V` or `--visualize-timeline` option, the Event Frequency Timeline feature displays a sparkline frequency timeline of detected events.
Note: There needs to be more than 5 events. Also, the characters will not render correctly on the default Command Prompt or PowerShell Prompt, so please use a terminal like Windows Terminal, iTerm2, etc...
### Dates with most total detections
A summary of the dates with the most total detections categorized by level (`critical`, `high`, etc...).
### Top 5 computers with most unique detections
The top 5 computers with the most unique detections categorized by level (`critical`, `high`, etc...).
Hayabusa rules are designed solely for Windows event log analysis and have the following benefits:
Like firewalls and IDSes, any signature-based tool will require some tuning to fit your environment so you may need to permanently or temporarily exclude certain rules.
You can add a rule ID (Example: `4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6`) to `./rules/config/exclude_rules.txt` in order to ignore any rule that you do not need or cannot be used.
You can also add a rule ID to `./rules/config/noisy_rules.txt` in order to ignore the rule by default but still be able to use the rule with the `-n` or `--enable-noisy-rules` option.
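Assuming both files list one rule ID per line, excluding the example rule above would mean adding a line like this to `./rules/config/exclude_rules.txt`:

```txt
4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6
```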
## Detection Level Tuning
Hayabusa and Sigma rule authors will determine the risk level of the alert when writing their rules.
However, the actual risk level will differ between environments.
You can tune the risk level of the rules by adding them to `./rules/config/level_tuning.txt` and executing `hayabusa-1.5.1-win-x64.exe --level-tuning` which will update the `level` line in the rule file.
Please note that the rule file will be updated directly.
`./rules/config/level_tuning.txt` sample line:
In this case, the risk level of the rule with an `id` of `00000000-0000-0000-0000-000000000000` will be updated.
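For reference, a hypothetical `level_tuning.txt` line (assuming a comma-separated `id,new_level` layout) would look like:

```txt
00000000-0000-0000-0000-000000000000,informational
```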
## Event ID Filtering
By default, events are filtered by ID to improve performance by ignoring events that have no detection rules.
The IDs defined in `./rules/config/target_event_IDs.txt` will be scanned.
If you want to scan all events, please use the `-D, --deep-scan` option.
# Other Windows Event Log Analyzers and Related Resources
There is no "one tool to rule them all" and we have found that each has its own strengths and weaknesses.
* [APT-Hunter](https://github.com/ahmedkhlief/APT-Hunter) - Attack detection tool written in Python.
* [Awesome Event IDs](https://github.com/stuhli/awesome-event-ids) - Collection of Event ID resources useful for Digital Forensics and Incident Response
* [Chainsaw](https://github.com/countercept/chainsaw) - Another sigma-based attack detection tool written in Rust.
* [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI) - Attack detection tool written in Powershell by [Eric Conrad](https://twitter.com/eric_conrad).
* [Epagneul](https://github.com/jurelou/epagneul) - Graph visualization for Windows event logs.
* [EventList](https://github.com/miriamxyra/EventList/) - Map security baseline event IDs to MITRE ATT&CK by [Miriam Wiesner](https://github.com/miriamxyra).
To create the most forensic evidence and detect with the highest accuracy, you need to enable the proper audit settings beforehand.
## English
* 2022/06/19 [Velociraptor Walkthrough and Hayabusa Integration](https://www.youtube.com/watch?v=Q1IoGX--814) by [Eric Capuano](https://twitter.com/eric_capuano)
* 2022/01/24 [Graphing Hayabusa results in neo4j](https://www.youtube.com/watch?v=7sQqz2ek-ko) by Matthew Seyer ([@forensic_matt](https://twitter.com/forensic_matt))
## Japanese
Hayabusa is released under [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html).
# Twitter
You can receive the latest news about Hayabusa, rule updates, other Yamato Security tools, etc... by following us on Twitter at [@SecurityYamato](https://twitter.com/SecurityYamato).
`build.rs` (new file):
```rust
fn main() {
    // On Windows, statically link the VC runtime so that users do not need
    // the Visual C++ redistributable package installed.
    #[cfg(target_os = "windows")]
    static_vcruntime::metabuild();
}
```
Removed file (`Channel,Abbreviation` mapping):
Channel,Abbreviation
Application,App
DNS Server,DNS-Svr
Key Management Service,KeyMgtSvc
Microsoft-ServiceBus-Client,SvcBusCli
Microsoft-Windows-CodeIntegrity/Operational,CodeInteg
Microsoft-Windows-LDAP-Client/Debug,LDAP-Cli
Microsoft-Windows-AppLocker/MSI and Script,AppLocker
Microsoft-Windows-AppLocker/EXE and DLL,AppLocker
Microsoft-Windows-AppLocker/Packaged app-Deployment,AppLocker
Microsoft-Windows-AppLocker/Packaged app-Execution,AppLocker
Microsoft-Windows-Bits-Client/Operational,BitsCli
Microsoft-Windows-DHCP-Server/Operational,DHCP-Svr
Microsoft-Windows-DriverFrameworks-UserMode/Operational,DvrFmwk
Microsoft-Windows-NTLM/Operational,NTLM
Microsoft-Windows-Security-Mitigations/KernelMode,SecMitig
Microsoft-Windows-Security-Mitigations/UserMode,SecMitig
Microsoft-Windows-SmbClient/Security,SmbCliSec
Microsoft-Windows-Sysmon/Operational,Sysmon
Microsoft-Windows-TaskScheduler/Operational,TaskSch
Microsoft-Windows-TerminalServices-RDPClient/Operational,RDP-Client
Microsoft-Windows-PrintService/Admin,PrintAdm
Microsoft-Windows-PrintService/Operational,PrintOp
Microsoft-Windows-PowerShell/Operational,PwSh
Microsoft-Windows-Windows Defender/Operational,Defender
Microsoft-Windows-Windows Firewall With Advanced Security/Firewall,Firewall
Microsoft-Windows-WinRM/Operational,WinRM
Microsoft-Windows-WMI-Activity/Operational,WMI
MSExchange Management,Exchange
OpenSSH/Operational,OpenSSH
Security,Sec
System,Sys
Windows PowerShell,PwShClassic
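Since the table above is a plain `Channel,Abbreviation` CSV, other tooling can apply the same shortening. A sketch in shell using a small inline excerpt of the table (the lookup logic is illustrative, not hayabusa's implementation):

```bash
# Look up the abbreviation for a channel name in a Channel,Abbreviation CSV.
abbreviate() {
  # Skip the header row (NR > 1) and print the second column when the
  # first column matches the requested channel name.
  awk -F, -v ch="$1" 'NR > 1 && $1 == ch { print $2 }' <<'EOF'
Channel,Abbreviation
Microsoft-Windows-Sysmon/Operational,Sysmon
Security,Sec
System,Sys
EOF
}

abbreviate "Security"                               # prints: Sec
abbreviate "Microsoft-Windows-Sysmon/Operational"   # prints: Sysmon
```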
`config/default_profile.yaml` (new file):
```yaml
---
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
```
`config/profiles.yaml` (new file):
#Standard profile minus MITRE ATT&CK Tactics and Record ID.
minimal:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
standard:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
#Standard profile plus MitreTags(MITRE techniques, software and groups), rule filename and EVTX filename.
verbose:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
MitreTags: "%MitreTags%"
OtherTags: "%OtherTags%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
#Verbose profile with all field information instead of the minimal fields defined in Details.
verbose-all-field-info:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
MitreTags: "%MitreTags%"
OtherTags: "%OtherTags%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
AllFieldInfo: "%RecordInformation%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
#Verbose profile plus all field information. (Warning: this will more than double the output file size!)
verbose-details-and-all-field-info:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
MitreTags: "%MitreTags%"
OtherTags: "%OtherTags%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
AllFieldInfo: "%RecordInformation%"
#Output that is compatible to import the CSV into Timesketch
timesketch:
datetime: "%Timestamp%"
timestamp_desc: "hayabusa"
message: "%RuleTitle%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
MitreTactics: "%MitreTactics%"
MitreTags: "%MitreTags%"
OtherTags: "%OtherTags%"
RecordID: "%RecordID%"
Details: "%Details%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
AllFieldInfo: "%RecordInformation%"
Removed file (`eventid,event_title` mapping):
eventid,event_title
6406,%1 registered to Windows Firewall to control filtering for the following: %2
1,Process Creation.
2,File Creation Timestamp Changed. (Possible Timestomping)
3,Network Connection.
4,Sysmon Service State Changed.
5,Process Terminated.
6,Driver Loaded.
7,Image Loaded.
8,Remote Thread Created. (Possible Code Injection)
9,Raw Access Read.
10,Process Access.
11,File Creation or Overwrite.
12,Registry Object Created/Deleted.
13,Registry Value Set.
14,Registry Key or Value Rename.
15,Alternate Data Stream Created.
16,Sysmon Service Configuration Changed.
17,Named Pipe Created.
18,Named Pipe Connection.
19,WmiEventFilter Activity.
20,WmiEventConsumer Activity.
21,WmiEventConsumerToFilter Activity.
22,DNS Query.
23,Deleted File Archived.
24,Clipboard Changed.
25,Process Tampering. (Possible Process Hollowing or Herpaderping)
26,File Deleted.
27,KDC Encryption Type Configuration
31,Windows Update Failed
34,Windows Update Failed
35,Windows Update Failed
43,New Device Information
81,Processing client request for operation CreateShell
82,Entering the plugin for operation CreateShell with a ResourceURI
104,Event Log was Cleared
106,A task has been scheduled
134,Sending response for operation CreateShell
169,Creating WSMan Session (on Server)
255,Sysmon Error.
400,New Mass Storage Installation
410,New Mass Storage Installation
800,Summary of Software Activities
903,New Application Installation
904,New Application Installation
905,Updated Application
906,Updated Application
907,Removed Application
908,Removed Application
1001,BSOD
1005,Scan Failed
1006,Detected Malware
1008,Action on Malware Failed
1009,Hotpatching Failed
1010,Failed to remove item from quarantine
1022,New MSI File Installed
1033,New MSI File Installed
1100,The event logging service has shut down
1101,Audit events have been dropped by the transport.
1102,The audit log was cleared
1104,The security Log is now full
1105,Event log automatic backup
1108,The event logging service encountered an error
1125,Group Policy: Internal Error
1127,Group Policy: Generic Internal Error
1129,Group Policy: Group Policy Application Failed due to Connectivity
1149,User authentication succeeded
2001,Failed to update signatures
2003,Failed to update engine
2004,Firewall Rule Add
2004,Reverting to last known good set of signatures
2005,Firewall Rule Change
2006,Firewall Rule Deleted
2009,Firewall Failed to load Group Policy
2033,Firewall Rule Deleted
3001,Code Integrity Check Warning
3002,Code Integrity Check Warning
3002,Real-Time Protection failed
3003,Code Integrity Check Warning
3004,Code Integrity Check Warning
3010,Code Integrity Check Warning
3023,Code Integrity Check Warning
4103,Module logging. Executing Pipeline.
4104,Script Block Logging.
4105,CommandStart - Started
4106,CommandStart - Stopped
4608,Windows is starting up
4609,Windows is shutting down
4610,An authentication package has been loaded by the Local Security Authority
4611,A trusted logon process has been registered with the Local Security Authority
4612,"Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits."
4614,A notification package has been loaded by the Security Account Manager.
4615,Invalid use of LPC port
4616,The system time was changed.
4618,A monitored security event pattern has occurred
4621,Administrator recovered system from CrashOnAuditFail
4622,A security package has been loaded by the Local Security Authority.
4624,Logon Success
4625,Logon Failure
4627,Group Membership Information
4634,Account Logoff
4646,IKE DoS-prevention mode started
4647,User initiated logoff
4648,Explicit Logon
4649,A replay attack was detected
4650,An IPsec Main Mode security association was established
4651,An IPsec Main Mode security association was established
4652,An IPsec Main Mode negotiation failed
4653,An IPsec Main Mode negotiation failed
4654,An IPsec Quick Mode negotiation failed
4655,An IPsec Main Mode security association ended
4656,A handle to an object was requested
4657,A registry value was modified
4658,The handle to an object was closed
4659,A handle to an object was requested with intent to delete
4660,An object was deleted
4661,A handle to an object was requested
4662,An operation was performed on an object
4663,An attempt was made to access an object
4664,An attempt was made to create a hard link
4665,An attempt was made to create an application client context.
4666,An application attempted an operation
4667,An application client context was deleted
4668,An application was initialized
4670,Permissions on an object were changed
4671,An application attempted to access a blocked ordinal through the TBS
4672,Admin Logon
4673,A privileged service was called
4674,An operation was attempted on a privileged object
4675,SIDs were filtered
4685,The state of a transaction has changed
4688,Process Creation.
4689,A process has exited
4690,An attempt was made to duplicate a handle to an object
4691,Indirect access to an object was requested
4692,Backup of data protection master key was attempted
4693,Recovery of data protection master key was attempted
4694,Protection of auditable protected data was attempted
4695,Unprotection of auditable protected data was attempted
4696,A primary token was assigned to process
4697,A service was installed in the system
4698,A scheduled task was created
4699,A scheduled task was deleted
4700,A scheduled task was enabled
4701,A scheduled task was disabled
4702,A scheduled task was updated
4704,A user right was assigned
4705,A user right was removed
4706,A new trust was created to a domain
4707,A trust to a domain was removed
4709,IPsec Services was started
4710,IPsec Services was disabled
4711,PAStore Engine
4712,IPsec Services encountered a potentially serious failure
4713,Kerberos policy was changed
4714,Encrypted data recovery policy was changed
4715,The audit policy (SACL) on an object was changed
4716,Trusted domain information was modified
4717,System security access was granted to an account
4718,System security access was removed from an account
4719,System audit policy was changed
4720,A user account was created
4722,A user account was enabled
4723,An attempt was made to change an account's password
4724,An attempt was made to reset an account's password
4725,A user account was disabled
4726,A user account was deleted
4727,A security-enabled global group was created
4728,A member was added to a security-enabled global group
4729,A member was removed from a security-enabled global group
4730,A security-enabled global group was deleted
4731,A security-enabled local group was created
4732,A member was added to a security-enabled local group
4733,A member was removed from a security-enabled local group
4734,A security-enabled local group was deleted
4735,A security-enabled local group was changed
4737,A security-enabled global group was changed
4738,A user account was changed
4739,Domain Policy was changed
4740,A user account was locked out
4741,A computer account was created
4742,A computer account was changed
4743,A computer account was deleted
4744,A security-disabled local group was created
4745,A security-disabled local group was changed
4746,A member was added to a security-disabled local group
4747,A member was removed from a security-disabled local group
4748,A security-disabled local group was deleted
4749,A security-disabled global group was created
4750,A security-disabled global group was changed
4751,A member was added to a security-disabled global group
4752,A member was removed from a security-disabled global group
4753,A security-disabled global group was deleted
4754,A security-enabled universal group was created
4755,A security-enabled universal group was changed
4756,A member was added to a security-enabled universal group
4757,A member was removed from a security-enabled universal group
4758,A security-enabled universal group was deleted
4759,A security-disabled universal group was created
4760,A security-disabled universal group was changed
4761,A member was added to a security-disabled universal group
4762,A member was removed from a security-disabled universal group
4763,A security-disabled universal group was deleted
4764,A group's type was changed
4765,SID History was added to an account
4766,An attempt to add SID History to an account failed
4767,A user account was unlocked
4768,A Kerberos authentication ticket (TGT) was requested
4769,A Kerberos service ticket was requested
4770,A Kerberos service ticket was renewed
4771,Kerberos pre-authentication failed
4772,A Kerberos authentication ticket request failed
4773,A Kerberos service ticket request failed
4774,An account was mapped for logon
4775,An account could not be mapped for logon
4776,The domain controller attempted to validate the credentials for an account
4777,The domain controller failed to validate the credentials for an account
4778,A session was reconnected to a Window Station
4779,A session was disconnected from a Window Station
4780,The ACL was set on accounts which are members of administrators groups
4781,The name of an account was changed
4782,The password hash of an account was accessed
4783,A basic application group was created
4784,A basic application group was changed
4785,A member was added to a basic application group
4786,A member was removed from a basic application group
4787,A non-member was added to a basic application group
4788,A non-member was removed from a basic application group
4789,A basic application group was deleted
4790,An LDAP query group was created
4791,A basic application group was changed
4792,An LDAP query group was deleted
4793,The Password Policy Checking API was called
4794,An attempt was made to set the Directory Services Restore Mode administrator password
4800,The workstation was locked
4801,The workstation was unlocked
4802,The screen saver was invoked
4803,The screen saver was dismissed
4816,RPC detected an integrity violation while decrypting an incoming message
4817,Auditing settings on an object were changed
4864,A namespace collision was detected
4865,A trusted forest information entry was added
4866,A trusted forest information entry was removed
4867,A trusted forest information entry was modified
4868,The certificate manager denied a pending certificate request
4869,Certificate Services received a resubmitted certificate request
4870,Certificate Services revoked a certificate
4871,Certificate Services received a request to publish the certificate revocation list (CRL)
4872,Certificate Services published the certificate revocation list (CRL)
4873,A certificate request extension changed
4874,One or more certificate request attributes changed.
4875,Certificate Services received a request to shut down
4876,Certificate Services backup started
4877,Certificate Services backup completed
4878,Certificate Services restore started
4879,Certificate Services restore completed
4880,Certificate Services started
4881,Certificate Services stopped
4882,The security permissions for Certificate Services changed
4883,Certificate Services retrieved an archived key
4884,Certificate Services imported a certificate into its database
4885,The audit filter for Certificate Services changed
4886,Certificate Services received a certificate request
4887,Certificate Services approved a certificate request and issued a certificate
4888,Certificate Services denied a certificate request
4889,Certificate Services set the status of a certificate request to pending
4890,The certificate manager settings for Certificate Services changed.
4891,A configuration entry changed in Certificate Services
4892,A property of Certificate Services changed
4893,Certificate Services archived a key
4894,Certificate Services imported and archived a key
4895,Certificate Services published the CA certificate to Active Directory Domain Services
4896,One or more rows have been deleted from the certificate database
4897,Role separation enabled
4898,Certificate Services loaded a template
4899,A Certificate Services template was updated
4900,Certificate Services template security was updated
4902,The Per-user audit policy table was created
4904,An attempt was made to register a security event source
4905,An attempt was made to unregister a security event source
4906,The CrashOnAuditFail value has changed
4907,Auditing settings on an object were changed
4908,Special Groups Logon table modified
4909,The local policy settings for the TBS were changed
4910,The group policy settings for the TBS were changed
4912,Per User Audit Policy was changed
4928,An Active Directory replica source naming context was established
4929,An Active Directory replica source naming context was removed
4930,An Active Directory replica source naming context was modified
4931,An Active Directory replica destination naming context was modified
4932,Synchronization of a replica of an Active Directory naming context has begun
4933,Synchronization of a replica of an Active Directory naming context has ended
4934,Attributes of an Active Directory object were replicated
4935,Replication failure begins
4936,Replication failure ends
4937,A lingering object was removed from a replica
4944,The following policy was active when the Windows Firewall started
4945,A rule was listed when the Windows Firewall started
4946,A change has been made to Windows Firewall exception list. A rule was added
4947,A change has been made to Windows Firewall exception list. A rule was modified
4948,A change has been made to Windows Firewall exception list. A rule was deleted
4949,Windows Firewall settings were restored to the default values
4950,A Windows Firewall setting has changed
4951,A rule has been ignored because its major version number was not recognized by Windows Firewall
4952,Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall
4953,A rule has been ignored by Windows Firewall because it could not parse the rule
4954,Windows Firewall Group Policy settings has changed. The new settings have been applied
4956,Windows Firewall has changed the active profile
4957,Windows Firewall did not apply the following rule
4958,Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer
4960,IPsec dropped an inbound packet that failed an integrity check
4961,IPsec dropped an inbound packet that failed a replay check
4962,IPsec dropped an inbound packet that failed a replay check. The inbound packet had too low a sequence number to ensure it was not a replay
4963,IPsec dropped an inbound clear text packet that should have been secured
4964,Special groups have been assigned to a new logon
4965,IPsec received a packet from a remote computer with an incorrect Security Parameter Index (SPI).
4976,"During Main Mode negotiation, IPsec received an invalid negotiation packet."
4977,"During Quick Mode negotiation, IPsec received an invalid negotiation packet."
4978,"During Extended Mode negotiation, IPsec received an invalid negotiation packet."
4979,IPsec Main Mode and Extended Mode security associations were established
4980,IPsec Main Mode and Extended Mode security associations were established
4981,IPsec Main Mode and Extended Mode security associations were established
4982,IPsec Main Mode and Extended Mode security associations were established
4983,An IPsec Extended Mode negotiation failed
4984,An IPsec Extended Mode negotiation failed
4985,The state of a transaction has changed
5008,Unexpected Error
5024,The Windows Firewall Service has started successfully
5025,The Windows Firewall Service has been stopped
5027,The Windows Firewall Service was unable to retrieve the security policy from the local storage
5028,The Windows Firewall Service was unable to parse the new security policy.
5029,The Windows Firewall Service failed to initialize the driver
5030,The Windows Firewall Service failed to start
5031,The Windows Firewall Service blocked an application from accepting incoming connections on the network.
5032,Windows Firewall was unable to notify the user that it blocked an application from accepting incoming connections on the network
5033,The Windows Firewall Driver has started successfully
5034,The Windows Firewall Driver has been stopped
5035,The Windows Firewall Driver failed to start
5037,The Windows Firewall Driver detected a critical runtime error. Terminating
5038,Code integrity determined that the image hash of a file is not valid
5039,A registry key was virtualized.
5040,A change has been made to IPsec settings. An Authentication Set was added.
5041,A change has been made to IPsec settings. An Authentication Set was modified
5042,A change has been made to IPsec settings. An Authentication Set was deleted
5043,A change has been made to IPsec settings. A Connection Security Rule was added
5044,A change has been made to IPsec settings. A Connection Security Rule was modified
5045,A change has been made to IPsec settings. A Connection Security Rule was deleted
5046,A change has been made to IPsec settings. A Crypto Set was added
5047,A change has been made to IPsec settings. A Crypto Set was modified
5048,A change has been made to IPsec settings. A Crypto Set was deleted
5049,An IPsec Security Association was deleted
5050,An attempt to programmatically disable the Windows Firewall using a call to INetFwProfile
5051,A file was virtualized
5056,A cryptographic self test was performed
5057,A cryptographic primitive operation failed
5058,Key file operation
5059,Key migration operation
5060,Verification operation failed
5061,Cryptographic operation
5062,A kernel-mode cryptographic self test was performed
5063,A cryptographic provider operation was attempted
5064,A cryptographic context operation was attempted
5065,A cryptographic context modification was attempted
5066,A cryptographic function operation was attempted
5067,A cryptographic function modification was attempted
5068,A cryptographic function provider operation was attempted
5069,A cryptographic function property operation was attempted
5070,A cryptographic function property operation was attempted
5120,OCSP Responder Service Started
5121,OCSP Responder Service Stopped
5122,A Configuration entry changed in the OCSP Responder Service
5123,A configuration entry changed in the OCSP Responder Service
5124,A security setting was updated on OCSP Responder Service
5125,A request was submitted to OCSP Responder Service
5126,Signing Certificate was automatically updated by the OCSP Responder Service
5127,The OCSP Revocation Provider successfully updated the revocation information
5136,A directory service object was modified
5137,A directory service object was created
5138,A directory service object was undeleted
5139,A directory service object was moved
5140,A network share object was accessed
5141,A directory service object was deleted
5142,A network share object was added.
5143,A network share object was modified
5144,A network share object was deleted.
5145,A network share object was checked to see whether client can be granted desired access
5148,The Windows Filtering Platform has detected a DoS attack and entered a defensive mode; packets associated with this attack will be discarded.
5149,The DoS attack has subsided and normal processing is being resumed.
5150,The Windows Filtering Platform has blocked a packet.
5151,A more restrictive Windows Filtering Platform filter has blocked a packet.
5152,The Windows Filtering Platform blocked a packet
5153,A more restrictive Windows Filtering Platform filter has blocked a packet
5154,The Windows Filtering Platform has permitted an application or service to listen on a port for incoming connections
5155,The Windows Filtering Platform has blocked an application or service from listening on a port for incoming connections
5156,The Windows Filtering Platform has allowed a connection
5157,The Windows Filtering Platform has blocked a connection
5158,The Windows Filtering Platform has permitted a bind to a local port
5159,The Windows Filtering Platform has blocked a bind to a local port
5168,SPN check for SMB/SMB2 failed
5376,Credential Manager credentials were backed up
5377,Credential Manager credentials were restored from a backup
5378,The requested credentials delegation was disallowed by policy
5440,The following callout was present when the Windows Filtering Platform Base Filtering Engine started
5441,The following filter was present when the Windows Filtering Platform Base Filtering Engine started
5442,The following provider was present when the Windows Filtering Platform Base Filtering Engine started
5443,The following provider context was present when the Windows Filtering Platform Base Filtering Engine started
5444,The following sub-layer was present when the Windows Filtering Platform Base Filtering Engine started
5446,A Windows Filtering Platform callout has been changed
5447,A Windows Filtering Platform filter has been changed
5448,A Windows Filtering Platform provider has been changed
5449,A Windows Filtering Platform provider context has been changed
5450,A Windows Filtering Platform sub-layer has been changed
5451,An IPsec Quick Mode security association was established
5452,An IPsec Quick Mode security association ended
5453,An IPsec negotiation with a remote computer failed because the IKE and AuthIP IPsec Keying Modules (IKEEXT) service is not started
5456,PAStore Engine applied Active Directory storage IPsec policy on the computer
5457,PAStore Engine failed to apply Active Directory storage IPsec policy on the computer
5458,PAStore Engine applied locally cached copy of Active Directory storage IPsec policy on the computer
5459,PAStore Engine failed to apply locally cached copy of Active Directory storage IPsec policy on the computer
5460,PAStore Engine applied local registry storage IPsec policy on the computer
5461,PAStore Engine failed to apply local registry storage IPsec policy on the computer
5462,PAStore Engine failed to apply some rules of the active IPsec policy on the computer
5463,PAStore Engine polled for changes to the active IPsec policy and detected no changes
5464,"PAStore Engine polled for changes to the active IPsec policy, detected changes, and applied them to IPsec Services"
5465,PAStore Engine received a control for forced reloading of IPsec policy and processed the control successfully
5466,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory cannot be reached, and will use the cached copy of the Active Directory IPsec policy instead"
5467,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory can be reached, and found no changes to the policy"
5468,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory can be reached, found changes to the policy, and applied those changes"
5471,PAStore Engine loaded local storage IPsec policy on the computer
5472,PAStore Engine failed to load local storage IPsec policy on the computer
5473,PAStore Engine loaded directory storage IPsec policy on the computer
5474,PAStore Engine failed to load directory storage IPsec policy on the computer
5477,PAStore Engine failed to add quick mode filter
5478,IPsec Services has started successfully
5479,IPsec Services has been shut down successfully
5480,IPsec Services failed to get the complete list of network interfaces on the computer
5483,IPsec Services failed to initialize RPC server. IPsec Services could not be started
5484,IPsec Services has experienced a critical failure and has been shut down
5485,IPsec Services failed to process some IPsec filters on a plug-and-play event for network interfaces
6144,Security policy in the group policy objects has been applied successfully
6145,One or more errors occurred while processing security policy in the group policy objects
6272,Network Policy Server granted access to a user
6273,Network Policy Server denied access to a user
6274,Network Policy Server discarded the request for a user
6275,Network Policy Server discarded the accounting request for a user
6276,Network Policy Server quarantined a user
6277,Network Policy Server granted access to a user but put it on probation because the host did not meet the defined health policy
6278,Network Policy Server granted full access to a user because the host met the defined health policy
6279,Network Policy Server locked the user account due to repeated failed authentication attempts
6280,Network Policy Server unlocked the user account
6281,Code Integrity determined that the page hashes of an image file are not valid
6400,BranchCache: Received an incorrectly formatted response while discovering availability of content.
6401,BranchCache: Received invalid data from a peer. Data discarded.
6402,BranchCache: The message to the hosted cache offering it data is incorrectly formatted.
6403,BranchCache: The hosted cache sent an incorrectly formatted response to the client.
6404,BranchCache: Hosted cache could not be authenticated using the provisioned SSL certificate.
6405,BranchCache: %2 instance(s) of event id %1 occurred.
6407,1% (no more info in MSDN)
6408,Registered product %1 failed and Windows Firewall is now controlling the filtering for %2
6410,Code integrity determined that a file does not meet the security requirements to load into a process.
7022,Windows Service Fail or Crash
7023,The %1 service terminated with the following error: %2
7023,Windows Service Fail or Crash
7024,Windows Service Fail or Crash
7026,Windows Service Fail or Crash
7030,"The service is marked as an interactive service. However, the system is configured to not allow interactive services. This service may not function properly."
7031,Windows Service Fail or Crash
7032,Windows Service Fail or Crash
7034,Windows Service Fail or Crash
7035,The %1 service was successfully sent a %2 control.
7036,The service entered the running/stopped state
7040,The start type of the %1 service was changed from %2 to %3.
7045,New Windows Service
8000,Starting a Wireless Connection
8001,Successfully connected to Wireless connection
8002,Wireless Connection Failed
8003,AppLocker Block Error
8003,Disconnected from Wireless connection
8004,AppLocker Block Warning
8005,AppLocker permitted the execution of a PowerShell script
8006,AppLocker Warning Error
8007,AppLocker Warning
8011,Starting a Wireless Connection
10000,Network Connection and Disconnection Status (Wired and Wireless)
10001,Network Connection and Disconnection Status (Wired and Wireless)
11000,Wireless Association Status
11001,Wireless Association Status
11002,Wireless Association Status
11004,"Wireless Security Started, Stopped, Successful, or Failed"
11005,"Wireless Security Started, Stopped, Successful, or Failed"
11006,"Wireless Security Started, Stopped, Successful, or Failed"
11010,"Wireless Security Started, Stopped, Successful, or Failed"
12011,Wireless Authentication Started and Failed
12012,Wireless Authentication Started and Failed
12013,Wireless Authentication Started and Failed
unregistered_event_id,Unknown
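For reference, a lookup table like the one above can be loaded with a few lines of Rust. This is a minimal sketch rather than Hayabusa's actual implementation (the function name `parse_event_id_csv` is illustrative); it assumes the two-column `id,description` layout shown above, where descriptions that contain commas are double-quoted.

```rust
use std::collections::HashMap;

// Build an event-ID -> description map from the two-column CSV above.
// Splitting on the first comma only keeps quoted descriptions intact.
fn parse_event_id_csv(data: &str) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for line in data.lines() {
        if let Some((id, desc)) = line.split_once(',') {
            // Descriptions containing commas are wrapped in double quotes.
            let desc = desc.trim().trim_matches('"').to_string();
            map.insert(id.trim().to_string(), desc);
        }
    }
    map
}

fn main() {
    let data = "4720,A user account was created\n\
                4976,\"During Main Mode negotiation, IPsec received an invalid negotiation packet.\"";
    let map = parse_event_id_csv(data);
    assert_eq!(map["4720"], "A user account was created");
    assert_eq!(
        map["4976"],
        "During Main Mode negotiation, IPsec received an invalid negotiation packet."
    );
    println!("loaded {} event IDs", map.len());
}
```

Splitting only on the first comma sidesteps full CSV quoting rules, which is enough for this two-column layout; a real implementation would more likely use a CSV crate such as `csv`.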

View File

@@ -1,154 +0,0 @@
1
10
1000
1001
1006
1013
1015
1031
1032
1033
1034
104
106
11
1102
1116
1116
1117
1121
12
13
14
15
150
16
17
18
19
20
2003
21
2100
2102
213
217
22
23
24
255
257
26
3
30
300
301
302
316
31017
354
4
400
400
403
40300
40301
40302
4100
4103
4104
4611
4616
4624
4625
4634
4647
4648
4656
4657
4658
4660
4661
4662
4663
4672
4673
4674
4688
4689
4692
4697
4698
4699
4701
4703
4704
4706
4719
4720
4728
4732
4738
4742
4765
4766
4768
4769
4771
4776
4781
4794
4799
4825
4898
4899
4904
4905
4909
5
50
5001
5007
5010
5012
5013
5038
5101
5136
5140
5142
5145
5156
517
524
528
529
55
56
5829
5859
5861
59
6
600
6281
6416
675
7
70
7036
7040
7045
770
8
800
8001
8002
8004
8007
808
823
848
849
9
98

View File

@@ -1,6 +1,7 @@
Hayabusa was possible thanks to the following people (in alphabetical order):
Akira Nishikawa (@nishikawaakira): Previous lead developer, core hayabusa rule support, etc...
Fukusuke Takahashi (@fukusuket): Static compiling for Windows, race condition and other bug fixes.
Garigariganzy (@garigariganzy31): Developer, event ID statistics implementation, etc...
ItiB (@itiB_S144): Core developer, sigmac hayabusa backend, rule creation, etc...
James Takai / hachiyone(@hach1yon): Current lead developer, tokio multi-threading, sigma aggregation logic, sigmac backend, rule creation, sigma count implementation etc…

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@@ -0,0 +1,80 @@
# Importing Hayabusa Results Into Timesketch
## About
"[Timesketch](https://timesketch.org/) is an open-source tool for collaborative forensic timeline analysis. Using sketches you and your collaborators can easily organize your timelines and analyze them all at the same time. Add meaning to your raw data with rich annotations, comments, tags and stars."
## Installing
We recommend using the Ubuntu 22.04 LTS Server edition.
You can download it [here](https://ubuntu.com/download/server).
Choose the minimal install when setting it up.
You won't have `ifconfig` available, so install it with `sudo apt install net-tools`.
After that, follow the install instructions [here](https://timesketch.org/guides/admin/install/):
``` bash
sudo apt install docker-compose
curl -s -O https://raw.githubusercontent.com/google/timesketch/master/contrib/deploy_timesketch.sh
chmod 755 deploy_timesketch.sh
cd /opt
sudo ~/deploy_timesketch.sh
cd timesketch
sudo docker-compose up -d
sudo docker-compose exec timesketch-web tsctl create-user <USERNAME>
```
## Prepared VM
We have pre-built a demo VM that you can use against the 2022 DEF CON 30 [OpenSOC](https://opensoc.io/) DFIR Challenge evidence hosted by [Recon InfoSec](https://www.reconinfosec.com/). (The evidence has already been imported.)
You can download it [here](https://www.dropbox.com/s/3be3s5c2r22ux2z/Prebuilt-Timesketch.ova?dl=0).
You can find the other evidence for this challenge [here](https://docs.google.com/document/d/1XM4Gfdojt8fCn_9B8JKk9bcUTXZc0_hzWRUH4mEr7dw/mobilebasic) and questions [here](https://docs.google.com/spreadsheets/d/1vKn8BgABuJsqH5WhhS9ebIGTBG4aoP-StINRi18abo4/htmlview).
The username for the VM is `user` and password is `password`.
## Logging in
Find out the IP address with `ifconfig` and open it with a web browser.
You will be redirected to a login page as shown below:
![Timesketch Login](01-TimesketchLogin.png)
Log in with the user credentials you created with the docker-compose command.
## Create a new sketch
Click on `New investigation` and create a name for the new sketch:
![New Investigation](02-NewInvestigation.png)
## Upload timeline
Click `Upload timeline` and upload a CSV file that you created with the following command:
`hayabusa-1.5.1-win-x64.exe -d ../hayabusa-sample-evtx --RFC-3339 -o timesketch-import.csv -P timesketch -U`
You can add `-m low` if you just want alerts and not include Windows events.
## Analyzing results
You should get the following screen:
![Timesketch timeline](03-TimesketchTimeline.png)
By default, only the UTC timestamp and alert rule title will be displayed so click `Customize columns` to add more fields.
> Warning: In the current version, there is a bug where a newly added column is displayed blank. To display new columns, first add another column (and then delete it afterwards if not needed).
You can also filter on fields in the searchbox, such as `Level: crit` to only show critical alerts.
![Timeline with columns](04-TimelineWithColumns.png)
If you click on an event, you can see all of the field information:
![Field Information](05-FieldInformation.png)
With the three icons to the left of the alert title, you can star events of interest, search ±5 minutes to see the context of an event, and add labels.
![Marking Events](06-MarkingEvents.png)

View File

@@ -0,0 +1,80 @@
# How to Import Hayabusa Results Into Timesketch
## About Timesketch
"[Timesketch](https://timesketch.org/) is an open-source tool for collaborative forensic timeline analysis. Using sketches, you and your collaborators can easily organize your timelines and analyze them all at the same time. Add meaning to your raw data with rich annotations, comments, tags and stars."
## Installing
We recommend using the Ubuntu 22.04 LTS Server edition.
You can download it [here](https://ubuntu.com/download/server).
Choose the minimal install when setting it up.
`ifconfig` is not installed by default, so install it with `sudo apt install net-tools`.
After that, follow the install instructions [here](https://timesketch.org/guides/admin/install/):
``` bash
sudo apt install docker-compose
curl -s -O https://raw.githubusercontent.com/google/timesketch/master/contrib/deploy_timesketch.sh
chmod 755 deploy_timesketch.sh
cd /opt
sudo ~/deploy_timesketch.sh
cd timesketch
sudo docker-compose up -d
sudo docker-compose exec timesketch-web tsctl create-user <USERNAME>
```
## Prepared VM
We have pre-built a demo VM that you can use against the 2022 DEF CON 30 [OpenSOC](https://opensoc.io/) DFIR Challenge evidence hosted by [Recon InfoSec](https://www.reconinfosec.com/). (The evidence has already been imported.)
You can download it [here](https://www.dropbox.com/s/3be3s5c2r22ux2z/Prebuilt-Timesketch.ova?dl=0).
You can find the other evidence for this challenge [here](https://docs.google.com/document/d/1XM4Gfdojt8fCn_9B8JKk9bcUTXZc0_hzWRUH4mEr7dw/mobilebasic) and the questions [here](https://docs.google.com/spreadsheets/d/1vKn8BgABuJsqH5WhhS9ebIGTBG4aoP-StINRi18abo4/htmlview).
The username for the VM is `user` and the password is `password`.
## Logging in
Find out the IP address with `ifconfig` and open it with a web browser.
You will be redirected to a login page as shown below:
![Timesketch Login](01-TimesketchLogin.png)
Log in with the user credentials you created with the docker-compose command.
## Creating a new sketch
Click on `New investigation` and give the new sketch a name:
![New Investigation](02-NewInvestigation.png)
## Uploading a timeline
Click `Upload timeline` and upload a CSV file created with the following command:
`hayabusa-1.5.1-win-x64.exe -d ../hayabusa-sample-evtx --RFC-3339 -o timesketch-import.csv -P timesketch -U`
If you only want alerts and do not want to include Windows events, you can add `-m low`.
## Analyzing results
You should see a screen like the following:
![Timesketch timeline](03-TimesketchTimeline.png)
By default, only the UTC timestamp and the alert rule title are displayed, so click `Customize columns` to add other fields.
> Warning: In the current version, there is a bug where a newly added column is displayed blank. To display new columns, first add another column (and then delete it afterwards if not needed).
You can filter to show only critical alerts by typing `Level: crit` or similar in the search box, as shown below.
![Timeline with columns](04-TimelineWithColumns.png)
If you click on an event, you can see all of the field information:
![Field Information](05-FieldInformation.png)
With the three icons to the left of the alert title, you can star events of interest, search ±5 minutes to see the context of an event, and add labels.
![Marking Events](06-MarkingEvents.png)

View File



Submodule rules updated: 8c14d12be3...856316374c

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


File diff suppressed because it is too large

View File

@@ -1,13 +1,13 @@
use crate::detections::message::AlertMessage;
use crate::detections::pivot::PivotKeyword;
use crate::detections::pivot::PIVOT_KEYWORD;
use crate::detections::print::AlertMessage;
use crate::detections::utils;
use chrono::{DateTime, Utc};
use clap::{App, CommandFactory, Parser};
use hashbrown::HashMap;
use hashbrown::HashSet;
use hashbrown::{HashMap, HashSet};
use lazy_static::lazy_static;
use regex::Regex;
use std::env::current_exe;
use std::path::PathBuf;
use std::sync::RwLock;
use terminal_size::{terminal_size, Height, Width};
@@ -32,6 +32,10 @@ lazy_static! {
pub static ref TERM_SIZE: Option<(Width, Height)> = terminal_size();
pub static ref TARGET_EXTENSIONS: HashSet<String> =
get_target_extensions(CONFIG.read().unwrap().args.evtx_file_ext.as_ref());
pub static ref CURRENT_EXE_PATH: PathBuf =
current_exe().unwrap().parent().unwrap().to_path_buf();
pub static ref EXCLUDE_STATUS: HashSet<String> =
convert_option_vecs_to_hs(CONFIG.read().unwrap().args.exclude_status.as_ref());
}
pub struct ConfigReader<'a> {
@@ -51,78 +55,74 @@ impl Default for ConfigReader<'_> {
#[derive(Parser)]
#[clap(
name = "Hayabusa",
usage = "hayabusa.exe -f file.evtx [OPTIONS] / hayabusa.exe -d evtx-directory [OPTIONS]",
usage = "hayabusa.exe <INPUT> [OTHER-ACTIONS] [OPTIONS]",
author = "Yamato Security (https://github.com/Yamato-Security/hayabusa) @SecurityYamato",
help_template = "\n{name} {version}\n{author}\n\n{usage-heading}\n {usage}\n\n{all-args}\n",
version,
term_width = 400
)]
pub struct Config {
/// Directory of multiple .evtx files
#[clap(short = 'd', long, value_name = "DIRECTORY")]
#[clap(help_heading = Some("INPUT"), short = 'd', long, value_name = "DIRECTORY")]
pub directory: Option<PathBuf>,
/// File path to one .evtx file
#[clap(short = 'f', long, value_name = "FILE_PATH")]
#[clap(help_heading = Some("INPUT"), short = 'f', long = "file", value_name = "FILE")]
pub filepath: Option<PathBuf>,
/// Print all field information
#[clap(short = 'F', long = "full-data")]
pub full_data: bool,
/// Specify a rule directory or file (default: ./rules)
/// Specify a custom rule directory or file (default: ./rules)
#[clap(
help_heading = Some("ADVANCED"),
short = 'r',
long,
default_value = "./rules",
hide_default_value = true,
value_name = "RULE_DIRECTORY/RULE_FILE"
value_name = "DIRECTORY/FILE"
)]
pub rules: PathBuf,
/// Specify custom rule config folder (default: ./rules/config)
/// Specify custom rule config directory (default: ./rules/config)
#[clap(
help_heading = Some("ADVANCED"),
short = 'c',
long,
long = "rules-config",
default_value = "./rules/config",
hide_default_value = true,
value_name = "RULE_CONFIG_DIRECTORY"
value_name = "DIRECTORY"
)]
pub config: PathBuf,
/// Save the timeline in CSV format (ex: results.csv)
#[clap(short = 'o', long, value_name = "CSV_TIMELINE")]
#[clap(help_heading = Some("OUTPUT"), short = 'o', long, value_name = "FILE")]
pub output: Option<PathBuf>,
/// Output all tags when saving to a CSV file
#[clap(long = "all-tags")]
pub all_tags: bool,
/// Do not display EventRecordID numbers
#[clap(short = 'R', long = "hide-record-id")]
pub hide_record_id: bool,
/// Output verbose information
#[clap(short = 'v', long)]
#[clap(help_heading = Some("DISPLAY-SETTINGS"), short = 'v', long)]
pub verbose: bool,
/// Output event frequency timeline
#[clap(short = 'V', long = "visualize-timeline")]
#[clap(help_heading = Some("DISPLAY-SETTINGS"), short = 'V', long = "visualize-timeline")]
pub visualize_timeline: bool,
/// Enable rules marked as deprecated
#[clap(short = 'D', long = "enable-deprecated-rules")]
#[clap(help_heading = Some("FILTERING"), long = "enable-deprecated-rules")]
pub enable_deprecated_rules: bool,
/// Disable event ID filter to scan all events
#[clap(help_heading = Some("FILTERING"), short = 'D', long = "deep-scan")]
pub deep_scan: bool,
/// Enable rules marked as noisy
#[clap(short = 'n', long = "enable-noisy-rules")]
#[clap(help_heading = Some("FILTERING"), short = 'n', long = "enable-noisy-rules")]
pub enable_noisy_rules: bool,
/// Update to the latest rules in the hayabusa-rules github repository
#[clap(short = 'u', long = "update-rules")]
#[clap(help_heading = Some("OTHER-ACTIONS"), short = 'u', long = "update-rules")]
pub update_rules: bool,
/// Minimum level for rules (default: informational)
#[clap(
help_heading = Some("FILTERING"),
short = 'm',
long = "min-level",
default_value = "informational",
@@ -132,85 +132,101 @@ pub struct Config {
pub min_level: String,
/// Analyze the local C:\Windows\System32\winevt\Logs folder
#[clap(short = 'l', long = "live-analysis")]
#[clap(help_heading = Some("INPUT"), short = 'l', long = "live-analysis")]
pub live_analysis: bool,
/// Start time of the event logs to load (ex: "2020-02-22 00:00:00 +09:00")
#[clap(long = "start-timeline", value_name = "START_TIMELINE")]
#[clap(help_heading = Some("FILTERING"), long = "timeline-start", value_name = "DATE")]
pub start_timeline: Option<String>,
/// End time of the event logs to load (ex: "2022-02-22 23:59:59 +09:00")
#[clap(long = "end-timeline", value_name = "END_TIMELINE")]
#[clap(help_heading = Some("FILTERING"), long = "timeline-end", value_name = "DATE")]
pub end_timeline: Option<String>,
/// Output timestamp in RFC 2822 format (ex: Fri, 22 Feb 2022 22:00:00 -0600)
#[clap(long = "RFC-2822")]
#[clap(help_heading = Some("TIME-FORMAT"), long = "RFC-2822")]
pub rfc_2822: bool,
/// Output timestamp in RFC 3339 format (ex: 2022-02-22 22:00:00.123456-06:00)
#[clap(long = "RFC-3339")]
#[clap(help_heading = Some("TIME-FORMAT"), long = "RFC-3339")]
pub rfc_3339: bool,
/// Output timestamp in US time format (ex: 02-22-2022 10:00:00.123 PM -06:00)
#[clap(long = "US-time")]
#[clap(help_heading = Some("TIME-FORMAT"), long = "US-time")]
pub us_time: bool,
/// Output timestamp in US military time format (ex: 02-22-2022 22:00:00.123 -06:00)
#[clap(long = "US-military-time")]
#[clap(help_heading = Some("TIME-FORMAT"), long = "US-military-time")]
pub us_military_time: bool,
/// Output timestamp in European time format (ex: 22-02-2022 22:00:00.123 +02:00)
#[clap(long = "European-time")]
#[clap(help_heading = Some("TIME-FORMAT"), long = "European-time")]
pub european_time: bool,
/// Output time in UTC format (default: local time)
#[clap(short = 'U', long = "UTC")]
#[clap(help_heading = Some("TIME-FORMAT"), short = 'U', long = "UTC")]
pub utc: bool,
/// Disable color output
#[clap(long = "no-color")]
#[clap(help_heading = Some("DISPLAY-SETTINGS"), long = "no-color")]
pub no_color: bool,
/// Thread number (default: optimal number for performance)
#[clap(short, long = "thread-number", value_name = "NUMBER")]
#[clap(help_heading = Some("ADVANCED"), short, long = "thread-number", value_name = "NUMBER")]
pub thread_number: Option<usize>,
/// Print statistics of event IDs
#[clap(short, long)]
#[clap(help_heading = Some("OTHER-ACTIONS"), short, long)]
pub statistics: bool,
/// Print a summary of successful and failed logons
#[clap(short = 'L', long = "logon-summary")]
#[clap(help_heading = Some("OTHER-ACTIONS"), short = 'L', long = "logon-summary")]
pub logon_summary: bool,
/// Tune alert levels (default: ./rules/config/level_tuning.txt)
#[clap(
help_heading = Some("OTHER-ACTIONS"),
long = "level-tuning",
default_value = "./rules/config/level_tuning.txt",
hide_default_value = true,
value_name = "LEVEL_TUNING_FILE"
value_name = "FILE"
)]
pub level_tuning: PathBuf,
pub level_tuning: Option<Option<String>>,
/// Quiet mode: do not display the launch banner
#[clap(short, long)]
#[clap(help_heading = Some("DISPLAY-SETTINGS"), short, long)]
pub quiet: bool,
/// Quiet errors mode: do not save error logs
#[clap(short = 'Q', long = "quiet-errors")]
#[clap(help_heading = Some("ADVANCED"), short = 'Q', long = "quiet-errors")]
pub quiet_errors: bool,
/// Create a list of pivot keywords
#[clap(short = 'p', long = "pivot-keywords-list")]
#[clap(help_heading = Some("OTHER-ACTIONS"), short = 'p', long = "pivot-keywords-list")]
pub pivot_keywords_list: bool,
/// Print the list of contributors
#[clap(long)]
#[clap(help_heading = Some("OTHER-ACTIONS"), long)]
pub contributors: bool,
/// Specify additional target file extensions (ex: evtx_data) (ex: evtx1 evtx2)
#[clap(long = "target-file-ext", multiple_values = true)]
#[clap(help_heading = Some("ADVANCED"), long = "target-file-ext", multiple_values = true)]
pub evtx_file_ext: Option<Vec<String>>,
/// Ignore rules according to status (ex: experimental) (ex: stable test)
#[clap(help_heading = Some("FILTERING"), long = "exclude-status", multiple_values = true, value_name = "STATUS")]
pub exclude_status: Option<Vec<String>>,
/// Specify output profile (minimal, standard, verbose, verbose-all-field-info, verbose-details-and-all-field-info)
#[clap(help_heading = Some("OUTPUT"), short = 'P', long = "profile")]
pub profile: Option<String>,
/// Set default output profile
#[clap(help_heading = Some("OTHER-ACTIONS"), long = "set-default-profile", value_name = "PROFILE")]
pub set_default_profile: Option<String>,
/// Do not display result summary
#[clap(help_heading = Some("DISPLAY-SETTINGS"), long = "no-summary")]
pub no_summary: bool,
}
impl ConfigReader<'_> {
@@ -228,8 +244,22 @@ impl ConfigReader<'_> {
app: build_cmd,
args: parse,
headless_help: String::default(),
event_timeline_config: load_eventcode_info("config/statistics_event_info.txt"),
target_eventids: load_target_ids("config/target_eventids.txt"),
event_timeline_config: load_eventcode_info(
utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"rules/config/statistics_event_info.txt",
)
.to_str()
.unwrap(),
),
target_eventids: load_target_ids(
utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"rules/config/target_event_IDs.txt",
)
.to_str()
.unwrap(),
),
}
}
}
@@ -447,7 +477,7 @@ pub fn load_pivot_keywords(path: &str) {
.write()
.unwrap()
.entry(map[0].to_string())
.or_insert(PivotKeyword::new());
.or_insert_with(PivotKeyword::new);
PIVOT_KEYWORD
.write()
@@ -461,12 +491,17 @@ pub fn load_pivot_keywords(path: &str) {
/// Returns the set of file extensions to scan, built from any extensions added via --target-file-ext
pub fn get_target_extensions(arg: Option<&Vec<String>>) -> HashSet<String> {
let mut target_file_extensions: HashSet<String> =
arg.unwrap_or(&Vec::new()).iter().cloned().collect();
let mut target_file_extensions: HashSet<String> = convert_option_vecs_to_hs(arg);
target_file_extensions.insert(String::from("evtx"));
target_file_extensions
}
/// Converts the contents of an Option<Vec<String>> into a HashSet
pub fn convert_option_vecs_to_hs(arg: Option<&Vec<String>>) -> HashSet<String> {
let ret: HashSet<String> = arg.unwrap_or(&Vec::new()).iter().cloned().collect();
ret
}
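The two helpers above can be exercised in isolation. Below is a minimal, standalone sketch (hypothetical function names, using the standard library's `HashSet` in place of hashbrown) showing that `evtx` is always included and any `--target-file-ext` values are merged in:

```rust
use std::collections::HashSet;

// Mirror of convert_option_vecs_to_hs, over std's HashSet.
fn to_extension_set(arg: Option<&Vec<String>>) -> HashSet<String> {
    arg.unwrap_or(&Vec::new()).iter().cloned().collect()
}

// Mirror of get_target_extensions: "evtx" is always scanned.
fn target_extensions(arg: Option<&Vec<String>>) -> HashSet<String> {
    let mut set = to_extension_set(arg);
    set.insert(String::from("evtx"));
    set
}

fn main() {
    let extra = vec![String::from("evtx_data")];
    let exts = target_extensions(Some(&extra));
    assert!(exts.contains("evtx"));
    assert!(exts.contains("evtx_data"));
    // With no --target-file-ext argument, only "evtx" remains.
    assert_eq!(target_extensions(None).len(), 1);
    println!("{:?}", exts);
}
```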
#[derive(Debug, Clone)]
pub struct EventInfo {
pub evttitle: String,


@@ -1,36 +1,46 @@
extern crate csv;
use crate::detections::configs;
use crate::detections::pivot::insert_pivot_keyword;
use crate::detections::print::AlertMessage;
use crate::detections::print::DetectInfo;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::MESSAGES;
use crate::detections::print::{CH_CONFIG, DEFAULT_DETAILS, IS_HIDE_RECORD_ID, TAGS_CONFIG};
use crate::detections::utils::{format_time, write_color_buffer};
use crate::options::profile::{
LOAEDED_PROFILE_ALIAS, PRELOAD_PROFILE, PRELOAD_PROFILE_REGEX, PROFILES,
};
use chrono::{TimeZone, Utc};
use itertools::Itertools;
use termcolor::{BufferWriter, Color, ColorChoice};
use crate::detections::message::AlertMessage;
use crate::detections::message::DetectInfo;
use crate::detections::message::ERROR_LOG_STACK;
use crate::detections::message::{CH_CONFIG, DEFAULT_DETAILS, TAGS_CONFIG};
use crate::detections::message::{
LOGONSUMMARY_FLAG, PIVOT_KEYWORD_LIST_FLAG, QUIET_ERRORS_FLAG, STATISTICS_FLAG,
};
use crate::detections::pivot::insert_pivot_keyword;
use crate::detections::rule;
use crate::detections::rule::AggResult;
use crate::detections::rule::RuleNode;
use crate::detections::utils::{get_serde_number_to_string, make_ascii_titlecase};
use crate::filter;
use crate::yaml::ParseYaml;
use hashbrown;
use hashbrown::HashMap;
use serde_json::Value;
use std::fmt::Write;
use std::path::Path;
use std::sync::Arc;
use tokio::{runtime::Runtime, spawn, task::JoinHandle};
use super::message;
use super::message::LEVEL_ABBR;
// Struct holding the data for one record from an event file
#[derive(Clone, Debug)]
pub struct EvtxRecordInfo {
pub evtx_filepath: String, // Path of the event file; used when writing logs
pub record: Value, // One record's worth of data serialized as JSON
pub data_string: String,
pub key_2_value: hashbrown::HashMap<String, String>,
pub key_2_value: HashMap<String, String>,
pub record_information: Option<String>,
}
@@ -119,13 +129,11 @@ impl Detection {
.filter_map(return_if_success)
.collect();
if !*LOGONSUMMARY_FLAG {
let _ = &rulefile_loader
.rule_load_cnt
.insert(String::from("rule parsing error"), parseerror_count);
Detection::print_rule_load_info(
&rulefile_loader.rulecounter,
&rulefile_loader.rule_load_cnt,
&rulefile_loader.rule_status_cnt,
&parseerror_count,
);
}
ret
@@ -199,34 +207,14 @@ impl Detection {
rule
}
/// Stores a record that matched a rule's conditions
fn insert_message(rule: &RuleNode, record_info: &EvtxRecordInfo) {
let tag_info: Vec<String> = match TAGS_CONFIG.is_empty() {
false => rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.iter()
.filter_map(|info| TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())))
.map(|str| str.to_owned())
.collect(),
true => rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.iter()
.map(
|info| match TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())) {
Some(s) => s.to_owned(),
_ => info.as_str().unwrap_or("").replace("attack.", ""),
},
)
.collect(),
};
let tag_info: &Vec<String> = &Detection::get_tag_info(rule);
let recinfo = record_info
.record_information
.as_ref()
.map(|recinfo| recinfo.to_string());
let rec_id = if !*IS_HIDE_RECORD_ID {
let rec_id = if LOAEDED_PROFILE_ALIAS.contains("%RecordID%") {
Some(
get_serde_number_to_string(&record_info.record["Event"]["System"]["EventRecordID"])
.unwrap_or_default(),
@@ -242,73 +230,316 @@ impl Detection {
.unwrap_or_default();
let eid = get_serde_number_to_string(&record_info.record["Event"]["System"]["EventID"])
.unwrap_or_else(|| "-".to_owned());
let default_output = DEFAULT_DETAILS
.get(&format!("{}_{}", provider, &eid))
.unwrap_or(&"-".to_string())
.to_string();
let default_output = match DEFAULT_DETAILS.get(&format!("{}_{}", provider, &eid)) {
Some(str) => str.to_owned(),
None => recinfo.as_ref().unwrap_or(&"-".to_string()).to_string(),
};
let opt_record_info = if LOAEDED_PROFILE_ALIAS.contains("%RecordInformation%") {
recinfo
} else {
None
};
let default_time = Utc.ymd(1970, 1, 1).and_hms(0, 0, 0);
let time = message::get_event_time(&record_info.record).unwrap_or(default_time);
let level = rule.yaml["level"].as_str().unwrap_or("-").to_string();
let mut profile_converter: HashMap<String, String> = HashMap::new();
for (_k, v) in PROFILES.as_ref().unwrap().iter() {
let tmp = v.as_str();
for target_profile in PRELOAD_PROFILE_REGEX.matches(tmp).into_iter() {
match PRELOAD_PROFILE[target_profile] {
"%Timestamp%" => {
profile_converter
.insert("%Timestamp%".to_string(), format_time(&time, false));
}
"%Computer%" => {
profile_converter.insert(
"%Computer%".to_string(),
record_info.record["Event"]["System"]["Computer"]
.to_string()
.replace('\"', ""),
);
}
"%Channel%" => {
profile_converter.insert(
"%Channel%".to_string(),
CH_CONFIG.get(ch_str).unwrap_or(ch_str).to_string(),
);
}
"%Level%" => {
profile_converter.insert(
"%Level%".to_string(),
LEVEL_ABBR.get(&level).unwrap_or(&level).to_string(),
);
}
"%EventID%" => {
profile_converter.insert("%EventID%".to_string(), eid.to_owned());
}
"%RecordID%" => {
profile_converter.insert(
"%RecordID%".to_string(),
rec_id.as_ref().unwrap_or(&"".to_string()).to_owned(),
);
}
"%RuleTitle%" => {
profile_converter.insert(
"%RuleTitle%".to_string(),
rule.yaml["title"].as_str().unwrap_or("").to_string(),
);
}
"%RecordInformation%" => {
profile_converter.insert(
"%RecordInformation%".to_string(),
opt_record_info
.as_ref()
.unwrap_or(&"-".to_string())
.to_owned(),
);
}
"%RuleFile%" => {
profile_converter.insert(
"%RuleFile%".to_string(),
Path::new(&rule.rulepath)
.file_name()
.unwrap_or_default()
.to_str()
.unwrap_or_default()
.to_string(),
);
}
"%EvtxFile%" => {
profile_converter.insert(
"%EvtxFile%".to_string(),
Path::new(&record_info.evtx_filepath)
.to_str()
.unwrap_or_default()
.to_string(),
);
}
"%MitreTactics%" => {
let tactics: &Vec<String> = &tag_info
.iter()
.filter(|x| TAGS_CONFIG.values().contains(x))
.map(|y| y.to_owned())
.collect();
profile_converter.insert("%MitreTactics%".to_string(), tactics.join(" : "));
}
"%MitreTags%" => {
let techniques: &Vec<String> = &tag_info
.iter()
.filter(|x| {
!TAGS_CONFIG.values().contains(x)
&& (x.starts_with("attack.t")
|| x.starts_with("attack.g")
|| x.starts_with("attack.s"))
})
.map(|y| {
let mut replaced_tag = y.replace("attack.", "");
make_ascii_titlecase(&mut replaced_tag)
})
.collect();
profile_converter.insert("%MitreTags%".to_string(), techniques.join(" : "));
}
"%OtherTags%" => {
let tags: &Vec<String> = &tag_info
.iter()
.filter(|x| {
!(TAGS_CONFIG.values().contains(x)
|| x.starts_with("attack.t")
|| x.starts_with("attack.g")
|| x.starts_with("attack.s"))
})
.map(|y| y.to_owned())
.collect();
profile_converter.insert("%OtherTags%".to_string(), tags.join(" : "));
}
_ => {}
}
}
}
let detect_info = DetectInfo {
filepath: record_info.evtx_filepath.to_string(),
rulepath: rule.rulepath.to_string(),
level: rule.yaml["level"].as_str().unwrap_or("-").to_string(),
rulepath: (&rule.rulepath).to_owned(),
ruletitle: rule.yaml["title"].as_str().unwrap_or("-").to_string(),
level: LEVEL_ABBR.get(&level).unwrap_or(&level).to_string(),
computername: record_info.record["Event"]["System"]["Computer"]
.to_string()
.replace('\"', ""),
eventid: eid,
channel: CH_CONFIG.get(ch_str).unwrap_or(ch_str).to_string(),
alert: rule.yaml["title"].as_str().unwrap_or("").to_string(),
detail: String::default(),
tag_info: tag_info.join(" | "),
record_information: recinfo,
record_id: rec_id,
record_information: opt_record_info,
ext_field: PROFILES.as_ref().unwrap().to_owned(),
};
MESSAGES.lock().unwrap().insert(
message::insert(
&record_info.record,
rule.yaml["details"]
.as_str()
.unwrap_or(&default_output)
.to_string(),
detect_info,
time,
&mut profile_converter,
false,
);
}
/// insert aggregation condition detection message to output stack
fn insert_agg_message(rule: &RuleNode, agg_result: AggResult) {
let tag_info: Vec<String> = rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.iter()
.filter_map(|info| TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())))
.map(|str| str.to_owned())
.collect();
let tag_info: &Vec<String> = &Detection::get_tag_info(rule);
let output = Detection::create_count_output(rule, &agg_result);
let rec_info = if configs::CONFIG.read().unwrap().args.full_data {
let rec_info = if LOAEDED_PROFILE_ALIAS.contains("%RecordInformation%") {
Option::Some(String::default())
} else {
Option::None
};
let rec_id = if !*IS_HIDE_RECORD_ID {
Some(String::default())
} else {
None
};
let mut profile_converter: HashMap<String, String> = HashMap::new();
let level = rule.yaml["level"].as_str().unwrap_or("-").to_string();
for (_k, v) in PROFILES.as_ref().unwrap().iter() {
let tmp = v.as_str();
for target_profile in PRELOAD_PROFILE_REGEX.matches(tmp).into_iter() {
match PRELOAD_PROFILE[target_profile] {
"%Timestamp%" => {
profile_converter.insert(
"%Timestamp%".to_string(),
format_time(&agg_result.start_timedate, false),
);
}
"%Computer%" => {
profile_converter.insert("%Computer%".to_string(), "-".to_owned());
}
"%Channel%" => {
profile_converter.insert("%Channel%".to_string(), "-".to_owned());
}
"%Level%" => {
profile_converter.insert(
"%Level%".to_string(),
LEVEL_ABBR.get(&level).unwrap_or(&level).to_string(),
);
}
"%EventID%" => {
profile_converter.insert("%EventID%".to_string(), "-".to_owned());
}
"%RecordID%" => {
profile_converter.insert("%RecordID%".to_string(), "".to_owned());
}
"%RuleTitle%" => {
profile_converter.insert(
"%RuleTitle%".to_string(),
rule.yaml["title"].as_str().unwrap_or("").to_string(),
);
}
"%RecordInformation%" => {
profile_converter.insert("%RecordInformation%".to_string(), "-".to_owned());
}
"%RuleFile%" => {
profile_converter.insert(
"%RuleFile%".to_string(),
Path::new(&rule.rulepath)
.file_name()
.unwrap_or_default()
.to_str()
.unwrap_or_default()
.to_string(),
);
}
"%EvtxFile%" => {
profile_converter.insert("%EvtxFile%".to_string(), "-".to_owned());
}
"%MitreTactics%" => {
let tactics: &Vec<String> = &tag_info
.iter()
.filter(|x| TAGS_CONFIG.values().contains(x))
.map(|y| y.to_owned())
.collect();
profile_converter.insert("%MitreTactics%".to_string(), tactics.join(" : "));
}
"%MitreTags%" => {
let techniques: &Vec<String> = &tag_info
.iter()
.filter(|x| {
!TAGS_CONFIG.values().contains(x)
&& (x.starts_with("attack.t")
|| x.starts_with("attack.g")
|| x.starts_with("attack.s"))
})
.map(|y| {
let mut replaced_tag = y.replace("attack.", "");
make_ascii_titlecase(&mut replaced_tag)
})
.collect();
profile_converter.insert("%MitreTags%".to_string(), techniques.join(" : "));
}
"%OtherTags%" => {
let tags: &Vec<String> = &tag_info
.iter()
.filter(|x| {
!(TAGS_CONFIG.values().contains(x)
|| x.starts_with("attack.t")
|| x.starts_with("attack.g")
|| x.starts_with("attack.s"))
})
.map(|y| y.to_owned())
.collect();
profile_converter.insert("%OtherTags%".to_string(), tags.join(" : "));
}
_ => {}
}
}
}
let detect_info = DetectInfo {
filepath: "-".to_owned(),
rulepath: rule.rulepath.to_owned(),
level: rule.yaml["level"].as_str().unwrap_or("").to_owned(),
rulepath: (&rule.rulepath).to_owned(),
ruletitle: rule.yaml["title"].as_str().unwrap_or("-").to_string(),
level: LEVEL_ABBR.get(&level).unwrap_or(&level).to_string(),
computername: "-".to_owned(),
eventid: "-".to_owned(),
channel: "-".to_owned(),
alert: rule.yaml["title"].as_str().unwrap_or("").to_owned(),
detail: output,
record_information: rec_info,
tag_info: tag_info.join(" : "),
record_id: rec_id,
ext_field: PROFILES.as_ref().unwrap().to_owned(),
};
MESSAGES
.lock()
.unwrap()
.insert_message(detect_info, agg_result.start_timedate)
message::insert(
&Value::default(),
rule.yaml["details"].as_str().unwrap_or("-").to_string(),
detect_info,
agg_result.start_timedate,
&mut profile_converter,
true,
)
}
/// Returns the contents of the rule's tags as a vector
fn get_tag_info(rule: &RuleNode) -> Vec<String> {
match TAGS_CONFIG.is_empty() {
false => rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.iter()
.map(|info| {
if let Some(tag) = TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default()))
{
tag.to_owned()
} else {
info.as_str().unwrap_or(&String::default()).to_owned()
}
})
.collect(),
true => rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.iter()
.map(
|info| match TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())) {
Some(s) => s.to_owned(),
_ => info.as_str().unwrap_or("").to_string(),
},
)
.collect(),
}
}
/// Returns the output string for the count portion of an aggregation condition detection
@@ -363,46 +594,83 @@ impl Detection {
rc: &HashMap<String, u128>,
ld_rc: &HashMap<String, u128>,
st_rc: &HashMap<String, u128>,
err_rc: &u128,
) {
if *STATISTICS_FLAG {
return;
}
let mut sorted_ld_rc: Vec<(&String, &u128)> = ld_rc.iter().collect();
sorted_ld_rc.sort_by(|a, b| a.0.cmp(b.0));
let args = &configs::CONFIG.read().unwrap().args;
sorted_ld_rc.into_iter().for_each(|(key, value)| {
// Titles are assumed to be ASCII, so capitalize the first letter
println!(
"{} rules: {}",
make_ascii_titlecase(key.clone().as_mut()),
value,
);
if value != &0_u128 {
let disable_flag = if key == "noisy" && !args.enable_noisy_rules {
" (Disabled)"
} else {
""
};
// Titles are assumed to be ASCII, so capitalize the first letter
println!(
"{} rules: {}{}",
make_ascii_titlecase(key.clone().as_mut()),
value,
disable_flag,
);
}
});
if err_rc != &0_u128 {
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
Some(Color::Red),
&format!("Rule parsing errors: {}", err_rc),
true,
)
.ok();
}
println!();
let mut sorted_st_rc: Vec<(&String, &u128)> = st_rc.iter().collect();
let total_loaded_rule_cnt: u128 = sorted_st_rc.iter().map(|(_, v)| v.to_owned()).sum();
sorted_st_rc.sort_by(|a, b| a.0.cmp(b.0));
sorted_st_rc.into_iter().for_each(|(key, value)| {
let rate = if value == &0_u128 {
0 as f64
} else {
(*value as f64) / (total_loaded_rule_cnt as f64) * 100.0
};
// Titles are assumed to be ASCII, so capitalize the first letter
println!(
"{} rules: {} ({:.2}%)",
make_ascii_titlecase(key.clone().as_mut()),
value,
rate
);
if value != &0_u128 {
let rate = (*value as f64) / (total_loaded_rule_cnt as f64) * 100.0;
let deprecated_flag = if key == "deprecated" && !args.enable_deprecated_rules {
" (Disabled)"
} else {
""
};
// Titles are assumed to be ASCII, so capitalize the first letter
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!(
"{} rules: {} ({:.2}%){}",
make_ascii_titlecase(key.clone().as_mut()),
value,
rate,
deprecated_flag
),
true,
)
.ok();
}
});
println!();
let mut sorted_rc: Vec<(&String, &u128)> = rc.iter().collect();
sorted_rc.sort_by(|a, b| a.0.cmp(b.0));
sorted_rc.into_iter().for_each(|(key, value)| {
println!("{} rules: {}", key, value);
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!("{} rules: {}", key, value),
true,
)
.ok();
});
println!("Total enabled detection rules: {}", total_loaded_rule_cnt);
println!();
}
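The per-status percentage printed above is a plain ratio formatted to two decimal places. A standalone sketch of that formatting step (the function name here is hypothetical):

```rust
// Format one line of the rule-status summary: count plus percentage of total.
fn rule_count_line(key: &str, value: u128, total: u128) -> String {
    let rate = (value as f64) / (total as f64) * 100.0;
    format!("{} rules: {} ({:.2}%)", key, value, rate)
}

fn main() {
    let line = rule_count_line("Experimental", 37, 250);
    assert_eq!(line, "Experimental rules: 37 (14.80%)");
    println!("{}", line);
}
```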

src/detections/message.rs (new file, 667 lines)

@@ -0,0 +1,667 @@
extern crate lazy_static;
use crate::detections::configs;
use crate::detections::configs::CURRENT_EXE_PATH;
use crate::detections::utils;
use crate::detections::utils::get_serde_number_to_string;
use crate::detections::utils::write_color_buffer;
use crate::options::profile::PROFILES;
use chrono::{DateTime, Local, Utc};
use dashmap::DashMap;
use hashbrown::HashMap;
use lazy_static::lazy_static;
use linked_hash_map::LinkedHashMap;
use regex::Regex;
use serde_json::Value;
use std::env;
use std::fs::create_dir;
use std::fs::File;
use std::io::BufWriter;
use std::io::{self, Write};
use std::path::Path;
use std::sync::Mutex;
use termcolor::{BufferWriter, ColorChoice};
#[derive(Debug, Clone)]
pub struct DetectInfo {
pub rulepath: String,
pub ruletitle: String,
pub level: String,
pub computername: String,
pub eventid: String,
pub detail: String,
pub record_information: Option<String>,
pub ext_field: LinkedHashMap<String, String>,
}
pub struct AlertMessage {}
lazy_static! {
#[derive(Debug, PartialEq, Eq, Ord, PartialOrd)]
pub static ref MESSAGES: DashMap<DateTime<Utc>, Vec<DetectInfo>> = DashMap::new();
pub static ref ALIASREGEX: Regex = Regex::new(r"%[a-zA-Z0-9-_\[\]]+%").unwrap();
pub static ref SUFFIXREGEX: Regex = Regex::new(r"\[([0-9]+)\]").unwrap();
pub static ref ERROR_LOG_PATH: String = format!(
"./logs/errorlog-{}.log",
Local::now().format("%Y%m%d_%H%M%S")
);
pub static ref QUIET_ERRORS_FLAG: bool = configs::CONFIG.read().unwrap().args.quiet_errors;
pub static ref ERROR_LOG_STACK: Mutex<Vec<String>> = Mutex::new(Vec::new());
pub static ref STATISTICS_FLAG: bool = configs::CONFIG.read().unwrap().args.statistics;
pub static ref LOGONSUMMARY_FLAG: bool = configs::CONFIG.read().unwrap().args.logon_summary;
pub static ref TAGS_CONFIG: HashMap<String, String> = create_output_filter_config(
utils::check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), "config/mitre_tactics.txt")
.to_str()
.unwrap(),
);
pub static ref CH_CONFIG: HashMap<String, String> = create_output_filter_config(
utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"rules/config/channel_abbreviations.txt"
)
.to_str()
.unwrap(),
);
pub static ref PIVOT_KEYWORD_LIST_FLAG: bool =
configs::CONFIG.read().unwrap().args.pivot_keywords_list;
pub static ref DEFAULT_DETAILS: HashMap<String, String> = get_default_details(&format!(
"{}/default_details.txt",
configs::CONFIG
.read()
.unwrap()
.args
.config
.as_path()
.display()
));
pub static ref LEVEL_ABBR: LinkedHashMap<String, String> = LinkedHashMap::from_iter([
("critical".to_string(), "crit".to_string()),
("high".to_string(), "high".to_string()),
("medium".to_string(), "med ".to_string()),
("low".to_string(), "low ".to_string()),
("informational".to_string(), "info".to_string()),
]);
pub static ref LEVEL_FULL: HashMap<String, String> = HashMap::from([
("crit".to_string(), "critical".to_string()),
("high".to_string(), "high".to_string()),
("med ".to_string(), "medium".to_string()),
("low ".to_string(), "low".to_string()),
("info".to_string(), "informational".to_string())
]);
}
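`LEVEL_ABBR` and `LEVEL_FULL` are inverses of each other; note the trailing spaces in `"med "` and `"low "`, which pad every abbreviation to exactly four characters so the level column stays aligned in console output. A standalone sketch with plain HashMaps (the helper name is hypothetical):

```rust
use std::collections::HashMap;

// Build the abbreviation map and its inverse; "med " and "low " are padded
// so every abbreviation is exactly four characters wide.
fn level_maps() -> (
    HashMap<&'static str, &'static str>,
    HashMap<&'static str, &'static str>,
) {
    let abbr = HashMap::from([
        ("critical", "crit"),
        ("high", "high"),
        ("medium", "med "),
        ("low", "low "),
        ("informational", "info"),
    ]);
    let full = abbr.iter().map(|(k, v)| (*v, *k)).collect();
    (abbr, full)
}

fn main() {
    let (abbr, full) = level_maps();
    for (name, short) in &abbr {
        assert_eq!(short.len(), 4); // fixed-width level column
        assert_eq!(full[short], *name); // LEVEL_FULL inverts LEVEL_ABBR
    }
    println!("medium -> {:?}", abbr["medium"]);
}
```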
/// Builds a HashMap from full tag names (read from the file at the given path) to the replacement strings used for display.
/// ex. attack.impact,Impact
pub fn create_output_filter_config(path: &str) -> HashMap<String, String> {
let mut ret: HashMap<String, String> = HashMap::new();
let read_result = utils::read_csv(path);
if read_result.is_err() {
AlertMessage::alert(read_result.as_ref().unwrap_err()).ok();
return HashMap::default();
}
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 2 {
return;
}
let tag_full_str = line[0].trim();
let tag_replace_str = line[1].trim();
ret.insert(tag_full_str.to_owned(), tag_replace_str.to_owned());
});
ret
}
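`create_output_filter_config` builds its map from two-column CSV rows such as `attack.impact,Impact`, silently skipping rows with the wrong column count. The row handling can be sketched without file I/O (`read_csv` is replaced here by an in-memory vector; the function name is hypothetical):

```rust
use std::collections::HashMap;

// Same row-filtering logic as create_output_filter_config, but over
// pre-parsed rows instead of a CSV file on disk.
fn build_filter_config(rows: Vec<Vec<String>>) -> HashMap<String, String> {
    let mut ret = HashMap::new();
    for line in rows {
        if line.len() != 2 {
            continue; // malformed rows are skipped, not treated as errors
        }
        ret.insert(line[0].trim().to_owned(), line[1].trim().to_owned());
    }
    ret
}

fn main() {
    let rows = vec![
        vec!["attack.impact".to_string(), "Impact".to_string()],
        vec!["bad-row".to_string()], // wrong column count: ignored
    ];
    let cfg = build_filter_config(rows);
    assert_eq!(cfg.get("attack.impact").map(String::as_str), Some("Impact"));
    assert_eq!(cfg.len(), 1);
}
```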
/// Registers a message. To support aggregation conditions, the output time is passed in as a DateTime rather than being taken from a record.
pub fn insert_message(detect_info: DetectInfo, event_time: DateTime<Utc>) {
let mut v = MESSAGES.entry(event_time).or_default();
let (_, info) = v.pair_mut();
info.push(detect_info);
}
/// Builds a message and registers it
pub fn insert(
event_record: &Value,
output: String,
mut detect_info: DetectInfo,
time: DateTime<Utc>,
profile_converter: &mut HashMap<String, String>,
is_agg: bool,
) {
if !is_agg {
let parsed_detail = parse_message(event_record, &output)
.chars()
.filter(|&c| !c.is_control())
.collect::<String>();
detect_info.detail = if parsed_detail.is_empty() {
"-".to_string()
} else {
parsed_detail
};
}
let mut exist_detail = false;
PROFILES.as_ref().unwrap().iter().for_each(|(_k, v)| {
if v.contains("%Details%") {
exist_detail = true;
}
});
if exist_detail {
profile_converter.insert("%Details%".to_string(), detect_info.detail.to_owned());
}
let mut tmp_converted_info: LinkedHashMap<String, String> = LinkedHashMap::new();
for (k, v) in &detect_info.ext_field {
let converted_reserve_info = convert_profile_reserved_info(v, profile_converter);
if v.contains("%RecordInformation%") || v.contains("%Details%") {
tmp_converted_info.insert(k.to_owned(), converted_reserve_info);
} else {
tmp_converted_info.insert(
k.to_owned(),
parse_message(event_record, &converted_reserve_info),
);
}
}
for (k, v) in tmp_converted_info {
detect_info.ext_field.insert(k, v);
}
insert_message(detect_info, time)
}
/// Converts the reserved aliases used in profiles into their resolved values
fn convert_profile_reserved_info(
output: &String,
config_reserved_info: &HashMap<String, String>,
) -> String {
let mut ret = output.to_owned();
config_reserved_info.iter().for_each(|(k, v)| {
ret = ret.replace(k, v);
});
ret
}
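`convert_profile_reserved_info` is a plain search-and-replace over the reserved aliases collected earlier into `profile_converter`. A minimal standalone sketch with hypothetical profile values:

```rust
use std::collections::HashMap;

// Replace every reserved alias (e.g. %Level%) with its resolved value.
fn convert_reserved(output: &str, reserved: &HashMap<String, String>) -> String {
    let mut ret = output.to_owned();
    for (k, v) in reserved {
        ret = ret.replace(k, v);
    }
    ret
}

fn main() {
    let mut reserved = HashMap::new();
    reserved.insert("%Level%".to_string(), "crit".to_string());
    reserved.insert("%Computer%".to_string(), "HOST01".to_string());
    let line = convert_reserved("%Level% on %Computer%", &reserved);
    assert_eq!(line, "crit on HOST01");
    println!("{}", line);
}
```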
/// Treats %-delimited parts of the message as aliases and replaces them with values looked up from the record
fn parse_message(event_record: &Value, output: &String) -> String {
let mut return_message = output.to_owned();
let mut hash_map: HashMap<String, String> = HashMap::new();
for caps in ALIASREGEX.captures_iter(&return_message) {
let full_target_str = &caps[0];
let target_length = full_target_str.chars().count() - 2; // subtract the two surrounding '%' characters
let target_str = full_target_str
.chars()
.skip(1)
.take(target_length)
.collect::<String>();
let array_str = if let Some(_array_str) = configs::EVENTKEY_ALIAS.get_event_key(&target_str)
{
_array_str.to_string()
} else {
format!("Event.EventData.{}", target_str)
};
let split: Vec<&str> = array_str.split('.').collect();
let mut tmp_event_record: &Value = event_record;
for s in &split {
if let Some(record) = tmp_event_record.get(s) {
tmp_event_record = record;
}
}
let suffix_match = SUFFIXREGEX.captures(&target_str);
let suffix: i64 = match suffix_match {
Some(cap) => cap.get(1).map_or(-1, |a| a.as_str().parse().unwrap_or(-1)),
None => -1,
};
if suffix >= 1 {
tmp_event_record = tmp_event_record
.get("Data")
.unwrap()
.get((suffix - 1) as usize)
.unwrap_or(tmp_event_record);
}
let hash_value = get_serde_number_to_string(tmp_event_record);
if hash_value.is_some() {
if let Some(hash_value) = hash_value {
// Unicode whitespace characters are hard to read when written verbatim to CSV, so convert them to plain spaces; leading and trailing whitespace is simply removed.
let hash_value: Vec<&str> = hash_value.split_whitespace().collect();
let hash_value = hash_value.join(" ");
hash_map.insert(full_target_str.to_string(), hash_value);
}
} else {
hash_map.insert(full_target_str.to_string(), "n/a".to_string());
}
}
for (k, v) in &hash_map {
return_message = return_message.replace(k, v);
}
return_message
}
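One subtlety in `parse_message` is the optional numeric suffix: an alias such as `%Data[2]%` selects the second element of the `Data` array, so the 1-based suffix must be converted to a 0-based index. The suffix extraction alone can be sketched with plain string parsing, standing in for `SUFFIXREGEX` (the function name is hypothetical; no regex crate needed):

```rust
// Extract the 1-based suffix from an alias body such as "Data[2]",
// returning -1 when no valid numeric suffix is present (mirroring SUFFIXREGEX).
fn alias_suffix(target: &str) -> i64 {
    match (target.find('['), target.rfind(']')) {
        (Some(open), Some(close)) if open < close => {
            target[open + 1..close].parse::<i64>().unwrap_or(-1)
        }
        _ => -1,
    }
}

fn main() {
    assert_eq!(alias_suffix("Data[2]"), 2); // selects Data array index 1
    assert_eq!(alias_suffix("CommandLine"), -1); // no suffix
    assert_eq!(alias_suffix("Data[x]"), -1); // non-numeric suffix
    let suffix = alias_suffix("Data[2]");
    if suffix >= 1 {
        println!("0-based index: {}", suffix - 1);
    }
}
```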
/// Returns the messages recorded for the given time
pub fn get(time: DateTime<Utc>) -> Vec<DetectInfo> {
match MESSAGES.get(&time) {
Some(v) => v.to_vec(),
None => Vec::new(),
}
}
pub fn get_event_time(event_record: &Value) -> Option<DateTime<Utc>> {
let system_time = &event_record["Event"]["System"]["TimeCreated_attributes"]["SystemTime"];
return utils::str_time_to_datetime(system_time.as_str().unwrap_or(""));
}
/// Reads the default values for details from a file
pub fn get_default_details(filepath: &str) -> HashMap<String, String> {
let read_result = utils::read_csv(filepath);
match read_result {
Err(_e) => {
AlertMessage::alert(&_e).ok();
HashMap::new()
}
Ok(lines) => {
let mut ret: HashMap<String, String> = HashMap::new();
lines
.into_iter()
.try_for_each(|line| -> Result<(), String> {
let provider = match line.get(0) {
Some(_provider) => _provider.trim(),
_ => {
return Result::Err(
"Failed to read provider in default_details.txt.".to_string(),
)
}
};
let eid = match line.get(1) {
Some(eid_str) => match eid_str.trim().parse::<i64>() {
Ok(_eid) => _eid,
_ => {
return Result::Err(
"Parse Error EventID in default_details.txt.".to_string(),
)
}
},
_ => {
return Result::Err(
"Failed to read EventID in default_details.txt.".to_string(),
)
}
};
let details = match line.get(2) {
Some(detail) => detail.trim(),
_ => {
return Result::Err(
"Failed to read details in default_details.txt.".to_string(),
)
}
};
ret.insert(format!("{}_{}", provider, eid), details.to_string());
Ok(())
})
.ok();
ret
}
}
}
impl AlertMessage {
/// Checks that the target directory exists, writes an initial header line with the user input, then flushes the stacked error messages to the file
pub fn create_error_log(path_str: String) {
if *QUIET_ERRORS_FLAG {
return;
}
let path = Path::new(&path_str);
if !path.parent().unwrap().exists() {
create_dir(path.parent().unwrap()).ok();
}
let mut error_log_writer = BufWriter::new(File::create(path).unwrap());
error_log_writer
.write_all(
format!(
"user input: {:?}\n",
format_args!("{}", env::args().collect::<Vec<String>>().join(" "))
)
.as_bytes(),
)
.ok();
let error_logs = ERROR_LOG_STACK.lock().unwrap();
error_logs.iter().for_each(|error_log| {
writeln!(error_log_writer, "{}", error_log).ok();
});
println!(
"Errors were generated. Please check {} for details.",
*ERROR_LOG_PATH
);
println!();
}
/// Prints an ERROR message
pub fn alert(contents: &str) -> io::Result<()> {
write_color_buffer(
&BufferWriter::stderr(ColorChoice::Always),
None,
&format!("[ERROR] {}", contents),
true,
)
}
/// Prints a WARN message
pub fn warn(contents: &str) -> io::Result<()> {
write_color_buffer(
&BufferWriter::stderr(ColorChoice::Always),
None,
&format!("[WARN] {}", contents),
true,
)
}
}
#[cfg(test)]
mod tests {
use crate::detections::message::{get, insert_message, AlertMessage, DetectInfo};
use crate::detections::message::{parse_message, MESSAGES};
use chrono::Utc;
use hashbrown::HashMap;
use rand::Rng;
use serde_json::Value;
use std::thread;
use std::time::Duration;
use super::{create_output_filter_config, get_default_details};
#[test]
fn test_error_message() {
let input = "TEST!";
AlertMessage::alert(input).expect("[ERROR] TEST!");
}
#[test]
fn test_warn_message() {
let input = "TESTWarn!";
AlertMessage::warn(input).expect("[WARN] TESTWarn!");
}
#[test]
/// Verifies that the message is parsed from the target record using keys specified in output (already defined in eventkey_alias.txt)
fn test_parse_message() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest1"
},
"System": {
"Computer": "testcomputer1",
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest1 computername:testcomputer1";
assert_eq!(
parse_message(
&event_record,
&"commandline:%CommandLine% computername:%ComputerName%".to_owned()
),
expected,
);
}
#[test]
fn test_parse_message_auto_search() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"NoAlias": "no_alias"
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "alias:no_alias";
assert_eq!(
parse_message(&event_record, &"alias:%NoAlias%".to_owned()),
expected,
);
}
#[test]
/// Output test for when a key specified in output is not defined in eventkey_alias.txt
fn test_parse_message_not_exist_key_in_output() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest2"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "NoExistAlias:n/a";
assert_eq!(
parse_message(&event_record, &"NoExistAlias:%NoAliasNoHit%".to_owned()),
expected,
);
}
#[test]
/// Output test for when a key defined in eventkey_alias.txt has no corresponding value in the target record
fn test_parse_message_not_exist_value_in_record() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 computername:n/a";
assert_eq!(
parse_message(
&event_record,
&"commandline:%CommandLine% computername:%ComputerName%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced without an index suffix
fn test_parse_message_multiple_no_suffix_in_record() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:[\"data1\",\"data2\",\"data3\"]";
assert_eq!(
parse_message(
&event_record,
&"commandline:%CommandLine% data:%Data%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced with an index suffix (%Data[2]%)
fn test_parse_message_multiple_with_suffix_in_record() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:data2";
assert_eq!(
parse_message(
&event_record,
&"commandline:%CommandLine% data:%Data[2]%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced with an out-of-range index suffix
fn test_parse_message_multiple_no_exist_in_record() {
MESSAGES.clear();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:n/a";
assert_eq!(
parse_message(
&event_record,
&"commandline:%CommandLine% data:%Data[0]%".to_owned()
),
expected,
);
}
#[test]
/// Test loading the output filter config from mitre_tactics.txt
fn test_load_mitre_tactics_log() {
let actual = create_output_filter_config("test_files/config/mitre_tactics.txt");
let expected: HashMap<String, String> = HashMap::from([
("attack.impact".to_string(), "Impact".to_string()),
("xxx".to_string(), "yyy".to_string()),
]);
_check_hashmap_element(&expected, actual);
}
#[test]
/// Test loading channel_abbreviations.txt
fn test_load_abbreviations() {
let actual = create_output_filter_config("test_files/config/channel_abbreviations.txt");
let actual2 = create_output_filter_config("test_files/config/channel_abbreviations.txt");
let expected: HashMap<String, String> = HashMap::from([
("Security".to_string(), "Sec".to_string()),
("xxx".to_string(), "yyy".to_string()),
]);
_check_hashmap_element(&expected, actual);
_check_hashmap_element(&expected, actual2);
}
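The loader these tests exercise reads a two-column CSV into a `HashMap`, silently skipping malformed rows. A minimal std-only sketch of that behavior, operating on an in-memory string instead of a file (the function name here is illustrative, not the real API):

```rust
use std::collections::HashMap;

// Parse "full_name,replacement" lines into a lookup map.
// Rows that do not have exactly two columns are skipped, as in the real loader.
fn parse_filter_config(contents: &str) -> HashMap<String, String> {
    let mut ret = HashMap::new();
    for line in contents.lines() {
        let cols: Vec<&str> = line.split(',').collect();
        if cols.len() != 2 {
            continue;
        }
        ret.insert(cols[0].trim().to_string(), cols[1].trim().to_string());
    }
    ret
}

fn main() {
    let config = "Security,Sec\nmalformed line\nattack.impact,Impact";
    let map = parse_filter_config(config);
    assert_eq!(map.len(), 2);
    assert_eq!(map.get("Security").map(String::as_str), Some("Sec"));
    println!("ok");
}
```

Skipping rather than erroring on bad rows matches the real loader's `if line.len() != 2 { return; }` guard.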
#[test]
fn _get_default_details() {
let expected: HashMap<String, String> = HashMap::from([
("Microsoft-Windows-PowerShell_4104".to_string(),"%ScriptBlockText%".to_string()),("Microsoft-Windows-Security-Auditing_4624".to_string(), "User: %TargetUserName% | Comp: %WorkstationName% | IP Addr: %IpAddress% | LID: %TargetLogonId% | Process: %ProcessName%".to_string()),
("Microsoft-Windows-Sysmon_1".to_string(), "Cmd: %CommandLine% | Process: %Image% | User: %User% | Parent Cmd: %ParentCommandLine% | LID: %LogonId% | PID: %ProcessId% | PGUID: %ProcessGuid%".to_string()),
("Service Control Manager_7031".to_string(), "Svc: %param1% | Crash Count: %param2% | Action: %param5%".to_string()),
]);
let actual = get_default_details("test_files/config/default_details.txt");
_check_hashmap_element(&expected, actual);
}
/// Check that two HashMaps have the same length and values
fn _check_hashmap_element(expected: &HashMap<String, String>, actual: HashMap<String, String>) {
assert_eq!(expected.len(), actual.len());
for (k, v) in expected.iter() {
assert!(actual.get(k).unwrap_or(&String::default()) == v);
}
}
#[ignore]
#[test]
fn test_insert_message_race_condition() {
MESSAGES.clear();
// Setup test detect_info before starting threads.
let mut sample_detects = vec![];
let mut rng = rand::thread_rng();
let sample_event_time = Utc::now();
for i in 1..2001 {
let detect_info = DetectInfo {
rulepath: "".to_string(),
ruletitle: "".to_string(),
level: "".to_string(),
computername: "".to_string(),
eventid: i.to_string(),
detail: "".to_string(),
record_information: None,
ext_field: Default::default(),
};
sample_detects.push((sample_event_time, detect_info, rng.gen_range(0..10)));
}
// Starting threads and randomly insert_message in parallel.
let mut handles = vec![];
for (event_time, detect_info, random_num) in sample_detects {
let handle = thread::spawn(move || {
thread::sleep(Duration::from_micros(random_num));
insert_message(detect_info, event_time);
});
handles.push(handle);
}
// Wait for all threads execution completion.
for handle in handles {
handle.join().unwrap();
}
// Expect all sample_detects to be included; before the race-condition fix, len() varied between runs.
assert_eq!(get(sample_event_time).len(), 2000)
}
}
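The ignored race-condition test above pins down the behavior fixed in #639/#660: concurrent `insert_message` calls must all land in the shared map. A std-only sketch of the lock-the-whole-map pattern (simplified to a `u64` key instead of `DateTime<Utc>`; names are illustrative):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `n` threads that all insert under the same key and return how many
// entries survive. Because every writer locks the whole map, none are lost.
fn run_parallel_inserts(n: usize) -> usize {
    let map: Arc<Mutex<BTreeMap<u64, Vec<String>>>> = Arc::new(Mutex::new(BTreeMap::new()));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let map = Arc::clone(&map);
            thread::spawn(move || {
                let mut guard = map.lock().unwrap();
                guard.entry(42).or_insert_with(Vec::new).push(format!("event {}", i));
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let guard = map.lock().unwrap();
    guard.get(&42).map_or(0, |v| v.len())
}

fn main() {
    assert_eq!(run_parallel_inserts(200), 200);
    println!("ok");
}
```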


@@ -1,6 +1,6 @@
pub mod configs;
pub mod detection;
pub mod message;
pub mod pivot;
pub mod print;
pub mod rule;
pub mod utils;


@@ -1,5 +1,4 @@
use hashbrown::HashMap;
use hashbrown::HashSet;
use hashbrown::{HashMap, HashSet};
use lazy_static::lazy_static;
use serde_json::Value;
use std::sync::RwLock;


@@ -1,760 +0,0 @@
extern crate lazy_static;
use crate::detections::configs;
use crate::detections::utils;
use crate::detections::utils::get_serde_number_to_string;
use crate::detections::utils::write_color_buffer;
use chrono::{DateTime, Local, TimeZone, Utc};
use hashbrown::HashMap;
use lazy_static::lazy_static;
use regex::Regex;
use serde_json::Value;
use std::collections::BTreeMap;
use std::env;
use std::fs::create_dir;
use std::fs::File;
use std::io::BufWriter;
use std::io::{self, Write};
use std::path::Path;
use std::sync::Mutex;
use termcolor::{BufferWriter, ColorChoice};
#[derive(Debug)]
pub struct Message {
map: BTreeMap<DateTime<Utc>, Vec<DetectInfo>>,
}
#[derive(Debug, Clone)]
pub struct DetectInfo {
pub filepath: String,
pub rulepath: String,
pub level: String,
pub computername: String,
pub eventid: String,
pub channel: String,
pub alert: String,
pub detail: String,
pub tag_info: String,
pub record_information: Option<String>,
pub record_id: Option<String>,
}
pub struct AlertMessage {}
lazy_static! {
pub static ref MESSAGES: Mutex<Message> = Mutex::new(Message::new());
pub static ref ALIASREGEX: Regex = Regex::new(r"%[a-zA-Z0-9-_\[\]]+%").unwrap();
pub static ref SUFFIXREGEX: Regex = Regex::new(r"\[([0-9]+)\]").unwrap();
pub static ref ERROR_LOG_PATH: String = format!(
"./logs/errorlog-{}.log",
Local::now().format("%Y%m%d_%H%M%S")
);
pub static ref QUIET_ERRORS_FLAG: bool = configs::CONFIG.read().unwrap().args.quiet_errors;
pub static ref ERROR_LOG_STACK: Mutex<Vec<String>> = Mutex::new(Vec::new());
pub static ref STATISTICS_FLAG: bool = configs::CONFIG.read().unwrap().args.statistics;
pub static ref LOGONSUMMARY_FLAG: bool = configs::CONFIG.read().unwrap().args.logon_summary;
pub static ref TAGS_CONFIG: HashMap<String, String> = Message::create_output_filter_config(
"config/output_tag.txt",
true,
configs::CONFIG.read().unwrap().args.all_tags
);
pub static ref CH_CONFIG: HashMap<String, String> = Message::create_output_filter_config(
"config/channel_abbreviations.txt",
false,
configs::CONFIG.read().unwrap().args.all_tags
);
pub static ref PIVOT_KEYWORD_LIST_FLAG: bool =
configs::CONFIG.read().unwrap().args.pivot_keywords_list;
pub static ref IS_HIDE_RECORD_ID: bool = configs::CONFIG.read().unwrap().args.hide_record_id;
pub static ref DEFAULT_DETAILS: HashMap<String, String> =
Message::get_default_details(&format!(
"{}/default_details.txt",
configs::CONFIG
.read()
.unwrap()
.args
.config
.as_path()
.display()
));
}
impl Default for Message {
fn default() -> Self {
Self::new()
}
}
impl Message {
pub fn new() -> Self {
let messages: BTreeMap<DateTime<Utc>, Vec<DetectInfo>> = BTreeMap::new();
Message { map: messages }
}
/// Builds a HashMap from a file mapping tag full names to the strings they are replaced with for display.
/// ex. attack.impact,Impact
pub fn create_output_filter_config(
path: &str,
read_tags: bool,
pass_flag: bool,
) -> HashMap<String, String> {
let mut ret: HashMap<String, String> = HashMap::new();
if read_tags && pass_flag {
return ret;
}
let read_result = utils::read_csv(path);
if read_result.is_err() {
AlertMessage::alert(read_result.as_ref().unwrap_err()).ok();
return HashMap::default();
}
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 2 {
return;
}
let empty = &"".to_string();
let tag_full_str = line.get(0).unwrap_or(empty).trim();
let tag_replace_str = line.get(1).unwrap_or(empty).trim();
ret.insert(tag_full_str.to_owned(), tag_replace_str.to_owned());
});
ret
}
/// Sets a message. The target time is passed as a DateTime instead of being read from the record, to support aggregation conditions
pub fn insert_message(&mut self, detect_info: DetectInfo, event_time: DateTime<Utc>) {
if let Some(v) = self.map.get_mut(&event_time) {
v.push(detect_info);
} else {
let m = vec![detect_info; 1];
self.map.insert(event_time, m);
}
}
/// Sets a message
pub fn insert(&mut self, event_record: &Value, output: String, mut detect_info: DetectInfo) {
detect_info.detail = self.parse_message(event_record, output);
let default_time = Utc.ymd(1970, 1, 1).and_hms(0, 0, 0);
let time = Message::get_event_time(event_record).unwrap_or(default_time);
self.insert_message(detect_info, time)
}
fn parse_message(&mut self, event_record: &Value, output: String) -> String {
let mut return_message: String = output;
let mut hash_map: HashMap<String, String> = HashMap::new();
for caps in ALIASREGEX.captures_iter(&return_message) {
let full_target_str = &caps[0];
let target_length = full_target_str.chars().count() - 2; // subtract 2 for the two surrounding percent signs
let target_str = full_target_str
.chars()
.skip(1)
.take(target_length)
.collect::<String>();
let array_str =
if let Some(_array_str) = configs::EVENTKEY_ALIAS.get_event_key(&target_str) {
_array_str.to_string()
} else {
"Event.EventData.".to_owned() + &target_str
};
let split: Vec<&str> = array_str.split('.').collect();
let mut tmp_event_record: &Value = event_record;
for s in &split {
if let Some(record) = tmp_event_record.get(s) {
tmp_event_record = record;
}
}
let suffix_match = SUFFIXREGEX.captures(&target_str);
let suffix: i64 = match suffix_match {
Some(cap) => cap.get(1).map_or(-1, |a| a.as_str().parse().unwrap_or(-1)),
None => -1,
};
if suffix >= 1 {
tmp_event_record = tmp_event_record
.get("Data")
.unwrap()
.get((suffix - 1) as usize)
.unwrap_or(tmp_event_record);
}
let hash_value = get_serde_number_to_string(tmp_event_record);
if hash_value.is_some() {
if let Some(hash_value) = hash_value {
// Unicode whitespace characters are hard to read if written to CSV as-is, so convert them to spaces. Leading and trailing whitespace is simply removed.
let hash_value: Vec<&str> = hash_value.split_whitespace().collect();
let hash_value = hash_value.join(" ");
hash_map.insert(full_target_str.to_string(), hash_value);
}
} else {
hash_map.insert(full_target_str.to_string(), "n/a".to_string());
}
}
for (k, v) in &hash_map {
return_message = return_message.replace(k, v);
}
return_message
}
/// Returns the messages for the given time
pub fn get(&self, time: DateTime<Utc>) -> Vec<DetectInfo> {
match self.map.get(&time) {
Some(v) => v.to_vec(),
None => Vec::new(),
}
}
/// Prints every message stored in Message
pub fn debug(&self) {
println!("{:?}", self.map);
}
/// Prints the final output
pub fn print(&self) {
let mut detect_count = 0;
for (key, detect_infos) in self.map.iter() {
for detect_info in detect_infos.iter() {
println!("{} <{}> {}", key, detect_info.alert, detect_info.detail);
}
detect_count += detect_infos.len();
}
println!();
println!("Total events:{:?}", detect_count);
}
pub fn iter(&self) -> &BTreeMap<DateTime<Utc>, Vec<DetectInfo>> {
&self.map
}
pub fn get_event_time(event_record: &Value) -> Option<DateTime<Utc>> {
let system_time = &event_record["Event"]["System"]["TimeCreated_attributes"]["SystemTime"];
return utils::str_time_to_datetime(system_time.as_str().unwrap_or(""));
}
/// Clears the internal map. Added to keep tests idempotent.
pub fn clear(&mut self) {
self.map.clear();
}
/// Reads the default values for details from a file
pub fn get_default_details(filepath: &str) -> HashMap<String, String> {
let read_result = utils::read_csv(filepath);
match read_result {
Err(_e) => {
AlertMessage::alert(&_e).ok();
HashMap::new()
}
Ok(lines) => {
let mut ret: HashMap<String, String> = HashMap::new();
lines
.into_iter()
.try_for_each(|line| -> Result<(), String> {
let provider = match line.get(0) {
Some(_provider) => _provider.trim(),
_ => {
return Result::Err(
"Failed to read provider in default_details.txt.".to_string(),
)
}
};
let eid = match line.get(1) {
Some(eid_str) => match eid_str.trim().parse::<i64>() {
Ok(_eid) => _eid,
_ => {
return Result::Err(
"Parse Error EventID in default_details.txt.".to_string(),
)
}
},
_ => {
return Result::Err(
"Failed to read EventID in default_details.txt.".to_string(),
)
}
};
let details = match line.get(2) {
Some(detail) => detail.trim(),
_ => {
return Result::Err(
"Failed to read details in default_details.txt.".to_string(),
)
}
};
ret.insert(format!("{}_{}", provider, eid), details.to_string());
Ok(())
})
.ok();
ret
}
}
}
}
impl AlertMessage {
/// After confirming that the target directory exists, writes the command-line header and the stacked error messages to the error log file
pub fn create_error_log(path_str: String) {
if *QUIET_ERRORS_FLAG {
return;
}
let path = Path::new(&path_str);
if !path.parent().unwrap().exists() {
create_dir(path.parent().unwrap()).ok();
}
let mut error_log_writer = BufWriter::new(File::create(path).unwrap());
error_log_writer
.write_all(
format!(
"user input: {:?}\n",
format_args!("{}", env::args().collect::<Vec<String>>().join(" "))
)
.as_bytes(),
)
.ok();
let error_logs = ERROR_LOG_STACK.lock().unwrap();
error_logs.iter().for_each(|error_log| {
writeln!(error_log_writer, "{}", error_log).ok();
});
println!(
"Errors were generated. Please check {} for details.",
*ERROR_LOG_PATH
);
println!();
}
/// Prints an ERROR message
pub fn alert(contents: &str) -> io::Result<()> {
write_color_buffer(
BufferWriter::stderr(ColorChoice::Always),
None,
&format!("[ERROR] {}", contents),
)
}
/// Prints a WARN message
pub fn warn(contents: &str) -> io::Result<()> {
write_color_buffer(
BufferWriter::stderr(ColorChoice::Always),
None,
&format!("[WARN] {}", contents),
)
}
}
#[cfg(test)]
mod tests {
use crate::detections::print::DetectInfo;
use crate::detections::print::{AlertMessage, Message};
use hashbrown::HashMap;
use serde_json::Value;
#[test]
fn test_create_and_append_message() {
let mut message = Message::new();
let json_str_1 = r##"
{
"Event": {
"EventData": {
"CommandLine": "hoge"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record_1: Value = serde_json::from_str(json_str_1).unwrap();
message.insert(
&event_record_1,
"CommandLine1: %CommandLine%".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule".to_string(),
level: "high".to_string(),
computername: "testcomputer1".to_string(),
eventid: "1".to_string(),
channel: String::default(),
alert: "test1".to_string(),
detail: String::default(),
tag_info: "txxx.001".to_string(),
record_information: Option::Some("record_information1".to_string()),
record_id: Option::Some("11111".to_string()),
},
);
let json_str_2 = r##"
{
"Event": {
"EventData": {
"CommandLine": "hoge"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record_2: Value = serde_json::from_str(json_str_2).unwrap();
message.insert(
&event_record_2,
"CommandLine2: %CommandLine%".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule2".to_string(),
level: "high".to_string(),
computername: "testcomputer2".to_string(),
eventid: "2".to_string(),
channel: String::default(),
alert: "test2".to_string(),
detail: String::default(),
tag_info: "txxx.002".to_string(),
record_information: Option::Some("record_information2".to_string()),
record_id: Option::Some("22222".to_string()),
},
);
let json_str_3 = r##"
{
"Event": {
"EventData": {
"CommandLine": "hoge"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "2000-01-21T09:06:01Z"
}
}
}
}
"##;
let event_record_3: Value = serde_json::from_str(json_str_3).unwrap();
message.insert(
&event_record_3,
"CommandLine3: %CommandLine%".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule3".to_string(),
level: "high".to_string(),
computername: "testcomputer3".to_string(),
eventid: "3".to_string(),
channel: String::default(),
alert: "test3".to_string(),
detail: String::default(),
tag_info: "txxx.003".to_string(),
record_information: Option::Some("record_information3".to_string()),
record_id: Option::Some("33333".to_string()),
},
);
let json_str_4 = r##"
{
"Event": {
"EventData": {
"CommandLine": "hoge"
}
}
}
"##;
let event_record_4: Value = serde_json::from_str(json_str_4).unwrap();
message.insert(
&event_record_4,
"CommandLine4: %CommandLine%".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule4".to_string(),
level: "medium".to_string(),
computername: "testcomputer4".to_string(),
eventid: "4".to_string(),
channel: String::default(),
alert: "test4".to_string(),
detail: String::default(),
tag_info: "txxx.004".to_string(),
record_information: Option::Some("record_information4".to_string()),
record_id: Option::None,
},
);
let display = format!("{}", format_args!("{:?}", message));
println!("display::::{}", display);
let expect = "Message { map: {1970-01-01T00:00:00Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule4\", level: \"medium\", computername: \"testcomputer4\", eventid: \"4\", channel: \"\", alert: \"test4\", detail: \"CommandLine4: hoge\", tag_info: \"txxx.004\", record_information: Some(\"record_information4\"), record_id: None }], 1996-02-27T01:05:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule\", level: \"high\", computername: \"testcomputer1\", eventid: \"1\", channel: \"\", alert: \"test1\", detail: \"CommandLine1: hoge\", tag_info: \"txxx.001\", record_information: Some(\"record_information1\"), record_id: Some(\"11111\") }, DetectInfo { filepath: \"a\", rulepath: \"test_rule2\", level: \"high\", computername: \"testcomputer2\", eventid: \"2\", channel: \"\", alert: \"test2\", detail: \"CommandLine2: hoge\", tag_info: \"txxx.002\", record_information: Some(\"record_information2\"), record_id: Some(\"22222\") }], 2000-01-21T09:06:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule3\", level: \"high\", computername: \"testcomputer3\", eventid: \"3\", channel: \"\", alert: \"test3\", detail: \"CommandLine3: hoge\", tag_info: \"txxx.003\", record_information: Some(\"record_information3\"), record_id: Some(\"33333\") }]} }";
assert_eq!(display, expect);
}
#[test]
fn test_error_message() {
let input = "TEST!";
AlertMessage::alert(input).expect("[ERROR] TEST!");
}
#[test]
fn test_warn_message() {
let input = "TESTWarn!";
AlertMessage::warn(input).expect("[WARN] TESTWarn!");
}
#[test]
/// Verifies that the message is parsed with values from the target record, looked up via the keys specified in output (already defined in eventkey_alias.txt)
fn test_parse_message() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest1"
},
"System": {
"Computer": "testcomputer1",
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest1 computername:testcomputer1";
assert_eq!(
message.parse_message(
&event_record,
"commandline:%CommandLine% computername:%ComputerName%".to_owned()
),
expected,
);
}
#[test]
fn test_parse_message_auto_search() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"NoAlias": "no_alias"
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "alias:no_alias";
assert_eq!(
message.parse_message(&event_record, "alias:%NoAlias%".to_owned()),
expected,
);
}
#[test]
/// Output test for when a key specified in output is not defined in eventkey_alias.txt
fn test_parse_message_not_exist_key_in_output() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest2"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "NoExistAlias:n/a";
assert_eq!(
message.parse_message(&event_record, "NoExistAlias:%NoAliasNoHit%".to_owned()),
expected,
);
}
#[test]
/// Output test for when a key defined in eventkey_alias.txt has no corresponding value in the target record
fn test_parse_message_not_exist_value_in_record() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3"
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 computername:n/a";
assert_eq!(
message.parse_message(
&event_record,
"commandline:%CommandLine% computername:%ComputerName%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced without an index suffix
fn test_parse_message_multiple_no_suffix_in_record() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:[\"data1\",\"data2\",\"data3\"]";
assert_eq!(
message.parse_message(
&event_record,
"commandline:%CommandLine% data:%Data%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced with an index suffix (%Data[2]%)
fn test_parse_message_multiple_with_suffix_in_record() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:data2";
assert_eq!(
message.parse_message(
&event_record,
"commandline:%CommandLine% data:%Data[2]%".to_owned()
),
expected,
);
}
#[test]
/// Output test for a multi-valued Data field referenced with an out-of-range index suffix
fn test_parse_message_multiple_no_exist_in_record() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"CommandLine": "parsetest3",
"Data": [
"data1",
"data2",
"data3"
]
},
"System": {
"TimeCreated_attributes": {
"SystemTime": "1996-02-27T01:05:01Z"
}
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "commandline:parsetest3 data:n/a";
assert_eq!(
message.parse_message(
&event_record,
"commandline:%CommandLine% data:%Data[0]%".to_owned()
),
expected,
);
}
#[test]
/// Test loading the output filter config from output_tag.txt
fn test_load_output_tag() {
let actual =
Message::create_output_filter_config("test_files/config/output_tag.txt", true, false);
let expected: HashMap<String, String> = HashMap::from([
("attack.impact".to_string(), "Impact".to_string()),
("xxx".to_string(), "yyy".to_string()),
]);
_check_hashmap_element(&expected, actual);
}
#[test]
/// Test that loading output_tag.txt is skipped when the pass flag is set
fn test_no_load_output_tag() {
let actual =
Message::create_output_filter_config("test_files/config/output_tag.txt", true, true);
let expected: HashMap<String, String> = HashMap::new();
_check_hashmap_element(&expected, actual);
}
#[test]
/// Test loading channel_abbreviations.txt
fn test_load_abbreviations() {
let actual = Message::create_output_filter_config(
"test_files/config/channel_abbreviations.txt",
false,
true,
);
let actual2 = Message::create_output_filter_config(
"test_files/config/channel_abbreviations.txt",
false,
false,
);
let expected: HashMap<String, String> = HashMap::from([
("Security".to_string(), "Sec".to_string()),
("xxx".to_string(), "yyy".to_string()),
]);
_check_hashmap_element(&expected, actual);
_check_hashmap_element(&expected, actual2);
}
#[test]
fn _get_default_details() {
let expected: HashMap<String, String> = HashMap::from([
("Microsoft-Windows-PowerShell_4104".to_string(),"%ScriptBlockText%".to_string()),("Microsoft-Windows-Security-Auditing_4624".to_string(), "User: %TargetUserName% | Comp: %WorkstationName% | IP Addr: %IpAddress% | LID: %TargetLogonId% | Process: %ProcessName%".to_string()),
("Microsoft-Windows-Sysmon_1".to_string(), "Cmd: %CommandLine% | Process: %Image% | User: %User% | Parent Cmd: %ParentCommandLine% | LID: %LogonId% | PID: %ProcessId% | PGUID: %ProcessGuid%".to_string()),
("Service Control Manager_7031".to_string(), "Svc: %param1% | Crash Count: %param2% | Action: %param5%".to_string()),
]);
let actual = Message::get_default_details("test_files/config/default_details.txt");
_check_hashmap_element(&expected, actual);
}
/// Check that two HashMaps have the same length and values
fn _check_hashmap_element(expected: &HashMap<String, String>, actual: HashMap<String, String>) {
assert_eq!(expected.len(), actual.len());
for (k, v) in expected.iter() {
assert!(actual.get(k).unwrap_or(&String::default()) == v);
}
}
}
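The `parse_message` tests above all exercise the same contract: `%Alias%` placeholders are replaced with record values, falling back to `n/a` when the field is missing. A simplified std-only sketch of that substitution (the real code resolves aliases through eventkey_alias.txt and walks the JSON record; here the record is flattened to a `HashMap` for illustration):

```rust
use std::collections::HashMap;

// Replace %Alias% placeholders with values from `fields`, or "n/a" when absent.
// Splitting on '%' leaves alias names at the odd indices (assumes balanced '%').
fn parse_message(template: &str, fields: &HashMap<String, String>) -> String {
    let mut out = String::new();
    for (i, part) in template.split('%').enumerate() {
        if i % 2 == 0 {
            out.push_str(part);
        } else {
            out.push_str(fields.get(part).map(String::as_str).unwrap_or("n/a"));
        }
    }
    out
}

fn main() {
    let mut fields = HashMap::new();
    fields.insert("CommandLine".to_string(), "parsetest1".to_string());
    let msg = parse_message("commandline:%CommandLine% computername:%ComputerName%", &fields);
    assert_eq!(msg, "commandline:parsetest1 computername:n/a");
    println!("ok");
}
```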


@@ -1,9 +1,9 @@
use crate::detections::configs;
use crate::detections::print::AlertMessage;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::QUIET_ERRORS_FLAG;
use crate::detections::message;
use crate::detections::message::AlertMessage;
use crate::detections::message::ERROR_LOG_STACK;
use crate::detections::message::QUIET_ERRORS_FLAG;
use crate::detections::rule::AggResult;
use crate::detections::rule::Message;
use crate::detections::rule::RuleNode;
use chrono::{DateTime, TimeZone, Utc};
use hashbrown::HashMap;
@@ -33,7 +33,7 @@ pub fn count(rule: &mut RuleNode, record: &Value) {
rule,
key,
field_value,
Message::get_event_time(record).unwrap_or(default_time),
message::get_event_time(record).unwrap_or(default_time),
);
}


@@ -218,7 +218,7 @@ impl DefaultMatcher {
});
}
/// Converts a field name in a YEA rule file, and the pipes specified after it, into a regex string.
/// Converts a field name in a Hayabusa rule file, and the pipes specified after it, into a regex string.
/// Converting a wildcard string to a regex is also implemented in this method: pass the wildcard string as pattern and specify PipeElement::Wildcard in pipes.
fn from_pattern_to_regex_str(pattern: String, pipes: &[PipeElement]) -> String {
// Process the pattern with the pipes.
@@ -346,6 +346,17 @@ impl LeafMatcher for DefaultMatcher {
return false;
}
// When null is specified in the yaml
if self.re.is_none() {
// Treat it as a hit if the target field does not exist in the record
for v in self.key_list.iter() {
if recinfo.get_value(v).is_none() {
return true;
}
}
return false;
}
if event_value.is_none() {
return false;
}
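The hunk above implements the `null` keyword added in #643: when a rule value is null, the selection matches records where that field is absent. A std-only sketch of the check, with the record flattened to a map (field names are illustrative):

```rust
use std::collections::HashMap;

// A `null` rule value matches when any of the listed fields is absent from
// the record; if every listed field exists, it does not match.
fn matches_null(record: &HashMap<String, String>, key_list: &[&str]) -> bool {
    key_list.iter().any(|k| !record.contains_key(*k))
}

fn main() {
    let mut record = HashMap::new();
    record.insert("Channel".to_string(), "Security".to_string());
    // "Takoyaki" is absent, so `Takoyaki: null` matches.
    assert!(matches_null(&record, &["Takoyaki"]));
    // "Channel" exists, so `Channel: null` does not match.
    assert!(!matches_null(&record, &["Channel"]));
    println!("ok");
}
```

This mirrors the loop in the hunk: return true on the first missing key, otherwise fall through to false.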
@@ -353,7 +364,7 @@ impl LeafMatcher for DefaultMatcher {
let event_value_str = event_value.unwrap();
if self.key_list.is_empty() {
// In this case it is a plain grep search, so just check whether the regex matches
return self.re.as_ref().unwrap().is_match(event_value_str);
self.re.as_ref().unwrap().is_match(event_value_str)
} else {
// Normal searches take this path
self.is_regex_fullmatch(event_value_str)
@@ -523,8 +534,8 @@ mod tests {
- ホスト アプリケーション
ImagePath:
min_length: 1234321
regexes: ./rules/config/regex/detectlist_suspicous_services.txt
allowlist: ./rules/config/regex/allowlist_legitimate_services.txt
regexes: ./../../../rules/config/regex/detectlist_suspicous_services.txt
allowlist: ./../../../rules/config/regex/allowlist_legitimate_services.txt
falsepositives:
- unknown
level: medium
@@ -1111,7 +1122,7 @@ mod tests {
selection:
EventID: 4103
Channel:
- allowlist: ./rules/config/regex/allowlist_legitimate_services.txt
- allowlist: ./../../../rules/config/regex/allowlist_legitimate_services.txt
details: 'command=%CommandLine%'
"#;
@@ -1145,7 +1156,7 @@ mod tests {
selection:
EventID: 4103
Channel:
- allowlist: ./rules/config/regex/allowlist_legitimate_services.txt
- allowlist: ./../../../rules/config/regex/allowlist_legitimate_services.txt
details: 'command=%CommandLine%'
"#;
@@ -1179,7 +1190,7 @@ mod tests {
selection:
EventID: 4103
Channel:
- allowlist: ./rules/config/regex/allowlist_legitimate_services.txt
- allowlist: ./../../../rules/config/regex/allowlist_legitimate_services.txt
details: 'command=%CommandLine%'
"#;
@@ -1286,6 +1297,48 @@ mod tests {
}
}
#[test]
fn test_detect_startswith_case_insensitive() {
// Verify that startswith matching is case-insensitive
let rule_str = r#"
enabled: true
detection:
selection:
Channel: Security
EventID: 4732
TargetUserName|startswith: "ADMINISTRATORS"
details: 'user added to local Administrators UserName: %MemberName% SID: %MemberSid%'
"#;
let record_json_str = r#"
{
"Event": {
"System": {
"EventID": 4732,
"Channel": "Security"
},
"EventData": {
"TargetUserName": "TestAdministrators"
}
},
"Event_attributes": {
"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"
}
}"#;
let mut rule_node = parse_rule_from_str(rule_str);
match serde_json::from_str(record_json_str) {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert!(!rule_node.select(&recinfo));
}
Err(_rec) => {
panic!("Failed to parse json record.");
}
}
}
#[test]
fn test_detect_endswith1() {
// endswithが正しく検知できることを確認
@@ -1370,6 +1423,48 @@ mod tests {
}
}
#[test]
fn test_detect_endswith_case_insensitive() {
// Test that endswith matches case-insensitively
let rule_str = r#"
enabled: true
detection:
selection:
Channel: Security
EventID: 4732
TargetUserName|endswith: "ADministRATORS"
details: 'user added to local Administrators UserName: %MemberName% SID: %MemberSid%'
"#;
let record_json_str = r#"
{
"Event": {
"System": {
"EventID": 4732,
"Channel": "Security"
},
"EventData": {
"TargetUserName": "AdministratorsTest"
}
},
"Event_attributes": {
"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"
}
}"#;
let mut rule_node = parse_rule_from_str(rule_str);
match serde_json::from_str(record_json_str) {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert!(!rule_node.select(&recinfo));
}
Err(_rec) => {
panic!("Failed to parse json record.");
}
}
}
#[test]
fn test_detect_contains1() {
// containsが正しく検知できることを確認
@@ -1454,6 +1549,48 @@ mod tests {
}
}
#[test]
fn test_detect_contains_case_insensitive() {
// Verify that contains matching is case-insensitive
let rule_str = r#"
enabled: true
detection:
selection:
Channel: Security
EventID: 4732
TargetUserName|contains: "ADminIstraTOrS"
details: 'user added to local Administrators UserName: %MemberName% SID: %MemberSid%'
"#;
let record_json_str = r#"
{
"Event": {
"System": {
"EventID": 4732,
"Channel": "Security"
},
"EventData": {
"TargetUserName": "Testministrators"
}
},
"Event_attributes": {
"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"
}
}"#;
let mut rule_node = parse_rule_from_str(rule_str);
match serde_json::from_str(record_json_str) {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert!(!rule_node.select(&recinfo));
}
Err(_rec) => {
panic!("Failed to parse json record.");
}
}
}
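The three case-insensitivity tests above all rely on the same behavior: both the rule value and the record value are compared without regard to case, while the positional semantics of `startswith`, `endswith`, and `contains` still hold. A minimal std-only sketch of that matching logic (the function names here are illustrative, not Hayabusa's actual API):

```rust
/// Case-insensitive `startswith`, as the tests above expect.
fn starts_with_ignore_case(value: &str, prefix: &str) -> bool {
    value.to_lowercase().starts_with(&prefix.to_lowercase())
}

/// Case-insensitive `endswith`.
fn ends_with_ignore_case(value: &str, suffix: &str) -> bool {
    value.to_lowercase().ends_with(&suffix.to_lowercase())
}

/// Case-insensitive `contains`.
fn contains_ignore_case(value: &str, needle: &str) -> bool {
    value.to_lowercase().contains(&needle.to_lowercase())
}

fn main() {
    // "TestAdministrators" does not *start* with "ADMINISTRATORS", so no match.
    assert!(!starts_with_ignore_case("TestAdministrators", "ADMINISTRATORS"));
    // "AdministratorsTest" does not *end* with "ADministRATORS", so no match.
    assert!(!ends_with_ignore_case("AdministratorsTest", "ADministRATORS"));
    // "Testministrators" does not contain "ADminIstraTOrS", so no match.
    assert!(!contains_ignore_case("Testministrators", "ADminIstraTOrS"));
    // A case-mismatched but positionally correct value does match.
    assert!(starts_with_ignore_case("administrators-group", "ADMINISTRATORS"));
}
```

This is why each test above asserts `!rule_node.select(&recinfo)`: the record values differ in position, not just in case.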
#[test]
fn test_detect_wildcard_multibyte() {
// Verify wildcard matching with multi-byte characters
@@ -1858,4 +1995,65 @@ mod tests {
}
}
}
#[test]
fn test_eq_field_null() {
// Verify that a null value matches when the target field does not exist in the record
let rule_str = r#"
enabled: true
detection:
selection:
Channel:
value: Security
Takoyaki:
value: null
details: 'command=%CommandLine%'
"#;
let record_json_str = r#"
{
"Event": {"System": {"EventID": 4103, "Channel": "Security", "Computer": "Powershell" }},
"Event_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
}"#;
let mut rule_node = parse_rule_from_str(rule_str);
match serde_json::from_str(record_json_str) {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
panic!("Failed to parse json record.");
}
}
}
#[test]
fn test_eq_field_null_not_detect() {
// Verify that a null value does not match when the target field exists in the record
let rule_str = r#"
enabled: true
detection:
selection:
EventID: null
details: 'command=%CommandLine%'
"#;
let record_json_str = r#"{
"Event": {"System": {"EventID": 4103, "Channel": "Security", "Computer": "Powershell"}},
"Event_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
}"#;
let mut rule_node = parse_rule_from_str(rule_str);
match serde_json::from_str(record_json_str) {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert!(!rule_node.select(&recinfo));
}
Err(e) => {
panic!("Failed to parse json record.{:?}", e);
}
}
}
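The two `null` tests above pin down the semantics of the `null` keyword: it matches only when the referenced field is absent from the record. A simplified std-only sketch of that check (using a plain map in place of Hayabusa's `EvtxRecordInfo`):

```rust
use std::collections::HashMap;

/// A rule value of `null` matches iff the field is missing from the record.
/// (Illustrative helper, not Hayabusa's actual API.)
fn null_matches(record: &HashMap<&str, &str>, field: &str) -> bool {
    !record.contains_key(field)
}

fn main() {
    let mut record = HashMap::new();
    record.insert("Channel", "Security");
    record.insert("EventID", "4103");

    // `Takoyaki: null` matches because the field does not exist.
    assert!(null_matches(&record, "Takoyaki"));
    // `EventID: null` does not match because the field exists.
    assert!(!null_matches(&record, "EventID"));
}
```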
}

View File

@@ -1,5 +1,4 @@
extern crate regex;
use crate::detections::print::Message;
use chrono::{DateTime, Utc};

View File

@@ -3,6 +3,12 @@ extern crate csv;
extern crate regex;
use crate::detections::configs;
use crate::detections::configs::CURRENT_EXE_PATH;
use hashbrown::HashMap;
use std::path::Path;
use std::path::PathBuf;
use chrono::Local;
use termcolor::Color;
use tokio::runtime::Builder;
@@ -66,7 +72,15 @@ pub fn value_to_string(value: &Value) -> Option<String> {
}
pub fn read_txt(filename: &str) -> Result<Vec<String>, String> {
let f = File::open(filename);
let filepath = if filename.starts_with("./") {
check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), filename)
.to_str()
.unwrap()
.to_string()
} else {
filename.to_string()
};
let f = File::open(filepath);
if f.is_err() {
let errmsg = format!("Cannot open file. [file:{}]", filename);
return Result::Err(errmsg);
@@ -206,8 +220,8 @@ pub fn create_rec_info(data: Value, path: String, keys: &[String]) -> EvtxRecord
// To speed up this lookup, values are stored in the rec.key_2_value hashmap under keys like "Event.System.EventID".
// That way a value can be fetched with a single "Event.System.EventID" key lookup, which should be faster.
// Also, fetching values from a serde_json Value with value["Event"] is oddly slow, so this should help there too.
// Furthermore, serde_json internally uses the standard library HashMap, but hashbrown is said to be faster.
let mut key_2_values = hashbrown::HashMap::new();
// Furthermore, serde_json internally uses the standard library HashMap, but hashbrown is said to be faster. Since the standard library adopted hashbrown, serde_json has sped up as well.
let mut key_2_values = HashMap::new();
for key in keys {
let val = get_event_value(key, &data);
if val.is_none() {
@@ -224,11 +238,8 @@ pub fn create_rec_info(data: Value, path: String, keys: &[String]) -> EvtxRecord
// Build the EvtxRecordInfo
let data_str = data.to_string();
let rec_info = if configs::CONFIG.read().unwrap().args.full_data {
Option::Some(create_recordinfos(&data))
} else {
Option::None
};
let rec_info = Option::Some(create_recordinfos(&data));
EvtxRecordInfo {
evtx_filepath: path,
record: data,
@@ -242,16 +253,30 @@ pub fn create_rec_info(data: Value, path: String, keys: &[String]) -> EvtxRecord
* Function that sets the stdout color to the specified value and prints the output
*/
pub fn write_color_buffer(
wtr: BufferWriter,
wtr: &BufferWriter,
color: Option<Color>,
output_str: &str,
newline_flag: bool,
) -> io::Result<()> {
let mut buf = wtr.buffer();
buf.set_color(ColorSpec::new().set_fg(color)).ok();
writeln!(buf, "{}", output_str).ok();
if newline_flag {
writeln!(buf, "{}", output_str).ok();
} else {
write!(buf, "{}", output_str).ok();
}
wtr.print(&buf)
}
/// Returns None if the no-color option is specified; otherwise returns the given Color wrapped in Some
pub fn get_writable_color(color: Option<Color>) -> Option<Color> {
if configs::CONFIG.read().unwrap().args.no_color {
None
} else {
color
}
}
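`get_writable_color` above is a simple gate: when `--no-color` is set, every requested color collapses to `None`. The same pattern in isolation (a `bool` and a local `Color` enum stand in for the parsed config and termcolor's type, so this sketch is std-only):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Color {
    Green,
    Red,
}

/// Returns None when color output is disabled, otherwise the requested color.
fn get_writable_color(no_color: bool, color: Option<Color>) -> Option<Color> {
    if no_color {
        None
    } else {
        color
    }
}

fn main() {
    // With --no-color, every color request is suppressed.
    assert_eq!(get_writable_color(true, Some(Color::Green)), None);
    // Without it, the requested color passes through unchanged.
    assert_eq!(get_writable_color(false, Some(Color::Red)), Some(Color::Red));
}
```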
/**
* Builds the string written to the CSV record info column
*/
@@ -354,9 +379,72 @@ pub fn make_ascii_titlecase(s: &mut str) -> String {
}
}
/// Checks whether base_path/path exists; if it does not, returns a path referring to the current directory
pub fn check_setting_path(base_path: &Path, path: &str) -> PathBuf {
if base_path.join(path).exists() {
base_path.join(path)
} else {
Path::new(path).to_path_buf()
}
}
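`check_setting_path` above prefers `base_path/path` and falls back to the bare path when the joined path does not exist on disk, which is what lets Hayabusa resolve config files relative to the executable. A self-contained sketch of the same fallback (the fallback directory name in the example is hypothetical):

```rust
use std::path::{Path, PathBuf};

/// Returns base_path/path if it exists on disk, otherwise path as-is.
fn check_setting_path(base_path: &Path, path: &str) -> PathBuf {
    let joined = base_path.join(path);
    if joined.exists() {
        joined
    } else {
        Path::new(path).to_path_buf()
    }
}

fn main() {
    // A base directory that does not exist always falls back to the bare path,
    // which is then resolved relative to the current working directory.
    let missing = Path::new("no_such_dir_hopefully");
    assert_eq!(check_setting_path(missing, "rules"), PathBuf::from("rules"));
}
```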
/// Returns the time information adjusted to the configured timezone
pub fn format_time(time: &DateTime<Utc>, date_only: bool) -> String {
if configs::CONFIG.read().unwrap().args.utc {
format_rfc(time, date_only)
} else {
format_rfc(&time.with_timezone(&Local), date_only)
}
}
/// Returns an RFC-style time format string according to the options
fn format_rfc<Tz: TimeZone>(time: &DateTime<Tz>, date_only: bool) -> String
where
Tz::Offset: std::fmt::Display,
{
let time_args = &configs::CONFIG.read().unwrap().args;
if time_args.rfc_2822 {
if date_only {
time.format("%a, %e %b %Y").to_string()
} else {
time.format("%a, %e %b %Y %H:%M:%S %:z").to_string()
}
} else if time_args.rfc_3339 {
if date_only {
time.format("%Y-%m-%d").to_string()
} else {
time.format("%Y-%m-%d %H:%M:%S%.6f%:z").to_string()
}
} else if time_args.us_time {
if date_only {
time.format("%m-%d-%Y").to_string()
} else {
time.format("%m-%d-%Y %I:%M:%S%.3f %p %:z").to_string()
}
} else if time_args.us_military_time {
if date_only {
time.format("%m-%d-%Y").to_string()
} else {
time.format("%m-%d-%Y %H:%M:%S%.3f %:z").to_string()
}
} else if time_args.european_time {
if date_only {
time.format("%d-%m-%Y").to_string()
} else {
time.format("%d-%m-%Y %H:%M:%S%.3f %:z").to_string()
}
} else if date_only {
time.format("%Y-%m-%d").to_string()
} else {
time.format("%Y-%m-%d %H:%M:%S%.3f %:z").to_string()
}
}
#[cfg(test)]
mod tests {
use crate::detections::utils::{self, make_ascii_titlecase};
use std::path::Path;
use crate::detections::utils::{self, check_setting_path, make_ascii_titlecase};
use regex::Regex;
use serde_json::Value;
@@ -423,7 +511,7 @@ mod tests {
#[test]
fn test_check_regex() {
let regexes: Vec<Regex> =
utils::read_txt("./rules/config/regex/detectlist_suspicous_services.txt")
utils::read_txt("./../../../rules/config/regex/detectlist_suspicous_services.txt")
.unwrap()
.into_iter()
.map(|regex_str| Regex::new(&regex_str).unwrap())
@@ -439,7 +527,7 @@ mod tests {
fn test_check_allowlist() {
let commandline = "\"C:\\Program Files\\Google\\Update\\GoogleUpdate.exe\"";
let allowlist: Vec<Regex> =
utils::read_txt("./rules/config/regex/allowlist_legitimate_services.txt")
utils::read_txt("./../../../rules/config/regex/allowlist_legitimate_services.txt")
.unwrap()
.into_iter()
.map(|allow_str| Regex::new(&allow_str).unwrap())
@@ -518,4 +606,31 @@ mod tests {
);
assert_eq!(make_ascii_titlecase("β".to_string().as_mut()), "β");
}
#[test]
/// Tests that file existence is checked correctly from the given path
fn test_check_setting_path() {
let exist_path = Path::new("./test_files").to_path_buf();
let not_exist_path = Path::new("not_exist_path").to_path_buf();
assert_eq!(
check_setting_path(&not_exist_path, "rules")
.to_str()
.unwrap(),
"rules"
);
assert_eq!(
check_setting_path(&not_exist_path, "fake")
.to_str()
.unwrap(),
"fake"
);
assert_eq!(
check_setting_path(&exist_path, "rules").to_str().unwrap(),
exist_path.join("rules").to_str().unwrap()
);
assert_eq!(
check_setting_path(&exist_path, "fake").to_str().unwrap(),
"fake"
);
}
}

View File

@@ -1,7 +1,7 @@
use crate::detections::configs;
use crate::detections::print::AlertMessage;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::QUIET_ERRORS_FLAG;
use crate::detections::message::AlertMessage;
use crate::detections::message::ERROR_LOG_STACK;
use crate::detections::message::QUIET_ERRORS_FLAG;
use hashbrown::HashMap;
use regex::Regex;
use std::fs::File;
@@ -29,18 +29,16 @@ impl RuleExclude {
pub fn exclude_ids() -> RuleExclude {
let mut exclude_ids = RuleExclude::default();
if !configs::CONFIG.read().unwrap().args.enable_noisy_rules {
exclude_ids.insert_ids(&format!(
"{}/noisy_rules.txt",
configs::CONFIG
.read()
.unwrap()
.args
.config
.as_path()
.display()
));
};
exclude_ids.insert_ids(&format!(
"{}/noisy_rules.txt",
configs::CONFIG
.read()
.unwrap()
.args
.config
.as_path()
.display()
));
exclude_ids.insert_ids(&format!(
"{}/exclude_rules.txt",

View File

@@ -3,41 +3,35 @@ extern crate downcast_rs;
extern crate serde;
extern crate serde_derive;
#[cfg(target_os = "windows")]
extern crate static_vcruntime;
use bytesize::ByteSize;
use chrono::{DateTime, Datelike, Local, TimeZone};
use chrono::{DateTime, Datelike, Local};
use evtx::{EvtxParser, ParserSettings};
use git2::Repository;
use hashbrown::{HashMap, HashSet};
use hayabusa::detections::configs::CURRENT_EXE_PATH;
use hayabusa::detections::configs::{load_pivot_keywords, TargetEventTime, TARGET_EXTENSIONS};
use hayabusa::detections::detection::{self, EvtxRecordInfo};
use hayabusa::detections::pivot::PivotKeyword;
use hayabusa::detections::pivot::PIVOT_KEYWORD;
use hayabusa::detections::print::{
use hayabusa::detections::message::{
AlertMessage, ERROR_LOG_PATH, ERROR_LOG_STACK, LOGONSUMMARY_FLAG, PIVOT_KEYWORD_LIST_FLAG,
QUIET_ERRORS_FLAG, STATISTICS_FLAG,
};
use hayabusa::detections::pivot::PivotKeyword;
use hayabusa::detections::pivot::PIVOT_KEYWORD;
use hayabusa::detections::rule::{get_detection_keys, RuleNode};
use hayabusa::omikuji::Omikuji;
use hayabusa::options::level_tuning::LevelTuning;
use hayabusa::yaml::ParseYaml;
use hayabusa::options::profile::PROFILES;
use hayabusa::options::{level_tuning::LevelTuning, update_rules::UpdateRules};
use hayabusa::{afterfact::after_fact, detections::utils};
use hayabusa::{detections::configs, timeline::timelines::Timeline};
use hayabusa::{detections::utils::write_color_buffer, filter};
use hhmmss::Hhmmss;
use pbr::ProgressBar;
use serde_json::Value;
use std::cmp::Ordering;
use std::ffi::{OsStr, OsString};
use std::fmt::Display;
use std::fmt::Write as _;
use std::fs::create_dir;
use std::io::{BufWriter, Write};
use std::path::Path;
use std::sync::Arc;
use std::time::SystemTime;
use std::{
env,
fs::{self, File},
@@ -82,9 +76,18 @@ impl App {
fn exec(&mut self) {
if *PIVOT_KEYWORD_LIST_FLAG {
load_pivot_keywords("config/pivot_keywords.txt");
load_pivot_keywords(
utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"config/pivot_keywords.txt",
)
.to_str()
.unwrap(),
);
}
if PROFILES.is_none() {
return;
}
let analysis_start_time: DateTime<Local> = Local::now();
// Show usage when no arguments.
if std::env::args().len() == 1 {
@@ -113,13 +116,16 @@ impl App {
}
if configs::CONFIG.read().unwrap().args.update_rules {
match self.update_rules() {
match UpdateRules::update_rules(
configs::CONFIG.read().unwrap().args.rules.to_str().unwrap(),
) {
Ok(output) => {
if output != "You currently have the latest rules." {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
"Rules updated successfully.",
true,
)
.ok();
}
@@ -131,14 +137,25 @@ impl App {
println!();
return;
}
if !Path::new("./config").exists() {
// The base needs to switch to the path of the running exe, so when the default value is used, look in the same directory as the exe.
if !CURRENT_EXE_PATH.join("config").exists() && !Path::new("./config").exists() {
AlertMessage::alert(
"Hayabusa could not find the config directory.\nPlease run it from the Hayabusa root directory.\nExample: ./hayabusa-1.0.0-windows-x64.exe"
"Hayabusa could not find the config directory.\nPlease make sure that it is in the same directory as the hayabusa executable."
)
.ok();
return;
}
// Prevents an error when running from outside the current directory without the rules-config option specified.
if configs::CONFIG.read().unwrap().args.config == Path::new("./rules/config") {
configs::CONFIG.write().unwrap().args.config =
utils::check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), "./rules/config");
}
// Prevents an error when running from outside the current directory without the rules option specified.
if configs::CONFIG.read().unwrap().args.rules == Path::new("./rules") {
configs::CONFIG.write().unwrap().args.rules =
utils::check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), "./rules");
}
if let Some(csv_path) = &configs::CONFIG.read().unwrap().args.output {
let pivot_key_unions = PIVOT_KEYWORD.read().unwrap();
@@ -170,18 +187,20 @@ impl App {
if *STATISTICS_FLAG {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
"Generating Event ID Statistics",
true,
)
.ok();
println!();
}
if *LOGONSUMMARY_FLAG {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
"Generating Logons Summary",
true,
)
.ok();
println!();
@@ -194,6 +213,14 @@ impl App {
}
self.analysis_files(live_analysis_list.unwrap(), &time_filter);
} else if let Some(filepath) = &configs::CONFIG.read().unwrap().args.filepath {
if !filepath.exists() {
AlertMessage::alert(&format!(
" The file {} does not exist. Please specify a valid file path.",
filepath.as_os_str().to_str().unwrap()
))
.ok();
return;
}
if !TARGET_EXTENSIONS.contains(
filepath
.extension()
@@ -226,18 +253,23 @@ impl App {
} else if configs::CONFIG.read().unwrap().args.contributors {
self.print_contributors();
return;
} else if std::env::args()
.into_iter()
.any(|arg| arg.contains("level-tuning"))
{
let level_tuning_config_path = configs::CONFIG
} else if configs::CONFIG.read().unwrap().args.level_tuning.is_some() {
let level_tuning_val = &configs::CONFIG
.read()
.unwrap()
.args
.level_tuning
.as_path()
.clone()
.unwrap();
let level_tuning_config_path = match level_tuning_val {
Some(path) => path.to_owned(),
_ => utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"./rules/config/level_tuning.txt",
)
.display()
.to_string();
.to_string(),
};
if Path::new(&level_tuning_config_path).exists() {
if let Err(err) = LevelTuning::run(
@@ -262,9 +294,10 @@ impl App {
return;
} else {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&configs::CONFIG.read().unwrap().headless_help,
true,
)
.ok();
return;
@@ -272,11 +305,11 @@ impl App {
let analysis_end_time: DateTime<Local> = Local::now();
let analysis_duration = analysis_end_time.signed_duration_since(analysis_start_time);
println!();
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!("Elapsed Time: {}", &analysis_duration.hhmmssxxx()),
true,
)
.ok();
println!();
@@ -329,17 +362,30 @@ impl App {
)
.ok();
});
write_color_buffer(BufferWriter::stdout(ColorChoice::Always), None, &output).ok();
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&output,
true,
)
.ok();
} else {
// Case of standard output
let output = "The following pivot keywords were found:".to_string();
write_color_buffer(BufferWriter::stdout(ColorChoice::Always), None, &output).ok();
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&output,
true,
)
.ok();
pivot_key_unions.iter().for_each(|(key, pivot_keyword)| {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&create_output(String::default(), key, pivot_keyword),
true,
)
.ok();
});
@@ -423,9 +469,18 @@ impl App {
}
fn print_contributors(&self) {
match fs::read_to_string("./contributors.txt") {
match fs::read_to_string(utils::check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"contributors.txt",
)) {
Ok(contents) => {
write_color_buffer(BufferWriter::stdout(ColorChoice::Always), None, &contents).ok();
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&contents,
true,
)
.ok();
}
Err(err) => {
AlertMessage::alert(&format!("{}", err)).ok();
@@ -441,9 +496,10 @@ impl App {
.min_level
.to_uppercase();
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!("Analyzing event files: {:?}", evtx_files.len()),
true,
)
.ok();
@@ -543,11 +599,17 @@ impl App {
continue;
}
// Filter with target_eventids.txt.
// Filter by event ID with target_eventids.txt.
let data = record_result.as_ref().unwrap().data.clone();
let timestamp = record_result.unwrap().timestamp;
if !self._is_target_event_id(&data)
&& !configs::CONFIG.read().unwrap().args.deep_scan
{
continue;
}
if !self._is_target_event_id(&data) || !time_filter.is_target(&Some(timestamp)) {
// The time-based filtering branch was separated out to avoid confusing it with the EventID condition.
let timestamp = record_result.unwrap().timestamp;
if !time_filter.is_target(&Some(timestamp)) {
continue;
}
@@ -659,7 +721,7 @@ impl App {
/// output logo
fn output_logo(&self) {
let fp = &"art/logo.txt".to_string();
let fp = utils::check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), "art/logo.txt");
let content = fs::read_to_string(fp).unwrap_or_default();
let output_color = if configs::CONFIG.read().unwrap().args.no_color {
None
@@ -667,9 +729,10 @@ impl App {
Some(Color::Green)
};
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
output_color,
&content,
true,
)
.ok();
}
@@ -685,226 +748,19 @@ impl App {
match eggs.get(exec_datestr) {
None => {}
Some(path) => {
let content = fs::read_to_string(path).unwrap_or_default();
write_color_buffer(BufferWriter::stdout(ColorChoice::Always), None, &content).ok();
}
}
}
/// update rules (hayabusa-rules subrepository)
fn update_rules(&self) -> Result<String, git2::Error> {
let mut result;
let mut prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let mut prev_modified_rules: HashSet<String> = HashSet::default();
let hayabusa_repo = Repository::open(Path::new("."));
let hayabusa_rule_repo = Repository::open(Path::new("rules"));
if hayabusa_repo.is_err() && hayabusa_rule_repo.is_err() {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
None,
"Attempting to git clone the hayabusa-rules repository into the rules folder.",
)
.ok();
// Execute git clone of the hayabusa-rules repository when opening the hayabusa repository fails.
result = self.clone_rules();
} else if hayabusa_rule_repo.is_ok() {
// Case where the hayabusa-rules repository exists.
self._repo_main_reset_hard(hayabusa_rule_repo.as_ref().unwrap())?;
// If fetching origin/main fails, git clone is not executed, so a network error has possibly occurred.
prev_modified_rules = self.get_updated_rules("rules", &prev_modified_time);
prev_modified_time = fs::metadata("rules").unwrap().modified().unwrap();
result = self.pull_repository(&hayabusa_rule_repo.unwrap());
} else {
// Case where the hayabusa-rules repository does not exist under rules.
// Run the update via the submodule information, which exists if the hayabusa repository carries it.
prev_modified_time = fs::metadata("rules").unwrap().modified().unwrap();
let rules_path = Path::new("rules");
if !rules_path.exists() {
create_dir(rules_path).ok();
}
let hayabusa_repo = hayabusa_repo.unwrap();
let submodules = hayabusa_repo.submodules()?;
let mut is_success_submodule_update = true;
// The submodule rules erase path is hard-coded to avoid unintentionally removing a folder.
fs::remove_dir_all(".git/.submodule/rules").ok();
for mut submodule in submodules {
submodule.update(true, None)?;
let submodule_repo = submodule.open()?;
if let Err(e) = self.pull_repository(&submodule_repo) {
AlertMessage::alert(&format!("Failed submodule update. {}", e)).ok();
is_success_submodule_update = false;
}
}
if is_success_submodule_update {
result = Ok("Succeeded in submodule update".to_string());
} else {
result = Err(git2::Error::from_str(&String::default()));
}
}
if result.is_ok() {
let updated_modified_rules = self.get_updated_rules("rules", &prev_modified_time);
result =
self.print_diff_modified_rule_dates(prev_modified_rules, updated_modified_rules);
}
result
}
/// Hard reset on the main branch
fn _repo_main_reset_hard(&self, input_repo: &Repository) -> Result<(), git2::Error> {
let branch = input_repo
.find_branch("main", git2::BranchType::Local)
.unwrap();
let local_head = branch.get().target().unwrap();
let object = input_repo.find_object(local_head, None).unwrap();
match input_repo.reset(&object, git2::ResetType::Hard, None) {
Ok(()) => Ok(()),
_ => Err(git2::Error::from_str("Failed reset main branch in rules")),
}
}
/// Pull (fetch and fast-forward merge) the repository given as input_repo.
fn pull_repository(&self, input_repo: &Repository) -> Result<String, git2::Error> {
match input_repo
.find_remote("origin")?
.fetch(&["main"], None, None)
.map_err(|e| {
AlertMessage::alert(&format!("Failed git fetch to rules folder. {}", e)).ok();
}) {
Ok(it) => it,
Err(_err) => return Err(git2::Error::from_str(&String::default())),
};
let fetch_head = input_repo.find_reference("FETCH_HEAD")?;
let fetch_commit = input_repo.reference_to_annotated_commit(&fetch_head)?;
let analysis = input_repo.merge_analysis(&[&fetch_commit])?;
if analysis.0.is_up_to_date() {
Ok("Already up to date".to_string())
} else if analysis.0.is_fast_forward() {
let mut reference = input_repo.find_reference("refs/heads/main")?;
reference.set_target(fetch_commit.id(), "Fast-Forward")?;
input_repo.set_head("refs/heads/main")?;
input_repo.checkout_head(Some(git2::build::CheckoutBuilder::default().force()))?;
Ok("Finished fast forward merge.".to_string())
} else if analysis.0.is_normal() {
AlertMessage::alert(
"update-rules option is git Fast-Forward merge only. Please check your rules folder."
,
).ok();
Err(git2::Error::from_str(&String::default()))
} else {
Err(git2::Error::from_str(&String::default()))
}
}
/// Function that git clones the hayabusa-rules repository into the rules folder
fn clone_rules(&self) -> Result<String, git2::Error> {
match Repository::clone(
"https://github.com/Yamato-Security/hayabusa-rules.git",
"rules",
) {
Ok(_repo) => {
println!("Finished cloning the hayabusa-rules repository.");
Ok("Finished clone".to_string())
}
Err(e) => {
AlertMessage::alert(
&format!(
"Failed to git clone into the rules folder. Please rename your rules folder name. {}",
e
),
let egg_path = utils::check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), path);
let content = fs::read_to_string(egg_path).unwrap_or_default();
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&content,
true,
)
.ok();
Err(git2::Error::from_str(&String::default()))
}
}
}
/// Create a HashSet of rules folder files. Format: "[rule title in yaml]|[modified date in yaml]|[filepath]|[rule type in yaml]"
fn get_updated_rules(
&self,
rule_folder_path: &str,
target_date: &SystemTime,
) -> HashSet<String> {
let mut rulefile_loader = ParseYaml::new();
// The level passed to read_dir is hard-coded so that all rules are checked.
rulefile_loader
.read_dir(
rule_folder_path,
"INFORMATIONAL",
&filter::RuleExclude::default(),
)
.ok();
let hash_set_keys: HashSet<String> = rulefile_loader
.files
.into_iter()
.filter_map(|(filepath, yaml)| {
let file_modified_date = fs::metadata(&filepath).unwrap().modified().unwrap();
if file_modified_date.cmp(target_date).is_gt() {
let yaml_date = yaml["date"].as_str().unwrap_or("-");
return Option::Some(format!(
"{}|{}|{}|{}",
yaml["title"].as_str().unwrap_or(&String::default()),
yaml["modified"].as_str().unwrap_or(yaml_date),
&filepath,
yaml["ruletype"].as_str().unwrap_or("Other")
));
}
Option::None
})
.collect();
hash_set_keys
}
/// print updated rule files.
fn print_diff_modified_rule_dates(
&self,
prev_sets: HashSet<String>,
updated_sets: HashSet<String>,
) -> Result<String, git2::Error> {
let diff = updated_sets.difference(&prev_sets);
let mut update_count_by_rule_type: HashMap<String, u128> = HashMap::new();
let mut latest_update_date = Local.timestamp(0, 0);
for diff_key in diff {
let tmp: Vec<&str> = diff_key.split('|').collect();
let file_modified_date = fs::metadata(&tmp[2]).unwrap().modified().unwrap();
let dt_local: DateTime<Local> = file_modified_date.into();
if latest_update_date.cmp(&dt_local) == Ordering::Less {
latest_update_date = dt_local;
}
*update_count_by_rule_type
.entry(tmp[3].to_string())
.or_insert(0b0) += 1;
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
None,
&format!(
"[Updated] {} (Modified: {} | Path: {})",
tmp[0], tmp[1], tmp[2]
),
)
.ok();
}
println!();
for (key, value) in &update_count_by_rule_type {
println!("Updated {} rules: {}", key, value);
}
if !&update_count_by_rule_type.is_empty() {
Ok("Rule updated".to_string())
} else {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
None,
"You currently have the latest rules.",
)
.ok();
Ok("You currently have the latest rules.".to_string())
}
}
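`print_diff_modified_rule_dates` above tallies updated rules per rule type with `HashMap::entry(...).or_insert(0) += 1` over pipe-separated keys. That counting pattern in isolation, using the same key format that `get_updated_rules` builds (the sample rule titles and paths below are made up):

```rust
use std::collections::HashMap;

/// Count updated rules per rule type, where each key is
/// "title|modified|filepath|ruletype". Illustrative helper, not Hayabusa's API.
fn count_by_rule_type<'a>(diff_keys: &[&'a str]) -> HashMap<&'a str, u128> {
    let mut counts: HashMap<&str, u128> = HashMap::new();
    for key in diff_keys {
        let fields: Vec<&str> = key.split('|').collect();
        // fields[3] is the rule type; fall back to "Other" for malformed keys.
        let rule_type = fields.get(3).copied().unwrap_or("Other");
        *counts.entry(rule_type).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let keys = [
        "Susp Svc|2022/08/01|rules/a.yml|Hayabusa",
        "Whoami|2022/08/02|rules/b.yml|Sigma",
        "Mimikatz|2022/08/03|rules/c.yml|Sigma",
    ];
    let counts = count_by_rule_type(&keys);
    assert_eq!(counts["Sigma"], 2);
    assert_eq!(counts["Hayabusa"], 1);
}
```

The `entry` API avoids a separate contains/insert round trip, which is why the original code uses it for the per-type counters.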
/// check architecture
fn is_matched_architecture_and_binary(&self) -> bool {
if cfg!(target_os = "windows") {
@@ -925,7 +781,6 @@ impl App {
#[cfg(test)]
mod tests {
use crate::App;
use std::time::SystemTime;
#[test]
fn test_collect_evtxfiles() {
@@ -942,20 +797,4 @@ mod tests {
assert_eq!(is_contains, &true);
})
}
#[test]
fn test_get_updated_rules() {
let app = App::new();
let prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let prev_modified_rules =
app.get_updated_rules("test_files/rules/level_yaml", &prev_modified_time);
assert_eq!(prev_modified_rules.len(), 5);
let target_time: SystemTime = SystemTime::now();
let prev_modified_rules2 =
app.get_updated_rules("test_files/rules/level_yaml", &target_time);
assert_eq!(prev_modified_rules2.len(), 0);
}
}

View File

@@ -2,7 +2,7 @@ use crate::detections::utils::write_color_buffer;
use crate::detections::{configs, utils};
use crate::filter::RuleExclude;
use crate::yaml::ParseYaml;
use std::collections::HashMap;
use hashbrown::HashMap;
use std::fs::{self, File};
use std::io::Write;
use termcolor::{BufferWriter, ColorChoice};
@@ -59,9 +59,10 @@ impl LevelTuning {
for (path, rule) in rulefile_loader.files {
if let Some(new_level) = tuning_map.get(rule["id"].as_str().unwrap()) {
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!("path: {}", path),
true,
)
.ok();
let mut content = match fs::read_to_string(&path) {
@@ -94,13 +95,14 @@ impl LevelTuning {
file.write_all(content.as_bytes()).unwrap();
file.flush().unwrap();
write_color_buffer(
BufferWriter::stdout(ColorChoice::Always),
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!(
"level: {} -> {}",
rule["level"].as_str().unwrap(),
new_level
),
true,
)
.ok();
}

View File

@@ -1 +1,3 @@
pub mod level_tuning;
pub mod profile;
pub mod update_rules;

309
src/options/profile.rs Normal file
View File

@@ -0,0 +1,309 @@
use crate::detections::configs::{self, CURRENT_EXE_PATH};
use crate::detections::message::AlertMessage;
use crate::detections::utils::check_setting_path;
use crate::yaml;
use hashbrown::HashSet;
use lazy_static::lazy_static;
use linked_hash_map::LinkedHashMap;
use regex::RegexSet;
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};
use std::path::Path;
use yaml_rust::{Yaml, YamlEmitter, YamlLoader};
lazy_static! {
pub static ref PROFILES: Option<LinkedHashMap<String, String>> = load_profile(
check_setting_path(
&CURRENT_EXE_PATH.to_path_buf(),
"config/default_profile.yaml"
)
.to_str()
.unwrap(),
check_setting_path(&CURRENT_EXE_PATH.to_path_buf(), "config/profiles.yaml")
.to_str()
.unwrap()
);
pub static ref LOAEDED_PROFILE_ALIAS: HashSet<String> = HashSet::from_iter(
PROFILES
.as_ref()
.unwrap_or(&LinkedHashMap::default())
.values()
.cloned()
);
pub static ref PRELOAD_PROFILE: Vec<&'static str> = vec![
"%Timestamp%",
"%Computer%",
"%Channel%",
"%Level%",
"%EventID%",
"%RecordID%",
"%RuleTitle%",
"%RecordInformation%",
"%RuleFile%",
"%EvtxFile%",
"%MitreTactics%",
"%MitreTags%",
"%OtherTags%"
];
pub static ref PRELOAD_PROFILE_REGEX: RegexSet = RegexSet::new(&*PRELOAD_PROFILE).unwrap();
}
// Loads the profile at the specified path
fn read_profile_data(profile_path: &str) -> Result<Vec<Yaml>, String> {
let yml = yaml::ParseYaml::new();
if let Ok(loaded_profile) = yml.read_file(Path::new(profile_path).to_path_buf()) {
match YamlLoader::load_from_str(&loaded_profile) {
Ok(profile_yml) => Ok(profile_yml),
Err(e) => Err(format!("Parse error: {}. {}", profile_path, e)),
}
} else {
Err(format!(
"The profile file({}) does not exist. Please check your default profile.",
profile_path
))
}
}
/// Function to load profile information
pub fn load_profile(
default_profile_path: &str,
profile_path: &str,
) -> Option<LinkedHashMap<String, String>> {
let conf = &configs::CONFIG.read().unwrap().args;
if conf.set_default_profile.is_some() {
if let Err(e) = set_default_profile(default_profile_path, profile_path) {
AlertMessage::alert(&e).ok();
} else {
println!("Successfully updated the default profile.");
};
}
let profile_all: Vec<Yaml> = if conf.profile.is_none() {
match read_profile_data(default_profile_path) {
Ok(data) => data,
Err(e) => {
AlertMessage::alert(&e).ok();
vec![]
}
}
} else {
match read_profile_data(profile_path) {
Ok(data) => data,
Err(e) => {
AlertMessage::alert(&e).ok();
vec![]
}
}
};
// If loading the profile produced no results, an alert has already been raised, so return None to end the program.
if profile_all.is_empty() {
return None;
}
let profile_data = &profile_all[0];
let mut ret: LinkedHashMap<String, String> = LinkedHashMap::new();
if let Some(profile_name) = &conf.profile {
let target_data = &profile_data[profile_name.as_str()];
if !target_data.is_badvalue() {
target_data
.as_hash()
.unwrap()
.into_iter()
.for_each(|(k, v)| {
ret.insert(
k.as_str().unwrap().to_string(),
v.as_str().unwrap().to_string(),
);
});
Some(ret)
} else {
let profile_names: Vec<&str> = profile_data
.as_hash()
.unwrap()
.keys()
.map(|k| k.as_str().unwrap())
.collect();
AlertMessage::alert(&format!(
"Invalid profile specified: {}\nPlease specify one of the following profiles:\n {}",
profile_name,
profile_names.join(", ")
))
.ok();
None
}
} else {
profile_data
.as_hash()
.unwrap()
.into_iter()
.for_each(|(k, v)| {
ret.insert(
k.as_str().unwrap().to_string(),
v.as_str().unwrap().to_string(),
);
});
Some(ret)
}
}
/// Function to set the default profile
pub fn set_default_profile(default_profile_path: &str, profile_path: &str) -> Result<(), String> {
let profile_data: Vec<Yaml> = match read_profile_data(profile_path) {
Ok(data) => data,
Err(e) => {
AlertMessage::alert(&e).ok();
return Err("Failed to set the default profile.".to_string());
}
};
// Set the default profile
if let Some(profile_name) = &configs::CONFIG.read().unwrap().args.set_default_profile {
if let Ok(mut buf_wtr) = OpenOptions::new()
.write(true)
.truncate(true)
.open(default_profile_path)
.map(BufWriter::new)
{
let prof_all_data = &profile_data[0];
let overwrite_default_data = &prof_all_data[profile_name.as_str()];
if !overwrite_default_data.is_badvalue() {
let mut out_str = String::default();
let mut yml_writer = YamlEmitter::new(&mut out_str);
let dump_result = yml_writer.dump(overwrite_default_data);
match dump_result {
Ok(_) => match buf_wtr.write_all(out_str.as_bytes()) {
Err(e) => Err(format!(
"Failed to set the default profile file({}). {}",
profile_path, e
)),
_ => {
buf_wtr.flush().ok();
Ok(())
}
},
Err(e) => Err(format!(
"Failed to set the default profile file({}). {}",
profile_path, e
)),
}
} else {
let profile_names: Vec<&str> = prof_all_data
.as_hash()
.unwrap()
.keys()
.map(|k| k.as_str().unwrap())
.collect();
Err(format!(
"Invalid profile specified: {}\nPlease specify one of the following profiles:\n{}",
profile_name,
profile_names.join(", ")
))
}
} else {
Err(format!(
"Failed to set the default profile file({}).",
profile_path
))
}
} else {
Err("Not specified: --set-default-profile".to_string())
}
}
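set_default_profile relies on write+truncate semantics when overwriting default_profile.yaml, so a shorter profile fully replaces a longer one with no stale tail. A std-only sketch of that pattern (`create(true)` is added here only so the sketch runs against a fresh temp file; the original expects the file to already exist):

```rust
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};

// Open the target with write+truncate so previous, longer content
// cannot survive at the end of the file, then write through a BufWriter.
fn overwrite(path: &std::path::Path, body: &str) -> std::io::Result<()> {
    let mut w = OpenOptions::new()
        .write(true)
        .truncate(true)
        .create(true) // added for this sketch only
        .open(path)
        .map(BufWriter::new)?;
    w.write_all(body.as_bytes())?;
    w.flush()
}

fn main() {
    let path = std::env::temp_dir().join("default_profile_sketch.yaml");
    overwrite(&path, "Timestamp: \"%Timestamp%\"\nstale trailing line\n").unwrap();
    overwrite(&path, "Computer: \"%Computer%\"\n").unwrap();
    // The second, shorter write left nothing of the first behind.
    assert_eq!(
        std::fs::read_to_string(&path).unwrap(),
        "Computer: \"%Computer%\"\n"
    );
}
```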
#[cfg(test)]
mod tests {
use linked_hash_map::LinkedHashMap;
use crate::detections::configs;
use crate::options::profile::load_profile;
#[test]
/// Run these tests sequentially: once an option is set, the values are no longer idempotent across tests
fn test_load_profile() {
test_load_profile_without_profile_option();
test_load_profile_no_exist_profile_files();
test_load_profile_with_profile_option();
}
/// Test loading when the profile option is not set
fn test_load_profile_without_profile_option() {
configs::CONFIG.write().unwrap().args.profile = None;
let mut expect: LinkedHashMap<String, String> = LinkedHashMap::new();
expect.insert("Timestamp".to_owned(), "%Timestamp%".to_owned());
expect.insert("Computer".to_owned(), "%Computer%".to_owned());
expect.insert("Channel".to_owned(), "%Channel%".to_owned());
expect.insert("Level".to_owned(), "%Level%".to_owned());
expect.insert("EventID".to_owned(), "%EventID%".to_owned());
expect.insert("MitreAttack".to_owned(), "%MitreAttack%".to_owned());
expect.insert("RecordID".to_owned(), "%RecordID%".to_owned());
expect.insert("RuleTitle".to_owned(), "%RuleTitle%".to_owned());
expect.insert("Details".to_owned(), "%Details%".to_owned());
expect.insert(
"RecordInformation".to_owned(),
"%RecordInformation%".to_owned(),
);
expect.insert("RuleFile".to_owned(), "%RuleFile%".to_owned());
expect.insert("EvtxFile".to_owned(), "%EvtxFile%".to_owned());
expect.insert("Tags".to_owned(), "%MitreAttack%".to_owned());
assert_eq!(
Some(expect),
load_profile(
"test_files/config/default_profile.yaml",
"test_files/config/profiles.yaml"
)
);
}
/// Test when the profile option is set and a matching profile exists
fn test_load_profile_with_profile_option() {
configs::CONFIG.write().unwrap().args.profile = Some("minimal".to_string());
let mut expect: LinkedHashMap<String, String> = LinkedHashMap::new();
expect.insert("Timestamp".to_owned(), "%Timestamp%".to_owned());
expect.insert("Computer".to_owned(), "%Computer%".to_owned());
expect.insert("Channel".to_owned(), "%Channel%".to_owned());
expect.insert("EventID".to_owned(), "%EventID%".to_owned());
expect.insert("Level".to_owned(), "%Level%".to_owned());
expect.insert("RuleTitle".to_owned(), "%RuleTitle%".to_owned());
expect.insert("Details".to_owned(), "%Details%".to_owned());
assert_eq!(
Some(expect),
load_profile(
"test_files/config/default_profile.yaml",
"test_files/config/profiles.yaml"
)
);
}
/// Test when the profile option is set but the target profile does not exist
fn test_load_profile_no_exist_profile_files() {
configs::CONFIG.write().unwrap().args.profile = Some("not_exist".to_string());
// Case where neither file exists
assert_eq!(
None,
load_profile(
"test_files/config/no_exist_default_profile.yaml",
"test_files/config/no_exist_profiles.yaml"
)
);
// Case where the default profile exists but loading fails because the profile option is specified and profiles.yaml does not exist
assert_eq!(
None,
load_profile(
"test_files/config/profile/default_profile.yaml",
"test_files/config/profile/no_exist_profiles.yaml"
)
);
// Case where the target profile file exists but the profile specified by the profile option does not
assert_eq!(
None,
load_profile(
"test_files/config/no_exist_default_profile.yaml",
"test_files/config/profiles.yaml"
)
);
}
}

273
src/options/update_rules.rs Normal file
View File

@@ -0,0 +1,273 @@
use crate::detections::message::AlertMessage;
use crate::detections::utils::write_color_buffer;
use crate::filter;
use crate::yaml::ParseYaml;
use chrono::{DateTime, Local, TimeZone};
use git2::Repository;
use std::fs::{self};
use std::path::Path;
use hashbrown::{HashMap, HashSet};
use std::cmp::Ordering;
use std::time::SystemTime;
use std::fs::create_dir;
use termcolor::{BufferWriter, ColorChoice};
pub struct UpdateRules {}
impl UpdateRules {
/// update rules(hayabusa-rules subrepository)
pub fn update_rules(rule_path: &str) -> Result<String, git2::Error> {
let mut result;
let mut prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let mut prev_modified_rules: HashSet<String> = HashSet::default();
let hayabusa_repo = Repository::open(Path::new("."));
let hayabusa_rule_repo = Repository::open(Path::new(rule_path));
if hayabusa_repo.is_err() && hayabusa_rule_repo.is_err() {
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
"Attempting to git clone the hayabusa-rules repository into the rules folder.",
true,
)
.ok();
// Execute git clone of the hayabusa-rules repository when the hayabusa repository could not be opened.
result = UpdateRules::clone_rules(Path::new(rule_path));
} else if hayabusa_rule_repo.is_ok() {
// Case where the hayabusa-rules repository exists
UpdateRules::_repo_main_reset_hard(hayabusa_rule_repo.as_ref().unwrap())?;
// If fetching origin/main fails, git clone is not executed, so a network error has possibly occurred.
prev_modified_rules = UpdateRules::get_updated_rules(rule_path, &prev_modified_time);
prev_modified_time = fs::metadata(rule_path).unwrap().modified().unwrap();
result = UpdateRules::pull_repository(&hayabusa_rule_repo.unwrap());
} else {
// Case where the hayabusa-rules repository does not exist in rules.
// Execute a submodule update, since the hayabusa repository carries the submodule information.
prev_modified_time = fs::metadata(rule_path).unwrap().modified().unwrap();
let rules_path = Path::new(rule_path);
if !rules_path.exists() {
create_dir(rules_path).ok();
}
if rule_path == "./rules" {
let hayabusa_repo = hayabusa_repo.unwrap();
let submodules = hayabusa_repo.submodules()?;
let mut is_success_submodule_update = true;
// The submodule rules erase path is hard-coded to avoid unintentionally removing a folder.
fs::remove_dir_all(".git/.submodule/rules").ok();
for mut submodule in submodules {
submodule.update(true, None)?;
let submodule_repo = submodule.open()?;
if let Err(e) = UpdateRules::pull_repository(&submodule_repo) {
AlertMessage::alert(&format!("Failed submodule update. {}", e)).ok();
is_success_submodule_update = false;
}
}
if is_success_submodule_update {
result = Ok("Succeeded in updating submodules".to_string());
} else {
result = Err(git2::Error::from_str(&String::default()));
}
} else {
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
"Attempting to git clone the hayabusa-rules repository into the rules folder.",
true,
)
.ok();
// Execute git clone of the hayabusa-rules repository when the hayabusa repository could not be opened.
result = UpdateRules::clone_rules(rules_path);
}
}
if result.is_ok() {
let updated_modified_rules =
UpdateRules::get_updated_rules(rule_path, &prev_modified_time);
result = UpdateRules::print_diff_modified_rule_dates(
prev_modified_rules,
updated_modified_rules,
);
}
result
}
/// Hard reset on the main branch
fn _repo_main_reset_hard(input_repo: &Repository) -> Result<(), git2::Error> {
let branch = input_repo
.find_branch("main", git2::BranchType::Local)
.unwrap();
let local_head = branch.get().target().unwrap();
let object = input_repo.find_object(local_head, None).unwrap();
match input_repo.reset(&object, git2::ResetType::Hard, None) {
Ok(()) => Ok(()),
_ => Err(git2::Error::from_str("Failed reset main branch in rules")),
}
}
/// Pull (fetch and fast-forward merge) the repository given in input_repo.
fn pull_repository(input_repo: &Repository) -> Result<String, git2::Error> {
match input_repo
.find_remote("origin")?
.fetch(&["main"], None, None)
.map_err(|e| {
AlertMessage::alert(&format!("Failed git fetch to rules folder. {}", e)).ok();
}) {
Ok(it) => it,
Err(_err) => return Err(git2::Error::from_str(&String::default())),
};
let fetch_head = input_repo.find_reference("FETCH_HEAD")?;
let fetch_commit = input_repo.reference_to_annotated_commit(&fetch_head)?;
let analysis = input_repo.merge_analysis(&[&fetch_commit])?;
if analysis.0.is_up_to_date() {
Ok("Already up to date".to_string())
} else if analysis.0.is_fast_forward() {
let mut reference = input_repo.find_reference("refs/heads/main")?;
reference.set_target(fetch_commit.id(), "Fast-Forward")?;
input_repo.set_head("refs/heads/main")?;
input_repo.checkout_head(Some(git2::build::CheckoutBuilder::default().force()))?;
Ok("Finished fast forward merge.".to_string())
} else if analysis.0.is_normal() {
AlertMessage::alert(
"The update-rules option only supports git fast-forward merges. Please check your rules folder.",
)
.ok();
Err(git2::Error::from_str(&String::default()))
} else {
Err(git2::Error::from_str(&String::default()))
}
}
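The fast-forward-only policy of pull_repository can be sketched as a pure decision function. `Analysis` here is a hypothetical stand-in for git2's `MergeAnalysis` flags, used only for illustration:

```rust
// Reduce the merge-analysis branching to a pure function: up-to-date
// and fast-forward cases succeed, anything needing a real merge
// commit is refused, as update_rules does.
#[derive(Debug, PartialEq)]
enum Analysis {
    UpToDate,
    FastForward,
    Normal, // would require a real merge commit
}

fn merge_decision(a: Analysis) -> Result<&'static str, &'static str> {
    match a {
        Analysis::UpToDate => Ok("Already up to date"),
        Analysis::FastForward => Ok("Finished fast forward merge."),
        Analysis::Normal => Err("fast-forward only"),
    }
}

fn main() {
    assert_eq!(merge_decision(Analysis::UpToDate), Ok("Already up to date"));
    assert_eq!(
        merge_decision(Analysis::FastForward),
        Ok("Finished fast forward merge.")
    );
    assert!(merge_decision(Analysis::Normal).is_err());
}
```

In the real code the fast-forward path also moves `refs/heads/main` to the fetched commit and forces a checkout of HEAD.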
/// Function to git clone the hayabusa-rules repository into the rules folder
fn clone_rules(rules_path: &Path) -> Result<String, git2::Error> {
match Repository::clone(
"https://github.com/Yamato-Security/hayabusa-rules.git",
rules_path,
) {
Ok(_repo) => {
println!("Finished cloning the hayabusa-rules repository.");
Ok("Finished clone".to_string())
}
Err(e) => {
AlertMessage::alert(
&format!(
"Failed to git clone into the rules folder. Please rename your rules folder name. {}",
e
),
)
.ok();
Err(git2::Error::from_str(&String::default()))
}
}
}
/// Create a HashSet of rule files in the rules folder. The key format is "[rule title]|[rule modified date]|[filepath]|[ruletype]"
fn get_updated_rules(rule_folder_path: &str, target_date: &SystemTime) -> HashSet<String> {
let mut rulefile_loader = ParseYaml::new();
// The level passed to read_dir is hard-coded so that all rules are checked.
rulefile_loader
.read_dir(
rule_folder_path,
"INFORMATIONAL",
&filter::RuleExclude::default(),
)
.ok();
let hash_set_keys: HashSet<String> = rulefile_loader
.files
.into_iter()
.filter_map(|(filepath, yaml)| {
let file_modified_date = fs::metadata(&filepath).unwrap().modified().unwrap();
if file_modified_date.cmp(target_date).is_gt() {
let yaml_date = yaml["date"].as_str().unwrap_or("-");
return Option::Some(format!(
"{}|{}|{}|{}",
yaml["title"].as_str().unwrap_or(&String::default()),
yaml["modified"].as_str().unwrap_or(yaml_date),
&filepath,
yaml["ruletype"].as_str().unwrap_or("Other")
));
}
Option::None
})
.collect();
hash_set_keys
}
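The pipe-delimited key built above and split back apart in print_diff_modified_rule_dates can be expressed as two tiny helpers (names are illustrative):

```rust
// Build the "title|modified|filepath|ruletype" key used to compare
// rule sets before and after an update...
fn make_key(title: &str, modified: &str, path: &str, ruletype: &str) -> String {
    format!("{}|{}|{}|{}", title, modified, path, ruletype)
}

// ...and recover the four fields, as the printing side does with split('|').
fn split_key(key: &str) -> (&str, &str, &str, &str) {
    let parts: Vec<&str> = key.split('|').collect();
    (parts[0], parts[1], parts[2], parts[3])
}

fn main() {
    let key = make_key("Test Rule", "2021/11/18", "rules/test.yml", "SIGMA");
    assert_eq!(
        split_key(&key),
        ("Test Rule", "2021/11/18", "rules/test.yml", "SIGMA")
    );
}
```

Note that a literal `|` inside a rule title would break this round trip; the scheme assumes titles do not contain the delimiter.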
/// Print the updated rule files.
fn print_diff_modified_rule_dates(
prev_sets: HashSet<String>,
updated_sets: HashSet<String>,
) -> Result<String, git2::Error> {
let diff = updated_sets.difference(&prev_sets);
let mut update_count_by_rule_type: HashMap<String, u128> = HashMap::new();
let mut latest_update_date = Local.timestamp(0, 0);
for diff_key in diff {
let tmp: Vec<&str> = diff_key.split('|').collect();
let file_modified_date = fs::metadata(&tmp[2]).unwrap().modified().unwrap();
let dt_local: DateTime<Local> = file_modified_date.into();
if latest_update_date.cmp(&dt_local) == Ordering::Less {
latest_update_date = dt_local;
}
*update_count_by_rule_type
.entry(tmp[3].to_string())
.or_insert(0b0) += 1;
let path_str: &str = if tmp[2].starts_with("./") {
tmp[2].strip_prefix("./").unwrap()
} else {
tmp[2]
};
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
&format!(
"[Updated] {} (Modified: {} | Path: {})",
tmp[0], tmp[1], path_str
),
true,
)
.ok();
}
println!();
for (key, value) in &update_count_by_rule_type {
println!("Updated {} rules: {}", key, value);
}
if !&update_count_by_rule_type.is_empty() {
Ok("Rule updated".to_string())
} else {
write_color_buffer(
&BufferWriter::stdout(ColorChoice::Always),
None,
"You currently have the latest rules.",
true,
)
.ok();
Ok("You currently have the latest rules.".to_string())
}
}
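The tallying above, stripped of I/O, is a set difference followed by a count per ruletype (the fourth pipe-separated field). A std-only sketch:

```rust
use std::collections::{HashMap, HashSet};

// Take the before/after key sets, diff them, and count new or
// modified rules per ruletype, as print_diff_modified_rule_dates does.
fn count_updates_by_type(
    prev: &HashSet<String>,
    updated: &HashSet<String>,
) -> HashMap<String, u128> {
    let mut counts: HashMap<String, u128> = HashMap::new();
    for key in updated.difference(prev) {
        let ruletype = key.split('|').nth(3).unwrap_or("Other");
        *counts.entry(ruletype.to_string()).or_insert(0) += 1;
    }
    counts
}

fn main() {
    let prev: HashSet<String> = ["a|d|p|SIGMA".to_string()].into_iter().collect();
    let updated: HashSet<String> = [
        "a|d|p|SIGMA".to_string(),
        "b|d|q|SIGMA".to_string(),
        "c|d|r|Other".to_string(),
    ]
    .into_iter()
    .collect();
    let counts = count_updates_by_type(&prev, &updated);
    assert_eq!(counts.get("SIGMA"), Some(&1)); // "b|d|q|SIGMA" is new
    assert_eq!(counts.get("Other"), Some(&1));
}
```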
}
#[cfg(test)]
mod tests {
use crate::options::update_rules::UpdateRules;
use std::time::SystemTime;
#[test]
fn test_get_updated_rules() {
let prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let prev_modified_rules =
UpdateRules::get_updated_rules("test_files/rules/level_yaml", &prev_modified_time);
assert_eq!(prev_modified_rules.len(), 5);
let target_time: SystemTime = SystemTime::now();
let prev_modified_rules2 =
UpdateRules::get_updated_rules("test_files/rules/level_yaml", &target_time);
assert_eq!(prev_modified_rules2.len(), 0);
}
}

View File

@@ -1,4 +1,4 @@
use crate::detections::print::{LOGONSUMMARY_FLAG, STATISTICS_FLAG};
use crate::detections::message::{LOGONSUMMARY_FLAG, STATISTICS_FLAG};
use crate::detections::{detection::EvtxRecordInfo, utils};
use hashbrown::HashMap;
@@ -129,8 +129,21 @@ impl EventStatistics {
if evtid.is_none() {
continue;
}
let idnum: i64 = if evtid.unwrap().is_number() {
evtid.unwrap().as_i64().unwrap()
} else {
evtid
.unwrap()
.as_str()
.unwrap()
.parse::<i64>()
.unwrap_or_default()
};
if !(idnum == 4624 || idnum == 4625) {
continue;
}
let username = utils::get_event_value("TargetUserName", &record.record);
let idnum = evtid.unwrap();
let countlist: [usize; 2] = [0, 0];
if idnum == 4624 {
let count: &mut [usize; 2] = self
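The EventID normalization removed by the hunk above accepted the field either as a JSON number or as a string such as "4624". A std-only sketch of that fallback (the enum is a hypothetical stand-in for the serde value):

```rust
// The field may arrive as a number or as a string; non-numeric
// strings fall back to 0 via unwrap_or_default(), as the old code did.
enum EventIdValue {
    Num(i64),
    Text(String),
}

fn normalize_event_id(v: &EventIdValue) -> i64 {
    match v {
        EventIdValue::Num(n) => *n,
        EventIdValue::Text(s) => s.parse::<i64>().unwrap_or_default(),
    }
}

fn main() {
    assert_eq!(normalize_event_id(&EventIdValue::Num(4624)), 4624);
    assert_eq!(normalize_event_id(&EventIdValue::Text("4625".into())), 4625);
    assert_eq!(normalize_event_id(&EventIdValue::Text("oops".into())), 0);
}
```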

View File

@@ -1,4 +1,4 @@
use crate::detections::print::{LOGONSUMMARY_FLAG, STATISTICS_FLAG};
use crate::detections::message::{LOGONSUMMARY_FLAG, STATISTICS_FLAG};
use crate::detections::{configs::CONFIG, detection::EvtxRecordInfo};
use prettytable::{Cell, Row, Table};

View File

@@ -2,9 +2,9 @@ extern crate serde_derive;
extern crate yaml_rust;
use crate::detections::configs;
use crate::detections::print::AlertMessage;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::QUIET_ERRORS_FLAG;
use crate::detections::configs::EXCLUDE_STATUS;
use crate::detections::message::AlertMessage;
use crate::detections::message::{ERROR_LOG_STACK, QUIET_ERRORS_FLAG};
use crate::filter::RuleExclude;
use hashbrown::HashMap;
use std::ffi::OsStr;
@@ -165,6 +165,19 @@ impl ParseYaml {
return io::Result::Ok(ret);
}
// Ignore tool test yml files in hayabusa-rules.
if path
.to_str()
.unwrap()
.contains("rules/tools/sigmac/test_files")
|| path
.to_str()
.unwrap()
.contains("rules\\tools\\sigmac\\test_files")
{
return io::Result::Ok(ret);
}
// Do not terminate immediately if reading an individual file fails.
let read_content = self.read_file(path);
if read_content.is_err() {
@@ -231,7 +244,28 @@ impl ParseYaml {
} else {
"noisy"
};
let entry = self.rule_load_cnt.entry(entry_key.to_string()).or_insert(0);
// Exclude test rules (ID: 000...0) from the excluded rule count
if v != "00000000-0000-0000-0000-000000000000" {
let entry =
self.rule_load_cnt.entry(entry_key.to_string()).or_insert(0);
*entry += 1;
}
if entry_key == "excluded"
|| (entry_key == "noisy"
&& !configs::CONFIG.read().unwrap().args.enable_noisy_rules)
{
return Option::None;
}
}
}
let status = &yaml_doc["status"].as_str();
if let Some(s) = status {
if EXCLUDE_STATUS.contains(&s.to_string()) {
let entry = self
.rule_load_cnt
.entry("excluded".to_string())
.or_insert(0);
*entry += 1;
return Option::None;
}
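The status-based exclusion added above (rules whose `status` is in EXCLUDE_STATUS bump the "excluded" counter and are dropped before loading) can be sketched with std types only; the input shape and the sample status names are illustrative:

```rust
use std::collections::HashMap;

// Drop rules whose status is excluded, counting each drop under
// the "excluded" key, and keep the rest.
fn filter_by_status<'a>(
    rules: Vec<(&'a str, &'a str)>, // (filepath, status)
    exclude_status: &[&str],
    counts: &mut HashMap<String, u128>,
) -> Vec<&'a str> {
    rules
        .into_iter()
        .filter_map(|(path, status)| {
            if exclude_status.contains(&status) {
                *counts.entry("excluded".to_string()).or_insert(0) += 1;
                None
            } else {
                Some(path)
            }
        })
        .collect()
}

fn main() {
    let mut counts = HashMap::new();
    let kept = filter_by_status(
        vec![("a.yml", "stable"), ("b.yml", "unsupported")],
        &["unsupported"], // hypothetical exclude list
        &mut counts,
    );
    assert_eq!(kept, vec!["a.yml"]);
    assert_eq!(counts.get("excluded"), Some(&1));
}
```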
@@ -271,19 +305,6 @@ impl ParseYaml {
if doc_level_num < args_level_num {
return Option::None;
}
if !configs::CONFIG.read().unwrap().args.enable_deprecated_rules {
let rule_status = &yaml_doc["status"].as_str().unwrap_or_default();
if *rule_status == "deprecated" {
let entry = self
.rule_status_cnt
.entry(rule_status.to_string())
.or_insert(0);
*entry += 1;
return Option::None;
}
}
Option::Some((filepath, yaml_doc))
})
.collect();
@@ -295,8 +316,8 @@ impl ParseYaml {
#[cfg(test)]
mod tests {
use crate::detections::print::AlertMessage;
use crate::detections::print::ERROR_LOG_PATH;
use crate::detections::message::AlertMessage;
use crate::detections::message::ERROR_LOG_PATH;
use crate::filter;
use crate::yaml;
use crate::yaml::RuleExclude;
@@ -439,7 +460,7 @@ mod tests {
yaml.read_dir(path, "", &exclude_ids).unwrap();
assert_eq!(
yaml.rule_status_cnt.get("deprecated").unwrap().to_owned(),
2
1
);
}
}

View File

@@ -0,0 +1,13 @@
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
Level: "%Level%"
EventID: "%EventID%"
MitreAttack: "%MitreAttack%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
RecordInformation: "%RecordInformation%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
Tags: "%MitreAttack%"

View File

@@ -0,0 +1,44 @@
minimal:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
standard:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
Tags: "%MitreAttack%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
verbose-1:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
Tags: "%MitreAttack%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
RuleFile: "%RuleFile%"
EvtxFile: "%EvtxFile%"
verbose-2:
Timestamp: "%Timestamp%"
Computer: "%Computer%"
Channel: "%Channel%"
EventID: "%EventID%"
Level: "%Level%"
Tags: "%MitreAttack%"
RecordID: "%RecordID%"
RuleTitle: "%RuleTitle%"
Details: "%Details%"
AllFieldInfo: "%RecordInformation%"

View File

@@ -1,5 +1,5 @@
title: Sysmon Check command lines
id : 4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6
title: Excluded Rule Test 1
id : 00000000-0000-0000-0000-000000000000
description: hogehoge
enabled: true
author: Yea

View File

@@ -1,13 +1,10 @@
title: Possible Exploitation of Exchange RCE CVE-2021-42321
author: Florian Roth, @testanull
title: Excluded Rule 2
date: 2021/11/18
description: Detects log entries that appear in exploitation attempts against MS Exchange
RCE CVE-2021-42321
detection:
condition: 'Cmdlet failed. Cmdlet Get-App, '
falsepositives:
- Unknown, please report false positives via https://github.com/SigmaHQ/sigma/issues
id: c92f1896-d1d2-43c3-92d5-7a5b35c217bb
id: 00000000-0000-0000-0000-000000000000
level: critical
logsource:
product: windows
@@ -15,7 +12,4 @@ logsource:
references:
- https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-42321
status: experimental
tags:
- attack.lateral_movement
- attack.t1210
ruletype: SIGMA

View File

@@ -1,8 +1,5 @@
title: Hidden Local User Creation
author: Christian Burkard
title: Excluded Rule 3
date: 2021/05/03
description: Detects the creation of a local hidden user account which should not
happen for event ID 4720.
detection:
SELECTION_1:
EventID: 4720
@@ -14,7 +11,7 @@ falsepositives:
fields:
- EventCode
- AccountName
id: 7b449a5e-1db5-4dd0-a2dc-4e3a67282538
id: 00000000-0000-0000-0000-000000000000
level: high
logsource:
product: windows
@@ -22,7 +19,4 @@ logsource:
references:
- https://twitter.com/SBousseaden/status/1387743867663958021
status: experimental
tags:
- attack.persistence
- attack.t1136.001
ruletype: SIGMA

View File

@@ -1,8 +1,5 @@
title: User Added to Local Administrators
author: Florian Roth
title: Excluded Rule 4
date: 2017/03/14
description: This rule triggers on user accounts that are added to the local Administrators
group, which could be legitimate activity or a sign of privilege escalation activity
detection:
SELECTION_1:
EventID: 4732
@@ -13,18 +10,11 @@ detection:
SELECTION_4:
SubjectUserName: '*$'
condition: ((SELECTION_1 and (SELECTION_2 or SELECTION_3)) and not (SELECTION_4))
falsepositives:
- Legitimate administrative activity
id: c265cf08-3f99-46c1-8d59-328247057d57
id: 00000000-0000-0000-0000-000000000000
level: medium
logsource:
product: windows
service: security
modified: 2021/07/07
status: stable
tags:
- attack.privilege_escalation
- attack.t1078
- attack.persistence
- attack.t1098
ruletype: SIGMA

View File

@@ -1,9 +1,5 @@
title: Local User Creation
author: Patrick Bareiss
title: Excluded Rule 5
date: 2019/04/18
description: Detects local user creation on windows servers, which shouldn't happen
in an Active Directory environment. Apply this Sigma Use Case on your windows server
logs and not on your DC logs.
detection:
SELECTION_1:
EventID: 4720
@@ -15,7 +11,7 @@ fields:
- EventCode
- AccountName
- AccountDomain
id: 66b6be3d-55d0-4f47-9855-d69df21740ea
id: 00000000-0000-0000-0000-000000000000
level: low
logsource:
product: windows
@@ -24,8 +20,4 @@ modified: 2020/08/23
references:
- https://patrick-bareiss.com/detecting-local-user-creation-in-ad-with-sigma/
status: experimental
tags:
- attack.persistence
- attack.t1136
- attack.t1136.001
ruletype: SIGMA

View File

@@ -1,7 +1,5 @@
title: WMI Event Subscription
author: Tom Ueltschi (@c_APT_ure)
title: Noisy Rule Test1
date: 2019/01/12
description: Detects creation of WMI event subscription persistence method
detection:
SELECTION_1:
EventID: 19
@@ -12,7 +10,7 @@ detection:
condition: (SELECTION_1 or SELECTION_2 or SELECTION_3)
falsepositives:
- exclude legitimate (vetted) use of WMI event subscription in your network
id: 0f06a3a5-6a09-413f-8743-e6cf35561297
id: 0090ea60-f4a2-43a8-8657-3a9a4ddcf547
level: high
logsource:
category: wmi_event

View File

@@ -1,9 +1,6 @@
title: Rare Schtasks Creations
author: Florian Roth
title: Noisy Rule Test2
date: 2017/03/23
description: Detects rare scheduled tasks creations that only appear a few times per
time frame and could reveal password dumpers, backdoor installs or other types of
malicious code
description: excluded rule
detection:
SELECTION_1:
EventID: 4698
@@ -11,21 +8,6 @@ detection:
falsepositives:
- Software installation
- Software updates
id: b0d77106-7bb0-41fe-bd94-d1752164d066
id: 8b8db936-172e-4bb7-9f84-ccc954d51d93
level: low
logsource:
definition: The Advanced Audit Policy setting Object Access > Audit Other Object
Access Events has to be configured to allow this detection (not in the baseline
recommendations by Microsoft). We also recommend extracting the Command field
from the embedded XML in the event data.
product: windows
service: security
status: experimental
tags:
- attack.execution
- attack.privilege_escalation
- attack.persistence
- attack.t1053
- car.2013-08-001
- attack.t1053.005
ruletype: SIGMA

View File

@@ -1,26 +1,13 @@
title: Rare Service Installs
author: Florian Roth
title: Noisy Rule Test 3
date: 2017/03/08
description: Detects rare service installs that only appear a few times per time frame
and could reveal password dumpers, backdoor installs or other types of malicious
services
detection:
SELECTION_1:
EventID: 7045
condition: SELECTION_1 | count() by ServiceFileName < 5
falsepositives:
- Software installation
- Software updates
id: 66bfef30-22a5-4fcd-ad44-8d81e60922ae
id: 1703ba97-b2c2-4071-a241-a16d017d25d3
level: low
logsource:
product: windows
service: system
status: experimental
tags:
- attack.persistence
- attack.privilege_escalation
- attack.t1050
- car.2013-09-005
- attack.t1543.003
ruletype: SIGMA

View File

@@ -1,8 +1,5 @@
title: Failed Logins with Different Accounts from Single Source System
author: Florian Roth
title: Noisy Rule Test 4
date: 2017/01/10
description: Detects suspicious failed logins with different user accounts from a
single source system
detection:
SELECTION_1:
EventID: 529
@@ -14,20 +11,11 @@ detection:
WorkstationName: '*'
condition: ((SELECTION_1 or SELECTION_2) and SELECTION_3 and SELECTION_4) | count(TargetUserName)
by WorkstationName > 3
falsepositives:
- Terminal servers
- Jump servers
- Other multiuser systems like Citrix server farms
- Workstations with frequently changing users
id: e98374a6-e2d9-4076-9b5c-11bdb2569995
id: 9f5663ce-6205-4753-b486-fb8498d1fae5
level: medium
logsource:
product: windows
service: security
modified: 2021/09/21
status: experimental
tags:
- attack.persistence
- attack.privilege_escalation
- attack.t1078
ruletype: SIGMA

View File

@@ -1,8 +1,5 @@
title: Failed Logins with Different Accounts from Single Source System
author: Florian Roth
title: Noisy Rule Test 5
date: 2017/01/10
description: Detects suspicious failed logins with different user accounts from a
single source system
detection:
SELECTION_1:
EventID: 4776
@@ -12,23 +9,11 @@ detection:
Workstation: '*'
condition: (SELECTION_1 and SELECTION_2 and SELECTION_3) | count(TargetUserName)
by Workstation > 3
falsepositives:
- Terminal servers
- Jump servers
- Other multiuser systems like Citrix server farms
- Workstations with frequently changing users
id: 6309ffc4-8fa2-47cf-96b8-a2f72e58e538
id: 3546ce10-19b4-4c4c-9658-f4f3b5d27ae9
level: medium
logsource:
product: windows
service: security
modified: 2021/09/21
related:
- id: e98374a6-e2d9-4076-9b5c-11bdb2569995
type: derived
status: experimental
tags:
- attack.persistence
- attack.privilege_escalation
- attack.t1078
ruletype: SIGMA