v1.2 pre-release merge (#495)

* Fix/fix clippy warn (#434)

- Fixed the following Clippy warnings (warning count: 671 before -> 4 after); a short before/after sketch of two of these lints follows the list below:
  - clippy::needless_return
  - clippy::println_empty_string
  - clippy::redundant_field_names
  - clippy::single_char_pattern
  - clippy::len_zero
  - clippy::iter_nth_zero
  - clippy::bool_comparison
  - clippy::question_mark
  - clippy::needless_collect
  - clippy::unnecessary_unwrap
  - clippy::ptr_arg
  - clippy::needless_borrow
  - clippy::new_without_default
  - clippy::assign_op_pattern
  - clippy::bool_assert_comparison
  - clippy::into_iter_on_ref
  - clippy::deref_addrof
  - clippy::while_let_on_iterator
  - clippy::match_like_matches_macro
  - clippy::or_fun_call
  - clippy::useless_conversion
  - clippy::let_and_return
  - clippy::redundant_clone
  - clippy::redundant_closure
  - clippy::cmp_owned
  - clippy::upper_case_acronyms
  - clippy::map_identity
  - clippy::unused_io_amount
  - clippy::assertions_on_constants
  - clippy::op_ref
  - clippy::useless_vec
  - clippy::vec_init_then_push
  - clippy::useless_format
  - clippy::bind_instead_of_map
  - clippy::clone_on_copy
  - clippy::too_many_arguments
  - clippy::module_inception
  - clippy::needless_lifetimes
  - clippy::borrowed_box (thanks to hach1yon for the help!)
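
As a concrete illustration, here is a minimal before/after sketch of two of the lints above (hypothetical function names, not code from the Hayabusa sources):

```rust
// clippy::needless_return -- before:
fn add_before(a: i32, b: i32) -> i32 {
    return a + b;
}

// after -- the trailing expression is the return value:
fn add_after(a: i32, b: i32) -> i32 {
    a + b
}

// clippy::redundant_field_names -- before this lint, fields were
// written as `Point { x: x, y: y }`; after, field-init shorthand:
struct Point {
    x: i32,
    y: i32,
}

fn make_point(x: i32, y: i32) -> Point {
    Point { x, y }
}

fn main() {
    assert_eq!(add_before(1, 2), add_after(1, 2));
    let p = make_point(3, 4);
    assert_eq!((p.x, p.y), (3, 4));
}
```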

* Merge main and output fixes #443 #444 (#445)

* removed tools/sigmac (#441)

* removed tools/sigmac

- moved tools/sigmac to hayabusa-rules repo

* fixed doc link tools/sigmac

* fixed submodule track

* fixed submodule track from latest to v1.1.0 tag

* fixed link

* removed extra newline #444

* removed extra newline #444

* reverted logo newline

* fixed rules submodule target commit #444

Co-authored-by: Yamato Security <71482215+YamatoSecurity@users.noreply.github.com>

* readme update screenshots etc (#448)

* Changed Cargo.toml settings to statically compile OpenSSL (#437)

* cargo update - openssl static

* updated cargo

* macos2apple

* cargo update

* cargo update

* Automatically scan Event.EventData even when no alias key is defined (#442)

* add no event key

* support searching for unregistered aliases

* added checking of EventData when a key does not match any alias #290

- added a check for the key in Event.EventData when the key does not exist in eventkey_alias.txt.
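
A minimal sketch of this fallback, assuming simple string maps; the function and variable names are illustrative, not Hayabusa's actual code:

```rust
use std::collections::HashMap;

// Look the key up via eventkey_alias.txt first; if the key is not
// registered there, fall back to scanning Event.EventData directly.
fn resolve<'a>(
    alias: &HashMap<String, String>,         // mappings from eventkey_alias.txt
    event_data: &'a HashMap<String, String>, // values under Event.EventData
    key: &str,
) -> Option<&'a String> {
    match alias.get(key) {
        Some(mapped) => event_data.get(mapped), // alias hit: use the mapped key
        None => event_data.get(key),            // no alias: search EventData itself
    }
}

fn main() {
    let alias = HashMap::new(); // key is not registered in eventkey_alias.txt
    let mut data = HashMap::new();
    data.insert("TargetUserName".to_string(), "admin".to_string());
    assert_eq!(
        resolve(&alias, &data, "TargetUserName").map(String::as_str),
        Some("admin")
    );
}
```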

* cargo fmt

* fixed panic when filter files do not exist

* fixed error log format when filter config files do not exist

Co-authored-by: DustInDark <nextsasasa@gmail.com>

* changed downcast library from mopa to downcast_rs #447 (#450)
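
A minimal sketch of the downcast_rs pattern that replaces mopa, assuming downcast_rs 1.2 as a dependency; the trait and type names here are hypothetical, not Hayabusa's actual ones:

```rust
use downcast_rs::{impl_downcast, Downcast};

// Any trait object that needs runtime downcasting extends Downcast.
trait EventValue: Downcast {}
impl_downcast!(EventValue); // generates downcast_ref/downcast_mut/is

struct IntValue(i64);
impl EventValue for IntValue {}

fn main() {
    let v: Box<dyn EventValue> = Box::new(IntValue(42));
    // downcast_ref::<T>() replaces the equivalent mopa method.
    if let Some(i) = v.downcast_ref::<IntValue>() {
        println!("{}", i.0);
    }
}
```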

* Fixed Clippy Warnings (#451)

* fixed clippy warn

* fixed cargo clippy warning

* fixed clippy warnings in clippy ver 0.1.59

* fixed clippy warnings clippy::unnecessary_to_owned

* added temporary blackhat arsenal badge

* added rust report card badges #453

* added repository maintenance levels badge #453

* documentation update macOS usage etc

* update

* added clippy workflow #428 (#429)

* added clippy workflow #428

* fixed action yaml to run clippy #428

* fixed indent

* fixed workflow

* fixed workflow error

* fixed indent

* changed no annotation #428

* adjusted annotation version

* fixed clippy::needless_match

* remove if let exception

* removed unnecessary permission check #428

* statistics event id update (#457)

* Feature/#440 refactoring #395 (#464)

* updated submodule

* fix degrade for pull req #464 (#468)

* fix degrade for pull req #464

* add trim

* Feature: added output of update results #410 (#452)

* add git2 crate #391

* added Update option #391

* updated readme #391

* fixed cargo.lock

* fixed option if-statement #391

* changed utc short option and rule-update short option #391

* updated readme

* updated readme

* fixed -u long option & version number update #391

* added fast-forwarding rules repository #391

* updated command line option #391

* moved logo output to before the rule update

* fixed readme #391

* removed recursive option in readme

* changed rules update from clone and pull to submodule update #391
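
A minimal sketch of a submodule-based update with the git2 crate (error handling simplified; not Hayabusa's actual update code):

```rust
use git2::Repository;

// Update every submodule of the repository in the current directory,
// in the spirit of `git submodule update --init`.
fn update_rules_submodule() -> Result<(), git2::Error> {
    let repo = Repository::open(".")?;
    for mut submodule in repo.submodules()? {
        submodule.update(true, None)?; // init = true, default options
    }
    Ok(())
}

fn main() {
    if let Err(e) = update_rules_submodule() {
        eprintln!("rule update failed: {}", e);
    }
}
```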

* fixed document

* changed unnecessary recursive clone to a plain clone

* English message update.

* cargo fmt

* English message update. (cherry-pick of 4657c35e5c)

* added creation of the rules folder when it does not exist

* fixed gitmodules github-rules url from ssh to https

* added output of updated file #420

* fixed error #410

* changed the rule update list sequence

* added test

* fixed output #410

* fixed output and the output date field when the modified field is missing #410

* fixed compile error

* fixed output

- added a newline after the 'Latest rule update' output
- added output when no new rules exist
- fixed the 'Latest rule update' date format
- changed output from 'Latest rule update' to 'Latest rules update'

* fixed compile error

* changed modified date source from rules folder to each yml rule file

* formatting use chrono in main.rs

* merge develop clippy ci

* fixed output when there are no rule updates #410

- removed the 'Latest rule update' output

- do not output "Rules update successfully" when no rules changed

* Change English

Co-authored-by: Tanaka Zakku <71482215+YamatoSecurity@users.noreply.github.com>

* Remove unnecessary code from timeline_event_info and rename files for… (#470)

* Remove unnecessary code from timeline_event_info and rename files for issue462

* Remove unnecessary code #462

* add equalsfield pipe (#467)
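
A minimal sketch of the comparison `|equalsfield` performs, assuming event records as string maps; the names are illustrative, not the actual implementation:

```rust
use std::collections::HashMap;

// The condition matches when the value of `field` equals the value of
// `other` within the same record; missing fields never match.
fn equals_field(record: &HashMap<String, String>, field: &str, other: &str) -> bool {
    match (record.get(field), record.get(other)) {
        (Some(a), Some(b)) => a == b,
        _ => false,
    }
}

fn main() {
    let mut rec = HashMap::new();
    rec.insert("SubjectUserName".to_string(), "admin".to_string());
    rec.insert("TargetUserName".to_string(), "admin".to_string());
    assert!(equals_field(&rec, "SubjectUserName", "TargetUserName"));
}
```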

* Enhancement: add config config #456 (#471)

* added config option #456

* added processing of the option to use a specified config folder #456

the following files now respect the config option (see the path-resolution sketch after this list):

* noisy_rules.txt

* exclude_rules.txt
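
A minimal sketch of the path resolution this option implies, assuming a default of `./rules/config`; illustrative, not the actual code:

```rust
use std::path::{Path, PathBuf};

// Rule config files such as noisy_rules.txt and exclude_rules.txt are
// read from the user-supplied directory when one is given, otherwise
// from the default rules/config directory.
fn config_path(user_dir: Option<&Path>, file_name: &str) -> PathBuf {
    user_dir
        .unwrap_or_else(|| Path::new("./rules/config"))
        .join(file_name)
}

fn main() {
    assert_eq!(
        config_path(None, "noisy_rules.txt"),
        PathBuf::from("./rules/config/noisy_rules.txt")
    );
    assert_eq!(
        config_path(Some(Path::new("/tmp/cfg")), "exclude_rules.txt"),
        PathBuf::from("/tmp/cfg/exclude_rules.txt")
    );
}
```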

* fixed usage in readme

* updated rules submodule

* fixed processing when yml files exist in the .git folder

* ignore yml files that exist in the .git folder

* Add: --level-tuning option's outline

* Add: read Rule files

* Add: input rule_level.txt files & read rules

* cargo fmt

* Add: level-tuning function

* Refactor: split into an options file

* WIP: Text overwrite failed...

* Fix: text overwrite failure

* Add: Error handlings

* Add: id, level validation

* mv: IDS_REGEX to configs file

* fix: level tuning's file name

* Cargo fmt

* Added the Pivot Keyword List feature (#412)

* add get_pivot_keyword() func

* changed the function name and its call site

* [WIP] support config file

* complete output

* cargo fmt

* [WIP] add test

* add test

* support -o option in pivot

* add pivot mod

* fix mistake

* pass test in pivot.rs

* add comment

* pass all test

* add early return

* fix output

* add test config file

* review

* rebase

* cargo fmt

* test pass

* fix clippy in my commit

* cargo fmt

* little refactor

* change file input logic and config format

* [WIP] change output

* [WIP] change data structure

* change output & change data structure

* pass test

* add config

* cargo fmt & clippy & rebase

* fix clippy

* delete /rules/ in .gitignore

* clean comment

* clean

* clean

* fix rebase miss

* fix rebase miss

* fix clippy

* output the file name to stdout when -o is used

* add pivot_keywords.txt to ./config

* updated English

* Documentation update

* cargo fmt and clean

* updated Japanese translation

* readme update

* readme update

Co-authored-by: DustInDark <nextsasasa@gmail.com>
Co-authored-by: Tanaka Zakku <71482215+YamatoSecurity@users.noreply.github.com>
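
A minimal sketch of the pivot keyword collection this PR adds: each `config/pivot_keywords.txt` line has the form `KeywordName.FieldName`, and for every detected event the field's value is recorded under its keyword group (illustrative names, not the actual implementation):

```rust
use std::collections::{BTreeMap, BTreeSet};

fn collect_pivot_keywords(
    config: &[(String, String)],             // parsed (keyword, field) pairs
    detections: &[BTreeMap<String, String>], // fields of detected events
) -> BTreeMap<String, BTreeSet<String>> {
    let mut out: BTreeMap<String, BTreeSet<String>> = BTreeMap::new();
    for record in detections {
        for (keyword, field) in config {
            if let Some(value) = record.get(field) {
                // Deduplicate values per keyword group.
                out.entry(keyword.clone()).or_default().insert(value.clone());
            }
        }
    }
    out
}

fn main() {
    let config = vec![("Users".to_string(), "TargetUserName".to_string())];
    let mut event = BTreeMap::new();
    event.insert("TargetUserName".to_string(), "admin".to_string());
    let pivots = collect_pivot_keywords(&config, &[event]);
    assert!(pivots["Users"].contains("admin"));
}
```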

* Add: test

* Add: README.md

* Cargo fmt

* Use #[cfg(test)]

* Fixed output stopping when control characters exist in Windows Terminal (#485)

* added control character filter in details #382
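
A minimal sketch of such a filter: strip control characters (for example 0x9D, which stalls Windows Terminal) from the details text before printing. Illustrative only:

```rust
// char::is_control covers the C0 and C1 control ranges, including 0x9D.
fn remove_control_chars(s: &str) -> String {
    s.chars().filter(|c| !c.is_control()).collect()
}

fn main() {
    let details = "CommandLine: powershell\u{9d} -enc ...";
    assert_eq!(
        remove_control_chars(details),
        "CommandLine: powershell -enc ..."
    );
}
```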

* fixed documentation

- removed the now-fixed Windows Terminal caution from the readme

* fixed level tuning test and added test files #390

* changed level_tuning.txt header from next_level to new_level
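
A minimal sketch of parsing `level_tuning.txt` lines in the `id,new_level` format, skipping the header and comments; not the actual parser:

```rust
fn parse_level_tuning_line(line: &str) -> Option<(String, String)> {
    let line = line.trim();
    // Skip blank lines, comments, and the header row.
    if line.is_empty() || line.starts_with('#') || line.starts_with("id,") {
        return None;
    }
    let (id, new_level) = line.split_once(',')?;
    Some((id.trim().to_string(), new_level.trim().to_string()))
}

fn main() {
    let cfg = "id,new_level\n00000000-0000-0000-0000-000000000000,informational";
    for (id, level) in cfg.lines().filter_map(parse_level_tuning_line) {
        println!("{} -> {}", id, level);
    }
}
```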

* fixed conversion mistake when changing to a lower level

* added rules path to the run args to make the test easier to check #390

* fixed processing of commented-out lines in level_tuning.txt

* fixed config to show level-tuning option

* changed the level-tuning option from required to optional

* reduce output of detailed MITRE ATT&CK technique numbers via config file (#483)

* reduced MITRE ATT&CK tag output via config file #477
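
A minimal sketch of the tag reduction: keep only the MITRE ATT&CK tags listed in `output_tag.txt` (illustrative names, not the actual code):

```rust
use std::collections::HashSet;

fn filter_tags(rule_tags: &[String], allowed: &HashSet<String>) -> Vec<String> {
    rule_tags
        .iter()
        .filter(|tag| allowed.contains(*tag)) // drop tags missing from output_tag.txt
        .cloned()
        .collect()
}

fn main() {
    let allowed: HashSet<String> =
        ["attack.execution".to_string(), "attack.persistence".to_string()].into();
    let tags = vec!["attack.execution".to_string(), "attack.t1059.001".to_string()];
    assert_eq!(filter_tags(&tags, &allowed), vec!["attack.execution".to_string()]);
}
```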

* prepared 1.2.0 version toml

* added test files and MITRE ATT&CK strategy tag file #477

* fixed cargo.toml version

* updated cargo.lock

* output tag english update

* cargo fmt

Co-authored-by: Tanaka Zakku <71482215+YamatoSecurity@users.noreply.github.com>

* Fix: test file's path was incorrect

* Add: add test_files/config/level_tuning.txt

* Add: Flush method.

* inserted debug data

* reverted config usage

* fixed test yaml file path

* Feature/#216 output allfields csvnewcolumn (#469)

* refactoring

* refactoring

* under construction

* under construction

* under construction

* under construction

* fix existing testcase

* finished the implementation

* fmt

* add option

* change name

* fix control code bug

* fix display

* change format and fix testcase

* fix help

* Fix: show usage when hayabusa has no args

* rm: debug line

* Enhance/warning architecture #478 (#482)

* added enhancement of the architecture check #478

* changed check architecture process after output logo #478

* English msg update

* fixed the OS-bitness detection method for Windows and Linux

* removed the Mac and Unix architecture checks and binaries, and updated the Windows processing

* fix clippy

* added check on Wow64 env #478
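
A minimal sketch of a WOW64 check, assuming the winapi crate with its `wow64apiset` feature enabled; not Hayabusa's exact code:

```rust
// Returns true when a 32-bit process runs on 64-bit Windows (WOW64),
// which is the situation the binary now refuses to run under.
#[cfg(windows)]
fn running_under_wow64() -> bool {
    use winapi::um::processthreadsapi::GetCurrentProcess;
    use winapi::um::wow64apiset::IsWow64Process;
    let mut is_wow64 = 0;
    // SAFETY: GetCurrentProcess returns an always-valid pseudo-handle,
    // and is_wow64 outlives the call.
    unsafe { IsWow64Process(GetCurrentProcess(), &mut is_wow64) != 0 && is_wow64 != 0 }
}

#[cfg(windows)]
fn main() {
    if running_under_wow64() {
        eprintln!("32-bit binary on 64-bit Windows: please use the 64-bit binary.");
    }
}

#[cfg(not(windows))]
fn main() {}
```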

* Update contributors.txt

Co-authored-by: Tanaka Zakku <71482215+YamatoSecurity@users.noreply.github.com>

* added --level-tuning option to usage

* Revert "added --level-tuning option to usage"

This reverts commit e6a74090a3.

* readme update

* Update README-Japanese.md

* readme, version, cargo update

* typo fix

* typo fix

* rm: duplicated test & fix test name

* Add: show logo and some info

* small english fix

* twitter link fix (#486)

* added tag output reduction feature for aggregation condition rules #477 (#488)

* changed level output from informational to info #491

* updated rules submodule

* v1.2 changelog update (#473)

* changelog update

* Update CHANGELOG.md

added the contributor to "Fields that are not defined in eventkey_alias.txt will automatically be searched in Event.EventData."

ref #442

* Update CHANGELOG-Japanese.md

added the contributor to "Fields that are not defined in eventkey_alias.txt will automatically be searched in Event.EventData."

ref #442

* Update CHANGELOG.md

added bug fixes (#444) and added the contributor to `Performance and accuracy`, ref (#395)

* Update CHANGELOG-Japanese.md

* Translated v1.2 change log to Japanese

Revised the v1.2 content into Japanese.

* fixed typo

added the missing backquote.

* added description

added the following issue and PR descriptions to the readme:

- #216 / #469 L8
- #390 / #459 L9
- #478 / #482 L19
- #477/ #483 L20

* added description README.md

added the following issue and PR descriptions to the readme:

- #216 / #469 L8
- #390 / #459 L9
- #478 / #482 L19
- #477/ #483 L20

* changelog update

* changelog update

* update

Co-authored-by: DustInDark <nextsasasa@gmail.com>

* updated rules #493 (#494)

* Resolve conflict with develop (#496)

* removed tools/sigmac (#441)

* removed tools/sigmac

- moved tools/sigmac to hayabusa-rules repo

* fixed doc link tools/sigmac

* fixed submodule track

* fixed submodule track from latest to v1.1.0 tag

* fixed link

* fixed rules submodule target #444

* updated submodule

* updated rules submodule

Co-authored-by: Yamato Security <71482215+YamatoSecurity@users.noreply.github.com>

Co-authored-by: Yamato Security <71482215+YamatoSecurity@users.noreply.github.com>
Co-authored-by: kazuminn <warugaki.k.k@gmail.com>
Co-authored-by: James / hach1yon <32596618+hach1yon@users.noreply.github.com>
Co-authored-by: garigariganzy <tosada31@hotmail.co.jp>
Co-authored-by: itiB <is0312vx@ed.ritsumei.ac.jp>
Author: DustInDark (committed via GitHub)
Date: 2022-04-15 12:13:00 +09:00
Commit: bcf8a33e8c (parent: 2b5837dfc8)
50 changed files with 3726 additions and 1982 deletions

GitHub Actions CI workflow:

@@ -14,19 +14,26 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
submodules: recursive
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
profile: minimal
components: rustfmt
override: true
- name: Fmt Check
run: cargo fmt -- --check
- name: Build
run: cargo build --verbose
- name: Run tests
run: cargo test --verbose
- uses: actions/checkout@v2
with:
submodules: recursive
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly
profile: minimal
components: rustfmt
override: true
- name: Fmt Check
run: cargo fmt -- --check
- name: Prepare Clippy
run: rustup component add clippy
- name: Run clippy action to produce annotations
uses: actions-rs/clippy-check@v1
with:
args: --all-targets -- -D warnings
token: ${{ secrets.GITHUB_TOKEN }}
- name: Build
run: cargo build --verbose
- name: Run tests
run: cargo test --verbose

.gitignore (2 changes):

@@ -4,4 +4,4 @@
/.vscode/
.DS_Store
test_*
.env
.env

CHANGELOG-Japanese.md:

@@ -1,6 +1,30 @@
# 変更点
##v1.1.0 [2022/03/03]
## v1.2.0 [2022/04/15] Black Hat Asia Arsenal 2022 Preview Release
**新機能:**
- `-C / --config` オプションの追加。検知ルールのコンフィグを指定することが可能。(Windowsでのライブ調査に便利) (@hitenkoku)
- `|equalsfield` と記載することでルール内で二つのフィールドの値が一致するかを記載に対応。 (@hach1yon)
- `-p / --pivot-keywords-list` オプションの追加。攻撃されたマシン名や疑わしいユーザ名などの情報をピボットキーワードリストとして出力する。 (@kazuminn)
- `-F / --full-data`オプションの追加。検知したレコードのフィールド情報をcsvに出力することが可能。(@hach1yon)
- `--level-tuning` オプションの追加。ルールの検知ファイルを設定したコンフィグファイルに従って検知レベルをチューニングすることが可能(@itib@hitenkoku)
**改善:**
- 検知ルールとドキュメントの更新。 (@YamatoSecurity)
- MacとLinuxのバイナリに必要なOpenSSLライブラリを静的コンパイルした。 (@YamatoSecurity)
- タブ等の文字が含まれたフィールドに対しての検知性能の改善。 (@hach1yon@hitenkoku)
- eventkey_alias.txt内に定義されていないフィールドをEvent.EventData内を自動で検索することが可能。 (@kazuminn@hitenkoku)
- 検知ルールの更新時、更新されたルールのファイル名が表示される。 (@hitenkoku)
- ソースコードにあるClippyの警告を修正。 (@hitenkoku@hach1yon)
- イベントIDとタイトルが記載されたコンフィグファイルの名前を `timeline_event_info.txt` から `statistics_event_info.txt`に変更。 (@YamatoSecurity@garigariganzy)
- 64bit Windowsで32bit版のバイナリを実行しないように修正(@hitenkoku)
- MITRE ATT&CKのデータの出力を`output_tag.txt`で修正できるように修正(@hitenkoku)
**バグ修正:**
- `.git` フォルダ内にある `.yml` ファイルがパースエラーを引き起こしていた問題の修正。 (@hitenkoku)
- テスト用のルールファイルの読み込みエラーで不必要な改行が発生していた問題の修正。 (@hitenkoku)
- Windows Terminalのバグで標準出力が途中で止まる場合がありましたが、Hayabusa側で解決しました。 (@hitenkoku)
## v1.1.0 [2022/03/03]
**新機能:**
- `-r / --rules`オプションで一つのルール指定が可能。(ルールをテストする際に便利!) (@kazuminn)
- ルール更新オプション (`-u / --update-rules`): [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules)レポジトリにある最新のルールに更新できる。 (@hitenkoku)
@@ -26,4 +50,4 @@
- Rustのevtxライブラリを0.7.2に更新。 (@YamatoSecurity)
## v1.0.0 [2021/12/25]
- 最初のリリース
- 最初のリリース

CHANGELOG.md:

@@ -1,6 +1,31 @@
# Changes
##v1.1.0 [2022/03/03]
## v1.2.0 [2022/04/15] Black Hat Asia Arsenal 2022 Preview Release
**New Features:**
- Specify config directory (`-C / --config`): When specifying a different rules directory, the rules config directory will still be the default `rules/config`, so this option is useful when you want to test rules and their config files in a different directory. (@hitenkoku)
- `|equalsfield` aggregator: In order to write rules that compare if two fields are equal or not. (@hach1yon)
- Pivot keyword list generator feature (`-p / --pivot-keywords-list`): Will generate a list of keywords to grep for to quickly identify compromised machines, suspicious usernames, files, etc... (@kazuminn)
- `-F / --full-data` option: Will output fields information in detected record to `--output` file. (@hach1yon)
- `--level-tuning` option: You can tune the risk `level` in hayabusa and sigma rules to your environment. (@itib and @hitenkoku)
**Enhancements:**
- Updated detection rules and documentation. (@YamatoSecurity)
- Mac and Linux binaries now statically compile the OpenSSL libraries. (@YamatoSecurity)
- Performance and accuracy improvement for fields with tabs, etc... in them. (@hach1yon and @hitenkoku)
- Fields that are not defined in eventkey_alias.txt will automatically be searched in Event.EventData. (@kazuminn and @hitenkoku)
- When updating rules, the names of new rules as well as the count will be displayed. (@hitenkoku)
- Removed all Clippy warnings from the source code. (@hitenkoku and @hach1yon)
- Updated the event ID and title config file (`timeline_event_info.txt`) and changed the name to `statistics_event_info.txt`. (@YamatoSecurity and @garigariganzy)
- 32-bit Hayabusa Windows binaries are now prevented from running on 64-bit Windows as it would cause unexpected results. (@hitenkoku)
- MITRE ATT&CK tag output can be customized in `output_tag.txt`. (@hitenkoku)
**Bug Fixes:**
- `.yml` files in the `.git` folder would cause parse errors so they are now ignored. (@hitenkoku)
- Removed unnecessary newline due to loading test file rules. (@hitenkoku)
- Fixed output stopping in Windows Terminal due a bug in Terminal itself. (@hitenkoku)
## v1.1.0 [2022/03/03]
**New Features:**
- Can specify a single rule with the `-r / --rules` option. (Great for testing rules!) (@kazuminn)
- Rule update option (`-u / --update-rules`): Update to the latest rules in the [hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) repository. (@hitenkoku)
@@ -26,4 +51,4 @@
- Updated the Rust evtx library to 0.7.2 (@YamatoSecurity)
## v1.0.0 [2021/12/25]
- Initial release.
- Initial release.

Cargo.lock (generated, 339 changes):

@@ -48,9 +48,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.53"
version = "1.0.56"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "94a45b455c14666b85fc40a019e8ab9eb75e3a124e05494f5397122bc9eb06e0"
checksum = "4361135be9122e0870de935d7c439aef945b9f9ddd4199a553b5270b49c82a27"
[[package]]
name = "atty"
@@ -65,15 +65,18 @@ dependencies = [
[[package]]
name = "autocfg"
version = "0.1.7"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d49d90015b3c36167a20fe2810c5cd875ad504b39cff3d4eae7977e6b7c1cb2"
checksum = "0dde43e75fd43e8a1bf86103336bc699aa8d17ad1be60c76c0bdfd4828e19b78"
dependencies = [
"autocfg 1.1.0",
]
[[package]]
name = "autocfg"
version = "1.0.1"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"
checksum = "d468802bab17cbc0cc575e9b053f41e72aa36bfa6b7f55e3529ffa43161b97fa"
[[package]]
name = "backtrace"
@@ -190,22 +193,22 @@ dependencies = [
[[package]]
name = "cargo_metadata"
version = "0.14.1"
version = "0.14.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba2ae6de944143141f6155a473a6b02f66c7c3f9f47316f802f80204ebfe6e12"
checksum = "4acbb09d9ee8e23699b9634375c72795d095bf268439da88562cf9b501f181fa"
dependencies = [
"camino",
"cargo-platform",
"semver 1.0.4",
"semver 1.0.7",
"serde",
"serde_json",
]
[[package]]
name = "cc"
version = "1.0.72"
version = "1.0.73"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22a9137b95ea06864e018375b72adfb7db6e6f68cfc8df5a04d00288050485ee"
checksum = "2fff2a6927b3bb87f9595d67196a70493f627687a71d87a0d692242c33f58c11"
dependencies = [
"jobserver",
]
@@ -322,9 +325,9 @@ dependencies = [
[[package]]
name = "core-foundation"
version = "0.9.2"
version = "0.9.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6888e10551bb93e424d8df1d07f1a8b4fceb0001a3a4b048bfc47554946f47b3"
checksum = "194a7a9e6de53fa55116934067c844d9d749312f75c6f6d0980e8c252f8c2146"
dependencies = [
"core-foundation-sys",
"libc",
@@ -347,21 +350,21 @@ dependencies = [
[[package]]
name = "crc32fast"
version = "1.3.1"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a2209c310e29876f7f0b2721e7e26b84aff178aa3da5d091f9bfbf47669e60e3"
checksum = "b540bd8bc810d3885c6ea91e2018302f68baba2129ab3e88f32389ee9370880d"
dependencies = [
"cfg-if 1.0.0",
]
[[package]]
name = "crossbeam-channel"
version = "0.5.2"
version = "0.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e54ea8bc3fb1ee042f5aace6e3c6e025d3874866da222930f70ce62aceba0bfa"
checksum = "5aaa7bd5fb665c6864b5f963dd9097905c54125909c7aa94c9e18507cdbe6c53"
dependencies = [
"cfg-if 1.0.0",
"crossbeam-utils 0.8.6",
"crossbeam-utils 0.8.8",
]
[[package]]
@@ -382,8 +385,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6455c0ca19f0d2fbf751b908d5c55c1f5cbc65e03c4225427254b46890bdde1e"
dependencies = [
"cfg-if 1.0.0",
"crossbeam-epoch 0.9.6",
"crossbeam-utils 0.8.6",
"crossbeam-epoch 0.9.8",
"crossbeam-utils 0.8.8",
]
[[package]]
@@ -392,7 +395,7 @@ version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "058ed274caafc1f60c4997b5fc07bf7dc7cca454af7c6e81edffe5f33f70dace"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"cfg-if 0.1.10",
"crossbeam-utils 0.7.2",
"lazy_static",
@@ -403,12 +406,13 @@ dependencies = [
[[package]]
name = "crossbeam-epoch"
version = "0.9.6"
version = "0.9.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "97242a70df9b89a65d0b6df3c4bf5b9ce03c5b7309019777fbde37e7537f8762"
checksum = "1145cf131a2c6ba0615079ab6a638f7e1973ac9c2634fcbeaaad6114246efe8c"
dependencies = [
"autocfg 1.1.0",
"cfg-if 1.0.0",
"crossbeam-utils 0.8.6",
"crossbeam-utils 0.8.8",
"lazy_static",
"memoffset 0.6.5",
"scopeguard",
@@ -431,16 +435,16 @@ version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3c7c73a2d1e9fc0886a08b93e98eb643461230d5f1925e4036204d5f2e261a8"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"cfg-if 0.1.10",
"lazy_static",
]
[[package]]
name = "crossbeam-utils"
version = "0.8.6"
version = "0.8.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cfcae03edb34f947e64acdb1c33ec169824e20657e9ecb61cef6c8c74dcb8120"
checksum = "0bf124c720b7686e3c2663cf54062ab0f68a88af2fb6a030e87e30bf721fcb38"
dependencies = [
"cfg-if 1.0.0",
"lazy_static",
@@ -492,6 +496,12 @@ version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77c90badedccf4105eca100756a0b1289e191f6fcbdadd3cee1d2f614f97da8f"
[[package]]
name = "downcast-rs"
version = "1.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ea835d29036a4087793836fa931b08837ad5e957da9e23886b29586fb9b6650"
[[package]]
name = "dtoa"
version = "0.4.8"
@@ -576,9 +586,9 @@ checksum = "a246d82be1c9d791c5dfde9a2bd045fc3cbba3fa2b11ad558f27d01712f00569"
[[package]]
name = "encoding_rs"
version = "0.8.30"
version = "0.8.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7896dc8abb250ffdda33912550faa54c88ec8b998dec0b2c55ab224921ce11df"
checksum = "9852635589dc9f9ea1b6fe9f05b50ef208c85c834a562f0c6abb1c475736ec2b"
dependencies = [
"cfg-if 1.0.0",
]
@@ -761,13 +771,13 @@ dependencies = [
[[package]]
name = "getrandom"
version = "0.2.4"
version = "0.2.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "418d37c8b1d42553c93648be529cb70f920d3baf8ef469b74b9638df426e0b4c"
checksum = "9be70c98951c83b8d2f8f60d7065fa6d5146873094452a1008da8c2f1e4205ad"
dependencies = [
"cfg-if 1.0.0",
"libc",
"wasi",
"wasi 0.10.0+wasi-snapshot-preview1",
]
[[package]]
@@ -832,7 +842,7 @@ dependencies = [
[[package]]
name = "hayabusa"
version = "1.1.0"
version = "1.2.0"
dependencies = [
"base64 0.13.0",
"chrono",
@@ -840,6 +850,7 @@ dependencies = [
"colored",
"csv",
"dotenv",
"downcast-rs",
"evtx",
"flate2",
"git2",
@@ -849,8 +860,8 @@ dependencies = [
"is_elevated",
"lazy_static",
"linked-hash-map",
"mopa",
"num_cpus",
"openssl",
"pbr",
"quick-xml",
"regex",
@@ -859,7 +870,7 @@ dependencies = [
"serde_json",
"slack-hook",
"static_vcruntime",
"tokio 1.16.1",
"tokio 1.17.0",
"yaml-rust",
]
@@ -919,9 +930,9 @@ dependencies = [
[[package]]
name = "httparse"
version = "1.5.1"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acd94fdbe1d4ff688b67b04eee2e17bd50995534a61539e45adfefb45e5e5503"
checksum = "9100414882e15fb7feccb4897e5f0ff0ff1ca7d1a86a23208ada4d7a18e6c6c4"
[[package]]
name = "humantime"
@@ -999,19 +1010,19 @@ dependencies = [
[[package]]
name = "indexmap"
version = "1.8.0"
version = "1.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "282a6247722caba404c065016bbfa522806e51714c34f5dfc3e4a3a46fcb4223"
checksum = "0f647032dfaa1f8b6dc29bd3edb7bbef4861b8b8007ebb118d6db284fd59f6ee"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"hashbrown 0.11.2",
]
[[package]]
name = "indoc"
version = "1.0.3"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5a75aeaaef0ce18b58056d306c27b07436fbb34b8816c53094b76dd81803136"
checksum = "e7906a9fababaeacb774f72410e497a1d18de916322e33797bb2cd29baa23c9e"
dependencies = [
"unindent",
]
@@ -1103,9 +1114,9 @@ checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.117"
version = "0.2.122"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e74d72e0f9b65b5b4ca49a346af3976df0f9c61d550727f349ecd559f251a26c"
checksum = "ec647867e2bf0772e28c8bcde4f0d19a9216916e890543b5a03ed8ef27b8f259"
[[package]]
name = "libgit2-sys"
@@ -1137,9 +1148,9 @@ dependencies = [
[[package]]
name = "libz-sys"
version = "1.1.3"
version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "de5435b8549c16d423ed0c03dbaafe57cf6c3344744f1242520d59c9d8ecec66"
checksum = "6f35facd4a5673cb5a48822be2be1d4236c1c99cb4113cab7061ac720d5bf859"
dependencies = [
"cc",
"libc",
@@ -1164,18 +1175,19 @@ dependencies = [
[[package]]
name = "lock_api"
version = "0.4.6"
version = "0.4.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "88943dd7ef4a2e5a4bfa2753aaab3013e34ce2533d1996fb18ef591e315e2b3b"
checksum = "327fa5b6a6940e4699ec49a9beae1ea4845c6bab9314e4f84ac68742139d8c53"
dependencies = [
"autocfg 1.1.0",
"scopeguard",
]
[[package]]
name = "log"
version = "0.4.14"
version = "0.4.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710"
checksum = "6389c490849ff5bc16be905ae24bc913a9c8892e19b2341dbc175e14c341c2b8"
dependencies = [
"cfg-if 1.0.0",
]
@@ -1204,7 +1216,7 @@ version = "0.5.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "043175f069eda7b85febe4a74abbaeff828d9f8b448515d3151a14a3542811aa"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
]
[[package]]
@@ -1213,7 +1225,7 @@ version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
]
[[package]]
@@ -1224,9 +1236,9 @@ checksum = "2a60c7ce501c71e03a9c9c0d35b861413ae925bd979cc7a4e30d060069aaac8d"
[[package]]
name = "mime_guess"
version = "2.0.3"
version = "2.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2684d4c2e97d99848d30b324b00c8fcc7e5c897b7cbb5819b09e7c90e8baf212"
checksum = "4192263c238a5f0d0c6bfd21f336a313a4ce1c450542449ca191bb657b4642ef"
dependencies = [
"mime",
"unicase",
@@ -1239,7 +1251,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92518e98c078586bc6c934028adcca4c92a53d6a958196de835170a01d84e4b"
dependencies = [
"adler",
"autocfg 1.0.1",
"autocfg 1.1.0",
]
[[package]]
@@ -1263,14 +1275,15 @@ dependencies = [
[[package]]
name = "mio"
version = "0.7.14"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8067b404fe97c70829f082dec8bcf4f71225d7eaea1d8645349cb76fa06205cc"
checksum = "52da4364ffb0e4fe33a9841a98a3f3014fb964045ce4f7a45a398243c8d6b0c9"
dependencies = [
"libc",
"log",
"miow 0.3.7",
"ntapi",
"wasi 0.11.0+wasi-snapshot-preview1",
"winapi 0.3.9",
]
@@ -1295,17 +1308,11 @@ dependencies = [
"winapi 0.3.9",
]
[[package]]
name = "mopa"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a785740271256c230f57462d3b83e52f998433a7062fc18f96d5999474a9f915"
[[package]]
name = "native-tls"
version = "0.2.8"
version = "0.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48ba9f7719b5a0f42f338907614285fb5fd70e53858141f69898a1fb7203b24d"
checksum = "fd7e2f3618557f980e0b17e8856252eee3c97fa12c54dff0ca290fb6266ca4a9"
dependencies = [
"lazy_static",
"libc",
@@ -1332,9 +1339,9 @@ dependencies = [
[[package]]
name = "ntapi"
version = "0.3.6"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f6bb902e437b6d86e03cce10a7e2af662292c5dfef23b65899ea3ac9354ad44"
checksum = "c28774a7fd2fbb4f0babd8237ce554b73af68021b5f695a3cebd6c59bac0980f"
dependencies = [
"winapi 0.3.9",
]
@@ -1356,7 +1363,7 @@ version = "0.1.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2cc698a63b549a70bc047073d2949cce27cd1c7b0a4a862d08a8031bc2801db"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"num-traits",
]
@@ -1366,7 +1373,7 @@ version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a64b1ec5cda2586e284722486d802acf1f7dbdc623e2bfc57e65ca1cd099290"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
]
[[package]]
@@ -1390,9 +1397,9 @@ dependencies = [
[[package]]
name = "once_cell"
version = "1.9.0"
version = "1.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da32515d9f6e6e489d7bc9d84c71b060db7247dc035bbe44eac88cf87486d8d5"
checksum = "87f3e037eac156d1775da914196f0f37741a274155e34a0b7e427c35d2a2ecb9"
[[package]]
name = "openssl"
@@ -1414,15 +1421,25 @@ version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ff011a302c396a5197692431fc1948019154afc178baf7d8e37367442a4601cf"
[[package]]
name = "openssl-src"
version = "111.18.0+1.1.1n"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7897a926e1e8d00219127dc020130eca4292e5ca666dd592480d72c3eca2ff6c"
dependencies = [
"cc",
]
[[package]]
name = "openssl-sys"
version = "0.9.72"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e46109c383602735fa0a2e48dd2b7c892b048e1bf69e5c3b1d804b7d9c203cb"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"cc",
"libc",
"openssl-src",
"pkg-config",
"vcpkg",
]
@@ -1440,13 +1457,12 @@ dependencies = [
[[package]]
name = "parking_lot"
version = "0.11.2"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d17b78036a60663b797adeaee46f5c9dfebb86948d1255007a1d6be0271ff99"
checksum = "87f5ec2493a61ac0506c0f4199f99070cbe83857b0337006a30f3e6719b8ef58"
dependencies = [
"instant",
"lock_api 0.4.6",
"parking_lot_core 0.8.5",
"lock_api 0.4.7",
"parking_lot_core 0.9.2",
]
[[package]]
@@ -1466,16 +1482,15 @@ dependencies = [
[[package]]
name = "parking_lot_core"
version = "0.8.5"
version = "0.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d76e8e1493bcac0d2766c42737f34458f1c8c50c0d23bcb24ea953affb273216"
checksum = "995f667a6c822200b0433ac218e05582f0e2efa1b922a3fd2fbaadc5f87bab37"
dependencies = [
"cfg-if 1.0.0",
"instant",
"libc",
"redox_syscall 0.2.10",
"redox_syscall 0.2.13",
"smallvec 1.8.0",
"winapi 0.3.9",
"windows-sys",
]
[[package]]
@@ -1510,9 +1525,9 @@ checksum = "e280fbe77cc62c91527259e9442153f4688736748d24660126286329742b4c6c"
[[package]]
name = "pkg-config"
version = "0.3.24"
version = "0.3.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "58893f751c9b0412871a09abd62ecd2a00298c6c83befa223ef98c52aef40cbe"
checksum = "1df8c4ec4b0627e53bdf214615ad287367e482558cf84b109250b37464dc03ae"
[[package]]
name = "proc-macro-hack"
@@ -1522,9 +1537,9 @@ checksum = "dbf0c48bc1d91375ae5c3cd81e3722dff1abcf81a30960240640d223f59fe0e5"
[[package]]
name = "proc-macro2"
version = "1.0.36"
version = "1.0.37"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7342d5883fbccae1cc37a2353b09c87c9b0f3afd73f5fb9bba687a1f733b029"
checksum = "ec757218438d5fda206afc041538b2f6d889286160d649a86a24d37e1235afd1"
dependencies = [
"unicode-xid",
]
@@ -1568,9 +1583,9 @@ dependencies = [
[[package]]
name = "quote"
version = "1.0.15"
version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "864d3e96a899863136fc6e99f3d7cae289dafe43bf2c5ac19b70df7210c0a145"
checksum = "a1feb54ed693b93a84e14094943b84b7c4eae204c512b7ccb95ab0c66d278ad1"
dependencies = [
"proc-macro2",
]
@@ -1581,7 +1596,7 @@ version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d71dacdc3c88c1fde3885a3be3fbab9f35724e6ce99467f7d9c5026132184ca"
dependencies = [
"autocfg 0.1.7",
"autocfg 0.1.8",
"libc",
"rand_chacha",
"rand_core 0.4.2",
@@ -1600,7 +1615,7 @@ version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "556d3a1ca6600bfcbab7c7c91ccb085ac7fbbcd70e008a98742e7847f4f7bcef"
dependencies = [
"autocfg 0.1.7",
"autocfg 0.1.8",
"rand_core 0.3.1",
]
@@ -1668,7 +1683,7 @@ version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "abf9b09b01790cfe0364f52bf32995ea3c39f4d2dd011eac241d2914146d0b44"
dependencies = [
"autocfg 0.1.7",
"autocfg 0.1.8",
"rand_core 0.4.2",
]
@@ -1687,7 +1702,7 @@ version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c06aca804d41dbc8ba42dfd964f0d01334eceb64314b9ecf7c5fad5188a06d90"
dependencies = [
"autocfg 1.0.1",
"autocfg 1.1.0",
"crossbeam-deque 0.8.1",
"either",
"rayon-core",
@@ -1701,7 +1716,7 @@ checksum = "d78120e2c850279833f1dd3582f730c4ab53ed95aeaaaa862a2a5c71b1656d8e"
dependencies = [
"crossbeam-channel",
"crossbeam-deque 0.8.1",
"crossbeam-utils 0.8.6",
"crossbeam-utils 0.8.8",
"lazy_static",
"num_cpus",
]
@@ -1723,18 +1738,18 @@ checksum = "41cc0f7e4d5d4544e8861606a285bb08d3e70712ccc7d2b84d7c0ccfaf4b05ce"
[[package]]
name = "redox_syscall"
version = "0.2.10"
version = "0.2.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8383f39639269cde97d255a32bdb68c047337295414940c68bdd30c2e13203ff"
checksum = "62f25bc4c7e55e0b0b7a1d43fb893f4fa1361d0abe38b9ce4f323c2adfe6ef42"
dependencies = [
"bitflags",
]
[[package]]
name = "regex"
version = "1.5.4"
version = "1.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d07a8629359eb56f1e2fb1652bb04212c072a87ba68546a04065d525673ac461"
checksum = "1a11647b6b25ff05a515cb92c365cec08801e83423a235b51e231e1808747286"
dependencies = [
"aho-corasick",
"memchr",
@@ -1864,9 +1879,9 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "security-framework"
version = "2.6.0"
version = "2.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fed7948b6c68acbb6e20c334f55ad635dc0f75506963de4464289fbd3b051ac"
checksum = "2dc14f172faf8a0194a3aded622712b0de276821addc574fa54fc0a1167e10dc"
dependencies = [
"bitflags",
"core-foundation",
@@ -1877,9 +1892,9 @@ dependencies = [
[[package]]
name = "security-framework-sys"
version = "2.6.0"
version = "2.6.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a57321bf8bc2362081b2599912d2961fe899c0efadf1b4b2f8d48b3e253bb96c"
checksum = "0160a13a177a45bfb43ce71c01580998474f556ad854dcbca936dd2841a5c556"
dependencies = [
"core-foundation-sys",
"libc",
@@ -1896,9 +1911,9 @@ dependencies = [
[[package]]
name = "semver"
version = "1.0.4"
version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "568a8e6258aa33c13358f81fd834adb854c6f7c9468520910a9b1e8fac068012"
checksum = "d65bd28f48be7196d222d95b9243287f48d27aca604e08497513019ff0502cc4"
dependencies = [
"serde",
]
@@ -1931,9 +1946,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.78"
version = "1.0.79"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d23c1ba4cf0efd44be32017709280b32d1cea5c3f1275c3b6d9e8bc54f758085"
checksum = "8e8d9fa5c3b304765ce1fd9c4c8a3de2c8db365a5b91be52f186efc675681d95"
dependencies = [
"itoa 1.0.1",
"ryu",
@@ -2004,9 +2019,9 @@ dependencies = [
[[package]]
name = "slab"
version = "0.4.5"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9def91fd1e018fe007022791f865d0ccc9b3a0d5001e01aabb8b40e46000afb5"
checksum = "eb703cfe953bccee95685111adeedb76fabe4e97549a58d16f03ea7b9367bb32"
[[package]]
name = "slack-hook"
@@ -2039,6 +2054,16 @@ version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2dd574626839106c320a323308629dcb1acfc96e32a8cba364ddc61ac23ee83"
[[package]]
name = "socket2"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "66d72b759436ae32898a2af0a14218dbf55efde3feeb170eb623637db85ee1e0"
dependencies = [
"libc",
"winapi 0.3.9",
]
[[package]]
name = "standback"
version = "0.2.17"
@@ -2120,9 +2145,9 @@ checksum = "8ea5119cdb4c55b55d432abb513a0429384878c15dde60cc77b1c99de1a95a6a"
[[package]]
name = "syn"
version = "1.0.86"
version = "1.0.91"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a65b3f4ffa0092e9887669db0eae07941f023991ab58ea44da8fe8e2d511c6b"
checksum = "b683b2b825c8eef438b77c36a06dc262294da3d5a5813fac20da149241dcd44d"
dependencies = [
"proc-macro2",
"quote",
@@ -2150,16 +2175,16 @@ dependencies = [
"cfg-if 1.0.0",
"fastrand",
"libc",
"redox_syscall 0.2.10",
"redox_syscall 0.2.13",
"remove_dir_all",
"winapi 0.3.9",
]
[[package]]
name = "termcolor"
version = "1.1.2"
version = "1.1.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dfed899f0eb03f32ee8c6a0aabdb8a7949659e3466561fc0adf54e26d88c5f4"
checksum = "bab24d30b911b2376f3a13cc2cd443142f0c81dda04c118693e35b3835757755"
dependencies = [
"winapi-util",
]
@@ -2210,7 +2235,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6db9e6914ab8b1ae1c260a4ae7a49b6c5611b40328a735b21862567685e73255"
dependencies = [
"libc",
"wasi",
"wasi 0.10.0+wasi-snapshot-preview1",
"winapi 0.3.9",
]
@@ -2288,19 +2313,20 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.16.1"
version = "1.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c27a64b625de6d309e8c57716ba93021dccf1b3b5c97edd6d3dd2d2135afc0a"
checksum = "2af73ac49756f3f7c01172e34a23e5d0216f6c32333757c2c61feb2bbff5a5ee"
dependencies = [
"bytes 1.1.0",
"libc",
"memchr",
"mio 0.7.14",
"mio 0.8.2",
"num_cpus",
"once_cell",
"parking_lot 0.11.2",
"parking_lot 0.12.0",
"pin-project-lite",
"signal-hook-registry",
"socket2",
"tokio-macros",
"winapi 0.3.9",
]
@@ -2483,9 +2509,9 @@ checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3"
[[package]]
name = "unindent"
version = "0.1.7"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f14ee04d9415b52b3aeab06258a3f07093182b88ba0f9b8d203f211a7a7d41c7"
checksum = "514672a55d7380da379785a4d70ca8386c8883ff7eaae877be4d2081cebe73d8"
[[package]]
name = "url"
@@ -2576,10 +2602,16 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a143597ca7c7793eff794def352d41792a93c481eb1042423ff7ff72ba2c31f"
[[package]]
name = "wasm-bindgen"
version = "0.2.79"
name = "wasi"
version = "0.11.0+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "25f1af7423d8588a3d840681122e72e6a24ddbcb3f0ec385cac0d12d24256c06"
checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
[[package]]
name = "wasm-bindgen"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "27370197c907c55e3f1a9fbe26f44e937fe6451368324e009cba39e139dc08ad"
dependencies = [
"cfg-if 1.0.0",
"wasm-bindgen-macro",
@@ -2587,9 +2619,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.79"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b21c0df030f5a177f3cba22e9bc4322695ec43e7257d865302900290bcdedca"
checksum = "53e04185bfa3a779273da532f5025e33398409573f348985af9a1cbf3774d3f4"
dependencies = [
"bumpalo",
"lazy_static",
@@ -2602,9 +2634,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.79"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f4203d69e40a52ee523b2529a773d5ffc1dc0071801c87b3d270b471b80ed01"
checksum = "17cae7ff784d7e83a2fe7611cfe766ecf034111b49deb850a3dc7699c08251f5"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -2612,9 +2644,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.79"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bfa8a30d46208db204854cadbb5d4baf5fcf8071ba5bf48190c3e59937962ebc"
checksum = "99ec0dc7a4756fffc231aab1b9f2f578d23cd391390ab27f952ae0c9b3ece20b"
dependencies = [
"proc-macro2",
"quote",
@@ -2625,9 +2657,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.79"
version = "0.2.80"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d958d035c4438e28c70e4321a2911302f10135ce78a9c7834c0cab4123d06a2"
checksum = "d554b7f530dee5964d9a9468d95c1f8b8acae4f282807e7d27d4b03099a46744"
[[package]]
name = "winapi"
@@ -2672,6 +2704,49 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-sys"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5acdd78cb4ba54c0045ac14f62d8f94a03d10047904ae2a40afa1e99d8f70825"
dependencies = [
"windows_aarch64_msvc",
"windows_i686_gnu",
"windows_i686_msvc",
"windows_x86_64_gnu",
"windows_x86_64_msvc",
]
[[package]]
name = "windows_aarch64_msvc"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17cffbe740121affb56fad0fc0e421804adf0ae00891205213b5cecd30db881d"
[[package]]
name = "windows_i686_gnu"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2564fde759adb79129d9b4f54be42b32c89970c18ebf93124ca8870a498688ed"
[[package]]
name = "windows_i686_msvc"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9cd9d32ba70453522332c14d38814bceeb747d80b3958676007acadd7e166956"
[[package]]
name = "windows_x86_64_gnu"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cfce6deae227ee8d356d19effc141a509cc503dfd1f850622ec4b0f84428e1f4"
[[package]]
name = "windows_x86_64_msvc"
version = "0.34.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d19538ccc21819d01deaf88d6a17eae6596a12e9aafdbb97916fb49896d89de9"
[[package]]
name = "winreg"
version = "0.6.2"
@@ -2720,6 +2795,6 @@ dependencies = [
[[package]]
name = "zeroize"
version = "1.5.2"
version = "1.5.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c88870063c39ee00ec285a2f8d6a966e5b6fb2becc4e8dac77ed0d370ed6006"
checksum = "7eb5728b8afd3f280a869ce1d4c554ffaed35f45c231fc41bfbd0381bef50317"

Cargo.toml:

@@ -1,6 +1,6 @@
[package]
name = "hayabusa"
version = "1.1.0"
version = "1.2.0"
authors = ["Yamato Security @SecurityYamato"]
edition = "2021"
@@ -23,7 +23,7 @@ yaml-rust = "0.4.*"
linked-hash-map = "0.5.*"
tokio = { version = "1", features = ["full"] }
num_cpus = "1.13.*"
mopa = "0.2.*"
downcast-rs = "1.2.0"
slack-hook = "0.8"
dotenv = "0.15.*"
hhmmss = "*"
@@ -37,5 +37,8 @@ git2="0.13"
is_elevated = "0.1.2"
static_vcruntime = "1.5.*"
[target.'cfg(unix)'.dependencies] #Mac and Linux
openssl = { version = "*", features = ["vendored"] } #vendored is needed to compile statically.
[profile.release]
lto = true

README-Japanese.md:

@@ -10,8 +10,11 @@
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85DownLoads
[tag-2]: https://img.shields.io/github/stars/Yamato-Security/hayabusa?style=plastic&label=GitHub%F0%9F%A6%85Stars
[tag-3]: https://img.shields.io/github/v/release/Yamato-Security/hayabusa?display_name=tag&label=latest-version&style=plastic
[tag-4]: https://img.shields.io/badge/Black%20Hat%20Arsenal-Asia%202022-blue
[tag-5]: https://rust-reportcard.xuri.me/badge/github.com/Yamato-Security/hayabusa
[tag-6]: https://img.shields.io/badge/Maintenance%20Level-Actively%20Developed-brightgreen.svg
![tag-1] ![tag-2] ![tag-3]
![tag-1] ![tag-2] ![tag-3] ![tag-4] ![tag-5] ![tag-6]
# Hayabusa について
@@ -24,7 +27,6 @@ Hayabusaは、日本の[Yamato Security](https://yamatosecurity.connpass.com/)
- [主な目的](#主な目的)
- [スレット(脅威)ハンティング](#スレット脅威ハンティング)
- [フォレンジックタイムラインの高速生成](#フォレンジックタイムラインの高速生成)
- [開発について](#開発について)
- [スクリーンショット](#スクリーンショット)
- [起動画面:](#起動画面)
- [ターミナル出力画面:](#ターミナル出力画面)
@@ -33,7 +35,7 @@ Hayabusaは、日本の[Yamato Security](https://yamatosecurity.connpass.com/)
- [Timeline Explorerでの解析:](#timeline-explorerでの解析)
- [Criticalアラートのフィルタリングとコンピュータごとのグルーピング:](#criticalアラートのフィルタリングとコンピュータごとのグルーピング)
- [タイムラインのサンプル結果](#タイムラインのサンプル結果)
- [特徴](#特徴)
- [特徴&機能](#特徴機能)
- [予定されている機能](#予定されている機能)
- [ダウンロード](#ダウンロード)
- [ソースコードからのコンパイル(任意)](#ソースコードからのコンパイル任意)
@@ -41,32 +43,40 @@ Hayabusaは、日本の[Yamato Security](https://yamatosecurity.connpass.com/)
- [macOSでのコンパイルの注意点](#macosでのコンパイルの注意点)
- [Linuxでのコンパイルの注意点](#linuxでのコンパイルの注意点)
- [アドバンス: Rustパッケージの更新](#アドバンス-rustパッケージの更新)
- [サンプルevtxファイルでHayabusaをテストする](#サンプルevtxファイルでhayabusaをテストする)
- [Hayabusaの実行](#hayabusaの実行)
- [注意: アンチウィルス/EDRの誤検知](#注意-アンチウィルスedrの誤検知)
- [Windows](#windows)
- [Linux](#linux)
- [macOS](#macos)
- [使用方法](#使用方法)
- [Windows Terminalで利用する際の注意事項](#windows-terminalで利用する際の注意事項)
- [コマンドラインオプション](#コマンドラインオプション)
- [使用例](#使用例)
- [ピボットキーワードの作成](#ピボットキーワードの作成)
- [サンプルevtxファイルでHayabusaをテストする](#サンプルevtxファイルでhayabusaをテストする)
- [Hayabusaの出力](#hayabusaの出力)
- [プログレスバー](#プログレスバー)
- [標準出力へのカラー設定](#標準出力へのカラー設定)
- [Hayabusa ルール](#hayabusa-ルール)
- [Hayabusaルール](#hayabusaルール)
- [Hayabusa v.s. 変換されたSigmaルール](#hayabusa-vs-変換されたsigmaルール)
- [検知ルールのチューニング](#検知ルールのチューニング)
- [検知レベルのlevelチューニング](#検知レベルのlevelチューニング)
- [イベントIDフィルタリング](#イベントidフィルタリング)
- [その他のWindowsイベントログ解析ツールおよび関連プロジェクト](#その他のwindowsイベントログ解析ツールおよび関連プロジェクト)
- [Sigmaをサポートする他の類似ツールとの比較](#sigmaをサポートする他の類似ツールとの比較)
- [Windowsイベントログ設定のススメ](#windowsイベントログ設定のススメ)
- [Sysmon関係のプロジェクト](#sysmon関係のプロジェクト)
- [コミュニティによるドキュメンテーション](#コミュニティによるドキュメンテーション)
- [英語](#英語)
- [日本語](#日本語)
- [貢献](#貢献)
- [バグの報告](#バグの報告)
- [ライセンス](#ライセンス)
- [Twitter](#twitter)
## 主な目的
### スレット(脅威)ハンティング
Hayabusa には現在、1000以上のSigmaルールと約50のHayabusa検知ルールがあり、定期的にルールが追加されています。 最終的な目標はインシデントレスポンスや定期的なスレットハンティングのために、HayabusaエージェントをすべてのWindows端末にインストールして、中央サーバーにアラートを返す仕組みを作ることです。
Hayabusa には現在、1300以上のSigmaルールと約70のHayabusa検知ルールがあり、定期的にルールが追加されています。 最終的な目標はインシデントレスポンスや定期的なスレットハンティングのために、HayabusaエージェントをすべてのWindows端末にインストールして、中央サーバーにアラートを返す仕組みを作ることです。
### フォレンジックタイムラインの高速生成
@@ -76,10 +86,6 @@ Windowsのイベントログは、
から、従来は非常に長い時間と手間がかかる解析作業となっていました。 Hayabusa は、有用なデータのみを抽出し、専門的なトレーニングを受けた分析者だけでなく、Windowsのシステム管理者であれば誰でも利用できる読みやすい形式で提示することを主な目的としています。
[Evtx Explorer](https://ericzimmerman.github.io/#!index.md)や[Event Log Explorer](https://eventlogxp.com/)のような深掘り分析を行うツールの代替ではなく、分析者が20%の時間で80%の作業を行えるようにすることを目的としています。
# 開発について
[DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI)というWindowsイベントログ解析ツールに触発されて、2020年に[RustyBlue](https://github.com/Yamato-Security/RustyBlue)プロジェクト用にRustに移植することから始めました。その後、YMLで書かれたSigmaのような柔軟な検知シグネチャを作り、SigmaルールをHayabusaルール形式へ変換するツールも作成しました。
# スクリーンショット
## 起動画面:
@@ -108,50 +114,54 @@ Windowsのイベントログは、
# タイムラインのサンプル結果
CSVと手動で編集したXLSXのタイムライン結果のサンプルは[こちら](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results)で確認できます。
CSVのタイムライン結果のサンプルは[こちら](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results)で確認できます。
CSVのタイムラインをExcelやTimeline Explorerで分析する方法は[こちら](doc/CSV-AnalysisWithExcelAndTimelineExplorer-Japanese.pdf)で紹介しています。
# 特徴
# 特徴&機能
* クロスプラットフォーム対応: Windows, Linux, macOS
* クロスプラットフォーム対応: Windows, Linux, macOS
* Rustで開発され、メモリセーフでハヤブサよりも高速です
* マルチスレッド対応により、最大5倍のスピードアップを実現!
* マルチスレッド対応により、最大5倍のスピードアップを実現
* フォレンジック調査やインシデントレスポンスのために、分析しやすいCSVタイムラインを作成します。
* 読みやすい/作成/編集可能なYMLベースのHayabusaルールで作成されたIoCシグネチャに基づくスレット
* 読みやすい/作成/編集可能なYMLベースのHayabusaルールで作成されたIoCシグネチャに基づくスレット
* SigmaルールをHayabusaルールに変換するためのSigmaルールのサポートがされています。
* 現在、他の類似ツールに比べ最も多くのSigmaルールをサポートしており、カウントルールにも対応しています。
* イベントログの統計どのような種類のイベントがあるのかを把握し、ログ設定のチューニングに有効です。
* イベントログの統計。(どのような種類のイベントがあるのかを把握し、ログ設定のチューニングに有効です。)
* 不良ルールやノイズの多いルールを除外するルールチューニング設定が可能です。
* MITRE ATT&CKとのマッピング
* MITRE ATT&CKとのマッピング (CSVの出力ファイルのみ)。
* ルールレベルのチューニング。
* イベントログから不審なユーザやファイルを素早く特定するのに有用な、ピボットキーワードの一覧作成。
# 予定されている機能
* すべてのエンドポイントでの企業全体のスレットハンティング
* 日本語対応
* MITRE ATT&CKのヒートマップ生成機能
* ユーザーログオンと失敗したログオンのサマリー
* JSONログからの入力
* JSONへの出力→Elastic Stack/Splunkへのインポート
* すべてのエンドポイントでの企業全体のスレットハンティング
* 日本語対応
* MITRE ATT&CKのヒートマップ生成機能
* ユーザーログオンと失敗したログオンのサマリー
* JSONログからの入力
* JSONへの出力→Elastic Stack/Splunkへのインポート
# ダウンロード
Hayabusaの[Releases](https://github.com/Yamato-Security/hayabusa/releases)から最新版をダウンロードできます。
Hayabusaの[Releases](https://github.com/Yamato-Security/hayabusa/releases)からコンパイルされたバイナリが含まれている最新版をダウンロードできます。
または、以下の`git clone`コマンドでレポジトリをダウンロードし、ソースコードからコンパイルして使用することも可能です
または、以下の`git clone`コマンドでレポジトリをダウンロードし、ソースコードからコンパイルして使用することも可能です
```bash
git clone https://github.com/Yamato-Security/hayabusa.git --recursive
```
--recursive をつけ忘れた場合、サブモジュールとして管理されている rules/ 内のファイルが取得できません。
Hayabusaでは検知ルールを`rules/`フォルダの取得はコンパイル後に以下のコマンドでルールの最新版を取得することができます。
rulesフォルダ配下でファイルを削除や更新をしていた場合は更新されないのでその場合はrulesフォルダを他の名前にリネームしたうえで以下のコマンドを打ってください。
注意: `--recursive`をつけ忘れた場合、サブモジュールとして管理されている`rules`フォルダ内のファイルはダウンロードされません。
`git pull --recurse-submodules`コマンド、もしくは以下のコマンドで`rules`フォルダを同期し、Hayabusaの最新のルールを更新することができます:
```bash
.\hayabusa.exe -u
hayabusa.exe -u
```
アップデートが失敗した場合は、`rules`フォルダの名前を変更してから、もう一回アップデートしてみて下さい。
# ソースコードからのコンパイル(任意)
Rustがインストールされている場合、以下のコマンドでソースコードからコンパイルすることができます:
@@ -163,7 +173,7 @@ cargo build --release
以下のコマンドで定期的にRustをアップデートしてください
```bash
rustup update
rustup update stable
```
コンパイルされたバイナリは`target/release`フォルダ配下で作成されます。
@@ -213,27 +223,69 @@ cargo update
※ アップデート後、何か不具合がありましたらお知らせください。
## サンプルevtxファイルでHayabusaをテストする
# Hayabusaの実行
Hayabusaをテストしたり、新しいルールを作成したりするためのサンプルevtxファイルをいくつか提供しています: [https://github.com/Yamato-Security/Hayabusa-sample-evtx](https://github.com/Yamato-Security/Hayabusa-sample-evtx)
## 注意: アンチウィルス/EDRの誤検知
以下のコマンドで、サンプルのevtxファイルを新しいサブディレクトリ `hayabusa-sample-evtx` にダウンロードすることができます:
Hayabusaを実行する際にアンチウィルスやEDRにブロックされる可能性があります
誤検知のため、セキュリティ対策の製品がHayabusaを許可するように設定する必要があります。
マルウェア感染が心配のであれば、ソースコードを確認した上で、自分でバイナリをコンパイルして下さい。
## Windows
コマンドプロンプトやWindows Terminalから32ビットもしくは64ビットのWindowsバイナリをHayabusaのルートディレクトリから実行します。
例: `hayabusa-1.2.0-windows-x64.exe`
## Linux
まず、バイナリに実行権限を与える必要があります。
```bash
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
chmod +x ./hayabusa-1.2.0-linux-x64
```
> ※ 以下の例でHayabusaを試したい方は、上記コマンドをhayabusaのルートフォルダから実行してください。
次に、Hayabusaのルートディレクトリから実行します
```bash
./hayabusa-1.2.0-linux-x64
```
## macOS
まず、ターミナルやiTerm2からバイナリに実行権限を与える必要があります。
```bash
chmod +x ./hayabusa-1.2.0-mac-intel
```
次に、Hayabusaのルートディレクトリから実行してみてください
```bash
./hayabusa-1.2.0-mac-intel
```
macOSの最新版では、以下のセキュリティ警告が出る可能性があります
![Mac Error 1 JP](/screenshots/MacOS-RunError-1-JP.png)
macOSの環境設定から「セキュリティとプライバシー」を開き、「一般」タブから「このまま許可」ボタンをクリックしてください。
![Mac Error 2 JP](/screenshots/MacOS-RunError-2-JP.png)
その後、ターミナルからもう一回実行してみてください:
```bash
./hayabusa-1.2.0-mac-intel
```
以下の警告が出るので、「開く」をクリックしてください。
![Mac Error 3 JP](/screenshots/MacOS-RunError-3-JP.png)
これで実行できるようになります。
# 使用方法
> 注意: Hayabusaのルートディレクトリから、バイナリを実行する必要があります。例`.\hayabusa.exe`
## Windows Terminalで利用する際の注意事項
2021/02/01現在、Windows Terminalから標準出力でhayabusaを使ったときに、コントロールコード(0x9D等)が検知結果に入っていると出力が止まることが確認されています。
Windows Terminalからhayabusaを標準出力で解析させたい場合は、 `-c` (カラー出力)のオプションをつければ出力が止まることを回避できます。
## コマンドラインオプション
```bash
@@ -242,6 +294,7 @@ USAGE:
-f --filepath=[FILEPATH] '1つの.evtxファイルのパス。'
-r --rules=[RULEFILE/RULEDIRECTORY] 'ルールファイルまたはルールファイルを持つディレクトリ。(デフォルト: ./rules)'
-c --color 'カラーで出力する。 (ターミナルはTrue Colorに対応する必要がある。)'
-C --config=[RULECONFIGDIRECTORY] 'ルールフォルダのコンフィグディレクトリ(デフォルト: ./rules/config)'
-o --output=[CSV_TIMELINE] 'タイムラインをCSV形式で保存する。(例: results.csv)'
-v --verbose '詳細な情報を出力する。'
-D --enable-deprecated-rules 'Deprecatedルールを有効にする。'
@@ -258,6 +311,8 @@ USAGE:
-s --statistics 'イベント ID の統計情報を表示する。'
-q --quiet 'Quietモード。起動バナーを表示しない。'
-Q --quiet-errors 'Quiet errorsモード。エラーログを保存しない。'
--level-tuning <LEVEL_TUNING_FILE> 'ルールlevelのチューニング [default: ./config/level_tuning.txt]'
-p --pivot-keywords-list 'ピボットキーワードの一覧作成。'
--contributors 'コントリビュータの一覧表示。'
```
@@ -266,73 +321,79 @@ USAGE:
* つのWindowsイベントログファイルに対してHayabusaを実行します:
```bash
.\hayabusa.exe -f eventlog.evtx
hayabusa.exe -f eventlog.evtx
```
* 複数のWindowsイベントログファイルのあるsample-evtxディレクトリに対して、Hayabusaを実行します:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx
hayabusa.exe -d .\hayabusa-sample-evtx
```
* つのCSVファイルにエクスポートして、EXCELやTimeline Explorerでさらに分析することができます:
* つのCSVファイルにエクスポートして、ExcelやTimeline Explorerでさらに分析することができます:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -o results.csv
```
* Hayabusaルールのみを実行しますデフォルトでは `-r .\rules` にあるすべてのルールが利用されます):
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
```
* Windowsでデフォルトで有効になっているログに対してのみ、Hayabusaルールを実行します:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
```
* Sysmonログに対してのみHayabusaルールを実行します:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
```
* Sigmaルールのみを実行します:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
```
* 廃棄(deprecated)されたルール(`status``deprecated`になっているルール)とノイジールール(`.\rules\config\noisy_rules.txt`にルールIDが書かれているルール)を有効にします:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx --enable-deprecated-rules --enable-noisy-rules -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx --enable-deprecated-rules --enable-noisy-rules -o results.csv
```
* ログオン情報を分析するルールのみを実行し、UTCタイムゾーンで出力します:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r ./rules/Hayabusa/default/events/Security/Logons -U -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
```
* 起動中のWindows端末上で実行しAdministrator権限が必要、アラート悪意のある可能性のある動作のみを検知します:
```bash
.\hayabusa.exe -l -m low
hayabusa.exe -l -m low
```
* criticalレベルのアラートからピボットキーワードの一覧を作成します(結果は結果毎に`keywords-Ip Address.txt`や`keywords-Users.txt`等に出力されます):
```bash
hayabusa.exe -l -m critical -p -o keywords
```
* イベントIDの統計情報を取得します:
```bash
.\hayabusa.exe -f Security.evtx -s
hayabusa.exe -f Security.evtx -s
```
* 詳細なメッセージを出力します(処理に時間がかかるファイル、パースエラー等を特定するのに便利):
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -v
hayabusa.exe -d .\hayabusa-sample-evtx -v
```
* Verbose出力の例:
@@ -350,10 +411,40 @@ Checking target evtx FilePath: "./hayabusa-sample-evtx/YamatoSecurity/T1218.004_
5 / 509 [=>------------------------------------------------------------------------------------------------------------------------------------------] 0.98 % 1s
```
* Quiet error mode:
* エラーログの出力をさせないようにする:
デフォルトでは、Hayabusaはエラーメッセージをエラーログに保存します。
エラーメッセージを保存したくない場合は、`-Q`を追加してください。
## ピボットキーワードの作成
`-p`もしくは`--pivot-keywords-list`オプションを使うことで不審なユーザやホスト名、プロセスなどを一覧で出力することができ、イベントログから素早く特定することができます。
ピボットキーワードのカスタマイズは`config/pivot_keywords.txt`を変更することで行うことができます。以下はデフォルトの設定になります。:
```
Users.SubjectUserName
Users.TargetUserName
Users.User
Logon IDs.SubjectLogonId
Logon IDs.TargetLogonId
Workstation Names.WorkstationName
Ip Addresses.IpAddress
Processes.Image
```
形式は`KeywordName.FieldName`となっています。例えばデフォルトの設定では、`Users`というリストは検知したイベントから`SubjectUserName``TargetUserName``User`のフィールドの値が一覧として出力されます。hayabusaのデフォルトでは検知したすべてのイベントから結果を出力するため、`--pivot-keyword-list`オプションを使うときには `-m` もしくは `--min-level` オプションを併せて使って検知するイベントのレベルを指定することをおすすめします。まず`-m critical`を指定して、最も高い`critical`レベルのアラートのみを対象として、レベルを必要に応じて下げていくとよいでしょう。結果に正常なイベントにもある共通のキーワードが入っている可能性が高いため、手動で結果を確認してから、不審なイベントにありそうなキーワードリストを1つのファイルに保存し、`grep -f keywords.txt timeline.csv`等のコマンドで不審なアクティビティに絞ったタイムラインを作成することができます。
# サンプルevtxファイルでHayabusaをテストする
Hayabusaをテストしたり、新しいルールを作成したりするためのサンプルevtxファイルをいくつか提供しています: [https://github.com/Yamato-Security/Hayabusa-sample-evtx](https://github.com/Yamato-Security/Hayabusa-sample-evtx)
以下のコマンドで、サンプルのevtxファイルを新しいサブディレクトリ `hayabusa-sample-evtx` にダウンロードすることができます:
```bash
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
```
> ※ 以下の例でHayabusaを試したい方は、上記コマンドをhayabusaのルートフォルダから実行してください。
# Hayabusaの出力
Hayabusaの結果を標準出力に表示しているときデフォルトは、以下の情報を表示します:
@@ -383,7 +474,7 @@ CSVファイルとして保存する場合、以下の2つのフィールドが
Note: A terminal that supports True Color is required.
Example: [Windows Terminal](https://docs.microsoft.com/en-us/windows/terminal/install) or [iTerm2](https://iterm2.com/) for macOS.
# Hayabusa Rules
Hayabusa detection rules are written in a Sigma-like YML format. They are located in the `rules` directory, but in the future we plan to manage them in the [https://github.com/Yamato-Security/hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) repository, so please send rule issues and pull requests there instead of the hayabusa repository.
@@ -402,9 +493,9 @@ Hayabusaルールのディレクトリ構造は、3つのディレクトリに
Rules are further separated into directories by log type (Example: Security, System, etc.) and are named in the following format:
* Alert format: `<EventID>_<MITRE ATT&CK attack technique name>_<Details>.yml`
* Alert example: `1102_IndicatorRemovalOnHost-ClearWindowsEventLogs_SecurityLogCleared.yml`
* Event format: `<EventID>_<Details>.yml`
* Alert format: `<EventID>_<EventDescription>_<RiskDescription>.yml`
* Alert example: `1102_SecurityLogCleared_PossibleAntiForensics.yml`
* Event format: `<EventID>_<EventDescription>.yml`
* Event example: `4776_NTLM-LogonToLocalAccount.yml`
Please check out the current rules and use them as templates for creating new rules or for checking detection logic.
@@ -421,8 +512,7 @@ Sigmaルールは、最初にHayabusaルール形式に変換する必要があ
1. Rules that use regular expressions that do not work with the [Rust regex crate](https://docs.rs/regex/1.5.4/regex/).
2. Aggregation expressions other than `count` in the [Sigma rule specification](https://github.com/SigmaHQ/Sigma/wiki/Specification).
> Note: This limitation is in the Sigma rule converter, not in Hayabusa itself.
3. Rules that use `|near`.
## Detection Rule Tuning
@@ -430,7 +520,21 @@ Sigmaルールは、最初にHayabusaルール形式に変換する必要があ
You can ignore rules that are unneeded or cannot be used by adding their rule IDs (Example: `4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6`) to `rules/config/exclude_rules.txt`.
You can also add rule IDs to `rules/config/noisy_rules.txt` to ignore rules by default, but still be able to use them with the `-n` or `--enable-noisy-rules` option.
## Detection Level Tuning
Hayabusa and Sigma rule authors decide the risk level of their detections when writing their rules.
To set your own risk levels, write the conversion information in `./config/level_tuning.txt` and run `hayabusa.exe --level-tuning`, which will rewrite the rule files.
Please note that the rule files are rewritten directly, so use with caution.
`./config/level_tuning.txt` example:
```
id,new_level
00000000-0000-0000-0000-000000000000,informational # sample level tuning line
```
In this case, the rule in the rules directory with an `id` of `00000000-0000-0000-0000-000000000000` will have its risk level rewritten to `informational`.
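A minimal sketch of this workflow, assuming the default tuning file path and reusing the sample rule ID above:
```bash
# Append a tuning line for the rule whose level you want to change
echo "00000000-0000-0000-0000-000000000000,informational" >> config/level_tuning.txt
# Rewrite the level field of the matching rule file in place
hayabusa.exe --level-tuning
```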
## Event ID Filtering
@@ -449,6 +553,7 @@ Sigmaルールは、最初にHayabusaルール形式に変換する必要があ
* [Awesome Event IDs](https://github.com/stuhli/awesome-event-ids) - Collection of event ID resources useful for forensic investigations and incident response.
* [Chainsaw](https://github.com/countercept/chainsaw) - A similar Sigma-based attack detection tool written in Rust.
* [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI) - Attack detection tool written in PowerShell by [Eric Conrad](https://twitter.com/eric_conrad).
* [Epagneul](https://github.com/jurelou/epagneul) - Visualization tool for Windows event logs.
* [EventList](https://github.com/miriamxyra/EventList/) - PowerShell tool by [Miriam Wiesner](https://github.com/miriamxyra) that maps security baseline event IDs to MITRE ATT&CK.
* [EvtxECmd](https://github.com/EricZimmerman/evtx) - Evtx parser by [Eric Zimmerman](https://twitter.com/ericrzimmerman).
* [EVTXtract](https://github.com/williballenthin/EVTXtract) - Tool to recover EVTX files from unallocated space and memory dumps.
@@ -460,27 +565,25 @@ Sigmaルールは、最初にHayabusaルール形式に変換する必要があ
* [RustyBlue](https://github.com/Yamato-Security/RustyBlue) - Rust port of DeepBlueCLI by Yamato Security.
* [Sigma](https://github.com/SigmaHQ/Sigma) - Community-based generic SIEM rules.
* [so-import-evtx](https://docs.securityonion.net/en/2.3/so-import-evtx.html) - Tool to import evtx files into Security Onion.
* [SysmonTools](https://github.com/nshalabi/SysmonTools) - Configuration and offline visualization tools for Sysmon.
* [Timeline Explorer](https://ericzimmerman.github.io/#!index.md) - The best CSV timeline analyzer, by [Eric Zimmerman](https://twitter.com/ericrzimmerman).
* [Windows Event Log Analysis - Analyst Reference](https://www.forwarddefense.com/media/attachments/2021/05/15/windows-event-log-analyst-reference.pdf) - Windows event log analysis reference by Forward Defense's Steve Anson.
* [WELA (Windows Event Log Analyzer)](https://github.com/Yamato-Security/WELA/) - Multi-tool for Windows event log analysis by [Yamato Security](https://github.com/Yamato-Security/).
* [Zircolite](https://github.com/wagga40/Zircolite) - Sigma-based attack detection tool written in Python.
## Comparison to other similar tools that support Sigma
# Windows Logging Recommendations
Please understand that a perfect comparison is not possible, as results differ depending on the target sample data, command-line options, rule tuning, etc.
In our tests, we have found that Hayabusa supports the most Sigma rules of all the tools while still maintaining very fast speeds and not requiring a large amount of memory.
In order to detect malicious activity on Windows machines, you need to improve the default log settings.
We recommend the following sites:
* [JSCU-NL (Joint Sigint Cyber Unit Netherlands) Logging Essentials](https://github.com/JSCU-NL/logging-essentials)
* [ACSC (Australian Cyber Security Centre) Logging and Forwarding Guide](https://www.cyber.gov.au/acsc/view-all-content/publications/windows-event-logging-and-forwarding)
* [Malware Archaeology Cheat Sheets](https://www.malwarearchaeology.com/cheat-sheets)
The following benchmarks were taken on a Lenovo P51 on 2021/12/23, based on approximately 500 evtx files (130MB) from the [sample-evtx repository](https://github.com/Yamato-Security/Hayabusa-sample-evtx), using Hayabusa 1.0.0.
# Sysmon Related Projects
|           | Elapsed Time | Memory Usage                                                  | Usable Sigma Rules |
| :-------: | :----------: | :-----------------------------------------------------------: | :----------------: |
| Chainsaw  | 7.5 seconds  | 70 MB                                                         | 170                |
| Hayabusa  | 7.8 seconds  | 340 MB                                                        | 267                |
| Zircolite | 34 seconds   | 380 MB (normally requires 3 times the size of the log files)  | 237                |
* With Hayabusa rules also enabled, it detects around 300 unique alerts and events.
* When tested on many event log files totaling 7.5GB, it finished in under 7 minutes and did not use more than 1GB of memory. The amount of memory consumed grows with the size of the results, not the size of the target evtx files.
* It is the only tool that consolidates the results into a single CSV timeline for analysis in tools like [Timeline Explorer](https://ericzimmerman.github.io/#!index.md).
To create the most forensically useful evidence and detect with the highest accuracy, you need to install Sysmon. We recommend configuring it with reference to the following sites:
* [Sysmon Modular](https://github.com/olafhartong/sysmon-modular)
* [TrustedSec Sysmon Community Guide](https://github.com/trustedsec/SysmonCommunityGuide)
# Community Documentation
@@ -507,3 +610,7 @@ Sigmaルールは、最初にHayabusaルール形式に変換する必要があ
# License
Hayabusa is released under [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html) and all rules are released under the [Detection Rule License (DRL) 1.1](https://github.com/SigmaHQ/sigma/blob/master/LICENSE.Detection.Rules.md).
# Twitter
We provide news about Hayabusa, rule updates, and other Yamato Security tools at [@SecurityYamato](https://twitter.com/SecurityYamato).

README.md

@@ -10,8 +10,11 @@
[tag-1]: https://img.shields.io/github/downloads/Yamato-Security/hayabusa/total?style=plastic&label=GitHub%F0%9F%A6%85DownLoads
[tag-2]: https://img.shields.io/github/stars/Yamato-Security/hayabusa?style=plastic&label=GitHub%F0%9F%A6%85Stars
[tag-3]: https://img.shields.io/github/v/release/Yamato-Security/hayabusa?display_name=tag&label=latest-version&style=plastic
[tag-4]: https://img.shields.io/badge/Black%20Hat%20Arsenal-Asia%202022-blue
[tag-5]: https://rust-reportcard.xuri.me/badge/github.com/Yamato-Security/hayabusa
[tag-6]: https://img.shields.io/badge/Maintenance%20Level-Actively%20Developed-brightgreen.svg
![tag-1] ![tag-2] ![tag-3]
![tag-1] ![tag-2] ![tag-3] ![tag-4] ![tag-5] ![tag-6]
# About Hayabusa
@@ -21,74 +24,77 @@ Hayabusa is a **Windows event log fast forensics timeline generator** and **thre
- [About Hayabusa](#about-hayabusa)
- [Table of Contents](#table-of-contents)
- [Main goals](#main-goals)
- [Threat hunting](#threat-hunting)
- [Fast forensics timeline generation](#fast-forensics-timeline-generation)
- [About the development](#about-the-development)
- [Main Goals](#main-goals)
- [Threat Hunting](#threat-hunting)
- [Fast Forensics Timeline Generation](#fast-forensics-timeline-generation)
- [Screenshots](#screenshots)
- [Startup](#startup)
- [Terminal output](#terminal-output)
- [Results summary](#results-summary)
- [Terminal Output](#terminal-output)
- [Results Summary](#results-summary)
- [Analysis in Excel](#analysis-in-excel)
- [Analysis in Timeline Explorer](#analysis-in-timeline-explorer)
- [Critical alert filtering and computer grouping in Timeline Explorer](#critical-alert-filtering-and-computer-grouping-in-timeline-explorer)
- [Sample timeline results](#sample-timeline-results)
- [Critical Alert Filtering and Computer Grouping in Timeline Explorer](#critical-alert-filtering-and-computer-grouping-in-timeline-explorer)
- [Sample Timeline Results](#sample-timeline-results)
- [Features](#features)
- [Planned Features](#planned-features)
- [Downloads](#downloads)
- [Compiling from source (Optional)](#compiling-from-source-optional)
- [Cross-compiling 32-bit Windows binaries](#cross-compiling-32-bit-windows-binaries)
- [Notes on compiling on macOS](#notes-on-compiling-on-macos)
- [Notes on compiling on Linux](#notes-on-compiling-on-linux)
- [Advanced: Updating Rust packages](#advanced-updating-rust-packages)
- [Testing hayabusa out on sample evtx files](#testing-hayabusa-out-on-sample-evtx-files)
- [Compiling From Source (Optional)](#compiling-from-source-optional)
- [Cross-compiling 32-bit Windows Binaries](#cross-compiling-32-bit-windows-binaries)
- [macOS Compiling Notes](#macos-compiling-notes)
- [Linux Compiling Notes](#linux-compiling-notes)
- [Advanced: Updating Rust Packages](#advanced-updating-rust-packages)
- [Running Hayabusa](#running-hayabusa)
- [Caution: Anti-Virus/EDR Warnings](#caution-anti-virusedr-warnings)
- [Windows](#windows)
- [Linux](#linux)
- [macOS](#macos)
- [Usage](#usage)
- [Caution: Output printed to screen may stop in Windows Terminal](#caution-output-printed-to-screen-may-stop-in-windows-terminal)
- [Command line options](#command-line-options)
- [Usage examples](#usage-examples)
- [Hayabusa output](#hayabusa-output)
- [Progress bar](#progress-bar)
- [Command Line Options](#command-line-options)
- [Usage Examples](#usage-examples)
- [Pivot Keyword Generator](#pivot-keyword-generator)
- [Testing Hayabusa on Sample Evtx Files](#testing-hayabusa-on-sample-evtx-files)
- [Hayabusa Output](#hayabusa-output)
- [Progress Bar](#progress-bar)
- [Color Output](#color-output)
- [Hayabusa rules](#hayabusa-rules)
- [Hayabusa v.s. converted Sigma rules](#hayabusa-vs-converted-sigma-rules)
- [Detection rule tuning](#detection-rule-tuning)
- [Event ID filtering](#event-id-filtering)
- [Other Windows event log analyzers and related projects](#other-windows-event-log-analyzers-and-related-projects)
- [Comparison to other similar tools that support sigma](#comparison-to-other-similar-tools-that-support-sigma)
- [Hayabusa Rules](#hayabusa-rules)
- [Hayabusa v.s. Converted Sigma Rules](#hayabusa-vs-converted-sigma-rules)
- [Detection Rule Tuning](#detection-rule-tuning)
- [Detection Level Tuning](#detection-level-tuning)
- [Event ID Filtering](#event-id-filtering)
- [Other Windows Event Log Analyzers and Related Projects](#other-windows-event-log-analyzers-and-related-projects)
- [Windows Logging Recommendations](#windows-logging-recommendations)
- [Sysmon Related Projects](#sysmon-related-projects)
- [Community Documentation](#community-documentation)
- [English](#english)
- [Japanese](#japanese)
- [Contribution](#contribution)
- [Bug Submission](#bug-submission)
- [License](#license)
- [Twitter](#twitter)
## Main goals
## Main Goals
### Threat hunting
### Threat Hunting
Hayabusa currently has over 1000 sigma rules and around 50 hayabusa rules with more rules being added regularly. The ultimate goal is to be able to push out hayabusa agents to all Windows endpoints after an incident or for periodic threat hunting and have them alert back to a central server.
Hayabusa currently has over 1300 sigma rules and around 70 hayabusa rules with more rules being added regularly. The ultimate goal is to be able to push out hayabusa agents to all Windows endpoints after an incident or for periodic threat hunting and have them alert back to a central server.
### Fast forensics timeline generation
### Fast Forensics Timeline Generation
Windows event log analysis has traditionally been a very long and tedious process because Windows event logs are 1) in a data format that is hard to analyze and 2) the majority of data is noise and not useful for investigations. Hayabusa's main goal is to extract out only useful data and present it in an easy-to-read format that is usable not only by professionally trained analysts but any Windows system administrator.
Hayabusa is not intended to be a replacement for tools like [Evtx Explorer](https://ericzimmerman.github.io/#!index.md) or [Event Log Explorer](https://eventlogxp.com/) for more deep-dive analysis but is intended for letting analysts get 80% of their work done in 20% of the time.
# About the development
First inspired by the [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI) Windows event log analyzer, we started in 2020 porting it over to Rust for the [RustyBlue](https://github.com/Yamato-Security/RustyBlue) project, then created sigma-like flexible detection signatures written in YML, and then added a backend to sigma to support converting sigma rules into our hayabusa rule format.
# Screenshots
## Startup
![Hayabusa Startup](/screenshots/Hayabusa-Startup.png)
## Terminal output
## Terminal Output
![Hayabusa terminal output](/screenshots/Hayabusa-Results.png)
## Results summary
## Results Summary
![Hayabusa results summary](/screenshots/HayabusaResultsSummary.png)
@@ -100,59 +106,61 @@ First inspired by the [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCL
![Hayabusa analysis in Timeline Explorer](screenshots/TimelineExplorer-ColoredTimeline.png)
## Critical alert filtering and computer grouping in Timeline Explorer
## Critical Alert Filtering and Computer Grouping in Timeline Explorer
![Critical alert filtering and computer grouping in Timeline Explorer](screenshots/TimelineExplorer-CriticalAlerts-ComputerGrouping.png)
# Sample timeline results
# Sample Timeline Results
You can check out sample CSV and manually edited XLSX timeline results [here](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results).
You can check out sample CSV timelines [here](https://github.com/Yamato-Security/hayabusa/tree/main/sample-results).
You can learn how to analyze CSV timelines in Excel and Timeline Explorer [here](doc/CSV-AnalysisWithExcelAndTimelineExplorer-English.pdf).
# Features
* Cross-platform support: Windows, Linux, macOS
* Cross-platform support: Windows, Linux, macOS.
* Developed in Rust to be memory safe and faster than a hayabusa falcon!
* Multi-thread support delivering up to a 5x speed improvement!
* Creates a single easy-to-analyze CSV timeline for forensic investigations and incident response
* Threat hunting based on IoC signatures written in easy to read/create/edit YML based hayabusa rules
* Sigma rule support to convert sigma rules to hayabusa rules
* Currently it supports the most sigma rules compared to other similar tools and even supports count rules
* Event log statistics (Useful for getting a picture of what types of events there are and for tuning your log settings)
* Rule tuning configuration by excluding unneeded or noisy rules
* MITRE ATT&CK mapping
* Multi-thread support delivering up to a 5x speed improvement.
* Creates a single easy-to-analyze CSV timeline for forensic investigations and incident response.
* Threat hunting based on IoC signatures written in easy to read/create/edit YML based hayabusa rules.
* Sigma rule support to convert sigma rules to hayabusa rules.
* Currently it supports the most sigma rules compared to other similar tools and even supports count rules.
* Event log statistics. (Useful for getting a picture of what types of events there are and for tuning your log settings.)
* Rule tuning configuration by excluding unneeded or noisy rules.
* MITRE ATT&CK mapping of tactics (only in saved CSV files).
* Rule level tuning.
* Create a list of unique pivot keywords to quickly identify abnormal users, hostnames, processes, etc... as well as correlate events.
# Planned Features
* Enterprise-wide hunting on all endpoints
* Japanese language support
* MITRE ATT&CK heatmap generation
* User logon and failed logon summary
* Input from JSON logs
* Enterprise-wide hunting on all endpoints.
* Japanese language support.
* MITRE ATT&CK heatmap generation.
* User logon and failed logon summary.
* Input from JSON logs.
* JSON support for sending alerts to Elastic Stack/Splunk, etc...
# Downloads
You can download the latest Hayabusa version from the [Releases](https://github.com/Yamato-Security/hayabusa/releases) page.
You can download the latest stable version of hayabusa with compiled binaries from the [Releases](https://github.com/Yamato-Security/hayabusa/releases) page.
You can also `git clone` the repository with the following command and compile binary from source code.:
You can also `git clone` the repository with the following command and compile the binary from source code:
```bash
git clone https://github.com/Yamato-Security/hayabusa.git --recursive
```
If you forget to use the --recursive option, the rules/ files, which are managed in a submodule, will not be cloned.
You can get the latest Hayabusa rules by executing the following command.
Note: If you forget to use the --recursive option, the `rules` folder, which is managed as a git submodule, will not be cloned.
If you have modified or deleted files in rules/, the update will fail.
In this case, you can get the latest Hayabusa rules by renaming the rules folder and executing the following command.
You can sync the `rules` folder and get the latest Hayabusa rules with `git pull --recurse-submodules` or use the following command:
```bash
.\hayabusa.exe -u
hayabusa.exe -u
```
# Compiling from source (Optional)
If the update fails, you may need to rename the `rules` folder and try again.
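A minimal recovery sketch (the backup folder name is illustrative):
```bash
# Set the modified rules folder aside, then let the updater fetch a fresh copy
mv rules rules-backup
hayabusa.exe -u
```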
# Compiling From Source (Optional)
If you have Rust installed, you can compile from source with the following command:
@@ -164,12 +172,12 @@ cargo build --release
Be sure to periodically update Rust with:
```bash
rustup update
rustup update stable
```
The compiled binary will be outputted in the `target/release` folder.
## Cross-compiling 32-bit Windows binaries
## Cross-compiling 32-bit Windows Binaries
You can create 32-bit binaries on 64-bit Windows systems with the following:
```bash
@@ -178,7 +186,7 @@ rustup target add i686-pc-windows-msvc
rustup run stable-i686-pc-windows-msvc cargo build --release
```
## Notes on compiling on macOS
## macOS Compiling Notes
If you receive compile errors about openssl, you will need to install [Homebrew](https://brew.sh/) and then install the following packages:
```bash
@@ -186,7 +194,7 @@ brew install pkg-config
brew install openssl
```
## Notes on compiling on Linux
## Linux Compiling Notes
If you receive compile errors about openssl, you will need to install the following package.
@@ -200,7 +208,7 @@ Fedora-based distros:
sudo yum install openssl-devel
```
## Advanced: Updating Rust packages
## Advanced: Updating Rust Packages
You can update to the latest Rust crates before compiling to get the latest libraries:
@@ -210,29 +218,68 @@ cargo update
Please let us know if anything breaks after you update.
## Testing hayabusa out on sample evtx files
# Running Hayabusa
We have provided some sample evtx files for you to test hayabusa and/or create new rules at [https://github.com/Yamato-Security/hayabusa-sample-evtx](https://github.com/Yamato-Security/hayabusa-sample-evtx)
## Caution: Anti-Virus/EDR Warnings
You can download the sample evtx files to a new `hayabusa-sample-evtx` sub-directory with the following command:
You may receive warnings from anti-virus or EDR products when trying to run hayabusa. These are false positives, so you may need to configure your security products to allow hayabusa to run. If you are worried about malware, please check the hayabusa source code and compile the binaries yourself.
## Windows
In Command Prompt or Windows Terminal, just run the 32-bit or 64-bit Windows binary from the hayabusa root directory.
Example: `hayabusa-1.2.0-windows-x64.exe`
## Linux
You first need to make the binary executable.
```bash
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
chmod +x ./hayabusa-1.2.0-linux-x64
```
> Note: You need to run the binary from the Hayabusa root directory.
Then run it from the Hayabusa root directory:
```bash
./hayabusa-1.2.0-linux-x64
```
## macOS
From Terminal or iTerm2, you first need to make the binary executable.
```bash
chmod +x ./hayabusa-1.2.0-mac-intel
```
Then, try to run it from the Hayabusa root directory:
```bash
./hayabusa-1.2.0-mac-intel
```
On the latest version of macOS, you may receive the following security error when you try to run it:
![Mac Error 1 EN](/screenshots/MacOS-RunError-1-EN.png)
Click "Cancel" and then from System Preferences, open "Security & Privacy" and from the General tab, click "Allow Anyway".
![Mac Error 2 EN](/screenshots/MacOS-RunError-2-EN.png)
After that, try to run it again.
```bash
./hayabusa-1.2.0-mac-intel
```
The following warning will pop up, so please click "Open".
![Mac Error 3 EN](/screenshots/MacOS-RunError-3-EN.png)
You should now be able to run hayabusa.
# Usage
> Note: You need to run the Hayabusa binary from the Hayabusa root directory. Example: `.\hayabusa.exe`
## Caution: Output printed to screen may stop in Windows Terminal
As of Feb 1, 2022, Windows Terminal will freeze midway when displaying results to the screen when run against the sample evtx files.
This is because there is a control code (0x9D) in the output.
This is a known Windows Terminal bug which will eventually be fixed, but in the meantime you can avoid it by adding the `-c` (colored output) option when you run hayabusa.
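For example, any of the usage examples below can simply take the extra flag:
```bash
hayabusa.exe -d .\hayabusa-sample-evtx -c
```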
## Command line options
## Command Line Options
```bash
USAGE:
@@ -240,6 +287,7 @@ USAGE:
-f --filepath=[FILEPATH] 'File path to one .evtx file.'
-r --rules=[RULEFILE/RULEDIRECTORY] 'Rule file or directory. (Default: ./rules)'
-c --color 'Output with color. (Terminal needs to support True Color.)'
-C --config=[RULECONFIGDIRECTORY] 'Rule config folder. (Default: ./rules/config)'
-o --output=[CSV_TIMELINE] 'Save the timeline in CSV format. (Example: results.csv)'
-v --verbose 'Output verbose information.'
-D --enable-deprecated-rules 'Enable rules marked as deprecated.'
@@ -256,81 +304,89 @@ USAGE:
-s --statistics 'Prints statistics of event IDs.'
-q --quiet 'Quiet mode. Do not display the launch banner.'
-Q --quiet-errors 'Quiet errors mode. Do not save error logs.'
--level-tuning <LEVEL_TUNING_FILE> 'Tune the rule level [default: ./config/level_tuning.txt]'
-p --pivot-keywords-list 'Create a list of pivot keywords.'
--contributors 'Prints the list of contributors.'
```
## Usage examples
## Usage Examples
* Run hayabusa against one Windows event log file:
```bash
.\hayabusa.exe -f eventlog.evtx
hayabusa.exe -f eventlog.evtx
```
* Run hayabusa against the sample-evtx directory with multiple Windows event log files:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx
hayabusa.exe -d .\hayabusa-sample-evtx
```
* Export to a single CSV file for further analysis with Excel or Timeline Explorer:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -o results.csv
```
* Only run hayabusa rules (the default is to run all the rules in `-r .\rules`):
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa -o results.csv
```
* Only run hayabusa rules for logs that are enabled by default on Windows:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default -o results.csv
```
* Only run hayabusa rules for sysmon logs:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\sysmon -o results.csv
```
* Only run sigma rules:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\sigma -o results.csv
```
* Enable deprecated rules (those with `status` marked as `deprecated`) and noisy rules (those whose rule ID is listed in `.\rules\config\noisy_rules.txt`):
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx --enable-noisy-rules --enable-deprecated-rules -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx --enable-noisy-rules --enable-deprecated-rules -o results.csv
```
* Only run rules to analyze logons and output in the UTC timezone:
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
hayabusa.exe -d .\hayabusa-sample-evtx -r .\rules\hayabusa\default\events\Security\Logons -U -o results.csv
```
* Run on a live Windows machine (requires Administrator privileges) and only detect alerts (potentially malicious behavior):
```bash
.\hayabusa.exe -l -m low
hayabusa.exe -l -m low
```
* Get event ID statistics:
* Create a list of pivot keywords from critical alerts and save the results. (Results will be saved to `keywords-Ip Addresses.txt`, `keywords-Users.txt`, etc...):
```bash
.\hayabusa.exe -f Security.evtx -s
hayabusa.exe -l -m critical -p -o keywords
```
* Print Event ID statistics:
```bash
hayabusa.exe -f Security.evtx -s
```
* Print verbose information (useful for determining which files take long to process, parsing errors, etc...):
```bash
.\hayabusa.exe -d .\hayabusa-sample-evtx -v
hayabusa.exe -d .\hayabusa-sample-evtx -v
```
* Verbose output example:
@@ -352,23 +408,53 @@ Checking target evtx FilePath: "./hayabusa-sample-evtx/YamatoSecurity/T1218.004_
By default, hayabusa will save error messages to error log files.
If you do not want to save error messages, please add `-Q`.
# Hayabusa output
## Pivot Keyword Generator
When Hayabusa output is being displayed to the screen (the default), it will display the following information:
You can use the `-p` or `--pivot-keywords-list` option to create a list of unique pivot keywords to quickly identify abnormal users, hostnames, processes, etc... as well as correlate events. You can customize what keywords you want to search for by editing `config/pivot_keywords.txt`.
This is the default setting:
```
Users.SubjectUserName
Users.TargetUserName
Users.User
Logon IDs.SubjectLogonId
Logon IDs.TargetLogonId
Workstation Names.WorkstationName
Ip Addresses.IpAddress
Processes.Image
```
The format is `KeywordName.FieldName`. For example, when creating the list of `Users`, hayabusa will list all the values in the `SubjectUserName`, `TargetUserName` and `User` fields. By default, hayabusa will return results from all events (informational and higher) so we highly recommend combining the `--pivot-keywords-list` option with the `-m` or `--min-level` option. For example, start off with only creating keywords from `critical` alerts with `-m critical` and then continue with `-m high`, `-m medium`, etc... There will most likely be common keywords in your results that will match on many normal events, so after manually checking the results and creating a list of unique keywords in a single file, you can then create a narrowed down timeline of suspicious activity with a command like `grep -f keywords.txt timeline.csv`.
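A sketch of this workflow, assuming a timeline was previously saved as `timeline.csv` (the `keywords.txt` name is illustrative):
```bash
# Start with pivot keywords from critical alerts only, then repeat
# with -m high, -m medium, etc. as needed
hayabusa.exe -d .\hayabusa-sample-evtx -m critical --pivot-keywords-list -o keywords
# Narrow the timeline down to lines matching your hand-picked keywords
grep -f keywords.txt timeline.csv > suspicious-timeline.csv
```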
# Testing Hayabusa on Sample Evtx Files
We have provided some sample evtx files for you to test hayabusa and/or create new rules at [https://github.com/Yamato-Security/hayabusa-sample-evtx](https://github.com/Yamato-Security/hayabusa-sample-evtx)
You can download the sample evtx files to a new `hayabusa-sample-evtx` sub-directory with the following command:
```bash
git clone https://github.com/Yamato-Security/hayabusa-sample-evtx.git
```
> Note: You need to run the binary from the Hayabusa root directory.
# Hayabusa Output
When hayabusa output is being displayed to the screen (the default), it will display the following information:
* `Timestamp`: Default is `YYYY-MM-DD HH:mm:ss.sss +hh:mm` format. This comes from the `<Event><System><TimeCreated SystemTime>` field in the event log. The default timezone will be the local timezone but you can change the timezone to UTC with the `--utc` option.
* `Computer`: This comes from the `<Event><System><Computer>` field in the event log.
* `Event ID`: This comes from the `<Event><System><EventID>` field in the event log.
* `Level`: This comes from the `level` field in the YML detection rule. (`informational`, `low`, `medium`, `high`, `critical`) By default, alerts of all levels will be displayed but you can set the minimum level with `-m`. For example, you can set `-m high` in order to only scan for and display high and critical alerts.
* `Title`: This comes from the `title` field in the YML detection rule.
* `Details`: This comes from the `details` field in the YML detection rule, however, only Hayabusa rules have this field. This field gives extra information about the alert or event and can extract useful data from the `<Event><System><EventData>` portion of the log. For example, usernames, command line information, process information, etc...
* `Details`: This comes from the `details` field in the YML detection rule, however, only hayabusa rules have this field. This field gives extra information about the alert or event and can extract useful data from the `<Event><System><EventData>` portion of the log. For example, usernames, command line information, process information, etc...
When saving to a CSV file an additional two fields will be added:
* `Rule Path`: The path to the detection rule that generated the alert or event.
* `File Path`: The path to the evtx file that caused the alert or event.
## Progress bar
## Progress Bar
The progress bar will only work with multiple evtx files.
It will display in real time the number and percent of evtx files that it has finished analyzing.
@@ -380,7 +466,7 @@ You can change the default colors in the config file at `./config/level_color.tx
Note: Color can only be displayed in terminals that support [True Color](https://en.wikipedia.org/wiki/Color_depth#True_color_(24-bit)).
Example: [Windows Terminal](https://docs.microsoft.com/en-us/windows/terminal/install) or [iTerm2](https://iterm2.com/) for macOS.
# Hayabusa rules
# Hayabusa Rules
Hayabusa detection rules are written in a sigma-like YML format and are located in the `rules` folder. In the future, we plan to host the rules at [https://github.com/Yamato-Security/hayabusa-rules](https://github.com/Yamato-Security/hayabusa-rules) so please send any issues and pull requests for rules there instead of the main hayabusa repository.
@@ -398,14 +484,14 @@ The hayabusa rule directory structure is separated into 3 directories:
Rules are further separated into directories by log type (Example: Security, System, etc...) and are named in the following format:
* Alert format: `<EventID>_<MITRE ATT&CK Name>_<Description>.yml`
* Alert example: `1102_IndicatorRemovalOnHost-ClearWindowsEventLogs_SecurityLogCleared.yml`
* Event format: `<EventID>_<Description>.yml`
* Alert format: `<EventID>_<EventDescription>_<AttackDescription>.yml`
* Alert example: `1102_SecurityLogCleared_PossibleAntiForensics.yml`
* Event format: `<EventID>_<EventDescription>.yml`
* Event example: `4776_NTLM-LogonToLocalAccount.yml`
Please check out the current rules to use as a template in creating new ones or for checking the detection logic.
## Hayabusa v.s. converted Sigma rules
## Hayabusa v.s. Converted Sigma Rules
Sigma rules need to first be converted to hayabusa rule format explained [here](https://github.com/Yamato-Security/hayabusa-rules/blob/main/tools/sigmac/README.md). Hayabusa rules are designed solely for Windows event log analysis and have the following benefits:
@@ -417,10 +503,9 @@ Sigma rules need to first be converted to hayabusa rule format explained [here](
1. Rules that use regular expressions that do not work with the [Rust regex crate](https://docs.rs/regex/1.5.4/regex/).
2. Aggregation expressions besides `count` in the [sigma rule specification](https://github.com/SigmaHQ/sigma/wiki/Specification).
3. Rules that use `|near`.
> Note: the limitation is in the sigma rule converter and not in hayabusa itself.
## Detection rule tuning
## Detection Rule Tuning
Like firewalls and IDSes, any signature-based tool will require some tuning to fit your environment so you may need to permanently or temporarily exclude certain rules.
@@ -428,7 +513,23 @@ You can add a rule ID (Example: `4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6`) to `rule
You can also add a rule ID to `rules/config/noisy_rules.txt` in order to ignore the rule by default but still be able to use the rule with the `-n` or `--enable-noisy-rules` option.
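For example, to permanently exclude the sample rule above (assuming one rule ID per line in these config files):
```bash
# Ignore this rule on every run
echo "4fe151c2-ecf9-4fae-95ae-b88ec9c2fca6" >> rules/config/exclude_rules.txt
```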
## Event ID filtering
## Detection Level Tuning
Hayabusa and Sigma rule authors will determine the risk level of the alert when writing their rules.
However, the actual risk level will differ between environments.
You can tune the risk level of the rules by adding them to `./config/level_tuning.txt` and executing `hayabusa.exe --level-tuning` which will update the `level` line in the rule file.
Please note that the rule file will be updated directly.
`./config/level_tuning.txt` sample line:
```
id,new_level
00000000-0000-0000-0000-000000000000,informational # sample level tuning line
```
In this case, the risk level of the rule with an `id` of `00000000-0000-0000-0000-000000000000` in the rules directory will have its `level` rewritten to `informational`.
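Since `--level-tuning` also accepts a file path, you could keep your tuning lines in a separate file (the file name here is illustrative):
```bash
hayabusa.exe --level-tuning ./config/my_level_tuning.txt
```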
## Event ID Filtering
You can filter on event IDs by placing event ID numbers in `config/target_eventids.txt`.
This will increase performance so it is recommended if you only need to search for certain IDs.
@@ -437,7 +538,7 @@ We have provided a sample ID filter list at [`config/target_eventids_sample.txt`
Please use this list if you want the best performance but be aware that there is a slight possibility for missing events (false negatives).
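For example, you could start from the provided sample list and trim it to the IDs you need:
```bash
# Use the sample ID list as the active event ID filter
cp config/target_eventids_sample.txt config/target_eventids.txt
hayabusa.exe -d .\hayabusa-sample-evtx -o results.csv
```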
# Other Windows event log analyzers and related projects
# Other Windows Event Log Analyzers and Related Projects
There is no "one tool to rule them all" and we have found that each has its own merits so we recommend checking out these other great tools and projects and seeing which ones you like.
@@ -445,6 +546,7 @@ There is no "one tool to rule them all" and we have found that each has its own
* [Awesome Event IDs](https://github.com/stuhli/awesome-event-ids) - Collection of Event ID resources useful for Digital Forensics and Incident Response.
* [Chainsaw](https://github.com/countercept/chainsaw) - A similar sigma-based attack detection tool written in Rust.
* [DeepBlueCLI](https://github.com/sans-blue-team/DeepBlueCLI) - Attack detection tool written in Powershell by [Eric Conrad](https://twitter.com/eric_conrad).
* [Epagneul](https://github.com/jurelou/epagneul) - Graph visualization for Windows event logs.
* [EventList](https://github.com/miriamxyra/EventList/) - Map security baseline event IDs to MITRE ATT&CK by [Miriam Wiesner](https://github.com/miriamxyra).
* [EvtxECmd](https://github.com/EricZimmerman/evtx) - Evtx parser by [Eric Zimmerman](https://twitter.com/ericrzimmerman).
* [EVTXtract](https://github.com/williballenthin/EVTXtract) - Recover EVTX log files from unallocated space and memory images.
@@ -456,27 +558,24 @@ There is no "one tool to rule them all" and we have found that each has its own
* [RustyBlue](https://github.com/Yamato-Security/RustyBlue) - Rust port of DeepBlueCLI by Yamato Security.
* [Sigma](https://github.com/SigmaHQ/sigma) - Community based generic SIEM rules.
* [so-import-evtx](https://docs.securityonion.net/en/2.3/so-import-evtx.html) - Import evtx files into Security Onion.
* [SysmonTools](https://github.com/nshalabi/SysmonTools) - Configuration and off-line log visualization tool for Sysmon.
* [Timeline Explorer](https://ericzimmerman.github.io/#!index.md) - The best CSV timeline analyzer by [Eric Zimmerman](https://twitter.com/ericrzimmerman).
* [Windows Event Log Analysis - Analyst Reference](https://www.forwarddefense.com/media/attachments/2021/05/15/windows-event-log-analyst-reference.pdf) - by Forward Defense's Steve Anson.
* [WELA (Windows Event Log Analyzer)](https://github.com/Yamato-Security/WELA) - The Swiss Army knife for Windows event logs by [Yamato Security](https://github.com/Yamato-Security/).
* [Zircolite](https://github.com/wagga40/Zircolite) - Sigma-based attack detection tool written in Python.
## Comparison to other similar tools that support sigma
# Windows Logging Recommendations
Please understand that it is not possible to do a perfect comparison as results will differ based on the target sample data, command-line options, rule tuning, etc...
In our tests, we have found hayabusa to support the largest number of sigma rules out of all the tools while still maintaining very fast speeds and does not require a great amount of memory.
In order to properly detect malicious activity on Windows machines, you will need to improve the default log settings. We recommend the following sites for guidance:
* [JSCU-NL (Joint Sigint Cyber Unit Netherlands) Logging Essentials](https://github.com/JSCU-NL/logging-essentials)
* [ACSC (Australian Cyber Security Centre) Logging and Forwarding Guide](https://www.cyber.gov.au/acsc/view-all-content/publications/windows-event-logging-and-forwarding)
* [Malware Archaeology Cheat Sheets](https://www.malwarearchaeology.com/cheat-sheets)
The following benchmarks were taken on a Lenovo P51 based on approximately 500 evtx files (130MB) from our [sample-evtx repository](https://github.com/Yamato-Security/hayabusa-sample-evtx) at 2021/12/23 with Hayabusa version 1.0.0.
# Sysmon Related Projects
| | Elapsed Time | Memory Usage | Unique Sigma Rules With Detections |
| :-------: | :----------: | :----------------------------------------------------------: | :--------------------------------: |
| Chainsaw | 7.5 seconds | 75 MB | 170 |
| Hayabusa | 7.8 seconds | 340 MB | 267 |
| Zircolite | 34 seconds | 380 MB (normally requires 3 times the size of the log files) | 237 |
* With hayabusa rules enabled, it will detect around 300 unique alerts and events.
* When tested on many event log files totaling 7.5 GB, it finished in under 7 minutes and used around 1 GB of memory. The amount of memory consumed is based on the size of the results, not on the size of the target evtx files.
* It is the only tool that provides a consolidated single CSV timeline for analysis in tools like [Timeline Explorer](https://ericzimmerman.github.io/#!index.md).
To create the most forensic evidence and detect with the highest accuracy, you need to install sysmon. We recommend the following sites:
* [Sysmon Modular](https://github.com/olafhartong/sysmon-modular)
* [TrustedSec Sysmon Community Guide](https://github.com/trustedsec/SysmonCommunityGuide)
# Community Documentation
@@ -503,4 +602,8 @@ This project is currently actively maintained and we are happy to fix any bugs r
# License
Hayabusa is released under [GPLv3](https://www.gnu.org/licenses/gpl-3.0.en.html) and all rules are released under the [Detection Rule License (DRL) 1.1](https://github.com/SigmaHQ/sigma/blob/master/LICENSE.Detection.Rules.md).
# Twitter
You can receive the latest news about Hayabusa, rule updates, other Yamato Security tools, etc... by following us on Twitter at [@SecurityYamato](https://twitter.com/SecurityYamato).

config/level_tuning.txt

@@ -0,0 +1,2 @@
id,new_level
00000000-0000-0000-0000-000000000000,informational # sample level tuning line

config/output_tag.txt

@@ -0,0 +1,15 @@
tag_full_str,tag_output_str
attack.reconnaissance,Recon
attack.resource_development,ResDev
attack.initial_access,InitAccess
attack.execution,Exec
attack.persistence,Persis
attack.privilege_escalation,PrivEsc
attack.defense_evasion,Evas
attack.credential_access,CredAccess
attack.discovery,Disc
attack.lateral_movement,LatMov
attack.collection,Collect
attack.command_and_control,C2
attack.exfiltration,Exfil
attack.impact,Impact

config/pivot_keywords.txt

@@ -0,0 +1,8 @@
Users.SubjectUserName
Users.TargetUserName
Users.User
Logon IDs.SubjectLogonId
Logon IDs.TargetLogonId
Workstation Names.WorkstationName
Ip Addresses.IpAddress
Processes.Image


@@ -0,0 +1,496 @@
eventid,event_title
6406,%1 registered to Windows Firewall to control filtering for the following: %2
1,Process Creation.
2,File Creation Timestamp Changed. (Possible Timestomping)
3,Network Connection.
4,Sysmon Service State Changed.
5,Process Terminated.
6,Driver Loaded.
7,Image Loaded.
8,Remote Thread Created. (Possible Code Injection)
9,Raw Access Read.
10,Process Access.
11,File Creation or Overwrite.
12,Registry Object Created/Deletion.
13,Registry Value Set.
14,Registry Key or Value Rename.
15,Alternate Data Stream Created.
16,Sysmon Service Configuration Changed.
17,Named Pipe Created.
18,Named Pipe Connection.
19,WmiEventFilter Activity.
20,WmiEventConsumer Activity.
21,WmiEventConsumerToFilter Activity.
22,DNS Query.
23,Deleted File Archived.
24,Clipboard Changed.
25,Process Tampering. (Possible Process Hollowing or Herpaderping)
26,File Deleted.
27,KDC Encryption Type Configuration
31,Windows Update Failed
34,Windows Update Failed
35,Windows Update Failed
43,New Device Information
81,Processing client request for operation CreateShell
82,Entering the plugin for operation CreateShell with a ResourceURI
104,Event Log was Cleared
106,A task has been scheduled
134,Sending response for operation CreateShell
169,Creating WSMan Session (on Server)
255,Sysmon Error.
400,New Mass Storage Installation
410,New Mass Storage Installation
800,Summary of Software Activities
903,New Application Installation
904,New Application Installation
905,Updated Application
906,Updated Application
907,Removed Application
908,Removed Application
1001,BSOD
1005,Scan Failed
1006,Detected Malware
1008,Action on Malware Failed
1009,Hotpatching Failed
1010,Failed to remove item from quarantine
1022,New MSI File Installed
1033,New MSI File Installed
1100,The event logging service has shut down
1101,Audit events have been dropped by the transport.
1102,The audit log was cleared
1104,The security Log is now full
1105,Event log automatic backup
1108,The event logging service encountered an error
1125,Group Policy: Internal Error
1127,Group Policy: Generic Internal Error
1129,Group Policy: Group Policy Application Failed due to Connectivity
1149,User authentication succeeded
2001,Failed to update signatures
2003,Failed to update engine
2004,Firewall Rule Add
2004,Reverting to last known good set of signatures
2005,Firewall Rule Change
2006,Firewall Rule Deleted
2009,Firewall Failed to load Group Policy
2033,Firewall Rule Deleted
3001,Code Integrity Check Warning
3002,Code Integrity Check Warning
3002,Real-Time Protection failed
3003,Code Integrity Check Warning
3004,Code Integrity Check Warning
3010,Code Integrity Check Warning
3023,Code Integrity Check Warning
4103,Module logging. Executing Pipeline.
4104,Script Block Logging.
4105,CommandStart - Started
4106,CommandStart - Stopped
4608,Windows is starting up
4609,Windows is shutting down
4610,An authentication package has been loaded by the Local Security Authority
4611,A trusted logon process has been registered with the Local Security Authority
4612,"Internal resources allocated for the queuing of audit messages have been exhausted, leading to the loss of some audits."
4614,A notification package has been loaded by the Security Account Manager.
4615,Invalid use of LPC port
4616,The system time was changed.
4618,A monitored security event pattern has occurred
4621,Administrator recovered system from CrashOnAuditFail
4622,A security package has been loaded by the Local Security Authority.
4624,Logon Success
4625,Logon Failure
4627,Group Membership Information
4634,Account Logoff
4646,IKE DoS-prevention mode started
4647,User initiated logoff
4648,Explicit Logon
4649,A replay attack was detected
4650,An IPsec Main Mode security association was established
4651,An IPsec Main Mode security association was established
4652,An IPsec Main Mode negotiation failed
4653,An IPsec Main Mode negotiation failed
4654,An IPsec Quick Mode negotiation failed
4655,An IPsec Main Mode security association ended
4656,A handle to an object was requested
4657,A registry value was modified
4658,The handle to an object was closed
4659,A handle to an object was requested with intent to delete
4660,An object was deleted
4661,A handle to an object was requested
4662,An operation was performed on an object
4663,An attempt was made to access an object
4664,An attempt was made to create a hard link
4665,An attempt was made to create an application client context.
4666,An application attempted an operation
4667,An application client context was deleted
4668,An application was initialized
4670,Permissions on an object were changed
4671,An application attempted to access a blocked ordinal through the TBS
4672,Admin Logon
4673,A privileged service was called
4674,An operation was attempted on a privileged object
4675,SIDs were filtered
4685,The state of a transaction has changed
4688,Process Creation.
4689,A process has exited
4690,An attempt was made to duplicate a handle to an object
4691,Indirect access to an object was requested
4692,Backup of data protection master key was attempted
4693,Recovery of data protection master key was attempted
4694,Protection of auditable protected data was attempted
4695,Unprotection of auditable protected data was attempted
4696,A primary token was assigned to process
4697,A service was installed in the system
4698,A scheduled task was created
4699,A scheduled task was deleted
4700,A scheduled task was enabled
4701,A scheduled task was disabled
4702,A scheduled task was updated
4704,A user right was assigned
4705,A user right was removed
4706,A new trust was created to a domain
4707,A trust to a domain was removed
4709,IPsec Services was started
4710,IPsec Services was disabled
4711,PAStore Engine
4712,IPsec Services encountered a potentially serious failure
4713,Kerberos policy was changed
4714,Encrypted data recovery policy was changed
4715,The audit policy (SACL) on an object was changed
4716,Trusted domain information was modified
4717,System security access was granted to an account
4718,System security access was removed from an account
4719,System audit policy was changed
4720,A user account was created
4722,A user account was enabled
4723,An attempt was made to change an account's password
4724,An attempt was made to reset an account's password
4725,A user account was disabled
4726,A user account was deleted
4727,A security-enabled global group was created
4728,A member was added to a security-enabled global group
4729,A member was removed from a security-enabled global group
4730,A security-enabled global group was deleted
4731,A security-enabled local group was created
4732,A member was added to a security-enabled local group
4733,A member was removed from a security-enabled local group
4734,A security-enabled local group was deleted
4735,A security-enabled local group was changed
4737,A security-enabled global group was changed
4738,A user account was changed
4739,Domain Policy was changed
4740,A user account was locked out
4741,A computer account was created
4742,A computer account was changed
4743,A computer account was deleted
4744,A security-disabled local group was created
4745,A security-disabled local group was changed
4746,A member was added to a security-disabled local group
4747,A member was removed from a security-disabled local group
4748,A security-disabled local group was deleted
4749,A security-disabled global group was created
4750,A security-disabled global group was changed
4751,A member was added to a security-disabled global group
4752,A member was removed from a security-disabled global group
4753,A security-disabled global group was deleted
4754,A security-enabled universal group was created
4755,A security-enabled universal group was changed
4756,A member was added to a security-enabled universal group
4757,A member was removed from a security-enabled universal group
4758,A security-enabled universal group was deleted
4759,A security-disabled universal group was created
4760,A security-disabled universal group was changed
4761,A member was added to a security-disabled universal group
4762,A member was removed from a security-disabled universal group
4763,A security-disabled universal group was deleted
4764,A group's type was changed
4765,SID History was added to an account
4766,An attempt to add SID History to an account failed
4767,A user account was unlocked
4768,A Kerberos authentication ticket (TGT) was requested
4769,A Kerberos service ticket was requested
4770,A Kerberos service ticket was renewed
4771,Kerberos pre-authentication failed
4772,A Kerberos authentication ticket request failed
4773,A Kerberos service ticket request failed
4774,An account was mapped for logon
4775,An account could not be mapped for logon
4776,The domain controller attempted to validate the credentials for an account
4777,The domain controller failed to validate the credentials for an account
4778,A session was reconnected to a Window Station
4779,A session was disconnected from a Window Station
4780,The ACL was set on accounts which are members of administrators groups
4781,The name of an account was changed
4782,The password hash an account was accessed
4783,A basic application group was created
4784,A basic application group was changed
4785,A member was added to a basic application group
4786,A member was removed from a basic application group
4787,A non-member was added to a basic application group
4788,A non-member was removed from a basic application group.
4789,A basic application group was deleted
4790,An LDAP query group was created
4791,A basic application group was changed
4792,An LDAP query group was deleted
4793,The Password Policy Checking API was called
4794,An attempt was made to set the Directory Services Restore Mode administrator password
4800,The workstation was locked
4801,The workstation was unlocked
4802,The screen saver was invoked
4803,The screen saver was dismissed
4816,RPC detected an integrity violation while decrypting an incoming message
4817,Auditing settings on object were changed.
4864,A namespace collision was detected
4865,A trusted forest information entry was added
4866,A trusted forest information entry was removed
4867,A trusted forest information entry was modified
4868,The certificate manager denied a pending certificate request
4869,Certificate Services received a resubmitted certificate request
4870,Certificate Services revoked a certificate
4871,Certificate Services received a request to publish the certificate revocation list (CRL)
4872,Certificate Services published the certificate revocation list (CRL)
4873,A certificate request extension changed
4874,One or more certificate request attributes changed.
4875,Certificate Services received a request to shut down
4876,Certificate Services backup started
4877,Certificate Services backup completed
4878,Certificate Services restore started
4879,Certificate Services restore completed
4880,Certificate Services started
4881,Certificate Services stopped
4882,The security permissions for Certificate Services changed
4883,Certificate Services retrieved an archived key
4884,Certificate Services imported a certificate into its database
4885,The audit filter for Certificate Services changed
4886,Certificate Services received a certificate request
4887,Certificate Services approved a certificate request and issued a certificate
4888,Certificate Services denied a certificate request
4889,Certificate Services set the status of a certificate request to pending
4890,The certificate manager settings for Certificate Services changed.
4891,A configuration entry changed in Certificate Services
4892,A property of Certificate Services changed
4893,Certificate Services archived a key
4894,Certificate Services imported and archived a key
4895,Certificate Services published the CA certificate to Active Directory Domain Services
4896,One or more rows have been deleted from the certificate database
4897,Role separation enabled
4898,Certificate Services loaded a template
4899,A Certificate Services template was updated
4900,Certificate Services template security was updated
4902,The Per-user audit policy table was created
4904,An attempt was made to register a security event source
4905,An attempt was made to unregister a security event source
4906,The CrashOnAuditFail value has changed
4907,Auditing settings on object were changed
4908,Special Groups Logon table modified
4909,The local policy settings for the TBS were changed
4910,The group policy settings for the TBS were changed
4912,Per User Audit Policy was changed
4928,An Active Directory replica source naming context was established
4929,An Active Directory replica source naming context was removed
4930,An Active Directory replica source naming context was modified
4931,An Active Directory replica destination naming context was modified
4932,Synchronization of a replica of an Active Directory naming context has begun
4933,Synchronization of a replica of an Active Directory naming context has ended
4934,Attributes of an Active Directory object were replicated
4935,Replication failure begins
4936,Replication failure ends
4937,A lingering object was removed from a replica
4944,The following policy was active when the Windows Firewall started
4945,A rule was listed when the Windows Firewall started
4946,A change has been made to Windows Firewall exception list. A rule was added
4947,A change has been made to Windows Firewall exception list. A rule was modified
4948,A change has been made to Windows Firewall exception list. A rule was deleted
4949,Windows Firewall settings were restored to the default values
4950,A Windows Firewall setting has changed
4951,A rule has been ignored because its major version number was not recognized by Windows Firewall
4952,Parts of a rule have been ignored because its minor version number was not recognized by Windows Firewall
4953,A rule has been ignored by Windows Firewall because it could not parse the rule
4954,Windows Firewall Group Policy settings has changed. The new settings have been applied
4956,Windows Firewall has changed the active profile
4957,Windows Firewall did not apply the following rule
4958,Windows Firewall did not apply the following rule because the rule referred to items not configured on this computer
4960,IPsec dropped an inbound packet that failed an integrity check
4961,IPsec dropped an inbound packet that failed a replay check
4962,IPsec dropped an inbound packet that failed a replay check
4963,IPsec dropped an inbound clear text packet that should have been secured
4964,Special groups have been assigned to a new logon
4965,IPsec received a packet from a remote computer with an incorrect Security Parameter Index (SPI).
4976,"During Main Mode negotiation, IPsec received an invalid negotiation packet."
4977,"During Quick Mode negotiation, IPsec received an invalid negotiation packet."
4978,"During Extended Mode negotiation, IPsec received an invalid negotiation packet."
4979,IPsec Main Mode and Extended Mode security associations were established
4980,IPsec Main Mode and Extended Mode security associations were established
4981,IPsec Main Mode and Extended Mode security associations were established
4982,IPsec Main Mode and Extended Mode security associations were established
4983,An IPsec Extended Mode negotiation failed
4984,An IPsec Extended Mode negotiation failed
4985,The state of a transaction has changed
5008,Unexpected Error
5024,The Windows Firewall Service has started successfully
5025,The Windows Firewall Service has been stopped
5027,The Windows Firewall Service was unable to retrieve the security policy from the local storage
5028,The Windows Firewall Service was unable to parse the new security policy.
5029,The Windows Firewall Service failed to initialize the driver
5030,The Windows Firewall Service failed to start
5031,The Windows Firewall Service blocked an application from accepting incoming connections on the network.
5032,Windows Firewall was unable to notify the user that it blocked an application from accepting incoming connections on the network
5033,The Windows Firewall Driver has started successfully
5034,The Windows Firewall Driver has been stopped
5035,The Windows Firewall Driver failed to start
5037,The Windows Firewall Driver detected critical runtime error. Terminating
5038,Code integrity determined that the image hash of a file is not valid
5039,A registry key was virtualized.
5040,A change has been made to IPsec settings. An Authentication Set was added.
5041,A change has been made to IPsec settings. An Authentication Set was modified
5042,A change has been made to IPsec settings. An Authentication Set was deleted
5043,A change has been made to IPsec settings. A Connection Security Rule was added
5044,A change has been made to IPsec settings. A Connection Security Rule was modified
5045,A change has been made to IPsec settings. A Connection Security Rule was deleted
5046,A change has been made to IPsec settings. A Crypto Set was added
5047,A change has been made to IPsec settings. A Crypto Set was modified
5048,A change has been made to IPsec settings. A Crypto Set was deleted
5049,An IPsec Security Association was deleted
5050,An attempt to programmatically disable the Windows Firewall using a call to INetFwProfile
5051,A file was virtualized
5056,A cryptographic self test was performed
5057,A cryptographic primitive operation failed
5058,Key file operation
5059,Key migration operation
5060,Verification operation failed
5061,Cryptographic operation
5062,A kernel-mode cryptographic self test was performed
5063,A cryptographic provider operation was attempted
5064,A cryptographic context operation was attempted
5065,A cryptographic context modification was attempted
5066,A cryptographic function operation was attempted
5067,A cryptographic function modification was attempted
5068,A cryptographic function provider operation was attempted
5069,A cryptographic function property operation was attempted
5070,A cryptographic function property operation was attempted
5120,OCSP Responder Service Started
5121,OCSP Responder Service Stopped
5122,A Configuration entry changed in the OCSP Responder Service
5123,A configuration entry changed in the OCSP Responder Service
5124,A security setting was updated on OCSP Responder Service
5125,A request was submitted to OCSP Responder Service
5126,Signing Certificate was automatically updated by the OCSP Responder Service
5127,The OCSP Revocation Provider successfully updated the revocation information
5136,A directory service object was modified
5137,A directory service object was created
5138,A directory service object was undeleted
5139,A directory service object was moved
5140,A network share object was accessed
5141,A directory service object was deleted
5142,A network share object was added.
5143,A network share object was modified
5144,A network share object was deleted.
5145,A network share object was checked to see whether client can be granted desired access
5148,The Windows Filtering Platform has detected a DoS attack and entered a defensive mode; packets associated with this attack will be discarded.
5149,The DoS attack has subsided and normal processing is being resumed.
5150,The Windows Filtering Platform has blocked a packet.
5151,A more restrictive Windows Filtering Platform filter has blocked a packet.
5152,The Windows Filtering Platform blocked a packet
5153,A more restrictive Windows Filtering Platform filter has blocked a packet
5154,The Windows Filtering Platform has permitted an application or service to listen on a port for incoming connections
5155,The Windows Filtering Platform has blocked an application or service from listening on a port for incoming connections
5156,The Windows Filtering Platform has allowed a connection
5157,The Windows Filtering Platform has blocked a connection
5158,The Windows Filtering Platform has permitted a bind to a local port
5159,The Windows Filtering Platform has blocked a bind to a local port
5168,SPN check for SMB/SMB2 failed.
5376,Credential Manager credentials were backed up
5377,Credential Manager credentials were restored from a backup
5378,The requested credentials delegation was disallowed by policy
5440,The following callout was present when the Windows Filtering Platform Base Filtering Engine started
5441,The following filter was present when the Windows Filtering Platform Base Filtering Engine started
5442,The following provider was present when the Windows Filtering Platform Base Filtering Engine started
5443,The following provider context was present when the Windows Filtering Platform Base Filtering Engine started
5444,The following sub-layer was present when the Windows Filtering Platform Base Filtering Engine started
5446,A Windows Filtering Platform callout has been changed
5447,A Windows Filtering Platform filter has been changed
5448,A Windows Filtering Platform provider has been changed
5449,A Windows Filtering Platform provider context has been changed
5450,A Windows Filtering Platform sub-layer has been changed
5451,An IPsec Quick Mode security association was established
5452,An IPsec Quick Mode security association ended
5453,An IPsec negotiation with a remote computer failed because the IKE and AuthIP IPsec Keying Modules (IKEEXT) service is not started
5456,PAStore Engine applied Active Directory storage IPsec policy on the computer
5457,PAStore Engine failed to apply Active Directory storage IPsec policy on the computer
5458,PAStore Engine applied locally cached copy of Active Directory storage IPsec policy on the computer
5459,PAStore Engine failed to apply locally cached copy of Active Directory storage IPsec policy on the computer
5460,PAStore Engine applied local registry storage IPsec policy on the computer
5461,PAStore Engine failed to apply local registry storage IPsec policy on the computer
5462,PAStore Engine failed to apply some rules of the active IPsec policy on the computer
5463,PAStore Engine polled for changes to the active IPsec policy and detected no changes
5464,"PAStore Engine polled for changes to the active IPsec policy, detected changes, and applied them to IPsec Services"
5465,PAStore Engine received a control for forced reloading of IPsec policy and processed the control successfully
5466,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory cannot be reached, and will use the cached copy of the Active Directory IPsec policy instead"
5467,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory can be reached, and found no changes to the policy"
5468,"PAStore Engine polled for changes to the Active Directory IPsec policy, determined that Active Directory can be reached, found changes to the policy, and applied those changes"
5471,PAStore Engine loaded local storage IPsec policy on the computer
5472,PAStore Engine failed to load local storage IPsec policy on the computer
5473,PAStore Engine loaded directory storage IPsec policy on the computer
5474,PAStore Engine failed to load directory storage IPsec policy on the computer
5477,PAStore Engine failed to add quick mode filter
5478,IPsec Services has started successfully
5479,IPsec Services has been shut down successfully
5480,IPsec Services failed to get the complete list of network interfaces on the computer
5483,IPsec Services failed to initialize RPC server. IPsec Services could not be started
5484,IPsec Services has experienced a critical failure and has been shut down
5485,IPsec Services failed to process some IPsec filters on a plug-and-play event for network interfaces
6144,Security policy in the group policy objects has been applied successfully
6145,One or more errors occurred while processing security policy in the group policy objects
6272,Network Policy Server granted access to a user
6273,Network Policy Server denied access to a user
6274,Network Policy Server discarded the request for a user
6275,Network Policy Server discarded the accounting request for a user
6276,Network Policy Server quarantined a user
6277,Network Policy Server granted access to a user but put it on probation because the host did not meet the defined health policy
6278,Network Policy Server granted full access to a user because the host met the defined health policy
6279,Network Policy Server locked the user account due to repeated failed authentication attempts
6280,Network Policy Server unlocked the user account
6281,Code Integrity determined that the page hashes of an image file are not valid...
6400,BranchCache: Received an incorrectly formatted response while discovering availability of content.
6401,BranchCache: Received invalid data from a peer. Data discarded.
6402,BranchCache: The message to the hosted cache offering it data is incorrectly formatted.
6403,BranchCache: The hosted cache sent an incorrectly formatted response to the client.
6404,BranchCache: Hosted cache could not be authenticated using the provisioned SSL certificate.
6405,BranchCache: %2 instance(s) of event id %1 occurred.
6407,1% (no more info in MSDN)
6408,Registered product %1 failed and Windows Firewall is now controlling the filtering for %2
6410,Code integrity determined that a file does not meet the security requirements to load into a process.
7022,Windows Service Fail or Crash
7023,The %1 service terminated with the following error: %2
7023,Windows Service Fail or Crash
7024,Windows Service Fail or Crash
7026,Windows Service Fail or Crash
7030,"The service is marked as an interactive service. However, the system is configured to not allow interactive services. This service may not function properly."
7031,Windows Service Fail or Crash
7032,Windows Service Fail or Crash
7034,Windows Service Fail or Crash
7035,The %1 service was successfully sent a %2 control.
7036,The service entered the running/stopped state
7040,The start type of the %1 service was changed from %2 to %3.
7045,New Windows Service
8000,Starting a Wireless Connection
8001,Successfully connected to Wireless connection
8002,Wireless Connection Failed
8003,AppLocker Block Error
8003,Disconnected from Wireless connection
8004,AppLocker Block Warning
8005,AppLocker permitted the execution of a PowerShell script
8006,AppLocker Warning Error
8007,AppLocker Warning
8011,Starting a Wireless Connection
10000,Network Connection and Disconnection Status (Wired and Wireless)
10001,Network Connection and Disconnection Status (Wired and Wireless)
11000,Wireless Association Status
11001,Wireless Association Status
11002,Wireless Association Status
11004,"Wireless Security Started, Stopped, Successful, or Failed"
11005,"Wireless Security Started, Stopped, Successful, or Failed"
11006,"Wireless Security Started, Stopped, Successful, or Failed"
11010,"Wireless Security Started, Stopped, Successful, or Failed"
12011,Wireless Authentication Started and Failed
12012,Wireless Authentication Started and Failed
12013,Wireless Authentication Started and Failed
unregistered_event_id,Unknown
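The two-column `eventid,event_title` mapping above is the shape of file that the reworked `load_eventcode_info` (further down in this diff) expects. As a rough standalone sketch only, assuming the `csv` crate, a header row, and the `config/statistics_event_info.txt` path used elsewhere in this PR, loading it into a map could look like:

```rust
use std::collections::HashMap;

// Sketch: load an "eventid,event_title" CSV into a map.
// The file name and the presence of a header row are assumptions.
fn load_event_titles(path: &str) -> csv::Result<HashMap<String, String>> {
    let mut rdr = csv::ReaderBuilder::new()
        .has_headers(true)
        .from_path(path)?;
    let mut map = HashMap::new();
    for record in rdr.records() {
        let record = record?;
        if let (Some(id), Some(title)) = (record.get(0), record.get(1)) {
            map.insert(id.to_string(), title.to_string());
        }
    }
    Ok(map)
}

fn main() -> csv::Result<()> {
    let titles = load_event_titles("config/statistics_event_info.txt")?;
    println!("{:?}", titles.get("4624")); // e.g. Some("Account logon")
    Ok(())
}
```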

View File

@@ -1,80 +0,0 @@
eventid,event_title,detect_flg,comment
1,Sysmon process creation,Yes,
59,Bits Job Creation,Yes,
1100,Event logging service was shut down,,Good for finding signs of anti-forensics but most likely false positives when the system shuts down.
1101,Audit Events Have Been Dropped By The Transport,,
1102,Event log was cleared,Yes,Should not happen normally so this is a good event to look out for.
1107,Event processing error,,
4103,Powershell execution pipeline,Yes,
4608,Windows started up,,
4610,An authentication package has been loaded by the Local Security Authority,,
4611,A trusted logon process has been registered with the Local Security Authority,,
4614,A notification package has been loaded by the Security Account Manager,,
4616,System time was changed,,
4622,A security package has been loaded by the Local Security Authority,,
4624,Account logon,Yes,
4625,Failed logon,Yes,
4634,Logoff,Yes,
4647,Logoff,Yes,
4648,Explicit logon,Yes,
4672,Admin logon,Yes,
4688,New process started,,
4696,Primary token assigned to process,,
4692,Backup of data protection master key was attempted,,
4697,Service installed,,
4768,Kerberos TGT request,Yes,
4769,Kerberos service ticket request,Yes,
4717,System security access was granted to an account,,
4719,System audit policy was changed,,
4720,User account created,Yes,
4722,User account enabled,,
4724,Password reset,,
4725,User account disabled,,
4726,User account deleted,,
4728,User added to security global group,,
4729,User removed from security global group,,
4732,User added to security local group,,
4733,User removed from security local group,,
4735,Security local group was changed,,
4727,Security global group was changed,,
4738,User accounts properties changed,,
4739,Domain policy changed,,
4776,NTLM logon to local user,Yes,
4778,RDP session reconnected or user switched back through Fast User Switching,,
4779,RDP session disconnected or user switched away through Fast User Switching,,
4797,Attempt to query the account for a blank password,,
4798,Users local group membership was enumerated,,
4799,Local group membership was enumerated,,
4781,User name was changed,,
4800,Workstation was locked,,
4801,Workstation was unlocked,,
4826,Boot configuration data loaded,,
4902,Per-user audit policy table was created,,
4904,Attempt to register a security event source,,
4905,Attempt to unregister a security event source,,
4907,Auditing settings on object was changed,,
4944,Policy active when firewall started,,
4945,Rule listed when the firewall started,,Too much noise when firewall starts
4946,Rule added to firewall exception list,,
4947,Rule modified in firewall exception list,,
4948,Rule deleted in firewall exception list,,
4954,New setting applied to firewall group policy,,
4956,Firewall active profile changed,,
5024,Firewall started,,
5033,Firewall driver started,,
5038,Code integrity determined that the image hash of a file is not valid,,
5058,Key file operation,,
5059,Key migration operation,,
5061,Cryptographic operation,,
5140,Network share access,Yes,
5142,A network share object was added,,
5144,A network share object was deleted,,
5145,Network shared file access,Yes,
5379,Credential Manager credentials were read,,
5381,Vault credentials were read,,
5382,Vault credentials were read,,
5478,IPsec Services started,,
5889,An object was deleted from the COM+ Catalog,,
5890,An object was added to the COM+ Catalog,,
8001,Wireless access point connect,Yes,
unregistered_event_id,Unknown,,

(9 binary image files changed: 3 screenshots updated, 6 added; previews not shown)

View File

@@ -21,9 +21,11 @@ pub struct CsvFormat<'a> {
computer: &'a str,
event_i_d: &'a str,
level: &'a str,
mitre_attack: &'a str,
rule_title: &'a str,
details: &'a str,
mitre_attack: &'a str,
#[serde(skip_serializing_if = "Option::is_none")]
record_information: Option<&'a str>,
rule_path: &'a str,
file_path: &'a str,
}
@@ -37,6 +39,8 @@ pub struct DisplayFormat<'a> {
pub level: &'a str,
pub rule_title: &'a str,
pub details: &'a str,
#[serde(skip_serializing_if = "Option::is_none")]
pub record_information: Option<&'a str>,
}
/// Function that reads the level_color.txt file and returns a mapping of levels to text colors
@@ -49,7 +53,7 @@ pub fn set_output_color() -> Option<HashMap<String, Vec<u8>>> {
// If there is no color information, output simply falls back to the usual white text with no impact on behavior, so treat this as a warn
AlertMessage::warn(
&mut BufWriter::new(std::io::stderr().lock()),
&read_result.as_ref().unwrap_err(),
read_result.as_ref().unwrap_err(),
)
.ok();
return None;
@@ -71,12 +75,12 @@ pub fn set_output_color() -> Option<HashMap<String, Vec<u8>>> {
return;
}
let color_code = convert_color_result.unwrap();
if level.len() == 0 || color_code.len() < 3 {
if level.is_empty() || color_code.len() < 3 {
return;
}
color_map.insert(level.to_string(), color_code);
});
return Some(color_map);
Some(color_map)
}
pub fn after_fact() {
@@ -120,16 +124,15 @@ fn emit_csv<W: std::io::Write>(
displayflag: bool,
color_map: Option<HashMap<String, Vec<u8>>>,
) -> io::Result<()> {
let mut wtr;
if displayflag {
wtr = csv::WriterBuilder::new()
let mut wtr = if displayflag {
csv::WriterBuilder::new()
.double_quote(false)
.quote_style(QuoteStyle::Never)
.delimiter(b'|')
.from_writer(writer);
.from_writer(writer)
} else {
wtr = csv::WriterBuilder::new().from_writer(writer);
}
csv::WriterBuilder::new().from_writer(writer)
};
let messages = print::MESSAGES.lock().unwrap();
// Because there are six level categories: "Critical", "High", "Medium", "Low", "Informational", and "Undefined"
@@ -139,82 +142,49 @@ fn emit_csv<W: std::io::Write>(
for (time, detect_infos) in messages.iter() {
for detect_info in detect_infos {
let mut level = detect_info.level.to_string();
if level == "informational" {
level = "info".to_string();
}
if displayflag {
if color_map.is_some() {
let output_color =
_get_output_color(&color_map.as_ref().unwrap(), &detect_info.level);
wtr.serialize(DisplayFormat {
timestamp: &format!(
"{} ",
&format_time(time).truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
level: &format!(
" {} ",
&detect_info.level.truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
computer: &format!(
" {} ",
&detect_info.computername.truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
event_i_d: &format!(
" {} ",
&detect_info.eventid.truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
rule_title: &format!(
" {} ",
&detect_info.alert.truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
details: &format!(
" {}",
&detect_info.detail.truecolor(
output_color[0],
output_color[1],
output_color[2]
)
),
})?;
} else {
wtr.serialize(DisplayFormat {
timestamp: &format!("{} ", &format_time(time)),
level: &format!(" {} ", &detect_info.level),
computer: &format!(" {} ", &detect_info.computername),
event_i_d: &format!(" {} ", &detect_info.eventid),
rule_title: &format!(" {} ", &detect_info.alert),
details: &format!(" {}", &detect_info.detail),
})?;
}
let colors = color_map
.as_ref()
.map(|cl_mp| _get_output_color(cl_mp, &detect_info.level));
let colors = colors.as_ref();
let recinfo = detect_info
.record_information
.as_ref()
.map(|recinfo| _format_cell(recinfo, ColPos::Last, colors));
let details = detect_info
.detail
.chars()
.filter(|&c| !c.is_control())
.collect::<String>();
let dispformat = DisplayFormat {
timestamp: &_format_cell(&format_time(time), ColPos::First, colors),
level: &_format_cell(&level, ColPos::Other, colors),
computer: &_format_cell(&detect_info.computername, ColPos::Other, colors),
event_i_d: &_format_cell(&detect_info.eventid, ColPos::Other, colors),
rule_title: &_format_cell(&detect_info.alert, ColPos::Other, colors),
details: &_format_cell(&details, ColPos::Other, colors),
record_information: recinfo.as_deref(),
};
wtr.serialize(dispformat)?;
} else {
// Format used for CSV output
wtr.serialize(CsvFormat {
timestamp: &format_time(time),
file_path: &detect_info.filepath,
rule_path: &detect_info.rulepath,
level: &detect_info.level,
level: &level,
computer: &detect_info.computername,
event_i_d: &detect_info.eventid,
mitre_attack: &detect_info.tag_info,
rule_title: &detect_info.alert,
details: &detect_info.detail,
mitre_attack: &detect_info.tag_info,
record_information: detect_info.record_information.as_deref(),
file_path: &detect_info.filepath,
rule_path: &detect_info.rulepath,
})?;
}
let level_suffix = *configs::LEVELMAP
@@ -227,10 +197,10 @@ fn emit_csv<W: std::io::Write>(
total_detect_counts_by_level[level_suffix] += 1;
}
}
println!("");
println!();
wtr.flush()?;
println!("");
println!();
_print_unique_results(
total_detect_counts_by_level,
"Total".to_string(),
@@ -246,6 +216,29 @@ fn emit_csv<W: std::io::Write>(
Ok(())
}
enum ColPos {
First, // head of the row
Last,  // tail of the row
Other, // everything else
}
fn _format_cellpos(column: ColPos, colval: &str) -> String {
return match column {
ColPos::First => format!("{} ", colval),
ColPos::Last => format!(" {}", colval),
ColPos::Other => format!(" {} ", colval),
};
}
fn _format_cell(word: &str, column: ColPos, output_color: Option<&Vec<u8>>) -> String {
if let Some(color) = output_color {
let colval = format!("{}", word.truecolor(color[0], color[1], color[2]));
_format_cellpos(column, &colval)
} else {
_format_cellpos(column, word)
}
}
/// Function that prints result lines to stdout from the given unique and total detection counts (per level and overall)
fn _print_unique_results(
mut counts_by_level: Vec<u128>,
@@ -276,34 +269,32 @@ fn _print_unique_results(
)
.ok();
for (i, level_name) in levels.iter().enumerate() {
let output_str;
let output_raw_str = format!(
"{} {} {}: {}",
head_word, level_name, tail_word, counts_by_level[i]
);
if color_map.is_none() {
output_str = output_raw_str;
let output_str = if color_map.is_none() {
output_raw_str
} else {
let output_color =
_get_output_color(&color_map.as_ref().unwrap(), &level_name.to_string());
output_str = output_raw_str
let output_color = _get_output_color(color_map.as_ref().unwrap(), level_name);
output_raw_str
.truecolor(output_color[0], output_color[1], output_color[2])
.to_string();
}
.to_string()
};
writeln!(wtr, "{}", output_str).ok();
}
wtr.flush().ok();
}
/// Function that returns the truecolor value array corresponding to a level
fn _get_output_color(color_map: &HashMap<String, Vec<u8>>, level: &String) -> Vec<u8> {
fn _get_output_color(color_map: &HashMap<String, Vec<u8>>, level: &str) -> Vec<u8> {
// When no color is applied, output with 255,255,255 (white)
let mut output_color: Vec<u8> = vec![255, 255, 255];
let target_color = color_map.get(level);
if target_color.is_some() {
output_color = target_color.unwrap().to_vec();
if let Some(color) = target_color {
output_color = color.to_vec();
}
return output_color;
output_color
}
fn format_time(time: &DateTime<Utc>) -> String {
@@ -319,11 +310,11 @@ where
Tz::Offset: std::fmt::Display,
{
if configs::CONFIG.read().unwrap().args.is_present("rfc-2822") {
return time.to_rfc2822();
time.to_rfc2822()
} else if configs::CONFIG.read().unwrap().args.is_present("rfc-3339") {
return time.to_rfc3339();
time.to_rfc3339()
} else {
return time.format("%Y-%m-%d %H:%M:%S%.3f %:z").to_string();
time.format("%Y-%m-%d %H:%M:%S%.3f %:z").to_string()
}
}
@@ -331,6 +322,7 @@ where
mod tests {
use crate::afterfact::emit_csv;
use crate::detections::print;
use crate::detections::print::DetectInfo;
use chrono::{Local, TimeZone, Utc};
use serde_json::Value;
use std::fs::File;
@@ -353,6 +345,7 @@ mod tests {
let test_eventid = "1111";
let output = "pokepoke";
let test_attack = "execution/txxxx.yyy";
let test_recinfo = "record_infoinfo11";
{
let mut messages = print::MESSAGES.lock().unwrap();
messages.clear();
@@ -372,15 +365,19 @@ mod tests {
"##;
let event: Value = serde_json::from_str(val).unwrap();
messages.insert(
testfilepath.to_string(),
testrulepath.to_string(),
&event,
test_level.to_string(),
test_computername.to_string(),
test_eventid.to_string(),
test_title.to_string(),
output.to_string(),
test_attack.to_string(),
DetectInfo {
filepath: testfilepath.to_string(),
rulepath: testrulepath.to_string(),
level: test_level.to_string(),
computername: test_computername.to_string(),
eventid: test_eventid.to_string(),
alert: test_title.to_string(),
detail: String::default(),
tag_info: test_attack.to_string(),
record_information: Option::Some(test_recinfo.to_string()),
},
);
}
let expect_time = Utc
@@ -388,7 +385,7 @@ mod tests {
.unwrap();
let expect_tz = expect_time.with_timezone(&Local);
let expect =
"Timestamp,Computer,EventID,Level,RuleTitle,Details,MitreAttack,RulePath,FilePath\n"
"Timestamp,Computer,EventID,Level,MitreAttack,RuleTitle,Details,RecordInformation,RulePath,FilePath\n"
.to_string()
+ &expect_tz
.clone()
@@ -401,18 +398,19 @@ mod tests {
+ ","
+ test_level
+ ","
+ test_attack
+ ","
+ test_title
+ ","
+ output
+ ","
+ test_attack
+ test_recinfo
+ ","
+ testrulepath
+ ","
+ &testfilepath.to_string()
+ testfilepath
+ "\n";
let mut file: Box<dyn io::Write> =
Box::new(File::create("./test_emit_csv.csv".to_string()).unwrap());
let mut file: Box<dyn io::Write> = Box::new(File::create("./test_emit_csv.csv").unwrap());
assert!(emit_csv(&mut file, false, None).is_ok());
match read_to_string("./test_emit_csv.csv") {
Err(_) => panic!("Failed to open file."),
@@ -452,15 +450,19 @@ mod tests {
"##;
let event: Value = serde_json::from_str(val).unwrap();
messages.insert(
testfilepath.to_string(),
testrulepath.to_string(),
&event,
test_level.to_string(),
test_computername.to_string(),
test_eventid.to_string(),
test_title.to_string(),
output.to_string(),
test_attack.to_string(),
DetectInfo {
filepath: testfilepath.to_string(),
rulepath: testrulepath.to_string(),
level: test_level.to_string(),
computername: test_computername.to_string(),
eventid: test_eventid.to_string(),
alert: test_title.to_string(),
detail: String::default(),
tag_info: test_attack.to_string(),
record_information: Option::Some(String::default()),
},
);
messages.debug();
}
@@ -468,7 +470,8 @@ mod tests {
.datetime_from_str("1996-02-27T01:05:01Z", "%Y-%m-%dT%H:%M:%SZ")
.unwrap();
let expect_tz = expect_time.with_timezone(&Local);
let expect_header = "Timestamp|Computer|EventID|Level|RuleTitle|Details\n";
let expect_header =
"Timestamp|Computer|EventID|Level|RuleTitle|Details|RecordInformation\n";
let expect_colored = expect_header.to_string()
+ &get_white_color_string(
&expect_tz
@@ -486,6 +489,8 @@ mod tests {
+ &get_white_color_string(test_title)
+ " | "
+ &get_white_color_string(output)
+ " | "
+ &get_white_color_string("")
+ "\n";
let expect_nocoloed = expect_header.to_string()
+ &expect_tz
@@ -502,10 +507,12 @@ mod tests {
+ test_title
+ " | "
+ output
+ " | "
+ ""
+ "\n";
let mut file: Box<dyn io::Write> =
Box::new(File::create("./test_emit_csv_display.txt".to_string()).unwrap());
Box::new(File::create("./test_emit_csv_display.txt").unwrap());
assert!(emit_csv(&mut file, true, None).is_ok());
match read_to_string("./test_emit_csv_display.txt") {
Err(_) => panic!("Failed to open file."),
@@ -520,6 +527,6 @@ mod tests {
let white_color_header = "\u{1b}[38;2;255;255;255m";
let white_color_footer = "\u{1b}[0m";
return white_color_header.to_owned() + target + white_color_footer;
white_color_header.to_owned() + target + white_color_footer
}
}
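To make the new `ColPos`/`_format_cell` flow above easier to follow in isolation, here is a condensed, self-contained sketch. It assumes the `colored` crate (which the code above uses for `truecolor`); names and the sample values are illustrative only:

```rust
use colored::Colorize;

// Condensed sketch of the padding + truecolor cell formatting above.
enum ColPos {
    First, // head of the row
    Last,  // tail of the row
    Other, // everything else
}

fn format_cell(word: &str, pos: ColPos, rgb: Option<&Vec<u8>>) -> String {
    // Apply the color first, then pad according to the column position.
    let colval = match rgb {
        Some(c) => word.truecolor(c[0], c[1], c[2]).to_string(),
        None => word.to_string(),
    };
    match pos {
        ColPos::First => format!("{} ", colval),
        ColPos::Last => format!(" {}", colval),
        ColPos::Other => format!(" {} ", colval),
    }
}

fn main() {
    let red = vec![255, 0, 0];
    let row = [
        format_cell("2022-01-01 00:00:00.000 +00:00", ColPos::First, Some(&red)),
        format_cell("high", ColPos::Other, Some(&red)),
        format_cell("sample detail", ColPos::Last, None),
    ];
    println!("{}", row.join("|"));
}
```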

View File

@@ -1,10 +1,13 @@
use crate::detections::pivot::PivotKeyword;
use crate::detections::pivot::PIVOT_KEYWORD;
use crate::detections::print::AlertMessage;
use crate::detections::utils;
use chrono::{DateTime, Utc};
use clap::{App, AppSettings, ArgMatches};
use clap::{App, AppSettings, Arg, ArgMatches};
use hashbrown::HashMap;
use hashbrown::HashSet;
use lazy_static::lazy_static;
use regex::Regex;
use std::io::BufWriter;
use std::sync::RwLock;
lazy_static! {
@@ -16,24 +19,38 @@ lazy_static! {
levelmap.insert("MEDIUM".to_owned(), 3);
levelmap.insert("HIGH".to_owned(), 4);
levelmap.insert("CRITICAL".to_owned(), 5);
return levelmap;
levelmap
};
pub static ref EVENTKEY_ALIAS: EventKeyAliasConfig =
load_eventkey_alias("./rules/config/eventkey_alias.txt");
pub static ref EVENTKEY_ALIAS: EventKeyAliasConfig = load_eventkey_alias(&format!(
"{}/eventkey_alias.txt",
CONFIG.read().unwrap().folder_path
));
pub static ref IDS_REGEX: Regex =
Regex::new(r"^[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12}$").unwrap();
}
#[derive(Clone)]
pub struct ConfigReader {
pub args: ArgMatches<'static>,
pub folder_path: String,
pub event_timeline_config: EventInfoConfig,
pub target_eventids: TargetEventIds,
}
impl Default for ConfigReader {
fn default() -> Self {
Self::new()
}
}
impl ConfigReader {
pub fn new() -> Self {
let arg = build_app();
let folder_path_str = arg.value_of("config").unwrap_or("rules/config").to_string();
ConfigReader {
args: build_app(),
event_timeline_config: load_eventcode_info("config/timeline_event_info.txt"),
args: arg,
folder_path: folder_path_str,
event_timeline_config: load_eventcode_info("config/statistics_event_info.txt"),
target_eventids: load_target_ids("config/target_eventids.txt"),
}
}
@@ -41,7 +58,7 @@ impl ConfigReader {
fn build_app<'a>() -> ArgMatches<'a> {
let program = std::env::args()
.nth(0)
.next()
.and_then(|s| {
std::path::PathBuf::from(s)
.file_stem()
@@ -55,8 +72,10 @@ fn build_app<'a>() -> ArgMatches<'a> {
let usages = "-d --directory=[DIRECTORY] 'Directory of multiple .evtx files.'
-f --filepath=[FILEPATH] 'File path to one .evtx file.'
-r --rules=[RULEFILE/RULEDIRECTORY] 'Rule file or directory. (Default: ./rules)'
-F --full-data 'Print all field information.'
-r --rules=[RULEDIRECTORY/RULEFILE] 'Rule file or directory (default: ./rules)'
-c --color 'Output with color. (Terminal needs to support True Color.)'
-C --config=[RULECONFIGDIRECTORY] 'Rule config folder. (Default: ./rules/config)'
-o --output=[CSV_TIMELINE] 'Save the timeline in CSV format. (Example: results.csv)'
-v --verbose 'Output verbose information.'
-D --enable-deprecated-rules 'Enable rules marked as deprecated.'
@@ -73,12 +92,18 @@ fn build_app<'a>() -> ArgMatches<'a> {
-s --statistics 'Prints statistics of event IDs.'
-q --quiet 'Quiet mode. Do not display the launch banner.'
-Q --quiet-errors 'Quiet errors mode. Do not save error logs.'
-p --pivot-keywords-list 'Create a list of pivot keywords.'
--contributors 'Prints the list of contributors.'";
App::new(&program)
.about("Hayabusa: Aiming to be the world's greatest Windows event log analysis tool!")
.version("1.1.0")
.author("Yamato Security (https://github.com/Yamato-Security/hayabusa)")
.version("1.2.0")
.author("Yamato Security (https://github.com/Yamato-Security/hayabusa) @SecurityYamato")
.setting(AppSettings::VersionlessSubcommands)
.arg(
// TODO: When clap is updated to 3.x, these can be written in the usage texts...
Arg::from_usage("--level-tuning=[LEVEL_TUNING_FILE] 'Adjust rule level.'")
.default_value("./config/level_tuning.txt"),
)
.usage(usages)
.args_from_usage(usages)
.get_matches()
@@ -91,7 +116,7 @@ fn is_test_mode() -> bool {
}
}
return false;
false
}
#[derive(Debug, Clone)]
@@ -99,19 +124,25 @@ pub struct TargetEventIds {
ids: HashSet<String>,
}
impl Default for TargetEventIds {
fn default() -> Self {
Self::new()
}
}
impl TargetEventIds {
pub fn new() -> TargetEventIds {
return TargetEventIds {
TargetEventIds {
ids: HashSet::new(),
};
}
}
pub fn is_target(&self, id: &String) -> bool {
pub fn is_target(&self, id: &str) -> bool {
// If the set is empty, every EventId is targeted.
if self.ids.is_empty() {
return true;
}
return self.ids.contains(id);
self.ids.contains(id)
}
}
@@ -121,7 +152,7 @@ fn load_target_ids(path: &str) -> TargetEventIds {
if lines.is_err() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&lines.as_ref().unwrap_err(),
lines.as_ref().unwrap_err(),
)
.ok();
return ret;
@@ -134,7 +165,7 @@ fn load_target_ids(path: &str) -> TargetEventIds {
ret.ids.insert(line);
}
return ret;
ret
}
#[derive(Debug, Clone)]
@@ -143,6 +174,12 @@ pub struct TargetEventTime {
end_time: Option<DateTime<Utc>>,
}
impl Default for TargetEventTime {
fn default() -> Self {
Self::new()
}
}
impl TargetEventTime {
pub fn new() -> Self {
let start_time =
@@ -180,17 +217,17 @@ impl TargetEventTime {
} else {
None
};
return Self::set(start_time, end_time);
Self::set(start_time, end_time)
}
pub fn set(
start_time: Option<chrono::DateTime<chrono::Utc>>,
end_time: Option<chrono::DateTime<chrono::Utc>>,
input_start_time: Option<chrono::DateTime<chrono::Utc>>,
input_end_time: Option<chrono::DateTime<chrono::Utc>>,
) -> Self {
return Self {
start_time: start_time,
end_time: end_time,
};
Self {
start_time: input_start_time,
end_time: input_end_time,
}
}
pub fn is_target(&self, eventtime: &Option<DateTime<Utc>>) -> bool {
@@ -207,7 +244,7 @@ impl TargetEventTime {
return false;
}
}
return true;
true
}
}
@@ -219,34 +256,41 @@ pub struct EventKeyAliasConfig {
impl EventKeyAliasConfig {
pub fn new() -> EventKeyAliasConfig {
return EventKeyAliasConfig {
EventKeyAliasConfig {
key_to_eventkey: HashMap::new(),
key_to_split_eventkey: HashMap::new(),
};
}
}
pub fn get_event_key(&self, alias: &String) -> Option<&String> {
return self.key_to_eventkey.get(alias);
pub fn get_event_key(&self, alias: &str) -> Option<&String> {
self.key_to_eventkey.get(alias)
}
pub fn get_event_key_split(&self, alias: &String) -> Option<&Vec<usize>> {
return self.key_to_split_eventkey.get(alias);
pub fn get_event_key_split(&self, alias: &str) -> Option<&Vec<usize>> {
self.key_to_split_eventkey.get(alias)
}
}
impl Default for EventKeyAliasConfig {
fn default() -> Self {
Self::new()
}
}
fn load_eventkey_alias(path: &str) -> EventKeyAliasConfig {
let mut config = EventKeyAliasConfig::new();
// If eventkey_alias cannot be read, exit with an error.
let read_result = utils::read_csv(path);
if read_result.is_err() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&read_result.as_ref().unwrap_err(),
read_result.as_ref().unwrap_err(),
)
.ok();
return config;
}
// If eventkey_alias cannot be read, exit with an error.
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 2 {
return;
@@ -255,39 +299,71 @@ fn load_eventkey_alias(path: &str) -> EventKeyAliasConfig {
let empty = &"".to_string();
let alias = line.get(0).unwrap_or(empty);
let event_key = line.get(1).unwrap_or(empty);
if alias.len() == 0 || event_key.len() == 0 {
if alias.is_empty() || event_key.is_empty() {
return;
}
config
.key_to_eventkey
.insert(alias.to_owned(), event_key.to_owned());
let splits = event_key.split(".").map(|s| s.len()).collect();
let splits = event_key.split('.').map(|s| s.len()).collect();
config
.key_to_split_eventkey
.insert(alias.to_owned(), splits);
});
config.key_to_eventkey.shrink_to_fit();
return config;
config
}
/// Reads the config file and loads the key/fields map into the PIVOT_KEYWORD global variable.
pub fn load_pivot_keywords(path: &str) {
let read_result = utils::read_txt(path);
if read_result.is_err() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
read_result.as_ref().unwrap_err(),
)
.ok();
}
read_result.unwrap().into_iter().for_each(|line| {
let map: Vec<&str> = line.split('.').collect();
if map.len() != 2 {
return;
}
// Create the key if it does not exist yet
PIVOT_KEYWORD
.write()
.unwrap()
.entry(map[0].to_string())
.or_insert(PivotKeyword::new());
PIVOT_KEYWORD
.write()
.unwrap()
.get_mut(&map[0].to_string())
.unwrap()
.fields
.insert(map[1].to_string());
});
}
#[derive(Debug, Clone)]
pub struct EventInfo {
pub evttitle: String,
pub detectflg: String,
pub comment: String,
}
impl Default for EventInfo {
fn default() -> Self {
Self::new()
}
}
impl EventInfo {
pub fn new() -> EventInfo {
let evttitle = "Unknown".to_string();
let detectflg = "".to_string();
let comment = "".to_string();
return EventInfo {
evttitle,
detectflg,
comment,
};
EventInfo { evttitle }
}
}
#[derive(Debug, Clone)]
@@ -295,14 +371,20 @@ pub struct EventInfoConfig {
eventinfo: HashMap<String, EventInfo>,
}
impl Default for EventInfoConfig {
fn default() -> Self {
Self::new()
}
}
impl EventInfoConfig {
pub fn new() -> EventInfoConfig {
return EventInfoConfig {
EventInfoConfig {
eventinfo: HashMap::new(),
};
}
}
pub fn get_event_id(&self, eventid: &String) -> Option<&EventInfo> {
return self.eventinfo.get(eventid);
pub fn get_event_id(&self, eventid: &str) -> Option<&EventInfo> {
self.eventinfo.get(eventid)
}
}
@@ -313,33 +395,29 @@ fn load_eventcode_info(path: &str) -> EventInfoConfig {
if read_result.is_err() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&read_result.as_ref().unwrap_err(),
read_result.as_ref().unwrap_err(),
)
.ok();
return config;
}
// If timeline_event_info cannot be read, exit with an error.
// If statistics_event_info cannot be read, exit with an error.
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 4 {
if line.len() != 2 {
return;
}
let empty = &"".to_string();
let eventcode = line.get(0).unwrap_or(empty);
let event_title = line.get(1).unwrap_or(empty);
let detect_flg = line.get(2).unwrap_or(empty);
let comment = line.get(3).unwrap_or(empty);
infodata = EventInfo {
evttitle: event_title.to_string(),
detectflg: detect_flg.to_string(),
comment: comment.to_string(),
};
config
.eventinfo
.insert(eventcode.to_owned(), infodata.to_owned());
});
return config;
config
}
#[cfg(test)]
@@ -375,9 +453,9 @@ mod tests {
let within_range = Some("2019-02-27T01:05:01Z".parse::<DateTime<Utc>>().unwrap());
let out_of_range2 = Some("2021-02-27T01:05:01Z".parse::<DateTime<Utc>>().unwrap());
assert_eq!(time_filter.is_target(&out_of_range1), false);
assert_eq!(time_filter.is_target(&within_range), true);
assert_eq!(time_filter.is_target(&out_of_range2), false);
assert!(!time_filter.is_target(&out_of_range1));
assert!(time_filter.is_target(&within_range));
assert!(!time_filter.is_target(&out_of_range2));
}
#[test]
@@ -386,7 +464,7 @@ mod tests {
let end_time = Some("2020-03-30T12:00:09Z".parse::<DateTime<Utc>>().unwrap());
let time_filter = configs::TargetEventTime::set(start_time, end_time);
assert_eq!(time_filter.is_target(&start_time), true);
assert_eq!(time_filter.is_target(&end_time), true);
assert!(time_filter.is_target(&start_time));
assert!(time_filter.is_target(&end_time));
}
}
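The `TargetEventTime` tests above pin down the intended boundary behavior: both the start and end of the window are inclusive. A minimal standalone sketch of that window check, using only `chrono` (the struct name and window values here are stand-ins, not the real `configs::TargetEventTime`):

```rust
use chrono::{DateTime, Utc};

// Sketch of the inclusive start/end window check exercised by the tests above.
struct TimeWindow {
    start: Option<DateTime<Utc>>,
    end: Option<DateTime<Utc>>,
}

impl TimeWindow {
    fn is_target(&self, t: &Option<DateTime<Utc>>) -> bool {
        if let Some(t) = t {
            if let Some(s) = &self.start {
                if t < s {
                    return false; // before the window
                }
            }
            if let Some(e) = &self.end {
                if t > e {
                    return false; // after the window
                }
            }
        }
        true
    }
}

fn main() {
    let w = TimeWindow {
        start: Some("2018-02-27T01:05:01Z".parse().unwrap()), // assumed start
        end: Some("2020-03-30T12:00:09Z".parse().unwrap()),
    };
    assert!(w.is_target(&Some("2019-02-27T01:05:01Z".parse().unwrap())));
    assert!(!w.is_target(&Some("2021-02-27T01:05:01Z".parse().unwrap())));
}
```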

View File

@@ -1,11 +1,15 @@
extern crate csv;
use crate::detections::configs;
use crate::detections::pivot::insert_pivot_keyword;
use crate::detections::print::AlertMessage;
use crate::detections::print::DetectInfo;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::MESSAGES;
use crate::detections::print::PIVOT_KEYWORD_LIST_FLAG;
use crate::detections::print::QUIET_ERRORS_FLAG;
use crate::detections::print::STATISTICS_FLAG;
use crate::detections::print::TAGS_CONFIG;
use crate::detections::rule;
use crate::detections::rule::AggResult;
use crate::detections::rule::RuleNode;
@@ -28,11 +32,12 @@ pub struct EvtxRecordInfo {
pub record: Value, // one record's worth of data serialized into JSON
pub data_string: String,
pub key_2_value: hashbrown::HashMap<String, String>,
pub record_information: Option<String>,
}
impl EvtxRecordInfo {
pub fn get_value(&self, key: &String) -> Option<&String> {
return self.key_2_value.get(key);
pub fn get_value(&self, key: &str) -> Option<&String> {
self.key_2_value.get(key)
}
}
@@ -42,12 +47,12 @@ pub struct Detection {
}
impl Detection {
pub fn new(rules: Vec<RuleNode>) -> Detection {
return Detection { rules: rules };
pub fn new(rule_nodes: Vec<RuleNode>) -> Detection {
Detection { rules: rule_nodes }
}
pub fn start(self, rt: &Runtime, records: Vec<EvtxRecordInfo>) -> Self {
return rt.block_on(self.execute_rules(records));
rt.block_on(self.execute_rules(records))
}
// Parses the rule files.
@@ -104,9 +109,9 @@ impl Detection {
});
}
parseerror_count += 1;
println!(""); // 一行開けるためのprintln
println!(); // 一行開けるためのprintln
});
return Option::None;
Option::None
};
// parse rule files
let ret = rulefile_loader
@@ -120,7 +125,7 @@ impl Detection {
&parseerror_count,
&rulefile_loader.ignorerule_count,
);
return ret;
ret
}
// Runs multiple rules, one at a time, against multiple event records.
@@ -132,10 +137,7 @@ impl Detection {
.into_iter()
.map(|rule| {
let records_cloned = Arc::clone(&records_arc);
return spawn(async move {
let moved_rule = Detection::execute_rule(rule, records_cloned);
return moved_rule;
});
spawn(async move { Detection::execute_rule(rule, records_cloned) })
})
.collect();
@@ -151,7 +153,7 @@ impl Detection {
// Detection::execute_rule returns the rule it received as an argument so that self.rules can take ownership back.
self.rules = rules;
return self;
self
}
pub fn add_aggcondition_msges(self, rt: &Runtime) {
@@ -175,17 +177,23 @@ impl Detection {
fn execute_rule(mut rule: RuleNode, records: Arc<Vec<EvtxRecordInfo>>) -> RuleNode {
let agg_condition = rule.has_agg_condition();
for record_info in records.as_ref() {
let result = rule.select(&record_info);
let result = rule.select(record_info);
if !result {
continue;
}
if *PIVOT_KEYWORD_LIST_FLAG {
insert_pivot_keyword(&record_info.record);
continue;
}
// If there is no aggregation condition, handle output immediately
if !agg_condition {
Detection::insert_message(&rule, &record_info);
Detection::insert_message(&rule, record_info);
}
}
return rule;
rule
}
/// Function for displaying records that matched a rule's condition
@@ -193,23 +201,33 @@ impl Detection {
let tag_info: Vec<String> = rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.into_iter()
.map(|info| info.as_str().unwrap_or("").replace("attack.", ""))
.iter()
.filter_map(|info| TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())))
.map(|str| str.to_owned())
.collect();
MESSAGES.lock().unwrap().insert(
record_info.evtx_filepath.to_string(),
rule.rulepath.to_string(),
&record_info.record,
rule.yaml["level"].as_str().unwrap_or("-").to_string(),
record_info.record["Event"]["System"]["Computer"]
let recinfo = record_info
.record_information
.as_ref()
.map(|recinfo| recinfo.to_string());
let detect_info = DetectInfo {
filepath: record_info.evtx_filepath.to_string(),
rulepath: rule.rulepath.to_string(),
level: rule.yaml["level"].as_str().unwrap_or("-").to_string(),
computername: record_info.record["Event"]["System"]["Computer"]
.to_string()
.replace("\"", ""),
get_serde_number_to_string(&record_info.record["Event"]["System"]["EventID"])
.unwrap_or("-".to_owned())
.to_string(),
rule.yaml["title"].as_str().unwrap_or("").to_string(),
.replace('\"', ""),
eventid: get_serde_number_to_string(&record_info.record["Event"]["System"]["EventID"])
.unwrap_or_else(|| "-".to_owned()),
alert: rule.yaml["title"].as_str().unwrap_or("").to_string(),
detail: String::default(),
tag_info: tag_info.join(" | "),
record_information: recinfo,
};
MESSAGES.lock().unwrap().insert(
&record_info.record,
rule.yaml["details"].as_str().unwrap_or("").to_string(),
tag_info.join(" : "),
detect_info,
);
}
@@ -218,21 +236,32 @@ impl Detection {
let tag_info: Vec<String> = rule.yaml["tags"]
.as_vec()
.unwrap_or(&Vec::default())
.into_iter()
.map(|info| info.as_str().unwrap_or("").replace("attack.", ""))
.iter()
.filter_map(|info| TAGS_CONFIG.get(info.as_str().unwrap_or(&String::default())))
.map(|str| str.to_owned())
.collect();
let output = Detection::create_count_output(rule, &agg_result);
MESSAGES.lock().unwrap().insert_message(
"-".to_owned(),
rule.rulepath.to_owned(),
agg_result.start_timedate,
rule.yaml["level"].as_str().unwrap_or("").to_owned(),
"-".to_owned(),
"-".to_owned(),
rule.yaml["title"].as_str().unwrap_or("").to_owned(),
output.to_owned(),
tag_info.join(" : "),
)
let rec_info = if configs::CONFIG.read().unwrap().args.is_present("full-data") {
Option::Some(String::default())
} else {
Option::None
};
let detect_info = DetectInfo {
filepath: "-".to_owned(),
rulepath: rule.rulepath.to_owned(),
level: rule.yaml["level"].as_str().unwrap_or("").to_owned(),
computername: "-".to_owned(),
eventid: "-".to_owned(),
alert: rule.yaml["title"].as_str().unwrap_or("").to_owned(),
detail: output,
record_information: rec_info,
tag_info: tag_info.join(" : "),
};
MESSAGES
.lock()
.unwrap()
.insert_message(detect_info, agg_result.start_timedate)
}
/// Function that returns the detection output string for the count part of an aggregation condition
@@ -242,15 +271,11 @@ impl Detection {
let agg_condition_raw_str: Vec<&str> = rule.yaml["detection"]["condition"]
.as_str()
.unwrap()
.split("|")
.split('|')
.collect();
// An aggregation condition is assumed to exist by the time this function is called, so no check is done before the unwrap
let agg_condition = rule.get_agg_condition().unwrap();
let exist_timeframe = rule.yaml["detection"]["timeframe"]
.as_str()
.unwrap_or("")
.to_string()
!= "";
let exist_timeframe = rule.yaml["detection"]["timeframe"].as_str().unwrap_or("") != "";
// An aggregation condition is assumed to exist by the time this function is called, so the agg_condition array has length 2
ret.push_str(agg_condition_raw_str[1].trim());
if exist_timeframe {
@@ -281,8 +306,9 @@ impl Detection {
));
}
return ret;
ret
}
pub fn print_rule_load_info(
rc: &HashMap<String, u128>,
parseerror_count: &u128,
@@ -302,7 +328,7 @@ impl Detection {
"Total enabled detection rules: {}",
total - ignore_count - parseerror_count
);
println!("");
println!();
}
}
@@ -498,4 +524,7 @@ mod tests {
expected_output
);
}
#[test]
fn test_create_fields_value() {}
}
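The `tag_info` change above switches from stripping the `attack.` prefix to looking each tag up in `TAGS_CONFIG` and silently dropping tags without a mapping. A minimal sketch of that lookup; the `attack.impact` → `Impact` pair mirrors the example cited in `create_tags_config` later in this diff, and `std::collections::HashMap` stands in for the `hashbrown` map the real code uses:

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical stand-in for TAGS_CONFIG (loaded from config/output_tag.txt).
    let tags_config: HashMap<String, String> = HashMap::from([
        ("attack.impact".to_string(), "Impact".to_string()),
    ]);
    let rule_tags = ["attack.impact", "attack.t9999"];
    // Tags with no mapping are dropped rather than passed through.
    let tag_info: Vec<String> = rule_tags
        .iter()
        .filter_map(|t| tags_config.get(*t))
        .cloned()
        .collect();
    assert_eq!(tag_info.join(" | "), "Impact");
    println!("{}", tag_info.join(" | "));
}
```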

View File

@@ -1,5 +1,6 @@
pub mod configs;
pub mod detection;
pub mod pivot;
pub mod print;
pub mod rule;
pub mod utils;

src/detections/pivot.rs (new file, 270 lines)
View File

@@ -0,0 +1,270 @@
use hashbrown::HashMap;
use hashbrown::HashSet;
use lazy_static::lazy_static;
use serde_json::Value;
use std::sync::RwLock;
use crate::detections::configs;
use crate::detections::utils::get_serde_number_to_string;
#[derive(Debug)]
pub struct PivotKeyword {
pub keywords: HashSet<String>,
pub fields: HashSet<String>,
}
lazy_static! {
pub static ref PIVOT_KEYWORD: RwLock<HashMap<String, PivotKeyword>> =
RwLock::new(HashMap::new());
}
impl Default for PivotKeyword {
fn default() -> Self {
Self::new()
}
}
impl PivotKeyword {
pub fn new() -> PivotKeyword {
PivotKeyword {
keywords: HashSet::new(),
fields: HashSet::new(),
}
}
}
/// For records whose level is higher than low, if a keyword is found in the
/// record, insert it into PIVOT_KEYWORD.keywords.
pub fn insert_pivot_keyword(event_record: &Value) {
// Continue only if the level is low or higher
let mut is_exist_event_key = false;
let mut tmp_event_record: &Value = event_record;
for s in ["Event", "System", "Level"] {
if let Some(record) = tmp_event_record.get(s) {
is_exist_event_key = true;
tmp_event_record = record;
}
}
if is_exist_event_key {
let hash_value = get_serde_number_to_string(tmp_event_record);
if hash_value.is_some() && hash_value.as_ref().unwrap() == "infomational"
|| hash_value.as_ref().unwrap() == "undefined"
|| hash_value.as_ref().unwrap() == "-"
{
return;
}
} else {
return;
}
for (_, pivot) in PIVOT_KEYWORD.write().unwrap().iter_mut() {
for field in &pivot.fields {
if let Some(array_str) = configs::EVENTKEY_ALIAS.get_event_key(&String::from(field)) {
let split: Vec<&str> = array_str.split('.').collect();
let mut is_exist_event_key = false;
let mut tmp_event_record: &Value = event_record;
for s in split {
if let Some(record) = tmp_event_record.get(s) {
is_exist_event_key = true;
tmp_event_record = record;
}
}
if is_exist_event_key {
let hash_value = get_serde_number_to_string(tmp_event_record);
if let Some(value) = hash_value {
if value == "-" || value == "127.0.0.1" || value == "::1" {
continue;
}
pivot.keywords.insert(value);
};
}
}
}
}
}
#[cfg(test)]
mod tests {
use crate::detections::configs::load_pivot_keywords;
use crate::detections::pivot::insert_pivot_keyword;
use crate::detections::pivot::PIVOT_KEYWORD;
use serde_json;
// PIVOT_KEYWORD is global, so the effects of other functions must also be taken into account.
#[test]
fn insert_pivot_keyword_local_ip4() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "high"
},
"EventData": {
"IpAddress": "127.0.0.1"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(!PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("127.0.0.1"));
}
#[test]
fn insert_pivot_keyword_ip4() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "high"
},
"EventData": {
"IpAddress": "10.0.0.1"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("10.0.0.1"));
}
#[test]
fn insert_pivot_keyword_ip_empty() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "high"
},
"EventData": {
"IpAddress": "-"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(!PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("-"));
}
#[test]
fn insert_pivot_keyword_local_ip6() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "high"
},
"EventData": {
"IpAddress": "::1"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(!PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("::1"));
}
#[test]
fn insert_pivot_keyword_level_infomational() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "infomational"
},
"EventData": {
"IpAddress": "10.0.0.2"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(!PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("10.0.0.2"));
}
#[test]
fn insert_pivot_keyword_level_low() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "low"
},
"EventData": {
"IpAddress": "10.0.0.1"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("10.0.0.1"));
}
#[test]
fn insert_pivot_keyword_level_none() {
load_pivot_keywords("test_files/config/pivot_keywords.txt");
let record_json_str = r#"
{
"Event": {
"System": {
"Level": "-"
},
"EventData": {
"IpAddress": "10.0.0.3"
}
}
}"#;
insert_pivot_keyword(&serde_json::from_str(record_json_str).unwrap());
assert!(!PIVOT_KEYWORD
.write()
.unwrap()
.get_mut("Ip Addresses")
.unwrap()
.keywords
.contains("10.0.0.3"));
}
}
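`insert_pivot_keyword` resolves each configured field through `EVENTKEY_ALIAS` into a dotted JSON path and then walks the record along that path. A condensed sketch of the walk, breaking early on a missing key (the production loop approximates this with an `is_exist_event_key` flag instead); the record and path are taken from the tests above:

```rust
use serde_json::{json, Value};

fn main() {
    // Condensed sketch of the dotted-path walk in insert_pivot_keyword.
    let record = json!({
        "Event": { "EventData": { "IpAddress": "10.0.0.1" } }
    });
    let mut cur: &Value = &record;
    let mut found = true;
    for key in "Event.EventData.IpAddress".split('.') {
        match cur.get(key) {
            Some(v) => cur = v,
            None => {
                found = false;
                break;
            }
        }
    }
    assert!(found);
    assert_eq!(cur.as_str(), Some("10.0.0.1"));
}
```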

View File

@@ -2,8 +2,6 @@ extern crate lazy_static;
use crate::detections::configs;
use crate::detections::utils;
use crate::detections::utils::get_serde_number_to_string;
use crate::filter::DataFilterRule;
use crate::filter::FILTER_REGEX;
use chrono::{DateTime, Local, TimeZone, Utc};
use hashbrown::HashMap;
use lazy_static::lazy_static;
@@ -33,6 +31,7 @@ pub struct DetectInfo {
pub alert: String,
pub detail: String,
pub tag_info: String,
pub record_information: Option<String>,
}
pub struct AlertMessage {}
@@ -55,6 +54,19 @@ lazy_static! {
.unwrap()
.args
.is_present("statistics");
pub static ref TAGS_CONFIG: HashMap<String, String> =
Message::create_tags_config("config/output_tag.txt");
pub static ref PIVOT_KEYWORD_LIST_FLAG: bool = configs::CONFIG
.read()
.unwrap()
.args
.is_present("pivot-keywords-list");
}
impl Default for Message {
fn default() -> Self {
Self::new()
}
}
impl Message {
@@ -63,74 +75,54 @@ impl Message {
Message { map: messages }
}
/// Function that sets up a message. To support aggconditions, the target output time is passed in Datetime form rather than taken from a record
pub fn insert_message(
&mut self,
target_file: String,
rule_path: String,
event_time: DateTime<Utc>,
level: String,
computername: String,
eventid: String,
event_title: String,
event_detail: String,
tag_info: String,
) {
let detect_info = DetectInfo {
filepath: target_file,
rulepath: rule_path,
level: level,
computername: computername,
eventid: eventid,
alert: event_title,
detail: event_detail,
tag_info: tag_info,
};
/// Function that creates a HashMap from a tag's full name, as written in the file, to the string it is replaced with for display. Tags that do not correspond to a key in this HashMap are not output.
/// ex. attack.impact,Impact
pub fn create_tags_config(path: &str) -> HashMap<String, String> {
let read_result = utils::read_csv(path);
if read_result.is_err() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
read_result.as_ref().unwrap_err(),
)
.ok();
return HashMap::default();
}
let mut ret: HashMap<String, String> = HashMap::new();
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 2 {
return;
}
match self.map.get_mut(&event_time) {
Some(v) => {
v.push(detect_info);
}
None => {
let m = vec![detect_info; 1];
self.map.insert(event_time, m);
}
let empty = &"".to_string();
let tag_full_str = line.get(0).unwrap_or(empty).trim();
let tag_replace_str = line.get(1).unwrap_or(empty).trim();
ret.insert(tag_full_str.to_owned(), tag_replace_str.to_owned());
});
ret
}
/// Function that sets up a message. To support aggconditions, the target output time is passed in Datetime form rather than taken from a record
pub fn insert_message(&mut self, detect_info: DetectInfo, event_time: DateTime<Utc>) {
if let Some(v) = self.map.get_mut(&event_time) {
v.push(detect_info);
} else {
let m = vec![detect_info; 1];
self.map.insert(event_time, m);
}
}
/// Sets a message
pub fn insert(
&mut self,
target_file: String,
rule_path: String,
event_record: &Value,
level: String,
computername: String,
eventid: String,
event_title: String,
output: String,
tag_info: String,
) {
let message = &self.parse_message(event_record, output);
pub fn insert(&mut self, event_record: &Value, output: String, mut detect_info: DetectInfo) {
detect_info.detail = self.parse_message(event_record, output);
let default_time = Utc.ymd(1970, 1, 1).and_hms(0, 0, 0);
let time = Message::get_event_time(event_record).unwrap_or(default_time);
self.insert_message(
target_file,
rule_path,
time,
level,
computername,
eventid,
event_title,
message.to_string(),
tag_info,
)
self.insert_message(detect_info, time)
}
fn parse_message(&mut self, event_record: &Value, output: String) -> String {
let mut return_message: String = output;
let mut hash_map: HashMap<String, String> = HashMap::new();
let mut output_filter: Option<&DataFilterRule> = None;
for caps in ALIASREGEX.captures_iter(&return_message) {
let full_target_str = &caps[0];
let target_length = full_target_str.chars().count() - 2; // The meaning of 2 is two percent
@@ -140,26 +132,29 @@ impl Message {
.take(target_length)
.collect::<String>();
if let Some(array_str) = configs::EVENTKEY_ALIAS.get_event_key(&target_str) {
let split: Vec<&str> = array_str.split(".").collect();
let mut is_exist_event_key = false;
let mut tmp_event_record: &Value = event_record.into();
for s in &split {
if let Some(record) = tmp_event_record.get(s) {
is_exist_event_key = true;
tmp_event_record = record;
output_filter = FILTER_REGEX.get(&s.to_string());
}
let array_str =
if let Some(_array_str) = configs::EVENTKEY_ALIAS.get_event_key(&target_str) {
_array_str.to_string()
} else {
"Event.EventData.".to_owned() + &target_str
};
let split: Vec<&str> = array_str.split('.').collect();
let mut is_exist_event_key = false;
let mut tmp_event_record: &Value = event_record;
for s in &split {
if let Some(record) = tmp_event_record.get(s) {
is_exist_event_key = true;
tmp_event_record = record;
}
if is_exist_event_key {
let mut hash_value = get_serde_number_to_string(tmp_event_record);
if hash_value.is_some() {
if output_filter.is_some() {
hash_value =
utils::replace_target_character(hash_value.as_ref(), output_filter);
}
hash_map.insert(full_target_str.to_string(), hash_value.unwrap());
}
}
if is_exist_event_key {
let hash_value = get_serde_number_to_string(tmp_event_record);
if let Some(hash_value) = hash_value {
// Unicode whitespace characters are hard to read when written to CSV as-is, so convert them to spaces. Note that leading and trailing whitespace characters are simply removed.
let hash_value: Vec<&str> = hash_value.split_whitespace().collect();
let hash_value = hash_value.join(" ");
hash_map.insert(full_target_str.to_string(), hash_value);
}
}
}
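The `%alias%` placeholders in a rule's `details` string are resolved by `parse_message` against the event record, falling back to `Event.EventData.<alias>` when the alias is not registered, as the new code above shows. A rough standalone sketch of just the substitution step; the placeholder pattern here is an assumption (the real one is defined as `ALIASREGEX`), and the lookup closure stands in for the `EVENTKEY_ALIAS` record walk. The `hoge` value matches the expectation used by the tests below:

```rust
use regex::Regex;

fn main() {
    // Assumed placeholder pattern; the real one lives in ALIASREGEX.
    let alias_re = Regex::new(r"%[^%]+%").unwrap();
    let output = "CommandLine1: %CommandLine%";
    // Hypothetical stand-in for resolving an alias against the record.
    let lookup = |alias: &str| -> Option<&str> {
        match alias {
            "CommandLine" => Some("hoge"),
            _ => None,
        }
    };
    let resolved = alias_re.replace_all(output, |caps: &regex::Captures| {
        let name = caps[0].trim_matches('%');
        // Unresolved aliases are left in place in this sketch.
        lookup(name).unwrap_or(&caps[0]).to_string()
    });
    assert_eq!(resolved, "CommandLine1: hoge");
    println!("{}", resolved);
}
```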
@@ -193,7 +188,7 @@ impl Message {
}
detect_count += detect_infos.len();
}
println!("");
println!();
println!("Total events:{:?}", detect_count);
}
@@ -224,44 +219,40 @@ impl AlertMessage {
}
let mut error_log_writer = BufWriter::new(File::create(path).unwrap());
error_log_writer
.write(
.write_all(
format!(
"user input: {:?}\n",
format_args!(
"{}",
env::args()
.map(|arg| arg)
.collect::<Vec<String>>()
.join(" ")
)
format_args!("{}", env::args().collect::<Vec<String>>().join(" "))
)
.as_bytes(),
)
.unwrap();
.ok();
for error_log in ERROR_LOG_STACK.lock().unwrap().iter() {
writeln!(error_log_writer, "{}", error_log).ok();
}
println!(
"Errors were generated. Please check {} for details.",
ERROR_LOG_PATH.to_string()
*ERROR_LOG_PATH
);
println!("");
println!();
}
/// Function that displays an ERROR message
pub fn alert<W: Write>(w: &mut W, contents: &String) -> io::Result<()> {
pub fn alert<W: Write>(w: &mut W, contents: &str) -> io::Result<()> {
writeln!(w, "[ERROR] {}", contents)
}
/// Function that displays a WARN message
pub fn warn<W: Write>(w: &mut W, contents: &String) -> io::Result<()> {
pub fn warn<W: Write>(w: &mut W, contents: &str) -> io::Result<()> {
writeln!(w, "[WARN] {}", contents)
}
}
#[cfg(test)]
mod tests {
use crate::detections::print::DetectInfo;
use crate::detections::print::{AlertMessage, Message};
use hashbrown::HashMap;
use serde_json::Value;
use std::io::BufWriter;
@@ -284,15 +275,19 @@ mod tests {
"##;
let event_record_1: Value = serde_json::from_str(json_str_1).unwrap();
message.insert(
"a".to_string(),
"test_rule".to_string(),
&event_record_1,
"high".to_string(),
"testcomputer1".to_string(),
"1".to_string(),
"test1".to_string(),
"CommandLine1: %CommandLine%".to_string(),
"txxx.001".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule".to_string(),
level: "high".to_string(),
computername: "testcomputer1".to_string(),
eventid: "1".to_string(),
alert: "test1".to_string(),
detail: String::default(),
tag_info: "txxx.001".to_string(),
record_information: Option::Some("record_information1".to_string()),
},
);
let json_str_2 = r##"
@@ -311,15 +306,19 @@ mod tests {
"##;
let event_record_2: Value = serde_json::from_str(json_str_2).unwrap();
message.insert(
"a".to_string(),
"test_rule2".to_string(),
&event_record_2,
"high".to_string(),
"testcomputer2".to_string(),
"2".to_string(),
"test2".to_string(),
"CommandLine2: %CommandLine%".to_string(),
"txxx.002".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule2".to_string(),
level: "high".to_string(),
computername: "testcomputer2".to_string(),
eventid: "2".to_string(),
alert: "test2".to_string(),
detail: String::default(),
tag_info: "txxx.002".to_string(),
record_information: Option::Some("record_information2".to_string()),
},
);
let json_str_3 = r##"
@@ -338,15 +337,19 @@ mod tests {
"##;
let event_record_3: Value = serde_json::from_str(json_str_3).unwrap();
message.insert(
"a".to_string(),
"test_rule3".to_string(),
&event_record_3,
"high".to_string(),
"testcomputer3".to_string(),
"3".to_string(),
"test3".to_string(),
"CommandLine3: %CommandLine%".to_string(),
"txxx.003".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule3".to_string(),
level: "high".to_string(),
computername: "testcomputer3".to_string(),
eventid: "3".to_string(),
alert: "test3".to_string(),
detail: String::default(),
tag_info: "txxx.003".to_string(),
record_information: Option::Some("record_information3".to_string()),
},
);
let json_str_4 = r##"
@@ -360,41 +363,39 @@ mod tests {
"##;
let event_record_4: Value = serde_json::from_str(json_str_4).unwrap();
message.insert(
"a".to_string(),
"test_rule4".to_string(),
&event_record_4,
"medium".to_string(),
"testcomputer4".to_string(),
"4".to_string(),
"test4".to_string(),
"CommandLine4: %CommandLine%".to_string(),
"txxx.004".to_string(),
DetectInfo {
filepath: "a".to_string(),
rulepath: "test_rule4".to_string(),
level: "medium".to_string(),
computername: "testcomputer4".to_string(),
eventid: "4".to_string(),
alert: "test4".to_string(),
detail: String::default(),
tag_info: "txxx.004".to_string(),
record_information: Option::Some("record_information4".to_string()),
},
);
let display = format!("{}", format_args!("{:?}", message));
println!("display::::{}", display);
let expect = "Message { map: {1970-01-01T00:00:00Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule4\", level: \"medium\", computername: \"testcomputer4\", eventid: \"4\", alert: \"test4\", detail: \"CommandLine4: hoge\", tag_info: \"txxx.004\" }], 1996-02-27T01:05:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule\", level: \"high\", computername: \"testcomputer1\", eventid: \"1\", alert: \"test1\", detail: \"CommandLine1: hoge\", tag_info: \"txxx.001\" }, DetectInfo { filepath: \"a\", rulepath: \"test_rule2\", level: \"high\", computername: \"testcomputer2\", eventid: \"2\", alert: \"test2\", detail: \"CommandLine2: hoge\", tag_info: \"txxx.002\" }], 2000-01-21T09:06:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule3\", level: \"high\", computername: \"testcomputer3\", eventid: \"3\", alert: \"test3\", detail: \"CommandLine3: hoge\", tag_info: \"txxx.003\" }]} }";
let expect = "Message { map: {1970-01-01T00:00:00Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule4\", level: \"medium\", computername: \"testcomputer4\", eventid: \"4\", alert: \"test4\", detail: \"CommandLine4: hoge\", tag_info: \"txxx.004\", record_information: Some(\"record_information4\") }], 1996-02-27T01:05:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule\", level: \"high\", computername: \"testcomputer1\", eventid: \"1\", alert: \"test1\", detail: \"CommandLine1: hoge\", tag_info: \"txxx.001\", record_information: Some(\"record_information1\") }, DetectInfo { filepath: \"a\", rulepath: \"test_rule2\", level: \"high\", computername: \"testcomputer2\", eventid: \"2\", alert: \"test2\", detail: \"CommandLine2: hoge\", tag_info: \"txxx.002\", record_information: Some(\"record_information2\") }], 2000-01-21T09:06:01Z: [DetectInfo { filepath: \"a\", rulepath: \"test_rule3\", level: \"high\", computername: \"testcomputer3\", eventid: \"3\", alert: \"test3\", detail: \"CommandLine3: hoge\", tag_info: \"txxx.003\", record_information: Some(\"record_information3\") }]} }";
assert_eq!(display, expect);
}
#[test]
fn test_error_message() {
let input = "TEST!";
AlertMessage::alert(
&mut BufWriter::new(std::io::stdout().lock()),
&input.to_string(),
)
.expect("[ERROR] TEST!");
AlertMessage::alert(&mut BufWriter::new(std::io::stdout().lock()), input)
.expect("[ERROR] TEST!");
}
#[test]
fn test_warn_message() {
let input = "TESTWarn!";
AlertMessage::warn(
&mut BufWriter::new(std::io::stdout().lock()),
&input.to_string(),
)
.expect("[WARN] TESTWarn!");
AlertMessage::warn(&mut BufWriter::new(std::io::stdout().lock()), input)
.expect("[WARN] TESTWarn!");
}
#[test]
@@ -426,6 +427,27 @@ mod tests {
expected,
);
}
#[test]
fn test_parse_message_auto_search() {
let mut message = Message::new();
let json_str = r##"
{
"Event": {
"EventData": {
"NoAlias": "no_alias"
}
}
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "alias:no_alias";
assert_eq!(
message.parse_message(&event_record, "alias:%NoAlias%".to_owned()),
expected,
);
}
#[test]
/// Output test for when a key specified in output is not defined in eventkey_alias.txt
fn test_parse_message_not_exist_key_in_output() {
@@ -445,9 +467,9 @@ mod tests {
}
"##;
let event_record: Value = serde_json::from_str(json_str).unwrap();
let expected = "NoExistKey:%TESTNoExistKey%";
let expected = "NoExistAlias:%NoAliasNoHit%";
assert_eq!(
message.parse_message(&event_record, "NoExistKey:%TESTNoExistKey%".to_owned()),
message.parse_message(&event_record, "NoExistAlias:%NoAliasNoHit%".to_owned()),
expected,
);
}
@@ -479,4 +501,18 @@ mod tests {
expected,
);
}
#[test]
/// Test for loading output_tag.txt
fn test_load_output_tag() {
let actual = Message::create_tags_config("test_files/config/output_tag.txt");
let expected: HashMap<String, String> = HashMap::from([
("attack.impact".to_string(), "Impact".to_string()),
("xxx".to_string(), "yyy".to_string()),
]);
assert_eq!(actual.len(), expected.len());
for (k, v) in expected.iter() {
assert!(actual.get(k).unwrap_or(&String::default()) == v);
}
}
}


@@ -28,15 +28,15 @@ pub struct AggregationParseInfo {
#[derive(Debug)]
pub enum AggregationConditionToken {
COUNT(String), // count
SPACE, // whitespace
Count(String), // count
Space, // whitespace
BY, // by
EQ, // equal to ..
LE, // less than or equal to ..
LT, // less than ..
GE, // greater than or equal to ..
GT, // greater than ..
KEYWORD(String), // field name used with BY
Keyword(String), // field name used with BY
}
/// Parses what SIGMA rules call an AggregationCondition.
@@ -52,12 +52,12 @@ impl AggegationConditionCompiler {
pub fn compile(&self, condition_str: String) -> Result<Option<AggregationParseInfo>, String> {
let result = self.compile_body(condition_str);
if let Result::Err(msg) = result {
return Result::Err(format!(
Result::Err(format!(
"An aggregation condition parse error has occurred. {}",
msg
));
))
} else {
return result;
result
}
}
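For orientation, a sketch of how `compile` is driven, written in the same style as the tests further below (the input string and expected AggregationParseInfo fields are taken from those tests):

#[test]
fn sketch_compile_count_by() {
    let compiler = AggegationConditionCompiler::new();
    let info = compiler
        .compile("select1 or select2 | count( hogehoge ) by snsn > 3".to_string())
        .unwrap() // no parse error
        .unwrap(); // an aggregation condition was present
    assert_eq!(info._field_name, Some("hogehoge".to_string())); // count() argument
    assert_eq!(info._by_field_name, Some("snsn".to_string())); // field after "by"
    assert_eq!(info._cmp_num, 3); // right-hand side of ">"
}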
@@ -78,11 +78,11 @@ impl AggegationConditionCompiler {
.unwrap()
.as_str()
.to_string()
.replacen("|", "", 1);
.replacen('|', "", 1);
let tokens = self.tokenize(aggregation_str)?;
return self.parse(tokens);
self.parse(tokens)
}
/// Performs lexical analysis.
@@ -90,10 +90,10 @@ impl AggegationConditionCompiler {
&self,
condition_str: String,
) -> Result<Vec<AggregationConditionToken>, String> {
let mut cur_condition_str = condition_str.clone();
let mut cur_condition_str = condition_str;
let mut tokens = Vec::new();
while cur_condition_str.len() != 0 {
while !cur_condition_str.is_empty() {
let captured = self::AGGREGATION_REGEXMAP.iter().find_map(|regex| {
return regex.captures(cur_condition_str.as_str());
});
@@ -105,7 +105,7 @@ impl AggegationConditionCompiler {
let mached_str = captured.unwrap().get(0).unwrap().as_str();
let token = self.to_enum(mached_str.to_string());
if let AggregationConditionToken::SPACE = token {
if let AggregationConditionToken::Space = token {
// Whitespace has no particular meaning, so skip it.
cur_condition_str = cur_condition_str.replacen(mached_str, "", 1);
continue;
@@ -115,19 +115,19 @@ impl AggegationConditionCompiler {
cur_condition_str = cur_condition_str.replacen(mached_str, "", 1);
}
return Result::Ok(tokens);
Result::Ok(tokens)
}
/// Determines whether the token is a comparison operator.
fn is_cmp_op(&self, token: &AggregationConditionToken) -> bool {
return match token {
AggregationConditionToken::EQ => true,
AggregationConditionToken::LE => true,
AggregationConditionToken::LT => true,
AggregationConditionToken::GE => true,
AggregationConditionToken::GT => true,
_ => false,
};
matches!(
token,
AggregationConditionToken::EQ
| AggregationConditionToken::LE
| AggregationConditionToken::LT
| AggregationConditionToken::GE
| AggregationConditionToken::GT
)
}
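The `matches!` macro used above is the idiomatic replacement for a match whose arms only map variants to true/false (clippy::match_like_matches_macro). A self-contained sketch with a stand-in enum:

enum Op { Eq, Le, Lt, Ge, Gt, Other }

fn is_cmp_op(op: &Op) -> bool {
    // One expression instead of a match returning true/false per arm.
    matches!(op, Op::Eq | Op::Le | Op::Lt | Op::Ge | Op::Gt)
}

fn main() {
    assert!(is_cmp_op(&Op::Ge));
    assert!(!is_cmp_op(&Op::Other));
}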
/// Performs syntax analysis.
@@ -144,7 +144,7 @@ impl AggegationConditionCompiler {
let token = token_ite.next().unwrap();
let mut count_field_name: Option<String> = Option::None;
if let AggregationConditionToken::COUNT(field_name) = token {
if let AggregationConditionToken::Count(field_name) = token {
if !field_name.is_empty() {
count_field_name = Option::Some(field_name);
}
@@ -173,7 +173,7 @@ impl AggegationConditionCompiler {
);
}
if let AggregationConditionToken::KEYWORD(keyword) = after_by.unwrap() {
if let AggregationConditionToken::Keyword(keyword) = after_by.unwrap() {
by_field_name = Option::Some(keyword);
token_ite.next()
} else {
@@ -200,14 +200,14 @@ impl AggegationConditionCompiler {
);
}
let token = token_ite.next().unwrap_or(AggregationConditionToken::SPACE);
let cmp_number = if let AggregationConditionToken::KEYWORD(number) = token {
let token = token_ite.next().unwrap_or(AggregationConditionToken::Space);
let cmp_number = if let AggregationConditionToken::Keyword(number) = token {
let number: Result<i64, _> = number.parse();
if number.is_err() {
if let Ok(num) = number {
num
} else {
// No number after the comparison operator.
return Result::Err("The compare operator needs a number like '> 3'.".to_string());
} else {
number.unwrap()
}
} else {
// No number after the comparison operator.
@@ -224,7 +224,7 @@ impl AggegationConditionCompiler {
_cmp_op: cmp_token,
_cmp_num: cmp_number,
};
return Result::Ok(Option::Some(info));
Result::Ok(Option::Some(info))
}
/// Converts the string into a ConditionToken.
@@ -232,25 +232,25 @@ impl AggegationConditionCompiler {
if token.starts_with("count(") {
let count_field = token
.replacen("count(", "", 1)
.replacen(")", "", 1)
.replace(" ", "");
return AggregationConditionToken::COUNT(count_field);
.replacen(')', "", 1)
.replace(' ', "");
AggregationConditionToken::Count(count_field)
} else if token == " " {
return AggregationConditionToken::SPACE;
AggregationConditionToken::Space
} else if token == "by" {
return AggregationConditionToken::BY;
AggregationConditionToken::BY
} else if token == "==" {
return AggregationConditionToken::EQ;
AggregationConditionToken::EQ
} else if token == "<=" {
return AggregationConditionToken::LE;
AggregationConditionToken::LE
} else if token == ">=" {
return AggregationConditionToken::GE;
AggregationConditionToken::GE
} else if token == "<" {
return AggregationConditionToken::LT;
AggregationConditionToken::LT
} else if token == ">" {
return AggregationConditionToken::GT;
AggregationConditionToken::GT
} else {
return AggregationConditionToken::KEYWORD(token);
AggregationConditionToken::Keyword(token)
}
}
}
@@ -266,9 +266,9 @@ mod tests {
// Pattern with no count
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 and select2".to_string());
assert_eq!(true, result.is_ok());
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(true, result.is_none());
assert!(result.is_none());
}
#[test]
@@ -276,43 +276,23 @@ mod tests {
// Normal case: count() has no field inside; try each operator
let token =
check_aggregation_condition_ope("select1 and select2|count() > 32".to_string(), 32);
let is_gt = match token {
AggregationConditionToken::GT => true,
_ => false,
};
assert_eq!(is_gt, true);
assert!(matches!(token, AggregationConditionToken::GT));
let token =
check_aggregation_condition_ope("select1 and select2|count() >= 43".to_string(), 43);
let is_gt = match token {
AggregationConditionToken::GE => true,
_ => false,
};
assert_eq!(is_gt, true);
assert!(matches!(token, AggregationConditionToken::GE));
let token =
check_aggregation_condition_ope("select1 and select2|count() < 59".to_string(), 59);
let is_gt = match token {
AggregationConditionToken::LT => true,
_ => false,
};
assert_eq!(is_gt, true);
assert!(matches!(token, AggregationConditionToken::LT));
let token =
check_aggregation_condition_ope("select1 and select2|count() <= 12".to_string(), 12);
let is_gt = match token {
AggregationConditionToken::LE => true,
_ => false,
};
assert_eq!(is_gt, true);
assert!(matches!(token, AggregationConditionToken::LE));
let token =
check_aggregation_condition_ope("select1 and select2|count() == 28".to_string(), 28);
let is_gt = match token {
AggregationConditionToken::EQ => true,
_ => false,
};
assert_eq!(is_gt, true);
assert!(matches!(token, AggregationConditionToken::EQ));
}
#[test]
@@ -320,19 +300,15 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 or select2 | count() by iiibbb > 27".to_string());
assert_eq!(true, result.is_ok());
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(true, result.is_some());
assert!(result.is_some());
let result = result.unwrap();
assert_eq!("iiibbb".to_string(), result._by_field_name.unwrap());
assert_eq!(true, result._field_name.is_none());
assert!(result._field_name.is_none());
assert_eq!(27, result._cmp_num);
let is_ok = match result._cmp_op {
AggregationConditionToken::GT => true,
_ => false,
};
assert_eq!(true, is_ok);
assert!(matches!(result._cmp_op, AggregationConditionToken::GT));
}
#[test]
@@ -340,19 +316,15 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 or select2 | count( hogehoge ) > 3".to_string());
assert_eq!(true, result.is_ok());
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(true, result.is_some());
assert!(result.is_some());
let result = result.unwrap();
assert_eq!(true, result._by_field_name.is_none());
assert!(result._by_field_name.is_none());
assert_eq!("hogehoge", result._field_name.unwrap());
assert_eq!(3, result._cmp_num);
let is_ok = match result._cmp_op {
AggregationConditionToken::GT => true,
_ => false,
};
assert_eq!(true, is_ok);
assert!(matches!(result._cmp_op, AggregationConditionToken::GT));
}
#[test]
@@ -361,19 +333,15 @@ mod tests {
let result =
compiler.compile("select1 or select2 | count( hogehoge) by snsn > 3".to_string());
assert_eq!(true, result.is_ok());
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(true, result.is_some());
assert!(result.is_some());
let result = result.unwrap();
assert_eq!("snsn".to_string(), result._by_field_name.unwrap());
assert_eq!("hogehoge", result._field_name.unwrap());
assert_eq!(3, result._cmp_num);
let is_ok = match result._cmp_op {
AggregationConditionToken::GT => true,
_ => false,
};
assert_eq!(true, is_ok);
assert!(matches!(result._cmp_op, AggregationConditionToken::GT));
}
#[test]
@@ -381,7 +349,7 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 or select2 |".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!(
"An aggregation condition parse error has occurred. There are no strings after the pipe(|)."
.to_string(),
@@ -395,7 +363,7 @@ mod tests {
let result =
compiler.compile("select1 or select2 | count( hogeess ) by ii-i > 33".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!(
"An aggregation condition parse error has occurred. An unusable character was found."
.to_string(),
@@ -410,7 +378,7 @@ mod tests {
let result =
compiler.compile("select1 or select2 | by count( hogehoge) by snsn > 3".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!("An aggregation condition parse error has occurred. The aggregation condition can only use count.".to_string(),result.unwrap_err());
}
@@ -420,7 +388,7 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 or select2 | count( hogehoge) 3".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!("An aggregation condition parse error has occurred. The count keyword needs a compare operator and number like '> 3'".to_string(),result.unwrap_err());
}
@@ -430,7 +398,7 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile("select1 or select2 | count( hogehoge) by".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!("An aggregation condition parse error has occurred. The by keyword needs a field name like 'by EventID'".to_string(),result.unwrap_err());
}
@@ -441,7 +409,7 @@ mod tests {
let result =
compiler.compile("select1 or select2 | count( hogehoge ) by hoe >".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!("An aggregation condition parse error has occurred. The compare operator needs a number like '> 3'.".to_string(),result.unwrap_err());
}
@@ -452,7 +420,7 @@ mod tests {
let result =
compiler.compile("select1 or select2 | count( hogehoge ) by hoe > 3 33".to_string());
assert_eq!(true, result.is_err());
assert!(result.is_err());
assert_eq!(
"An aggregation condition parse error has occurred. An unnecessary word was found."
.to_string(),
@@ -464,14 +432,14 @@ mod tests {
let compiler = AggegationConditionCompiler::new();
let result = compiler.compile(expr);
assert_eq!(true, result.is_ok());
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(true, result.is_some());
assert!(result.is_some());
let result = result.unwrap();
assert_eq!(true, result._by_field_name.is_none());
assert_eq!(true, result._field_name.is_none());
assert!(result._by_field_name.is_none());
assert!(result._field_name.is_none());
assert_eq!(cmp_num, result._cmp_num);
return result._cmp_op;
result._cmp_op
}
}


@@ -57,7 +57,7 @@ impl IntoIterator for ConditionToken {
impl ConditionToken {
fn replace_subtoken(&self, sub_tokens: Vec<ConditionToken>) -> ConditionToken {
return match self {
match self {
ConditionToken::ParenthesisContainer(_) => {
ConditionToken::ParenthesisContainer(sub_tokens)
}
@@ -74,12 +74,12 @@ impl ConditionToken {
ConditionToken::SelectionReference(name) => {
ConditionToken::SelectionReference(name.clone())
}
};
}
}
pub fn sub_tokens<'a>(&'a self) -> Vec<ConditionToken> {
pub fn sub_tokens(&self) -> Vec<ConditionToken> {
// TODO: implement this without using clone.
return match self {
match self {
ConditionToken::ParenthesisContainer(sub_tokens) => sub_tokens.clone(),
ConditionToken::AndContainer(sub_tokens) => sub_tokens.clone(),
ConditionToken::OrContainer(sub_tokens) => sub_tokens.clone(),
@@ -92,14 +92,14 @@ impl ConditionToken {
ConditionToken::And => vec![],
ConditionToken::Or => vec![],
ConditionToken::SelectionReference(_) => vec![],
};
}
}
pub fn sub_tokens_without_parenthesis<'a>(&'a self) -> Vec<ConditionToken> {
return match self {
pub fn sub_tokens_without_parenthesis(&self) -> Vec<ConditionToken> {
match self {
ConditionToken::ParenthesisContainer(_) => vec![],
_ => self.sub_tokens(),
};
}
}
}
@@ -119,8 +119,8 @@ impl ConditionCompiler {
) -> Result<Box<dyn SelectionNode>, String> {
// Pipes are not handled here
let captured = self::RE_PIPE.captures(&condition_str);
let condition_str = if captured.is_some() {
let captured = captured.unwrap().get(0).unwrap().as_str().to_string();
let condition_str = if let Some(cap) = captured {
let captured = cap.get(0).unwrap().as_str().to_string();
condition_str.replacen(&captured, "", 1)
} else {
condition_str
@@ -128,9 +128,9 @@ impl ConditionCompiler {
let result = self.compile_condition_body(condition_str, name_2_node);
if let Result::Err(msg) = result {
return Result::Err(format!("A condition parse error has occured. {}", msg));
Result::Err(format!("A condition parse error has occured. {}", msg))
} else {
return result;
result
}
}
@@ -144,7 +144,7 @@ impl ConditionCompiler {
let parsed = self.parse(tokens)?;
return self.to_selectnode(parsed, name_2_node);
self.to_selectnode(parsed, name_2_node)
}
/// Executes syntax analysis.
@@ -161,7 +161,7 @@ impl ConditionCompiler {
let token = self.parse_operand_container(tokens)?;
// Look for parts enclosed in parentheses and, if any, parse them recursively.
return self.parse_rest_parenthesis(token);
self.parse_rest_parenthesis(token)
}
/// Looks for parts enclosed in parentheses and, if any, parses them recursively.
@@ -172,7 +172,7 @@ impl ConditionCompiler {
}
let sub_tokens = token.sub_tokens();
if sub_tokens.len() == 0 {
if sub_tokens.is_empty() {
return Result::Ok(token);
}
@@ -181,15 +181,15 @@ impl ConditionCompiler {
let new_token = self.parse_rest_parenthesis(sub_token)?;
new_sub_tokens.push(new_token);
}
return Result::Ok(token.replace_subtoken(new_sub_tokens));
Result::Ok(token.replace_subtoken(new_sub_tokens))
}
/// Performs lexical analysis
fn tokenize(&self, condition_str: &String) -> Result<Vec<ConditionToken>, String> {
let mut cur_condition_str = condition_str.clone();
fn tokenize(&self, condition_str: &str) -> Result<Vec<ConditionToken>, String> {
let mut cur_condition_str = condition_str.to_string();
let mut tokens = Vec::new();
while cur_condition_str.len() != 0 {
while !cur_condition_str.is_empty() {
let captured = self::CONDITION_REGEXMAP.iter().find_map(|regex| {
return regex.captures(cur_condition_str.as_str());
});
@@ -210,25 +210,25 @@ impl ConditionCompiler {
cur_condition_str = cur_condition_str.replacen(mached_str, "", 1);
}
return Result::Ok(tokens);
Result::Ok(tokens)
}
/// Converts the string into a ConditionToken.
fn to_enum(&self, token: String) -> ConditionToken {
if token == "(" {
return ConditionToken::LeftParenthesis;
ConditionToken::LeftParenthesis
} else if token == ")" {
return ConditionToken::RightParenthesis;
ConditionToken::RightParenthesis
} else if token == " " {
return ConditionToken::Space;
ConditionToken::Space
} else if token == "not" {
return ConditionToken::Not;
ConditionToken::Not
} else if token == "and" {
return ConditionToken::And;
ConditionToken::And
} else if token == "or" {
return ConditionToken::Or;
ConditionToken::Or
} else {
return ConditionToken::SelectionReference(token.clone());
ConditionToken::SelectionReference(token)
}
}
@@ -241,10 +241,7 @@ impl ConditionCompiler {
let mut token_ite = tokens.into_iter();
while let Some(token) = token_ite.next() {
// First, look for a left parenthesis.
let is_left = match token {
ConditionToken::LeftParenthesis => true,
_ => false,
};
let is_left = matches!(token, ConditionToken::LeftParenthesis);
if !is_left {
ret.push(token);
continue;
@@ -254,7 +251,7 @@ impl ConditionCompiler {
let mut left_cnt = 1;
let mut right_cnt = 0;
let mut sub_tokens = vec![];
while let Some(token) = token_ite.next() {
for token in token_ite.by_ref() {
if let ConditionToken::LeftParenthesis = token {
left_cnt += 1;
} else if let ConditionToken::RightParenthesis = token {
@@ -275,22 +272,19 @@ impl ConditionCompiler {
}
// If right parentheses remain at this point, there were more right parentheses than left.
let is_right_left = ret.iter().any(|token| {
return match token {
ConditionToken::RightParenthesis => true,
_ => false,
};
});
let is_right_left = ret
.iter()
.any(|token| matches!(token, ConditionToken::RightParenthesis));
if is_right_left {
return Result::Err("'(' was expected but not found.".to_string());
}
return Result::Ok(ret);
Result::Ok(ret)
}
/// Parses AND and OR.
fn parse_and_or_operator(&self, tokens: Vec<ConditionToken>) -> Result<ConditionToken, String> {
if tokens.len() == 0 {
if tokens.is_empty() {
// Must not be called with an empty token list
return Result::Err("Unknown error.".to_string());
}
@@ -339,7 +333,7 @@ impl ConditionCompiler {
// Next, group the parts connected by Or
let or_contaienr = ConditionToken::OrContainer(operands);
return Result::Ok(or_contaienr);
Result::Ok(or_contaienr)
}
/// Parses the contents of an OperandContainer. Currently this exists only to parse Not.
@@ -360,7 +354,7 @@ impl ConditionCompiler {
}
// A length of 0 should be impossible
if sub_tokens.len() == 0 {
if sub_tokens.is_empty() {
return Result::Err("Unknown error.".to_string());
}
@@ -380,20 +374,20 @@ impl ConditionCompiler {
let second_token = sub_tokens_ite.next().unwrap();
if let ConditionToken::Not = first_token {
if let ConditionToken::Not = second_token {
return Result::Err("Not is continuous.".to_string());
Result::Err("Not is continuous.".to_string())
} else {
let not_container = ConditionToken::NotContainer(vec![second_token]);
return Result::Ok(not_container);
Result::Ok(not_container)
}
} else {
return Result::Err(
Result::Err(
"Unknown error. Maybe it is because there are multiple names of selection nodes."
.to_string(),
);
)
}
} else {
let sub_tokens = parent_token.sub_tokens_without_parenthesis();
if sub_tokens.len() == 0 {
if sub_tokens.is_empty() {
return Result::Ok(parent_token);
}
@@ -403,7 +397,7 @@ impl ConditionCompiler {
new_sub_tokens.push(new_sub_token);
}
return Result::Ok(parent_token.replace_subtoken(new_sub_tokens));
Result::Ok(parent_token.replace_subtoken(new_sub_tokens))
}
}
@@ -416,14 +410,14 @@ impl ConditionCompiler {
// Convert to a RefSelectionNode
if let ConditionToken::SelectionReference(selection_name) = token {
let selection_node = name_2_node.get(&selection_name);
if selection_node.is_none() {
let err_msg = format!("{} is not defined.", selection_name);
return Result::Err(err_msg);
} else {
let selection_node = selection_node.unwrap();
if let Some(select_node) = selection_node {
let selection_node = select_node;
let selection_node = Arc::clone(selection_node);
let ref_node = RefSelectionNode::new(selection_node);
return Result::Ok(Box::new(ref_node));
} else {
let err_msg = format!("{} is not defined.", selection_name);
return Result::Err(err_msg);
}
}
@@ -459,16 +453,12 @@ impl ConditionCompiler {
return Result::Ok(Box::new(select_not_node));
}
return Result::Err("Unknown error".to_string());
Result::Err("Unknown error".to_string())
}
/// True if the ConditionToken is an And or Or token
fn is_logical(&self, token: &ConditionToken) -> bool {
return match token {
ConditionToken::And => true,
ConditionToken::Or => true,
_ => false,
};
matches!(token, ConditionToken::And | ConditionToken::Or)
}
/// Converts any parts that can be converted into ConditionToken::OperandContainer.
@@ -478,8 +468,7 @@ impl ConditionCompiler {
) -> Result<Vec<ConditionToken>, String> {
let mut ret = vec![];
let mut grouped_operands = vec![]; // Tokens between AND and OR; the operands when AND and OR are treated as the operators
let mut token_ite = tokens.into_iter();
while let Some(token) = token_ite.next() {
for token in tokens.into_iter() {
if self.is_logical(&token) {
// Reaching here should be an error, but errors are reported later, so none is raised here.
if grouped_operands.is_empty() {
@@ -498,7 +487,7 @@ impl ConditionCompiler {
ret.push(ConditionToken::OperandContainer(grouped_operands));
}
return Result::Ok(ret);
Result::Ok(ret)
}
}
@@ -542,7 +531,7 @@ mod tests {
assert_eq!(rule_node.select(&recinfo), expect_select);
}
Err(_rec) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -582,10 +571,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_rec) => {
assert!(false, "Failed to parse json record.");
Err(_) => {
panic!("Failed to parse json record.");
}
}
}
@@ -626,10 +615,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_rec) => {
assert!(false, "Failed to parse json record.");
Err(_) => {
panic!("Failed to parse json record.");
}
}
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -21,7 +21,7 @@ use self::count::{AggRecordTimeInfo, TimeFrameInfo};
use super::detection::EvtxRecordInfo;
pub fn create_rule(rulepath: String, yaml: Yaml) -> RuleNode {
return RuleNode::new(rulepath, yaml);
RuleNode::new(rulepath, yaml)
}
/// Node representing a rule file
@@ -34,7 +34,7 @@ pub struct RuleNode {
impl Debug for RuleNode {
fn fmt(&self, _f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
return Result::Ok(());
Result::Ok(())
}
}
@@ -42,13 +42,13 @@ unsafe impl Sync for RuleNode {}
unsafe impl Send for RuleNode {}
impl RuleNode {
pub fn new(rulepath: String, yaml: Yaml) -> RuleNode {
return RuleNode {
rulepath: rulepath,
yaml: yaml,
pub fn new(rule_path: String, yaml_data: Yaml) -> RuleNode {
RuleNode {
rulepath: rule_path,
yaml: yaml_data,
detection: DetectionNode::new(),
countdata: HashMap::new(),
};
}
}
pub fn init(&mut self) -> Result<(), Vec<String>> {
@@ -56,14 +56,14 @@ impl RuleNode {
// detection node initialization
let detection_result = self.detection.init(&self.yaml["detection"]);
if detection_result.is_err() {
errmsgs.extend(detection_result.unwrap_err());
if let Err(err_detail) = detection_result {
errmsgs.extend(err_detail);
}
if errmsgs.is_empty() {
return Result::Ok(());
Result::Ok(())
} else {
return Result::Err(errmsgs);
Result::Err(errmsgs)
}
}
@@ -72,11 +72,11 @@ impl RuleNode {
if result && self.has_agg_condition() {
count::count(self, &event_record.record);
}
return result;
result
}
/// Function that returns whether an aggregation condition exists
pub fn has_agg_condition(&self) -> bool {
return self.detection.aggregation_condition.is_some();
self.detection.aggregation_condition.is_some()
}
/// Function that returns the aggregation condition results as an array
pub fn judge_satisfy_aggcondition(&self) -> Vec<AggResult> {
@@ -84,22 +84,18 @@ impl RuleNode {
if !self.has_agg_condition() {
return ret;
}
ret.append(&mut count::aggregation_condition_select(&self));
return ret;
ret.append(&mut count::aggregation_condition_select(self));
ret
}
pub fn check_exist_countdata(&self) -> bool {
self.countdata.len() > 0
!self.countdata.is_empty()
}
/// Function that retrieves the rule's AggregationParseInfo (aggregation condition)
pub fn get_agg_condition(&self) -> Option<&AggregationParseInfo> {
match self.detection.aggregation_condition.as_ref() {
None => {
return None;
}
Some(agg_parse_info) => {
return Some(agg_parse_info);
}
if self.detection.aggregation_condition.as_ref().is_some() {
return self.detection.aggregation_condition.as_ref();
}
None
}
}
@@ -110,22 +106,24 @@ pub fn get_detection_keys(node: &RuleNode) -> Vec<String> {
for key in detection.name_to_selection.keys() {
let selection = &detection.name_to_selection[key];
let desc = selection.get_descendants();
let keys = desc.iter().filter_map(|node| {
desc.iter().for_each(|node| {
if !node.is::<LeafSelectionNode>() {
return Option::None;
return;
}
let node = node.downcast_ref::<LeafSelectionNode>().unwrap();
let key = node.get_key();
if key.is_empty() {
return Option::None;
}
return Option::Some(key.to_string());
let keys = node.get_keys();
let keys = keys.iter().filter_map(|key| {
if key.is_empty() {
return None;
}
Some(key.to_string())
});
ret.extend(keys);
});
ret.extend(keys);
}
return ret;
ret
}
/// Node representing the detection section of a rule file
@@ -138,12 +136,12 @@ struct DetectionNode {
impl DetectionNode {
fn new() -> DetectionNode {
return DetectionNode {
DetectionNode {
name_to_selection: HashMap::new(),
condition: Option::None,
aggregation_condition: Option::None,
timeframe: Option::None,
};
}
}
fn init(&mut self, detection_yaml: &Yaml) -> Result<(), Vec<String>> {
@@ -169,7 +167,7 @@ impl DetectionNode {
]);
}
keys.nth(0).unwrap().to_string()
keys.next().unwrap().to_string()
};
// Parse the condition and convert it into a SelectionNode
@@ -193,9 +191,9 @@ impl DetectionNode {
}
if err_msgs.is_empty() {
return Result::Ok(());
Result::Ok(())
} else {
return Result::Err(err_msgs);
Result::Err(err_msgs)
}
}
@@ -205,7 +203,7 @@ impl DetectionNode {
}
let condition = &self.condition.as_ref().unwrap();
return condition.select(event_record);
condition.select(event_record)
}
/// Parses the selection nodes.
@@ -221,7 +219,7 @@ impl DetectionNode {
let mut err_msgs = vec![];
for key in keys {
let name = key.as_str().unwrap_or("");
if name.len() == 0 {
if name.is_empty() {
continue;
}
// Ignore special keywords such as condition.
@@ -231,11 +229,11 @@ impl DetectionNode {
// Parse, accumulate any error messages in an array, and return them.
let selection_node = self.parse_selection(&detection_hash[key]);
if selection_node.is_some() {
let mut selection_node = selection_node.unwrap();
if let Some(node) = selection_node {
let mut selection_node = node;
let init_result = selection_node.init();
if init_result.is_err() {
err_msgs.extend(init_result.unwrap_err());
if let Err(err_detail) = init_result {
err_msgs.extend(err_detail);
} else {
let rc_selection = Arc::new(selection_node);
self.name_to_selection
@@ -248,18 +246,18 @@ impl DetectionNode {
}
// Having no selection node is an error
if self.name_to_selection.len() == 0 {
if self.name_to_selection.is_empty() {
return Result::Err(vec![
"There is no selection node under detection.".to_string()
]);
}
return Result::Ok(());
Result::Ok(())
}
/// Parses the selection.
fn parse_selection(&self, selection_yaml: &Yaml) -> Option<Box<dyn SelectionNode>> {
return Option::Some(self.parse_selection_recursively(vec![], selection_yaml));
Option::Some(self.parse_selection_recursively(vec![], selection_yaml))
}
/// Parses the selection.
@@ -280,7 +278,7 @@ impl DetectionNode {
let child_node = self.parse_selection_recursively(child_key_list, child_yaml);
and_node.child_nodes.push(child_node);
});
return Box::new(and_node);
Box::new(and_node)
} else if yaml.as_vec().is_some() {
// Arrays are interpreted as OR conditions.
let mut or_node = selectionnodes::OrSelectionNode::new();
@@ -289,13 +287,13 @@ impl DetectionNode {
or_node.child_nodes.push(child_node);
});
return Box::new(or_node);
Box::new(or_node)
} else {
// Anything other than a hash or array is a leaf node
return Box::new(selectionnodes::LeafSelectionNode::new(
Box::new(selectionnodes::LeafSelectionNode::new(
key_list,
yaml.clone(),
));
))
}
}
}
@@ -317,19 +315,19 @@ pub struct AggResult {
impl AggResult {
pub fn new(
data: i64,
key: String,
field_values: Vec<String>,
start_timedate: DateTime<Utc>,
condition_op_num: String,
count_data: i64,
key_name: String,
field_value: Vec<String>,
event_start_timedate: DateTime<Utc>,
condition_op_number: String,
) -> AggResult {
return AggResult {
data: data,
key: key,
field_values: field_values,
start_timedate: start_timedate,
condition_op_num: condition_op_num,
};
AggResult {
data: count_data,
key: key_name,
field_values: field_value,
start_timedate: event_start_timedate,
condition_op_num: condition_op_number,
}
}
}
@@ -341,12 +339,12 @@ mod tests {
pub fn parse_rule_from_str(rule_str: &str) -> RuleNode {
let rule_yaml = YamlLoader::load_from_str(rule_str);
assert_eq!(rule_yaml.is_ok(), true);
assert!(rule_yaml.is_ok());
let rule_yamls = rule_yaml.unwrap();
let mut rule_yaml = rule_yamls.into_iter();
let mut rule_node = create_rule("testpath".to_string(), rule_yaml.next().unwrap());
assert_eq!(rule_node.init().is_ok(), true);
return rule_node;
assert!(rule_node.init().is_ok());
rule_node
}
#[test]
@@ -371,10 +369,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -401,10 +399,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -431,10 +429,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -514,10 +512,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -573,10 +571,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -639,10 +637,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -683,10 +681,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -728,10 +726,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -792,10 +790,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -856,10 +854,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -902,10 +900,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_rec) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -961,15 +959,15 @@ mod tests {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
let result = rule_node.select(&recinfo);
assert_eq!(rule_node.detection.aggregation_condition.is_some(), true);
assert_eq!(result, true);
assert!(rule_node.detection.aggregation_condition.is_some());
assert!(result);
assert_eq!(
*&rule_node.countdata.get(key).unwrap().len() as i32,
rule_node.countdata.get(key).unwrap().len() as i32,
expect_count
);
}
Err(_rec) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}


@@ -1,13 +1,12 @@
use crate::detections::{detection::EvtxRecordInfo, utils};
use crate::filter::FILTER_REGEX;
use mopa::mopafy;
use downcast_rs::Downcast;
use std::{sync::Arc, vec};
use yaml_rust::Yaml;
use super::matchers;
use super::matchers::{self, DefaultMatcher};
// Nodes under detection -> selection in a rule file implement this trait.
pub trait SelectionNode: mopa::Any {
pub trait SelectionNode: Downcast {
// Determines whether the event log record passed as an argument matches the condition
// Each struct implementing this trait must provide its own matching logic.
fn select(&self, event_record: &EvtxRecordInfo) -> bool;
@@ -19,12 +18,12 @@ pub trait SelectionNode: mopa::Any {
fn init(&mut self) -> Result<(), Vec<String>>;
// Gets the child nodes (child in the graph-theory sense)
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>>;
fn get_childs(&self) -> Vec<&dyn SelectionNode>;
// Gets the descendant nodes (descendant in the graph-theory sense)
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>>;
fn get_descendants(&self) -> Vec<&dyn SelectionNode>;
}
mopafy!(SelectionNode);
downcast_rs::impl_downcast!(SelectionNode);
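The mopa -> downcast_rs swap keeps the same capability, runtime downcasting of trait objects, on a maintained crate: bounding the trait by `Downcast` and invoking `impl_downcast!` generates the `is::<T>()` and `downcast_ref::<T>()` methods that `get_detection_keys` and `LeafSelectionNode::get_keys` rely on. A minimal standalone sketch:

use downcast_rs::{impl_downcast, Downcast};

trait Node: Downcast {}
impl_downcast!(Node);

struct Leaf { key: String }
impl Node for Leaf {}

fn main() {
    let node: Box<dyn Node> = Box::new(Leaf { key: "EventID".into() });
    // is::<T>() and downcast_ref::<T>() are generated by impl_downcast!.
    if node.is::<Leaf>() {
        println!("{}", node.downcast_ref::<Leaf>().unwrap().key);
    }
}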
/// Node representing an AND condition under detection -> selection
pub struct AndSelectionNode {
@@ -33,17 +32,17 @@ pub struct AndSelectionNode {
impl AndSelectionNode {
pub fn new() -> AndSelectionNode {
return AndSelectionNode {
AndSelectionNode {
child_nodes: vec![],
};
}
}
}
impl SelectionNode for AndSelectionNode {
fn select(&self, event_record: &EvtxRecordInfo) -> bool {
return self.child_nodes.iter().all(|child_node| {
return child_node.select(event_record);
});
self.child_nodes
.iter()
.all(|child_node| child_node.select(event_record))
}
fn init(&mut self) -> Result<(), Vec<String>> {
@@ -52,50 +51,47 @@ impl SelectionNode for AndSelectionNode {
.iter_mut()
.map(|child_node| {
let res = child_node.init();
if res.is_err() {
return res.unwrap_err();
if let Err(err) = res {
err
} else {
return vec![];
vec![]
}
})
.fold(
vec![],
|mut acc: Vec<String>, cur: Vec<String>| -> Vec<String> {
acc.extend(cur.into_iter());
return acc;
acc
},
);
if err_msgs.is_empty() {
return Result::Ok(());
Result::Ok(())
} else {
return Result::Err(err_msgs);
Result::Err(err_msgs)
}
}
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>> {
fn get_childs(&self) -> Vec<&dyn SelectionNode> {
let mut ret = vec![];
self.child_nodes.iter().for_each(|child_node| {
ret.push(child_node);
ret.push(child_node.as_ref());
});
return ret;
ret
}
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>> {
fn get_descendants(&self) -> Vec<&dyn SelectionNode> {
let mut ret = self.get_childs();
self.child_nodes
.iter()
.map(|child_node| {
return child_node.get_descendants();
})
.flatten()
.flat_map(|child_node| child_node.get_descendants())
.for_each(|descendant_node| {
ret.push(descendant_node);
});
return ret;
ret
}
}
@@ -106,17 +102,17 @@ pub struct OrSelectionNode {
impl OrSelectionNode {
pub fn new() -> OrSelectionNode {
return OrSelectionNode {
OrSelectionNode {
child_nodes: vec![],
};
}
}
}
impl SelectionNode for OrSelectionNode {
fn select(&self, event_record: &EvtxRecordInfo) -> bool {
return self.child_nodes.iter().any(|child_node| {
return child_node.select(event_record);
});
self.child_nodes
.iter()
.any(|child_node| child_node.select(event_record))
}
fn init(&mut self) -> Result<(), Vec<String>> {
@@ -125,50 +121,47 @@ impl SelectionNode for OrSelectionNode {
.iter_mut()
.map(|child_node| {
let res = child_node.init();
if res.is_err() {
return res.unwrap_err();
if let Err(err) = res {
err
} else {
return vec![];
vec![]
}
})
.fold(
vec![],
|mut acc: Vec<String>, cur: Vec<String>| -> Vec<String> {
acc.extend(cur.into_iter());
return acc;
acc
},
);
if err_msgs.is_empty() {
return Result::Ok(());
Result::Ok(())
} else {
return Result::Err(err_msgs);
Result::Err(err_msgs)
}
}
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>> {
fn get_childs(&self) -> Vec<&dyn SelectionNode> {
let mut ret = vec![];
self.child_nodes.iter().for_each(|child_node| {
ret.push(child_node);
ret.push(child_node.as_ref());
});
return ret;
ret
}
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>> {
fn get_descendants(&self) -> Vec<&dyn SelectionNode> {
let mut ret = self.get_childs();
self.child_nodes
.iter()
.map(|child_node| {
return child_node.get_descendants();
})
.flatten()
.flat_map(|child_node| child_node.get_descendants())
.for_each(|descendant_node| {
ret.push(descendant_node);
});
return ret;
ret
}
}
@@ -178,26 +171,26 @@ pub struct NotSelectionNode {
}
impl NotSelectionNode {
pub fn new(node: Box<dyn SelectionNode>) -> NotSelectionNode {
return NotSelectionNode { node: node };
pub fn new(select_node: Box<dyn SelectionNode>) -> NotSelectionNode {
NotSelectionNode { node: select_node }
}
}
impl SelectionNode for NotSelectionNode {
fn select(&self, event_record: &EvtxRecordInfo) -> bool {
return !self.node.select(event_record);
!self.node.select(event_record)
}
fn init(&mut self) -> Result<(), Vec<String>> {
return Result::Ok(());
Result::Ok(())
}
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>> {
return vec![];
fn get_childs(&self) -> Vec<&dyn SelectionNode> {
vec![]
}
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>> {
return self.get_childs();
fn get_descendants(&self) -> Vec<&dyn SelectionNode> {
self.get_childs()
}
}
@@ -210,28 +203,28 @@ pub struct RefSelectionNode {
}
impl RefSelectionNode {
pub fn new(selection_node: Arc<Box<dyn SelectionNode>>) -> RefSelectionNode {
return RefSelectionNode {
selection_node: selection_node,
};
pub fn new(select_node: Arc<Box<dyn SelectionNode>>) -> RefSelectionNode {
RefSelectionNode {
selection_node: select_node,
}
}
}
impl SelectionNode for RefSelectionNode {
fn select(&self, event_record: &EvtxRecordInfo) -> bool {
return self.selection_node.select(event_record);
self.selection_node.select(event_record)
}
fn init(&mut self) -> Result<(), Vec<String>> {
return Result::Ok(());
Result::Ok(())
}
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>> {
return vec![&self.selection_node];
fn get_childs(&self) -> Vec<&dyn SelectionNode> {
vec![self.selection_node.as_ref().as_ref()]
}
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>> {
return self.get_childs();
fn get_descendants(&self) -> Vec<&dyn SelectionNode> {
self.get_childs()
}
}
@@ -244,17 +237,35 @@ pub struct LeafSelectionNode {
}
impl LeafSelectionNode {
pub fn new(key_list: Vec<String>, value_yaml: Yaml) -> LeafSelectionNode {
return LeafSelectionNode {
pub fn new(keys: Vec<String>, value_yaml: Yaml) -> LeafSelectionNode {
LeafSelectionNode {
key: String::default(),
key_list: key_list,
key_list: keys,
select_value: value_yaml,
matcher: Option::None,
};
}
}
pub fn get_key(&self) -> &String {
return &self.key;
&self.key
}
pub fn get_keys(&self) -> Vec<&String> {
let mut keys = vec![];
if !self.key.is_empty() {
keys.push(&self.key);
}
if let Some(matcher) = &self.matcher {
let matcher = matcher.downcast_ref::<DefaultMatcher>();
if let Some(matcher) = matcher {
if let Some(eq_key) = matcher.get_eqfield_key() {
keys.push(eq_key);
}
}
}
keys
}
fn _create_key(&self) -> String {
@@ -263,8 +274,8 @@ impl LeafSelectionNode {
}
let topkey = self.key_list[0].to_string();
let values: Vec<&str> = topkey.split("|").collect();
return values[0].to_string();
let values: Vec<&str> = topkey.split('|').collect();
values[0].to_string()
}
/// Function that retrieves a value from the JSON-format event record; aliases are also taken into account.
@@ -274,18 +285,18 @@ impl LeafSelectionNode {
return Option::Some(&record.data_string);
}
return record.get_value(self.get_key());
record.get_value(self.get_key())
}
/// Gets the list of matchers::LeafMatcher.
/// They are checked in order from the top, and the first Matcher that matches is applied
fn get_matchers(&self) -> Vec<Box<dyn matchers::LeafMatcher>> {
return vec![
vec![
Box::new(matchers::MinlengthMatcher::new()),
Box::new(matchers::RegexesFileMatcher::new()),
Box::new(matchers::AllowlistFileMatcher::new()),
Box::new(matchers::DefaultMatcher::new()),
];
]
}
}
@@ -315,12 +326,8 @@ impl SelectionNode for LeafSelectionNode {
]
}
*/
let filter_rule = FILTER_REGEX.get(self.get_key());
if self.get_key() == "EventData" {
let values =
utils::get_event_value(&"Event.EventData.Data".to_string(), &event_record.record);
let values = utils::get_event_value("Event.EventData.Data", &event_record.record);
if values.is_none() {
return self
.matcher
@@ -333,15 +340,12 @@ impl SelectionNode for LeafSelectionNode {
let eventdata_data = values.unwrap();
if eventdata_data.is_boolean() || eventdata_data.is_i64() || eventdata_data.is_string()
{
let replaced_str = utils::replace_target_character(
event_record.get_value(self.get_key()),
filter_rule,
);
let event_value = event_record.get_value(self.get_key());
return self
.matcher
.as_ref()
.unwrap()
.is_match(replaced_str.as_ref(), event_record);
.is_match(event_value, event_record);
}
// For arrays, the condition is considered met if any one element matches the rule.
if eventdata_data.is_array() {
@@ -350,15 +354,12 @@ impl SelectionNode for LeafSelectionNode {
.unwrap()
.iter()
.any(|ary_element| {
let replaced_str = utils::replace_target_character(
utils::value_to_string(ary_element).as_ref(),
filter_rule,
);
let event_value = utils::value_to_string(ary_element);
return self
.matcher
.as_ref()
.unwrap()
.is_match(replaced_str.as_ref(), event_record);
.is_match(event_value.as_ref(), event_record);
});
} else {
return self
@@ -369,14 +370,12 @@ impl SelectionNode for LeafSelectionNode {
}
}
let replaced_str =
utils::replace_target_character(self.get_event_value(&event_record), filter_rule);
let event_value = self.get_event_value(event_record);
return self
.matcher
.as_ref()
.unwrap()
.is_match(replaced_str.as_ref(), event_record);
.is_match(event_value, event_record);
}
fn init(&mut self) -> Result<(), Vec<String>> {
@@ -409,12 +408,12 @@ impl SelectionNode for LeafSelectionNode {
.init(&match_key_list, &self.select_value);
}
fn get_childs(&self) -> Vec<&Box<dyn SelectionNode>> {
return vec![];
fn get_childs(&self) -> Vec<&dyn SelectionNode> {
vec![]
}
fn get_descendants(&self) -> Vec<&Box<dyn SelectionNode>> {
return vec![];
fn get_descendants(&self) -> Vec<&dyn SelectionNode> {
vec![]
}
}
@@ -445,10 +444,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -478,10 +477,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -510,10 +509,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -542,10 +541,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), true);
assert!(rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}
@@ -574,10 +573,10 @@ mod tests {
Ok(record) => {
let keys = detections::rule::get_detection_keys(&rule_node);
let recinfo = utils::create_rec_info(record, "testpath".to_owned(), &keys);
assert_eq!(rule_node.select(&recinfo), false);
assert!(!rule_node.select(&recinfo));
}
Err(_) => {
assert!(false, "Failed to parse json record.");
panic!("Failed to parse json record.");
}
}
}


@@ -3,7 +3,6 @@ extern crate csv;
extern crate regex;
use crate::detections::configs;
use crate::filter::DataFilterRule;
use tokio::runtime::Builder;
use tokio::runtime::Runtime;
@@ -11,76 +10,56 @@ use tokio::runtime::Runtime;
use chrono::{DateTime, TimeZone, Utc};
use regex::Regex;
use serde_json::Value;
use std::cmp::Ordering;
use std::fs::File;
use std::io::prelude::*;
use std::io::{BufRead, BufReader};
use std::str;
use std::string::String;
use std::vec;
use super::detection::EvtxRecordInfo;
pub fn concat_selection_key(key_list: &Vec<String>) -> String {
pub fn concat_selection_key(key_list: &[String]) -> String {
return key_list
.iter()
.fold("detection -> selection".to_string(), |mut acc, cur| {
acc = acc + " -> " + cur;
return acc;
acc
});
}
pub fn check_regex(string: &str, regex_list: &Vec<Regex>) -> bool {
pub fn check_regex(string: &str, regex_list: &[Regex]) -> bool {
for regex in regex_list {
if regex.is_match(string) == false {
if !regex.is_match(string) {
continue;
}
return true;
}
return false;
false
}
/// replace string from all defined regex in input to replace_str
pub fn replace_target_character<'a>(
input_str: Option<&'a String>,
replace_rule: Option<&'a DataFilterRule>,
) -> Option<String> {
if input_str.is_none() {
return None;
}
if replace_rule.is_none() {
return Some(input_str.unwrap().to_string());
}
let replace_regex_rule = &replace_rule.unwrap().regex_rule;
let replace_str = &replace_rule.unwrap().replace_str;
return Some(
replace_regex_rule
.replace_all(input_str.unwrap(), replace_str)
.to_string(),
);
}
pub fn check_allowlist(target: &str, regexes: &Vec<Regex>) -> bool {
pub fn check_allowlist(target: &str, regexes: &[Regex]) -> bool {
for regex in regexes {
if regex.is_match(target) {
return true;
}
}
return false;
false
}
pub fn value_to_string(value: &Value) -> Option<String> {
return match value {
match value {
Value::Null => Option::None,
Value::Bool(b) => Option::Some(b.to_string()),
Value::Number(n) => Option::Some(n.to_string()),
Value::String(s) => Option::Some(s.to_string()),
Value::String(s) => Option::Some(s.trim().to_string()),
Value::Array(_) => Option::None,
Value::Object(_) => Option::None,
};
}
}
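Note the small behavioral change above: string values are now trimmed. A local copy of the match, to show it in isolation:

use serde_json::Value;

fn value_to_string(value: &Value) -> Option<String> {
    match value {
        Value::Null | Value::Array(_) | Value::Object(_) => None,
        Value::Bool(b) => Some(b.to_string()),
        Value::Number(n) => Some(n.to_string()),
        Value::String(s) => Some(s.trim().to_string()), // now trimmed
    }
}

fn main() {
    assert_eq!(value_to_string(&serde_json::json!("  padded  ")),
               Some("padded".to_string()));
}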
pub fn read_txt(filename: &str) -> Result<Vec<String>, String> {
@@ -90,12 +69,12 @@ pub fn read_txt(filename: &str) -> Result<Vec<String>, String> {
return Result::Err(errmsg);
}
let reader = BufReader::new(f.unwrap());
return Result::Ok(
Result::Ok(
reader
.lines()
.map(|line| line.unwrap_or(String::default()))
.map(|line| line.unwrap_or_default())
.collect(),
);
)
}
pub fn read_csv(filename: &str) -> Result<Vec<Vec<String>>, String> {
@@ -106,11 +85,11 @@ pub fn read_csv(filename: &str) -> Result<Vec<Vec<String>>, String> {
let mut contents: String = String::new();
let mut ret = vec![];
let read_res = f.unwrap().read_to_string(&mut contents);
if read_res.is_err() {
return Result::Err(read_res.unwrap_err().to_string());
if let Err(e) = read_res {
return Result::Err(e.to_string());
}
let mut rdr = csv::Reader::from_reader(contents.as_bytes());
let mut rdr = csv::ReaderBuilder::new().from_reader(contents.as_bytes());
rdr.records().for_each(|r| {
if r.is_err() {
return;
@@ -122,19 +101,19 @@ pub fn read_csv(filename: &str) -> Result<Vec<Vec<String>>, String> {
ret.push(v);
});
return Result::Ok(ret);
Result::Ok(ret)
}
pub fn is_target_event_id(s: &String) -> bool {
return configs::CONFIG.read().unwrap().target_eventids.is_target(s);
pub fn is_target_event_id(s: &str) -> bool {
configs::CONFIG.read().unwrap().target_eventids.is_target(s)
}
pub fn get_event_id_key() -> String {
return "Event.System.EventID".to_string();
"Event.System.EventID".to_string()
}
pub fn get_event_time() -> String {
return "Event.System.TimeCreated_attributes.SystemTime".to_string();
"Event.System.TimeCreated_attributes.SystemTime".to_string()
}
pub fn str_time_to_datetime(system_time_str: &str) -> Option<DateTime<Utc>> {
@@ -146,62 +125,59 @@ pub fn str_time_to_datetime(system_time_str: &str) -> Option<DateTime<Utc>> {
if rfc3339_time.is_err() {
return Option::None;
}
let datetime = Utc
.from_local_datetime(&rfc3339_time.unwrap().naive_utc())
.single();
if datetime.is_none() {
return Option::None;
} else {
return Option::Some(datetime.unwrap());
}
Utc.from_local_datetime(&rfc3339_time.unwrap().naive_utc())
.single()
}
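The simplification above works because chrono's `LocalResult::single()` already returns `Option<DateTime<Utc>>`, so the `is_none()`/`unwrap()` dance collapses into returning the expression directly. A standalone sketch of the same shape, assuming the chrono crate:

use chrono::{DateTime, TimeZone, Utc};

fn str_time_to_datetime(system_time_str: &str) -> Option<DateTime<Utc>> {
    let rfc3339_time = DateTime::parse_from_rfc3339(system_time_str).ok()?;
    // single() yields Some only when the timestamp maps to exactly one
    // UTC instant, which is already the Option we want to return.
    Utc.from_local_datetime(&rfc3339_time.naive_utc()).single()
}

fn main() {
    assert!(str_time_to_datetime("1996-02-27T01:05:01Z").is_some());
    assert!(str_time_to_datetime("not a time").is_none());
}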
/// Checks the type of the serde_json::Value and returns it as a string.
pub fn get_serde_number_to_string(value: &serde_json::Value) -> Option<String> {
if value.is_string() {
return Option::Some(value.as_str().unwrap_or("").to_string());
Option::Some(value.as_str().unwrap_or("").to_string())
} else if value.is_object() {
// Object type is not specified record value.
return Option::None;
Option::None
} else {
return Option::Some(value.to_string());
Option::Some(value.to_string())
}
}
pub fn get_event_value<'a>(key: &String, event_value: &'a Value) -> Option<&'a Value> {
if key.len() == 0 {
pub fn get_event_value<'a>(key: &str, event_value: &'a Value) -> Option<&'a Value> {
if key.is_empty() {
return Option::None;
}
let event_key = configs::EVENTKEY_ALIAS.get_event_key(key);
let mut ret: &Value = event_value;
if let Some(event_key) = event_key {
let mut ret: &Value = event_value;
// If get_event_key succeeds, get_event_key_split can never fail
let splits = configs::EVENTKEY_ALIAS.get_event_key_split(key);
let mut start_idx = 0;
for key in splits.unwrap() {
if ret.is_object() == false {
if !ret.is_object() {
return Option::None;
}
let val = &event_key[start_idx..(*key + start_idx)];
ret = &ret[val];
start_idx = *key + start_idx;
start_idx += *key;
start_idx += 1;
}
return Option::Some(ret);
Option::Some(ret)
} else {
let mut ret: &Value = event_value;
let event_key = key;
for key in event_key.split(".") {
if ret.is_object() == false {
let event_key = if !key.contains('.') {
"Event.EventData.".to_string() + key
} else {
key.to_string()
};
for key in event_key.split('.') {
if !ret.is_object() {
return Option::None;
}
ret = &ret[key];
}
return Option::Some(ret);
Option::Some(ret)
}
}
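The else branch above is the new fallback for keys with no alias: a dot-free key is now looked up as Event.EventData.<key> automatically, which is exactly what test_parse_message_auto_search exercises earlier in this diff. A simplified standalone version of that fallback path:

use serde_json::{json, Value};

fn lookup<'a>(key: &str, event: &'a Value) -> Option<&'a Value> {
    // A key without dots is treated as Event.EventData.<key>.
    let event_key = if !key.contains('.') {
        format!("Event.EventData.{}", key)
    } else {
        key.to_string()
    };
    let mut ret = event;
    for k in event_key.split('.') {
        if !ret.is_object() {
            return None;
        }
        ret = &ret[k];
    }
    Some(ret)
}

fn main() {
    let rec = json!({"Event": {"EventData": {"NoAlias": "no_alias"}}});
    assert_eq!(lookup("NoAlias", &rec).and_then(Value::as_str), Some("no_alias"));
}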
@@ -212,28 +188,19 @@ pub fn get_thread_num() -> usize {
.args
.value_of("thread-number")
.unwrap_or(def_thread_num_str.as_str());
return threadnum.parse::<usize>().unwrap().clone();
threadnum.parse::<usize>().unwrap()
}
pub fn create_tokio_runtime() -> Runtime {
return Builder::new_multi_thread()
Builder::new_multi_thread()
.worker_threads(get_thread_num())
.thread_name("yea-thread")
.build()
.unwrap();
.unwrap()
}
// Creates an EvtxRecordInfo.
pub fn create_rec_info(data: Value, path: String, keys: &Vec<String>) -> EvtxRecordInfo {
// Build the EvtxRecordInfo
let data_str = data.to_string();
let mut rec = EvtxRecordInfo {
evtx_filepath: path,
record: data,
data_string: data_str,
key_2_value: hashbrown::HashMap::new(),
};
pub fn create_rec_info(data: Value, path: String, keys: &[String]) -> EvtxRecordInfo {
// Processing for speed.
// For example, fetching the value of "Event.System.EventID" from a Value requires three accesses, as in value["Event"]["System"]["EventID"].
@@ -241,8 +208,9 @@ pub fn create_rec_info(data: Value, path: String, keys: &Vec<String>) -> EvtxRec
// This way the value can be fetched by specifying the key "Event.System.EventID" just once, so it should be faster.
// Also, pulling values out of a serde_json Value via value["Event"] is oddly slow, so this should help in that respect too.
// On top of that, serde_json internally uses the standard library HashMap, and hashbrown is said to be faster.
let mut key_2_values = hashbrown::HashMap::new();
for key in keys {
let val = get_event_value(key, &rec.record);
let val = get_event_value(key, &data);
if val.is_none() {
continue;
}
@@ -252,45 +220,206 @@ pub fn create_rec_info(data: Value, path: String, keys: &Vec<String>) -> EvtxRec
continue;
}
rec.key_2_value.insert(key.to_string(), val.unwrap());
key_2_values.insert(key.to_string(), val.unwrap());
}
return rec;
// Build the EvtxRecordInfo
let data_str = data.to_string();
let rec_info = if configs::CONFIG.read().unwrap().args.is_present("full-data") {
Option::Some(create_recordinfos(&data))
} else {
Option::None
};
EvtxRecordInfo {
evtx_filepath: path,
record: data,
data_string: data_str,
key_2_value: key_2_values,
record_information: rec_info,
}
}
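Note: a minimal sketch of the flattening trick the comments above describe, using a hypothetical record: the nested path is walked once, after which every lookup is a single hashbrown map probe.
use serde_json::json;
fn main() {
    let record = json!({ "Event": { "System": { "EventID": 4624 } } });
    // Three indexing steps, paid on every nested access...
    let val = &record["Event"]["System"]["EventID"];
    // ...versus one map probe per lookup once the value is cached.
    let mut key_2_value: hashbrown::HashMap<String, String> = hashbrown::HashMap::new();
    key_2_value.insert("Event.System.EventID".to_string(), val.to_string());
    assert_eq!(key_2_value.get("Event.System.EventID").map(String::as_str), Some("4624"));
}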
/**
* Builds the string written to the CSV record info column
*/
fn create_recordinfos(record: &Value) -> String {
let mut output = vec![];
_collect_recordinfo(&mut vec![], "", record, &mut output);
// Sort so that the same record always produces the same output
output.sort_by(|(left, left_data), (right, right_data)| {
let ord = left.cmp(right);
if ord == Ordering::Equal {
left_data.cmp(right_data)
} else {
ord
}
});
let summary: Vec<String> = output
.iter()
.map(|(key, value)| {
return format!("{}:{}", key, value);
})
.collect();
// On standard output the table cells are already separated by pipes, so do not use a pipe delimiter here
if configs::CONFIG.read().unwrap().args.is_present("output") {
summary.join(" | ")
} else {
summary.join(" ")
}
}
/**
* Collects every element written to the CSV fields column
*/
fn _collect_recordinfo<'a>(
keys: &mut Vec<&'a str>,
parent_key: &'a str,
value: &'a Value,
output: &mut Vec<(String, String)>,
) {
match value {
Value::Array(ary) => {
for sub_value in ary {
_collect_recordinfo(keys, parent_key, sub_value, output);
}
}
Value::Object(obj) => {
// The implementation is a bit odd because of lifetime constraints
if !parent_key.is_empty() {
keys.push(parent_key);
}
for (key, value) in obj {
// Do not output attributes
if key.ends_with("_attributes") {
continue;
}
// Do not output Event.System
if key.eq("System") && keys.get(0).unwrap_or(&"").eq(&"Event") {
continue;
}
_collect_recordinfo(keys, key, value, output);
}
if !parent_key.is_empty() {
keys.pop();
}
}
Value::Null => (),
_ => {
// Collect only the values of leaf elements
let strval = value_to_string(value);
if let Some(strval) = strval {
let strval = strval.trim().chars().fold(String::default(), |mut acc, c| {
if c.is_control() || c.is_ascii_whitespace() {
acc.push(' ');
} else {
acc.push(c);
};
acc
});
output.push((parent_key.to_string(), strval));
}
}
}
}
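Note: a self-contained check of the whitespace folding in the leaf branch above; after trimming, each control character and ASCII whitespace character is replaced with a single space:
fn main() {
    let raw = " lsass.exe\r\n-> C:\\Windows\t(test) ";
    let normalized = raw.trim().chars().fold(String::default(), |mut acc, c| {
        if c.is_control() || c.is_ascii_whitespace() {
            acc.push(' ');
        } else {
            acc.push(c);
        }
        acc
    });
    assert_eq!(normalized, "lsass.exe  -> C:\\Windows (test)");
}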
#[cfg(test)]
mod tests {
use crate::detections::utils;
use crate::filter::DataFilterRule;
use regex::Regex;
use serde_json::Value;
#[test]
fn test_create_recordinfos() {
let record_json_str = r#"
{
"Event": {
"System": {"EventID": 4103, "Channel": "PowerShell", "Computer":"DESKTOP-ICHIICHI"},
"UserData": {"User": "u1", "AccessMask": "%%1369", "Process":"lsass.exe"},
"UserData_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
},
"Event_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
}"#;
match serde_json::from_str(record_json_str) {
Ok(record) => {
let ret = utils::create_recordinfos(&record);
// System is excluded / attributes (_attributes) are also excluded / entries are sorted by key
let expected = "AccessMask:%%1369 Process:lsass.exe User:u1".to_string();
assert_eq!(ret, expected);
}
Err(_) => {
panic!("Failed to parse json record.");
}
}
}
#[test]
fn test_create_recordinfos2() {
// Special case for EventData
let record_json_str = r#"
{
"Event": {
"System": {"EventID": 4103, "Channel": "PowerShell", "Computer":"DESKTOP-ICHIICHI"},
"EventData": {
"Binary": "hogehoge",
"Data":[
"Data1",
"DataData2",
"",
"DataDataData3"
]
},
"EventData_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
},
"Event_attributes": {"xmlns": "http://schemas.microsoft.com/win/2004/08/events/event"}
}"#;
match serde_json::from_str(record_json_str) {
Ok(record) => {
let ret = utils::create_recordinfos(&record);
// System is excluded / attributes (_attributes) are also excluded / entries are sorted by key
let expected = "Binary:hogehoge Data: Data:Data1 Data:DataData2 Data:DataDataData3"
.to_string();
assert_eq!(ret, expected);
}
Err(_) => {
panic!("Failed to parse json record.");
}
}
}
#[test]
fn test_check_regex() {
let regexes = utils::read_txt("./rules/config/regex/detectlist_suspicous_services.txt")
.unwrap()
.into_iter()
.map(|regex_str| Regex::new(&regex_str).unwrap())
.collect();
let regexes: Vec<Regex> =
utils::read_txt("./rules/config/regex/detectlist_suspicous_services.txt")
.unwrap()
.into_iter()
.map(|regex_str| Regex::new(&regex_str).unwrap())
.collect();
let regextext = utils::check_regex("\\cvtres.exe", &regexes);
assert!(regextext == true);
assert!(regextext);
let regextext = utils::check_regex("\\hogehoge.exe", &regexes);
assert!(regextext == false);
assert!(!regextext);
}
#[test]
fn test_check_allowlist() {
let commandline = "\"C:\\Program Files\\Google\\Update\\GoogleUpdate.exe\"";
let allowlist = utils::read_txt("./rules/config/regex/allowlist_legitimate_services.txt")
.unwrap()
.into_iter()
.map(|allow_str| Regex::new(&allow_str).unwrap())
.collect();
assert!(true == utils::check_allowlist(commandline, &allowlist));
let allowlist: Vec<Regex> =
utils::read_txt("./rules/config/regex/allowlist_legitimate_services.txt")
.unwrap()
.into_iter()
.map(|allow_str| Regex::new(&allow_str).unwrap())
.collect();
assert!(utils::check_allowlist(commandline, &allowlist));
let commandline = "\"C:\\Program Files\\Google\\Update\\GoogleUpdate2.exe\"";
assert!(false == utils::check_allowlist(commandline, &allowlist));
assert!(!utils::check_allowlist(commandline, &allowlist));
}
#[test]
@@ -350,31 +479,4 @@ mod tests {
assert!(utils::get_serde_number_to_string(&event_record["Event"]["EventData"]).is_none());
}
#[test]
/// Tests that the function applying the given regex to the given string works correctly
fn test_remove_space_control() {
let test_filter_rule = DataFilterRule {
regex_rule: Regex::new(r"[\r\n\t]+").unwrap(),
replace_str: "".to_string(),
};
let none_test_str: Option<&String> = None;
assert_eq!(
utils::replace_target_character(none_test_str, None).is_none(),
true
);
assert_eq!(
utils::replace_target_character(none_test_str, Some(&test_filter_rule)).is_none(),
true
);
let tmp = "h\ra\ny\ta\tb\nu\r\nsa".to_string();
let test_str: Option<&String> = Some(&tmp);
assert_eq!(
utils::replace_target_character(test_str, Some(&test_filter_rule)).unwrap(),
"hayabusa"
);
}
}


@@ -2,92 +2,18 @@ use crate::detections::configs;
use crate::detections::print::AlertMessage;
use crate::detections::print::ERROR_LOG_STACK;
use crate::detections::print::QUIET_ERRORS_FLAG;
use crate::detections::utils;
use hashbrown::HashMap;
use hashbrown::HashSet;
use lazy_static::lazy_static;
use regex::Regex;
use std::fs::File;
use std::io::BufWriter;
use std::io::{BufRead, BufReader};
lazy_static! {
static ref IDS_REGEX: Regex =
Regex::new(r"^[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{12}$").unwrap();
pub static ref FILTER_REGEX: HashMap<String, DataFilterRule> = load_record_filters();
}
#[derive(Debug)]
pub struct DataFilterRule {
pub regex_rule: Regex,
pub replace_str: String,
}
fn load_record_filters() -> HashMap<String, DataFilterRule> {
let file_path = "./rules/config/regex/record_data_filter.txt";
let read_result = utils::read_csv(file_path);
let mut ret = HashMap::new();
if read_result.is_err() {
if configs::CONFIG.read().unwrap().args.is_present("verbose") {
AlertMessage::warn(
&mut BufWriter::new(std::io::stderr().lock()),
&format!("{} does not exist", file_path),
)
.ok();
}
if !*QUIET_ERRORS_FLAG {
ERROR_LOG_STACK
.lock()
.unwrap()
.push(format!("{} does not exist", file_path));
}
return HashMap::default();
}
read_result.unwrap().into_iter().for_each(|line| {
if line.len() != 3 {
return;
}
let empty = &"".to_string();
let key = line.get(0).unwrap_or(empty).trim();
let regex_str = line.get(1).unwrap_or(empty).trim();
let replaced_str = line.get(2).unwrap_or(empty).trim();
if key.len() == 0 || regex_str.len() == 0 {
return;
}
let regex_rule: Option<Regex> = match Regex::new(regex_str) {
Ok(regex) => Some(regex),
Err(_err) => {
let errmsg = format!("failed to read regex filter in record_data_filter.txt");
if configs::CONFIG.read().unwrap().args.is_present("verbose") {
AlertMessage::alert(&mut BufWriter::new(std::io::stderr().lock()), &errmsg)
.ok();
}
if !*QUIET_ERRORS_FLAG {
ERROR_LOG_STACK
.lock()
.unwrap()
.push(format!("[ERROR] {}", errmsg));
}
None
}
};
if regex_rule.is_none() {
return;
}
ret.insert(
key.to_string(),
DataFilterRule {
regex_rule: regex_rule.unwrap(),
replace_str: replaced_str.to_string(),
},
);
});
return ret;
}
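Note: from the parsing above, record_data_filter.txt is a three-column CSV of event key, regex, and replacement. A hedged sketch with a hypothetical rule line, reusing the pattern from test_remove_space_control further up:
use regex::Regex;
fn main() {
    // Hypothetical line:  Event.EventData.Payload,[\r\n\t]+,
    let regex_rule = Regex::new(r"[\r\n\t]+").unwrap();
    let replace_str = "";
    // Strips CR/LF/TAB runs, as the resulting DataFilterRule would.
    assert_eq!(regex_rule.replace_all("h\ra\ny\ta\tb\nu\r\nsa", replace_str), "hayabusa");
}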
#[derive(Clone, Debug)]
pub struct RuleExclude {
pub no_use_rule: HashSet<String>,
@@ -104,12 +30,18 @@ pub fn exclude_ids() -> RuleExclude {
.args
.is_present("enable-noisy-rules")
{
exclude_ids.insert_ids("./rules/config/noisy_rules.txt");
exclude_ids.insert_ids(&format!(
"{}/noisy_rules.txt",
configs::CONFIG.read().unwrap().folder_path
));
};
exclude_ids.insert_ids("./rules/config/exclude_rules.txt");
exclude_ids.insert_ids(&format!(
"{}/exclude_rules.txt",
configs::CONFIG.read().unwrap().folder_path
));
return exclude_ids;
exclude_ids
}
impl RuleExclude {
@@ -129,14 +61,14 @@ impl RuleExclude {
.unwrap()
.push(format!("{} does not exist", filename));
}
return ();
return;
}
let reader = BufReader::new(f.unwrap());
for v in reader.lines() {
let v = v.unwrap().split("#").collect::<Vec<&str>>()[0]
let v = v.unwrap().split('#').collect::<Vec<&str>>()[0]
.trim()
.to_string();
if v.is_empty() || !IDS_REGEX.is_match(&v) {
if v.is_empty() || !configs::IDS_REGEX.is_match(&v) {
// Ignore empty lines and validate the ID
continue;
}


@@ -3,5 +3,6 @@ pub mod detections;
pub mod filter;
pub mod notify;
pub mod omikuji;
pub mod options;
pub mod timeline;
pub mod yaml;


@@ -1,33 +1,41 @@
extern crate downcast_rs;
extern crate serde;
extern crate serde_derive;
#[cfg(target_os = "windows")]
extern crate static_vcruntime;
use chrono::Datelike;
use chrono::{DateTime, Local};
use chrono::{DateTime, Datelike, Local, TimeZone};
use evtx::{EvtxParser, ParserSettings};
use git2::Repository;
use hashbrown::{HashMap, HashSet};
use hayabusa::detections::configs::load_pivot_keywords;
use hayabusa::detections::detection::{self, EvtxRecordInfo};
use hayabusa::detections::pivot::PIVOT_KEYWORD;
use hayabusa::detections::print::{
AlertMessage, ERROR_LOG_PATH, ERROR_LOG_STACK, QUIET_ERRORS_FLAG, STATISTICS_FLAG,
AlertMessage, ERROR_LOG_PATH, ERROR_LOG_STACK, PIVOT_KEYWORD_LIST_FLAG, QUIET_ERRORS_FLAG,
STATISTICS_FLAG,
};
use hayabusa::detections::rule::{get_detection_keys, RuleNode};
use hayabusa::filter;
use hayabusa::omikuji::Omikuji;
use hayabusa::options::level_tuning::LevelTuning;
use hayabusa::yaml::ParseYaml;
use hayabusa::{afterfact::after_fact, detections::utils};
use hayabusa::{detections::configs, timeline::timeline::Timeline};
use hayabusa::{detections::configs, timeline::timelines::Timeline};
use hhmmss::Hhmmss;
use pbr::ProgressBar;
use serde_json::Value;
use std::collections::{HashMap, HashSet};
use std::ffi::OsStr;
use std::cmp::Ordering;
use std::ffi::{OsStr, OsString};
use std::fmt::Display;
use std::fs::create_dir;
use std::io::BufWriter;
use std::io::{BufWriter, Write};
use std::path::Path;
use std::sync::Arc;
use std::time::SystemTime;
use std::{
env,
fs::{self, File},
path::PathBuf,
vec,
@@ -37,7 +45,7 @@ use tokio::spawn;
use tokio::task::JoinHandle;
#[cfg(target_os = "windows")]
use {is_elevated::is_elevated, std::env};
use is_elevated::is_elevated;
// Number of records to run timeline and detection on at a time
const MAX_DETECT_RECORDS: usize = 5000;
@@ -53,25 +61,56 @@ pub struct App {
rule_keys: Vec<String>,
}
impl Default for App {
fn default() -> Self {
Self::new()
}
}
impl App {
pub fn new() -> App {
return App {
App {
rt: utils::create_tokio_runtime(),
rule_keys: Vec::new(),
};
}
}
fn exec(&mut self) {
if *PIVOT_KEYWORD_LIST_FLAG {
load_pivot_keywords("config/pivot_keywords.txt");
}
let analysis_start_time: DateTime<Local> = Local::now();
// Show usage when no arguments.
if std::env::args().len() == 1 {
self.output_logo();
println!();
println!("{}", configs::CONFIG.read().unwrap().args.usage());
println!();
return;
}
if !configs::CONFIG.read().unwrap().args.is_present("quiet") {
self.output_logo();
println!("");
println!();
self.output_eggs(&format!(
"{:02}/{:02}",
&analysis_start_time.month().to_owned(),
&analysis_start_time.day().to_owned()
));
}
if !self.is_matched_architecture_and_binary() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
"The hayabusa version you ran does not match your PC architecture.\nPlease use the correct architecture. (Binary ending in -x64.exe for 64-bit and -x86.exe for 32-bit.)",
)
.ok();
println!();
return;
}
if configs::CONFIG
.read()
.unwrap()
@@ -79,7 +118,11 @@ impl App {
.is_present("update-rules")
{
match self.update_rules() {
Ok(_ok) => println!("Rules updated successfully."),
Ok(output) => {
if output != "You currently have the latest rules." {
println!("Rules updated successfully.");
}
}
Err(e) => {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
@@ -88,25 +131,34 @@ impl App {
.ok();
}
}
println!();
return;
}
if !Path::new("./config").exists() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"Hayabusa could not find the config directory.\nPlease run it from the Hayabusa root directory.\nExample: ./hayabusa-1.0.0-windows-x64.exe".to_string()
"Hayabusa could not find the config directory.\nPlease run it from the Hayabusa root directory.\nExample: ./hayabusa-1.0.0-windows-x64.exe"
)
.ok();
return;
}
if configs::CONFIG.read().unwrap().args.args.len() == 0 {
println!(
"{}",
configs::CONFIG.read().unwrap().args.usage().to_string()
);
println!("");
return;
}
if let Some(csv_path) = configs::CONFIG.read().unwrap().args.value_of("output") {
for (key, _) in PIVOT_KEYWORD.read().unwrap().iter() {
let keywords_file_name = csv_path.to_owned() + "-" + key + ".txt";
if Path::new(&keywords_file_name).exists() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&format!(
" The file {} already exists. Please specify a different filename.",
&keywords_file_name
),
)
.ok();
return;
}
}
if Path::new(csv_path).exists() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
@@ -119,9 +171,10 @@ impl App {
return;
}
}
if *STATISTICS_FLAG {
println!("Generating Event ID Statistics");
println!("");
println!();
}
if configs::CONFIG
.read()
@@ -138,26 +191,26 @@ impl App {
if !filepath.ends_with(".evtx")
|| Path::new(filepath)
.file_stem()
.unwrap_or(OsStr::new("."))
.unwrap_or_else(|| OsStr::new("."))
.to_str()
.unwrap()
.trim()
.starts_with(".")
.starts_with('.')
{
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"--filepath only accepts .evtx files. Hidden files are ignored.".to_string(),
"--filepath only accepts .evtx files. Hidden files are ignored.",
)
.ok();
return;
}
self.analysis_files(vec![PathBuf::from(filepath)]);
} else if let Some(directory) = configs::CONFIG.read().unwrap().args.value_of("directory") {
let evtx_files = self.collect_evtxfiles(&directory);
if evtx_files.len() == 0 {
let evtx_files = self.collect_evtxfiles(directory);
if evtx_files.is_empty() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"No .evtx files were found.".to_string(),
"No .evtx files were found.",
)
.ok();
return;
@@ -171,27 +224,116 @@ impl App {
{
self.print_contributors();
return;
} else if configs::CONFIG
.read()
.unwrap()
.args
.is_present("level-tuning")
{
let level_tuning_config_path = configs::CONFIG
.read()
.unwrap()
.args
.value_of("level-tuning")
.unwrap_or("./config/level_tuning.txt")
.to_string();
if Path::new(&level_tuning_config_path).exists() {
if let Err(err) = LevelTuning::run(
&level_tuning_config_path,
configs::CONFIG
.read()
.unwrap()
.args
.value_of("rules")
.unwrap_or("rules"),
) {
AlertMessage::alert(&mut BufWriter::new(std::io::stderr().lock()), &err).ok();
}
} else {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
"Need rule_levels.txt file to use --level-tuning option [default: ./config/level_tuning.txt]",
)
.ok();
}
return;
}
let analysis_end_time: DateTime<Local> = Local::now();
let analysis_duration = analysis_end_time.signed_duration_since(analysis_start_time);
println!("");
println!();
println!("Elapsed Time: {}", &analysis_duration.hhmmssxxx());
println!("");
println!();
// If the Q option is given, or there are no parse errors, the error stack is 0, so the error log file itself is not created.
if ERROR_LOG_STACK.lock().unwrap().len() > 0 {
AlertMessage::create_error_log(ERROR_LOG_PATH.to_string());
}
if *PIVOT_KEYWORD_LIST_FLAG {
// When outputting to a file
if let Some(pivot_file) = configs::CONFIG.read().unwrap().args.value_of("output") {
for (key, pivot_keyword) in PIVOT_KEYWORD.read().unwrap().iter() {
let mut f = BufWriter::new(
fs::File::create(pivot_file.to_owned() + "-" + key + ".txt").unwrap(),
);
let mut output = "".to_string();
output += &format!("{}: ", key).to_string();
output += "( ";
for i in pivot_keyword.fields.iter() {
output += &format!("%{}% ", i).to_string();
}
output += "):";
output += "\n";
for i in pivot_keyword.keywords.iter() {
output += &format!("{}\n", i).to_string();
}
f.write_all(output.as_bytes()).unwrap();
}
//output to stdout
let mut output =
"Pivot keyword results saved to the following files:\n".to_string();
for (key, _) in PIVOT_KEYWORD.read().unwrap().iter() {
output += &(pivot_file.to_owned() + "-" + key + ".txt" + "\n");
}
println!("{}", output);
} else {
// When printing to standard output
let mut output = "The following pivot keywords were found:\n".to_string();
for (key, pivot_keyword) in PIVOT_KEYWORD.read().unwrap().iter() {
output += &format!("{}: ", key).to_string();
output += "( ";
for i in pivot_keyword.fields.iter() {
output += &format!("%{}% ", i).to_string();
}
output += "):";
output += "\n";
for i in pivot_keyword.keywords.iter() {
output += &format!("{}\n", i).to_string();
}
output += "\n";
}
print!("{}", output);
}
}
}
#[cfg(not(target_os = "windows"))]
fn collect_liveanalysis_files(&self) -> Option<Vec<PathBuf>> {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"-l / --liveanalysis needs to be run as Administrator on Windows.\r\n".to_string(),
"-l / --liveanalysis needs to be run as Administrator on Windows.\r\n",
)
.ok();
return None;
None
}
#[cfg(target_os = "windows")]
@@ -200,22 +342,22 @@ impl App {
let log_dir = env::var("windir").expect("windir is not found");
let evtx_files =
self.collect_evtxfiles(&[log_dir, "System32\\winevt\\Logs".to_string()].join("/"));
if evtx_files.len() == 0 {
if evtx_files.is_empty() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"No .evtx files were found.".to_string(),
"No .evtx files were found.",
)
.ok();
return None;
}
return Some(evtx_files);
Some(evtx_files)
} else {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"-l / --liveanalysis needs to be run as Administrator on Windows.\r\n".to_string(),
"-l / --liveanalysis needs to be run as Administrator on Windows.\r\n",
)
.ok();
return None;
None
}
}
@@ -243,27 +385,27 @@ impl App {
let path = e.unwrap().path();
if path.is_dir() {
path.to_str().and_then(|path_str| {
path.to_str().map(|path_str| {
let subdir_ret = self.collect_evtxfiles(path_str);
ret.extend(subdir_ret);
return Option::Some(());
Option::Some(())
});
} else {
let path_str = path.to_str().unwrap_or("");
if path_str.ends_with(".evtx")
&& !Path::new(path_str)
.file_stem()
.unwrap_or(OsStr::new("."))
.unwrap_or_else(|| OsStr::new("."))
.to_str()
.unwrap()
.starts_with(".")
.starts_with('.')
{
ret.push(path);
}
}
}
return ret;
ret
}
fn print_contributors(&self) {
@@ -295,10 +437,10 @@ impl App {
&filter::exclude_ids(),
);
if rule_files.len() == 0 {
if rule_files.is_empty() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"No rules were loaded. Please download the latest rules with the --update-rules option.\r\n".to_string(),
"No rules were loaded. Please download the latest rules with the --update-rules option.\r\n",
)
.ok();
return;
@@ -316,7 +458,7 @@ impl App {
pb.inc();
}
detection.add_aggcondition_msges(&self.rt);
if !*STATISTICS_FLAG {
if !*STATISTICS_FLAG && !*PIVOT_KEYWORD_LIST_FLAG {
after_fact();
}
}
@@ -369,14 +511,14 @@ impl App {
// Filter using target_eventids.txt.
let data = record_result.unwrap().data;
if self._is_target_event_id(&data) == false {
if !self._is_target_event_id(&data) {
continue;
}
// Convert to the EvtxRecordInfo struct
records_per_detect.push(data);
}
if records_per_detect.len() == 0 {
if records_per_detect.is_empty() {
break;
}
@@ -397,7 +539,7 @@ impl App {
tl.tm_stats_dsp_msg();
return detection;
detection
}
async fn create_rec_infos(
@@ -407,28 +549,28 @@ impl App {
) -> Vec<EvtxRecordInfo> {
let path = Arc::new(path.to_string());
let rule_keys = Arc::new(rule_keys);
let threads: Vec<JoinHandle<EvtxRecordInfo>> = records_per_detect
.into_iter()
.map(|rec| {
let arc_rule_keys = Arc::clone(&rule_keys);
let arc_path = Arc::clone(&path);
return spawn(async move {
let rec_info =
utils::create_rec_info(rec, arc_path.to_string(), &arc_rule_keys);
return rec_info;
let threads: Vec<JoinHandle<EvtxRecordInfo>> = {
let this = records_per_detect
.into_iter()
.map(|rec| -> JoinHandle<EvtxRecordInfo> {
let arc_rule_keys = Arc::clone(&rule_keys);
let arc_path = Arc::clone(&path);
spawn(async move {
utils::create_rec_info(rec, arc_path.to_string(), &arc_rule_keys)
})
});
})
.collect();
FromIterator::from_iter(this)
};
let mut ret = vec![];
for thread in threads.into_iter() {
ret.push(thread.await.unwrap());
}
return ret;
ret
}
fn get_all_keys(&self, rules: &Vec<RuleNode>) -> Vec<String> {
fn get_all_keys(&self, rules: &[RuleNode]) -> Vec<String> {
let mut key_set = HashSet::new();
for rule in rules {
let keys = get_detection_keys(rule);
@@ -436,7 +578,7 @@ impl App {
}
let ret: Vec<String> = key_set.into_iter().collect();
return ret;
ret
}
// Filter based on the settings in target_eventids.txt.
@@ -446,11 +588,11 @@ impl App {
return true;
}
return match eventid.unwrap() {
match eventid.unwrap() {
Value::String(s) => utils::is_target_event_id(s),
Value::Number(n) => utils::is_target_event_id(&n.to_string()),
_ => true, // If the EventID cannot be obtained from the record, do not filter it out
};
}
}
fn evtx_to_jsons(&self, evtx_filepath: PathBuf) -> Option<EvtxParser<File>> {
@@ -462,11 +604,11 @@ impl App {
parse_config = parse_config.num_threads(0); // It was slow without this setting, so set it explicitly.
let evtx_parser = evtx_parser.with_configuration(parse_config);
return Option::Some(evtx_parser);
Option::Some(evtx_parser)
}
Err(e) => {
eprintln!("{}", e);
return Option::None;
Option::None
}
}
}
@@ -479,8 +621,8 @@ impl App {
/// output logo
fn output_logo(&self) {
let fp = &format!("art/logo.txt");
let content = fs::read_to_string(fp).unwrap_or("".to_owned());
let fp = &"art/logo.txt".to_string();
let content = fs::read_to_string(fp).unwrap_or_default();
println!("{}", content);
}
@@ -495,29 +637,36 @@ impl App {
match eggs.get(exec_datestr) {
None => {}
Some(path) => {
let content = fs::read_to_string(path).unwrap_or("".to_owned());
let content = fs::read_to_string(path).unwrap_or_default();
println!("{}", content);
}
}
}
/// Update rules (hayabusa-rules subrepository)
fn update_rules(&self) -> Result<(), git2::Error> {
fn update_rules(&self) -> Result<String, git2::Error> {
let mut result;
let mut prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let mut prev_modified_rules: HashSet<String> = HashSet::default();
let hayabusa_repo = Repository::open(Path::new("."));
let hayabusa_rule_repo = Repository::open(Path::new("./rules"));
let hayabusa_rule_repo = Repository::open(Path::new("rules"));
if hayabusa_repo.is_err() && hayabusa_rule_repo.is_err() {
println!(
"Attempting to git clone the hayabusa-rules repository into the rules folder."
);
// If neither repository could be opened, git clone the hayabusa-rules repository
self.clone_rules()
result = self.clone_rules();
} else if hayabusa_rule_repo.is_ok() {
// When the rules repository exists
// Not being able to fetch origin/main is likely a network issue or similar, so do not git clone
self.pull_repository(hayabusa_rule_repo.unwrap())
prev_modified_rules = self.get_updated_rules("rules", &prev_modified_time);
prev_modified_time = fs::metadata("rules").unwrap().modified().unwrap();
result = self.pull_repository(hayabusa_rule_repo.unwrap());
} else {
// If the hayabusa repository exists, its submodule info should exist too, so update it
let rules_path = Path::new("./rules");
// When the hayabusa-rules repository does not exist under rules
// If the hayabusa repository exists, its submodule info should exist too, so update it
prev_modified_time = fs::metadata("rules").unwrap().modified().unwrap();
let rules_path = Path::new("rules");
if !rules_path.exists() {
create_dir(rules_path).ok();
}
@@ -529,28 +678,31 @@ impl App {
for mut submodule in submodules {
submodule.update(true, None)?;
let submodule_repo = submodule.open()?;
match self.pull_repository(submodule_repo) {
Ok(it) => it,
Err(e) => {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&format!("Failed submodule update. {}", e),
)
.ok();
is_success_submodule_update = false;
}
if let Err(e) = self.pull_repository(submodule_repo) {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&format!("Failed submodule update. {}", e),
)
.ok();
is_success_submodule_update = false;
}
}
if is_success_submodule_update {
Ok(())
result = Ok("Successed submodule update".to_string());
} else {
Err(git2::Error::from_str(&String::default()))
result = Err(git2::Error::from_str(&String::default()));
}
}
if result.is_ok() {
let updated_modified_rules = self.get_updated_rules("rules", &prev_modified_time);
result =
self.print_diff_modified_rule_dates(prev_modified_rules, updated_modified_rules);
}
result
}
/// Pull (fetch and fast-forward merge) the repository given as input_repo.
fn pull_repository(&self, input_repo: Repository) -> Result<(), git2::Error> {
fn pull_repository(&self, input_repo: Repository) -> Result<String, git2::Error> {
match input_repo
.find_remote("origin")?
.fetch(&["main"], None, None)
@@ -568,18 +720,18 @@ impl App {
let fetch_commit = input_repo.reference_to_annotated_commit(&fetch_head)?;
let analysis = input_repo.merge_analysis(&[&fetch_commit])?;
if analysis.0.is_up_to_date() {
Ok(())
Ok("Already up to date".to_string())
} else if analysis.0.is_fast_forward() {
let mut reference = input_repo.find_reference("refs/heads/main")?;
reference.set_target(fetch_commit.id(), "Fast-Forward")?;
input_repo.set_head("refs/heads/main")?;
input_repo.checkout_head(Some(git2::build::CheckoutBuilder::default().force()))?;
Ok(())
Ok("Finished fast forward merge.".to_string())
} else if analysis.0.is_normal() {
AlertMessage::alert(
&mut BufWriter::new(std::io::stderr().lock()),
&"update-rules option is git Fast-Forward merge only. please check your rules folder."
.to_string(),
"update-rules option is git Fast-Forward merge only. please check your rules folder."
,
).ok();
Err(git2::Error::from_str(&String::default()))
} else {
@@ -588,14 +740,14 @@ impl App {
}
/// Function that git clones the hayabusa-rules repository into the rules folder
fn clone_rules(&self) -> Result<(), git2::Error> {
fn clone_rules(&self) -> Result<String, git2::Error> {
match Repository::clone(
"https://github.com/Yamato-Security/hayabusa-rules.git",
"rules",
) {
Ok(_repo) => {
println!("Finished cloning the hayabusa-rules repository.");
Ok(())
Ok("Finished clone".to_string())
}
Err(e) => {
AlertMessage::alert(
@@ -610,11 +762,106 @@ impl App {
}
}
}
/// Creates a HashSet of rule file entries under the rules folder. The format is "[rule title in yaml]|[rule modified date in yaml]|[filepath]|[rule type in yaml]"
fn get_updated_rules(
&self,
rule_folder_path: &str,
target_date: &SystemTime,
) -> HashSet<String> {
let mut rulefile_loader = ParseYaml::new();
// The level passed to read_dir is hard-coded so that all rules are checked.
rulefile_loader
.read_dir(
rule_folder_path,
"INFORMATIONAL",
&filter::RuleExclude {
no_use_rule: HashSet::new(),
},
)
.ok();
let hash_set_keys: HashSet<String> = rulefile_loader
.files
.into_iter()
.filter_map(|(filepath, yaml)| {
let file_modified_date = fs::metadata(&filepath).unwrap().modified().unwrap();
if file_modified_date.cmp(target_date).is_gt() {
let yaml_date = yaml["date"].as_str().unwrap_or("-");
return Option::Some(format!(
"{}|{}|{}|{}",
yaml["title"].as_str().unwrap_or(&String::default()),
yaml["modified"].as_str().unwrap_or(yaml_date),
&filepath,
yaml["ruletype"].as_str().unwrap_or("Other")
));
}
Option::None
})
.collect();
hash_set_keys
}
/// print updated rule files.
fn print_diff_modified_rule_dates(
&self,
prev_sets: HashSet<String>,
updated_sets: HashSet<String>,
) -> Result<String, git2::Error> {
let diff = updated_sets.difference(&prev_sets);
let mut update_count_by_rule_type: HashMap<String, u128> = HashMap::new();
let mut latest_update_date = Local.timestamp(0, 0);
for diff_key in diff {
let tmp: Vec<&str> = diff_key.split('|').collect();
let file_modified_date = fs::metadata(&tmp[2]).unwrap().modified().unwrap();
let dt_local: DateTime<Local> = file_modified_date.into();
if latest_update_date.cmp(&dt_local) == Ordering::Less {
latest_update_date = dt_local;
}
*update_count_by_rule_type
.entry(tmp[3].to_string())
.or_insert(0b0) += 1;
println!(
"[Updated] {} (Modified: {} | Path: {})",
tmp[0], tmp[1], tmp[2]
);
}
println!();
for (key, value) in &update_count_by_rule_type {
println!("Updated {} rules: {}", key, value);
}
if !&update_count_by_rule_type.is_empty() {
Ok("Rule updated".to_string())
} else {
println!("You currently have the latest rules.");
Ok("You currently have the latest rules.".to_string())
}
}
/// check architecture
fn is_matched_architecture_and_binary(&self) -> bool {
if cfg!(target_os = "windows") {
let is_processor_arch_32bit = env::var_os("PROCESSOR_ARCHITECTURE")
.unwrap_or_default()
.eq("x86");
// PROCESSOR_ARCHITEW6432 does not exist in 32-bit environments, so if the environment variable is absent, assume a 32-bit environment
let not_wow_flag = env::var_os("PROCESSOR_ARCHITEW6432")
.unwrap_or_else(|| OsString::from("x86"))
.eq("x86");
return (cfg!(target_pointer_width = "64") && !is_processor_arch_32bit)
|| (cfg!(target_pointer_width = "32") && is_processor_arch_32bit && not_wow_flag);
}
true
}
}
#[cfg(test)]
mod tests {
use crate::App;
use std::time::SystemTime;
#[test]
fn test_collect_evtxfiles() {
@@ -631,4 +878,20 @@ mod tests {
assert_eq!(is_contains, &true);
})
}
#[test]
fn test_get_updated_rules() {
let app = App::new();
let prev_modified_time: SystemTime = SystemTime::UNIX_EPOCH;
let prev_modified_rules =
app.get_updated_rules("test_files/rules/level_yaml", &prev_modified_time);
assert_eq!(prev_modified_rules.len(), 5);
let target_time: SystemTime = SystemTime::now();
let prev_modified_rules2 =
app.get_updated_rules("test_files/rules/level_yaml", &target_time);
assert_eq!(prev_modified_rules2.len(), 0);
}
}


@@ -18,7 +18,7 @@ impl SlackNotify {
eprintln!("WEBHOOK_URL not found");
return false;
}
return true;
true
}
// send message to slack.

src/options/level_tuning.rs

@@ -0,0 +1,161 @@
use crate::detections::{configs, utils};
use crate::filter;
use crate::yaml::ParseYaml;
use std::collections::HashMap;
use std::fs::{self, File};
use std::io::Write;
pub struct LevelTuning {}
impl LevelTuning {
pub fn run(level_tuning_config_path: &str, rules_path: &str) -> Result<(), String> {
let read_result = utils::read_csv(level_tuning_config_path);
if read_result.is_err() {
return Result::Err(read_result.as_ref().unwrap_err().to_string());
}
// Read Tuning files
let mut tuning_map: HashMap<String, String> = HashMap::new();
read_result.unwrap().into_iter().try_for_each(|line| -> Result<(), String> {
let id = match line.get(0) {
Some(_id) => {
if !configs::IDS_REGEX.is_match(_id) {
return Result::Err(format!("Failed to read level tuning file. {} is not correct id format, fix it.", _id));
}
_id
}
_ => return Result::Err("Failed to read id...".to_string())
};
let level = match line.get(1) {
Some(_level) => {
if _level.starts_with("informational")
|| _level.starts_with("low")
|| _level.starts_with("medium")
|| _level.starts_with("high")
|| _level.starts_with("critical") {
_level.split('#').collect::<Vec<&str>>()[0]
} else {
return Result::Err("level tuning file's level must in informational, low, medium, high, critical".to_string())
}
}
_ => return Result::Err("Failed to read level...".to_string())
};
tuning_map.insert(id.to_string(), level.to_string());
Ok(())
})?;
// Read Rule files
let mut rulefile_loader = ParseYaml::new();
let result_readdir =
rulefile_loader.read_dir(rules_path, "informational", &filter::exclude_ids());
if result_readdir.is_err() {
return Result::Err(format!("{}", result_readdir.unwrap_err()));
}
// Convert rule files
for (path, rule) in rulefile_loader.files {
if let Some(new_level) = tuning_map.get(rule["id"].as_str().unwrap()) {
println!("path: {}", path);
let mut content = match fs::read_to_string(&path) {
Ok(_content) => _content,
Err(e) => return Result::Err(e.to_string()),
};
let past_level = "level: ".to_string() + rule["level"].as_str().unwrap();
if new_level.starts_with("informational") {
content = content.replace(&past_level, "level: informational");
}
if new_level.starts_with("low") {
content = content.replace(&past_level, "level: low");
}
if new_level.starts_with("medium") {
content = content.replace(&past_level, "level: medium");
}
if new_level.starts_with("high") {
content = content.replace(&past_level, "level: high");
}
if new_level.starts_with("critical") {
content = content.replace(&past_level, "level: critical");
}
let mut file = match File::options().write(true).truncate(true).open(&path) {
Ok(file) => file,
Err(e) => return Result::Err(e.to_string()),
};
file.write_all(content.as_bytes()).unwrap();
file.flush().unwrap();
println!(
"level: {} -> {}",
rule["level"].as_str().unwrap(),
new_level
);
}
}
Result::Ok(())
}
}
#[cfg(test)]
mod tests {
// use crate::{filter::RuleExclude, yaml};
// use hashbrown::HashSet;
use super::*;
#[test]
fn rule_level_failed_to_open_file() -> Result<(), String> {
let level_tuning_config_path = "./none.txt";
let res = LevelTuning::run(level_tuning_config_path, "");
let expected = Result::Err("Cannot open file. [file:./none.txt]".to_string());
assert_eq!(res, expected);
Ok(())
}
#[test]
fn rule_level_id_error_file() -> Result<(), String> {
let level_tuning_config_path = "./test_files/config/level_tuning_error1.txt";
let res = LevelTuning::run(level_tuning_config_path, "");
let expected = Result::Err("Failed to read level tuning file. 12345678-1234-1234-1234-12 is not correct id format, fix it.".to_string());
assert_eq!(res, expected);
Ok(())
}
#[test]
fn rule_level_level_error_file() -> Result<(), String> {
let level_tuning_config_path = "./test_files/config/level_tuning_error2.txt";
let res = LevelTuning::run(level_tuning_config_path, "");
let expected = Result::Err(
"level tuning file's level must in informational, low, medium, high, critical"
.to_string(),
);
assert_eq!(res, expected);
Ok(())
}
#[test]
fn test_level_tuning_update_rule_files() {
let level_tuning_config_path = "./test_files/config/level_tuning.txt";
let rule_str = r#"
id: 12345678-1234-1234-1234-123456789012
level: informational
"#;
let expected_rule = r#"
id: 12345678-1234-1234-1234-123456789012
level: high
"#;
let path = "test_files/rules/level_tuning_test.yml";
let mut file = File::create(path).unwrap();
let buf = rule_str.as_bytes();
file.write_all(buf).unwrap();
file.flush().unwrap();
let res = LevelTuning::run(level_tuning_config_path, path);
assert_eq!(res, Ok(()));
assert_eq!(fs::read_to_string(path).unwrap(), expected_rule);
fs::remove_file(path).unwrap();
}
}

src/options/mod.rs

@@ -0,0 +1 @@
pub mod level_tuning;


@@ -1,2 +1,2 @@
pub mod statistics;
pub mod timeline;
pub mod timelines;


@@ -20,16 +20,16 @@ impl EventStatistics {
end_time: String,
stats_list: HashMap<String, usize>,
) -> EventStatistics {
return EventStatistics {
EventStatistics {
total,
filepath,
start_time,
end_time,
stats_list,
};
}
}
pub fn start(&mut self, records: &Vec<EvtxRecordInfo>) {
pub fn start(&mut self, records: &[EvtxRecordInfo]) {
// Output statistics only when the statistics option is given as an argument.
if !configs::CONFIG
.read()
@@ -49,8 +49,8 @@ impl EventStatistics {
self.stats_eventid(records);
}
fn stats_time_cnt(&mut self, records: &Vec<EvtxRecordInfo>) {
if records.len() == 0 {
fn stats_time_cnt(&mut self, records: &[EvtxRecordInfo]) {
if records.is_empty() {
return;
}
self.filepath = records[0].evtx_filepath.as_str().to_owned();
@@ -59,21 +59,19 @@ impl EventStatistics {
// This could admittedly be written a bit more cleanly.
for record in records.iter() {
let evttime = utils::get_event_value(
&"Event.System.TimeCreated_attributes.SystemTime".to_string(),
"Event.System.TimeCreated_attributes.SystemTime",
&record.record,
)
.and_then(|evt_value| {
return Option::Some(evt_value.to_string());
});
.map(|evt_value| evt_value.to_string());
if evttime.is_none() {
continue;
}
let evttime = evttime.unwrap();
if self.start_time.len() == 0 || evttime < self.start_time {
if self.start_time.is_empty() || evttime < self.start_time {
self.start_time = evttime.to_string();
}
if self.end_time.len() == 0 || evttime > self.end_time {
if self.end_time.is_empty() || evttime > self.end_time {
self.end_time = evttime;
}
}
@@ -81,10 +79,10 @@ impl EventStatistics {
}
// Aggregate by EventID
fn stats_eventid(&mut self, records: &Vec<EvtxRecordInfo>) {
fn stats_eventid(&mut self, records: &[EvtxRecordInfo]) {
// let mut evtstat_map = HashMap::new();
for record in records.iter() {
let evtid = utils::get_event_value(&"EventID".to_string(), &record.record);
let evtid = utils::get_event_value("EventID", &record.record);
if evtid.is_none() {
continue;
}


@@ -8,6 +8,12 @@ pub struct Timeline {
pub stats: EventStatistics,
}
impl Default for Timeline {
fn default() -> Self {
Self::new()
}
}
impl Timeline {
pub fn new() -> Timeline {
let totalcnt = 0;
@@ -17,10 +23,10 @@ impl Timeline {
let statslst = HashMap::new();
let statistic = EventStatistics::new(totalcnt, filepath, starttm, endtm, statslst);
return Timeline { stats: statistic };
Timeline { stats: statistic }
}
pub fn start(&mut self, records: &Vec<EvtxRecordInfo>) {
pub fn start(&mut self, records: &[EvtxRecordInfo]) {
self.stats.start(records);
}
@@ -41,12 +47,12 @@ impl Timeline {
sammsges.push(format!("Total Event Records: {}\n", self.stats.total));
sammsges.push(format!("First Timestamp: {}", self.stats.start_time));
sammsges.push(format!("Last Timestamp: {}\n", self.stats.end_time));
sammsges.push("Count (Percent)\tID\tEvent\t\tTimeline".to_string());
sammsges.push("--------------- ------- --------------- -------".to_string());
sammsges.push("Count (Percent)\tID\tEvent\t".to_string());
sammsges.push("--------------- ------- ---------------".to_string());
// Sort by aggregated count
let mut mapsorted: Vec<_> = self.stats.stats_list.iter().collect();
mapsorted.sort_by(|x, y| y.1.cmp(&x.1));
mapsorted.sort_by(|x, y| y.1.cmp(x.1));
// Generate the output message for each event ID
let stats_msges: Vec<String> = self.tm_stats_set_msg(mapsorted);
@@ -68,33 +74,31 @@ impl Timeline {
// Get event information (evttitle, etc.)
let conf = configs::CONFIG.read().unwrap();
// Set the information for entries registered in timeline_event_info.txt
// Set the information for entries registered in statistics_event_info.txt
match conf.event_timeline_config.get_event_id(*event_id) {
Some(e) => {
// Create one line of the output message
msges.push(format!(
"{0} ({1:.1}%)\t{2}\t{3}\t{4}",
"{0} ({1:.1}%)\t{2}\t{3}",
event_cnt,
(rate * 1000.0).round() / 10.0,
event_id,
e.evttitle,
e.detectflg
));
}
None => {
// Create one line of the output message
msges.push(format!(
"{0} ({1:.1}%)\t{2}\t{3}\t{4}",
"{0} ({1:.1}%)\t{2}\t{3}",
event_cnt,
(rate * 1000.0).round() / 10.0,
event_id,
"Unknown".to_string(),
"".to_string()
"Unknown",
));
}
}
}
msges.push("---------------------------------------".to_string());
return msges;
msges
}
}
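Note: the percentage column above is rounded to one decimal place by scaling; a quick check of the arithmetic, with no extra assumptions:
fn main() {
    // (rate * 1000.0).round() / 10.0 turns a 0..1 rate into a percentage
    // with one decimal place: 0.12345 -> 12.3
    let rate: f64 = 0.12345;
    assert_eq!((rate * 1000.0).round() / 10.0, 12.3);
}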


@@ -23,6 +23,12 @@ pub struct ParseYaml {
pub errorrule_count: u128,
}
impl Default for ParseYaml {
fn default() -> Self {
Self::new()
}
}
impl ParseYaml {
pub fn new() -> ParseYaml {
ParseYaml {
@@ -37,7 +43,7 @@ impl ParseYaml {
let mut file_content = String::new();
let mut fr = fs::File::open(path)
.map(|f| BufReader::new(f))
.map(BufReader::new)
.map_err(|e| e.to_string())?;
fr.read_to_string(&mut file_content)
@@ -76,7 +82,7 @@ impl ParseYaml {
.as_ref()
.to_path_buf()
.extension()
.unwrap_or(OsStr::new(""))
.unwrap_or_else(|| OsStr::new(""))
!= "yml"
{
return io::Result::Ok(String::default());
@@ -126,7 +132,7 @@ impl ParseYaml {
yaml_docs.extend(yaml_contents.unwrap().into_iter().map(|yaml_content| {
let filepath = format!("{}", path.as_ref().to_path_buf().display());
return (filepath, yaml_content);
(filepath, yaml_content)
}));
} else {
let mut entries = fs::read_dir(path)?;
@@ -144,7 +150,12 @@ impl ParseYaml {
// Ignore files whose extension is not yml
let path = entry.path();
if path.extension().unwrap_or(OsStr::new("")) != "yml" {
if path.extension().unwrap_or_else(|| OsStr::new("")) != "yml" {
return io::Result::Ok(ret);
}
// Ignore yml files inside the .git folder.
if path.to_str().unwrap().contains("/.git/") {
return io::Result::Ok(ret);
}
@@ -192,10 +203,10 @@ impl ParseYaml {
let yaml_contents = yaml_contents.unwrap().into_iter().map(|yaml_content| {
let filepath = format!("{}", entry.path().display());
return (filepath, yaml_content);
(filepath, yaml_content)
});
ret.extend(yaml_contents);
return io::Result::Ok(ret);
io::Result::Ok(ret)
})?;
}
@@ -254,11 +265,11 @@ impl ParseYaml {
}
}
return Option::Some((filepath, yaml_doc));
Option::Some((filepath, yaml_doc))
})
.collect();
self.files.extend(files);
return io::Result::Ok(String::default());
io::Result::Ok(String::default())
}
}
@@ -283,7 +294,7 @@ mod tests {
no_use_rule: HashSet::new(),
};
let _ = &yaml.read_dir(
"test_files/rules/yaml/1.yml".to_string(),
"test_files/rules/yaml/1.yml",
&String::default(),
&exclude_ids,
);
@@ -298,11 +309,7 @@ mod tests {
let exclude_ids = RuleExclude {
no_use_rule: HashSet::new(),
};
let _ = &yaml.read_dir(
"test_files/rules/yaml/".to_string(),
&String::default(),
&exclude_ids,
);
let _ = &yaml.read_dir("test_files/rules/yaml/", &String::default(), &exclude_ids);
assert_ne!(yaml.files.len(), 0);
}
@@ -329,7 +336,7 @@ mod tests {
let path = Path::new("test_files/rules/yaml/error.yml");
let ret = yaml.read_file(path.to_path_buf()).unwrap();
let rule = YamlLoader::load_from_str(&ret);
assert_eq!(rule.is_err(), true);
assert!(rule.is_err());
}
#[test]
@@ -337,8 +344,7 @@ mod tests {
fn test_default_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"", &filter::exclude_ids())
.unwrap();
yaml.read_dir(path, "", &filter::exclude_ids()).unwrap();
assert_eq!(yaml.files.len(), 5);
}
@@ -346,7 +352,7 @@ mod tests {
fn test_info_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"informational", &filter::exclude_ids())
yaml.read_dir(path, "informational", &filter::exclude_ids())
.unwrap();
assert_eq!(yaml.files.len(), 5);
}
@@ -354,15 +360,14 @@ mod tests {
fn test_low_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"LOW", &filter::exclude_ids())
.unwrap();
yaml.read_dir(path, "LOW", &filter::exclude_ids()).unwrap();
assert_eq!(yaml.files.len(), 4);
}
#[test]
fn test_medium_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"MEDIUM", &filter::exclude_ids())
yaml.read_dir(path, "MEDIUM", &filter::exclude_ids())
.unwrap();
assert_eq!(yaml.files.len(), 3);
}
@@ -370,15 +375,14 @@ mod tests {
fn test_high_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"HIGH", &filter::exclude_ids())
.unwrap();
yaml.read_dir(path, "HIGH", &filter::exclude_ids()).unwrap();
assert_eq!(yaml.files.len(), 2);
}
#[test]
fn test_critical_level_read_yaml() {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/level_yaml");
yaml.read_dir(path.to_path_buf(), &"CRITICAL", &filter::exclude_ids())
yaml.read_dir(path, "CRITICAL", &filter::exclude_ids())
.unwrap();
assert_eq!(yaml.files.len(), 1);
}
@@ -388,8 +392,7 @@ mod tests {
let mut yaml = yaml::ParseYaml::new();
let path = Path::new("test_files/rules/yaml");
yaml.read_dir(path.to_path_buf(), &"", &filter::exclude_ids())
.unwrap();
yaml.read_dir(path, "", &filter::exclude_ids()).unwrap();
assert_eq!(yaml.ignorerule_count, 10);
}
#[test]
@@ -401,8 +404,7 @@ mod tests {
let exclude_ids = RuleExclude {
no_use_rule: HashSet::new(),
};
yaml.read_dir(path.to_path_buf(), &"", &exclude_ids)
.unwrap();
yaml.read_dir(path, "", &exclude_ids).unwrap();
assert_eq!(yaml.ignorerule_count, 0);
}
#[test]
@@ -412,8 +414,7 @@ mod tests {
let exclude_ids = RuleExclude {
no_use_rule: HashSet::new(),
};
yaml.read_dir(path.to_path_buf(), &"", &exclude_ids)
.unwrap();
yaml.read_dir(path, "", &exclude_ids).unwrap();
assert_eq!(yaml.ignorerule_count, 1);
}
}


@@ -0,0 +1,2 @@
id,next_level
12345678-1234-1234-1234-123456789012,high


@@ -0,0 +1,2 @@
id,new_level
12345678-1234-1234-1234-12,informational # sample level tuning line


@@ -0,0 +1,2 @@
id,new_level
00000000-0000-0000-0000-000000000000,no_exist_level # sample level tuning line


@@ -0,0 +1,3 @@
tag_full_str,tag_output_str
attack.impact,Impact
xxx,yyy


@@ -0,0 +1,8 @@
Users.SubjectUserName
Users.TargetUserName
Users.User
Logon IDs.SubjectLogonId
Logon IDs.TargetLogonId
Workstation Names.WorkstationName
Ip Addresses.IpAddress
Processes.Image