Artifact collection tool for *nix systems
fennec is an artifact collection tool written in Rust, intended for use during incident response on *nix-based systems. fennec allows you to write a configuration file that describes how to collect artifacts.
OS Details | Architecture | Success? | Details |
---|---|---|---|
Ubuntu 20.04.3 LTS | x86_64 | ✅ | |
Ubuntu 19.04 | x86_64 | ✅ | |
Ubuntu 18.04.6 LTS | x86_64 | ✅ | |
Ubuntu 17.04 | x86_64 | ✅ | |
Ubuntu 16.04.7 LTS | x86_64 | ✅ | |
Ubuntu 15.10 | x86_64 | ✅ | |
Ubuntu 14.04.6 LTS | x86_64 | ✅ | |
Ubuntu 13.04 | x86_64 | ✅ | |
Ubuntu 12.04.5 LTS | x86_64 | ✅ | |
CentOS 8.4.2105 | x86_64 | ✅ | |
CentOS 7.9.2009 | x86_64 | ✅ | |
CentOS 6.10 | x86_64 | ✅ | |
CentOS 5.11 | x86_64 | ✅ | osquery requires libc >= 2.12 |
Ubuntu 20.04 | aarch64 | ✅ | |
MacOS Monterey v12.0.1 | x86_64 | ✅ | configuration tuning is required; if you have experience with MacOS artifacts feel free to contribute |
Oracle Linux Server 7.9 | x86_64 | ✅ | |
```
fennec 0.4.1
AbdulRhman Alfaifi <[email protected]>
Artifact collection tool for *nix systems

USAGE:
    fennec [OPTIONS]

OPTIONS:
    -c, --config <FILE>
            Sets a custom config file (Embedded : true)
    -o, --output <FILE>
            Sets output file name [default: ABDULRHMAN-PC.zip]
    -l, --log-level <LEVEL>
            Sets the log level [default: info] [possible values: trace, debug, info, error]
    -f, --log-file <FILE>
            Sets the log file name [default: fennec.log]
    -u, --upload-artifact <CONFIG>...
            Upload configuration string. Supported Protocols:
            * s3 : Upload artifact package to S3 bucket (ex. minio)
                * Format :
                  s3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@(http|https)://<HOSTNAME>:<PORT>/<BUCKET_NAME>:<PATH>
                * Example (minio): s3://minioadmin:minioadmin@http://192.168.100.190:9000/fennec:/
            * aws3 : Upload artifact package to AWS S3 bucket
                * Format : aws3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@<AWS_REGION>.<BUCKET_NAME>:<PATH>
                * Example: aws3://AKIAXXX:[email protected]:/
            * scp : Upload artifact package to a server using SCP protocol
                * Format : scp://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>:<PATH>
                * Example: scp://testusername:[email protected]:22:/dev/shm
    -q, --quiet
            Do not print logs to stdout
    -t, --timeout <SEC>
            Sets osquery queries timeout in seconds [default: 60]
    -h, --help
            Print help information
        --non-root
            Run Fennec with non-root permissions. This isn't recommended, most artifacts require
            root permissions
        --osquery-path <PATH>
            Sets osquery path, if osquery is embedded it will be written to this path otherwise the
            path will be used to spawn osquery instance (Embedded : true) [default: ./osqueryd]
        --output-format <FORMAT>
            Sets output format [default: jsonl] [possible values: jsonl, csv, kjson]
        --show-config
            Show the embedded configuration file
        --show-embedded
            Show the embedded files metadata
    -V, --version
            Print version information
```
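The upload configuration strings are plain URI-like values with fennec-specific trailing fields. The sketch below illustrates how the documented `s3` layout breaks down into its components; it is an illustrative parser written against the format description above, not fennec's actual implementation.

```python
# Illustrative parser for the documented s3 upload string layout:
#   s3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@(http|https)://<HOSTNAME>:<PORT>/<BUCKET_NAME>:<PATH>
# This is a sketch for understanding the format, not fennec's real parser.

def parse_s3_upload_config(config: str) -> dict:
    assert config.startswith("s3://"), "only the s3 scheme is handled here"
    rest = config[len("s3://"):]
    creds, endpoint = rest.split("@", 1)          # "KEY:SECRET" / "http://host:port/bucket:path"
    access_key, secret_key = creds.split(":", 1)
    endpoint, path = endpoint.rsplit(":", 1)      # trailing ":<PATH>" component
    scheme, host_port_bucket = endpoint.split("://", 1)
    host_port, bucket = host_port_bucket.rsplit("/", 1)
    hostname, port = host_port.split(":", 1)
    return {
        "access_key": access_key,
        "secret_key": secret_key,
        "scheme": scheme,
        "hostname": hostname,
        "port": int(port),
        "bucket": bucket,
        "path": path,
    }

# The minio example from the help text:
cfg = parse_s3_upload_config("s3://minioadmin:minioadmin@http://192.168.100.190:9000/fennec:/")
```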
- `-c`, `--config <FILE>`: Use the specified configuration file instead of the embedded configuration
- `-f`, `--log-file <FILE>`: Change the default name for the log file (default: `fennec.log`)
- `-h`, `--help`: Print help message
- `-l`, `--log-level <LEVEL>`: Change the default log level (default: `info`)
- `-o`, `--output <FILE>`: Change the default output file name for the zip file (default: `{HOSTNAME}.zip`, where hostname is the runtime evaluated machine hostname)
- `--osquery-path <PATH>`: Path to the osquery executable. This value will be used based on these conditions:
    - If osquery is embedded in `fennec`, then extract it and dump it to `--osquery-path`
    - If osquery is not embedded in `fennec`, then use the osquery binary in the path `--osquery-path`
- `--output-format <FORMAT>`: Choose the output format. Supported formats: `jsonl`, `csv`, `kjson`
- `-q`, `--quiet`: Do not print logs to stdout
- `--non-root`: Run Fennec with non-root permissions. By default, Fennec requires root permissions and will exit with an error message if not run as root.
- `--show-config`: Print the embedded configuration, then exit
- `--show-embedded`: Show embedded files
- `-t`, `--timeout <SEC>`: Set the timeout in seconds for each osquery query in the `query` artifact type
- `-u`, `--upload-artifact <CONFIG>`: Upload the artifact package to a remote server. Supported protocols:
    - `s3`: Upload artifact package to an S3 bucket
        - Format: `s3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@(http|https)://<HOSTNAME>:<PORT>/<BUCKET_NAME>:<PATH>`
        - Example: `s3://minioadmin:minioadmin@http://192.168.100.190:9000/fennec:/`
    - `aws3`: Upload artifact package to an AWS S3 bucket
        - Format: `aws3://<ACCESS_KEY>:<SECRET_ACCESS_KEY>@<AWS_REGION>.<BUCKET_NAME>:<PATH>`
        - Example: `aws3://AKIAXXXXXXXXXXXXXXXXX:[email protected]:/`
    - `scp`: Upload artifact package to a server using the SCP protocol
        - Format: `scp://<USERNAME>:<PASSWORD>@<HOSTNAME>:<PORT>:<PATH>`
        - Example: `scp://testusername:[email protected]:22:/dev/shm`
- `-V`, `--version`: Print the `fennec` version, then exit

fennec depends on `osquery` to run artifacts with the type `query`. The directory called `deps` contains the files that will be embedded into the binary depending on the target OS and architecture. Before compiling, follow the steps below:

1. Modify the configuration file `deps/<TARGET_OS>/fennec.yaml` as needed
2. Build the binary using one of the commands below:
```
cargo build --release
RUSTFLAGS="-C target-feature=+crt-static" cargo build --release --target x86_64-unknown-linux-gnu
```
You can also use the precompiled binaries in the release section.
The following is an example run on Ubuntu 20 with the same configuration in this repo:
To output data to a Kuiper-supported format, execute Fennec with the following argument:

```
sudo ./fennec --output-format kjson
```

or add the following to the `args` section in the configuration:

```
args:
    - "--output-format"
    - "kjson"
```

recompile, then execute:

```
sudo ./fennec
```

then upload the resulting zip file to Kuiper. The following is an example:
By default the configuration in the path `deps/<TARGET_OS>/fennec.yaml` will be embedded into the executable during compilation. The configuration is in YAML format and has two sections: `args` and `artifacts`.

`args` contains a list of arguments to be passed to the executable as command line arguments. The following is an example of the `args` section that sets the output format to `jsonl` and the log file name to `fennec.log`:
```
args:
    - "--output-format"
    - "jsonl"
    - "--log-file"
    - "fennec.log"
...
```
The command line arguments will be used in the following priorities:
`artifacts` contains a list of artifacts to be collected. Each artifact contains a set of fields such as `name`, `type`, `description`, and type-specific fields (for example `queries`, `paths`, `commands`, and `regex`, the last of which is applied to `stdout` in the case of a `command` artifact).

The `query` artifact type executes osquery SQL queries. The following is an example artifact that retrieves all users on the system:
```
artifacts:
    - name: users
      type: query
      description: "List all local users"
      queries:
        - 'select * from groups join user_groups using (gid) join users using (uid)'
...
```
The `collection` artifact type collects the files/folders specified in the field `paths`. The following is an example of this artifact type that collects system logs:
```
artifacts:
    - name: logs
      type: collection
      description: "Collect system logs"
      paths:
        - '/var/log/**/*'
...
```
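The `paths` entries are glob patterns; `/var/log/**/*` recursively matches everything under `/var/log`. As a sketch of the expansion semantics (using Python's `glob`, which treats `**` the same way when `recursive=True`; the temporary directory layout below is invented purely for illustration):

```python
import glob
import os
import tempfile

# Build a small invented directory tree to demonstrate what a pattern
# like '/var/log/**/*' would match.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "nginx"))
for rel in ("syslog", os.path.join("nginx", "access.log")):
    with open(os.path.join(root, rel), "w") as f:
        f.write("example\n")

# '**' with recursive=True descends into subdirectories, so the nested
# directory, the top-level file, and the nested file are all matched.
matches = sorted(glob.glob(os.path.join(root, "**", "*"), recursive=True))
relative = [os.path.relpath(m, root) for m in matches]
# relative -> ['nginx', 'nginx/access.log', 'syslog']
```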
The `command` artifact type executes system commands using the shell command interpreter in the following priority:
This is an example of this artifact type that retrieves bad logins:
```
artifacts:
    - name: bad_logins
      type: command
      description: "Get failed logins (/var/log/btmp)"
      commands:
        - "lastb --time-format=iso | head -n -1"
      timeout: 30
      regex: '(?P<username>[^ ]+)[ ]+?(?P<tty>[^ ]+)[ ]+?(?P<src_ip>[^ ]+)?[ ]+?(?P<login_time>[^ ]+) - (?P<logout_time>[^ ]+)[ ]+?(\()?(?P<duration>[^ ]+)(\))'
```
This artifact type will execute the commands in the list `commands` and parse the `stdout` using the regular expression specified in the field `regex`. Note that the regex will only be processed on the `stdout` stream and not `stderr`. Also, the field `regex` is optional. Here is an example of the results both with and without the `regex` field:

Without the `regex` field:

```
{
    "line": 0,
    "stdout": "root pts/1 2023-09-12T17:13:28+03:00 - 2023-09-12T17:13:28+03:00 (00:00)"
}
```

With the `regex` field:

```
{
    "username": "root",
    "tty": "pts/1",
    "src_ip": null,
    "login_time": "2023-09-12 14:13:28",
    "logout_time": "2023-09-12T17:13:28+03:00",
    "duration": "00:00",
    "@timestamp": "2023-09-12 14:13:28"
}
```
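The named-capture regex above can be exercised outside fennec: Python's `re` module shares the `(?P<name>…)` group syntax with the Rust regex crate. A quick sketch (the sample `lastb` line is invented to mirror the output shown above, with the multiple spaces real `lastb` output contains):

```python
import re

# The regex from the bad_logins artifact above. Python and the Rust regex
# crate share the (?P<name>...) named-group syntax.
BAD_LOGIN_RE = re.compile(
    r'(?P<username>[^ ]+)[ ]+?(?P<tty>[^ ]+)[ ]+?(?P<src_ip>[^ ]+)?[ ]+?'
    r'(?P<login_time>[^ ]+) - (?P<logout_time>[^ ]+)[ ]+?'
    r'(\()?(?P<duration>[^ ]+)(\))'
)

# An invented line shaped like `lastb --time-format=iso` output
# (no source IP recorded, so src_ip should come back unmatched).
line = ("root     pts/1        "
        "2023-09-12T17:13:28+03:00 - 2023-09-12T17:13:28+03:00  (00:00)")

m = BAD_LOGIN_RE.search(line)
record = m.groupdict()
# record["src_ip"] is None because the optional group did not participate
```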
The `parse` artifact type provides the ability to parse text files using regex and return the data in a structured format. The example below parses nginx access logs and returns the results in a structured format:
```
artifacts:
    - name: nginx_access
      type: parse
      description: "Nginx access logs"
      paths:
        - /var/log/nginx/access.*
      regex: '(?P<c_ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - (?P<remote_user>[^ ]+) \[(?P<time>[0-9]{2}/[a-zA-Z]{3}/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} \+[0-9]{4})\] "(?P<method>[A-Z]+)?[ ]?(?P<uri>.*?)[ ]?(HTTP/(?P<http_prot>[0-9\.]+))?" (?P<status_code>[0-9]{3}) (?P<body_bytes_sent>[0-9]+) "(?P<referer>.*?)" "(?P<user_agent>.*?)"'
```
This configuration will read the files in the path `/var/log/nginx/access.*` line by line and run the regex to extract fields. This artifact type also checks whether a file is in `gzip` format (commonly used to compress rotated logs to save space) and, if so, decompresses it before parsing. The regex should use named capture groups as documented in the Rust regex library. The following is an example nginx access record before and after parsing:
```
192.168.133.70 - - [23/Jan/2022:19:14:37 +0000] "GET /blog/ HTTP/1.1" 200 2497 "https://u0041.co/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
```

parsed record:

```
{
    "c_ip": "192.168.133.70",
    "remote_user": "-",
    "time": "23/Jan/2022:19:14:37 +0000",
    "method": "GET",
    "uri": "/blog/",
    "http_prot": "1.1",
    "status_code": "200",
    "body_bytes_sent": "2497",
    "referer": "https://u0041.co/",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0",
    "full_path": "/var/log/nginx/access.log.9.gz"
}
```
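Both behaviors described here, the named-capture parsing and the transparent handling of gzip-compressed rotated logs, can be sketched outside fennec. The snippet below (Python; the magic-byte check mirrors the idea, not fennec's exact code) writes the sample record as a gzip "rotated" log, then reads it back and parses it:

```python
import gzip
import re
import tempfile

# The nginx access-log regex from the artifact above (Python's re shares
# the (?P<name>...) named-group syntax with the Rust regex crate).
NGINX_RE = re.compile(
    r'(?P<c_ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - (?P<remote_user>[^ ]+) '
    r'\[(?P<time>[0-9]{2}/[a-zA-Z]{3}/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} \+[0-9]{4})\] '
    r'"(?P<method>[A-Z]+)?[ ]?(?P<uri>.*?)[ ]?(HTTP/(?P<http_prot>[0-9\.]+))?" '
    r'(?P<status_code>[0-9]{3}) (?P<body_bytes_sent>[0-9]+) '
    r'"(?P<referer>.*?)" "(?P<user_agent>.*?)"'
)

SAMPLE = ('192.168.133.70 - - [23/Jan/2022:19:14:37 +0000] "GET /blog/ HTTP/1.1" '
          '200 2497 "https://u0041.co/" '
          '"Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"')

def open_log(path):
    """Open a log transparently: rotated logs are often gzip-compressed,
    recognizable by the 0x1f 0x8b magic bytes at the start of the file."""
    with open(path, "rb") as f:
        magic = f.read(2)
    opener = gzip.open if magic == b"\x1f\x8b" else open
    return opener(path, "rt")

# Write the sample record as a gzip "rotated" log, then parse it back.
with tempfile.NamedTemporaryFile(suffix=".gz", delete=False) as tmp:
    path = tmp.name
with gzip.open(path, "wt") as f:
    f.write(SAMPLE + "\n")

records = []
with open_log(path) as f:
    for line in f:
        m = NGINX_RE.search(line)
        if m:
            rec = m.groupdict()
            rec["full_path"] = path  # fennec records the source file path too
            records.append(rec)
```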
The optional field `maps` can be used to rename result fields and to run post-processing functions, called modifiers, on field values. The example below shows the results of parsing an nginx access record without `maps`:
```
artifacts:
    - name: nginx_access
      type: parse
      description: "Nginx access logs"
      paths:
        - /var/log/nginx/access.*
      regex: '(?P<c_ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - (?P<remote_user>[^ ]+) \[(?P<time>[0-9]{2}/[a-zA-Z]{3}/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} \+[0-9]{4})\] "(?P<method>[A-Z]+)?[ ]?(?P<uri>.*?)[ ]?(HTTP/(?P<http_prot>[0-9\.]+))?" (?P<status_code>[0-9]{3}) (?P<body_bytes_sent>[0-9]+) "(?P<referer>.*?)" "(?P<user_agent>.*?)"'
```
```
192.168.133.70 - - [23/Jan/2022:19:14:37 +0000] "GET /blog/ HTTP/1.1" 200 2497 "https://u0041.co/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"
```

```
{
    "c_ip": "192.168.133.70",
    "remote_user": "-",
    "time": "23/Jan/2022:19:14:37 +0000",
    "method": "GET",
    "uri": "/blog/",
    "http_prot": "1.1",
    "status_code": "200",
    "body_bytes_sent": "2497",
    "referer": "https://u0041.co/",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0",
    "full_path": "/var/log/nginx/access.log.9.gz"
}
```
To change the field name `time` to `@timestamp`, we add the following `maps` configuration to the artifact configuration:
```
artifacts:
    - name: nginx_access
      type: parse
      description: "Nginx access logs"
      paths:
        - /var/log/nginx/access.*
      regex: '(?P<c_ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - (?P<remote_user>[^ ]+) \[(?P<time>[0-9]{2}/[a-zA-Z]{3}/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} \+[0-9]{4})\] "(?P<method>[A-Z]+)?[ ]?(?P<uri>.*?)[ ]?(HTTP/(?P<http_prot>[0-9\.]+))?" (?P<status_code>[0-9]{3}) (?P<body_bytes_sent>[0-9]+) "(?P<referer>.*?)" "(?P<user_agent>.*?)"'
      maps:
        - from: time       # change the field with this name
          to: '@timestamp' # to this name
```
After running the collection tool with this configuration on the same nginx access log, we get the following output:
```
{
    "c_ip": "192.168.133.70",
    "remote_user": "-",
    "@timestamp": "23/Jan/2022:19:14:37 +0000",
    "method": "GET",
    "uri": "/blog/",
    "http_prot": "1.1",
    "status_code": "200",
    "body_bytes_sent": "2497",
    "referer": "https://u0041.co/",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0",
    "full_path": "/var/log/nginx/access.log.9.gz"
}
```
Modifiers provide post-processing on a field value in the artifact results, for example reformatting date and time. Continuing the example above, we can change the date and time format in the field `@timestamp` to the format `%Y-%m-%d %H:%M:%S` by adding the following to the artifact configuration:
```
artifacts:
    - name: nginx_access
      type: parse
      description: "Nginx access logs"
      paths:
        - /var/log/nginx/access.*
      regex: '(?P<c_ip>[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) - (?P<remote_user>[^ ]+) \[(?P<time>[0-9]{2}/[a-zA-Z]{3}/[0-9]{4}:[0-9]{2}:[0-9]{2}:[0-9]{2} \+[0-9]{4})\] "(?P<method>[A-Z]+)?[ ]?(?P<uri>.*?)[ ]?(HTTP/(?P<http_prot>[0-9\.]+))?" (?P<status_code>[0-9]{3}) (?P<body_bytes_sent>[0-9]+) "(?P<referer>.*?)" "(?P<user_agent>.*?)"'
      maps:
        - from: time
          to: "@timestamp"
          modifier:
            name: datetime_to_iso
            parameters:
              input_time_format: '%d/%b/%Y:%H:%M:%S %z'
              output_time_format: '%Y-%m-%d %H:%M:%S'
```
The resulting record will look like this:
```
{
    "c_ip": "192.168.133.70",
    "remote_user": "-",
    "@timestamp": "2022-01-23 19:14:37",
    "method": "GET",
    "uri": "/blog/",
    "http_prot": "1.1",
    "status_code": "200",
    "body_bytes_sent": "2497",
    "referer": "https://u0041.co/",
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0",
    "full_path": "/var/log/nginx/access.log.9.gz"
}
```
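The `datetime_to_iso` conversion above can be reproduced directly: the format strings use strftime-style directives (chrono's in Rust; Python's `datetime` accepts the same directives used here). A sketch of the conversion, using the timestamp from the sample record:

```python
from datetime import datetime

# The modifier's parameters from the configuration above.
input_time_format = "%d/%b/%Y:%H:%M:%S %z"
output_time_format = "%Y-%m-%d %H:%M:%S"

raw = "23/Jan/2022:19:14:37 +0000"

# Parse with the input format (including the +0000 UTC offset),
# then re-render in the output format.
converted = datetime.strptime(raw, input_time_format).strftime(output_time_format)
# converted == "2022-01-23 19:14:37", matching @timestamp in the record above
```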
The available modifiers are:
Name | Details | input_time_format | output_time_format |
---|---|---|---|
`epoch_to_iso` | Converts an epoch timestamp to a custom date and time format | N/A | specify the output date and time format; default is `%Y-%m-%d %H:%M:%S` |
`datetime_to_iso` | Reformats date and time from the format `input_time_format` to the format `output_time_format` | specify the input date and time format | specify the output date and time format; default is `%Y-%m-%d %H:%M:%S` |
`time_without_year_to_iso` | Formats date and time that lack year data from the format `input_time_format` to the format `output_time_format` | specify the input date and time format | specify the output date and time format; default is `%Y-%m-%d %H:%M:%S` |
`to_int` | Converts string data (as produced by the `command` & `parse` artifact types) to integers (`i64`, i.e. signed 64-bit integers). This is useful for fields like file size, so we can run checks like `size < 1024` in the data platform of our choice | N/A | N/A |
The `time_without_year_to_iso` modifier assumes the logs cover ONLY one year; use this modifier with caution.
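To illustrate that caveat: formats like syslog's `%b %d %H:%M:%S` carry no year at all, so the modifier has to inject one. A sketch of that behavior (Python; the assumed year 2022 and the sample line are invented for illustration):

```python
from datetime import datetime

# Syslog-style timestamps omit the year entirely.
input_time_format = "%b %d %H:%M:%S"
output_time_format = "%Y-%m-%d %H:%M:%S"

raw = "Jan 23 19:14:37"

# strptime defaults the missing year to 1900; a time_without_year_to_iso-style
# modifier must substitute a single assumed year for the whole log file --
# which is exactly why logs spanning a year boundary end up misdated.
assumed_year = 2022
parsed = datetime.strptime(raw, input_time_format).replace(year=assumed_year)
iso = parsed.strftime(output_time_format)
# iso == "2022-01-23 19:14:37"
```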