...
The Devo Wiz collector retrieves Wiz cloud security issues into Devo, where Enterprise IT and Cybersecurity teams can query, correlate, analyze, and visualize them to make impactful decisions at petabyte scale. The collector processes the Wiz API responses and sends them to the Devo platform, which categorizes all received data into tables of rows and columns in your Devo domain.
Data sources

| Data source | Description | API endpoint | Collector service name | Devo table | Available from release |
|---|---|---|---|---|---|
| Issues | An issue in Wiz is a vulnerability that is detected in the cloud infrastructure. | | | | |
| Vulnerability | Vulnerabilities are weaknesses in computer systems that can be exploited by malicious attackers. Whether they are caused by bugs or design flaws, vulnerabilities can allow attackers to execute code in an environment or elevate privileges. | | | | |
| Audit Logs | The Audit Log records key events in Wiz, such as login, logout, and user update. The Audit Log is primarily used to investigate potentially suspicious activity or to diagnose and troubleshoot errors. | | | | |
| Cloud Configuration Findings | Returns the problems found in cloud configurations along with the remediation steps for each. | | | | |
| Custom Service | Provides an option to add a custom GraphQL query in the configuration and ingest its data. Users can provide an override tag in the configuration if the parser is deployed for their custom query or if they want a different table in my.app. | | | | |

Devo collector features

| Feature | Details |
|---|---|
| Allow parallel downloading (`multipod`) | Not allowed |
| Running environments | Collector Server, On-premise |
| Populated Devo events | Table |
| Flattening preprocessing | Yes |
Flattening preprocessing
To improve data exploitation and enrichment, this collector applies some flattening to the collected data before delivering it to Devo:
| Data source | Collector service | Optional | Flattening details |
|---|---|---|---|
| Issues | | | |
| Vulnerabilities | | | |
| Audit Logs | | | |
| Cloud Configuration Findings | | | |
| Custom Service | | | N/A |
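To illustrate the effect of flattening (a sketch of the general technique, not the collector's actual implementation), nested fields in a collected event can be collapsed into dotted top-level keys:

```python
def flatten(event, parent_key="", sep="."):
    """Collapse nested dicts into a single level using dotted keys.

    Illustrative sketch only; the collector's real flattening rules
    (list handling, separator choice) may differ.
    """
    flat = {}
    for key, value in event.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

# Example: a simplified, hypothetical Wiz-like issue record
issue = {"id": "abc", "severity": "HIGH",
         "entity": {"name": "vm-1", "cloud": {"provider": "AWS"}}}
print(flatten(issue))
# {'id': 'abc', 'severity': 'HIGH', 'entity.name': 'vm-1', 'entity.cloud.provider': 'AWS'}
```

Flattened keys like `entity.cloud.provider` then map directly to columns in the Devo table.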
How to enable the collection in the vendor
...
| Setting | Details |
|---|---|
| Base URL | By default, the base URL is `https://api.us1.app.wiz.io`. Other regional base URLs are available. |
| Client ID | User Client ID to authenticate to the service. |
| Client Secret | User Secret Key to authenticate to the service. |
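The Client ID and Secret Key are used to obtain an OAuth2 access token via the client-credentials flow. The sketch below only builds such a token request; the token URL and `audience` value shown are assumptions for illustration, so confirm them against your Wiz tenant documentation:

```python
def build_token_request(client_id, client_secret,
                        token_url="https://auth.app.wiz.io/oauth/token"):
    """Build an OAuth2 client-credentials token request for the Wiz API.

    ASSUMPTION: the default token_url and the 'wiz-api' audience are
    illustrative, not taken from this document; check your tenant.
    """
    return {
        "url": token_url,
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "audience": "wiz-api",
        },
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
    }

req = build_token_request("<client_id>", "<client_secret>")
# POST req["url"] with req["data"], then read access_token from the JSON response.
```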
...
We use a piece of software called Collector Server to host and manage all our available collectors. To enable the collector for a customer:
Editing the JSON configuration
Please replace the placeholders with real-world values following the description table below:

| Parameter | Data type | Type | Value range / Format | Details |
|---|---|---|---|---|
| `debug_status` | `bool` | Mandatory | `false` / `true` | If the value is `true`, the debug logging traces will be enabled when running the collector. If the value is `false`, only the `info`, `warning` and `error` logging levels will be printed. |
| `short_unique_id` | `int` | Mandatory | Minimum length: 1 | Use this param to give a unique ID to this input service. |
| `input_status` | `bool` | Mandatory | `false` / `true` | Use this param to enable or disable the given input logic when running the collector. If the value is `true`, the input will be run; if `false`, it will be ignored. |
| `base_url` | `str` | Optional | Valid URL following this regex: `^https:\/\/([a-z0-9]+[.]{1})([a-z0-9]+[.]{1})*[a-z]{2,}(:[0-9]{2,5})?$` | By default, the base URL is `https://api.us1.app.wiz.io`. This parameter should be removed if it is not used. |
| `historic_date_utc` | `str` | Optional | UTC with format: `YYYY-mm-ddTHH:MM:SS.sssZ` | Sets a custom date as the beginning of the period to download, allowing historical data (for example, one month back) to be downloaded before new events. If this setting is not set, the default value is the current time. This parameter should be removed if it is not used. |
| `client_id` | `str` | Mandatory | Any | User Client ID to authenticate to the service. |
| `client_secret` | `str` | Mandatory | Any | User Secret Key to authenticate to the service. |
| `request_period_in_seconds` | `int` | Optional | Minimum value: 1 | Period in seconds between each data pull; this value overwrites the default of 60 seconds. This parameter should be removed if it is not used. |
| `type_list` | `list` | Optional | Possible values: `"TOXIC_COMBINATION"`, `"THREAT_DETECTION"`, `"CLOUD_CONFIGURATION"` | Filter by issue type. You can specify multiple values in an array, e.g. `type: [THREAT_DETECTION]` or `type: [TOXIC_COMBINATION, THREAT_DETECTION]`. This parameter should be removed if it is not used. |
This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created for being used when running the collector:
```
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/
            └── config-<product_name>.yaml
```
Note |
---|
Replace |
Devo credentials
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/
. Learn more about security credentials in Devo here.
Replace <product_name>
with the proper value.
Editing the config.yaml file
```yaml
globals:
  debug: <debug_status>
  id: <collector_id>
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
  multiprocessing: false
  queue_max_size_in_mb: 1024
  queue_max_size_in_messages: 1000
  queue_max_elapsed_time_in_sec: 60
  queue_wrap_max_size_in_messages: 100
outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  wiz_data_puller:
    id: <short_unique_id>
    enabled: <input_status>
    override_api_base_url: <base_url>
    credentials:
      client_id: <client_id>
      client_secret: <client_secret>
    services:
      issues:
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <historic_date_utc>
        filters:
          type: <type_list>
```
Info |
---|
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the |
Replace the placeholders with your required values following the description table below:

| Parameter | Data type | Type | Value range | Details |
|---|---|---|---|---|
| `debug_status` | `bool` | Mandatory | `false` / `true` | If the value is `true`, the debug logging traces will be enabled when running the collector. If the value is `false`, only the `info`, `warning` and `error` logging levels will be printed. |
| `collector_id` | `int` | Mandatory | Minimum length: 1, Maximum length: 5 | Use this param to give a unique ID to this collector. |
| `collector_name` | `str` | Mandatory | Minimum length: 1, Maximum length: 10 | Use this param to give a valid name to this collector. |
| `devo_address` | `str` | Mandatory | `collector-us.devo.io` / `collector-eu.devo.io` | Use this param to identify the Devo Cloud where the events will be sent. |
| `chain_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the chain CA file downloaded from your Devo domain. Usually this file's name is `chain.crt`. |
| `cert_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the `file.cert` downloaded from your Devo domain. |
| `key_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the `file.key` downloaded from your Devo domain. |
| `short_unique_id` | `int` | Mandatory | Minimum length: 1, Maximum length: 5 | Use this param to give a unique ID to this input service. Note: this parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| `input_status` | `bool` | Mandatory | `false` / `true` | Use this param to enable or disable the given input logic when running the collector. If the value is `true`, the input will be run; if `false`, it will be ignored. |
| `requests_per_seconds` | `int` | Optional | Minimum value: 1 | Customize the maximum number of API requests per second. If not used, the default setting will be used: 100000 requests/sec. This parameter can be left blank, removed or commented. |
| `base_url` | `str` | Optional | Valid URL following this regex: `^https:\/\/([a-z0-9]+[.]{1})([a-z0-9]+[.]{1})*[a-z]{2,}(:[0-9]{2,5})?$` | By default, the base URL is `https://api.us1.app.wiz.io`. This parameter allows you to customize the base URL. It can be left blank, removed or commented. |
| `historic_date_utc` | `str` | Optional | UTC with format: `YYYY-mm-ddTHH:MM:SS.sssZ` | Sets a custom date as the beginning of the period to download, allowing historical data (for example, one month back) to be downloaded before new events. If this setting is not set, the default value is the current time. Note: updating this value triggers the clearing of the collector's persistence, which cannot be recovered in any way; resetting persistence could result in duplicate or lost events. This parameter can be removed or commented. |
| `client_id` | `str` | Mandatory | Any | User Client ID to authenticate to the service. |
| `client_secret` | `str` | Mandatory | Any | User Secret Key to authenticate to the service. |
| `request_period_in_seconds` | `int` | Optional | Minimum value: 1 | Period in seconds between each data pull; this value overwrites the default of 60 seconds. This parameter can be removed or commented. |
| `type_list` | `list` | Optional | Possible values: `"TOXIC_COMBINATION"`, `"THREAT_DETECTION"`, `"CLOUD_CONFIGURATION"` | Filter by issue type. You can specify multiple values in an array, e.g. `type: [THREAT_DETECTION]` or `type: [TOXIC_COMBINATION, THREAT_DETECTION]`. This parameter should be removed if it is not used. |
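For convenience, the `historic_date_utc` format (`YYYY-mm-ddTHH:MM:SS.sssZ`) can be generated in Python; the date below is only an example:

```python
from datetime import datetime, timezone

def to_historic_date_utc(dt):
    """Format a datetime as YYYY-mm-ddTHH:MM:SS.sssZ, i.e. the
    historic_date_utc format (milliseconds, UTC 'Z' suffix)."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

# Example: start ingestion from January 1st, 2022 UTC.
print(to_historic_date_utc(datetime(2022, 1, 1, tzinfo=timezone.utc)))
# 2022-01-01T00:00:00.000Z
```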
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
| Collector Docker image | SHA-256 hash |
|---|---|
| | `b9e82a00676ade05561e403f5ccaa7561b66dd384c74de76d29680c93a3262ce` |
Use the following command to add the Docker image to the system:
```shell
gunzip -c <image_file>-<version>.tgz | docker load
```
Note |
---|
Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace |
The Docker image can be deployed on the following services:
Docker
Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/
```shell
docker run \
  --name collector-<product_name> \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --env CONFIG_FILE=config.yaml \
  --rm \
  --interactive \
  --tty \
  <image_name>:<version>
```
Note |
---|
Replace |
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/
directory.
```yaml
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}
```
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/
directory:
```shell
IMAGE_VERSION=<version> docker-compose up -d
```
Note |
---|
Replace |
Collector service details
Expand | ||
---|---|---|
| ||
All events of this service are ingested into the table |
Expand | ||
---|---|---|
| ||
Issue service is based on the following GraphQL command:
|
Expand | ||||||||
---|---|---|---|---|---|---|---|---|
| ||||||||
Once the collector has been launched, it is important to check if the ingestion is being performed properly. To do so, go to the collector’s logs console. This service has the following components:

| Component | Description |
|---|---|
| Setup | The setup module is in charge of authenticating the service and managing token expiration when needed. |
| Puller | The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK. |
```
INFO InputProcess::WizDataPullerSetup(wiz_collector,wiz_data_puller#111,issues#predefined) -> Puller Setup Started
INFO InputProcess::WizDataPullerSetup(wiz_collector,wiz_data_puller#111,issues#predefined) -> successfully generated new access token
INFO InputProcess::WizDataPullerSetup(wiz_collector,wiz_data_puller#111,issues#predefined) -> The credentials provided in the configuration have required permissions to request issues from Wiz server
INFO InputProcess::WizDataPullerSetup(wiz_collector,wiz_data_puller#111,issues#predefined) -> Puller Setup Terminated
INFO InputProcess::WizDataPullerSetup(wiz_collector,wiz_data_puller#111,issues#predefined) -> Setup for module <WizDataPuller> has been successfully executed
```
Puller output
A successful initial run has the following output messages for the puller module:
Info |
---|
Note that the |
```
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> PrePull Started.
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> User has specified 2022-01-01 00:00:00 as the datetime. Historical polling will consider this datetime for creating the default values.
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> No saved state found, initializing with state: {'historic_date_utc': datetime.datetime(2022, 1, 1, 0, 0), 'last_polled_timestamp': datetime.datetime(2022, 1, 1, 0, 0), 'ids_with_same_timestamp': [], 'buffer_timestamp_with_duplication_risk': datetime.datetime(1970, 1, 1, 0, 0), 'buffer_ids_with_duplication_risk': []}
WARNING InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Saved state loaded: {'historic_date_utc': datetime.datetime(2022, 1, 1, 0, 0), 'last_polled_timestamp': datetime.datetime(2022, 1, 1, 0, 0), 'ids_with_same_timestamp': [], 'buffer_timestamp_with_duplication_risk': datetime.datetime(1970, 1, 1, 0, 0), 'buffer_ids_with_duplication_risk': []}
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> PrePull Terminated
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Starting data collection every 60 seconds
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Pull Started
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Fetching for issues from 2022-01-01T00:00:00
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Requesting Wiz API for issues
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> successfully retried issues from Wiz
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Total number of issues in this poll: 45
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Removing the duplicate issues if present
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Flatten data is set to True. Flattening the data and adding 'devo_pulling_id' to events
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Delivering issues to the SDK
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> 20 issues delivered
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> State has been updated during pagination: {'historic_date_utc': datetime.datetime(2022, 1, 1, 0, 0), 'last_polled_timestamp': datetime.datetime(2022, 1, 1, 0, 0), 'ids_with_same_timestamp': [], 'buffer_timestamp_with_duplication_risk': datetime.datetime(2022, 5, 12, 19, 13, 20, 193191), 'buffer_ids_with_duplication_risk': ['09992ee4-1450-44fa-951c-d5fc4815473a']}.
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> (Partial) Statistics for this pull cycle (@devo_pulling_id=1656602793.044179) so far: Number of requests made: 1; Number of events received: 45; Number of duplicated events filtered out: 0; Number of events generated and sent: 20.
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Requesting Wiz API for issues
INFO OutputProcess::SyslogSender(standard_senders,syslog_sender_0) -> syslog_sender_0 -> Created sender: {"client_name": "collector-4ac42f93cffaa59c-9dc9f67c9-cgm84", "url": "sidecar-service-default.integrations-factory-collectors:601", "object_id": "140446617222352"}
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> successfully retried issues from Wiz
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Removing the duplicate issues if present
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Flatten data is set to True. Flattening the data and adding 'devo_pulling_id' to events
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> Delivering issues to the SDK
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> 20 issues delivered
INFO InputProcess::WizDataPuller(wiz_data_puller,00011,issues,predefined) -> State has been updated during pagination
```

```
"request_period_in_seconds": <request_period_in_seconds>,
    "historic_date_utc": <historic_date_utc>
},
"auditLogs": {
    "request_period_in_seconds": <request_period_in_seconds>,
    "historic_date_utc": <historic_date_utc>
},
"cloudConfiguration": {
    "request_period_in_seconds": <request_period_in_seconds>,
    "historic_date_utc": <historic_date_utc>
},
"custom_query": {
    "types": ["custom_graphql_query"],
    "request_period_in_seconds": "<request_period_in_seconds>",
    "historic_date_utc": "<start_date_utc>",
    "graphql_query": "<graphql_query>",
    "filter_by": "<filterBy_value_as_dict>",
    "filter_by_time_key": "<filter_by_time_key>",
    "response_time_key": "<response_time_key>",
    "override_devo_tag": "<override_devo_tag>"
} } } } }
```
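The persisted state shown in these logs (a last polled timestamp plus a buffer of IDs with duplication risk) is a common pattern for deduplicating time-window polling. A minimal sketch of the idea, illustrative only and not the collector's actual code:

```python
def filter_new_events(events, last_ts, seen_ids):
    """Keep only events newer than the saved state.

    events: list of dicts with 'id' and 'ts' (comparable timestamps).
    last_ts: timestamp of the last polled event, from persisted state.
    seen_ids: IDs already ingested at exactly last_ts (the
    duplication-risk buffer seen in the logs above).
    Returns the new events plus the updated (last_ts, seen_ids) state.
    """
    fresh = [e for e in events
             if e["ts"] > last_ts
             or (e["ts"] == last_ts and e["id"] not in seen_ids)]
    if fresh:
        new_last = max(e["ts"] for e in fresh)
        new_seen = sorted(e["id"] for e in fresh if e["ts"] == new_last)
    else:
        new_last, new_seen = last_ts, seen_ids
    return fresh, new_last, new_seen

events = [{"id": "a", "ts": 10}, {"id": "b", "ts": 20}, {"id": "c", "ts": 20}]
fresh, last, seen = filter_new_events(events, 10, ["a"])
print([e["id"] for e in fresh], last, seen)
# ['b', 'c'] 20 ['b', 'c']
```

Event "a" is dropped because it shares the saved timestamp and was already ingested; the buffer of IDs at the newest timestamp becomes the next cycle's deduplication state.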
Info |
---|
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the |
Please replace the placeholders with real-world values following the description table below:

| Parameter | Data type | Type | Value range / Format | Details |
|---|---|---|---|---|
| `debug_status` | `bool` | Mandatory | `false` / `true` | If the value is `true`, the debug logging traces will be enabled when running the collector. If the value is `false`, only the `info`, `warning` and `error` logging levels will be printed. |
| `short_unique_id` | `int` | Mandatory | Minimum length: 1 | Use this param to give a unique ID to this input service. |
| `input_status` | `bool` | Mandatory | `false` / `true` | Use this param to enable or disable the given input logic when running the collector. If the value is `true`, the input will be run; if `false`, it will be ignored. |
| `base_url` | `str` | Optional | Valid URL following this regex: `^https:\/\/([a-z0-9]+[.]{1})([a-z0-9]+[.]{1})*[a-z]{2,}(:[0-9]{2,5})?$` | By default, the base URL is `https://api.us1.app.wiz.io`. This parameter should be removed if it is not used. |
| `historic_date_utc` | `str` | Optional | UTC with format: `YYYY-mm-ddTHH:MM:SS.sssZ` | Sets a custom date as the beginning of the period to download, allowing historical data (for example, one month back) to be downloaded before new events. If this setting is not set, the default value is the current time. This parameter should be removed if it is not used. |
| `client_id` | `str` | Mandatory | Any | User Client ID to authenticate to the service. |
| `client_secret` | `str` | Mandatory | Any | User Secret Key to authenticate to the service. |
| `request_period_in_seconds` | `int` | Optional | Minimum value: 1 | Period in seconds between each data pull; this value overwrites the default of 60 seconds. This parameter should be removed if it is not used. |
| `type_list` | `list` | Optional | Possible values: `"TOXIC_COMBINATION"`, `"THREAT_DETECTION"`, `"CLOUD_CONFIGURATION"` | Filter by issue type. You can specify multiple values in an array, e.g. `type: [THREAT_DETECTION]` or `type: [TOXIC_COMBINATION, THREAT_DETECTION]`. This parameter should be removed if it is not used. |
| `override_devo_tag` | `str` | Optional | Devo tag | Use this to override the Devo tag. |
| `graphql_query` | `str` | | Valid GraphQL query format | Used in the custom service to define the query whose results are ingested. |
| `filter_by` | | | Valid JSON | Filters can be added under the `filter_by` parameter as a dict. |
| `filter_by_time_key` | `str` | | Minimum length: 1 | You need to specify the datetime parameter that the GraphQL query allows filtering on. |
| `response_time_key` | `str` | | Minimum length: 1 | You need to specify the datetime parameter in the GraphQL query response that the `filter_by_time_key` was applied on. |
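To make the relationship between `filter_by`, `filter_by_time_key`, and `response_time_key` concrete, here is a hedged sketch; the field names (`createdAt`, `node`) and structures are hypothetical illustrations, not taken from the Wiz schema:

```python
def build_filter(filter_by, filter_by_time_key, start_time):
    """Merge the user's static filter_by dict with the time filter an
    incremental poller would add. Hypothetical shape for illustration."""
    merged = dict(filter_by)
    merged[filter_by_time_key] = {"after": start_time}
    return merged

def extract_event_time(event, response_time_key):
    """Read the timestamp used to advance polling state, supporting
    dotted paths into nested response objects."""
    value = event
    for part in response_time_key.split("."):
        value = value[part]
    return value

filters = build_filter({"severity": ["HIGH"]}, "createdAt",
                       "2022-01-01T00:00:00.000Z")
print(filters)
# {'severity': ['HIGH'], 'createdAt': {'after': '2022-01-01T00:00:00.000Z'}}
print(extract_event_time({"node": {"createdAt": "2022-02-03T10:00:00.000Z"}},
                         "node.createdAt"))
# 2022-02-03T10:00:00.000Z
```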
This data collector can be run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created for being used when running the collector:
```
<any_directory>
└── devo-collectors/
    └── <product_name>/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/
            └── config-<product_name>.yaml
```
Note |
---|
Replace |
Devo credentials
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key and Chain CA and save them in <product_name>/certs/
. Learn more about security credentials in Devo here.
Replace <product_name>
with the proper value.
Editing the config.yaml file
```yaml
globals:
  debug: <debug_status>
  id: <collector_id>
  name: <collector_name>
  persistence:
    type: filesystem
    config:
      directory_name: state
  multiprocessing: false
  queue_max_size_in_mb: 1024
  queue_max_size_in_messages: 1000
  queue_max_elapsed_time_in_sec: 60
  queue_wrap_max_size_in_messages: 100
outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <cert_filename>
      key: <key_filename>
inputs:
  wiz_data_puller:
    id: <short_unique_id>
    enabled: <input_status>
    override_api_base_url: <base_url>
    credentials:
      client_id: <client_id>
      client_secret: <client_secret>
    services:
      issues:
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <historic_date_utc>
        filters:
          type: <type_list>
        override_devo_tag: <override_tag_value>
      vulnerabilities:
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <historic_date_utc>
        override_devo_tag: <override_tag_value>
      auditLogs:
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <historic_date_utc>
        override_devo_tag: <override_tag_value>
      cloudConfiguration:
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <historic_date_utc>
        override_devo_tag: <override_tag_value>
      custom_query:
        types:
          - custom_graphql_query
        request_period_in_seconds: <request_period_in_seconds>
        historic_date_utc: <start_date_in_utc>
        graphql_query: <graphql_query>
        filter_by: <filterBy_as_dict>
        filter_by_time_key: <filter_by_time_key_value>
        response_time_key: <response_time_key_value>
        override_devo_tag: <override_devo_tag>
```
Info |
---|
All defined service entities will be executed by the collector. If you do not want to run any of them, just remove the entity from the |
Replace the placeholders with your required values following the description table below:

| Parameter | Data type | Type | Value range | Details |
|---|---|---|---|---|
| `debug_status` | `bool` | Mandatory | `false` / `true` | If the value is `true`, the debug logging traces will be enabled when running the collector. If the value is `false`, only the `info`, `warning` and `error` logging levels will be printed. |
| `collector_id` | `int` | Mandatory | Minimum length: 1, Maximum length: 5 | Use this param to give a unique ID to this collector. |
| `collector_name` | `str` | Mandatory | Minimum length: 1, Maximum length: 10 | Use this param to give a valid name to this collector. |
| `devo_address` | `str` | Mandatory | `collector-us.devo.io` / `collector-eu.devo.io` | Use this param to identify the Devo Cloud where the events will be sent. |
| `chain_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the chain CA file downloaded from your Devo domain. Usually this file's name is `chain.crt`. |
| `cert_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the `file.cert` downloaded from your Devo domain. |
| `key_filename` | `str` | Mandatory | Minimum length: 4, Maximum length: 20 | Use this param to identify the `file.key` downloaded from your Devo domain. |
| `short_unique_id` | `int` | Mandatory | Minimum length: 1, Maximum length: 5 | Use this param to give a unique ID to this input service. Note: this parameter is used to build the persistence address; do not use the same value for multiple collectors, as it could cause a collision. |
| `input_status` | `bool` | Mandatory | `false` / `true` | Use this param to enable or disable the given input logic when running the collector. If the value is `true`, the input will be run; if `false`, it will be ignored. |
| `requests_per_seconds` | `int` | Optional | Minimum value: 1 | Customize the maximum number of API requests per second. If not used, the default setting will be used: 100000 requests/sec. This parameter can be left blank, removed or commented. |
| `base_url` | `str` | Optional | Valid URL following this regex: `^https:\/\/([a-z0-9]+[.]{1})([a-z0-9]+[.]{1})*[a-z]{2,}(:[0-9]{2,5})?$` | By default, the base URL is `https://api.us1.app.wiz.io`. This parameter can be left blank, removed or commented. |
| `historic_date_utc` | `str` | Optional | UTC with format: `YYYY-mm-ddTHH:MM:SS.sssZ` | Sets a custom date as the beginning of the period to download, allowing historical data (for example, one month back) to be downloaded before new events. If this setting is not set, the default value is the current time. This parameter can be removed or commented. |
| `client_id` | `str` | Mandatory | Any | User Client ID to authenticate to the service. |
| `client_secret` | `str` | Mandatory | Any | User Secret Key to authenticate to the service. |
| `request_period_in_seconds` | `int` | Optional | Minimum value: 1 | Period in seconds between each data pull; this value overwrites the default of 60 seconds. This parameter can be removed or commented. |
| `type_list` | `list` | Optional | Possible values: `"TOXIC_COMBINATION"`, `"THREAT_DETECTION"`, `"CLOUD_CONFIGURATION"` | Filter by issue type. You can specify multiple values in an array, e.g. `type: [THREAT_DETECTION]` or `type: [TOXIC_COMBINATION, THREAT_DETECTION]`. This parameter should be removed if it is not used. |
| `override_devo_tag` | `str` | Optional | Devo tag | Use this to override the Devo tag. |
| `graphql_query` | `str` | | Valid GraphQL query format | Used in the custom service to define the query whose results are ingested. |
| `filter_by` | | | Valid JSON | Filters can be added under the `filter_by` parameter as a dict. |
| `filter_by_time_key` | `str` | | Minimum length: 1 | You need to specify the datetime parameter that the GraphQL query allows filtering on. |
| `response_time_key` | `str` | | Minimum length: 1 | You need to specify the datetime parameter in the GraphQL query response that the `filter_by_time_key` was applied on. |
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Collector Docker image | SHA-256 hash |
---|---|
|
Use the following command to add the Docker image to the system:
```shell
gunzip -c <image_file>-<version>.tgz | docker load
```
Note |
---|
Once the Docker image is imported, it will show the real name of the Docker image (including version info). Replace |
The Docker image can be deployed on the following services:
Docker
Execute the following command on the root directory <any_directory>/devo-collectors/<product_name>/
```shell
docker run \
  --name collector-<product_name> \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --env CONFIG_FILE=config.yaml \
  --rm \
  --interactive \
  --tty \
  <image_name>:<version>
```
Note |
---|
Replace |
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/
directory.
```yaml
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}
```
To run the container using docker-compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/
directory:
```shell
IMAGE_VERSION=<version> docker-compose up -d
```
Note |
---|
Replace |
Collector service details
Issue Service
Expand | ||
---|---|---|
| ||
All events of this service are ingested into the table |
Expand | ||
---|---|---|
| ||
Issue service is based on the following GraphQL command:
|
Expand | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ||||||||||||||||
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console. This service has the following components:
Setup output: A successful run has the following output messages for the setup module.
Puller output: A successful initial run has the following output messages for the puller module.
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Note that a
|
Vulnerability Service
Expand | ||
---|---|---|
| ||
All events of this service are ingested into the table |
Expand | ||
---|---|---|
| ||
The Vulnerability service is based on the following GraphQL command:
|
Expand | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ||||||||||||||||
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console. This service has the following components:
Setup output: A successful run has the following output messages for the setup module.
Puller output: A successful initial run has the following output messages for the puller module.
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Note that a
|
AuditLogs Service
Expand | ||
---|---|---|
| ||
All events of this service are ingested into the table |
Expand | ||
---|---|---|
| ||
The Audit Logs service is based on the following GraphQL command:
|
Expand | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ||||||||||||||||
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console. This service has the following components:
Setup output: A successful run has the following output messages for the setup module.
Puller output: A successful initial run has the following output messages for the puller module.
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Note that a
|
CloudConfiguration Service
Expand | ||
---|---|---|
| ||
All events of this service are ingested into the table |
Expand | ||
---|---|---|
| ||
The Cloud Configuration service is based on the following GraphQL command:
|
Expand | ||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ||||||||||||||||
Once the collector has been launched, it is important to check if the ingestion is performed in a proper way. To do so, go to the collector’s logs console. This service has the following components:
Setup output: A successful run has the following output messages for the setup module.
Puller output: A successful initial run has the following output messages for the puller module.
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Note that a
|
Expand | ||
---|---|---|
| ||
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
|
Custom Service
All events of this service are ingested into the table specified by the tag provided in the configuration.
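Since the Custom Service pulls data with a user-supplied GraphQL query, the request body it sends is a standard GraphQL payload. The sketch below shows how such a payload is built; the query fields and the endpoint shown in the comment are illustrative assumptions, not the real Wiz schema, so consult the Wiz API documentation for the exact field names.

```python
def build_graphql_payload(query, variables=None):
    """Build the JSON body for a GraphQL POST request."""
    return {"query": query, "variables": variables or {}}

# Hypothetical custom query: field names are illustrative, not the real Wiz schema.
CUSTOM_QUERY = """
query Issues($first: Int, $after: String) {
  issues(first: $first, after: $after) {
    nodes { id severity status }
    pageInfo { hasNextPage endCursor }
  }
}
"""

payload = build_graphql_payload(CUSTOM_QUERY, {"first": 100, "after": None})

# With a real token, the request would be sent roughly like this
# (endpoint and header are assumptions):
# requests.post("https://api.<region>.app.wiz.io/graphql",
#               json=payload,
#               headers={"Authorization": f"Bearer {token}"})
```

Cursor-based pagination would then repeat the request with `after` set to the previous response's `endCursor` until `hasNextPage` is false.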
Once the collector has been launched, it is important to check whether the ingestion is being performed properly. To do so, go to the collector’s logs console. This service has the following components:
Setup output

A successful run has the following output messages for the setup module:

Puller output

A successful initial run has the following output messages for the puller module:

After a successful collector’s execution (that is, no error logs found), you will see the following log message:
Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, as well as validating the given configuration. A successful run has the following output messages for the initializer module:
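The configuration validation performed at initialization can be sketched as follows. This is a minimal illustration under assumed key names (`client_id`, `client_secret`, `endpoint`, `request_period_in_seconds` are hypothetical here), not the collector's real configuration schema.

```python
def validate_config(config):
    """Check required settings and basic types before starting the services.

    Key names are hypothetical examples, not the collector's real schema.
    Returns a list of human-readable error messages (empty if valid).
    """
    errors = []
    for key in ("client_id", "client_secret", "endpoint"):
        if not config.get(key):
            errors.append(f"missing required setting: {key}")
    period = config.get("request_period_in_seconds", 60)
    if not isinstance(period, int) or period <= 0:
        errors.append("request_period_in_seconds must be a positive integer")
    return errors

ok = validate_config({"client_id": "x", "client_secret": "y", "endpoint": "z"})
bad = validate_config({"client_id": "x"})  # two required settings missing
```

Collecting all errors at once, rather than failing on the first one, lets the initializer log every configuration problem in a single run.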
This error is on Wiz’s side. Contact Wiz for more information; the collector should work again once the incident at Wiz is resolved.
This error is raised when the token being used to make requests to the API is valid, but there has been an unexpected return from the API. This is an internal issue; contact the Devo Support team.
This error is raised when the token being used to make requests to the API is valid, but the API constantly returns a throttling response. Check the throttle limitations on the Wiz API and adjust the value of the corresponding parameter in the configuration.
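A common way to cope with API throttling is exponential backoff, sketched below. This is an illustrative pattern, not the collector's actual throttle handling; the simulated API and the retry parameters are assumptions.

```python
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a throttled API call with exponential backoff.

    Illustrative only: the real collector's throttle handling may differ.
    `request_fn` returns (status_code, body).
    """
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:            # not throttled: return the response
            return status, body
        sleep(base_delay * (2 ** attempt))  # wait longer after each rejection
    raise RuntimeError("still throttled after retries")

# Simulated API: throttles twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    return (429, None) if calls["n"] <= 2 else (200, {"ok": True})

status, body = call_with_backoff(fake_request, sleep=lambda s: None)
```

Injecting `sleep` as a parameter keeps the example fast to test while preserving the real delay behavior by default.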
Collector operations
This section is intended to explain how to proceed with specific operations of this collector.
Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, as well as validating the given configuration. A successful run has the following output messages for the initializer module:

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for this module:

Sender services

The Integrations Factory Collector SDK has three different sender services, depending on the event type to deliver:

Sender services | Description
---|---
 | In charge of delivering internal metrics to Devo, such as logging traces or metrics.
 | In charge of delivering pulled events to Devo.

Sender statistics

Each sender service displays its own performance statistics, which allow you to check how many events have been delivered to Devo by type:

Logging trace | Description
---|---
 | Displays the number of concurrent senders available for the given sender service.
 | Displays the items available in the internal sender queue.
 | Displays the number of events sent to Devo since the last statistics report.
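The queue-and-sender flow above can be sketched as a producer/consumer pattern. This is a minimal single-threaded illustration; the real SDK runs concurrent senders and reports richer statistics.

```python
import queue

# Pullers put events on an internal bounded queue; a sender drains it and
# counts deliveries, which is what the statistics traces above report.
internal_queue = queue.Queue(maxsize=1000)

def puller(events):
    """Producer side: inject pulled events into the internal queue."""
    for e in events:
        internal_queue.put(e)

def sender():
    """Consumer side: drain the queue and return delivery statistics."""
    delivered = 0
    while not internal_queue.empty():
        internal_queue.get()
        delivered += 1
    return {"delivered": delivered, "queue_size": internal_queue.qsize()}

puller([{"id": i} for i in range(5)])
stats = sender()  # all 5 events delivered, queue empty again
```

A bounded `maxsize` makes `put()` block when the sender falls behind, which is the usual backpressure mechanism in this kind of pipeline.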
To check the memory usage of this collector, look for the following log records, which the collector displays every 5 minutes by default, always after running the memory-free process.
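The "free memory first, then report" sequence described above can be sketched in Python as follows. This is an illustration of the pattern, not Devo's implementation; the logger name and message format are assumptions.

```python
import gc
import logging
import tracemalloc

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("collector")  # hypothetical logger name

def log_memory_usage():
    """Free unreachable objects, then log current memory usage.

    Sketch of the periodic check described above, not Devo's implementation.
    """
    gc.collect()  # run the memory-free process first, as the docs describe
    current, peak = tracemalloc.get_traced_memory()
    log.info("Memory usage: current=%d bytes, peak=%d bytes", current, peak)
    return current, peak

tracemalloc.start()
data = [b"x" * 1024 for _ in range(100)]  # allocate ~100 KiB to measure
current, peak = log_memory_usage()
```

In a real collector this function would be scheduled on a timer (every 5 minutes by default) rather than called once.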
Sometimes it is necessary to activate the debug mode of the collector's logging. This mode increases the verbosity of the log and prints execution traces that are very helpful when resolving incidents or detecting bottlenecks in heavy download processes.
For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.
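In Python's standard `logging` terms, enabling debug mode amounts to lowering the logger's level, as sketched below. The toggle function and logger name are illustrative; the real collector switches this through its deployment configuration.

```python
import logging

log = logging.getLogger("collector")  # hypothetical logger name

def set_debug_mode(enabled):
    """Switch log verbosity; debug traces help diagnose bottlenecks.

    Illustrative only: the real collector toggles this via its configuration.
    """
    log.setLevel(logging.DEBUG if enabled else logging.INFO)

set_debug_mode(True)
assert log.isEnabledFor(logging.DEBUG)  # debug traces are now emitted
set_debug_mode(False)                   # back to normal verbosity
```

Because debug mode makes the log considerably larger, it is usually enabled only while investigating an incident and disabled afterwards.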
Change log
Release | Released on | Release type | Details | Recommendations
---|---|---|---|---
 |  |  | New features, Improvements | 
 |  |  | Bug fixes | 
 |  |  | Bug fixes, Improvements | 
 |  |  | New features, Improvements | 
 |  |  | New features, Improvements | 
 |  |  | Improvements, Bug fixes | 
 |  |  | Improvements | 
 |  |  | Bug fixes | 
 |  |  | New features | 
 |  |  | New features | 