Feature | Details |
---|---|
Allow parallel downloading (multipod) | |
Running environments | |
Populated Devo events | |
Flattening preprocessing | |
Data source | Description | API endpoint | Collector service name | Devo table | Available from release |
---|---|---|---|---|---|
Vulnerabilities | Get a list of vulnerabilities. | | | | v1.0.0 |
Assets | Get a list of assets. | | | | |
Sites | Get a list of sites. | | | | |
Scans | Get a list of scans. | | | | |
For more information on how the events are parsed, visit our page. |
`api_key`
Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.
This minimum configuration refers exclusively to those specific parameters of this integration. There are more required parameters related to the generic behavior of the collector. Check running the collector section for details.
Setting | Details |
---|---|
| The |
| The |
| The |
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).
We use a piece of software called Collector Server to host and manage all our available collectors. To enable the collector for a customer:
Editing the JSON configuration
This data collector can run on any machine that has the Docker service available, because it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.

Structure

The following directory structure should be created for use when running the collector:
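Based on the volumes mounted by the Docker commands later on this page (`certs`, `config`, `state`), the layout can be created in one step. This is a sketch: the base path and the product name `my-collector` are placeholders, not values defined by this integration.

```shell
# Create the directory tree the collector expects to find mounted.
# "my-collector" stands in for <product_name>; adjust the base path as needed.
BASE="$PWD/devo-collectors/my-collector"
mkdir -p "$BASE/certs" "$BASE/config" "$BASE/state"
ls "$BASE"
```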
Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key, and Chain CA, and save them in the `certs/` directory.
Editing the config.yaml file
Replace the placeholders with your required values following the description table below:
Collector Docker image | SHA-256 hash |
---|---|
| |
Use the following command to add the Docker image to the system:
```
gunzip -c <image_file>-<version>.tgz | docker load
```
Once the Docker image is imported, the command output will show the real name of the Docker image (including version info). Use that name and version to replace the `<image_name>:<version>` placeholders in the commands below.
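To see why the pipe above works: the `.tgz` file is simply a gzip-compressed tar archive, so `gunzip -c` streams the decompressed tar to stdout and `docker load` reads it from stdin. The same mechanics can be verified without Docker using a dummy archive (all file names here are illustrative, not real collector artifacts):

```shell
# Build a dummy .tgz, then stream-decompress it and list its contents,
# mirroring what the "gunzip -c ... | docker load" pipe does.
printf 'demo' > payload.bin
tar -czf image-demo.tgz payload.bin
gunzip -c image-demo.tgz | tar -tf -   # lists: payload.bin
```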
The Docker image can be deployed on the following services:
Execute the following command in the root directory <any_directory>/devo-collectors/<product_name>/:

```
docker run \
  --name collector-<product_name> \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --env CONFIG_FILE=config.yaml \
  --rm --interactive --tty \
  <image_name>:<version>
```
The following Docker Compose file can be used to execute the Docker container. It must be created in the <any_directory>/devo-collectors/<product_name>/ directory.

```
version: '3'
services:
  collector-<product_name>:
    image: <image_name>:${IMAGE_VERSION:-latest}
    container_name: collector-<product_name>
    restart: always
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./credentials:/devo-collector/credentials
      - ./state:/devo-collector/state
    environment:
      - CONFIG_FILE=${CONFIG_FILE:-config.yaml}
```
To run the container using Docker Compose, execute the following command from the <any_directory>/devo-collectors/<product_name>/ directory:

```
IMAGE_VERSION=<version> docker-compose up -d
```
Replace `<version>` with the corresponding Docker image version.
This section is intended to explain how to proceed with specific actions for services.
Verify data collection

Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console.

Puller output

A successful initial run has the following output messages for the puller module. Note that the PrePull action is executed only once, before the first run of the Pull action.

After a successful collector execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and restart the persistence using the parameters of the configuration file, or the default configuration if none has been provided. Note that this action clears the persistence and cannot be recovered in any way; resetting the persistence could result in duplicate or lost events.
Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services, and of validating the given configuration. A successful run has the following output messages for the initializer module:
Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for this module:
By default, these information traces are displayed every 10 minutes.

Sender services

The Integrations Factory Collector SDK has three different sender services, depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:
Sender statistics

Each service displays its own performance statistics, which allow you to check how many events have been delivered to Devo by type:
To check the memory usage of this collector, look for the following log records, which are displayed every 5 minutes by default, always after the memory-freeing process runs.
Differences between RSS and VMS memory usage:

- RSS (Resident Set Size) is the amount of physical memory (RAM) the process currently holds.
- VMS (Virtual Memory Size) is the total virtual address space reserved by the process, including memory that is swapped out or allocated but not yet used.
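As an aside, the two figures can be compared for any process directly from a shell. This is a generic illustration using the standard `ps` utility (here inspecting the current shell; on Linux, both columns are reported in KiB):

```shell
# RSS = physical memory resident in RAM; VSZ = total virtual address space.
ps -o pid,rss,vsz,comm -p $$
```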
Release | Released on | Release type | Recommendations |
---|---|---|---|
| | | |
| | | |