
Configuration requirements

To run this collector, there are some configurations detailed below that you need to take into account.

| Configuration | Details |
|---|---|
| GCP console access | You must have credentials to access the GCP console. |
| Permissions | Administrator permissions to access the GCP console. |
| Logging services | The following features must be configured: GCP project, service account, GCP Pub/Sub, and sink (optional). |
| Enable SCC | SCC Audit logs: use the logging service to pull this data source. SCC Command Center: use the scc_findings service to pull this data source. |
| Credentials | The JSON credentials properties must be filled in or deleted depending on the authentication method. |

More information

Refer to the Vendor setup section to learn more about these configurations.

Overview

This collector lets you build, deploy, and scale applications, websites, and services on the same infrastructure as Google. It also makes it possible to integrate the Google Cloud Platform (GCP) with the Devo platform, making it easy to query and analyze GCP event data. You can view the data in the pre-configured Activeboards or customize them.

Devo's GCP collector also enables you to retrieve data stored in GCP via the Google Cloud APIs, such as audit logs, Security Command Center findings, networking, load balancing, and more, available via Pub/Sub. Once in Devo, you can query, correlate, analyze, and visualize this data, enabling enterprise IT and cybersecurity teams to make the most impactful decisions at petabyte scale.

Devo collector features

| Feature | Details |
|---|---|
| Allow parallel downloading (multipod) | Allowed |
| Running environments | Collector server, On-premise |
| Populated Devo events | Table |
| Flattening preprocessing | No |

For more information on how the events are parsed, visit our page.

Data sources

| Data source | Description | API endpoint | Collector service name | Devo table | Available from release |
|---|---|---|---|---|---|
| Logging (formerly Stackdriver) | Cloud Logging allows you to store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon Web Services. | Pub/Sub queue | logging | cloud.gcp.<logname_part1>.<logname_part2> or cloud.gcp.<resource_type_part1>.<resource_type_part2> | v1.0.20 |
| Security Command Center Findings | Security Command Center is Google Cloud's centralized vulnerability and threat reporting service. | Pub/Sub queue | scc_findings | cloud.gcp.scc.findings | v1.1.4 |

The logging service lets you choose between two different auto-dispatching systems: one builds the Devo table name from the logName field (cloud.gcp.<logname_part1>.<logname_part2>), and the other from the resource type (cloud.gcp.<resource_type_part1>.<resource_type_part2>).

Vendor setup

To enable collection on the vendor side, you must meet the following minimal requirements:

  1. GCP console access: You should have credentials to access the GCP console.

  2. Owner or Administrator permissions within the GCP console.

Enable the Logging service

Here you will find how to enable the Logging service (formerly Stackdriver).

Logging Service Overview

GCP centralizes the monitoring information from all the services in its cloud catalog in a service named Logging.

You have to use the logging service to pull this data source.

Some information is generated by default and free of charge. Other information must be enabled manually and, once activated, may incur costs. In both cases, the generated information (messages) arrives at the Logging service.

The diagram is only an example of some GCP services; there are many more.

The Logging service has different ways of exporting the information it stores, structured as messages. This collector relies on another GCP service called Pub/Sub: a topic object receives a filtered set of messages from the Logging service, and the GCP collector then retrieves all those messages from the topic object using a subscription (in pull mode).

To facilitate retrieval, it is recommended to split the source messages across different topic objects; you can split them by resource type, region, project ID, and so on:
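As an illustration of this splitting approach, the following Python sketch uses the google-cloud-logging client library to create a sink that routes only Compute Engine entries to a dedicated Pub/Sub topic. It is a minimal example under assumed names: the project, topic, sink name, and filter are placeholders, not values required by the collector.

from google.cloud import logging

# Placeholder identifiers - replace them with your own.
PROJECT_ID = "my-gcp-project"
DESTINATION = "pubsub.googleapis.com/projects/my-gcp-project/topics/gce-logs"

client = logging.Client(project=PROJECT_ID)

# Route only Compute Engine entries to this topic; other resource types
# can be split into their own topics with similar filters.
sink = client.sink(
    "gce-logs-sink",
    filter_='resource.type="gce_instance"',
    destination=DESTINATION,
)

if not sink.exists():
    # The sink's writer identity must be granted the Pub/Sub Publisher
    # role on the destination topic for messages to flow.
    sink.create()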

Configuration of the Logging service

Here you will find which features you need to configure to receive events from both services:

  1. GCP Project: You need to have a GCP project in the console to be able to receive data.

  2. Service account: The service account is the identity that the collector uses to authenticate against the GCP APIs.

  3. GCP Pub/Sub: This is the queue from which the events are downloaded; you need to create a topic and a subscription.

  4. Sink (optional): The sink is a filter so that you receive only the type of events that you want.

Here you will find the steps to configure each feature:

 Creating a new project
  1. Go to the left-bar menu, select IAM & Admin, and then click on Create a Project.

  2. Fill in the project details and click on Create.

  3. Select the project and click Open.

 Copy the Project ID
  1. Click on the name of the project in the top menu.

  2. Copy the Project ID.

Save the Project ID

It is important to save this value to later configure the collector.

 Setting up a Service Account
  1. Go to your Google GCP console project, open the left-bar menu, click on IAM & Admin, and then click on Service Accounts to create a GCP credential.

  2. Click on + Create Service Account to create the credentials.

    1. Fill in the section Service account details fields and click on CREATE AND CONTINUE.

    2. In the section Grant this service account access to project, select Pub/Sub Subscriber in the Role field and click on CONTINUE.
      If you want to enable the undelivered messages logging feature, you will also need to add the Monitoring Viewer role to the Service Account.

    3. The section Grant users access to this service account is optional and does not need to be filled in.

    4. Finally, click on DONE.

  3. Now you have to add keys to the service account that was previously created and download them in JSON format. After clicking on Done, you will be redirected to the Service Accounts page of your project. Search for the service account that you created and click on it.

  4. On Service Account Details click on the KEYS tab.

  5. Click on the button ADD KEY and Create new key.

  6. Select JSON format and click on CREATE.

  7. Download the credentials file and move it to <any_directory>/devo-collectors/gcp/credentials/ directory.

  8. Copy the content of the JSON file and convert it to base64: paste the content into a base64 encoder (you can use any trusted tool) and copy the result.

It is important to save the credentials file to later run the collector in the Collector Server, and to save the base64 value to later run the collector on-premise or in the Collector Server.
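If you prefer not to paste credentials into a third-party encoder, the following minimal Python sketch produces the same base64 value locally (the file path is a placeholder):

import base64

# Placeholder path - point it at the downloaded service account key.
CREDENTIALS_PATH = "devo-collectors/gcp/credentials/credentials_file.json"

with open(CREDENTIALS_PATH, "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

# Paste this single-line value into the collector configuration.
print(encoded)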

 Setting a Pub/Sub
  1. Use the search tool to find the Pub/Sub service.

  2. Click on Create a topic.

  3. Fill in the Topic ID.

  4. Mark the Add a default subscription box so that the subscription is created automatically.

  5. Click Create. The Subscription is created by default and is located in the Subscription tab.

It is important to save the Subscription ID to use it later in the collector configuration. In this example, it is called test-topic-sub1.
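As a quick sanity check (a sketch, not part of the collector), you can verify that the service account is able to pull from the new subscription with the google-cloud-pubsub client library; the project ID, subscription ID, and key path below are placeholders:

from google.cloud import pubsub_v1

# Placeholder values - replace them with your own.
PROJECT_ID = "my-gcp-project"
SUBSCRIPTION_ID = "test-topic-sub1"
KEY_PATH = "devo-collectors/gcp/credentials/credentials_file.json"

subscriber = pubsub_v1.SubscriberClient.from_service_account_file(KEY_PATH)
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

# Pull a few messages without acknowledging them, so the collector can
# still consume them later (they are redelivered after the ack deadline).
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 5},
    timeout=30,
)
print(f"Received {len(response.received_messages)} message(s)")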

 Restrict a Service account to subscription

This is an optional step

This step is optional; without it, the Service account can access all subscriptions. Refer to the Access control IAM documentation for more information on access control for subscriptions.

  1. Click on the Subscription ID created in the previous step.

  2. Copy and save the subscription name.

  3. Search for IAM in the search tool and click on the IAM service.

  4. Click on the edit button of the service account that was created.

  5. Click on Add condition and fill it in as follows:

    1. Condition type: Select Resource → Name

    2. Operator: is

    3. Value: Use the name of the subscription you already copied.

  6. Click on the Save button.

More information

For more information on Access Control refer to the following article.

 Setting up a Sink

This is an optional step

This step is optional. The sink acts as a filter so that only the events you want are exported to the Pub/Sub topic.

  1. Use the search tool and look for the Logging service.

  2. Click on Logs Router and click on Create Sink.

  3. Follow the steps and, when you finish, click on Create sink.

More information

Refer to the official Google documentation about how to Configure and manage sinks.

Enable the Security Command Center Service (SCC)

It is mandatory that you have configured the Logging service to enable the SCC.

Events can be retrieved differently depending on the source:

  • SCC Audit logs: Events obtained through the Logging service.

  • SCC Findings: Events obtained directly from the Security Command Center service, without going through the Logging service.

Enable the Security Command Center (SCC) Audit logs

The events will be obtained through the centralized Logging service. Refer to the Configuration of the Logging service section to know how to configure it.

You have to use the logging service to pull this data source.

Here you will find the steps to filter this type of event:

| Action | Steps |
|---|---|
| 1. Activate the Security Command Center service | In order to receive this type of event, the Security Command Center service must be activated. When SCC is activated, the events go through the Logging service to the default sink; the following steps are optional but recommended to filter SCC events into a dedicated Pub/Sub. Refer to the Security Command Center Quickstart video from the Google guide. |
| 2. Set up a new topic | Refer to the Configuration of the Logging service section to know how to do it. |
| 3. Set up a Pub/Sub | Refer to the Configuration of the Logging service section to know how to do it. |
| 4. Set up a sink | Refer to the Configuration of the Logging service section to know how to do it. |

Enable the Security Command Center (SCC) Findings

These events are obtained from the Security Command Center service and are injected directly into the Pub/Sub without going through the Logging service.

You have to use the scc_findings service to pull this data source.

| Action | Steps |
|---|---|
| 1. Configure Identity and Access Management (IAM) roles | Refer to the official Google guide, which describes the additional configurations. |
| 2. Activate the Security Command Center API | - |
| 3. Set up a Pub/Sub topic | - |
| 4. Create a notification configuration | - |
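For step 4, a hedged sketch using the google-cloud-securitycenter client library is shown below; the organization ID, topic, config ID, and filter are placeholders, and the exact values depend on your environment (the official Google guide remains the reference):

from google.cloud import securitycenter

# Placeholder values - replace them with your own.
ORG_ID = "123456789012"
PUBSUB_TOPIC = "projects/my-gcp-project/topics/scc-findings"

client = securitycenter.SecurityCenterClient()

# Publish active findings to the Pub/Sub topic the collector reads from.
notification_config = {
    "description": "SCC findings for the Devo collector",
    "pubsub_topic": PUBSUB_TOPIC,
    "streaming_config": {"filter": 'state = "ACTIVE"'},
}

created = client.create_notification_config(
    request={
        "parent": f"organizations/{ORG_ID}",
        "config_id": "devo-scc-findings",
        "notification_config": notification_config,
    }
)
print(created.name)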

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with the basic configuration are defined below.

This minimum configuration refers exclusively to the parameters specific to this integration. There are more required parameters related to the generic behavior of the collector; check the settings sections for details.

| Setting | Details |
|---|---|
| source_id_value | This parameter allows you to assign a custom name to identify the environment of the infrastructure. |
| project_id_value | The name of the GCP project. Refer to the Configuration of the Logging service section to know how to get this value. |
| file_content_base64 | The service account credentials in base64. Refer to the Configuration of the Logging service section to know how to get this value. |
| subscription_id | The ID of the Pub/Sub subscription. Refer to the Configuration of the Logging service section to know how to get this value. |
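As an orientation only, a minimal credentials fragment could look like the sketch below. The inputs > credentials layout follows the property paths referenced in the Troubleshooting section, but the exact placement of subscription_id and the surrounding keys depends on your deployment and collector version, so treat the whole fragment as illustrative:

inputs:
  gcp:
    credentials:
      source_id: my-environment                  # source_id_value
      project_id: projectabc-1234                # project_id_value
      file_content_base64: <BASE64_CREDENTIALS>  # base64-encoded service account JSON
    services:
      custom_service:
        subscription_id: test-topic-sub1         # assumed location of this key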

See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.

Accepted authentication methods

Depending on how you obtained the credentials, you will have to either fill in or delete the following properties in the JSON credentials configuration block.

| # | Authentication method | Project ID | Base64 credentials | File credentials | Available on |
|---|---|---|---|---|---|
| 1 | Service account with Base64 | REQUIRED | REQUIRED | - | Collector Server, On-Premise |
| 2 | Service account with the file credentials | REQUIRED | - | REQUIRED | On-Premise |

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

Custom Service

This is the only service type this collector provides. Multiple custom services can be created to ingest data from different Pub/Sub sinks; however, the only data sources supported by this collector are:

  • Logging events: The previous sections explain how to configure a Logging service, but more custom logging services can be created with different Pub/Sub filters.

  • SCC findings events: This is also a custom service, configured with data coming from a source external to the Logging service.

Devo categorization and destination

The following shows the Devo tables and the tags to which the events are ingested for each data source:

Data source: Logging service

  • Devo table/tag: cloud.gcp.<logname_part1>.<logname_part2>

    This is an autocalculated default tag structure to which the events that come from the Logging service are sent. These events are of type LogEntry. The tag structure is based on the following message fields:

      • logname_part1 -> The first part of the logName field. For example, in logName: "projects/projectabc-1234/logs/cloudaudit.googleapis.com%2Factivity", the logname_part1 is cloudaudit.

      • logname_part2 -> The second part of the logName field. For example, in logName: "projects/projectabc-1234/logs/cloudaudit.googleapis.com%2Factivity", the logname_part2 is activity.

    For more information, consult the official GCP documentation: LogEntry | Cloud Logging | Google Cloud

  • Devo table/tag: cloud.gcp.<resource_type_part1>.<resource_type_part2>

    This is an autocalculated default tag structure to which the events that come from the Logging service are sent. These events are of type MonitoredResource. The tag structure is based on the following message fields:

      • resource_type_part1 -> The first part of the type field. For example, in "type": "gce_instance", the resource_type_part1 is gce.

      • resource_type_part2 -> The second part of the type field. For example, in "type": "gce_instance", the resource_type_part2 is instance.

    For more information, consult the official GCP documentation.

  • Devo table/tag: custom_tag

    If the user adds a custom tag, all events are sent to that custom tag.

  • Devo table/tag: cloud.gcp.unknown.none

    All events that are not in JSON format are sent to this tag (unless a custom tag has been defined).

Data source: SCC findings

  • Devo table/tag: cloud.gcp.scc.findings

    This is the recommended value for the custom_tag parameter when ingesting SCC Findings. It is not an autocalculated tag: it must be defined as the custom_tag.

  • Devo table/tag: custom_tag

    You can also define a different tag for these events, but bear in mind that only cloud.gcp.scc.findings will be natively parsed by Devo as SCC Findings.
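For illustration only, the following Python sketch shows how the two autocalculated tag structures above could be derived from the example fields; it is a rough approximation, not the collector's actual parsing logic:

from urllib.parse import unquote

def tag_from_logname(log_name: str) -> str:
    # e.g. "projects/projectabc-1234/logs/cloudaudit.googleapis.com%2Factivity"
    log_id = unquote(log_name.rsplit("/logs/", 1)[1])  # "cloudaudit.googleapis.com/activity"
    part1 = log_id.split(".")[0]                       # "cloudaudit"
    part2 = log_id.rsplit("/", 1)[1]                   # "activity"
    return f"cloud.gcp.{part1}.{part2}"

def tag_from_resource_type(resource_type: str) -> str:
    part1, _, part2 = resource_type.partition("_")     # "gce", "instance"
    return f"cloud.gcp.{part1}.{part2}"

print(tag_from_logname("projects/projectabc-1234/logs/cloudaudit.googleapis.com%2Factivity"))  # cloud.gcp.cloudaudit.activity
print(tag_from_resource_type("gce_instance"))          # cloud.gcp.gce.instance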

Events service

 Verify data collection

Once the collector has been launched, it is important to check whether the ingestion is being performed properly. To do so, go to the collector's logs console.

This service has the following components:

| Component | Description |
|---|---|
| Setup | The setup module is in charge of authenticating the service and managing the token expiration when needed. |
| Puller | The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK. |

Setup output

A successful run has the following output messages for the setup module:

INFO InputProcess::MainThread -> CollectorGCPPullerSetup(test-andrea,gcp#123,custom_service#custom,all) -> Starting thread
INFO InputProcess::CollectorGCPPullerSetup(test-andrea,gcp#123,custom_service#custom,all) -> File "/devo-collector/.../credentials_file.json" has been created from base64 content
WARNING InputProcess::CollectorGCPPullerSetup(test-andrea,gcp#123,custom_service#custom,all) -> Remote auto-setup: Disabled, due it is used a custom queue name from the configuration
INFO InputProcess::CollectorGCPPullerSetup(test-andrea,gcp#123,custom_service#custom,all) -> Setup for module "CollectorGCPPuller" has been successfully executed

Puller output

A successful initial run has the following output messages for the puller module:

Note that the PrePull action is executed only one time before the first run of the Pull action.

INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Starting a new pulling at "2022-07-15T14:55:32.282345+00:00"
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Requested 1000, received 3 messages
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1, total_bytes: 2372 (216.608593 seconds)
INFO OutputProcess::SyslogSender(standard_senders,syslog_sender_0) -> syslog_sender_0 -> Created sender: {"client_name": "collector-abc", "url": "collector-url", "object_id": "1234567890"}
INFO OutputProcess::SyslogSender(standard_senders,syslog_sender_0) -> Consumed messages: 1 messages (216.614309 seconds) => 0 msg/sec
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Processed 3 messages, total processed: 3
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Requested 1000, received 1 messages
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Processed 1 messages, total processed: 4
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Received 2 response(s) with data, generated 4 message(s), avg time per request: 9462.763 ms
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Data collection completed. Elapsed time: 18.926 seconds. Waiting for 11.074 second(s) until the next one

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Received 2 response(s) with data, generated 4 message(s), avg time per request: 9462.763 ms
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Data collection completed. Elapsed time: 18.926 seconds. Waiting for 11.074 second(s) until the next one

The value @devo_pulling_id is injected into each event to group all the events ingested by the same pull action. You can use it in Devo's search window to get the exact events downloaded in that pull action.
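For example, a hypothetical search for the events of one pull action could look like the query below; the table and the identifier value are placeholders, and the exact field syntax may vary with your Devo version:

from cloud.gcp.scc.findings
where @devo_pulling_id = "1658847332282345"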

 Restart the persistence

This collector does not use persistence because it consumes events from a GCP Pub/Sub queue.

 Troubleshooting

This collector has different security layers that detect both invalid configurations and abnormal operation. The following table will help you detect and resolve the most common errors.

| Error type | Error ID | Error message | Cause | Solution |
|---|---|---|---|---|
| GCPCredentialsException | 1 | Credentials file has at least one wrong/invalid property. Exception message: <exception_message> | The config file has invalid properties. | Review the documentation and edit the config file so that it complies with the required structure. |
| GCPCredentialsException | 2 | Credentials file must be "service_account" type. Current value: <credentials_type> | The credentials do not have the structure of a GCP service_account. | Edit the credentials in the config file so they follow the correct service_account structure. |
| GCPCredentialsException | 3 | Credentials file has unexpected format, "type" entry hasn't found | The credentials are not in the correct str format. | Edit the credentials in config.json so they are in the correct str format. |
| GCPCredentialsException | 4 | Credentials file does not exists | The credentials file does not exist. | Make sure the credentials file exists. If it has not been created, check the documentation and create one. |
| GCPCredentialsException | 5 | Credentials filename is empty or blank | The credentials filename is empty or blank in the config file. | Edit the credentials filename in config.json and add a valid value. |
| GCPCredentialsException | 6, 7, 8 | Credentials not valid for "Subscription" management (Subscriber API). Exception message: <exception_message> | The credentials are correct but are not valid for the current subscription. | Check the documentation to correctly generate credentials and create subscriptions, then edit config.json and add the correct values. |
| GCPCredentialsException | 9, 10, 11 | Credentials not valid for "Topic" management (Publisher API). Exception message: <exception_message> | The credentials are correct but are not valid for the current topic. | Check the documentation to correctly generate credentials and create topics, then edit config.json and add the correct values. |
| GCPCredentialsException | 12, 13 | Credentials not valid for "Sink" management (Logging API). Exception message: <exception_message> | The credentials are correct but are not valid for the current sink. | Check the documentation to correctly generate credentials and create sinks, then edit config.json and add the correct values. |
| GCPTopicException (GCP Library) | 1 | <topic_name> <exception_message> | An unknown problem occurred creating the topic. | This is an internal issue. Contact the Devo Support team. |
| GCPTopicException (GCP Library) | 2, 3, 4 | <topic_name> <exception_message> | The credentials provided in the config file are valid, but the authentication endpoint requested to get a token is not found. | This is an internal issue. Contact the Devo Support team. |
| GCPSinkException (GCP Library) | 1 | <sink_name> <exception_message> | An unknown problem occurred creating the sink. | This is an internal issue. Contact the Devo Support team. |
| GCPSinkException (GCP Library) | 2, 3, 4 | <sink_name> <exception_message> | The credentials provided in the config file are valid, but the authentication endpoint requested to get a token is not found. | This is an internal issue. Contact the Devo Support team. |
| GCPIAMPolicyException (GCP Library) | 1, 2 | <topic_name> <sink_name> <exception_message> | There is a problem with the GCP API. | This is an internal issue. Contact the Devo Support team. |
| GCPIAMPolicyException (GCP Library) | 3 | <topic_name> <sink_name> <exception_message> | The IAM access permissions are not correct. | Review the GCP documentation to grant the correct permissions. |
| GCPIAMPolicyException (GCP Library) | 4 | <topic_name> <sink_name> <exception_message> Topic and/or sink are not valid | The topic or the sink is not valid. | Check that the topic and sink are valid and assign IAM permissions correctly. |
| GCPSubscriptionException (GCP Library) | 1 | <subscription_name> <exception_message> | An unknown problem occurred creating the subscription. | This is an internal issue. Contact the Devo Support team. |
| GCPSubscriptionException (GCP Library) | 2, 3, 4 | <subscription_name> <exception_message> | The credentials provided in the config file are valid, but the authentication endpoint requested to get a token is not found. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerSubscriptionException (GCP Library) | 0 | Subscription client can not be created | The client for the subscription could not be created because it has invalid data. | Edit config.json and add a client and its valid credentials. |
| GCPPullerSubscriptionException (GCP Library) | 1 | Subscription "<subscription_path>": <exception_message> | An unknown problem occurred creating the subscription. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerSubscriptionException (GCP Library) | 2 | Retries limit reached: <exception_message> | The API has failed and the retries limit has been reached. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerSubscriptionException (GCP Library) | 3 | Subscription "<subscription_path>" does not exists: <exception_message> | The subscription does not exist or is invalid. | Edit config.json and add a correct subscription. |
| GCPPullerSubscriptionException (GCP Library) | 4 | Monitoring client can not be created: <exception_message> | The Monitoring client is not able to establish the TLS connection. | Ensure you have no network issues and try again. If the problem persists, contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 0 | A "sink_filter_resource" property must exists, inside service data from the inputs_configuration file | The sink_filter_resource property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 1 | A "sink_filter_resource_region" property must exists, inside service data from the inputs_configuration file | The sink_filter_resource_region property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 2 | The property "pull_retries" must exist | The pull_retries property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 3 | The property "pull_retries" must be an integer | The pull_retries property is not an integer in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 4 | The property "pull_retries" must be a positive integer | The pull_retries property is not a positive number in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 10 | Property "tag_base" is missing from service inputs_configuration section | The tag_base property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 11 | Property "unknown_tag" is missing from service inputs_configuration section | The unknown_tag property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 12 | "enable_pubsub_undelivered_messages_logging" is a required field. Specify it in collector "definitions" | The enable_pubsub_undelivered_messages_logging property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 13 | "enable_pubsub_undelivered_messages_logging" is not of expected type: bool | The enable_pubsub_undelivered_messages_logging property is not a bool in collector_definitions.yaml. | Make sure the value of enable_pubsub_undelivered_messages_logging is a valid boolean. If it is, this may be an internal issue: contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 14 | "pubsub_undelivered_messages_request_interval" is a required field. Specify it in collector "definitions" | The pubsub_undelivered_messages_request_interval property is not found in collector_definitions.yaml. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 15 | "pubsub_undelivered_messages_request_interval" is not of expected type: int | The pubsub_undelivered_messages_request_interval property is not an integer in collector_definitions.yaml. | Make sure the value of pubsub_undelivered_messages_request_interval is a valid integer. If it is, this may be an internal issue: contact the Devo Support team. |
| GCPPullerCreationException (GCP Library) | 16 | Property "pubsub_undelivered_messages_request_interval" must be greater or equal to 1 | The pubsub_undelivered_messages_request_interval property is not a positive number in collector_definitions.yaml. | Make sure the pubsub_undelivered_messages_request_interval value in the configuration file is numeric and greater than or equal to 1. |
| GCPPullerCreationException (GCP Library) | 20 | The property "start_time" from configuration is having a wrong format, expected: YYYY-mm-ddTHH:MM:SS.ssssssZ | The optional value start_time does not match the required format. | Make the value of start_time in config.json match the indicated format. |
| GCPPullerCreationException (GCP Library) | 30 | "credentials" property is mandatory in the configuration. ["inputs" > "credentials"] | The required property credentials is not found in the config file. | Add the credentials dictionary in config.json, including its required fields (source_id, project_id, and either file_content_base64 or filename). |
| GCPPullerCreationException (GCP Library) | 31 | "source_id" property is mandatory in the configuration. ["inputs" > "credentials" > "source_id"] | The required property source_id is not found in the credentials dictionary of the config file. | Add the source_id property to the credentials dictionary in config.json. |
| GCPPullerCreationException (GCP Library) | 32 | "project_id" property is mandatory in the configuration. ["inputs" > "credentials" > "project_id"] | The required property project_id is not found in the credentials dictionary of the config file. | Add the project_id property to the credentials dictionary in config.json. |
| GCPPullerCreationException (GCP Library) | 33 | file_content_base64 property must be a string | file_content_base64 is defined in the config file but is not a str. | Edit the value of file_content_base64 in the credentials dictionary of config.json so it is of type str. |
| GCPPullerCreationException (GCP Library) | 34 | file_content_base64 must be in a valid base64 format | file_content_base64 is defined in the config file but is not in a valid base64 format. | Edit the value of file_content_base64 in the credentials dictionary of config.json so it is in a valid base64 format. |
| GCPPullerCreationException (GCP Library) | 35 | "filename" property is mandatory in the configuration. ["inputs" > "credentials" > "filename"] | The required property filename is not found in the credentials dictionary of the config file. | Add the filename property to the credentials dictionary in config.json. |
| GCPPullerRegionException (GCP Library) | 300 | Seems that connection has been lost: <exception_message> | The connection to the API has been lost. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerRegionException (GCP Library) | 301, 303 | Unknown exception when retrieving messages: <exception_message> | An unknown problem occurred retrieving messages. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerRegionException (GCP Library) | 302 | Retries limit reached: <exception_message> | The API has failed and the retries limit has been reached. | This is an internal issue. Contact the Devo Support team. |
| GCPPullerRegionException (GCP Library) | 304 | Subscriber client object does not exists | The client does not exist. | Edit config.json and check that the client data is entered and correct. |
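To rule out credential-format problems such as GCPPullerCreationException errors 33 and 34 before deploying, a small local check (a sketch; the value is a placeholder) can confirm that the string you are about to configure is valid base64 that decodes to a service_account JSON document:

import base64
import json

# Placeholder - the value you plan to set in file_content_base64.
encoded = "<BASE64_CREDENTIALS>"

# Raises binascii.Error if the string is not valid base64 (error 34).
decoded = base64.b64decode(encoded, validate=True)
credentials = json.loads(decoded)

# GCPCredentialsException error 2 requires the "type" entry to be "service_account".
assert credentials.get("type") == "service_account"
print("Credentials look structurally valid")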

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

 Verify collector operations

Initialization

The initialization module is in charge of setting up and running the input services (pulling logic) and output services (delivering logic), and of validating the given configuration.

A successful run has the following output messages for the initializer module:

INFO MainProcess::MainThread -> {"build_time": "2022-06-21T14:10:37.827890025+0000", "os_info": "Linux-abc", "collector_name": "test-doc", "collector_version": "1.1.4", "collector_owner": "owner@devo.com", "started_at": "2022-07-15T14:51:58.184301Z"}
INFO MainProcess::MainThread -> (CollectorMultiprocessingQueue) standard_queue_multiprocessing -> max_size_in_messages: 10000, max_size_in_mb: 1024, max_wrap_size_in_items: 100
INFO MainProcess::MainThread -> [OUTPUT] OutputMultiprocessingController::__init__ Configuration -> {'sidecar_0': {'type': 'sidecar', 'config': {'port': 601, 'address': 'example-adress', 'concurrent_connections': 1, 'period_sender_stats_in_seconds': 300, 'activate_final_queue': False, 'threshold_for_using_gzip_in_transport_layer': 1.1, 'compression_level': 6, 'compression_buffer_in_bytes': 51200, 'generate_metrics': False}}}
INFO MainProcess::MainThread -> OutputProcess - Starting thread (executing_period=300s)
INFO MainProcess::MainThread -> InputProcess - Starting thread (executing_period=300s)
INFO OutputProcess::MainThread -> Process started
INFO InputProcess::MainThread -> Process Started
INFO InputProcess::MainThread -> There is not defined any submodule, using the default one with value "none"
INFO OutputProcess::MainThread -> [INTERNAL LOGIC] SyslogSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: sidecar-service-default.integrations-factory-collectors
INFO InputProcess::MainThread -> Using the default auto-categorization mode: "logname_field"
INFO InputProcess::MainThread -> InputThread(gcp,123) - Starting thread (execution_period=600s)
INFO InputProcess::MainThread -> ServiceThread(gcp,123,custom_service,custom) - Starting thread (execution_period=600s)
INFO InputProcess::MainThread -> CollectorGCPPullerSetup(test-doc,gcp#123,custom_service#custom,all) -> Starting thread
INFO InputProcess::MainThread -> CollectorGCPPuller(gcp,123,custom_service,custom,all) - Starting thread
WARNING OutputProcess::MainThread -> [OUTPUT] OutputSenderManagerListLookup -> Lookup Service is UNAVAILABLE due to no compatible outputs have been found.
WARNING InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Waiting until setup will be executed
INFO OutputProcess::MainThread -> [INTERNAL LOGIC] SyslogSender::_validate_kwargs_for_method__init__ -> The <address> does not appear to be an IP address and cannot be verified: sidecar-service-default.integrations-factory-collectors
INFO OutputProcess::MainThread -> SyslogSender(standard_senders,syslog_sender_0) -> Starting thread
INFO OutputProcess::MainThread -> SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Starting thread (every 300 seconds)
INFO OutputProcess::MainThread -> SyslogSenderManager(standard_senders,manager,sidecar_0) -> Starting thread
INFO OutputProcess::MainThread -> SyslogSender(internal_senders,syslog_sender_0) -> Starting thread
INFO OutputProcess::MainThread -> SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Starting thread (every 300 seconds)
INFO OutputProcess::MainThread -> SyslogSenderManager(internal_senders,manager,sidecar_0) -> Starting thread
INFO OutputProcess::SyslogSender(internal_senders,syslog_sender_0) -> syslog_sender_0 -> Created sender: {"client_name": "collector-abcd", "url": "example_url", "object_id": "1234567890'¡"}
INFO InputProcess::MainThread -> [GC] global: 21.8% -> 21.9%, process: RSS(46.79MiB -> 46.80MiB), VMS(446.02MiB -> 446.02MiB)
INFO InputProcess::CollectorGCPPullerSetup(test-doc,gcp#123,custom_service#custom,all) -> File "/devo-collector/../credentials_file.json" has been created from base64 content
INFO OutputProcess::MainThread -> [GC] global: 21.8% -> 21.9%, process: RSS(47.06MiB -> 47.14MiB), VMS(742.04MiB -> 742.04MiB)
WARNING InputProcess::CollectorGCPPullerSetup(test-doc,gcp#123,custom_service#custom,all) -> Remote auto-setup: Disabled, due it is used a custom queue name from the configuration
INFO InputProcess::CollectorGCPPullerSetup(test-doc,gcp#123,custom_service#custom,all) -> Setup for module "CollectorGCPPuller" has been successfully executed

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.

A successful run has the following output messages for the event delivery module:

INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Starting a new pulling at "2022-07-15T14:55:32.282345+00:00"
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Requested 1000, received 3 messages
INFO OutputProcess::OutputStandardConsumer(standard_senders_consumer_0) -> Consumed messages: 1, total_bytes: 2372 (216.608593 seconds)
INFO OutputProcess::SyslogSender(standard_senders,syslog_sender_0) -> syslog_sender_0 -> Created sender: {"client_name": "collector-babc", "url": "example-url", "object_id": "1234567890"}
INFO OutputProcess::SyslogSender(standard_senders,syslog_sender_0) -> Consumed messages: 1 messages (216.614309 seconds) => 0 msg/sec
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Processed 3 messages, total processed: 3
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Requested 1000, received 1 messages
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Processed 1 messages, total processed: 4
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Received 2 response(s) with data, generated 4 message(s), avg time per request: 9462.763 ms
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Data collection completed. Elapsed time: 18.926 seconds. Waiting for 11.074 second(s) until the next one
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Sender: SyslogSender(standard_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(standard_senders,sidecar_0) -> Standard - Total number of messages sent: 4, messages sent since "2022-07-15 14:51:58.276560+00:00": 4 (elapsed 0.005 seconds)
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Number of available senders: 1, sender manager internal queue size: 0
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> enqueued_elapsed_times_in_seconds_stats: {}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Sender: SyslogSender(internal_senders,syslog_sender_0), status: {"internal_queue_size": 0, "is_connection_open": True}
INFO OutputProcess::SyslogSenderManagerMonitor(internal_senders,sidecar_0) -> Internal - Total number of messages sent: 2, messages sent since "2022-07-15 14:51:58.282239+00:00": 2 (elapsed 0.040 seconds)
INFO InputProcess::MainThread -> [GC] global: 8.9% -> 8.9%, process: RSS(60.92MiB -> 60.92MiB), VMS(890.61MiB -> 890.61MiB)
INFO OutputProcess::MainThread -> [GC] global: 8.9% -> 8.9%, process: RSS(47.44MiB -> 47.44MiB), VMS(742.29MiB -> 742.29MiB)
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Starting a new pulling at "2022-07-15T14:57:02.283510+00:00"
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Received 0 response(s) with data, generated 0 message(s), avg time per request: 10.001 ms
INFO InputProcess::CollectorGCPPuller(gcp,123,custom_service,custom,all) -> Data collection completed. Elapsed time: 10.002 seconds. Waiting for 19.998 second(s) until the next one

By default, these information traces will be displayed every 10 minutes.

Sender services

The Integrations Factory Collector SDK has three different sender services, depending on the type of event to deliver (internal, standard, and lookup). This collector uses the following sender services:

| Sender service | Description |
|---|---|
| internal_senders | In charge of delivering internal metrics to Devo, such as logging traces or metrics. |
| standard_senders | In charge of delivering pulled events to Devo. |

Sender statistics

Each service displays its own performance statistics, so you can check how many events have been delivered to Devo by type:

| Logging trace | Description |
|---|---|
| Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service. |
| sender manager internal queue size: 0 | Displays the items available in the internal sender queue. This value helps detect bottlenecks and the need to increase the data delivery performance to Devo, which can be achieved by increasing the number of concurrent senders. |
| Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds) | Displays the number of events sent. Following the given example: 44 events were sent to Devo since the collector started; the last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00; 21 events were sent to Devo between the last UTC checkpoint and now; and those 21 events required 0.007 seconds to be delivered. |

By default, these traces are shown every 10 minutes.

 Check memory usage

To check the memory usage of this collector, look for the following log records, which are displayed every 5 minutes by default, always after the memory-freeing process has run.

  • The used memory is displayed per running process; the sum of both values gives the total memory used by the collector.

  • The global pressure on the available memory is displayed in the global value.

  • All metrics (global, RSS, VMS) show the value before and after freeing memory (before -> after).

INFO InputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(34.50MiB -> 34.08MiB), VMS(410.52MiB -> 410.02MiB)
INFO OutputProcess::MainThread -> [GC] global: 20.4% -> 20.4%, process: RSS(28.41MiB -> 28.41MiB), VMS(705.28MiB -> 705.28MiB)

Differences between RSS and VMS memory usage:

  • RSS is the Resident Set Size: the actual physical memory the process is using.

  • VMS is the Virtual Memory Size: the virtual memory the process is using.
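For reference, the same two metrics can be read for any process with the third-party psutil package; this is only an illustration and may not match how the collector itself measures memory:

import psutil

info = psutil.Process().memory_info()
print(f"RSS: {info.rss / 1024**2:.2f} MiB")  # resident (physical) memory
print(f"VMS: {info.vms / 1024**2:.2f} MiB")  # virtual memory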

 Enable/disable the logging debug mode

Sometimes it is necessary to activate the debug mode of the collector's logging. This debug mode increases the verbosity of the log and allows you to print execution traces that are very helpful in resolving incidents or detecting bottlenecks in heavy download processes.

  • To enable this option, edit the configuration file, change the debug_status parameter from false to true, and restart the collector.

  • To disable this option, edit the configuration file, change the debug_status parameter from true to false, and restart the collector.
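For instance, the change is a single flag; where exactly the parameter lives in your configuration file depends on the chosen deployment mode, so the fragment below is only illustrative:

debug_status: true   # change back to false and restart the collector to disable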

For more information, visit the configuration and parameterization section corresponding to the chosen deployment mode.

Change log

Each release entry below shows the release version, the release date (where available), the release type, the details, and the upgrade recommendation.

v1.7.0

IMPROVEMENTS

Improvements

  • Added small changes to make the configuration compatible with versions older than 1.2.1

  • wheel upgraded from 0.42.0 to 0.43.0

  • google-cloud-logging upgraded from 3.6.0 to 3.10.0

  • google-cloud-pubsub upgraded from 2.18.4 to 2.21.4

  • google-cloud-monitoring upgraded from 2.15.1 to 2.21.0

  • pandas upgraded from 1.3.5 to 1.5.3

Recommended version

v1.6.0

IMPROVEMENTS

Improvements

  • Upgraded DCSDK from 1.9.2 to 1.11.1

  • Upgrade the Docker base image to 1.2.0

Upgrade

v1.5.0

IMPROVEMENTS, NEW FEATURES

Improvements

  • Upgraded DCSDK from 1.9.0 to 1.9.2

    • Store lookup instances into DevoSender to avoid creation of new instances for the same lookup

    • Ensure service_config is a dict into templates

    • Upgrade internal dependencies

Upgrade

v1.4.0

IMPROVEMENTS

Improvements

  • Updated DCSDK from 1.7.2 to 1.9.0

    • Changed log level to some messages from info to debug

    • Changed some wrong log messages

    • Upgraded some internal dependencies

    • Changed queue passed to setup instance constructor

    • Ability to validate collector setup and exit without pulling any data

    • Ability to store in the persistence the messages that couldn't be sent after the collector stopped

    • Ability to send messages from the persistence when the collector starts and before the puller begins working

    • Ensure special characters are properly sent to the platform

Upgrade

v1.3.0

IMPROVEMENTS, BUG FIXING

Improvements

  • Improved base64 generation.

  • Updated DCSDK from 1.6.3 to 1.7.2.

    • Added a lock to enhance sender object

    • Added new class attrs to the __setstate__ and __getstate__ queue methods

    • Fixed sending attribute value to the __setstate__ and __getstate__ queue methods

    • Added log traces when queues are full and have to wait

    • Added log traces of queues time waiting every minute in debug mode

    • Added method to calculate queue size in bytes

    • Block incoming events in queues when there is no space left

    • Send telemetry events to Devo platform

    • Upgraded internal Python dependency Redis to v4.5.4

    • Upgraded internal Python dependency DevoSDK to v5.1.3

    • Fixed obfuscation not working when messages are sent from templates

    • Obfuscation service can now be configured from user config and module definition

    • Obfuscation service can now obfuscate items inside arrays.

Bug fixing

  • Fixed a known issue on the DevoSender with the DCSDK update.

Upgrade

v1.2.2

Feb 27, 2023

-

-

-

v1.2.1

Nov 29, 2022

IMPROVEMENTS, BUG FIXING

Improvements

  • Devo Collector SDK upgraded from version 1.4.2 to version 1.4.4b.

    • Added some extra checks for supporting MacOS as development environment

    • The "template" supports the controlled stop functionality

    • Some log traces now are shown less frequently

    • The default value for the logging frequency for "main" processes has been changed (to 120 seconds)

    • Added log traces for knowing the execution environment status (debug mode)

    • Fixes in the current puller template version

    • The Docker container exits with the proper error code

Bug fixing

  • Configurable logging traces for undelivered messages in GCP were moved to a thread model to avoid a special case in which they were never triggered.
    pubsub_undelivered_messages_request_interval was renamed to pubsub_undelivered_messages_request_interval_in_seconds. New default value: every 600 seconds.

Recommended version

v1.1.4

IMPROVEMENT

Improvements

  • New tag cloud.gcp.unknown.none for all services.

  • When the collector processes a message that is not in JSON format, it sends it to the cloud.gcp.unknown.none table (only if the custom tag is not used).

  • The behaviour of custom tags has been changed: If a custom tag is used the message will always go to the custom tag even if it is not in JSON format.

Upgrade

v1.1.3

IMPROVEMENT

Improvements

  • Validated base64 variables from config.yaml. A new function was created to check if the base64 token in the configuration file has a valid format.

  • Increased the queue consumption throughput

Upgrade

v1.1.2

IMPROVEMENTS, VULNERABILITIES

Improvements

  • The underlying Devo Collector SDK has been upgraded to v1.1.4 to improve efficiency, increase resilience, and mitigate vulnerabilities.

  • The hard-reset procedure when losing connection with Devo has been improved.

Vulnerabilities mitigated

  • CVE-2022-1664

  • CVE-2021-33574

  • CVE-2022-23218

  • CVE-2022-23219

  • CVE-2019-8457

  • CVE-2022-1586

  • CVE-2022-1292

  • CVE-2022-2068

  • CVE-2022-1304

  • CVE-2022-1271

  • CVE-2021-3999

  • CVE-2021-33560

  • CVE-2022-29460

  • CVE-2022-29458

  • CVE-2022-0778

  • CVE-2022-2097

  • CVE-2020-16156

  • CVE-2018-25032

Upgrade

v1.1.1

IMPROVEMENT

Improvements

  • The underlying Devo Collector SDK has been upgraded to v1.1.3 to improve efficiency and performance.

  • When the collector loses the connection with Devo, it executes a hard-restart protocol to force reconnection with a fresh configuration.

Upgrade

v1.1.0

IMPROVEMENT

Improvements

  • The following properties have been renamed to be more user-readable:

    • credentials_file to filename

    • credentials_file_content_base64 to file_content_base64

  • Added new optional categorization mode which categorizes the events based on their fields to create the Devo Tag.

  • The underlying Devo Collector SDK has been upgraded to v1.1.0 to improve efficiency.

Upgrade
