Google Workspace Reports collector

Configuration requirements

To run this collector, there are some configurations detailed below that you need to consider.

Configuration

Requirements

User interface

  • Get a graphical user interface.

Python

  • Get Python 3.6 or greater.

Google Account

  • Collect data with administrator privileges.

Credentials

  • Get OAuth credentials to authenticate the collector.

More information

Refer to the Vendor setup section to learn more about these configurations.

Overview

Google Workspace is Google’s suite of products that includes email, calendar, drive, meet, and other solutions. This collector integrates Google Workspace with the Devo Platform, making it easy to query and analyze the relevant Workspace data, view it in the pre-configured Activeboards, or customize them so that Enterprise IT and Cybersecurity teams can make impactful data-driven decisions.

The Google Workspace Reports API provides insights on content management and Google activity, lets you audit administrator actions, and generates customer and user usage reports.

Devo collector features

Feature

Details

Allow parallel downloading (multipod)

  • Not allowed

Running environments

  • Collector server

  • On-premise

Populated Devo events

  • Table

Flattening preprocessing

  • Yes

Data sources

Data Source

Description

API Endpoint

Collector service name

Devo Table

Available from release

Access Transparency

Activity events recorded when a Google Workspace resource is accessed by Google.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/access_transparency

access_transparency

cloud.gsuite.reports.access_transparency

1.0.10

Admin

Report returns information on the Admin console activities of all of your account's administrators.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/admin

admin

cloud.gsuite.reports.admin

1.0.10

Calendar

Report returns information about how your account's users manage and modify their Google Calendar events.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/calendar

calendar

cloud.gsuite.reports.calendar

1.0.10

Google Chat

The Chat activity report returns information about how your account's users use and manage Spaces. Each report uses the basic endpoint request with report-specific parameters such as uploads or message operations.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/chat

chat

cloud.gsuite.reports.chat

1.0.10

Google Drive

Report returns information about how your account's users manage, modify, and share their Google Drive documents.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/drive

drive

cloud.gsuite.reports.drive

1.0.10

Google Cloud Platform

Activity events for interaction with the Cloud OS Login API.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/gcp

gcp

cloud.gsuite.reports.gcp

1.0.10

Groups

Activity report returns information about how your account's users manage and modify their groups.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/groups

groups

cloud.gsuite.reports.groups

1.0.10

Google+

Activity report returns information about the Google+ activity of all of your account's users.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/gplus

gplus

cloud.gsuite.reports.gplus

1.0.10

Enterprise Groups

Audit activity events for actions performed by a moderator.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/groups_enterprise

groups_enterprise

cloud.gsuite.reports.groups_enterprise

1.0.10

Jamboard

Activity of interactive whiteboard.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/jamboard

jamboard

cloud.gsuite.reports.jamboard

1.0.10

Meet

Hangouts Meet audit activity events describing a single Hangouts Meet endpoint.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/meet

meet

cloud.gsuite.reports.meet

1.0.10

Logins

Activity report returns information about the login activity of all of your account's users.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/login

login

cloud.gsuite.reports.login

1.0.10

Mobile Audit

Activity report returns information on all activities on mobile devices with a Work account that are managed by Google Mobile Management.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/mobile

mobile

cloud.gsuite.reports.mobile

1.0.10

SAML

Audit activity events from the login event type.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/saml

saml

cloud.gsuite.reports.saml

1.0.10

Authorization Tokens

Activity report returns information about third-party websites and applications your users have granted access to.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/token

token

cloud.gsuite.reports.token

1.0.10

Rules

Activity report returns information about how the rules (that have been set up in Admin console) are performing.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/rules

rules

cloud.gsuite.reports.rules

1.0.10

Users Account

User Accounts Audit activity events.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/user_accounts

user_accounts

cloud.gsuite.reports.user_account

1.0.10

Data Studio

The Data Studio activity report returns information about the Data Studio activity of all of your account's users. Each report uses the basic endpoint request and provides report-specific parameters such as ACL changes and report creation or deletion.

admin.googleapis.com/admin/reports/v1/activity/users/all/applications/data_studio

data_studio

cloud.gsuite.reports.data_studio

1.4.0

For more information on how the events are parsed, visit our page.

Flattening preprocessing

Data Source

Collector Service

Optional

Flattening Details

all

all

No

When an events array is received as part of an event detail, flattening is applied as shown:

Received data (an object):

{ ... 'events' : [ {'type': 'ALERT_CENTER', 'name': 'ALERT_CENTER_VIEW', 'parameters': [...] }, {'type': 'ALERT_CENTER', 'name': 'EXAMPLE_NAME', 'parameters': [...] } ] }

Flattened message 1:

{ ... 'event_type': 'ALERT_CENTER', 'event_name': 'ALERT_CENTER_VIEW', 'event_parameters': [...] }

Flattened message 2:

{ ... 'event_type': 'ALERT_CENTER', 'event_name': 'EXAMPLE_NAME', 'event_parameters': [...] }

For each event a flattened message is generated.
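The following is a minimal sketch (not the collector's actual implementation) of the flattening logic described above: every entry of the events array becomes its own flattened message that keeps the rest of the original fields.

# Sketch of the flattening preprocessing described above (illustrative only).
def flatten_report_entry(entry):
    """Return one flattened message per item in the entry's 'events' array."""
    base = {key: value for key, value in entry.items() if key != "events"}
    flattened_messages = []
    for event in entry.get("events", []):
        message = dict(base)
        message["event_type"] = event.get("type")
        message["event_name"] = event.get("name")
        message["event_parameters"] = event.get("parameters", [])
        flattened_messages.append(message)
    return flattened_messages

# Using the example payload shown above, two flattened messages are produced:
entry = {
    "events": [
        {"type": "ALERT_CENTER", "name": "ALERT_CENTER_VIEW", "parameters": []},
        {"type": "ALERT_CENTER", "name": "EXAMPLE_NAME", "parameters": []},
    ]
}
for message in flatten_report_entry(entry):
    print(message)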

Vendor setup

There are some requirements to enable this collector:

  1. A graphical user interface (the script opens a browser to complete authorization).

  2. Python 3.6 or greater.

  3. The pip package management tool.

  4. A Google account in the domain you want to collect data from with administrator privileges, or at least with enough permissions over the following scopes:

    https://www.googleapis.com/auth/admin.reports.audit.readonly

    https://www.googleapis.com/auth/admin.reports.usage.readonly

In order to retrieve the data, we need to create OAuth credentials to authenticate the collector.

Action

Steps

Creating a Project.

This step is optional; if you already have a project, you do not need to create another.

  1. Login to Google APIs console.

  2. In the search bar, search Create a Project.

  3. Click on Create a Project.

  4. Fill in the required fields.

  5. Click on Create.

Enabling Admin SDK API.

  1. Login to Google APIs console.

  2. In the search bar, search Admin SDK API.

  3. Click on Admin SDK API.

  4. Activate the API by clicking Enable.

Activating the OAuth consent screen.

  1. In the search bar, search Credentials.

  2. Click on Credentials (APIs & Services).

  3. Click on the OAuth consent screen tab on the left-side menu.

  4. In User Type select Internal.

  5. Click on Create.

  6. Fill in the required fields and then click Save and continue.

  7. In the next section (Scopes) click on Save and continue.

Creating Credentials.

  1. In the search bar open Credentials.

  2. Click on Credentials (APIs & Services).

  3. Click on the Credentials tab on the left side menu.

  4. Click the + Create credentials button.

  5. Select OAuth client ID.

  6. In Application type select Desktop app.

  7. Enter a name in the Name field, for example: Gsuite Reports Collector.

  8. Click on Create. A pop-up window will appear indicating that the OAuth client is created.

  9. Click on Download JSON.

  10. Click on Ok.

  11. Rename the file to credentials.json.

  12. Copy the file credentials.json.

  13. Save the file credentials.json to <any_directory>/devo-collectors/gsuite-google-workspace-reports/credentials/.

Authorizing the scopes and generating the token.json

It is necessary to authorize the scopes and generate the token.json file. This step is completed using a Google-provided script, which can be executed from any computer.

  1. Copy the Google-provided quickstart script (a reference sketch of such a script is shown after these steps).

  2. Save the script to <any_directory>/devo-collectors/gsuite-google-workspace-reports/credentials/.

  3. Rename the script to quickstart.py.

  4. Copy the credentials.json file downloaded in the previous step to the same directory.

  5. Install the Google Auth API library in the <any_directory>/devo-collectors/gsuite-google-workspace-reports/credentials/ directory (It is recommended to use a virtual environment):

    Mac/Linux

    Windows

  6. Run the command below in the <any_directory>/devo-collectors/gsuite-google-workspace-reports/credentials/ directory to create the “token.json” file. A Google consent window will open asking you to accept the permission scopes; follow the instructions in the browser and allow the application.

  7. The script will output a line starting with Base64 encoded token.json:. Copy the base64 value that follows, as it will be required for the collector configuration.
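The exact Google-provided quickstart script may differ; the following is a minimal sketch of what quickstart.py needs to do, assuming the Google client libraries were installed in step 5 (typically with pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib) and that credentials.json is in the same directory. The scopes are the two read-only Reports scopes listed in the requirements above.

# quickstart.py - illustrative sketch, not the exact Google-provided script.
import base64

from google_auth_oauthlib.flow import InstalledAppFlow

# Read-only Reports API scopes required by the collector.
SCOPES = [
    "https://www.googleapis.com/auth/admin.reports.audit.readonly",
    "https://www.googleapis.com/auth/admin.reports.usage.readonly",
]


def main():
    # Opens a browser so an administrator can grant the requested scopes.
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    credentials = flow.run_local_server(port=0)

    # Save the authorized token and print its base64 value, which is the
    # value used as token_pickle_content_base64_value in the collector configuration.
    token_json = credentials.to_json()
    with open("token.json", "w") as token_file:
        token_file.write(token_json)
    encoded = base64.b64encode(token_json.encode("utf-8")).decode("utf-8")
    print(f"Base64 encoded token.json: {encoded}")


if __name__ == "__main__":
    main()

To run a script like this, execute it from the credentials directory (for example, python3 quickstart.py) and complete the consent flow in the browser as described in step 6.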

Minimum configuration required for basic pulling

Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.

Setting

Details

filename_value

This parameter is the name that you want to give to the token generated by the Collector. For example: token.pickle

token_pickle_content_base64_value

This parameter is the credentials in base64 format. To learn how to obtain this value, review the Vendor setup section.
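If you already have the token file but not its base64 value, you can encode it yourself. A minimal sketch, assuming the token was saved as token.json in the credentials directory:

# Sketch: base64-encode an existing token file to obtain the value for
# token_pickle_content_base64_value (the filename is an assumption).
import base64

with open("token.json", "rb") as token_file:
    print(base64.b64encode(token_file.read()).decode("utf-8"))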

Accepted authentication methods

Depending on how you obtained your credentials, you will have to either fill in or delete the following properties in the JSON credentials configuration block.

Authentication Method

Token pickle filename

Token pickle content base64

OAuth

REQUIRED

REQUIRED

Run the collector

Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).

Collector services detail

This section is intended to explain how to proceed with specific actions for services.

All the Gsuite Reports services behave in the same way; the only difference between them is the type of report (application_name).

The reports are ingested in the Devo tables with the format cloud.gsuite.reports.<application_name>, where application_name is the application from which the report is generated. The application names are listed below:

Application name

Devo Table

access_transparency

cloud.gsuite.reports.access_transparency

admin

cloud.gsuite.reports.admin

calendar

cloud.gsuite.reports.calendar

chat

cloud.gsuite.reports.chat

drive

cloud.gsuite.reports.drive

gcp

cloud.gsuite.reports.gcp

groups

cloud.gsuite.reports.groups

gplus

cloud.gsuite.reports.gplus

groups_enterprise

cloud.gsuite.reports.groups_enterprise

jamboard

cloud.gsuite.reports.jamboard

meet

cloud.gsuite.reports.meet

login

cloud.gsuite.reports.login

mobile

cloud.gsuite.reports.mobile

saml

cloud.gsuite.reports.saml

token

cloud.gsuite.reports.token

rules

cloud.gsuite.reports.rules

user_accounts

cloud.gsuite.reports.users_account

data_studio

cloud.gsuite.reports.data_studio

Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector’s logs console.

This service has the following components:

Component

Description

Setup

The setup module is in charge of authenticating the service and managing the token expiration when needed.

Puller

The puller module is in charge of pulling the data in an organized way and delivering the events via the SDK.

Setup output

A successful run has the following output messages for the setup module:

Puller output

A successful initial run has the following output messages for the puller module:

After a successful collector’s execution (that is, no error logs found), you will see the following log message:

This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:

  1. Edit the configuration file.

  2. Change the value of the start_time parameter to a different one.

  3. Save the changes.

  4. Restart the collector.

The collector will detect this change and will restart the persistence using the parameters of the configuration file, or the default configuration if none has been provided.
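The start_time value must match the timestamp format the collector validates against (see the ServiceConfigurationError entries in the troubleshooting table below). A minimal sketch for generating a valid value:

# Sketch: produce a start_time value in the expected format
# (ISO 8601 UTC with millisecond precision and a trailing "Z").
from datetime import datetime, timedelta, timezone

# Example offset: re-ingest the last 7 days (the offset is illustrative).
start = datetime.now(timezone.utc) - timedelta(days=7)
start_time = start.strftime("%Y-%m-%dT%H:%M:%S.") + f"{start.microsecond // 1000:03d}Z"
print(start_time)  # e.g. 2024-09-01T10:15:30.123Z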

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

ErrorType

Error Id

Error Message

Cause

Solution

GSuiteReportsPullerCredentialsException

2

File <filename> does not exist. Please, learn how to generate a token pickle on: https://docs.devo.com/confluence/ndt/v7.9.0/sending-data-to-devo/collectors/g-suite-collectors/g-suite-reports-collector

This error is raised when token.pickle does not exist.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

3

Access token object does not exists

This error is raised when token.pickle does not exist.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

4

The access token is not valid but not expired

This error is raised when the token is invalid.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

5

Property "refresh_token" does not exists so it is not possible to refresh the access token

This error is raised when the token cannot be refreshed. The reason may be that it has been deleted.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

6

<error_message>

This error is raised when an HTTP error appears during setup.

The solution depends on the type of error. Contact the support team.

21

<input_config_key_path> property must be a dictionary

This error is raised when the required property input_config_key_path is not found in the config file.

Add input_config_key_path to config file, for example: gsuit_alerts:

22

<input_config_credentials_key_path> mandatory property is missing or empty

This error is raised when the required property credentials is not found in the config file.

Add credentials dictionary in config.

23

<input_config_credentials_key_path> property must be a dictionary

This error is raised when credentials is defined in the config file but the format is not a dictionary.

Edit the value of credentials in the config file so that it is a valid dictionary.

24

<input_config_credentials_key_path>.token_pickle_filename" mandatory property is missing or empty

This error is raised when the required property token_pickle_filename is not found inside the credentials dictionary of the config file.

Add the token_pickle_filename property to the credentials dictionary in the config file.

25

<input_config_credentials_key_path>.token_pickle_filename" property must be a string

This error is raised when token_pickle_filename is defined in the config file but the format is not a string.

Edit the value of token_pickle_filename in the config file so that it is a valid string.

26

<input_config_credentials_key_path>.token_pickle_content_base64" property must be a string

This error is raised when token_pickle_content_base64 is defined in the config file but the format is not a string.

Edit the value of token_pickle_content_base64 in the config file so that it is a valid string.

27

<token_pickle_content_base64>.token_pickle_content_base64"must be in a valid base64 format

This error is raised when token_pickle_content_base64 is defined in the config file but is not a valid base64 string.

Edit the value of token_pickle_content_base64 in the config file so that it is a valid base64 string.

GSuiteReportsSetupException

1

Error loading token pickle file: <exception_message>

This error is raised when the token is invalid.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

GSuiteReportsPullerRetrieveException

50

Unexpected error

This error is raised when the persistence could not be loaded.

Contact the internal team or restart persistence by changing the start_time parameter in the configuration file.

51

Unexpected status persistence date should always exists at this point

This error is raised when the persistence could not be loaded.

Contact the internal team or restart persistence by changing the start_time parameter in the configuration file.

52

Unexpected status "event_last_timestamp" key should should always exists at this point

This error is raised when event_last_timestamp is not in the persistence.

Contact the internal team or restart persistence by changing the start_time parameter in the configuration file.

53

Error processing messages: <error_message>

This error is raised when messages could not be processed for some unknown reason.

This is an internal issue. Contact the support team.

54

timestamp must be not empty

This error is raised when timestamp is empty.

This is an internal issue. Contact the support team.

55

"timestamp" must be follow the regex: <GsuiteReportsData.DATE_FORMAT_REGEX>

This error is raised when the timestamp variable does not match the regex: \d{4}-(?:0\d|1[0-2])-(?:[0-2]\d|3[0-1])T(?:2[0-3]|[01]\d):[0-5]\d:[0-5]\d\.\d{1,6}Z

This is an internal issue. Contact the support team.

56

"timestamp" must be string or datetime, no other type is supported

This error is raised when the timestamp variable is not of type string or datetime.

This is an internal issue. Contact the support team.

10

Uncontrolled code flow not expected to have an empty value of "initial_start_time_to_save"

This error is raised when initial_start_time_to_save is empty.

This is an internal issue. Contact the support team.

GSuiteReportsPullerConnectionLostException

0

Operation timed out: <error message>

This error is raised when the maximum time to wait for the connection has been exceeded.

Check that the connection is working properly.

1

HTTP/1.1 503 Service Unavailable at moment - Retrying reconnection: <exception message>

This error is raised when the service is not available.

This is an internal issue. Contact the support team.

2

New connection failed. Retrying new connection: <exception message>

This error is raised when the connection fails.

Wait while the collector tries to reconnect automatically, until the maximum number of retries is reached.

3

Retries limit reached: <exception message>

The connection failed and the maximum number of unsuccessful retries has been reached.

Check that the connection is working properly.

4

DefaultCredentialsError: <error message>

This error is raised when credentials cannot be automatically determined.

This is an internal issue. Contact the support team.

7

Unable to refresh token or client Auth was deleted. Detail: <error message>

This error is raised when the token cannot be refreshed. The reason may be that it has been deleted.

Regenerate the token.json file and save it in the devo-collector-gsuite-google-workspace-reports/credentials/ directory.

To learn how to regenerate the token, consult the Vendor setup section.

8

Operation timed out

This error is raised when the maximum time to wait for the connection has been exceeded.

Check that the connection is working properly.

9

HTTP/1.1 503 Service Unavailable at moment - Retrying reconnection

This error is raised when the service is not available.

This is an internal issue. Contact the support team.

10

DefaultCredentials: <error message>

This error is raised when credentials cannot be automatically determined.

This is an internal issue. Contact the support team.

11

New connection failed. Retrying new connection

This error is raised when the connection fails.

Wait while the collector tries to reconnect automatically, until the maximum number of retries is reached.

12

Retries limit reached: <exception message>

The connection failed and the maximum number of unsuccessful retries has been reached.

Check that the connection is working properly.

ModuleDefinitionError

1

<module_properties_key_path> mandatory property is missing or empty

This error is raised when module_properties property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

2

<module_properties_key_path> property must be a dictionary

This error is raised when module_properties is defined in the collector_definitions.yaml file but the format is not dict.

This is an internal issue. Contact the support team.

3

<module_properties_key_path>.alert_type mandatory property is missing or empty

This error is raised when application_name property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

4

<module_properties_key_path>.application_name" property must be a string

This error is raised when application_name is defined in the collector_definitions.yaml file but the format is not str.

This is an internal issue. Contact the support team.

5

<module_properties_key_path>.tag_base" mandatory property is missing or empty'

This error is raised when tag_base property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

6

<module_properties_key_path>.tag_base property must be a string

This error is raised when tag_base is defined in the collector_definitions.yaml file but the format is not str.

This is an internal issue. Contact the support team.

7

<module_properties_key_path>.start_time_regex" mandatory property is missing or empty

This error is raised when start_time_regex property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

8

<module_properties_key_path>.start_time_regex" property must be a string

This error is raised when start_time_regex is defined in the collector_definitions.yaml file but the format is not str.

This is an internal issue. Contact the support team.

9

<module_properties_key_path>.start_time_regex" property is not a valid regular expression

This error is raised when start_time_regex is defined in the collector_definitions.yaml file but is not a valid regular expression.

This is an internal issue. Contact the support team.

10

<module_properties_key_path>.max_request_period_in_seconds" mandatory property is missing or empty

This error is raised when max_request_period_in_seconds property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

11

<module_properties_key_path>.max_request_period_in_seconds" property must be a string

This error is raised when max_request_period_in_seconds is defined in the collector_definitions.yaml file but the format is not str.

This is an internal issue. Contact the support team.

12

<module_properties_key_path>.max_request_period_in_seconds" property must be greater or equal to 60

This error is raised when max_request_period_in_seconds is defined in the collector_definitions.yaml file but the value is not greater or equal to 60.

This is an internal issue. Contact the support team.

13

<module_properties_key_path>.max_lag_time_in_minutes" mandatory property is missing or empty

This error is raised when max_lag_time_in_minutes property is not found in collector_definitions.yaml

This is an internal issue. Contact the support team.

14

<module_properties_key_path>.max_lag_time_in_minutes" property must be an integer

This error is raised when max_lag_time_in_minutes is defined in the collector_definitions.yaml file but the format is not integer.

This is an internal issue. Contact the support team.

15

<module_properties_key_path>.max_lag_time_in_minutes" property must be between 0 and 72

This error is raised when max_lag_time_in_minutes is defined in the collector_definitions.yaml file but the value is not in the range [0-72]

This is an internal issue. Contact the support team.

36

<service_config_key_path>.max_request_period_in_seconds" property must be a string

This error is raised when the optional value max_request_period_in_seconds is defined in the config file but is not a string.

Edit the value of max_request_period_in_seconds in the configuration file so that it is a valid string.

37

<service_config_key_path>.max_request_period_in_seconds" property must be between 60 and 3600 seconds

This error is raised when the optional value max_request_period_in_seconds is defined in the config file but is not in the range [60-3600].

Edit the value of max_request_period_in_seconds in the configuration file so that it is between 60 and 3600 seconds.

38

<service_config_key_path>.max_lag_time_in_minutes" property must be an integer

This error is raised when the optional value max_lag_time_in_minutes is defined in the config file but is not an integer.

Edit the value of max_lag_time_in_minutes in the configuration file so that it is a valid integer.

39

<max_lag_time_in_minutes>.max_lag_time_in_minutes" property must be between 0 and 72

This error is raised when the optional value max_lag_time_in_minutes is defined in the config file but is not in the range [0-72].

Edit the value of max_lag_time_in_minutes in the configuration file so that it is between 0 and 72.

ServiceConfigurationError

31

<service_config_key_path> mandatory property is missing or empty

This error is raised when the required property input_config_key_path is not found in the config file.

Add input_config_key_path to config file, for example: gsuit_alerts:

32

<service_config_key_path> property must be a dictionary

This error is raised when service_config_key_path is defined in the config file but the format is not a dictionary.

Edit the value of service_config_key_path in the config file so that it is a valid dictionary.

33

<service_config_key_path>.start_time property must be a string

This error is raised when start_time is defined in the config file but the format is not a string.

Edit the value of start_time in the config file so that it is a valid string.

34

<service_config_key_path>.start_time property value should fulfil the regex: <start_time_regex>

This error is raised when start_time is defined in the config file but it does not comply with the following regex: \d{4}-(?:0\d|1[0-2])-(?:[0-2]\d|3[0-1])T(?:2[0-3]|[01]\d):[0-5]\d:[0-5]\d\.\d{1,3}Z.

Edit the value of start_time in the config file so that it complies with the regex: \d{4}-(?:0\d|1[0-2])-(?:[0-2]\d|3[0-1])T(?:2[0-3]|[01]\d):[0-5]\d:[0-5]\d\.\d{1,3}Z.

35

<service_config_key_path>.tag property must be a string

This error is raised when tag is defined in the config file but the format is not a string.

Edit the value of tag in the config file so that it is a valid string.

Collector operations

This section is intended to explain how to proceed with specific operations of this collector.

Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services and validating the given configuration.

A successful run has the following output messages for the initializer module:

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.

A successful run has the following output messages for the event delivery module:

Sender services

The Integrations Factory Collector SDK has three different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:

Sender services

Description

internal_senders

In charge of delivering internal metrics to Devo such as logging traces or metrics.

standard_senders

In charge of delivering pulled events to Devo.

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:

Logging trace

Description

Number of available senders: 1

Displays the number of concurrent senders available for the given Sender Service.

sender manager internal queue size: 0

Displays the items available in the internal sender queue.

Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of events sent since the collector started and since the last checkpoint. Following the given example, the following conclusions can be drawn:

  • 44 events were sent to Devo since the collector started.

  • The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.

  • 21 events were sent to Devo between the last UTC checkpoint and now.

  • Those 21 events required 0.007 seconds to be delivered.

Change log

Release

Released on

Release type

Details

Recommendations

v1.10.0

Sep 16, 2024

BUG FIX

  • Updated DCSDK from 1.10.2 to 1.12.4

  • Upgraded Base Docker Image to 1.3.0

Recommended version

v1.9.0

Jan 19, 2024

IMPROVEMENT

  • Upgraded DCSDK from 1.9.0 to 1.10.2

    • Changed log level to some messages from info to debug

    • Changed some wrong log messages

    • Upgraded some internal dependencies

    • Changed queue passed to setup instance constructor

    • Ability to validate collector setup and exit without pulling any data

    • Ability to store in the persistence the messages that couldn't be sent after the collector stopped

    • Ability to send messages from the persistence when the collector starts and before the puller begins working

    • Ensure special characters are properly sent to the platform

  • Fixed the conditions to refresh access token.

Update

v1.8.0

Aug 9, 2023

IMPROVEMENT

Improvements:

  • Upgraded DCSDK from 1.1.4 to 1.9.0

    • New "templates" functionality

    • Functionality for detecting some system signals for starting the controlled stopping

    • Input objects send the internal messages to the devo.collectors.out table again

    • Upgraded DevoSDK to version 3.6.4 to fix a bug related to a connection loss with Devo

    • Refactored source code structure

    • Changed way of executing the controlled stopping

    • Minimized probabilities of suffering a DevoSDK bug related to "sender" to be null

    • Ability to validate collector setup and exit without pulling any data

    • Ability to store in the persistence the messages that couldn't be sent after the collector stopped

    • Ability to send messages from the persistence when the collector starts and before the puller begins working

    • Ensure special characters are properly sent to the platform

    • Added a lock to enhance sender object

    • Added new class attrs to the __setstate__ and __getstate__ queue methods

    • Fix sending attribute value to the __setstate__ and __getstate__ queue methods

    • Added log traces when queues are full and have to wait

    • Added log traces of queues time waiting every minute in debug mode

    • Added method to calculate queue size in bytes

    • Block incoming events in queues when there is no space left

    • Send telemetry events to Devo platform

    • Upgraded internal Python dependency Redis to v4.5.4

    • Upgraded internal Python dependency DevoSDK to v5.1.3

    • Fixed obfuscation not working when messages are sent from templates

    • New method to figure out if a puller thread is stopping

    • Upgraded internal Python dependency DevoSDK to v5.0.6

    • Improved logging on messages/bytes sent to Devo platform

    • Fixed wrong bytes size calculation for queues

    • New functionality to count bytes sent to Devo Platform (shown in console log)

    • Upgraded internal Python dependency DevoSDK to v5.0.4

    • Fixed bug in persistence management process, related to persistence reset

    • Aligned source code typing to be aligned with Python 3.9.x

    • Inject environment property from user config

    • Obfuscation service can now be configured from user config and module definition

    • Obfuscation service can now obfuscate items inside arrays

    • Ensure special characters are properly sent to the platform

    • Changed log level to some messages from info to debug

    • Changed some wrong log messages

    • Upgraded some internal dependencies

    • Changed queue passed to setup instance constructor

-

v1.7.0

Nov 8, 2022

IMPROVEMENT

Improvements:

  • Updated max lag time for google reports to minutes. This provides more flexibility for available data recovery.

Update

v1.6.0

Oct 10, 2022

IMPROVEMENT

Improvements:

  • This new feature adds a custom time delay based on the Workspace Reports maximum delay for each data source. Google Workspace Reporting API has a "lag time" until all events are fully available. This maximum lag can be modified through the user configuration. For more information on lag times visit Data retention and lag times - Google Workspace Admin Help

Update

v1.5.0

Sep 8, 2022

IMPROVEMENT

Improvements:

  • The Google Workspace Collector has been divided into two: Google Workspace Alerts and Google Workspace Reports to make deployments easier.

  • The collector has been optimized to reduce the number of API requests based on the API max lag times (Data retention and lag times - Google Workspace Admin Help) for each source, and it does not repeat empty time windows that are older than the expected maximum lag. This value can be overridden from the user config through the max_lag_time_in_hours parameter.

  • Added base64 validation for the credential payload to help detect invalid user settings early.

Update

v1.4.2

Aug 12, 2023

BUG FIX

Bugs fixes:

  • Fixed a bug that prevented Syslog output from being enabled.

Update

v1.4.1

Aug 12, 2022

IMPROVEMENT

Improvements:

  • When allocating memory to buffer events before sorting them, the RAM requirements may hit available environment limits. The puller has been enhanced with a new compression logic that compresses messages with the DEFLATE algorithm (zlib) with a compression ratio of approximately 0.98, thus reducing their spatial complexity and maintaining a linear O(n) time complexity, taking approximately 270 ms for 12K messages for both compression and decompression.

  • Upgraded underlay Devo Collector SDK from v1.1.4 to v1.4.1.

  • Resilience has been improved with a new feature that restarts the collector when the Devo connection is lost and cannot be recovered.

  • When an exception is raised by the Collector Setup, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum retries are applied.

  • When an exception is raised by the Collector Pull method, the collector retries after 5 seconds. For consecutive exceptions, the waiting time is multiplied by 5 until it reaches 1800 seconds, which is the maximum waiting time allowed. No maximum retries are applied.

  • When an exception is raised by the Collector pre-pull method, the collector retries after 30 seconds. No maximum retries are applied.

  • Updated the underlying DevoSDK package to v3.6.4 and dependencies, this upgrade increases the resilience of the collector when the connection with Devo or the Syslog server is lost. The collector is able to reconnect in some scenarios without running the self-kill feature.

  • Support for stopping the collector when a GRACEFULL_SHUTDOWN system signal is received.

  • Re-enabled the logging to devo.collector.out for Input threads.

  • Improved self-kill functionality behavior.

  • Added more details in log traces.

  • Added log traces for knowing system memory usage.

Update

v1.2.0

Apr 29, 2022

NEW FEATURE
IMPROVEMENT

New features:

  • Added to the Alerts puller the ability to restart the persistence when the config start_time is updated at service level.

Improvements:

  • The performance has been improved after switching the internal delivery method. The events are delivered in batches instead of one by one.

Update