If you are migrating to version 2.X from version 1.X please read this section.
Overview
The Microsoft Graph Collector provides the ability to collect data and intelligence from services such as Microsoft 365, Windows, and Enterprise Mobility and Security. Currently, this data collector can ingest only security alerts, secure scores, provisioning logs, audit logs, and sign-ins retrieved from Microsoft products. This empowers customers to streamline security operations and better defend against the increasing cyber threats faced in their Azure AD and Microsoft 365 environments and beyond.
Devo’s Microsoft Graph collector also enables customers to correlate events and context to improve threat protection and response, and includes key entities described in the next sections.
Configuration requirements
To run this collector, there are some configurations detailed below that you need to consider.
Configuration
Details
Azure account
Azure account with admin level permissions and Azure AD tenant.
Credentials
The credentials configuration block has been filled correctly.
More information
Refer to the Vendor setup section to know more about these configurations.
Devo collector features
Feature
Details
Allow parallel downloading (multipod)
not allowed
Running environments
collector server
on-premise
Populated Devo events
table
Flattening preprocessing
no
Allowed source events obfuscation
yes
Data sources
Data source
Description
API endpoint
Collector service name
Devo table
Audit logs - provisioning
Represents an action performed by the Microsoft Entra provisioning service and its associated properties.
v1.0/auditLogs/provisioning
provisioning_audits
cloud.azure.ad.provisioning.*.msgraph
Audit logs - directory
Represents the directory audit items and its collection.
v1.0/auditLogs/directoryaudits
directory_audits
cloud.azure.ad.audit.*.msgraph
Audit logs - sign-ins
Details user and application sign-in activity for a tenant (directory). You must have a Microsoft Entra ID P1 or P2 license to download sign-in logs using the Microsoft Graph API.
v1.0/auditLogs/signIns
signIns
cloud.azure.ad.signin.*.msgraph
Audit logs - sign-ins (v2)
Details user and application sign-in activity for a tenant (directory), including non-interactive, managed identity, and service principal sign-ins. You must have a Microsoft Entra ID P1 or P2 license to download sign-in logs using the Microsoft Graph API.
beta/auditLogs/signIns
signIns_v2
cloud.azure.ad.*_signin (one table per sign-in event type; see Collector services detail)
Alerts
This resource corresponds to the first generation of alerts in the Microsoft Graph security API, representing potential security issues within a customer's tenant that Microsoft or a partner security solution has identified.
This type of alert federates calls to the supported Azure and Microsoft 365 Defender security providers listed in Use the Microsoft Graph security API. It aggregates common alert data among the different domains to allow applications to unify and streamline management of security issues across all integrated solutions.
v1.0/security/alerts
alerts
cloud.azure.ad.alerts.*.msgraph
cloud.office365.cloud_apps.alerts.*.msgraph
cloud.office365.endpoint.alerts.*.msgraph
cloud.office365.security.alerts.*.msgraph
cloud.azure.sentinel.alerts.*.msgraph
cloud.office365.identity.alerts.*.msgraph
cloud.azure.securitycenter.alerts.*.msgraph
Alerts (v2)
This resource corresponds to the latest generation of alerts in the Microsoft Graph security API, representing potential security issues within a customer's tenant that Microsoft 365 Defender, or a security provider integrated with Microsoft 365 Defender, has identified.
When detecting a threat, a security provider creates an alert in the system. Microsoft 365 Defender pulls this alert data from the security provider, and consumes the alert data to return valuable clues in an alert resource about any related attack, impacted assets, and associated evidence. It automatically correlates other alerts with the same attack techniques or the same attacker into an incident to provide a broader context of an attack. Aggregating alerts in this manner makes it easy for analysts to collectively investigate and respond to threats.
v1.0/security/alerts_v2
alerts_v2
cloud.msgraph.security.alerts_v2.*
Secure Scores
Represents a tenant's secure score per day of scoring data, at the tenant and control level. By default, 90 days of data is held. This data is sorted by createdDateTime, from latest to earliest. This will allow you to page responses by using $top=n, where n = the number of days of data that you want to retrieve.
v1.0/security/secureScores
secure_scores
cloud.office365.security.scores.*.msgraph
Secure Scores Control Profiles
Represents a tenant's secure score per control data. By default, this resource returns all controls for a tenant and can explicitly pull individual controls.
v1.0/security/secureScoreControlProfiles/
secure_score_control_profiles
cloud.office365.security.scorecontrol.*.msgraph
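The Secure Scores entry above notes that responses can be paged with $top=n, where n is the number of days of data to retrieve. As an illustration only, a request URL could be built like this (secure_scores_url is a hypothetical helper, not part of the collector):

```python
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def secure_scores_url(days: int) -> str:
    """Build a secureScores request that pages by day using $top=n."""
    # Each secureScore record covers one day of scoring data,
    # so $top=n returns the n most recent days (sorted by createdDateTime).
    query = urlencode({"$top": days}, safe="$")
    return f"{GRAPH_BASE}/security/secureScores?{query}"
```

For example, `secure_scores_url(90)` would request the full 90-day retention window in one query.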
Vendor setup
The Microsoft Graph data collector works on top of Microsoft products. To activate the resources from the Microsoft Graph API, you need:
An Azure account that has an active subscription.
The Azure account must have permission to manage applications in Azure Active Directory (Azure AD).
A working Azure AD tenant.
You will need to register a new application and grant it the required permissions on the corresponding resources so that the collector can authenticate and retrieve the data.
You need admin-level permissions on the Azure portal, as the setup requires granting admin consent for API permissions, authentication, and audit access.
Action
Steps
1
Register and configure the application
Go to Azure portal and click on Azure Active Directory.
Click on App registration on the left-menu side. Then click on + New registration.
On the Register an application page:
Name the application.
Select Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox) under Supported account types.
In Redirect URI (optional) leave it as default (blank).
Click Register.
The App registrations page will open. Click on your app to configure it and grant it permissions. When you click it, you will see your app’s dashboard with information (docs, endpoints, etc.).
Click Authentication on the left-menu side, then choose + Add a platform and select Mobile and desktop applications.
Click + Add a permission if Microsoft Graph is not in the API/Permissions list.
Select Application permissions and check SecurityEvents.Read.All.
Check the following permissions: AuditLog.Read.All, Directory.Read.All, and User.Read.All. If you did everything correctly, the permissions will be displayed.
Select Grant admin consent for the application.
You do not need to activate a permission if you are not going to use its corresponding resource. Check the Permissions reference per service section for a detailed breakdown of each resource and its required permissions.
2
Obtain the required credentials for the collector
Go to Certificates & secrets, select + New client secret. Name it and copy the token value.
Go to Overview to get your Tenant ID and Client ID and copy both values.
The token will display only once. You will need to create another one if you didn’t copy it the first time.
Sometimes you’ll see this error: Unable to save changes. One or more of the following permission(s) are currently not supported: SecurityEvents.Read.All or SecurityActions.Read.All. Please remove these permission(s) and retry your request. [O6b9].
This usually means that the permissions were not set up correctly. Please make sure that the permissions match exactly those shown above.
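The Tenant ID, Client ID, and client secret obtained above are used to request an OAuth access token via the client-credentials flow. As a rough illustration (build_token_request is a hypothetical helper, not the collector's actual code), the token request a Microsoft Graph client would POST looks like this:

```python
def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return (url, form_fields) for the OAuth2 client-credentials token call."""
    # Microsoft identity platform v2.0 token endpoint for the tenant.
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests all application permissions granted to the app.
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, data
```

An HTTP client would POST `data` as form fields to `url` and read the `access_token` field from the JSON response.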
Minimum configuration required for basic pulling
Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.
This minimum configuration refers exclusively to those specific parameters of this integration. There are more required parameters related to the generic behavior of the collector. Check setting sections for details.
Setting
Details
tenant_id_value
This is the Tenant’s ID you created in Azure AD. You can obtain it from the Overview page in your registered application.
client_id_value
This is the Client’s ID you created in Azure AD. You can obtain it from the Overview page in your registered application.
Sometimes you will find the client_id as Application (client) ID.
client_secret_value
This is the Client’s secret you created in Azure AD. You can obtain it from the Certificates & secrets page in your registered application.
See the Accepted authentication methods section to verify what settings are required based on the desired authentication method.
Accepted authentication methods
This collector accepts only a single authentication method. You will have to fill in the following properties in the credentials configuration block:
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).
Collector services detail
This section is intended to explain how to proceed with specific actions for services.
directory_audits
Internal process and deduplication method
All directory audit records are continuously pulled subject to the activityDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
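The hash-based deduplication and @odata.nextLink pagination described above could be sketched as follows; this is a simplified illustration, not the collector's actual implementation:

```python
import hashlib
import json

def event_hash(event: dict) -> str:
    # Stable hash over the serialized event, used as the deduplication key.
    return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

def pull_all(first_page: dict, fetch_page, seen: set) -> list:
    """Follow @odata.nextLink pages, skipping events already seen in earlier pulls."""
    new_events, page = [], first_page
    while True:
        for event in page["value"]:
            key = event_hash(event)
            if key not in seen:
                seen.add(key)
                new_events.append(event)
        next_link = page.get("@odata.nextLink")
        if next_link is None:
            return new_events  # no more pages in this query window
        page = fetch_page(next_link)
```

Because `seen` persists across pulls, an event that appears again at the edge of the next query window is silently skipped.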
Devo categorization and destination
All events of this service are ingested into the table cloud.azure.ad.audit.
provisioning_audits
Internal process and deduplication method
All provisioning audit records are continuously pulled subject to the activityDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
All events of this service are ingested into the table cloud.azure.ad.provisioning.
signIns
Internal process and deduplication method
All sign-in records are continuously pulled subject to the createdDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
All events of this service are ingested into the table cloud.azure.ad.signin.
signIns_v2
Internal process and deduplication method
All signIns_v2 records are continuously pulled subject to the createdDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
Events of this service are ingested into the following tables:
signIn event type
Devo table
interactiveUser
cloud.azure.ad.interactive_user_signin
nonInteractiveUser
cloud.azure.ad.noninteractive_user_signin
managedIdentity
cloud.azure.ad.managed_identity_signin
servicePrincipal
cloud.azure.ad.service_principal_signin
unknownFutureValue
cloud.azure.ad.unknown_future_value_signin
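The routing above can be pictured as a simple lookup table; signin_table is a hypothetical helper, and the exact event field the collector inspects to determine the type is not documented here:

```python
# Mapping from signIn event type to its Devo table, per the table above.
SIGNIN_TABLES = {
    "interactiveUser": "cloud.azure.ad.interactive_user_signin",
    "nonInteractiveUser": "cloud.azure.ad.noninteractive_user_signin",
    "managedIdentity": "cloud.azure.ad.managed_identity_signin",
    "servicePrincipal": "cloud.azure.ad.service_principal_signin",
    "unknownFutureValue": "cloud.azure.ad.unknown_future_value_signin",
}

def signin_table(event_type: str) -> str:
    """Resolve the Devo destination table for a given sign-in event type."""
    return SIGNIN_TABLES[event_type]
```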
alert
Internal process and deduplication method
All alert records are continuously pulled subject to the createdDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
Events of this service are ingested into the following tables:
Vendor
Devo table
IPC
cloud.azure.ad.alerts
MCAS
cloud.office365.cloud_apps.alerts
Microsoft Defender ATP
cloud.office365.endpoint.alerts
Microsoft 365 Defender
cloud.office365.endpoint.alerts
Office 365 Security and Compliance
cloud.office365.security.alerts
Azure Sentinel
cloud.azure.sentinel.alerts
ASC
cloud.office365.identity.alerts
Azure Advanced Threat Protection
cloud.azure.securitycenter.alerts
alerts_v2
Internal process and deduplication method
All alerts_v2 records are continuously pulled subject to the activityDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
All events of this service are ingested into the table cloud.msgraph.security.alerts_v2.
secure_scores
Internal process and deduplication method
All secure score records are continuously pulled subject to the createdDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
All events of this service are ingested into the table cloud.office365.security.scores.
secure_score_control_profiles
Internal process and deduplication method
All secure score profile records are continuously pulled subject to the createdDateTime timestamp property. A unique hash value is computed for each event and used for deduplication purposes to ensure events are not fetched multiple times in subsequent pulls. The collector utilizes the @odata.nextLink link (if provided) to fetch the next page of records for a given query window until no more remain.
Devo categorization and destination
All events of this service are ingested into the table cloud.office365.security.scorecontrol.
Restart the persistence for a service
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
Edit the configuration file.
Change the value of the start_time_in_utc parameter to a different one.
Save the changes.
Restart the collector. The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
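The change-detection step can be sketched as below, assuming ISO-8601 datetimes; should_reset_persistence is a hypothetical helper (the collector's real logic may differ), and the validation mirrors the InitVariablesError checks listed in the Troubleshooting section:

```python
from datetime import datetime, timezone
from typing import Optional

def should_reset_persistence(configured_start: str, persisted_start: Optional[str]) -> bool:
    """Validate start_time_in_utc and decide whether the persistence must be restarted."""
    try:
        start = datetime.fromisoformat(configured_start)
    except ValueError:
        raise ValueError(
            f"Invalid start_time_in_utc: {configured_start}. "
            "Must be in parseable datetime format."
        )
    if start.tzinfo is None:
        start = start.replace(tzinfo=timezone.utc)
    if start > datetime.now(timezone.utc):
        raise ValueError(f"Invalid start_time_in_utc: {configured_start}. Must be in the past.")
    # Any change to the configured value triggers a fresh historical pull.
    return persisted_start != configured_start
```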
Troubleshooting
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Common logic
Error type
Error ID
Error message
Cause
Solution
InitVariablesError
1
Invalid start_time_in_utc: {ini_start_str}. Must be in parseable datetime format.
The configured start_time_in_utc parameter is in a non-parseable format.
Update the start_time_in_utc value to have the recommended format as indicated in the guide.
InitVariablesError
2
Invalid start_time_in_utc: {ini_start_str}. Must be in the past.
The configured start_time_in_utc parameter is a future date.
Update the start_time_in_utc value to a past datetime.
SetupError
101
Failed to fetch OAuth token from {token_endpoint}. Exception: {e}.
The provided credentials, base URL, and/or token endpoint are incorrect.
Revisit the configuration steps and ensure that the correct values were specified in the config file.
SetupError
102
Failed to fetch data from {endpoint}. Source is not pullable.
The provided credentials, base URL, and/or token endpoint are incorrect.
Revisit the configuration steps and ensure that the correct values were specified in the config file.
ApiError
401
Error during API call to [API provider HTML error response here]
The server returned an HTTP 401 response.
Ensure that the provided credentials are correct and provide read access to the targeted data.
ApiError
429
Error during API call to [API provider HTML error response here]
The server returned an HTTP 429 response.
The collector will attempt to retry requests (default up to 3 times) and respect back-off headers if they exist. If the collector repeatedly encounters this error, adjust the rate limit and/or contact the API provider to ensure that you have enough quota to complete the data pull.
ApiError
500
Error during API call to [API provider HTML error response here]
The server returned an HTTP 500 response.
If the API returns a 500 but successfully completes subsequent runs then you may ignore this error. If the API repeatedly returns a 500 error, ensure the server is reachable and operational.
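The 429 handling described above (retry up to a limit while honoring back-off headers) can be sketched as follows; request_with_retries is a hypothetical helper, and the collector's real retry logic may differ:

```python
import time

def request_with_retries(send, url, max_retries=3, sleep=time.sleep):
    """Retry on HTTP 429, honoring the Retry-After header when present."""
    for attempt in range(max_retries + 1):
        resp = send(url)
        if resp["status"] != 429:
            return resp
        if attempt == max_retries:
            break
        # Back off for the server-suggested delay, else exponentially.
        delay = float(resp.get("headers", {}).get("Retry-After", 2 ** attempt))
        sleep(delay)
    raise RuntimeError(f"Gave up after {max_retries} retries: HTTP 429 from {url}")
```

Injecting `send` and `sleep` keeps the sketch testable without real network calls.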
Collector operations
Verify collector operations
Initialization
The initialization module is in charge of setup and running the input (pulling logic) and output (delivering logic) services and validating the given configuration. A successful run has the following output messages for the initializer module:
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Loading configuration using the following files: {"full_config": "config-test-local.yaml", "job_config_loc": null, "collector_config_loc": null}
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Using the default location for "job_config_loc" file: "/etc/devo/job/job_config.json"
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> "\etc\devo\job" does not exists
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> Using the default location for "collector_config_loc" file: "/etc/devo/collector/collector_config.json"
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> "\etc\devo\collector" does not exists
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> Results of validation of config files parameters: {"config": "C:\git\collectors2\devo-collector-<name>\config\config.yaml", "config_validated": True, "job_config_loc": "/etc/devo/job/job_config.json", "job_config_loc_default": True, "job_config_loc_validated": False, "collector_config_loc": "/etc/devo/collector/collector_config.json", "collector_config_loc_default": True, "collector_config_loc_validated": False}
2023-01-10T15:22:57.171 WARNING MainProcess::MainThread -> [WARNING] Illegal global setting has been ignored -> multiprocessing: False
Events delivery and Devo ingestion
The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.
A successful run has the following output messages for the event delivery module:
The Integrations Factory Collector SDK has 3 different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following Sender Services:
Logging trace
Description
Number of available senders: 1
Displays the number of concurrent senders available for the given Sender Service.
Sender manager internal queue size: 0
Displays the items available in the internal sender queue.
This value helps detect bottlenecks and the need to increase the performance of data delivery to Devo. The latter can be achieved by increasing the number of concurrent senders.
Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)
Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:
44 events were sent to Devo since the collector started.
The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.
21 events were sent to Devo between the last UTC checkpoint and now.
Those 21 events required 0.007 seconds to be delivered.
By default these traces will be shown every 10 minutes.
Sender statistics
Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
Logging trace
Description
Number of available senders: 1
Displays the number of concurrent senders available for the given Sender Service
Sender manager internal queue size: 0
Displays the items available in the internal sender queue.
Standard - Total number of messages sent: 57, messages sent since "2023-01-10 16:09:16.116750+00:00": 0 (elapsed 0.000 seconds)
Displays the number of events from the last time the collector executed the pull logic. Following the given example, the following conclusions can be obtained:
57 events were sent to Devo since the collector started.
The last checkpoint timestamp was 2023-01-10 16:09:16.116750+00:00.
0 events were sent to Devo between the last UTC checkpoint and now.
Those 0 events required 0.000 seconds to be delivered.
Check memory usage
To check the memory usage of this collector, look for the following log records in the collector which are displayed every 5 minutes by default, always after running the memory-free process.
The used memory is displayed per running process, and the sum of both values gives the total memory used by the collector.
The global pressure of the available memory is displayed in the global value.
All metrics (Global, RSS, VMS) show the values before and after the memory-freeing process, in the format before -> after.
Version 2.0.0 introduced several changes to the collector's configuration. When upgrading the collector, users must make changes to their configuration file to ensure that the collector continues to work as expected.
Important
Note that tag_version behaviour has been replaced by override_tag
In versions 1.X of the collector, some services had a config parameter tag_version with values v1 or v2. The effect of this parameter is that the destination table for these services will be different. These are the destination tables according to the tag_version value:
v1
v2 (default)
cloud.msgraph.security.score
cloud.office365.security.score
cloud.msgraph.security.scorecontrol
cloud.office365.security.scorecontrol
In the new collector 2.0.0, the old config parameter tag_version has been removed. The same effect as v1 can be achieved using override_tag, with these values:
Service
override_tag (to obtain the behaviour equivalent to tag_version: v1)
Note that security alerts are sent to different tables according to their categories.
In old versions of this collector, all alerts were sent to the table cloud.msgraph.security.alerts
However, as there are alerts of several types, in version 2.0.0 alerts are categorized according to their vendorInformation.provider field (their vendor) and sent to different tables:
Vendor
Old table
New table
IPC
cloud.msgraph.security.alerts
cloud.azure.ad.alerts
MCAS
cloud.office365.cloud_apps.alerts
Microsoft Defender ATP
cloud.office365.endpoint.alerts
Microsoft 365 Defender
cloud.office365.endpoint.alerts
Office 365 Security and Compliance
cloud.office365.security.alerts
Azure Sentinel
cloud.azure.sentinel.alerts
ASC
cloud.office365.identity.alerts
Azure Advanced Threat Protection
cloud.azure.securitycenter.alerts
Others
cloud.msgraph.security.alerts
It is possible to avoid this feature and send all alerts to the same table by editing the config file and changing the tag:
The URL endpoints (override_base_url_main, override_base_url_vendor, override_base_url_vendor_with_sub_provider, override_login_url) have been moved from the individual services to the global configuration section.
override_base_url_main has been renamed to override_base_url.
tag_version has been removed.
pull_sliding_window_timespan_period has been removed.
reset_persistence_auth has been removed.
override_time_delta_in_days has been removed.
ms_365_environment has been replaced by override_top_level_domain. GCC High Gov users should set override_top_level_domain to us. Alternatively, users can use override_base_url to specify the GCC High Gov base URL.
additional_filter has been added to all services. Users can use this field to specify additional filters that will be applied when querying the Microsoft Graph API.
The collector can now use new services from Graph (the beta endpoint in Graph). The sign-in services, which used to be a separate service for each sign-in type, have been consolidated into one service called signIns_v2. Users should remove all three services from their config and use only the signIns_v2 service.
start_time has been renamed to start_time_in_utc.
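As an illustration of the additional_filter option mentioned above, an extra OData clause could be combined with the collector's own time-window filter roughly like this; the exact combination logic is an assumption and combined_filter is a hypothetical helper:

```python
def combined_filter(start_time_utc: str, additional_filter: str = "") -> str:
    """Merge the default time-window filter with a user-supplied additional_filter."""
    # The collector always bounds the query window by the start time.
    time_filter = f"createdDateTime ge {start_time_utc}"
    # Assumption: a user-supplied clause is AND-ed onto the default filter.
    if additional_filter:
        return f"{time_filter} and {additional_filter}"
    return time_filter
```

The resulting string would be sent as the `$filter` query parameter of the Microsoft Graph request.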
Persistence changes
The persistence object consists of the following fields: persistence_version, last_event_time_in_utc, last_ids, and next_link.
The collector will automatically map old key names (e.g. last_polled_timestamp → last_event_time_in_utc) to the appropriate value.
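That key mapping could be sketched as follows; only the last_polled_timestamp rename is documented above, and the helper itself is hypothetical:

```python
# Old 1.x persistence key names mapped to their 2.x equivalents.
# Only last_polled_timestamp is documented; the map could hold further renames.
KEY_MAP = {"last_polled_timestamp": "last_event_time_in_utc"}

def migrate_persistence(old: dict) -> dict:
    """Rename legacy persistence keys, leaving already-current keys untouched."""
    return {KEY_MAP.get(key, key): value for key, value in old.items()}
```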
Change log
Release
Released on
Release type
Details
Recommendations
v2.0.0
IMPROVEMENT
Improvements
Complete reimplementation of the collector, refactoring all the services
Recommended version
v1.7.1
NEW FEATURE
IMPROVEMENT
BUG FIX
New features
time_delta_in_days can now be overwritten in the user configuration by override_time_delta_in_days in all the services that are “time-based”. This new parameter cannot be combined with reset_persistence_auth and/or start_time.
Improvements
The state is now persisted more frequently for most of the services. This means that, in case of a collector restart, the chances of duplicating data have been reduced considerably, as the collector will continue pulling data from the same point where it was when the collector was stopped.
Bug fixing
The collector will get the most recent token available before performing any new request, reducing the possibilities to get a 401 code as a response.
The 504 code responses were returned many times, some of them caused by asking for data that was too old. This used to leave the collector in a locked state, as it was not able to continue. Mechanisms have been added to avoid requesting such old data from the API. If a 504 still appears for any other reason, the improvement related to persisting the state frequently lets the collector continue collecting correctly after the service restart.
Upgrade
v1.7.0
NEW FEATURE
IMPROVEMENT
BUG FIX
New features:
alerts_v2 service included, keeping old alerts service for compatibility.
Compliance for MS 365 GCC High US environments added.
Improvements:
Update DCSDK from 1.8.0 to 1.9.2
Bug fixing:
The collector now keeps retrieving events when it is up-to-date.
Added extra protection to refresh token and avoid 401 status errors.
When a 401 status code is received from a response, the collector tries the request again using the access_token available in the collector_variables, instead of raising an error. This definitively fixes the bug that used to make the collector restart due to 401 errors.
A vendor thread termination event has been set, including three different checkpoints in the thread's run method, as a protection against non-terminated vendor threads causing the alerts service to stop. Extra logging has also been added to identify the root cause in case this keeps happening.
Upgrade
v1.6.2
BUG FIX
IMPROVEMENT
Improvements:
Update DCSDK from 1.6.0 to 1.8.0
Bug fixing:
Fix in the service URLs: they were not being formatted correctly with the start_time variable, which allows the user to select the date from which they want to collect events.
Updated the limits of the API: The limits have been modified with the official values. This fixes throttling issues.
Updated the default value of start_time from 61 days in the past to 30, as this is the maximum limit the API allows.
Upgrade
v1.6.0
NEW FEATURE
New features:
The pulling mechanism now uses a sliding window to avoid event loss and duplication.
Improvements:
DevoCollectorSDK upgraded to v1.6.0:
Added:
More log traces related to execution environment details.
Global rate limiters functionality.
Extra checks for supporting macOS as a development environment.
Obfuscation functionality.
Changed:
Some log traces are now shown less frequently.
The default value for the logging frequency for "main" processes has been changed (to 120 seconds).
Updated some Python Packages.
Controlled stopping functionality more stable when using the "template".
Improved some log messages related to Devo certificates (when using the Devo sender).
Validate json objects before saving them to persistence (using filesystem).
Upgrade
v1.4.2
BUG FIX
Fixed bugs:
Fixes bug with non-time-based puller state.
Upgrade
v1.4.1
BUG FIX
Fixed bugs:
Fix error with vendor state when checking the reset_persistence_auth parameter.
Allow using v2 tags for secure_scores and secure_scores_control_profile tags.
Add missing Devo metadata into events.
Upgrade
v1.4.0
IMPROVEMENT
Improvements:
Automatic outdated start_time correction for audit-based services.
New “reset persistence” functionality.
Upgrade
v1.3.0
IMPROVEMENT
BUG FIX
Improvements:
start_time configuration parameter normalization for audit and provisioning services.
Upgraded devocollectorsdk from 1.4.0 to 1.4.4b:
Added:
New "templates" functionality.
New controlled stopping condition when any input thread fatally fails.
Log traces for knowing the execution environment status (debug mode).
Changed:
Improved log trace details when runtime exceptions happen
Refactored source code structure
Fixes in the current puller template version
The Docker container exits with the proper error code
Bug fixing:
Correct token validation when a Partial Content response is received.
Use appropriate destination tag for provisioning events.
Upgrade
v1.2.0
NEW FEATURE
IMPROVEMENT
New features:
New supported sources
Sign In (signIn service)
Audit (audit service)
Provisioning (provisioning service)
Previous services modification
The new tagging introduced in the previous v1.1.3 release is now customizable through the tag_version service parameter. The default tagging has been reverted to the original one.
The alerts source, when setting the tag_version to v2, will try to categorize the events by applying different tags based on the event’s provider.
Improvements:
Token validation is now performed against the corresponding endpoint.