Note

If you are migrating from old v1.x.x versions to v2.0.0, you can find a complete guide in the Azure Collector Migration Guide section of this article.
Overview
Microsoft Azure is an ever-expanding set of cloud computing services to help your organization meet its business challenges. Azure gives you the freedom to build, manage, and deploy applications on a massive, global network using your preferred tools and frameworks.
...
Features | Details
---|---
Allow parallel downloading ( | Partial (supported for
Running environments |
Populated Devo events |
Flattening pre-processing |
Allowed source events obfuscation |
Data source description
Data source | Description | API endpoint | Collector service name | Devo table
---|---|---|---|---
VM Metrics | Using the Microsoft Azure API, you can obtain metrics about the deployed Virtual Machines and ingest them into Devo, making them easier to query and analyze in the Devo platform and Activeboards. | Azure Compute Management Client SDK and Azure Monitor Management Client SDK | |
Event Hubs | Several Microsoft Azure services can generate execution information to be sent to an Event Hub service (see next section). | Azure Event Hubs SDK | |
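The VM metrics pull works over a bounded time window: the Azure Monitor metrics API accepts an ISO-8601 `start/end` timespan string. As a minimal illustrative sketch (the helper name is hypothetical, not part of the collector), the timespan for one polling period can be built like this:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def build_timespan(period_seconds: int, now: Optional[datetime] = None) -> str:
    """Build the ISO-8601 'start/end' timespan string accepted by the
    Azure Monitor metrics API, covering the last `period_seconds` seconds."""
    end = now or datetime.now(timezone.utc)
    start = end - timedelta(seconds=period_seconds)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return f"{start.strftime(fmt)}/{end.strftime(fmt)}"

# A 300-second window ending at a fixed instant:
print(build_timespan(300, datetime(2024, 5, 21, 12, 5, 0, tzinfo=timezone.utc)))
# 2024-05-21T12:00:00Z/2024-05-21T12:05:00Z
```

A `request_period_in_seconds` of 300 in the service configuration corresponds to a five-minute window like the one shown.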
...
Event hubs: Auto-categorization of Microsoft Azure service messages
...
Note

If the amount of egress data exceeds the Throughput Unit limits set by Azure (2 MB/s or 4,096 events per second), Devo will not be able to continue reliable ingestion of the data. You can monitor ingress/egress throughput in the Azure Portal Event Hub namespace and, based on trends or alerts, add another Event Hub to resolve this. To prevent this from happening in the first place, follow the scalability guidance provided by Microsoft in their technical documentation.
Learn more in this article.
Vendor setup
The Microsoft Azure collector centralizes the data with an Event Hub using the Azure SDK. To use it, you need to configure the resources in the Azure Portal and set the right permissions to access the information.
...
After creating the App registration (or Service Principal), go to the desired Resource Group (or subscription if you want to retrieve metrics from all the available virtual machines).
Select Access control (IAM) in the left menu and click Add.
Select at least the Reader role and choose the previously created App registration.
Confirm the changes.
...
Getting credentials (Storage Account) (Optional)
...
Setting up the Event Hubs
Now, search the Monitor service and click on it.
Click the Diagnostic Settings option in the left area.
A list of the deployed resources will be shown. Search for the resources that you want to monitor, select them, and click Add diagnostic setting.
Type a name for the rule and check the required category details (logs will be sent to the cloud.azure.eh.events table, and metrics will be sent to the cloud.azure.eh.metrics table).
Check Stream to an Event Hub, and select the corresponding Event hub namespace, Event hub name, and Event hub policy name.
Click Save to finish the process.
Event Hub Auto Discover
To configure access to event hubs for the auto-discovery feature, you need to grant the registered application the necessary permissions to access the Event Hub without using the RootManageSharedAccessKey. Furthermore, the auto-discovery feature will enumerate all available event hubs in a namespace and resource group and, optionally, create consumer groups (if the configuration specifies a consumer group other than $Default and that consumer group does not exist when the collector connects to the event hub) and Azure Blob Storage containers for checkpointing purposes (if the user specifies a storage account and container in the configuration file).
...
Info

For Azure Event Hub, the event hub name and the connection string (and, optionally, the consumer group) are enough. No additional credentials are required.
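As an illustrative sketch (all names and values below are placeholders), a minimal queue entry relying only on the event hub name and connection string could look like this:

```yaml
# Hypothetical minimal queue entry (placeholder names/values): only the
# event hub name and the connection string are required; consumer_group
# is optional and defaults to $Default.
event_hubs:
  queues:
    my_queue:
      event_hub_name: <event_hub_name>
      event_hub_connection_string: <connection_string>
      consumer_group: $Default   # optional
```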
Accepted authentication methods
Authentication method | Tenant ID | Client ID | Client secret | Subscription ID
---|---|---|---|---
OAuth2 | | | |
Run the collector
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector in your own machine using a Docker image (On-premise collector).
This data collector can be run on any machine that has the Docker service available, because it must be executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.

Structure

The following directory structure will be required as part of the setup procedure (it can be created under any directory):

Devo credentials

In Devo, go to Administration → Credentials → X.509 Certificates, then download the Certificate, Private key, and Chain CA and save them in

Editing the config-azure.yaml file

In the config-azure.yaml file, replace the
Download the Docker image

The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Use the following command to add the Docker image to the system:
The Docker image can be deployed on the following services:
Execute the following command on the root directory
The following Docker Compose file can be used to execute the Docker container. It must be created in the
To run the container using docker-compose, execute the following command from the
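The referenced Docker Compose file is not reproduced above, so the following is only a minimal sketch; the service name, image reference, and mounted paths are assumptions and must be adjusted to your downloaded image and directory layout:

```yaml
# Hypothetical docker-compose.yaml sketch; every name below is a placeholder.
version: '3'
services:
  devo-azure-collector:
    image: <docker_image_name>:<version>
    restart: always
    volumes:
      - ./certs:/devo-collector/certs       # X.509 certificate, key, and chain
      - ./config:/devo-collector/config     # config-azure.yaml lives here
```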
We use a piece of software called Collector Server to host and manage all our available collectors. To enable the collector for a customer:
Editing the JSON configuration
The following table outlines the parameters available for configuring the collector. Each parameter is categorized by its necessity (mandatory or optional), data type, acceptable values or formats, and a brief description.
...
General principles

Refer to Event Hubs - General Principles for general principles.

Configuration options

Devo supports only one for this service. Connection strings are not supported.

Event Hubs Auto Discover authentication configuration

Event Hubs authentication can be done via connection strings or client credentials (assigning the

Preference is given to the connection string configuration when both are available.
Azure Blob Storage checkpoint configuration

This is optional and configurable via connection strings or client credentials. If all possible parameters are present, the collector will favor the connection string configuration.
Internal process and deduplication method

The collector uses the

All deduplication methods and checkpointing methods listed in the

The

Due to the nature of this service, if a user has configured Azure Blob Storage checkpointing, the collector will attempt to create containers in the configured Azure Blob Storage account. If the configured credentials do not have write access to the storage account, an error will be written to the logs indicating that the user must grant write access to the credentials.
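The exact deduplication internals are collector-specific, but the idea behind a global duplicated-messages mechanism can be sketched in a few lines (illustrative only, not the collector's actual implementation): hash each event's content and drop any event whose hash has already been seen, regardless of partition.

```python
import hashlib

class GlobalDeduplicator:
    """Illustrative sketch of a 'global' duplicated-messages mechanism:
    skip any event whose content hash has already been seen."""

    def __init__(self):
        self._seen = set()

    def accept(self, raw_event: bytes) -> bool:
        digest = hashlib.sha256(raw_event).hexdigest()
        if digest in self._seen:
            return False  # duplicate -> drop
        self._seen.add(digest)
        return True

dedup = GlobalDeduplicator()
print(dedup.accept(b'{"id": 1}'))  # True  (first occurrence is kept)
print(dedup.accept(b'{"id": 1}'))  # False (duplicate is dropped)
```

A real implementation would also bound the memory used by the seen-hash set (for example, per-queue counters or time-based eviction), which this sketch omits.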
Troubleshooting

This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.

Common logic

Typical issues
Collector operations
This section is intended to explain how to proceed with specific operations of this collector.
```
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Loading configuration using the following files: {"full_config": "config-test-local.yaml", "job_config_loc": null, "collector_config_loc": null}
2023-01-10T15:22:57.146 INFO MainProcess::MainThread -> Using the default location for "job_config_loc" file: "/etc/devo/job/job_config.json"
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> "\etc\devo\job" does not exists
2023-01-10T15:22:57.147 INFO MainProcess::MainThread -> Using the default location for "collector_config_loc" file: "/etc/devo/collector/collector_config.json"
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> "\etc\devo\collector" does not exists
2023-01-10T15:22:57.148 INFO MainProcess::MainThread -> Results of validation of config files parameters: {"config": "C:\git\collectors2\devo-collector-<name>\config\config.yaml", "config_validated": True, "job_config_loc": "/etc/devo/job/job_config.json", "job_config_loc_default": True, "job_config_loc_validated": False, "collector_config_loc": "/etc/devo/collector/collector_config.json", "collector_config_loc_default": True, "collector_config_loc_validated": False}
2023-01-10T15:22:57.171 WARNING MainProcess::MainThread -> [WARNING] Illegal global setting has been ignored -> multiprocessing: False
```

Events delivery and Devo ingestion
The event delivery module is in charge of receiving the events from the internal queues where all events are injected by the pullers and delivering them using the selected compatible delivery method.
A successful run has the following output messages for the event delivery module:

```
2023-01-10T15:23:00.788 INFO OutputProcess::MainThread -> DevoSender(standard_senders,devo_sender_0) -> Starting thread
2023-01-10T15:23:00.789 INFO OutputProcess::MainThread -> DevoSenderManagerMonitor(standard_senders,devo_1) -> Starting thread (every 300 seconds)
2023-01-10T15:23:00.790 INFO OutputProcess::MainThread -> DevoSenderManager(standard_senders,manager,devo_1) -> Starting thread
2023-01-10T15:23:00.842 INFO OutputProcess::MainThread -> global_status: {"output_process": {"process_id": 18804, "process_status": "running", "thread_counter": 21, "thread_names": ["MainThread", "pydevd.Writer", "pydevd.Reader", "pydevd.CommandThread", "pydevd.CheckAliveThread", "DevoSender(standard_senders,devo_sender_0)", "DevoSenderManagerMonitor(standard_senders,devo_1)", "DevoSenderManager(standard_senders,manager,devo_1)", "OutputStandardConsumer(standard_senders_consumer_0)",
```
Sender services
The Integrations Factory Collector SDK has three different sender services depending on the event type to deliver (internal, standard, and lookup). This collector uses the following sender services:
Sender service | Description
---|---
internal_senders | In charge of delivering internal metrics to Devo, such as logging traces or metrics.
standard_senders | In charge of delivering pulled events to Devo.
Sender statistics
Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
Logging trace | Description
---|---
Number of available senders: 1 | Displays the number of concurrent senders available for the given sender service.
sender manager internal queue size: 0 | Displays the items available in the internal sender queue.
Info

This value helps detect bottlenecks and the need to increase the performance of data delivery to Devo. The latter can be achieved by increasing the number of concurrent senders.
Total number of messages sent: 44, messages sent since "2022-06-28 10:39:22.511671+00:00": 21 (elapsed 0.007 seconds)

Displays the number of delivered events. Following the given example, the following conclusions can be drawn:

44 events were sent to Devo since the collector started.
The last checkpoint timestamp was 2022-06-28 10:39:22.511671+00:00.
21 events were sent to Devo between the last UTC checkpoint and now.
Those 21 events required 0.007 seconds to be delivered.
Info

By default, these traces are shown every 10 minutes.
To check the memory usage of this collector, look for the following log records in the collector, which are displayed every 5 minutes by default, always after running the memory-free process.
Azure collector migration guide
This section will walk you through the process of updating your configuration from the old version (1.x.x) to the new version (2.0.0). The new version introduces significant improvements and changes to the configuration style to enhance performance, usability, and security.
Overview of changes
The new configuration format introduces several key changes:
Multiple inputs: The configuration now supports multiple inputs to better represent the different data sources and access mechanisms (azure and azure_event_hub).
Renamed credential config parameters: The credential configuration field names now follow naming that is consistent with the Microsoft Azure documentation: tenant_id, client_id, client_secret.
Azure Blob Storage checkpoint support: The configuration now accepts Azure Blob Storage-related parameters in the queue-specific configuration: blob_storage_connection_string, blob_storage_container_name, blob_storage_account_name.
Moved VM metrics to a dedicated service: The VM metrics input has been moved to a dedicated service. The custom service configuration is no longer valid.
Moved Event Hub to a dedicated service: The Event Hub input has been moved to a dedicated service. The custom service configuration is no longer valid.
Preparing for migration
Before starting the migration process, we recommend the following steps:
Backup your current configuration: Always ensure you have a backup of your existing configuration files to prevent any data loss.
Review the new configuration documentation: Familiarize yourself with the new configuration options available in version 2.0.0.
Migration steps
Step 1: Update credential configuration parameter field names
The credential configuration field names have been updated:
active_directory_id → tenant_id
secret → client_secret
app_id → client_id
An example of the old and new configuration is shown below:
```yaml
# Old Version (1.x.x)
credentials:
  app_id: <app_id_value>
  active_directory_id: <active_directory_id_value>
  subscription_id: <subscription_id_value>
  secret: <secret_value>
```

↓

```yaml
# New Version (2.0.0)
credentials:
  client_id: <client_id_value>
  tenant_id: <tenant_id_value>
  subscription_id: <subscription_id_value>
  client_secret: <client_secret_value>
```
Step 2: Update VM metrics configuration
The VM Metrics service has been moved to the azure input and a dedicated vm_metrics service.
An example of the new configuration is shown below:
```yaml
azure:
  id: <short_id>
  enabled: true
  credentials:
    client_id: <client_id_value>
    client_secret: <client_secret_value>
    tenant_id: <tenant_id_value>
  environment: <environment_value>
  services:
    vm_metrics:
      start_time_in_utc: <start_time_in_utc_value>
      request_period_in_seconds: 300
```
If you wish to continue from the old configuration, you must input the time of the latest event in Devo in the start_time_in_utc field to indicate the time from which the puller will start collecting data.
Step 3: Update Event Hub Configuration
The Event Hub service(s) have been moved to the azure_event_hub input and a dedicated event_hubs service.
```yaml
azure_event_hub:
  id: <short_id>
  enabled: true
  credentials:
    client_id: <client_id_value>
    client_secret: <client_secret_value>
    tenant_id: <tenant_id_value>
  environment: <environment_value>
  services:
    event_hubs:
      queues:
        <queue_name>:
          event_hub_name: <event_hub_name_value>
          event_hub_connection_string: <event_hub_connection_string_value>
          consumer_group: <consumer_group_value>
          events_use_autocategory: <events_use_autocategory_value>
          blob_storage_connection_string: <blob_storage_connection_string_value>
          blob_storage_container_name: <blob_storage_container_name_value>
          blob_storage_account_name: <blob_storage_account_name_value>
          compatibility_version: <compatibility_version_value>
          duplicated_messages_mechanism: <duplicated_messages_mechanism>
          override_starting_position: <override_starting_position_value>
```
The new configuration now accepts the blob_storage_connection_string, blob_storage_container_name, and blob_storage_account_name parameters in the queue-specific configuration. These parameters are new, optional, and only required for users who wish to leverage Azure Blob Storage for checkpointing. This guide focuses on migrating the configuration from the old version to the new version; for this reason, the new Azure Blob Storage checkpoint parameters are not relevant to older configurations, because those use local, file-based checkpointing.
By default, the collector will begin pulling from the latest event in the queue if there is not already a pre-existing checkpoint. To ensure your migrated collectors fetch from the last event previously sent to Devo, identify the datetime of the last event in Devo for the relevant queue and input it into the override_starting_position field in the format %Y-%m-%dT%H:%M:%SZ. When the collector begins pulling from the queue, it will fetch from the indicated datetime for the first checkpoint.
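For example, the expected %Y-%m-%dT%H:%M:%SZ value can be produced from a UTC datetime as follows (the datetime value is a placeholder; substitute the datetime of your last event in Devo):

```python
from datetime import datetime, timezone

# Format a "last event in Devo" datetime into the %Y-%m-%dT%H:%M:%SZ
# layout expected by the override_starting_position field.
last_event = datetime(2024, 5, 21, 9, 30, 0, tzinfo=timezone.utc)
override_starting_position = last_event.strftime("%Y-%m-%dT%H:%M:%SZ")
print(override_starting_position)  # 2024-05-21T09:30:00Z
```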
Step 4: Example before and after configuration
Putting it all together, see below for an example of the old and new configuration:
```yaml
# Old Version (1.x.x)
inputs:
  azure:
    id: 10001
    enabled: true
    credentials:
      app_id: app_id_acme
      active_directory_id: active_directory_id_acme
      subscription_id: subscription_id_acme
      secret: secret_acme
    environment: test_environment
    requests_limits:
      - period: 1d
        number_of_requests: -1
    services:
      my_service_1:
        request_period_in_seconds: 300
        types:
          - eh_services
        queues:
          queue_a:
            event_hub_name: the-event-hub-name
            consumer_group: the-consumer-group
            connection_str: the-connection-string
            events_use_autocategory: true
            compatibility_version: 1.2.1
            duplicated_messages_mechanism: global
            use_global_counter_per_queue: true
      my_service_2:
        request_period_in_seconds: 300
        types:
          - vm_metrics
```
```yaml
# New Version (2.0.0)
inputs:
  azure:
    id: 100001
    enabled: true
    credentials:
      subscription_id: subscription_id_acme
      client_id: app_id_acme
      client_secret: secret_acme
      tenant_id: active_directory_id_acme
    environment: test-env
    services:
      vm_metrics:
        request_period_in_seconds: 300
  azure_event_hub:
    id: 100001
    enabled: true
    credentials:
      subscription_id: subscription_id_acme
      client_id: app_id_acme
      client_secret: secret_acme
      tenant_id: active_directory_id_acme
    environment: test-env
    services:
      event_hubs:
        queues:
          queue_a:
            event_hub_name: the-event-hub-name
            event_hub_connection_string: the-connection-string
            consumer_group: the-consumer-group
            events_use_autocategory: true
            compatibility_version: 1.2.0
            duplicated_messages_mechanism: global
            # Replace with the datetime of the last event in Devo; otherwise,
            # the collector pulls from the latest event for the first checkpoint.
            override_starting_position: "2022-01-01T00:00:00Z"
```
Tag mapping configuration guide
By default, events from Event Hubs are auto-categorized to Devo tags according to the values explained here. However, sometimes it is necessary to change this categorization or create a new categorization for a new kind of event. It is possible to change the categories by editing the config file, without creating a new version of the collector.
This guide explains how to configure tag mapping using the override_tag parameter in the YAML configuration.
Overview
By default, the override_tag parameter accepts a simple string that will be applied to all records; however, the advanced override_tag parameter allows you to define a default tag and a set of tag mapping rules based on JMESPath expressions. The collector will use these rules to assign tags to records based on their content.
Info

You can find a tutorial and a complete reference for JMESPath here.
Template / Example
```yaml
override_tag:
  default_tag: <default_tag_value>
  jmespath_refs:
    <jmespath_ref_placeholder_name>: <jmespath_ref_placeholder_value>
  tag_map:
    - jmespath: <jmespath_expression_value>
      tag: <tag_value>
```
Configuration
Default tag
Use the default_tag parameter to specify the default tag that will be applied to records that do not match any JMESPath expression.
JMESPath references (Optional)
Define reusable JMESPath expressions in the jmespath_refs section. These expressions can be referenced in the tag_map section using placeholders (e.g., {events_base}).
Tag map
Define a list of tag mapping rules in the tag_map section. Each rule consists of a jmespath expression and a corresponding tag. The jmespath expression is evaluated against each record, and if it matches, the corresponding tag is applied to the record. The tag value can include placeholders (e.g., {queue_name}, {collector_version}) that will be substituted with values from the record itself or the collector variables.
Example simple configuration (used internally by the Google Workspace Logs in BigQuery collector)
```yaml
override_tag:
  default_tag: my.app.gsuite_activity.{record_type}
  tag_map:
    - jmespath: "[?record_type == 'gmail']"
      tag: cloud.gcp.bigquery.gmail
```
Example advanced configuration (used internally by the Azure collector)
```yaml
override_tag:
  default_tag: my.app.cloud_azure.unknown_events
  jmespath_refs:
    lower_resource_id: "lower(resourceid || resourceId || _ResourceId)"
    lower_category: "lower(category || Category)"
    events_base: "[?not_null(category, Category)]"
    metrics_base: "[?not_null(metricName)]"
    vm_base: "[?SourceSystem == 'Linux' || SourceSystem == 'OpsManager']"
  tag_map:
    - jmespath: "{events_base}"
      tag: cloud.azure.others.events.{queue_name}.{collector_version}.eh
    - jmespath: "{metrics_base}"
      tag: cloud.azure.eh.metrics.{queue_name}.{collector_version}
    - jmespath: "{vm_base} | [?Type == 'SecurityEvent' || (Type == 'Event' && EventLog == 'Security')]"
      tag: cloud.azure.vm.securityevent.{queue_name}.{collector_version}.eh
    - jmespath: "{vm_base} | [?Type == 'Syslog' && SourceSystem == 'Linux']"
      tag: cloud.azure.vm.unix.{queue_name}.{collector_version}.eh
    - jmespath: "{vm_base} | [?Type == 'Event' && EventLog == 'Application']"
      tag: cloud.azure.vm.applicationevent.{queue_name}.{collector_version}.eh
    - jmespath: "{vm_base} | [?Type == 'Event' && EventLog == 'System']"
      tag: cloud.azure.vm.systemevent.{queue_name}.{collector_version}.eh
    - jmespath: "{vm_base}"
      tag: cloud.azure.vm.unknown_events.{queue_name}.{collector_version}.eh
```
Evaluation process
The collector evaluates each record against the JMESPath expressions in the tag_map section, in top-down order. If a record matches a JMESPath expression, the corresponding tag is applied, and the record is not evaluated against subsequent expressions. If a record does not match any JMESPath expression, the default_tag is applied.
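The evaluation order above can be sketched as a first-match-wins loop (illustrative only; the real collector evaluates JMESPath expressions, while the predicates and tags here are simple placeholders):

```python
# Illustrative sketch of top-down tag_map evaluation: the first matching
# rule wins, otherwise the default tag is applied.
def assign_tag(record: dict, tag_map: list, default_tag: str) -> str:
    for matches, tag in tag_map:
        if matches(record):
            return tag  # first match wins; later rules are not evaluated
    return default_tag

# Placeholder rules standing in for JMESPath expressions:
tag_map = [
    (lambda r: "category" in r, "cloud.azure.others.events"),
    (lambda r: "metricName" in r, "cloud.azure.eh.metrics"),
]

print(assign_tag({"category": "AuditLogs"}, tag_map, "my.app.cloud_azure.unknown_events"))
# cloud.azure.others.events
print(assign_tag({"foo": 1}, tag_map, "my.app.cloud_azure.unknown_events"))
# my.app.cloud_azure.unknown_events
```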
Sending records
After all evaluations are made for a given recordset, the collector groups the records by their assigned tags.
The collector sends each group of records to Devo on a per-tag basis.
...
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Collector operations
This section is intended to explain how to proceed with specific operations of this collector.
Initialization

The initialization module is in charge of setting up and running the input (pulling logic) and output (delivering logic) services and validating the given configuration. A successful run has the following output messages for the initializer module:

Events delivery and Devo ingestion

The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for the event delivery module:

Sender services

The Integrations Factory Collector SDK has three different sender services depending on the event type to deliver (internal, standard, and lookup).

Sender statistics

Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
To check the memory usage of this collector, look for the following log records in the collector, which are displayed every 5 minutes by default, always after running the memory-free process.
Change log
Release | Released on | Release type | Details | Recommendations
---|---|---|---|---
 | | Feature / Improvements | |
 | | Improvements | |
 | | Improvements | |
 | | Improvements / Bug fixing | |
 | | Bug fixing | |
 | | Improvements / Bug fixing | |
 | | Improvements / Bug fixing | |
 | | Bug fixing | |
 | | Improvements | |
 | | Improvements | New event types are accepted for the service |
 | | Bug fixing | A configuration bug has been fixed to enable the auto-categorization of the following events |