Proofpoint Targeted Attack Protection (TAP) helps you stay ahead of attackers with an innovative approach that detects, analyzes and blocks advanced threats.
| Feature | Details |
|---|---|
| Allow parallel downloading (multipod) |  |
| Running environments | Collector server, on-premise |
| Populated Devo events |  |
| Flattening preprocessing | Yes (Threats and Campaign forensics events are flattened) |
| Data source | Description | API endpoint | Collector service name | Devo table | Available from release |
|---|---|---|---|---|---|
| SIEM | Fetch events for clicks to malicious URLs blocked in the specified time period. | /v2/siem/clicks/blocked | clicksBlocked |  |  |
| SIEM | Fetch events for clicks to malicious URLs permitted in the specified time period. | /v2/siem/clicks/permitted | clicksPermitted |  |  |
| SIEM | Fetch events for messages blocked in the specified time period that contained a known threat. | /v2/siem/messages/blocked | messageBlocked |  |  |
| SIEM | Fetch events for messages delivered in the specified time period that contained a known threat. | /v2/siem/messages/delivered | messageDelivered |  |  |
| Threats | The Threats API allows administrators to pull detailed attributes about individual threats observed in their environment. It can be used to retrieve more intelligence for threats identified in the SIEM or Campaign API responses. If Forensics is enabled, the events are flattened into the table. |  | threats |  |  |
| Campaign | The Campaign API allows administrators to pull campaign IDs in a timeframe and specific details about campaigns, including their description; the actor, malware family, and techniques associated with the campaign; and the threat variants that have been associated with the campaign. If Forensics is enabled, the events are flattened into the table. | /v2/campaign/ids | campaigns |  |  |
| People | The People API allows administrators to identify which users in their organization were most attacked or are the top clickers during a specified period. Fetch the identities and attack index of the top clickers within your organization for a given period. Top clickers are the users who have demonstrated a tendency to click on malicious URLs, regardless of whether the clicks were blocked. Knowing who is more susceptible to threats is useful for proactive security approaches such as security training assignments. | /v2/people/top-clickers | people_topclicks |  |  |
| People | The People API allows administrators to identify which users in their organization were most attacked or are the top clickers during a specified period. Fetch the identities and attack index breakdown of Very Attacked People within your organization for a given period. | /v2/people/vap | people_vap |  |  |
For more information on how the events are parsed, visit our page.
| Data source | Collector service | Optional | Flattening details |
|---|---|---|---|
| Threats | threats | Not required |  |
| Campaign | campaigns | Not required |  |
| Authentication method | username | password |
|---|---|---|
| Username/Password | Required | Required |
Although this collector supports advanced configuration, the fields required to retrieve data with basic configuration are defined below.
This minimum configuration refers exclusively to the parameters specific to this integration. There are more required parameters related to the generic behavior of the collector; check the setting sections for details.
| Setting | Details |
|---|---|
|  | The username for Proofpoint TAP. |
|  | The password (credential) for Proofpoint TAP. |
|  | The start time, which must not be more than 7 days in the past. |
See the Accepted authentication methods section to verify which settings are required based on the desired authentication method.
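As an illustration, these minimum parameters typically sit in the collector's config.yaml input definition; the key names in this sketch are assumptions, not the documented parameter names, so take the real names from the config template:

```yaml
# Illustrative sketch only: key names are assumed, use the official config template.
credentials:
  username: <proofpoint_tap_username>   # the username for Proofpoint TAP
  password: <proofpoint_tap_password>   # the password (credential) for Proofpoint TAP
services:
  clicksBlocked:
    start_time: <start_time_utc>        # must not be more than 7 days in the past
```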
Services: clicksBlocked, clicksPermitted, messageBlocked, messageDelivered
clicksBlocked (/v2/siem/clicks/blocked), messageBlocked (/v2/siem/messages/blocked), and messageDelivered (/v2/siem/messages/delivered) can collectively make 1800 requests per day, according to the API documentation.
clicksPermitted (/v2/siem/clicks/permitted) can make 1800 requests per day, according to the API documentation.
To simplify the code and handle all four services uniformly, the collector is written so that each of the four services can make 220 requests per day.
A limit is enforced on the API calls: once a service has made 220 requests, further API requests for that service are blocked.
If the collector takes, say, 15 hours to make 220 requests, it blocks API calls for that service for the remaining 9 hours (15 + 9 = 24 hours) and resumes making API requests after those 9 hours. During those 9 hours, no ingestion occurs for that service. A sketch of this throttling idea is shown below.
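The throttling described above is internal to the collector; the following is a minimal sketch of the idea, assuming a simple in-memory per-service counter (the class and method names are illustrative, not the collector's actual code):

```python
import time


class DailyRequestLimiter:
    """Illustrative per-service limiter: allows at most `max_requests_per_day`
    API calls per 24-hour window, then blocks until the window ends."""

    DAY_SECONDS = 24 * 60 * 60

    def __init__(self, max_requests_per_day: int):
        self.max_requests_per_day = max_requests_per_day
        self.window_start = time.time()
        self.requests_made = 0  # in-memory only: a restart resets it (see Issues below)

    def acquire(self) -> None:
        """Block until a request is allowed, then record it."""
        now = time.time()
        # Start a new 24-hour window once the previous one has elapsed.
        if now - self.window_start >= self.DAY_SECONDS:
            self.window_start = now
            self.requests_made = 0
        # If the cap is reached, sleep out the remainder of the window,
        # e.g. 24 - 15 = 9 hours in the example above.
        if self.requests_made >= self.max_requests_per_day:
            time.sleep(self.DAY_SECONDS - (now - self.window_start))
            self.window_start = time.time()
            self.requests_made = 0
        self.requests_made += 1


# SIEM services (clicksBlocked, clicksPermitted, messageBlocked, messageDelivered)
# use a 220-requests/day cap; the campaigns and people services described further
# below would use a 50-requests/day cap.
siem_limiter = DailyRequestLimiter(max_requests_per_day=220)
siem_limiter.acquire()  # call before each API request for that service
```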
Issues:
Suppose all the services make 1000 requests in 15 hours and then the collector restarts due to some issue. After the restart, the collector assumes the services can still make 1800 requests, ignoring the 1000 requests made before the restart. This can exceed the 1800-requests-per-day API limit and cause a 429 error.
Services: threats, campaigns, people_topclicks, people_vap
The threats service has no API limit.
The people_topclicks (/v2/people/top-clickers) and people_vap (/v2/people/vap) services can make only 50 requests per day, according to the API documentation.
The campaigns (/v2/campaign/ids) service can make only 50 requests per day, according to the API documentation.
A limit is enforced on the API calls: once a service has made 50 requests, further API requests for that service are blocked (the same throttling idea sketched above, with a 50-request cap).
If the collector takes, say, 10 hours to make 50 requests, it blocks API calls for that service for the remaining 14 hours (10 + 14 = 24 hours) and resumes making API requests after those 14 hours. During those 14 hours, no ingestion occurs for that service.
Issues:
Suppose a service makes 30 requests in 3 hours and then the collector restarts due to some issue. After the restart, the collector assumes the service can still make 50 requests, ignoring the 30 requests made before the restart. This can exceed the 50-requests-per-day API limit and cause a 429 error for that service.
Once the data source is configured, you can either send us the required information if you want us to host and manage the collector for you (Cloud collector), or deploy and host the collector on your own machine using a Docker image (On-premise collector).
We use a piece of software called Collector Server to host and manage all our available collectors. If you want us to host this collector for you, get in touch with us and we will guide you through the configuration.
This data collector can be run on any machine that has the Docker service available, since it is executed as a Docker container. The following sections explain how to prepare the required setup to get the data collector running.
Structure
The following directory structure should be created to be used when running the collector:
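A typical layout looks like the following sketch, where every directory and file name is a placeholder you can adapt:

```
<any_directory>
└── devo-collectors/
    └── proofpoint-tap/
        ├── certs/
        │   ├── chain.crt
        │   ├── <your_domain>.key
        │   └── <your_domain>.crt
        ├── state/
        └── config/
            └── config.yaml
```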
Devo credentials
In Devo, go to Administration → Credentials → X.509 Certificates, download the Certificate, Private key, and Chain CA, and save them in the directory structure created above.
Editing the config.yaml file
Replace the placeholders with your required values following the description table below:
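As an illustrative sketch only (the key names and structure below are assumptions; use the key names from the official config template and replace the angle-bracket placeholders with your values):

```yaml
# Illustrative sketch only: key names are assumed, use the official config template.
globals:
  debug: false
  id: not_used
  name: proofpoint_tap_collector
  persistence:
    type: filesystem
    config:
      directory_name: state

outputs:
  devo_1:
    type: devo_platform
    config:
      address: <devo_relay_address>
      port: 443
      type: SSL
      chain: <chain_filename>
      cert: <certificate_filename>
      key: <key_filename>

inputs:
  proofpoint_tap:
    id: <short_unique_id>
    enabled: true
    credentials:
      username: <proofpoint_tap_username>
      password: <proofpoint_tap_password>
    services:
      clicksBlocked:
        start_time: <start_time_utc>  # not more than 7 days in the past
```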
Download the Docker image
The collector should be deployed as a Docker container. Download the Docker image of the collector as a .tgz file by clicking the link in the following table:
Use the following command to add the Docker image to the system:
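Loading an image from a .tgz file is normally done with docker load; the file name below is a placeholder:

```bash
docker load -i <image_file>.tgz
```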
The Docker image can be deployed on the following services:
Docker
Execute the following command in the root directory:
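A docker run invocation along these lines can be used; the container name, mount points, environment variable, and image reference are placeholders assumed for this sketch, so adapt them to the values provided with the image:

```bash
docker run \
  --name collector-proofpoint-tap \
  --volume $PWD/certs:/devo-collector/certs \
  --volume $PWD/config:/devo-collector/config \
  --volume $PWD/state:/devo-collector/state \
  --env CONFIG_FILE=config.yaml \
  --rm \
  --interactive \
  <image_name>:<version>
```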
Docker Compose
The following Docker Compose file can be used to execute the Docker container. It must be created in the root directory.
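As an illustrative sketch (the service name, environment variable, and mount points are assumptions and should match the values documented with the image):

```yaml
# Illustrative docker-compose.yaml sketch; names and paths are placeholders.
version: "3.8"
services:
  collector-proofpoint-tap:
    image: <image_name>:<version>
    container_name: collector-proofpoint-tap
    restart: always
    environment:
      - CONFIG_FILE=config.yaml
    volumes:
      - ./certs:/devo-collector/certs
      - ./config:/devo-collector/config
      - ./state:/devo-collector/state
```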
To run the container using Docker Compose, execute the following command from the root directory:
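For example, assuming the docker-compose.yaml sketched above:

```bash
docker-compose up -d   # -d runs the container in the background (detached)
```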
This section is intended to explain how to proceed with specific actions for services.
The number of requests this service can make per day is 220.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
The number of requests this service can make per day is 220.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
The number of requests this service can make per day is 220.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
The number of requests this service can make per day is 220.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
Once the collector has been launched, it is important to check that the ingestion is being performed properly. To do so, go to the collector's logs console. This service has the following components:
Setup output
A successful run has the following output messages for the setup module:
Puller output
A successful initial run has the following output messages for the puller module:
After a successful collector’s execution (that is, no error logs found), you will see the following log message:
This collector uses persistent storage to download events in an orderly fashion and avoid duplicates. In case you want to re-ingest historical data or recreate the persistence, you can restart the persistence of this collector by following these steps:
The collector will detect this change and will restart the persistence using the parameters of the configuration file or the default configuration in case it has not been provided.
This collector has different security layers that detect both an invalid configuration and abnormal operation. This table will help you detect and resolve the most common errors.
This section is intended to explain how to proceed with specific operations of this collector.
Initialization
The initialization module is in charge of setting up and running the input services (pulling logic) and the output services (delivering logic), and of validating the given configuration. A successful run has the following output messages for the initializer module:
Events delivery and Devo ingestion
The event delivery module is in charge of receiving the events from the internal queues, where all events are injected by the pullers, and delivering them using the selected compatible delivery method. A successful run has the following output messages for this module:
Sender services
The Integrations Factory Collector SDK has 3 different sender services, depending on the event type to deliver.
Sender statistics
Each service displays its own performance statistics that allow checking how many events have been delivered to Devo by type:
To check the memory usage of this collector, look for the following log records in the collector, which are displayed every 5 minutes by default, always after running the memory-free process.
| Release | Released on | Release type | Recommendations |
|---|---|---|---|