Connector Activity Log

About

The connector's Activity Log UI makes monitoring and troubleshooting your data syncs more efficient and transparent, giving you comprehensive insight into your sync lifecycle and status.

To access the connector's activity log:

Go to Connectors > Select the relevant connector > Log tab.

Connector processing vs. waiting time

The Connector Logs in Vulcan provide detailed insights into the processing lifecycle of your connectors, including both the actual processing time and any waiting time that occurs when multiple connectors are syncing simultaneously.

  • Processing: Appears under "Data Lifecycle Stages" on the right. This includes both the time spent processing the connector and any waiting time while other connectors complete their synchronization.

  • Waiting: This is the period when the connector is idle, waiting for other connectors to finish their sync before it can proceed with processing. Waiting time is now displayed in the logs, allowing you to easily differentiate between actual processing time and time spent waiting.

How to read the logs

  1. Locate the processing time: The logs will show a total processing time (e.g., 11 hours and 47 minutes). This includes both the actual processing and waiting time.

  2. Check for waiting time: In the same log entry, you will find the waiting time clearly specified (e.g., 6 hours and 51 minutes).

  3. Calculate the actual processing time: To understand how long the connector spent in actual processing, subtract the waiting time from the total processing time. In the above example, the actual processing time would be around 4 hours and 56 minutes (see the sketch below).
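For illustration, the same calculation can be written as a short script. This is a minimal sketch using the example durations from this article; the values are illustrative and are not read from any Vulcan API:

```python
from datetime import timedelta

# Example values from the log entry described above (illustrative only).
total_processing = timedelta(hours=11, minutes=47)  # "Processing" duration shown in the log
waiting = timedelta(hours=6, minutes=51)            # "Waiting" duration shown in the log

# Actual processing = total processing time minus waiting time.
actual_processing = total_processing - waiting
print(actual_processing)  # 4:56:00, i.e., about 4 hours and 56 minutes
```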

Navigating through the Connector's Activity Log

Status outer view

The connectors' color-coded status indicators provide a quick and intuitive way to understand the state of each connector at a glance.

Green Dot

The connector is functioning correctly through all its integral stages. These stages include initiating, testing connectivity, fetching, normalizing, waiting for processing, processing, and being connected. As long as the connector operates without any issues through these stages, it will display a green dot.

Red Dot

There is an error that causes the process to fail. When a connector encounters an error at any stage of its operation, the dot will turn red, and an error message will be displayed on the connector box. This serves as an immediate visual indicator of a problem that needs attention.

Grey Dot

The connector is either disabled or in the process of being deleted. The grey dot signifies that the connector is not active or in a transitional state where it's being removed from the system.

Logs

The connector logs are presented in a user-friendly table format. Each sync is listed, making it easy to distinguish between different sync cycles. Expand a sync listing to access more details, data lifecycle stages, and any warnings or additional sync information.

  • The table retains data for a 14-day period, offering extensive visibility into your sync history.

  • The logs present each sync's start time, data and processing durations, status, and type.

Filtering Sync Listings

  • Filter the table listings based on the current status and log level.

  • Quickly locate specific sync operations, view their duration, and understand the sync types.

Expanding Sync Details

  • Expand a table listing to access detailed logs.

  • Logs offer insights into the data lifecycle stages, progress indicators, and information on any failures that may have occurred during the sync process.

Sync Types

Vulcan's data synchronization process with vendor connectors can be categorized into two distinct types: Full Fetch and Incremental Fetch.

Full Fetch

This process entails a comprehensive data retrieval from the vendor. In a Full Fetch, Vulcan systematically gathers all available data from the vendor's system and updates it within Vulcan.

This method is typically used to ensure that Vulcan's data repository is fully aligned with the vendor's data, offering a complete and up-to-date reflection of the information.

Incremental Fetch

The Incremental Fetch approach is applied to specific connectors where data is obtained in segments. In this process, Vulcan fetches only new or updated data that has emerged since the last successful synchronization cycle.

This method is particularly efficient as it focuses on the most recent changes, reducing the volume of data transferred and processed. During Incremental Fetch, the logs explicitly indicate the 'delta' period – a term used to describe the specific timeframe from which the new data is sourced. This period represents the window between the last successful sync and the current one, ensuring that Vulcan captures all relevant updates without redundant data retrieval.
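To make the delta concept concrete, here is a minimal sketch of how a client could compute the window an incremental fetch covers. The function name and values are hypothetical and are not part of Vulcan's API:

```python
from datetime import datetime, timezone

def delta_window(last_successful_sync, now=None):
    """Return the (start, end) window an incremental fetch would cover.

    Hypothetical helper: the 'delta' runs from the last successful sync
    up to the start of the current cycle.
    """
    now = now or datetime.now(timezone.utc)
    return last_successful_sync, now

start, end = delta_window(
    datetime(2024, 5, 1, 21, 0, tzinfo=timezone.utc),  # last successful sync
    datetime(2024, 5, 2, 21, 0, tzinfo=timezone.utc),  # current cycle start
)
print(f"Fetching records changed between {start} and {end}")
```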

Understanding the Connector Lifecycle Stages

The Vulcan connectors integrate with various organizational tools. They are programmed to trigger on a daily basis, typically at 9 PM UTC. This ensures that new data is fetched and updated regularly in Vulcan, maintaining up-to-date information across platforms.

Every time the connector syncs, it goes through 5 different stages:

1

Initiating

Activities:

  • Configuration retrieval and selection based on RFM settings (Full Fetch or Incremental, Incremental period, and relevant weekday).

  • Gateway validation check.

  • Queuing the connector cycle task.

Possible Failures:

  • Invalid user input.

  • Gateway errors (specifically OVA errors).

  • Internal configuration errors.

Error Message: "Can't start sync due to an internal error. Vulcan team is informed and will provide a solution soon."

Next Stage:

  • Moves to Connectivity Testing if first in queue.

  • Skips to the next stage based on RFM settings (if set to not run on specific days).
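As a rough illustration of the kind of decision made during the Initiating stage, the sketch below reads a configuration, skips days the connector is not scheduled to run, and queues a cycle. All names and settings are hypothetical and do not reflect Vulcan's internal implementation:

```python
from datetime import datetime, timezone

# Hypothetical connector configuration (illustrative only).
config = {
    "sync_type": "incremental",       # or "full"
    "incremental_period_days": 1,
    "run_weekdays": {0, 1, 2, 3, 4},  # Monday through Friday
}

def should_run_today(cfg, now=None):
    """Skip the cycle on weekdays the connector is configured not to run."""
    now = now or datetime.now(timezone.utc)
    return now.weekday() in cfg["run_weekdays"]

if should_run_today(config):
    print(f"Queuing a {config['sync_type']} sync cycle")
else:
    print("Connector is not scheduled to run today; skipping this cycle")
```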

2

Connectivity Testing

Activities: Testing API endpoints configured for connectivity (usually all endpoints in use).

Possible Failures:

  • Request issues (missing vendor info/internal errors) leading to data fetch failure.

  • Permission issues.

  • Network or server errors on the vendor's side.

Error Mapping: Detailed error descriptions provided with relevant user guidance and support links.

Next Stage:

  • Moves forward if all required endpoints respond successfully (not all tests are mandatory).

  • Successful test does not guarantee final successful connection.
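For readers who want a mental model of this stage, here is a minimal sketch of a generic endpoint connectivity test. The endpoints and helper are assumptions for illustration and do not represent Vulcan's internal code:

```python
import requests

# Hypothetical vendor endpoints a connector might test (illustrative only).
ENDPOINTS = [
    "https://vendor.example.com/api/v1/assets",
    "https://vendor.example.com/api/v1/vulnerabilities",
]

def test_connectivity(endpoints, token):
    """Return endpoint -> True/False based on a lightweight authenticated request."""
    headers = {"Authorization": f"Bearer {token}"}
    results = {}
    for url in endpoints:
        try:
            resp = requests.get(url, headers=headers, timeout=10)
            # 2xx means reachable; 401/403 would surface as permission issues.
            results[url] = resp.ok
        except requests.RequestException:
            # Network or server errors on the vendor's side.
            results[url] = False
    return results
```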

3

Fetching

Activities:

  • Data retrieval from the vendor as per configured HTTP requests.

  • Data validation based on status codes and response headers.

Possible Failures:

  • Invalid status codes, response headers, or formats.

  • Vendor’s server or network issues.

Next Stage: Progresses when all responses are valid.
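To show what response validation typically looks like, here is a minimal sketch that checks the status code and the response headers of a single fetched page. The endpoint and checks are assumptions for illustration, not Vulcan's actual logic:

```python
import requests

def fetch_and_validate(url, token):
    """Fetch one page of vendor data and validate the status code and content type."""
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)

    # An invalid status code fails the fetch (e.g., 5xx vendor server issues).
    if resp.status_code != 200:
        raise RuntimeError(f"Unexpected status code {resp.status_code} from {url}")

    # An unexpected response format also fails the fetch.
    if "application/json" not in resp.headers.get("Content-Type", ""):
        raise RuntimeError(f"Unexpected content type from {url}")

    return resp.json()
```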

4

Normalizing

Activities:

  • Conversion of fetched data into Vulcan entities.

  • Update of existing entities and creation of new ones.

Possible Failures: Internal issues in syncing data.

Error Message: "Internal Error. Vulcan team is informed and will provide a solution soon."

Next Stage: Proceeds once all data is synced.
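Conceptually, normalization is an upsert: each fetched record is mapped to an internal entity, updating it if it already exists and creating it otherwise. The record and entity shapes below are assumptions for illustration and are not Vulcan's data model:

```python
# Existing entities keyed by a unique identifier (illustrative only).
existing_assets = {}

def normalize(raw_records):
    """Convert raw vendor records into entities, updating existing ones and creating new ones."""
    for record in raw_records:
        key = record["hostname"].lower()
        entity = {
            "hostname": key,
            "os": record.get("operating_system"),
            "last_seen": record.get("last_seen"),
        }
        if key in existing_assets:
            existing_assets[key].update(entity)  # update an existing entity
        else:
            existing_assets[key] = entity        # create a new entity

normalize([{"hostname": "Web-01", "operating_system": "Ubuntu 22.04"}])
print(existing_assets)
```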

5

Processing

Activities:

  • Asset deduping, aggregation, risk calculation, prioritization, SLA calculation.

  • Orchestration involving playbooks, automations, ticketing.

Possible Failures:

  • Internal processing issues.

Next Stage: Completion when all tenant data is processed.

Important Note: This stage is at the tenant level, not the connector level. Failures can be updated to 'Done' once processing is complete on the tenant.
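As a simplified illustration of one part of this stage, the sketch below dedupes assets reported by multiple connectors by merging records that share an identifier and keeping the highest risk score. The merge rules and record shapes are assumptions for illustration only:

```python
from collections import defaultdict

# Hypothetical asset records reported by different connectors (illustrative only).
reported = [
    {"id": "web-01", "source": "scanner-a", "risk": 7.1},
    {"id": "web-01", "source": "scanner-b", "risk": 8.4},
    {"id": "db-01", "source": "scanner-a", "risk": 5.0},
]

# Dedupe by identifier: keep the highest risk score and track every source.
deduped = {}
sources = defaultdict(set)
for record in reported:
    asset = deduped.setdefault(record["id"], {"id": record["id"], "risk": 0.0})
    asset["risk"] = max(asset["risk"], record["risk"])
    sources[record["id"]].add(record["source"])

for asset_id, asset in deduped.items():
    asset["sources"] = sorted(sources[asset_id])

print(deduped)
```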
