Understanding Data Lifecycle

About

Data ingestion in Vulcan's connectors involves accessing vendor platforms through API requests, fetching and mapping the returned data, and then processing it into meaningful insights. Syncing data through Vulcan Cyber Connectors follows a structured lifecycle that ensures seamless integration. Understanding the fetching, mapping, and processing steps shows how the platform transforms raw information into actionable intelligence: data is ingested and refined to provide accurate, up-to-date insights for informed decision-making, with business context applied to vulnerability prioritization.

Stage A: Syncing and Data Ingestion

This initial stage starts once a connector is set up. Vulcan accesses the vendor platform via API requests to fetch the relevant data (assets, vulnerabilities, and connections) and maps it to Vulcan's entities. The API requests and field mappings differ between connector types. On the backend, the process is divided into three steps:

  1. Data Fetching: First, the Vulcan Platform initiates API requests to access the relevant platform and retrieve raw data, encompassing assets, vulnerabilities, and connections. This data is then stored in Vulcan's storage for further processing.

  2. Pre-Mapping: Next, the Vulcan Platform transforms the raw API responses into Vulcan's standardized entities. This involves establishing relationships between resources and enriching these relationships for context.

  3. Dynamic Mapping: In this phase, the Vulcan Platform unpacks the now well-structured JSON data and meticulously maps it to specific fields within Vulcan's database.
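The three backend steps above can be sketched as a small pipeline. This is an illustrative Python sketch only; all function and field names are hypothetical and do not reflect Vulcan's actual API or schema.

```python
# Hypothetical sketch of the three ingestion steps: fetch, pre-map, dynamic map.
# All names (fetch_raw, pre_map, dynamic_map, field names) are illustrative.

def fetch_raw(vendor_response: dict) -> dict:
    """Step 1: keep the raw API payload as-is for storage and later reference."""
    return {"raw": vendor_response}

def pre_map(stored: dict) -> list[dict]:
    """Step 2: transform raw payloads into standardized entities,
    establishing relationships (here, asset -> vulnerability IDs)."""
    entities = []
    for host in stored["raw"].get("hosts", []):
        entities.append({
            "type": "asset",
            "name": host["hostname"],
            "vulns": [f["id"] for f in host.get("findings", [])],
        })
    return entities

def dynamic_map(entities: list[dict]) -> list[dict]:
    """Step 3: unpack the structured entities into specific database fields."""
    return [{"asset_name": e["name"], "vulnerability_ids": e["vulns"]}
            for e in entities]

# Example run with a mock vendor payload
payload = {"hosts": [{"hostname": "web-01",
                      "findings": [{"id": "CVE-2024-0001"}]}]}
rows = dynamic_map(pre_map(fetch_raw(payload)))
```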

Stage B: Post-Ingestion

Data isn't immediately visible in the user interface (UI) after ingestion. Instead, it undergoes a processing phase to transform it into actionable insights.

After the data is ingested and right before processing begins, the information goes through certain preliminary stages before it's ready for meaningful analysis:

  1. Raw Data Storage: The fetched data is stored in its raw form within Vulcan's storage, preserving the original state for reference.

  2. Standardized Data: The data is standardized into Vulcan's strict model, ensuring consistency and uniformity in its structure. The standardized data becomes available in Vulcan's database for processing.

  3. Status update: The data's processing status changes to "Waiting for processing," indicating that it's primed for the subsequent analysis.
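These preliminary stages can be pictured as a simple state transition on a sync batch. The sketch below is purely illustrative; the class, fields, and status labels are invented to mirror the stages described above, not Vulcan's internals.

```python
# Illustrative state tracking for a sync batch. The status strings are
# hypothetical labels mirroring the preliminary stages described above.

class SyncBatch:
    def __init__(self, raw_payload: list[str]):
        self.raw = raw_payload          # 1. raw data preserved as fetched
        self.standardized = None
        self.status = "ingested"

    def standardize(self) -> None:
        # 2. normalize into a strict, uniform model (here: just sorting)
        self.standardized = sorted(self.raw)
        # 3. mark the batch as ready for the processing phase
        self.status = "waiting_for_processing"

batch = SyncBatch(["vuln-b", "vuln-a"])
batch.standardize()
```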

Stage C: Processing

Once these preliminary stages are complete, processing begins. The main processing stages are:

  1. Deduplication to identify and remove duplicates;

  2. Aggregation to present summarized insights;

  3. Prioritization to adjust risk calculations based on the pre-defined business context and SLA.
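A toy sketch of the three processing stages, assuming invented CVE identifiers and a made-up "business impact" score purely for illustration:

```python
# Toy sketch of deduplication, aggregation, and prioritization.
# CVE IDs and impact scores are invented for illustration only.
from collections import Counter

findings = ["CVE-2024-0001", "CVE-2024-0001", "CVE-2024-0002"]

# 1. Deduplication: identify and remove duplicate findings
unique = set(findings)

# 2. Aggregation: summarized insight, e.g. occurrence counts per vulnerability
counts = Counter(findings)

# 3. Prioritization: order by a hypothetical business-impact weight
impact = {"CVE-2024-0001": 9.8, "CVE-2024-0002": 5.4}
prioritized = sorted(unique, key=lambda cve: impact[cve], reverse=True)
```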

Detailed Processing Steps

Vulcan's processing phase involves a series of sequential steps, each responsible for refining the data further.

  1. Asset Ingestion: Assets from various sources are processed individually, resulting in their creation or update within Vulcan's database.

  2. Unique Vulnerability Ingestion: Unique vulnerabilities are ingested, facilitating the identification of specific security concerns.

  3. Vulnerability Instances Ingestion: Vulnerabilities are integrated, capturing the full scope of potential threats.

  4. Risk and SLA Calculation: Risk and Service Level Agreement (SLA) calculations are performed, aiding in prioritization and resource allocation.

  5. Campaigns and Automation Handlers: Campaigns and automation handlers are executed, streamlining workflows and enhancing efficiency.
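Because the steps are strictly sequential, they can be modeled as an ordered pipeline. The step names below mirror the list above; the pipeline itself is a hypothetical sketch, not Vulcan's implementation.

```python
# Sequential processing pipeline sketch; step names mirror the documented
# order, and each step runs only after the previous one completes.

PIPELINE = [
    "asset_ingestion",
    "unique_vulnerability_ingestion",
    "vulnerability_instances_ingestion",
    "risk_and_sla_calculation",
    "campaigns_and_automation_handlers",
]

def run_pipeline(batch: dict) -> list[str]:
    completed = []
    for step in PIPELINE:        # strictly in order: each step waits
        completed.append(step)   # for the previous one to finish
    return completed

order = run_pipeline({"batch_id": 1})
```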

These processing steps unfold in a specific order, which you can follow on the System Status page.

Stage D: Impact on UI Visibility

While processing occurs, the data is gradually made visible in the UI. However, the information displayed might not provide the complete picture until processing is completed.

Once the data is fully processed and refined, the Vulcan Platform UI reflects accurate, up-to-date insights for informed decision-making.

Stage E: Update Mechanisms

Each connector syncs daily with the vendor's platform. In general, every time a sync runs, the following occurs:

  1. Asset and vulnerability data in the Vulcan Platform is updated daily with any changes fetched from the connector.

  2. The Vulcan Platform queries the connector daily for any new discoveries (Assets or Vulnerabilities) and updates the UI accordingly.
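The daily sync described above amounts to merging the fetched state into the current state: known records are updated and new discoveries are added. A minimal sketch, assuming hypothetical asset names and a made-up `risk` field:

```python
# Illustrative daily-sync merge: existing records are updated with changes,
# and newly discovered assets appear automatically. Field names are invented.

def daily_sync(current: dict, fetched: dict) -> dict:
    merged = dict(current)
    merged.update(fetched)   # updates known assets and adds new discoveries
    return merged

current = {"web-01": {"risk": 40}}
fetched = {"web-01": {"risk": 55},    # changed risk on a known asset
           "db-02": {"risk": 70}}     # newly discovered asset
state = daily_sync(current, fetched)
```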

Each connector has its own status update mechanism, described in the connector's user guide (for example, the Armis connector's guide details its specific update mechanism).
