Data Ingestion Overview and Popular Patterns

May 10th, 2024


In this blog, we give an overview of effective data ingestion. Learn about continuous vs. batch processing and homogeneous vs. heterogeneous data, and choose the right approach for actionable insights.

Abstract

In today's digital age, data stands as the lifeblood of our interconnected world.

Retailers, social media platforms, and e-commerce sites take advantage of data to understand their users, deliver personalized experiences, drive sales, develop better market strategies, reduce costs, and make quick, precise decisions.

Furthermore, in sectors such as finance, healthcare and logistics, data serves as the foundation for trust and reliability. Accurate data ensures transparent transactions, effective treatments, and an efficient supply chain.

However, data in businesses is often stored in many different systems, such as web and mobile applications, ERP systems, CRM systems, and POS systems, and spread across many different departments. Information in one department or application is not easily or fully accessible to other departments or applications.

How do we get this data from its scattered origins into a usable form for analysis? That's where data ingestion comes in. It's the secret sauce behind building a smooth-running data pipeline, the foundation for turning information into actionable intelligence.

This blog post delves into the world of data ingestion patterns, exploring established methods and emerging strategies for seamlessly integrating data from diverse sources into your analytics ecosystem.

How to choose the right Data Ingestion method for your use case?

Choosing the right data ingestion method requires careful consideration of several factors. Asking the right questions is key: Where is your data originating from? What's the intended storage destination, and how is the data currently structured within that system?

Full data loads might be suitable for initial setup, but incremental updates might be more efficient for ongoing data flows. Understanding the data format (e.g., CSV, JSON) and desired storage format (e.g., relational database, data lake) is crucial.

Finally, determining the optimal ingestion frequency – hourly, daily, or real-time – ensures you have the data you need when you need it. By addressing these questions, you can tailor your data ingestion approach for optimal efficiency and valuable insights.

1. Frequency - How often does the data ingestion job run?

Continuous ingestion (stream processing)

  • Ingest records continually and process sets of records as they arrive on the stream.
  • Use case:
    • Clickstream data from users of social media platforms (Twitter, TikTok, Facebook) and e-commerce sites (Amazon, eBay, Shopee, Lazada, etc.) is sent continuously. Data must be analyzed immediately to power real-time recommendation systems.
    • Real-time analytics on IoT devices, or fraud detection.
  • Pros:
    • Freshness: Streaming ingestion ensures that data is processed and analyzed as soon as it becomes available, providing up-to-date information for decision-making.
    • Low Latency: Streaming data ingestion enables low-latency processing, allowing for real-time insights and immediate responses to data events.
  • Cons:
    • Complexity: Implementing streaming data ingestion pipelines can be complex, requiring specialized infrastructure and expertise to manage real-time data processing.
    • Challenges:
      • Handling data quality issues when data arrives late or streaming sources go down.
      • Grouping records (micro-batching) to optimize throughput; a minimal consumer sketch follows this list.
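
To make the streaming pattern concrete, below is a minimal consumer sketch, assuming the kafka-python client and a local broker; the `clickstream` topic name, broker address, and processing step are hypothetical placeholders, not a prescribed setup.

```python
# Minimal continuous-ingestion sketch (assumes kafka-python and a local Kafka broker;
# the "clickstream" topic and broker address are placeholders).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",     # only read events that arrive from now on
    enable_auto_commit=True,
)

# The loop blocks and yields records as they arrive on the stream,
# so each event can be processed with low latency.
for event in consumer:
    click = event.value             # e.g. {"user_id": ..., "url": ..., "ts": ...}
    # Feed the event to a real-time consumer, e.g. a recommendation feature store.
    print(click.get("user_id"), click.get("url"))
```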

Regular ingestion (batch ingestion)

  • Ingest and process a batch of records as a dataset. Runs on demand, on a schedule, or in response to an event.

  • A batch job queries the source, transforms the data, and loads it into the pipeline (a minimal sketch follows this section).

  • Use case:

    • Sales transaction and order data from retailers across the world is sent periodically to a central location. Data is analyzed overnight, and reports are sent to branches in the morning.
  • Pros:

    • Simplicity: The finite data chunks make handling and debugging less complex.
    • Scalability: Batch data ingestion allows organizations to process large volumes of data efficiently by breaking it into manageable chunks. It facilitates horizontal scalability, enabling the addition of more resources to meet growing data demands.
    • Cost-effective: Since batch processing is not real-time, it can be scheduled for off-peak hours, reducing infrastructure costs associated with constant data flow.
    • Complex Transformations: Batch processing is well-suited for complex data transformations and analytics that require multiple stages of processing.
  • Cons:

    • Latency: Data processing experiences a delay since it's accumulated in batches before processing. This isn't ideal for real-time insights or applications requiring immediate action.
    • Complex system: requires a workflow orchestration layer to handle interdependencies between jobs and manage failures/retries within a set of jobs.
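
As referenced above, here is a minimal sketch of a scheduled batch job that queries a source database, applies a simple transformation, and loads the result into a staging file. It assumes a SQLite source with an `orders` table; the query, transformation, and target path are placeholders for your own pipeline.

```python
# Minimal batch-ingestion sketch: query the source, transform, load into the pipeline.
# Assumes a SQLite source with an "orders" table; names and paths are placeholders.
import csv
import sqlite3
from datetime import date, timedelta

def run_daily_batch(source_db: str, target_csv: str) -> None:
    yesterday = (date.today() - timedelta(days=1)).isoformat()

    # 1. Extract: query yesterday's orders from the source system
    conn = sqlite3.connect(source_db)
    try:
        rows = conn.execute(
            "SELECT order_id, amount, currency FROM orders WHERE order_date = ?",
            (yesterday,),
        ).fetchall()
    finally:
        conn.close()

    # 2. Transform: e.g. normalize amounts to integer cents
    transformed = [(oid, int(round(amount * 100)), cur) for oid, amount, cur in rows]

    # 3. Load: write the batch to a staging file picked up by downstream jobs
    with open(target_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["order_id", "amount_cents", "currency"])
        writer.writerows(transformed)

# Typically triggered by a scheduler (cron, Airflow, etc.), e.g. nightly:
# run_daily_batch("retail.db", "staging/orders.csv")
```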

2. Data type - Is the input data type the same as the output data type?

Homogeneous data ingestion

  • These patterns involve transferring data from the source to the destination without significantly altering its format or storage structure. The main goals here often include achieving fast data transfer, ensuring data security through encryption in transit and at rest, maintaining data integrity, and automating the process for continuous ingestion.
  • These patterns usually cover the "extract and load" part of extract, transform, load (ETL) and can serve as an intermediate step before transformations are applied after ingestion.
  • Use case:
    • Relational data ingestion between the same data engines. This use case can apply to migrating your on-premises workload into the cloud database.
      • examples: Microsoft SQL Server to Amazon RDS for SQL Server or SQL Server on Amazon EC2, or Oracle to Amazon RDS for Oracle
    • Data file ingestion from on-premises storage to a cloud data lake
      • example: move CSV or Parquet data from Hadoop HDFS to cloud object storage (Amazon S3, Google Cloud Storage, or Azure Blob Storage) to build a new data lake capability in the cloud (see the sketch below).
    • Large objects (BLOB, photos, videos) ingestion into cloud object storage.
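
A minimal sketch of the file-ingestion case, assuming boto3 with valid AWS credentials; the local directory, bucket, and prefix are hypothetical. The files land in Amazon S3 exactly as they are, with no change to their format.

```python
# Minimal homogeneous-ingestion sketch: copy Parquet files as-is from local
# (or HDFS-mounted) storage into cloud object storage.
# Assumes boto3 and AWS credentials; bucket and path names are placeholders.
import pathlib
import boto3

s3 = boto3.client("s3")

def upload_directory(local_dir: str, bucket: str, prefix: str) -> None:
    for path in pathlib.Path(local_dir).rglob("*.parquet"):
        key = f"{prefix}/{path.name}"
        # No transformation: the file keeps its original format in the data lake.
        s3.upload_file(str(path), bucket, key)
        print(f"uploaded {path} -> s3://{bucket}/{key}")

# upload_directory("/data/exports", "my-data-lake-bucket", "raw/sales")
```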

Heterogeneous data ingestion

  • These are scenarios in which data undergoes transformations during its ingestion into the destination storage system. These transformations range from basic adjustments, such as altering the data type or format to meet the requirements of the destination, to more intricate processes, such as employing machine learning algorithms to generate new data attributes. This is where data engineers and ETL developers spend most of their time: cleansing, standardizing, formatting, and structuring the data according to business and technological specifications.
  • Use case:
    • Relational data ingestion between different data engines
      • example: MySQL to Elasticsearch to support users' search query patterns (see the sketch below).
    • Streaming data ingestion from data sources like Internet of Things (IoT) devices or log files to a central data lake.
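
A minimal sketch of the MySQL-to-Elasticsearch example above, assuming the pymysql and elasticsearch (8.x) Python clients; hosts, credentials, and the `products` table/index are placeholders. The point is that each relational row is reshaped into a search-friendly document during ingestion.

```python
# Minimal heterogeneous-ingestion sketch: read rows from MySQL, transform them
# into documents, and index them into Elasticsearch for search-style queries.
# Assumes pymysql and the elasticsearch 8.x client; connection details are placeholders.
import pymysql
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
db = pymysql.connect(host="localhost", user="etl", password="secret", database="shop")

with db.cursor() as cursor:
    cursor.execute("SELECT id, name, description, price FROM products")
    for row_id, name, description, price in cursor.fetchall():
        # Transformation step: relational row -> search-friendly JSON document
        doc = {
            "name": name,
            "description": description,
            "price": float(price),
            "name_tokens": name.lower().split(),  # simple derived attribute
        }
        es.index(index="products", id=row_id, document=doc)

db.close()
```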

3. Data volume - How much data gets ingested every time?

Full load

  • involves transferring the entire dataset from the source to the destination every time the ingestion process is executed.
  • the process will delete the entire contents of a target dataset and insert the new dataset.
  • This approach is straightforward to implement and does not require complex logic to select data, but it can be time-consuming and resource-intensive, especially for large datasets.
  • Use case:
    • Initial data migration: When setting up a new data warehouse or system, a full load is often used to transfer all historical data.
    • Periodic refresh: In scenarios where data changes infrequently or where historical data needs to be refreshed periodically, a full load may be sufficient.
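
A minimal full-load sketch, using SQLite on both ends purely for illustration; database and table names are placeholders. Every run wipes the target table and reloads the entire source dataset.

```python
# Minimal full-load sketch: delete the target contents, then insert the whole
# source dataset. SQLite is used on both ends only to keep the example self-contained.
import sqlite3

def full_load(source_db: str, target_db: str) -> None:
    src = sqlite3.connect(source_db)
    tgt = sqlite3.connect(target_db)
    try:
        # Extract the entire dataset from the source
        rows = src.execute("SELECT customer_id, name, country FROM customers").fetchall()

        tgt.execute(
            "CREATE TABLE IF NOT EXISTS customers (customer_id INTEGER, name TEXT, country TEXT)"
        )
        tgt.execute("DELETE FROM customers")  # delete the entire target contents
        tgt.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)  # insert the new dataset
        tgt.commit()
    finally:
        src.close()
        tgt.close()
```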

Incremental load

  • involves transferring only the new or modified data since the last ingestion process.
  • Incremental loads are significantly faster than full loads, as they only process a subset of the data. This also reduces network and processing demands, leading to lower infrastructure and operational costs.
  • Implementing an incremental load requires the identification of new or updated records in the source system since the last extraction. Some of the approaches:
    • Column-Based Incremental Load
      • Numeric Primary Key Column
      • Date Time Column
    • Log-Based Incremental Load
    • Hash-Based Incremental Load — For Entire Table and Each Row
    • Slowly Changing Dimensions (SCD)
    • Change Data Capture (CDC)
  • Use Cases:
    • Real-time analytics: In applications where timely insights are critical, such as monitoring systems or financial trading platforms, incremental load ensures that the latest data is available for analysis.
    • Continuous data integration: In environments with rapidly changing data, such as social media feeds or IoT sensors, incremental load allows for continuous ingestion of new data without overwhelming resources.
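
A minimal sketch of a column-based incremental load using a date-time ("high watermark") column, again with SQLite purely for illustration; the `orders` table, `updated_at` column, and watermark store are placeholders. Only rows modified since the last successful run are transferred, and the watermark is advanced afterwards.

```python
# Minimal incremental-load sketch using a date-time watermark column.
# SQLite keeps the example self-contained; table and column names are placeholders.
import sqlite3

def incremental_load(source_db: str, target_db: str) -> None:
    src = sqlite3.connect(source_db)
    tgt = sqlite3.connect(target_db)
    try:
        # Read the watermark left by the previous run (or start from the epoch)
        tgt.execute("CREATE TABLE IF NOT EXISTS watermark (last_ts TEXT)")
        row = tgt.execute("SELECT last_ts FROM watermark").fetchone()
        last_ts = row[0] if row else "1970-01-01T00:00:00"

        # Extract only records created or modified since the previous run
        rows = src.execute(
            "SELECT order_id, amount, updated_at FROM orders "
            "WHERE updated_at > ? ORDER BY updated_at",
            (last_ts,),
        ).fetchall()
        if not rows:
            return

        tgt.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)"
        )
        # Upsert so updated source rows replace their previous version in the target
        tgt.executemany(
            "INSERT INTO orders VALUES (?, ?, ?) "
            "ON CONFLICT(order_id) DO UPDATE SET "
            "amount = excluded.amount, updated_at = excluded.updated_at",
            rows,
        )
        # Advance the watermark to the newest timestamp seen in this batch
        tgt.execute("DELETE FROM watermark")
        tgt.execute("INSERT INTO watermark VALUES (?)", (rows[-1][2],))
        tgt.commit()
    finally:
        src.close()
        tgt.close()
```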

Conclusion

In conclusion, data ingestion plays a crucial role in modern data management, serving as the foundation for downstream analytics, decision-making, and business intelligence. By understanding and leveraging different data ingestion patterns, organizations can effectively extract, transform, and load data from diverse sources into their storage systems, thereby unlocking its full potential.

Tags
Artificial Intelligence
Technology