Implementing a Dimensional Data Warehouse with Databricks SQL: Part 2


As organizations consolidate analytics workloads on Databricks, they often need to adapt traditional data warehouse techniques. This series explores how to implement dimensional modeling, specifically star schemas, on Databricks. The first blog focused on schema design. This blog walks through ETL pipelines for dimension tables, including the Slowly Changing Dimension (SCD) Type-1 and Type-2 patterns. The final blog will show how to build ETL pipelines for fact tables.

Slowly Changing Dimensions (SCD)

In the previous blog, we defined our star schema, including a fact table and its related dimensions. We highlighted one dimension table in particular, DimCustomer, as shown here (with some attributes removed to conserve space):
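A minimal sketch of what that definition looks like follows; the demographic columns shown are a small, assumed subset of the full attribute list:

  CREATE TABLE IF NOT EXISTS dimcustomer (
    CustomerKey BIGINT GENERATED ALWAYS AS IDENTITY, -- surrogate key
    CustomerAlternateKey STRING NOT NULL,            -- natural (business) key
    MaritalStatus STRING,
    YearlyIncome DECIMAL(19,4),
    NumberChildrenAtHome TINYINT,
    HouseOwnerFlag STRING,
    StartDate TIMESTAMP,    -- versioning metadata
    EndDate TIMESTAMP,      -- versioning metadata
    IsLateArriving BOOLEAN  -- versioning metadata
  )
  CLUSTER BY (CustomerKey); -- liquid clustering on the surrogate key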

The last three fields in this table, i.e., StartDate, EndDate and IsLateArriving, represent metadata that assists us with versioning records. As a given customer's income, marital status, home ownership, number of children at home, or other characteristics change, we will want to create new records for that customer so that facts such as our online sales transactions in FactInternetSales are associated with the right representation of that customer. The natural (aka business) key, CustomerAlternateKey, will be the same across these records but the metadata will differ, allowing us to know the period for which that version of the customer was valid, as will the surrogate key, CustomerKey, allowing our facts to link to the right version.

NOTE: Because the surrogate key is generally used to link facts and dimensions, dimension tables are often clustered based on this key. Unlike traditional relational databases that utilize b-tree indexes on sorted records, Databricks implements a unique clustering approach known as liquid clustering. While the specifics of liquid clustering are outside the scope of this blog, we consistently use the CLUSTER BY clause on the surrogate key of our dimension tables during their definition to leverage this feature effectively.

This pattern of versioning dimension records as attributes change is known as the Type-2 Slowly Changing Dimension (or simply Type-2 SCD) pattern. The Type-2 SCD pattern is preferred for recording dimension data in the classic dimensional methodology. However, there are other ways to deal with changes in dimension records.

One of the most common ways to deal with changing dimension values is to update existing records in place. Only one version of the record is ever created, so that the business key remains the unique identifier for the record. For various reasons, not the least of which are performance and consistency, we still implement a surrogate key and link our fact records to these dimensions on those keys. However, the StartDate and EndDate metadata fields that describe the time periods over which a given dimension record is considered active are not needed. This is known as the Type-1 SCD pattern. The Promotion dimension in our star schema provides a good example of a Type-1 dimension table implementation:
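A comparable sketch of the DimPromotion definition, again assuming a subset of the AdventureWorksDW attribute list, shows that no versioning metadata is required:

  CREATE TABLE IF NOT EXISTS dimpromotion (
    PromotionKey BIGINT GENERATED ALWAYS AS IDENTITY, -- surrogate key
    PromotionAlternateKey INT NOT NULL,               -- natural (business) key
    EnglishPromotionName STRING,
    DiscountPct FLOAT,
    EnglishPromotionType STRING,
    EnglishPromotionCategory STRING
  )
  CLUSTER BY (PromotionKey);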

But what about the IsLateArriving metadata field seen in the Type-2 Customer dimension but missing from the Type-1 Promotion dimension? This field is used to flag records as late arriving. A late arriving record is one for which the business key shows up during a fact ETL cycle, but there is no record for that key found during prior dimension processing. In the case of Type-2 SCDs, this field is used to denote that when the data for a late arriving record is first observed in a dimension ETL cycle, the record should be updated in place (just as in the Type-1 SCD pattern) and then versioned from that point forward. In the case of Type-1 SCDs, this field isn't necessary because the record will be updated in place regardless.

NOTE: The Kimball Group recognizes additional SCD patterns, most of which are variations and combinations of the Type-1 and Type-2 patterns. Because Type-1 and Type-2 SCDs are the most frequently implemented of these patterns and the techniques used with the others are closely related to what's employed with these two, we're limiting this blog to just these two dimension types. For more information about the eight types of SCDs recognized by the Kimball Group, please see the Slowly Changing Dimension Techniques section of this document.

Implementing the Type-1 SCD Pattern

With data being updated in place, the Type-1 SCD workflow is the more straightforward of the two dimension ETL patterns. To support these types of dimensions, we simply:

  1. Extract the required data from our operational system(s)
  2. Perform any required data cleansing operations
  3. Compare our incoming records to those already in the dimension table
  4. Update any existing records where incoming attributes differ from what's already recorded
  5. Insert any incoming records that do not have a corresponding record in the dimension table

To illustrate a Type-1 SCD implementation, we'll define the ETL for the ongoing population of the DimPromotion table.

Step 1: Extract data from an operational system

Our first step is to extract the data from our operational system. As our data warehouse is patterned after the AdventureWorksDW sample database provided by Microsoft, we're using the closely related AdventureWorks (OLTP) sample database as our source. This database has been deployed to an Azure SQL Database instance and made accessible within our Databricks environment via a federated query. Extraction is then facilitated with a simple query (with some fields redacted to conserve space), with the query results persisted to a table in our staging schema (which is made accessible only to the data engineers in our environment through permission settings not shown here). This is but one of many ways we can access source system data in this environment:
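A sketch of that extraction follows; the adventureworks federated catalog and the staging schema are assumed names used for illustration:

  CREATE OR REPLACE TABLE staging.promotion AS
  SELECT
    SpecialOfferID AS PromotionAlternateKey,
    Description AS EnglishPromotionName,
    DiscountPct,
    Type AS EnglishPromotionType,
    Category AS EnglishPromotionCategory
  FROM adventureworks.sales.specialoffer; -- federated Azure SQL Database source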

Step 2: Compare incoming records to those in the table

Assuming we’ve got no extra information cleaning steps to carry out (which we might implement with an UPDATE or one other CREATE TABLE AS assertion),  we will then deal with our dimension information replace/insert operations in a single step utilizing a MERGE assertion, matching our staged information and dimension information on the enterprise key:

One important thing to note about the statement, as it's been written here, is that we update any existing records whenever a match is found between the staged and published dimension table records. We could add additional criteria to the WHEN MATCHED clause to limit updates to those instances where a record in staging has different information from what's found in the dimension table, but given the relatively small number of records in this particular table, we've elected to use the leaner logic shown here. (We will use the additional WHEN MATCHED logic with DimCustomer, which contains far more records.)

The Type-2 SCD Pattern

The Type-2 SCD pattern is a bit more complex. To support these types of dimensions, we must:

  1. Extract the required data from our operational system(s)
  2. Perform any required data cleansing operations
  3. Update any late-arriving member records in the target table
  4. Expire any existing records in the target table for which new versions are found in staging
  5. Insert any new (or new versions of) records into the target table

Step 1: Extract and cleanse data from a source system

As in the Type-1 SCD pattern, our first steps are to extract and cleanse data from the source system. Using the same approach as above, we issue a federated query and persist the extracted data to a table in our staging schema:
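A sketch of this step follows. Because the AdventureWorks OLTP database stores several customer demographics in XML, we assume a hypothetical cleansed view, customer_profile, that already exposes them as columns:

  CREATE OR REPLACE TABLE staging.customer AS
  SELECT
    AccountNumber AS CustomerAlternateKey,
    MaritalStatus,
    YearlyIncome,
    NumberChildrenAtHome,
    HouseOwnerFlag
  FROM adventureworks.sales.customer_profile; -- hypothetical cleansed source view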

Step 2: Compare to the dimension table

With this data landed, we can now compare it to our dimension table in order to make any required data modifications. The first of these is to update in place any records flagged as late arriving by prior fact table ETL processes. Please note that these updates are limited to those records flagged as late arriving, and that the IsLateArriving flag is reset with the update so that these records behave as normal Type-2 SCDs moving forward:
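Sketched with the same assumed names as above, the in-place update might look like this:

  MERGE INTO dimcustomer AS tgt
  USING staging.customer AS src
    ON tgt.CustomerAlternateKey = src.CustomerAlternateKey
  WHEN MATCHED AND tgt.IsLateArriving = TRUE THEN
    UPDATE SET
      tgt.MaritalStatus = src.MaritalStatus,
      tgt.YearlyIncome = src.YearlyIncome,
      tgt.NumberChildrenAtHome = src.NumberChildrenAtHome,
      tgt.HouseOwnerFlag = src.HouseOwnerFlag,
      tgt.IsLateArriving = FALSE; -- reset so the record versions normally from here on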

Step 3: Expire versioned records

The next set of data modifications is to expire any records that need to be versioned. It's important that the EndDate value we set for these matches the StartDate of the new record versions we'll insert in the next step. For that reason, we'll set a timestamp variable to be used across these two steps:
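In Databricks SQL, a session variable works well for this; the variable name here is our own:

  DECLARE OR REPLACE VARIABLE vVersionDate TIMESTAMP DEFAULT current_timestamp();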

NOTE: Depending on the data available to you, you may elect to use an EndDate value originating from the source system, in which case you wouldn't necessarily declare a variable as shown here.
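The expiration itself can then be sketched as a MERGE that matches on the business key plus the current (unexpired) version, with change detection in the WHEN MATCHED clause; the column list is the same assumed subset used above:

  MERGE INTO dimcustomer AS tgt
  USING staging.customer AS src
    ON tgt.CustomerAlternateKey = src.CustomerAlternateKey
    AND tgt.EndDate IS NULL -- match only the current version of each customer
  WHEN MATCHED AND NOT (
        equal_null(tgt.MaritalStatus, src.MaritalStatus)
    AND equal_null(tgt.YearlyIncome, src.YearlyIncome)
    AND equal_null(tgt.NumberChildrenAtHome, src.NumberChildrenAtHome)
    AND equal_null(tgt.HouseOwnerFlag, src.HouseOwnerFlag)
  ) THEN
    UPDATE SET tgt.EndDate = vVersionDate; -- expire; new versions start at this timestamp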

Please note the additional criteria used in the WHEN MATCHED clause. Because we're only performing one operation with this statement, it would be possible to move this logic into the ON clause, but we kept it separate from the core matching logic, where we're matching to the current version of the dimension record, for readability and maintainability.

As part of this logic, we're making heavy use of the equal_null() function. This function returns TRUE when the first and second values are the same or both NULL; otherwise, it returns FALSE. This provides an efficient way to look for changes on a column-by-column basis. For more details on how Databricks supports NULL semantics, please refer to this document.

At this stage, any prior versions of records in the dimension table that have expired have been end-dated.

Step 4: Insert new records

We can now insert new records, both the truly new and the newly versioned:
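A sketch of this step, continuing with the same assumed names and the session variable declared above:

  MERGE INTO dimcustomer AS tgt
  USING staging.customer AS src
    ON tgt.CustomerAlternateKey = src.CustomerAlternateKey
    AND tgt.EndDate IS NULL -- staged records with no unexpired match get inserted
  WHEN NOT MATCHED THEN
    INSERT (CustomerAlternateKey, MaritalStatus, YearlyIncome,
            NumberChildrenAtHome, HouseOwnerFlag, StartDate, EndDate, IsLateArriving)
    VALUES (src.CustomerAlternateKey, src.MaritalStatus, src.YearlyIncome,
            src.NumberChildrenAtHome, src.HouseOwnerFlag,
            vVersionDate, NULL, FALSE); -- StartDate aligns with the expired versions' EndDate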

As before, this could have been implemented using an INSERT statement, but the outcome is the same. With this statement, we have identified any records in the staging table that don't have an unexpired corresponding record in the dimension table. These records are simply inserted with a StartDate value consistent with any expired records that may exist in this table.

Next steps: implementing the fact table ETL

With the dimensions implemented and populated with data, we can now turn our attention to the fact tables. In the next blog, we'll demonstrate how the ETL for these tables can be implemented.

To learn more about Databricks SQL, visit our website or read the documentation. You can also check out the product tour for Databricks SQL. If you want to migrate your existing warehouse to a high-performance, serverless data warehouse with a great user experience and lower total cost, then Databricks SQL is the solution; try it for free.
