Transactional data models are designed to support day-to-day business operations by organizing data in a normalized table structure. As normalized data structures avoid data redundancies as much as possible, they ensure that the typical CRUD (Create, Read, Update, and Delete) operations of your application can be easily supported. However, when it comes to analytically consuming this data (e.g. in a dashboard widget), these models can be suboptimal because analytical queries place an inherently different load on the database.
Transactional data models are commonly used in applications to capture and store data related to business transactions. These models consist of multiple linked tables, each storing discrete and uniquely identifiable entity values. The purpose of a transactional data model is to maintain data integrity and ensure consistency in the format of the stored data.
In a transactional data model, each table represents a specific entity or concept, such as customers, orders, products, or transactions. These tables are designed to store all the relevant information about these entities, such as their attributes and relationships with other entities.
Transactional data models are optimized for transactional workloads, and are well-suited for consistently capturing and processing real-time data.
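To make this concrete, the sketch below sets up a small normalized schema in SQLite. The table and column names (customers, products, orders, order_items) are hypothetical, chosen only for this illustration: each entity gets its own table, and relationships are expressed through keys.

```python
import sqlite3

# A minimal sketch of a normalized transactional schema (hypothetical table
# and column names): each entity lives in its own table, linked by keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE
);

CREATE TABLE products (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    unit_price  REAL NOT NULL
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    ordered_at  TEXT NOT NULL          -- ISO-8601 timestamp
);

CREATE TABLE order_items (
    order_id    INTEGER NOT NULL REFERENCES orders(order_id),
    product_id  INTEGER NOT NULL REFERENCES products(product_id),
    quantity    INTEGER NOT NULL,
    PRIMARY KEY (order_id, product_id)
);
""")
```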
Transactional and analytical workloads have different data access patterns.
In short, typical transactional loads (OLTP or Online Transaction Processing) involve manipulating a small number of rows with most or all of their columns (e.g. creating a row, retrieving one or a few rows, updating a row, etc.).
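As a rough illustration, the sketch below runs the typical CRUD statements against a hypothetical orders table in SQLite; note how each statement touches only one row, but reads or writes most of its columns.

```python
import sqlite3

# A minimal sketch of typical OLTP operations: each statement touches one row
# (or a handful of rows) but reads/writes most of its columns.
# Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    status      TEXT NOT NULL,
    total       REAL NOT NULL
)""")

# Create: insert a single new order.
conn.execute(
    "INSERT INTO orders (order_id, customer_id, status, total) VALUES (?, ?, ?, ?)",
    (1, 42, "pending", 99.50),
)

# Read: fetch one order by its primary key.
order = conn.execute("SELECT * FROM orders WHERE order_id = ?", (1,)).fetchone()

# Update: change a single row.
conn.execute("UPDATE orders SET status = ? WHERE order_id = ?", ("shipped", 1))

# Delete: remove a single row.
conn.execute("DELETE FROM orders WHERE order_id = ?", (1,))
conn.commit()
```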

Analytical loads (OLAP or Online Analytical Processing), on the other hand, focus on analyzing a small subset of columns across a large number of rows (e.g. summing numeric values from a column, grouped by a category column, and filtered by a date column).
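The sketch below shows what such a query could look like, using a hypothetical sales table in SQLite: it scans many rows but only needs a handful of columns.

```python
import sqlite3

# A minimal sketch of a typical OLAP query: it scans many rows but only needs
# a few columns (amount, category, sold_at). Table and column names are
# hypothetical, as is the sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (
    sale_id   INTEGER PRIMARY KEY,
    category  TEXT NOT NULL,
    amount    REAL NOT NULL,
    sold_at   TEXT NOT NULL
);
INSERT INTO sales (category, amount, sold_at) VALUES
    ('books',       12.50, '2024-01-15'),
    ('books',        8.00, '2024-02-03'),
    ('electronics', 199.0, '2024-02-10');
""")

# Sum a numeric column, grouped by a category column, filtered by a date column.
rows = conn.execute("""
    SELECT category, SUM(amount) AS total_amount
    FROM sales
    WHERE sold_at >= '2024-01-01'
    GROUP BY category
    ORDER BY total_amount DESC
""").fetchall()
print(rows)  # [('electronics', 199.0), ('books', 20.5)]
```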

Performing analytical queries on a transactional data model can be challenging and resource-intensive due to the need for multiple join operations. Typical analytical queries slice and dice all sorts of metrics, so optimizing your transactional data source for this load isn't always straightforward: it can be hard or even impossible to predict all the queries that might need to be handled.
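To illustrate, the sketch below answers a simple analytical question ("revenue per category per month") against a small, hypothetical normalized schema; even this toy example already needs two joins, and real transactional models typically require many more.

```python
import sqlite3

# A minimal sketch of why analytical questions get expensive on a normalized
# model: answering "revenue per product category per month" already requires
# joining three tables. Schema, table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE products    (product_id INTEGER PRIMARY KEY, category TEXT, unit_price REAL);
CREATE TABLE orders      (order_id INTEGER PRIMARY KEY, ordered_at TEXT);
CREATE TABLE order_items (order_id INTEGER, product_id INTEGER, quantity INTEGER);

INSERT INTO products    VALUES (1, 'books', 10.0), (2, 'electronics', 200.0);
INSERT INTO orders      VALUES (1, '2024-01-15'), (2, '2024-02-03');
INSERT INTO order_items VALUES (1, 1, 3), (1, 2, 1), (2, 1, 2);
""")

# Revenue per category per month: already two joins on this tiny schema.
rows = conn.execute("""
    SELECT strftime('%Y-%m', o.ordered_at) AS month,
           p.category,
           SUM(oi.quantity * p.unit_price) AS revenue
    FROM order_items oi
    JOIN orders   o ON o.order_id   = oi.order_id
    JOIN products p ON p.product_id = oi.product_id
    GROUP BY month, p.category
""").fetchall()
print(rows)
# e.g. [('2024-01', 'books', 30.0), ('2024-01', 'electronics', 200.0), ('2024-02', 'books', 20.0)]
```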
From time to time, it might be tempting to throw things together to get insights visualized as soon as possible. This typically results in cut corners, like pre-aggregating data and exposing only the aggregates to Luzmo, which often leads to hard-to-scale solutions.
One of the few good reasons to pre-aggregate your data before exposing it to Luzmo is when you have extremely large amounts of data and the granularity of (historical) data is less important. A typical use case is querying historical data performantly, where sales from several years ago might not require the same order-level granularity as insights on recent sales.
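A minimal sketch of such a roll-up is shown below, assuming a hypothetical sales table: order-level detail older than a cut-off date is aggregated into one row per month and category, while recent data keeps its full granularity.

```python
import sqlite3

# A minimal sketch of pre-aggregating historical data: detail rows older than
# a cut-off date are rolled up into one row per month and category, while
# recent data keeps its full granularity. Names and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (
    sale_id  INTEGER PRIMARY KEY,
    category TEXT NOT NULL,
    amount   REAL NOT NULL,
    sold_at  TEXT NOT NULL
);
INSERT INTO sales (category, amount, sold_at) VALUES
    ('books', 12.5, '2021-03-10'),
    ('books',  7.5, '2021-03-22'),
    ('books',  9.0, '2024-05-01');

-- Roll up only the historical part of the data into a monthly summary table.
CREATE TABLE sales_monthly AS
    SELECT strftime('%Y-%m', sold_at) AS month,
           category,
           SUM(amount)                AS total_amount,
           COUNT(*)                   AS sale_count
    FROM sales
    WHERE sold_at < '2024-01-01'
    GROUP BY month, category;
""")

print(conn.execute("SELECT * FROM sales_monthly").fetchall())
# [('2021-03', 'books', 20.0, 2)]
```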
To ensure easy-to-consume and performant customer-facing insights, we strongly recommend investing the proper resources into designing and setting up a scalable analytical data model. In the next article, we'll introduce you to a simple but well-proven analytical data structure: the "Star schema".