Golden Record

In this last tutorial of our Data Governance series, we look at Master Data Management, the last of our four pillars. Master data is the core data of a company; it should be clean, accurate and organised in a clear data model.

What is the goal of Master Data Management?

The goal is to have exactly one record for each key data asset within the company, for instance the data about a customer or a supplier. Each customer should exist exactly once. Many companies have their customer data spread across different systems and therefore struggle to connect those systems. If a customer walks into a store, the sales agents often have to consult several CRM tools to get a holistic picture of that customer. As a result, the company never fully understands its customers.

To reach this goal, data needs to be harmonised across the company. Reducing duplicate entries and finding the “golden record” is a key challenge in MDM: all data about one customer should be connected and kept in one place. Today, this is often called “Customer 360”. Achieving it, however, is not easy at all.

How to find the “Golden Record”?

Basically, there are several options to find the golden record within a dataset. Let’s imagine we have the following dataset; each of the entries is exactly the same person, but the name is written differently in each one:

Name       | Social Security Number | Passport   | Matching Group ID
Mario Meir | 123-45-6789             |            |
Meir Mario | 123-45-6789             | P 123456 M |
M. Meir    |                         | P 123456 M |
How to find the golden record in a dataset

In this dataset, we can see a match on the social security number and one on the passport. We can therefore apply hierarchical matching: first, we match on those attributes that are most likely unique. Normally, the social security number is unique, as is the passport ID. Matching on the social security number merges the first two entries into one matching group; a second pass on the passport would then pull the third entry into the same group as well, leaving a single golden record. The result of the first pass is represented in matching groups below (a minimal code sketch of the two-pass matching follows the table):

Name       | Social Security Number | Passport   | Matching Group ID
Mario Meir | 123-45-6789             |            | 1
Meir Mario | 123-45-6789             | P 123456 M | 1
M. Meir    |                         | P 123456 M |
Hierarchical matching
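
The following Python sketch illustrates the idea of such a two-pass hierarchical match. The record structure, field names and grouping logic are illustrative assumptions, not the interface of any particular MDM tool:

```python
# Minimal sketch of hierarchical matching: assign records to matching groups
# by comparing identifiers in order of reliability (SSN first, then passport).
records = [
    {"name": "Mario Meir", "ssn": "123-45-6789", "passport": None},
    {"name": "Meir Mario", "ssn": "123-45-6789", "passport": "P 123456 M"},
    {"name": "M. Meir",    "ssn": None,          "passport": "P 123456 M"},
]

def hierarchical_match(records, keys=("ssn", "passport")):
    group_of = {}        # maps an identifier value to a matching group id
    next_group = 1
    for record in records:
        record["group"] = None
        for key in keys:                           # most reliable identifier first
            value = record.get(key)
            if value is not None and value in group_of:
                record["group"] = group_of[value]  # identifier seen before: reuse its group
                break
        if record["group"] is None:                # no match found: open a new group
            record["group"] = next_group
            next_group += 1
        for key in keys:                           # register this record's identifiers
            value = record.get(key)
            if value is not None:
                group_of.setdefault(value, record["group"])
    return records

for r in hierarchical_match(records):
    print(r["name"], "-> group", r["group"])
```

Running this on the three entries above assigns all of them to group 1, which is exactly the golden record we are after.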

What else can be done to increase the quality of your Master Data?

In addition to hierarchical matching, several other techniques are available. The most common one is manual matching, where employees search for duplicated data and merge it by hand. A better approach is to match data via machine learning and combine it with manual matching for the cases where the model is unsure; a simplified sketch follows below.
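
As a loose illustration of automated matching, the sketch below uses plain string similarity (Python’s difflib) instead of a trained model; a real machine-learning matcher would learn from labelled duplicate pairs, but the workflow of auto-matching confident pairs and routing borderline ones to manual review would look similar. The names and thresholds are assumptions:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a similarity score between 0 and 1 for two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = ["Mario Meir", "Meir Mario", "M. Meir", "Maria Mayer"]

# Compare every pair of names; auto-match confident pairs and
# send borderline pairs to manual review.
AUTO_MATCH, REVIEW = 0.85, 0.6
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        score = similarity(names[i], names[j])
        if score >= AUTO_MATCH:
            print(f"auto-match:    {names[i]} ~ {names[j]} ({score:.2f})")
        elif score >= REVIEW:
            print(f"manual review: {names[i]} ~ {names[j]} ({score:.2f})")
```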


Next to Data Security & Privacy and Data Quality Management, Data Access and Search is another pillar of huge importance. This topic focuses on finding and accessing data within your data assets. Most large enterprises have a lot of data at their fingertips, but different business units don’t know where and how to find it. In this tutorial, we will look at how to solve this issue.

What are the ingredients for successful Data Access and Search?

Several preconditions need to be fulfilled in order to make data accessible. One of them is to have data security and privacy solved: if you want to make data accessible at large scale, it is very important to ensure that users can only access the data they are entitled to. All users should therefore see the data assets of the company via a data catalog, but not the data itself. In this catalog, people should be able to browse the different data assets available in the company and start asking further questions.

A good data catalog constantly checks the underlying data for updates and modifications and reflects them in the catalog itself. In addition to the requirements mentioned before, the data catalog tracks the different data quality measures described in the previous tutorial.
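
A minimal sketch of such an automated metadata refresh, assuming the data sits in CSV files and using pandas; the file layout, field names and metrics are illustrative assumptions:

```python
import os
from datetime import datetime, timezone

import pandas as pd

def refresh_catalog_entry(path: str) -> dict:
    """Recompute basic catalog metadata for one dataset stored as a CSV file."""
    df = pd.read_csv(path)
    return {
        "dataset": os.path.basename(path),
        "rows": len(df),
        "columns": list(df.columns),
        # share of filled cells as a simple quality indicator
        "completeness": float(df.notna().mean().mean()),
        "last_modified": datetime.fromtimestamp(
            os.path.getmtime(path), tz=timezone.utc
        ).isoformat(),
    }

# Hypothetical refresh of a single catalog entry:
# print(refresh_catalog_entry("customers.csv"))
```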

What should be inside a data catalog?

Based on the points mentioned above, a data catalog contains a lot of data about data. For each of the available data assets, the catalog should provide a description and several pieces of information (a sketch of such a catalog entry follows the list):

  • Title. Title of the dataset.
  • Description. What this dataset is about.
  • Categories. Tags, to enable search.
  • Business Unit. The unit maintaining the dataset (e.g. Marketing).
  • Data Owner. The person in charge of maintaining the dataset.
  • Data Producer. The system that produces the data.
  • Data Steward. The person taking care of the dataset, if this is not the data owner.
  • Timespan. The period from when to when the data was recorded.
  • Data refresh interval. If the data is not available in real time, an indication of how often it gets refreshed.
  • Quality metrics. Indicators of data quality.
  • Data Access or Sample Data. Information on how to access the data, or a sample dataset to explore it.
  • Transformations. When and how was the data transformed?
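
A simple sketch of what such a catalog entry could look like as a data structure; the concrete class and field names below just mirror the list above and are an illustrative assumption, not a standard:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CatalogEntry:
    """One entry of a data catalog, mirroring the fields listed above."""
    title: str
    description: str
    categories: list[str]
    business_unit: str
    data_owner: str
    data_producer: str
    data_steward: Optional[str] = None          # falls back to the data owner
    timespan: Optional[tuple[str, str]] = None  # (from, to) of the recorded data
    refresh_interval: Optional[str] = None      # e.g. "daily", "hourly"
    quality_metrics: dict[str, float] = field(default_factory=dict)
    access_info: Optional[str] = None           # how to access the data, or sample data
    transformations: list[str] = field(default_factory=list)

entry = CatalogEntry(
    title="Customer master data",
    description="Golden records of all customers",
    categories=["customer", "master data"],
    business_unit="Marketing",
    data_owner="Jane Doe",
    data_producer="CRM system",
    refresh_interval="daily",
    quality_metrics={"completeness": 0.97},
)
```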

What does a data catalog look like?

The items above are examples of the contents of a data catalog entry. A good data catalog makes it easy for users to find and search within this metadata. The following example shows the data catalog of the US government:

US government open data portal


We started our tutorial with a general intro to Data Governance and then went a bit deeper into data security and data privacy. In this post, we will have a look at how to ensure a certain level of data quality in your datasets. Data quality is a very important aspect: imagine you have wrong data about your customers and you build your marketing campaign on it. The campaign might produce the wrong results, which can damage your brand and turn away previously loyal customers. Therefore, data quality is essential.

How to measure data quality?

There are several aspects to measuring data quality. I’ve summarised them into five core metrics. If you browse the literature, you might find more or fewer metrics, but these five should give you a solid understanding of data quality management.

The 5 dimensions of data quality

Availability

Availability states that data should be available. If we want to query all existing users interested in luxury cars, we are not interested in a subset but in all of them. Availability is also a challenge addressed by the CAP theorem, but here the focus is not on the general availability of the database; it is on the availability of each dataset itself. The query retrieving the data should be able to return all available data, and there should be easy-to-use tools and languages to retrieve it. Normally, each database offers developers a query language such as SQL, or O/R mappers.

Availability also means that the data used for a specific use case should be available to data analysts in the business units. Data relevant for a marketing campaign might exist but still not be available for that campaign. For instance, the company might hold specific customer data in the data warehouse while the business units don’t even know that it exists.

Correctness & Completeness

Correctness means that data has to be correct. If we again query for all users of a web portal who are interested in luxury cars, the data about them should be correct: it should really represent people interested in luxury cars, and faked entries should be removed. A dataset is also not correct if a user changed his or her address without the company knowing about it. Therefore, it must be tracked when each dataset was last updated.

Closely related to correctness is completeness: data should be complete. Targeting all users interested in luxury cars only makes sense if we can actually reach them, e.g. by e-mail. If the e-mail field, or any other field we want to target our users by, is blank, the data is not complete for our use case.
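
As a small illustration, a completeness check over such a field could look like the following pandas sketch; the column names and the example data are assumptions:

```python
import pandas as pd

# Hypothetical customer extract with some missing contact data
customers = pd.DataFrame({
    "name":     ["Mario Meir", "Jane Doe", "John Roe"],
    "email":    ["mario@example.com", None, "john@example.com"],
    "interest": ["luxury cars", "luxury cars", "budget cars"],
})

target = customers[customers["interest"] == "luxury cars"]
completeness = target["email"].notna().mean()   # share of reachable users
print(f"e-mail completeness for the campaign: {completeness:.0%}")
```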

Timeliness

Data should be up to date. A user might change their e-mail address after a while, and our database should reflect these changes whenever and wherever possible. If we target our users for luxury cars, it won’t be good at all if only 50% of the users’ e-mail addresses are correct. We might have “big data”, but the data is not correct because it hasn’t been updated for a while.
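
A simple way to keep an eye on timeliness is to flag records whose last update lies beyond a threshold. A minimal sketch, where the threshold, field names and dates are assumptions:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)   # assumed freshness threshold
now = datetime.now(timezone.utc)

records = [
    {"name": "Mario Meir", "last_updated": datetime(2024, 11, 3, tzinfo=timezone.utc)},
    {"name": "Jane Doe",   "last_updated": datetime(2019, 5, 20, tzinfo=timezone.utc)},
]

# Records older than the threshold should be re-verified with the customer
stale = [r["name"] for r in records if now - r["last_updated"] > MAX_AGE]
print("records to re-verify:", stale)
```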

Consistency

This shouldn’t be confused with the consistency requirement of the CAP theorem. Data might be duplicated, since users may register several times to get various benefits. The same user might select “luxury cars” with one account and “budget cars” with another. Duplicate accounts lead to inconsistent data, and this is a frequent problem in large web portals such as Facebook.

Understandability

Data should be easy to understand. If we query our database for people interested in luxury cars, we should be able to easily understand what the data is about. Once the data is returned, we should be able to work with it in our favourite tool. The data should describe itself, so that we know how to handle it. If the result contains a “zip” column, we know that this is the ZIP code the individual users live in.

What can you do to improve your data quality?

It all starts with getting started: you need to begin tracking your data quality at some point and then improve it continuously. Several tools exist that support this endeavour, and even a small script can get you going, as sketched below. But keep in mind: bad data creates bad decisions!
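
To make the “start tracking” advice concrete, the sketch below computes two of the metrics discussed above for a pandas DataFrame and appends them to a simple history file so that trends become visible over time. The metric choices, key column and file format are assumptions:

```python
import json
from datetime import datetime, timezone

import pandas as pd

def quality_snapshot(df: pd.DataFrame, key: str) -> dict:
    """Compute a few simple quality indicators for one dataset."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "rows": len(df),
        "completeness": float(df.notna().mean().mean()),              # share of filled cells
        "consistency": float(1 - df.duplicated(subset=key).mean()),   # share of non-duplicate keys
    }

def track(df: pd.DataFrame, key: str, history_file: str = "quality_history.jsonl") -> None:
    """Append the current snapshot to a JSON-lines history file."""
    with open(history_file, "a") as f:
        f.write(json.dumps(quality_snapshot(df, key)) + "\n")
```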


In our tutorial intro, we outlined the four pillars that are relevant to data governance. In this post, I will take a deeper dive into the data security and data privacy aspects of data governance.

What is data security?

Data security is all about securing data against intrusions from inside or outside an organisation. It deals with hardening any systems that store data and making sure that data is only stored in a safe and secure way.

Data security comes in several layers:

  • Infrastructure: ensuring that the physical infrastructure is protected against any unwanted access. This starts with physical access control to servers and any devices associated with the organisation. This layer is only relevant when operating on-premise.
  • Operating systems and virtualisation: here it needs to be ensured that the operating system is in a secure state. If done on-premise, this covers the host OS, the guest OS and the virtualisation software. When done in the cloud, it only applies to IaaS offerings.
  • Databases and data stores: any databases need to be constantly checked for vulnerabilities. If other stores such as object stores are used, they also need to be secured. This applies to on-premise and IaaS cloud solutions, but not to PaaS or SaaS cloud solutions.
  • Application security: when building software on top of the previous layers, it is necessary to write it in a secure manner. This applies to both on-premise and cloud. When using PaaS or SaaS solutions, it is the only security layer left to the companies implementing them, so it is highly important to have a comprehensive security concept on this layer.

What if you ignore it?

Data security issues are a frequent failure of companies. There are plenty of examples of data leaks, such as at LinkedIn, Deutsche Telekom or Twitter. Almost nobody is fully secure, so this building block needs to be considered at the highest level when building a data strategy. Experts argue that the question is not whether an intrusion happens, but how long the organisation needs to realise it and thus take counter-measures and minimise the damage.

A key recommendation (but not the only one) is to encrypt all data, so that gaining full access becomes much more challenging.
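
As an illustration of encryption at the application layer, the sketch below uses the Fernet recipe from the Python cryptography package for symmetric encryption; how keys are managed and which library fits best depend on your environment, so treat this purely as a sketch:

```python
from cryptography.fernet import Fernet

# In production the key comes from a key management service, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

token = fernet.encrypt(b"Mario Meir, 123-45-6789")   # store only the ciphertext
plain = fernet.decrypt(token)                        # requires the key
print(plain.decode())
```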

What is data privacy?

Another important block is data privacy. This deals with the question of who can read or access data within a company. Algorithms and people should work with pseudonymised or anonymised data whenever possible. Analysts or data scientists shouldn’t see any personal information in the data they are working with. In a marketing campaign, for example, the analysts should only see the minimum data necessary to build the campaign; the marketing tool then combines the results of their selection with the addresses of the target group. Several tools are available that obfuscate personally identifiable information (PII) and thus make working with such data easier.
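
A very small sketch of pseudonymisation, replacing a direct identifier with a keyed hash so that analysts can still join records without seeing the real value; the secret and field names are assumptions, and this alone is not a complete anonymisation concept:

```python
import hashlib
import hmac

SECRET = b"rotate-me-and-keep-me-in-a-vault"   # assumed secret; kept outside the code in practice

def pseudonymise(value: str) -> str:
    """Deterministically map a direct identifier to an opaque token."""
    return hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "mario@example.com", "interest": "luxury cars"}
safe_record = {"customer_token": pseudonymise(record["email"]), "interest": record["interest"]}
print(safe_record)
```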

What is described above is also called the “need-to-know principle”: people should only see the data they really need. Looking at how companies grant access rights to data, it is often done on a very individual basis: people ask for access, state why they need it, and the data owner grants it. However, this is rather manual and not necessarily fit for the new era of privacy.

A business-driven role-based access model

A much better approach is to build on a role-based access model. Roles here do not necessarily mean Active Directory roles; they are based on the business roles users hold. For example, a role could be “Marketing Analyst”. A user in this role gets access to the specific data needed for his or her daily work: access to all relevant data should be given, but nothing beyond that. The roles in this approach should be clearly business-focused, not technology-focused.

Another key aspect of data privacy is understanding who has accessed which data. A comprehensive audit log of all data access should be kept, making data breaches traceable.
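
A toy sketch combining the role-based check with such an audit log; the roles, dataset names and log format are purely illustrative:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="data_access_audit.log", level=logging.INFO)

# Business roles mapped to the datasets they may read
ROLE_PERMISSIONS = {
    "Marketing Analyst": {"campaign_results", "customer_segments"},
    "Data Steward":      {"customer_master", "campaign_results"},
}

def can_access(user: str, role: str, dataset: str) -> bool:
    """Check a role-based permission and write every attempt to the audit log."""
    allowed = dataset in ROLE_PERMISSIONS.get(role, set())
    logging.info(
        "%s user=%s role=%s dataset=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, dataset, allowed,
    )
    return allowed

print(can_access("jane.doe", "Marketing Analyst", "customer_master"))   # False
print(can_access("jane.doe", "Marketing Analyst", "campaign_results"))  # True
```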


Data Governance

Everybody is talking about Data Science and Big Data, but one heavily ignored topic is Data Governance and Data Quality. Executives all over the world want to invest in data science, but they often ignore Data Governance. Some months ago I wrote about this and shared my frustration about it. Now I’ve decided to go for a more pragmatic approach and describe what Data Governance is all about. This should bring some clarity into the topic and reduce emotions.

Why is Data Governance important?

It is important to maintain a certain level of quality in your data: decisions made on bad data quality lead to bad overall decisions. Moreover, the effort for Data Governance grows substantially when it is not addressed at the very beginning of your data strategy.

Also, there are a lot of challenges around Data Governance:

  • Keeping a high level of security often slows down business implementations
  • Initial investments are necessary that don’t show value for months or even years
  • Benefits only become visible “on top” of governance, e.g. through faster business results or better insights, which makes it hard to quantify the impact
  • Data Governance is often considered “unsexy”. Everybody talks about data science, but nobody about data governance; yet data scientists can do almost nothing without it
  • Data Governance tools are rare, and those that are available are very expensive. Open source doesn’t focus much on the topic, as there is less buzz around it than around AI. However, this also creates opportunities

Companies can basically follow three different strategies. Each strategy differs in the level of maturity:

  • Reactive Governance: efforts are designed to respond to current pains. This happens when the organisation has suffered a regulatory breach or a data disaster
  • Pre-emptive Governance: the organisation is facing a major change or threat. This strategy is designed to ward off significant issues that could affect the success of the company. It is often driven by impending regulatory and compliance needs
  • Proactive Governance: all efforts are designed to improve the capability to resolve risk and data issues. This strategy builds on reactive governance to create an ever-increasing body of validated rules, standards and tested processes. It is also part of a wider Information Management strategy

The 4 pillars

The 4 pillars of Data Governance

As you can see in the image, there are four main pillars. Over the next weeks, I will describe each of them in detail, but let’s take a first look at them now:

  • Data Security & Data Privacy: The overall goal here is to keep data secure against unauthorised access. It builds on encryption, access management and accessibility. Often, role-based access is defined in this process. A typical principle here is privacy and security by design.
  • Data Quality Management: In this pillar, different measures for data quality are defined and tracked. Typically, specific quality measures are monitored for each dataset. This gives data consumers an overview of the data quality.
  • Data Access & Search: This pillar is all about making data accessible and searchable within the company’s assets. A typical example is a data catalog that shows all available company data to end users.
  • Master Data Management: Master data is the common data of the company, e.g. customer and supplier data. This data should be of high quality and consistent: one physical customer should occur exactly once and not as multiple persons.

For each of the pillars mentioned above, I will write an individual article over the next weeks.

This tutorial is part of the Data Governance Tutorial. You can learn more about Data Governance by going through this tutorial. On Cloudvane, there are many more tutorials about (Big) Data, Data Science and the like; read about them in the Big Data Tutorials here. If you are looking for great datasets to play with, I would recommend Kaggle.