But as part of this process we “descoped” everything that was non-sensitive, including (crucially) empty tables. Empty tables are a common occurrence, and most people I speak to have a bunch of them for a multitude of reasons: they’re using an ERP/CRM and aren’t using some of its modules, some tables belong to older features that are gradually being phased out, and so on.
What happens over time, however, is that some tables we originally weren’t using start to populate with data, while others that were in scope suddenly aren’t. But because this isn’t technically a schema change, and we can’t necessarily rely on the developers to know when it will happen – how do we catch it?
The easiest thing to do in this instance is an automated check – write once, run many times. As we’ve seen from the many, many blog posts before this one, the answer is pretty straightforward – SQL Data Catalog’s PowerShell module once again saves the day!
(You can find the full documentation for the PowerShell module, including a cmdlet reference, here in the documentation.)
The Process
The most obvious thing to do is to re-scan our database so Catalog is up to date, and then fetch all of our columns into a collection:
# Connect to SQL Data Catalog
Connect-SqlDataCatalog -AuthToken $authToken -ServerUrl $serverUrl
# Quickly refresh the scan of the Instance to get accurate row counts
Start-ClassificationDatabaseScan -FullyQualifiedInstanceName $instanceName -DatabaseName $databaseName | Wait-SqlDataCatalogOperation
# Get all columns into a collection
$allColumns = Get-ClassificationColumn -instanceName $instanceName -databaseName $databaseName
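The snippet above assumes a handful of variables have already been set up. A minimal sketch – the values here are purely illustrative, so substitute your own server URL, auth token, instance and database names:
# Connection details for your SQL Data Catalog server (illustrative values only)
$serverUrl = "http://your-data-catalog-server"
$authToken = "your-api-auth-token"
$instanceName = "sql-prod-01"
$databaseName = "WideWorldImporters"
# The tag used to mark columns on empty tables as out of scope (used in the filtering below)
$unusedTag = "Out of Scope - Unused"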
Each of the columns returned by Data Catalog here has a number of properties that we can use:
The properties of each column object returned by the SQL Data Catalog PowerShell
All of these properties can be useful for something (and have been in the past), but in this case the ones we care about are id, the tags collection (specifically its name property) and tableRowCount. The id uniquely identifies each column, so we can sort and filter on it without having to match on text; the tags collection lets us search for columns that do or do not have the Out of Scope – Unused tag; and tableRowCount tells us whether the table is populated with data.
It’s then fairly easy to filter these down:
# Collection for columns that are on empty tables
$emptyTableColumns = $allColumns | Where-Object { $_.tableRowCount -eq 0 }
# Collection for columns that carry the Out of Scope - Unused tag
$ColumnstaggedAsEmpty = $allColumns | Where-Object { $_.tags.name -eq $unusedTag }
# Columns tagged as unused whose tables now contain data, and columns on empty tables that are missing the tag
$ColumnsNowInUse = $ColumnstaggedAsEmpty | Where-Object { $_.id -notin $emptyTableColumns.id}
$ColumnsMissingClassification = $emptyTableColumns | Where-Object { $_.id -notin $ColumnstaggedAsEmpty.id}
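To see what we’ve found before jumping into the catalog itself, a minimal reporting sketch (just the counts and the raw objects – format the output however suits your process):
# Report what the comparison found
Write-Host "$($ColumnsNowInUse.Count) tagged-as-unused column(s) are now on tables containing data"
Write-Host "$($ColumnsMissingClassification.Count) column(s) on empty tables are missing the $unusedTag tag"
$ColumnsNowInUse | Format-Table
$ColumnsMissingClassification | Format-Table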
You can easily visualize these in Data Catalog itself too:
Columns that are on empty tables, regardless of tag
Columns that are tagged as Out of Scope Unused, regardless of whether they have data or not
We can already see a difference in the lists – but if we run the entire PowerShell script (which you can find right here in my GitHub, and which I’ll include at the end) we actually find out what those differences are:
The PowerShell result: 2 columns classified as empty that now contain data, and 3 columns on empty tables not yet classified as out of scope
So we can correct these by classifying the 3 now-empty columns as Out of Scope – Unused, and changing the 2 now-populated columns to be In Scope:
Suppliers is now in use, and corrected to be In-Scope
The missing Out Of Scope – Unused tags are corrected on the empty columns
…and now when we run the PowerShell again we should find everything is rectified:
Everything has been corrected, now no columns are incorrectly labelled or missing labels
Ongoing
Now that we have a script to identify these issues, you could implement it on the back end of the DevOps process described in the blog post or video linked at the start of this post – allowing you to be constantly sure that your view of the Production database is accurate and up to date.
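As a rough sketch of what that might look like as a pipeline step (reusing the collections from the script above), you could simply fail the step whenever either list comes back non-empty:
# Fail the pipeline step if any classification drift is detected,
# so it has to be corrected before the process continues
if ($ColumnsNowInUse.Count -gt 0 -or $ColumnsMissingClassification.Count -gt 0) {
    Write-Error "Classification drift detected - review the columns listed above in SQL Data Catalog."
    exit 1
}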
The key to a good classification process, as part of ongoing governance and an up-to-date view of your structured data estate, is minimizing the manual effort required to maintain it – which, with help from the Data Catalog PowerShell module, we can do easily (if we wanted a process that would go out of date after 2 weeks, we could have just used MS Excel…).
“Doing the best at this moment puts you in the best place for the next moment.” – Oprah Winfrey
One of the things that comes up the most when I’m talking to people about database classification is multi language support. Often people are resigned to:
Waiting on additional language support from their vendor of choice (defer)
Writing their own software to do it in their native tongue (shift effort)
Going the long way around i.e. manual classification (brute force)
Doing nothing (abandon)
There are currently over 7,000 languages in the world, and when it comes down to classification you can’t expect your technology of choice to have language packs or native support for your languages, or for the languages your databases have been configured in.
Now, deferring and abandoning as above really aren’t options any more, and if you’ve read my post here then you’ll know that database classification is a key concern to address before trying to base any business decisions on data. But how do we then tackle the remaining issues of shifting effort and brute force?
Sometimes it’s about finding ways to still be effective and still take advantage of the benefits such a classification technology affords you.
With this in mind – I wanted to share with you 3 tips and tricks in SQL Data Catalog to make it work better for your language(s):
1 – Customize your taxonomy to be relevant / Passen Sie Ihre Taxonomie an, um relevant zu sein / Ajustați-vă taxonomia pentru a fi relevante
The great thing about SQL Data Catalog is that it is incredibly easy to alter the taxonomy to best reflect your view of your data. Not all standard fields will be relevant to you and some fields that are ABSOLUTELY necessary will be missing.
So add them in!
Let’s say, in my country of Christopia, it is mandatory that people identify a certain government specified “Classification Category” from 1-5, where 1 is the most sensitive and 5 is basically public knowledge. I can add that as a category and tag:
That goes equally for fields that we will not be making use of. If it is irrelevant then remove it. Simple.
I will say this about the above step though – don’t think of it as the easiest and most obvious part; in many ways it is actually the hardest step, because you will need buy-in. Guess who else cares how you describe data stored in databases? Data Governance, Legal, InfoSec, Database Admins… the list goes on. Just make sure you do the 2 most fundamental things required for success here – communicate & collaborate. If you have buy-in from the other concerned parties, you will establish a ‘common language’ with which data can be described and consumed, and on which complex data-driven business decisions can be based.
2 – Customize your search rules / Passen Sie Ihre Suchregeln an / Personalizați-vă regulile de căutare
Going through every single column is a colossal waste of time, especially when you have more than about 6 databases. I’ve spoken with people who have spent days, weeks and even months working on classifications in SSMS or Excel, only to find out that 90% of their columns matched common regular expressions, and moreover that 50-60% of their columns were out of scope anyway!
Fortunately, within SQL Data Catalog you can configure the rules the tool runs against columns and tables to identify clearly sensitive columns, and it will then make a suggestion each time it finds something that matches.
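Because these rules are just patterns matched against your column names, they can reflect whichever language(s) your schemas are written in. As a purely illustrative sketch (the pattern and column names below are made up), this is the kind of expression you might sanity-check in PowerShell before adding it as a custom rule to catch email-style columns named in English or German:
# Illustrative only: a pattern covering English and German email column names
$emailPattern = '(?i)(e[-_]?mail|emailadresse)'
# Quick local check of the pattern against some example column names
'EmailAddress', 'EMailAdresse', 'KundenNummer', 'CustomerID' | Where-Object { $_ -match $emailPattern }
# Returns: EmailAddress, EMailAdresse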
In my experience, between the analytical filters and the suggestions, you can make excellent progress very very quickly, and do you know the best part?
Data Catalog is evergreen.
It constantly keeps track of your ever-evolving schema and checks the suggestions against it, which means that if somebody makes a change to the schema (like in my post from last week) you’ll get the suggestions coming through, and you can easily catch anything that might put your sensitive data in harm’s way.
3 – Use PowerShell / Verwenden Sie PowerShell / Utilizați PowerShell
Ok, I get it – this is my catch-all for everything whenever I talk about using Data Catalog – but did you know that using PowerShell with Data Catalog is really easy?
Like even I can do it easy.
But the good thing is, when you use PowerShell you’re not constrained by particular functions or waiting for things to happen – you can take control of the process and fill it with your own logic! There’s even a full-on guide that shows you:
How to generate an authorization token
A full cmdlet reference
Complete worked examples
and the best part is it’s just part of the documentation! Don’t you love it when people keep documentation up to date and useful? Yeah me too!
Bonus Tip / Bonus-Tipp / Sfaturi Bonus
If you work in an environment that is multi-lingual and has a mix of languages associated with different databases which haven’t been standardized, that should just highlight further that this is a team game.
Pairing and mobbing, just like in coding, can be vital and powerful tools at your disposal – so it’s important to have the right people in place to handle this.
Classification is not infallible, much as you may wish to think it is. Data is data, and if there’s one thing I have learned about data it’s that it always manages to surprise you, so a second pair of eyes (or more) can be invaluable in helping you weed out some of the problems you may be facing.
“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.” – Albert Einstein
So I just spent about 20 minutes trying to come up with a suitable title for this blog post, and then it struck me – one of my favorite movies of all time (which I will soon be ensuring my wife and I watch again) is Star Trek into Darkness, featuring the magnificent Benedict Cumberbatch, in which he masterfully growls the phrase “shall we begin?”.
This sums up perfectly where people find themselves at the beginning of their classification activities. “Where do I start?” is usually the first question I get asked – and regardless of whether you’re using an Excel sheet, Azure Data Catalog or Redgate SQL Data Catalog to carry out this process, you will find yourself in the same place, asking the same question.
Classifying your SQL Server tables and columns is not straightforward, I’ll say that up front – you have to be prepared for whatever may come your way – but give yourself a fighting chance! Whether you’re looking to better understand your data, protect it, or prepare your business to deal with things such as Subject Access Requests (SARs) and the Right to be Forgotten (*cough* I’m looking at _you_, GDPR *cough*) – or even just key development in systems containing sensitive information – this is the ultimate starting point. As per my blog post on data masking here, you can’t protect what you don’t know you have.
This is my effort to give you the best possible start with your classification process, whether this feeds into a wider data lineage process, data retention or, of course, data masking. So… shall we begin?
Get a taxonomy set up
This is perhaps the most crucial part of your success. Even if you have the best classification process in the world it really means nothing if you’ve basically described your data in one of say, 3 possible ways. The thing to bear in mind before getting started is that the data cataloging process is not specific to one job.
You may think at this point in time that you’re going to use it to highlight what you want to mask for Dev/Test environments, or maybe it’s your hit list for implementing TDE or column level encryption – but this _thing_ you’re building is going to be useful for everyone.
DBAs will be able to use this to help them prioritize the systems they look after and be more proactive when it comes to security checks, updates, backups etc.
Developers will be able to use this to better understand the tables and environments they are working on, helping them contextualize their work and therefore engage and work with any other teams or individuals who may be affected or who may need to be involved.
Governance teams and auditors will be able to use this to better understand what information is held by the business, who is responsible for keeping it up to date and how it is classified and protected.
The list goes on.
So all of the above will need to be engaged in a first run to help actually describe the data you’re working with. What do you actually care about? What do you want to know about data at a first glance? Below is the standard taxonomy that comes out of the box with Redgate’s Data Catalog:
Some of my favorites are in here, and I would encourage you to include them as well! If nothing else, having Classification Scope as a category is an absolute must – but I’ll come to this soon. You can see, though, how being able to include tags such as who owns the data (and is therefore in charge of keeping it up to date), what regulation(s) it falls under, and even what our treatment policy is under those regulations, gives us so much more to go on. We can be sure we are appropriately building out our defensible position.
Having a robust Taxonomy will enable you to not only know more about your data but to easily communicate and collaborate with others on the data you hold and the structure of your tables.
Decide who is in charge
This seems like an odd one, but actually one of the most common questions I get is about who will be carrying out the classification process, and this is where the true nature of collaboration within a company is going to be absolutely critical.
Some people believe that a DBA or a couple of developers will suffice, but as you’ll see later on, this is not a simple process that only 1 or 2 people can handle by themselves. Be prepared to spend hours on this – the implementation of classification, by its nature, requires a team effort in the first instance.
You will need representation from people who know the database structure, people who know the function of the various tables, and people who know the business and how data should be protected. That means collaboration between Dev, DBAs, Testers, Governance and DevOps, with someone central to coordinate the effort. When you have key representation from these teams, it will be easier to identify and collaborate on hot spots of data, so make sure you have this knowledge up front.
Get rid of what doesn’t matter
You may be surprised that the next step is technically an execution step, but it is an important point nonetheless and will absolutely help with the classification effort. This is where the Classification Scope category comes in, and this is why it’s my favorite.
One of the biggest problems people face when actually executing on their classification is the sheer enormity of the task. There is no “average” measure we can rely on, unfortunately, but even small schemas can be far from insubstantial – recently a customer provided me with just ONE of their database schemas, which had well in excess of 1,800 columns across dozens of tables. When you scale that to potentially hundreds of databases, it rapidly becomes clear that going over every single column is going to be unmanageable.
To start, then, the knowledge brought by the team mentioned above will be invaluable, because we’re going to need to “de-scope” everything that is not relevant to this process. It is very rare to find a company where more than 50% of the columns in a database contain PII/PHI, and even if you are one of those companies, this process can help you too.
There could be many reasons that something shouldn’t be included in this process. Perhaps it is an empty table that exists as part of a 3rd party database schema, such as in an ERP or CRM solution. It could be a purely system-specific table that holds static/reference data or gathers application-specific information. Regardless of what the table is, use the knowledge the team has to quickly identify these and then assign them all the necessary “Out of Scope” tag.
This will not only significantly reduce the number of columns you’re going to need to process, but will give you greater focus on what does need to be processed. One of the greatest quotes I’ve heard about this process comes from @DataMacas (a full-on genius, a wonderful person and someone I have aspired to learn as much from as possible over the years), who referred to it as “moving from a Battleships-style approach to one more akin to Minesweeper“. Which is just so incredibly accurate.
In my example database below, with only 150-odd columns, using the “Empty Tables” filter and then filtering down to the system tables I know about, I was able to de-scope just under half of the database as a starting point:
Figure out patterns and speed up
Many of the people carrying out this process will already have some anecdotal knowledge of the database as I’ve already mentioned, but now it’s time to turn this from what _isn’t_ important, to what is.
The fastest way to do this is to build up some examples of column naming conventions you already have in place across multiple databases – there will likely be columns with names that contain things like Name, Email or SSN in some format. Both SQL Data Catalog from Redgate and Microsoft’s Azure Data Catalog have suggestions out of the box that will look at your column names and make suggestions as to what might be sensitive for you to check and accept the classification tags.
Now, these suggestions are incredibly helpful, but both tools have a reduced scope because they’re just matching against common types of PII, so it’s important to customize them to better reflect your own environments. You can do this fairly easily, in one or both of the following ways:
1 – Customize the suggestions
In Redgate’s SQL Data Catalog you can, even before looking at the suggestions and accepting them, customize the regular expressions that are run over your column names so that they better reflect your own schemas – either by editing existing rules or by creating your own and choosing which tags should be associated with the matching columns:
You can then go through and accept these suggestions if you so wish, obviously making sure to give them a sense check first:
2 – POWER ALL THE SHELL
In both of the aforementioned solutions you can call the PowerShell API to carry out mass classification against columns with known formats. This allows you to rapidly hit any known targets and further reduce the time spent looking directly at columns. An example of the SQL Data Catalog PowerShell in action is below: it classifies any columns it finds where the name is like Email but not like ID (as primary and foreign keys will, in most cases, fall under the de-scoping work we did above), applying tags for sensitivity and information type (full worked example here):
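As a rough sketch of that approach, building on the earlier snippets – note that the columnName property, the tag-assignment cmdlet name and its parameters are assumptions here, so check the cmdlet reference linked above for the exact calls in your version:
# Find columns whose names look like email addresses but not like key columns
# NOTE: columnName is an assumed property name - check the objects returned in your version
$emailColumns = Get-ClassificationColumn -instanceName $instanceName -databaseName $databaseName |
    Where-Object { $_.columnName -like '*email*' -and $_.columnName -notlike '*id*' }
# Apply sensitivity and information type tags to everything that matched
# NOTE: cmdlet name and parameters are illustrative assumptions - see the cmdlet reference
$emailColumns | Add-ClassificationColumnTag -category "Sensitivity" -tags "Confidential"
$emailColumns | Add-ClassificationColumnTag -category "Information Type" -tags "Contact Info"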
Finally – get classifying
This is the last stage, or the “hunt” stage. It’s time for us to get going with classifying what’s left i.e. anything that wasn’t de-scoped and wasn’t caught by your default suggestions or PowerShell rules.
You can obviously start going through each column one by one, but it makes the most sense to start by filtering down to the tables with the highest concentration of columns (i.e. the widest tables), or to the columns specifically known to contain sensitive information (anecdotally or by number of tags), and classifying those first: in or out of scope, what information they hold, who owns it and what the treatment intent is, at the very least.
The approach I take in this instance is to use filtering to its utmost – in SQL Data Catalog we can filter by table and column names, but also by data type. Common offenders can be found among certain NVARCHAR, XML or VARBINARY types, such as NVARCHAR(MAX) – to me, that sounds like an XML, JSON, document or free-text field, which will likely contain some kind of difficult-to-identify but ultimately sensitive information.
Following the NVARCHAR classification I move on to DATETIME and INT/DECIMAL fields, looking for key candidates like the date someone attended an event, or even a Date of Birth field. This helps especially when the naming conventions don’t necessarily reflect the information stored in the column.
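If you prefer to do this data type pass in PowerShell rather than the UI, a rough sketch along the same lines – the dataType property name and its value format are assumptions, so check the column objects returned in your version:
# Surface wide free-text style columns that often hide sensitive data
# NOTE: dataType is an assumed property name and value format - verify against your column objects
$allColumns |
    Where-Object { $_.dataType -in @('nvarchar(max)', 'xml', 'varbinary(max)') } |
    Format-Table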
Finally, one thing to add is that you will need access to the databases or tables at some point. You can’t truly carry out a full-on data classification process purely against the schema, especially for the reason above. Data will often exist in places you weren’t aware of, and the contents, format and sensitivity of that data can only reasonably be found and tagged if you have the support of the data team to do so.
Conclusion?
This is not a one time thing. The initial classification is going to be the hardest part but that can be mitigated if you follow some of the processes in this post and ultimately work as a team.
Classification, though, is ongoing. It is there to be an evergreen solution, something that provides context for any data governance or DevOps processes you have in place, and it should therefore be maintained and treated as what it is: the place everyone can use to gather information about what they will be working with, and about what the company might have at risk.