“But I don’t wanna INSTALL it!”: Data Masker on the fly in Azure DevOps (with an Azure SQL DB)

“There is always a way to go if you look for it.”
Ernest A. Fitzgerald

As many of you know, I really enjoy talking about Data Masking. I fundamentally believe it is an absolutely ESSENTIAL part of Test Data Management and specifically the provisioning of Pre-Production environments. If you hold sensitive PII/PHI/PCI in your Production environments, you have no excuses for porting any of that back into Dev and Test.

I also stand firmly behind the belief (and it is just that, my belief) that masking is much safer than anything you can achieve with encryption or limiting access alone. Static masking, when done correctly, means that even if all other security measures are bypassed, or we accidentally expose data somehow, it doesn’t matter, because the PII/PHI has been wiped – yet the data remains fundamentally useful for development and testing environments.

I’m also a huge proponent of using classification and data masking as part of a DevOps process, and often you’ll find me using Azure DevOps to actually kick off my masking process – but one of the most frequent questions I get is “do I have to install Data Masker somewhere?” and the answer I always have to give is… yes.

When we use SQL Provision to spin up our environments in non-Prod, generally we know they have to be in IaaS VMs or On-Prem, due to the nature of the technology (keep your eyes open on THIS because big changes coming soon *squeals in excitement*) but sometimes we’re working entirely with PaaS DBs and VMs don’t even come into our vocabulary – but we still need them to be masked for non-Prod use.

I have some data in an Azure SQL DB – a copy of DMDatabase (get it from my GitHub):

DM_CUSTOMER table in the DMDatabase

and I would like to get this masked before I create copies of it. Now, I have written scripts in the past to make sure that Azure SQL DBs can be masked and then copied into non-Prod environments (you can get those scripts here), and you could easily wrap something like the below INTO a script like that – but much of the work I’ve done on this has always involved having Data Masker installed somewhere and invoked on that machine – a VM, my laptop, whatever.

So, how do we avoid having to install Data Masker each time?

Data Masker for SQL Server does not (at the time of writing) have a publicly available Docker container or a method for installing via something like Chocolatey – once it does, trust me, I’ll be blogging about it A LOT as I will be very excited. But there is an install file available: https://download.red-gate.com/installers/DataMaskerforSQLServer/ and this might be enough.

The Process

Initially I created a masking set locally for the DMDatabase copy in Azure – nice and simple, just masking a few of the fields on the CUSTOMER table:

It relies on SQL authentication for the connection, with the credentials saved in the masking set, though these could be subbed in later on using the PARFILE (documentation on that here).

Next I put this masking set into a newly created Azure DevOps Git repo, which I cloned down onto my local machine – and then committed and pushed my changes up into Azure DevOps:

Now that this was all in ADO, it was time to set up a pipeline for it – so let’s jump into some YAML! Data Masker currently needs to run on a Windows machine, so we’ll set the pool to windows-latest:

trigger:
- main

pool:
  vmImage: windows-latest

The next step is to grab the installer – which I know I can easily do with PowerShell. I’m sure you could be more clever about this, but with limited time I chose the most recent version and hard-coded it into a PowerShell Invoke-WebRequest cmdlet:

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $source = 'https://download.red-gate.com/installers/DataMaskerforSQLServer/2021-03-15/DataMaskerforSQLServer.exe'
      New-Item -ItemType directory -Path C:\download
      $destination = 'C:\download\DataMaskerforSQLServer.exe'
      Invoke-WebRequest -Uri $source -OutFile $destination
      New-Item -ItemType directory -Path C:\download\DMlogs
  displayName: 'Download Data Masker'

This PowerShell task is going to grab the most recent exe for Data Masker and pull it down into C:\download on the hosted VM we’re using for the pipeline, and it’s also going to create a directory for the Data Masker logs. (If you wanted to extend the YAML at the end to wrap these logs up and publish them as a result of the pipeline masking, then go for it and tweet me to let me know!)
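For what it’s worth, if you did want to publish those logs, a minimal sketch of an extra step you could append at the very end of the pipeline might look like the below – PublishBuildArtifacts is the standard Azure DevOps task for the job, and condition: always() means you still get the logs even if the masking step fails:

- task: PublishBuildArtifacts@1
  condition: always()
  inputs:
    PathtoPublish: 'C:\download\DMlogs'
    ArtifactName: 'DataMaskerLogs'
  displayName: 'Publish Data Masker logs'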

Next we have to extract and install Data Masker from that download, which is fairly easy to do with a cmd call:

- task: CmdLine@2
  inputs:
    script: |
      "C:\download\DataMaskerforSQLServer.exe" products "Data Masker for SQL Server" log c:\download temp c:\download /IAgreeToTheEula RG_LICESE=%RGLICENSE%
  displayName: 'Install DMS Headlessly'

Note I’m using the guidance from this page, making sure to accept the EULA, and I’m passing in my Redgate license as a variable that I have specified for the pipeline and kept secret. This will put Data Masker in the default location in C:\Program Files\… which means we will then be able to call it. I do however need to make sure that this now works, so I saved my pipeline and ran it to see what happened:

Fabulous, that all works nicely. Now to pass the DMSMaskSet file to Data Masker and get it to run – ah, but I forgot: I’m going to need a PARFILE, as per the cmdline documentation, which specifies where the files etc. are for the run. So I created my PARFILE.txt as such:

MASKINGSET=C:\download\DMDB_MaskingTime.DMSMaskSet
LOGFILEDIR=C:\download\DMlogs
DATASETSDIR=C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataSets 
REPORTSDIR=C:\download\DMlogs
INTERIM_REPORTS=false

As you can see, very simple. But to make sure these files are “simply” in the right folder – and because I don’t have time to explore how I could pass an environment variable into the .txt file itself (though see the aside after the next task for one way you could) – I’m going to add a quick file copy task to make sure that my masking set and PARFILE both make it into that location:

- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.Repository.LocalPath)'
    Contents: '**'
    TargetFolder: 'C:\download'
  displayName: 'Copy files from working directory'
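As an aside, if you did want to substitute values into the PARFILE at runtime rather than hard-coding them, one quick way (purely a sketch – the __LOGDIR__/__REPORTSDIR__ tokens and the LogDir/ReportsDir pipeline variables are made up for illustration) would be an inline PowerShell step after the copy that does a find-and-replace on the file:

- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Swap the illustrative __TOKEN__ placeholders in the PARFILE for pipeline variable values
      (Get-Content 'C:\download\PARFILE.txt') `
        -replace '__LOGDIR__', '$(LogDir)' `
        -replace '__REPORTSDIR__', '$(ReportsDir)' |
        Set-Content 'C:\download\PARFILE.txt'
  displayName: 'Substitute values into PARFILE'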

Of course now that we have Data Masker installed on the pipeline VM, the masking set AND the PARFILE… let’s get masking!

- task: CmdLine@2
  inputs:
    script: '"C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMaskerCmdLine.exe" PARFILE=C:\download\PARFILE.txt'
  displayName: 'Run Data Masker'

I agree the command is now less than impressive given all the prep work, but when you run all of this in its entirety…

Big success! But let’s check the data of course and make sure it is as we expect:

It all works: Lynne has been masked to Muna (and the rest has been masked too!), and I didn’t need to have Data Masker installed on a VM in my environment with an Azure DevOps self-hosted agent to run it – I could just do it programmatically. #Winning.

I’ll put the full YAML below however some caveats:

  • You cannot run this pipeline a LOT in a short space of time – I found that out. The Redgate downloads page is not really designed for this kind of use, so run it sparingly; otherwise the pipeline times out, because the download server will (quite reasonably) stop you from pulling the file too often.
  • You will need to update the PowerShell step reasonably often if you want the most recent Data Masker installer, or make it more future-proof so it grabs the latest version automatically – I just didn’t investigate that.
  • Data Masker is dependent on the power of the machine it is running on in MANY ways, and Azure DevOps pipeline VMs are not the most powerful beasts in the world – so if you have a lot of masking that needs doing, I would be wary of this method and might stick to an Application Server VM you’ve got hanging around, just in case.
  • This is likely to change a lot in the future, so it may not be relevant when you’re reading it a few months from now – so before implementing anything like this, contact me or Redgate and just confirm that there isn’t some better way of doing it, if I haven’t already blogged it!
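One more gotcha worth adding to that list: if you store RGLICENSE as a secret pipeline variable, Azure DevOps won’t expose it to scripts as an environment variable automatically, so %RGLICENSE% will come through empty – you’d need to map it onto the install task yourself, something along these lines (assuming the variable really is called RGLICENSE):

- task: CmdLine@2
  env:
    RGLICENSE: $(RGLICENSE) # secret variables have to be mapped explicitly for scripts to see them
  inputs:
    script: |
      "C:\download\DataMaskerforSQLServer.exe" products "Data Masker for SQL Server" log c:\download temp c:\download /IAgreeToTheEula RG_LICENSE=%RGLICENSE%
  displayName: 'Install DMS Headlessly'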

Thanks for stopping by and have a great week!

Full YAML:

trigger:
- main

pool:
  vmImage: windows-latest

steps:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      $source = 'https://download.red-gate.com/installers/DataMaskerforSQLServer/2021-03-15/DataMaskerforSQLServer.exe'
      New-Item -ItemType directory -Path C:\download
      $destination = 'C:\download\DataMaskerforSQLServer.exe'
      Invoke-WebRequest -Uri $source -OutFile $destination
      New-Item -ItemType directory -Path C:\download\DMlogs
  displayName: 'Download Data Masker'

- task: CmdLine@2
  inputs:
    script: |
      "C:\download\DataMaskerforSQLServer.exe" products "Data Masker for SQL Server" log c:\download temp c:\download /IAgreeToTheEula RG_LICESE=%RGLICENSE%
  displayName: 'Install DMS Headlessly'

- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.Repository.LocalPath)'
    Contents: '**'
    TargetFolder: 'C:\download'
  displayName: 'Copy files from working directory'

- task: CmdLine@2
  inputs:
    script: '"C:\Program Files\Red Gate\Data Masker for SQL Server 7\DataMaskerCmdLine.exe" PARFILE=C:\download\PARFILE.txt'
  displayName: 'Run Data Masker'

Moving teams and users between multiple SQL Clone Servers

“Portability should be the default.”
Larry Wall

Greetings one-and-all! It’s been a long time, hasn’t it?

Well – I’ll be honest. I’ve not been great… I’ve been struggling. With the pressures of everything going on – from National Lockdown 3: The Lockdown Strikes Back, to working hard, to trying to lose weight – it’s all been a bit too much. But that’s not really any excuse for neglecting y’all, is it?

As we go further into 2021 I can’t promise that you’ll see LOADS more here, but I’m determined to bring you something, and so long as it helps 1 person out there, I’ll consider it a success.

Well, I was asked a question yesterday that I wasn’t really sure of the answer to, but it’s definitely a nice-to-know, so I thought I would share it with all of you.

“If I have multiple SQL Clone Servers, or even just a new SQL Clone Server, how do I move the teams and permissions from one to the other without manually setting everything up again?”

Quite the head-scratcher indeed!

Well fortunately we know where to start – in SQL Clone every action, every change is recorded by virtue of a back-end SQL Server DB called the SQL Clone Config DB (you may remember this from such classics as Using things weirdly part 1!) and this holds all of the information we might need to move things over.

Let’s start by taking a look at the schema in the DB:

There is already fairly comprehensive documentation on moving the whole SQL Clone Config DB and even the SQL Clone Server to another instance here, but we ONLY care about users and permissions for this particular run, so we’ll focus on that.

The schema is reasonably straightforward and thankfully well-named, which means we can isolate the tables we’ll need for the move. Anything that mentions Teams or Principals (users) is generally going to be in our remit. I took a reasonably fresh install of SQL Clone and added a few users and teams (I’ve been watching a lot of Sword Art Online lately, so you might sense a trend with these teams…):

Now when we investigate some of the tables in the config DB we’ll notice that everything is really nicely normalized and mapped together:

But what does this look like in a fresh install with NO teams defined?

Wait… no teams defined/activated, but there’s… a team. The default team.

It turns out that once you hit the “Enable Teams” option in the SQL Clone dashboard, that entry magically goes away and things look more like we’re expecting:

Now we know the lay of the land – how do we move our users? Well to answer that I’m going to work on 3 assumptions here:

  • I’m going to stick to just these 3 tables (Principals, Teams and the mappings). There’s a wider question around Images and Clones etc. that we’ve added for those teams, and any overlap that may occur, but we’re working on the belief that the SQL Clone Server we’re moving to is segregated and independent (so it will not have access to the same images), so it only needs the users and teams.
  • The target SQL Clone Config DB we’re moving these to is brand new, and does not already have any users/permissions or teams set up except clicking the “enable teams” button in the dash.
  • The same user has set both SQL Clone Servers up, and is therefore position 1 in the dbo.SqlClonePrincipals table; however, you could easily overwrite this one value manually if needed, or use a local copy with the same #1 user to generate the script initially (e.g. Chris.Unwin is ID 1 in both the Source and Target Config DBs below), and then just add the missing user manually to the target Config DB.

1. SQL Data Compare – my recommended approach

SQL Data Compare is our friend here. It knows all about constraints and identities and anything else we may come across. This is a fairly small-scale need, however, so if you open it up and simply compare those 3 tables we’ve been looking at, it should be able to pick up the users and permissions to insert, and quickly and easily generate a script we can run against the target Config DB:

I’m going to generate the SQL Script and then run it against my target – notice how Data Compare takes care of any issues we might face with constraints and identity inserts to make sure everything 100% ties up? Good isn’t it!?

This was my script:

SET NUMERIC_ROUNDABORT OFF
GO
SET ANSI_PADDING, ANSI_WARNINGS, CONCAT_NULL_YIELDS_NULL, ARITHABORT, QUOTED_IDENTIFIER, ANSI_NULLS, NOCOUNT ON
GO
SET DATEFORMAT YMD
GO
SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL Serializable
GO
BEGIN TRANSACTION

PRINT(N'Drop constraints from [dbo].[TeamSqlClonePrincipalMappings]')
ALTER TABLE [dbo].[TeamSqlClonePrincipalMappings] NOCHECK CONSTRAINT [FK_TeamSqlClonePrincipalMappings_SqlClonePrincipals_SqlClonePrincipalId]
ALTER TABLE [dbo].[TeamSqlClonePrincipalMappings] NOCHECK CONSTRAINT [FK_TeamSqlClonePrincipalMappings_Teams_TeamId]

PRINT(N'Drop constraint FK_TeamImageMappings_Teams_TeamId from [dbo].[TeamImageMappings]')
ALTER TABLE [dbo].[TeamImageMappings] NOCHECK CONSTRAINT [FK_TeamImageMappings_Teams_TeamId]

PRINT(N'Drop constraint FK_TeamInstanceMappings_Teams_TeamId from [dbo].[TeamInstanceMappings]')
ALTER TABLE [dbo].[TeamInstanceMappings] NOCHECK CONSTRAINT [FK_TeamInstanceMappings_Teams_TeamId]

PRINT(N'Add rows to [dbo].[SqlClonePrincipals]')
SET IDENTITY_INSERT [dbo].[SqlClonePrincipals] ON
INSERT INTO [dbo].[SqlClonePrincipals] ([Id], [FriendlyName], [Role], [Sid], [IsGroup]) VALUES (2, N'RED-GATE\Chris.Kerswell', 2, 0x010500000000000515000000fd43461e19525f12828ba62895680000, 0)
INSERT INTO [dbo].[SqlClonePrincipals] ([Id], [FriendlyName], [Role], [Sid], [IsGroup]) VALUES (3, N'RED-GATE\David.Ong', 3, 0x010500000000000515000000fd43461e19525f12828ba628cb7b0000, 0)
SET IDENTITY_INSERT [dbo].[SqlClonePrincipals] OFF
PRINT(N'Operation applied to 2 rows out of 2')

PRINT(N'Add rows to [dbo].[Teams]')
SET IDENTITY_INSERT [dbo].[Teams] ON
INSERT INTO [dbo].[Teams] ([Id], [Name], [Color], [IncludeAllPrincipals], [IncludeAllImages], [IncludeAllInstances]) VALUES (2, N'TeamKirito', N'#373737', 0, 0, 0)
INSERT INTO [dbo].[Teams] ([Id], [Name], [Color], [IncludeAllPrincipals], [IncludeAllImages], [IncludeAllInstances]) VALUES (3, N'TeamAsuna', N'#fc9003', 0, 1, 1)
SET IDENTITY_INSERT [dbo].[Teams] OFF
PRINT(N'Operation applied to 2 rows out of 2')

PRINT(N'Add rows to [dbo].[TeamSqlClonePrincipalMappings]')
INSERT INTO [dbo].[TeamSqlClonePrincipalMappings] ([TeamId], [SqlClonePrincipalId]) VALUES (2, 3)
INSERT INTO [dbo].[TeamSqlClonePrincipalMappings] ([TeamId], [SqlClonePrincipalId]) VALUES (3, 2)
PRINT(N'Operation applied to 2 rows out of 2')

PRINT(N'Add constraints to [dbo].[TeamSqlClonePrincipalMappings]')
ALTER TABLE [dbo].[TeamSqlClonePrincipalMappings] WITH CHECK CHECK CONSTRAINT [FK_TeamSqlClonePrincipalMappings_SqlClonePrincipals_SqlClonePrincipalId]
ALTER TABLE [dbo].[TeamSqlClonePrincipalMappings] WITH CHECK CHECK CONSTRAINT [FK_TeamSqlClonePrincipalMappings_Teams_TeamId]
ALTER TABLE [dbo].[TeamImageMappings] WITH CHECK CHECK CONSTRAINT [FK_TeamImageMappings_Teams_TeamId]
ALTER TABLE [dbo].[TeamInstanceMappings] WITH CHECK CHECK CONSTRAINT [FK_TeamInstanceMappings_Teams_TeamId]
COMMIT TRANSACTION
GO

and success! Everything has been replicated as we expect:

I also tried this on a VM that does not have the same AD access as I do locally, and the users were imported successfully – but because they can’t be resolved they don’t show in the Clone dashboard, so it doesn’t cause us any dead or invalid relationship issues.

2. & 3. Script from one DB to the other or SSIS load

Another simple-ish option would be to write an INSERT INTO … SELECT … FROM statement, or to create a very simple SSIS package if you can connect to both – you would have to take care of the constraints and identity inserts yourself, but it would be equally effective:
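To give a flavor, a minimal sketch of the manual T-SQL route might look like the below. It assumes a linked server called SourceClone pointing at the source instance and SqlClone_Config as the Config DB name (both are placeholders to substitute), that you run it against the target Config DB, and that the target only contains the default rows:

-- Sketch only: run against the TARGET SQL Clone Config DB.
-- [SourceClone] is a placeholder linked server pointing at the SOURCE instance,
-- and SqlClone_Config is a placeholder Config DB name - substitute your own.
SET IDENTITY_INSERT dbo.SqlClonePrincipals ON;
INSERT INTO dbo.SqlClonePrincipals ([Id], [FriendlyName], [Role], [Sid], [IsGroup])
SELECT [Id], [FriendlyName], [Role], [Sid], [IsGroup]
FROM [SourceClone].SqlClone_Config.dbo.SqlClonePrincipals
WHERE [Id] NOT IN (SELECT [Id] FROM dbo.SqlClonePrincipals); -- skip the admin user that exists in both
SET IDENTITY_INSERT dbo.SqlClonePrincipals OFF;

SET IDENTITY_INSERT dbo.Teams ON;
INSERT INTO dbo.Teams ([Id], [Name], [Color], [IncludeAllPrincipals], [IncludeAllImages], [IncludeAllInstances])
SELECT [Id], [Name], [Color], [IncludeAllPrincipals], [IncludeAllImages], [IncludeAllInstances]
FROM [SourceClone].SqlClone_Config.dbo.Teams
WHERE [Id] NOT IN (SELECT [Id] FROM dbo.Teams); -- skip anything already present (e.g. a default team)
SET IDENTITY_INSERT dbo.Teams OFF;

INSERT INTO dbo.TeamSqlClonePrincipalMappings ([TeamId], [SqlClonePrincipalId])
SELECT [TeamId], [SqlClonePrincipalId]
FROM [SourceClone].SqlClone_Config.dbo.TeamSqlClonePrincipalMappings;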

Important Note(s)

You will have to manually tie up any differences if you have already started using the new install with Teams and Permissions, as THAT will get MUCH more complicated – especially if you have already assigned images, instances and users to teams – so this process is best done at the very start of life for the new SQL Clone Server.

Also. You will have to restart the SQL Clone Server Service after loading the users and teams in, as SQL Clone won’t recognize they’re there straight away.
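If you’re scripting the move end to end, a one-liner like this will do it (a sketch – it assumes the service’s display name contains “SQL Clone”, so check with Get-Service on your install first):

# Restart the SQL Clone Server service so it picks up the newly inserted users and teams.
Get-Service -DisplayName '*SQL Clone*' | Restart-Service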

Conclusion

This is going to be a rare occasion for most – the SQL Clone Server is not designed nor intended to be duplicated many times. Although there is a relatively easy method for MOVING SQL Clone Servers, simply moving a part of the config, like Users & Teams, is definitely possible but not 100% fool-proof – so on said rare occasion, just be very, very careful!

Shall we begin? (With data classification)

“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.”
Albert Einstein

So I just spent about 20 minutes trying to come up with a suitable title for this blog post, and then it struck me – one of my favorite movies of all time (which I will soon be ensuring my wife and I watch again) is Star Trek Into Darkness, featuring the magnificent Benedict Cumberbatch, in which he masterfully growls the phrase “shall we begin?”.

This sums up perfectly where people find themselves at the beginning of their classification activities. “Where do I start?” is usually the first question I get asked – regardless of whether you’re using an Excel sheet, Azure Data Catalog or Redgate SQL Data Catalog to carry out this process, you will find yourself in the same place, asking the same question.

Classifying your SQL Server tables and columns is not straightforward, I’ll say that up front – you have to be prepared for whatever may come your way – but give yourself a fighting chance! Whether you’re looking to better understand your data, protect it, or prepare your business to better deal with things such as Subject Access Requests (SARs) and the Right to be Forgotten (*cough* I’m looking at _you_, GDPR *cough*) – or even just key development in systems containing sensitive information – this is the ultimate starting point. As per my blog post on data masking here, you can’t protect what you don’t know you have.

This is my effort to give you the best possible start with your classification process, whether this feeds into a wider data lineage process, data retention or, of course, data masking. So… shall we begin?

Get a taxonomy set up

This is perhaps the most crucial part of your success. Even if you have the best classification process in the world, it really means nothing if you’ve basically described your data in one of, say, 3 possible ways. The thing to bear in mind before getting started is that the data cataloging process is not specific to one job.

You may think at this point in time that you’re going to use it to highlight what you want to mask for Dev/Test environments, or maybe it’s your hit list for implementing TDE or column level encryption – but this _thing_ you’re building is going to be useful for everyone.

  • DBAs will be able to use this to help them prioritize the systems they look after and be more proactive when it comes to security checks, updates, backups etc.
  • Developers will be able to use this to better understand the tables and environments they are working on, helping them contextualize their work and therefore engage and work with any other teams or individuals who may be affected or who may need to be involved.
  • Governance teams and auditors will be able to use this to better understand what information is held by the business, who is responsible for keeping it up to date and how it is classified and protected.

The list goes on.

So all of the above will need to be engaged in a first run to help actually describe the data you’re working with. What do you actually care about? What do you want to know about data at a first glance? Below is the standard taxonomy that comes out of the box with Redgate’s Data Catalog:

Some of my favorites are in here, which I would encourage you to include as well! If nothing else, having Classification Scope as a category is an absolute must – but I’ll come to that soon. You can see, though, how being able to include tags such as who owns the data (and is therefore in charge of keeping it up to date), what regulation(s) it falls under, and even what our treatment policy is in line with any of those regulations, gives us so much more to go on. We can be sure we are appropriately building out our defensible position.

Having a robust Taxonomy will enable you to not only know more about your data but to easily communicate and collaborate with others on the data you hold and the structure of your tables.

Decide who is in charge

This seems like an odd one, but actually one of the most common questions I get is about who will be carrying out the classification process, and this is where the true nature of collaboration within a company is going to be absolutely critical.

Some people believe that a DBA or a couple of developers will suffice, but as you’ll see later on, this is not a simple process that only 1 or 2 people can handle by themselves. Be prepared to spend hours on this – the implementation of classification by its very nature means you are going to need a team effort in the first instance.

You will need representation from people who know the database structure, people who know the function of the various tables, and people who know the business and how data should be protected – which means collaboration between Dev, DBAs, Testers, Governance and DevOps – and you will need someone central to coordinate the effort. When you have key representation from these teams, it will be much easier to identify and collaborate on hot spots of data, so ensure you have this knowledge up front.

Get rid of what doesn’t matter

You may be surprised that the next step is technically an execution step, but it is an important point nonetheless and will absolutely help with the classification effort. This is where the Classification Scope category comes in, and this is why it’s my favorite.

One of the biggest problems people face when actually executing on their classification is the sheer enormity of the task. There is no “average” measure we can rely on, unfortunately, but even small schemas can be far from trivial – recently, a customer provided me with just ONE of their database schemas and it had well in excess of 1800 columns across dozens of tables. When you scale that up to potentially hundreds of databases, it rapidly becomes clear that going over every single column is going to be unmanageable.

To start, then, the knowledge brought by the team mentioned above will be invaluable, because we’re going to need to “de-scope” everything that is not relevant to this process. It is very rare to find a company where more than 50% of columns per database contain PII/PHI – and even if you are one of those companies, this process can help you too.

There could be many reasons that something shouldn’t be included in this process. Perhaps it is an empty table that exists as part of a 3rd-party database schema, such as in an ERP or CRM solution. It could be a purely system-specific table that holds static/reference data or gathers application-specific information. Regardless of what the table is, use the knowledge the team has to quickly identify these and then assign them all the necessary “Out of Scope” tag.

This will not only significantly reduce the number of columns you’re going to need to process, but will give you greater focus on what does need to be processed. One of the greatest quotes I’ve heard about this process comes from @DataMacas (a full-on genius, wonderful person and someone who over the years I have aspired to learn as much from as possible), who referred to it as “moving from a battleships-style approach to one more akin to minesweeper“. Which is just so incredibly accurate.

In my example database below, with only 150-odd columns, using the “Empty Tables” filter and then filtering down to the system tables I know about, I was able to de-scope just under half of the database, just as a starting point:
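If you want to do a similar first pass outside of the Catalog UI, a quick query along these lines (just a sketch using the standard system views) will list the tables in a database that currently hold no rows at all:

-- Tables with zero rows - likely candidates for an "Out of Scope" classification tag.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
HAVING SUM(p.rows) = 0
ORDER BY s.name, t.name;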

Figure out patterns and speed up

Many of the people carrying out this process will already have some anecdotal knowledge of the database as I’ve already mentioned, but now it’s time to turn this from what _isn’t_ important, to what is.

The fastest way to do this is to build up some examples of column naming conventions you already have in place across multiple databases – there will likely be columns with names that contain things like Name, Email or SSN in some format. Both SQL Data Catalog from Redgate and Microsoft’s Azure Data Catalog have out-of-the-box suggestions that look at your column names and flag what might be sensitive, for you to check and accept the classification tags.

Now, these suggestions are incredibly helpful, but they both have a reduced scope because they’re just matching against common types of PII, so it’s important to customize them to better reflect your own environments. You can do this fairly easily, in one or both of the following ways:

1 – Customize the suggestions

In Redgate’s SQL Data Catalog you can, prior even to looking at the suggestions and accepting them, customize the regular expressions that are run over the column naming conventions so that they are more indicative of your own schemas – either by editing existing rules, or by creating your own rules and choosing which tags should be associated with the columns as a result:

You can then go through and accept these suggestions if you so wish, obviously making sure to give them a sense check first:

2 – POWER ALL THE SHELL

In both of the aforementioned solutions you can call the PowerShell API to carry out mass classification against columns with known formats – this allows you to rapidly hit any known targets and further reduce the amount of time spent looking directly at columns. As an example, with the SQL Data Catalog PowerShell you can classify any columns it finds where the name is like Email but not like ID (as primary and foreign keys will, in most cases, fall under the de-scoping work we did above) with a tag for sensitivity and information type (full worked example here).
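Something along these lines – though I’ll stress that the cmdlet names and parameters below are indicative only (written from memory, so treat them as placeholders and lift the exact ones from that worked example):

# Indicative sketch only - check the SQL Data Catalog worked example for the exact
# module download, cmdlet names and parameters for your version of the Catalog.
$serverUrl = 'http://your-catalog-server:15156'   # placeholder Catalog URL
$authToken = $env:CATALOG_AUTH_TOKEN              # an auth token you've generated in the Catalog

Connect-SqlDataCatalog -ServerUrl $serverUrl -AuthToken $authToken

# Pull back every column the Catalog knows about for a given instance/database and
# keep the ones that look like email addresses but aren't keys.
$emailColumns = Get-ClassificationColumn -InstanceName 'sql-server-1' -DatabaseName 'DMDatabase' |
    Where-Object { $_.ColumnName -like '*email*' -and $_.ColumnName -notlike '*id*' }

# Tag both the Sensitivity and Information Type categories (default taxonomy names assumed).
$emailColumns | Add-ClassificationColumnTag -Category 'Sensitivity' -Tags @('Confidential - GDPR')
$emailColumns | Add-ClassificationColumnTag -Category 'Information Type' -Tags @('Contact Info')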

Finally – get classifying

This is the last stage, or the “hunt” stage. It’s time for us to get going with classifying what’s left i.e. anything that wasn’t de-scoped and wasn’t caught by your default suggestions or PowerShell rules.

You can obviously start going through each column one by one, but it makes the most sense to start by filtering down to the tables with the highest concentration of columns (i.e. the widest tables), or to the columns specifically known to contain sensitive information (anecdotally or by number of tags), and classifying those as in or out of scope – recording, at the very least, what information they hold, who owns it and what the treatment intent is.
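As with the empty-table check earlier, a quick query against the system views (a sketch, nothing more) will give you a feel for where to start by ranking tables by how wide they are:

-- Rank tables by column count - the widest tables are often the best place to start hunting.
SELECT s.name AS schema_name, t.name AS table_name, COUNT(c.column_id) AS column_count
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
GROUP BY s.name, t.name
ORDER BY column_count DESC;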

The approach I take in this instance is to use filtering to its utmost – in SQL Data Catalog we can filter by table and column names, but also by data type. Common offenders can be found among certain NVARCHAR, XML or VARBINARY types – NVARCHAR(MAX), for example, sounds to me like an XML, JSON, document or free-text field, which will likely contain some kind of difficult-to-identify but ultimately sensitive information.

Following the NVARCHAR classification I move on and look at DATETIME and INT/DECIMAL fields for any key candidates like dates when someone attended an event or even a Date of Birth field. This helps especially when the naming conventions don’t necessarily reflect the information that is stored in the column.
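If it helps to build that candidate list outside the Catalog, a rough query like this (again, just a sketch) pulls out the MAX-length string columns along with the XML, VARBINARY, date and numeric columns worth a second look:

-- Columns whose data types often hide sensitive data: free-text/MAX strings, XML and
-- binary blobs, plus dates and numerics that may hold DOBs, attendance dates, etc.
SELECT c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME, c.DATA_TYPE, c.CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS AS c
WHERE (c.DATA_TYPE IN ('nvarchar', 'varchar') AND c.CHARACTER_MAXIMUM_LENGTH = -1)
   OR c.DATA_TYPE IN ('xml', 'varbinary', 'datetime', 'datetime2', 'date', 'int', 'decimal')
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.COLUMN_NAME;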

Finally, one thing to add is that you will need access to the databases or tables at some point. You can’t truly carry out a full-on data classification process purely against the schema, especially for the reason above. Data will often exist in places you weren’t aware of, and its contents, format and sensitivity can only reasonably be identified and tagged if you have the support of the data team to do so.

Conclusion?

This is not a one time thing. The initial classification is going to be the hardest part but that can be mitigated if you follow some of the processes in this post and ultimately work as a team.

Classification, though, is ongoing. It is there to be an evergreen solution, something that provides context for any data governance or DevOps processes you have in place, and it should therefore be maintained and treated as what it is: the place everyone can go to gather information about the data they will be using, and about what the company might have at risk.