Flyway Desktop: Don’t be afraid of your own Shadow (DB)

Just don’t hold back. Don’t be afraid to make mistakes and stuff.
Kristen Stewart

<HolidayTalk>

Howdy folks! Welcome back! Well, I guess that should be aimed at me – it’s been a few weeks *cough* months *cough* since I last blogged anything on here and this is because I was on a sabbatical – I went to the Canary Islands with my wonderful wife for a few weeks and just spent the time doing what I do worst… relaxing. Anyway – enough about that and on to the post – but if you’re interested in seeing what we got up to while we were there, I took pictures every day and I’m PlantBasedSQL on Instagram too!

</HolidayTalk>

I’ve spent far too much time of late talking about Database Cataloging and Data Masking, and it occurred to me that it was about time for a new DevOps-y post – the trouble was, I had no idea what to write. Then something happened last week that I think could really help get people up and running, not just with Flyway Teams, but also with Flyway Desktop (formerly Redgate Change Control), the developer-assisting GUI found in Redgate Deploy.

Note: The problem I’m going to describe below is universal with Flyway Desktop as of writing – whether you’re using it for SQL Server, Oracle or anything else – and the solution I will describe is also universal, which is why I haven’t tailored this blog post to a specific RDBMS.

Flyway Desktop v5.0.682

The Tech

Flyway Desktop uses a principle called the Shadow Database; you have your dedicated development database (DEV), to which you make database-first changes, and the Shadow, which is an entirely separate database constructed by running your Baseline script against an empty database. Your Baseline is the script generated from an upstream environment like PROD or TEST containing… well, everything: all objects and the entire state of that database at that point in time. It’s useful because once you’ve created that baseline and run it to create the Shadow, a comparison is carried out to detect pending changes in DEV (so you don’t have to throw away any work that isn’t yet in PROD), and if any are found your initial V001 migration script is generated into your local repo. It’s pretty neat.

What is also really neat is that in certain situations (like swapping branches or resolving local migrations against migrations already on the database), Flyway Desktop cleans the Shadow DB and builds it again from scratch, running everything from the baseline up, all the way to your new migration script – this is awesome because you’re effectively doing a full database build every time you generate a migration, testing both that the scripts are buildable and that the database is deployable.

My Oracle Dev environment in SQL Developer & the Shadow DB
My SQL Server Dev environment in SSMS & the Shadow DB
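Under the hood, that clean-and-rebuild is conceptually just Flyway’s own clean and migrate commands pointed at the Shadow’s connection. Here is a minimal sketch of the equivalent, assuming the flyway CLI is on your PATH and with entirely made-up connection details (Flyway Desktop manages all of this for you):

# Wipe the Shadow completely, then rebuild it from the baseline plus every migration script
$shadowUrl = "jdbc:sqlserver://localhost;databaseName=MyApp_Shadow"
flyway "-url=$shadowUrl" -user=dev -password=dev clean
flyway "-url=$shadowUrl" -user=dev -password=dev migrate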

The Problem

What is not awesome, though, is that if you have a REALLY REALLY big database, with tens, if not hundreds, of thousands of objects, you might not want this baseline script to run every single time you create a new migration script – it could take minutes, even hours!

Following that radical school of thought that, shockingly, “not all databases are perfect”, there will also be occasions where the size of the baseline or the number of objects is irrelevant; one example is if you use 3- or 4-part naming conventions in your SQL Server databases. A backup and restore will work, but if the baseline tries to create a view, for instance, that references a cross-server or cross-database object that doesn’t exist, then the script simply won’t run against the Shadow – it doesn’t hang or take forever, it just fails. Caveat: if you’re using Azure SQL Database then cross-database references aren’t really an option anyway, so this particular problem won’t bite you, but invalid objects can still cause major problems both with the Shadow and with your own databases later down the line!
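To make that concrete, here’s a hypothetical example of the kind of object that causes it. A view like this creates happily in DEV, where the referenced database exists, but the same statement inside a baseline script fails outright when run against an empty Shadow (all names here are invented):

CREATE VIEW dbo.CustomerOrders
AS
SELECT c.CustomerId, c.CustomerName, o.OrderTotal
FROM dbo.Customer AS c
JOIN ReportingDB.dbo.Orders AS o ON o.CustomerId = c.CustomerId; -- ReportingDB doesn't exist wherever the Shadow lives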

The Solution

In previous situations like this, such as with SQL Change Automation, we were able to use a SQL Clone as the baseline instead of a full baseline script, and I have no doubt that that kind of functionality will be available in Flyway Desktop in the future (you’re possibly already reading this laughing to yourself thinking “Chris, it’s been a feature for months now!”, but right now it isn’t, so indulge me!).

But it got me thinking: is there a way to get around the “big or invalid Shadow DB” problem… now?

Running Flyway Desktop, there are quite a few things happening under the hood, but the method of “cleaning” and “updating” the Shadow database is, you guessed it… Flyway. Flyway has a number of callbacks which can run at any point during the cycle: between migrations, after cleaning, whatever you want.

Definition of Callbacks from the Flyway website
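In practice a callback is nothing more than a script placed in one of Flyway’s configured locations and named after the event it hooks into, for example (file names here are purely illustrative):

afterClean.sql              -- runs every time the schema is cleaned
beforeMigrate__prep.sql     -- runs before every migrate; the '__prep' part is just an optional description
beforeEachMigrate.sql       -- runs before each individual migration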

My assumption when first approaching the problem was that I could use a beforeMigrate or afterClean callback within my local repository to effectively swap out the database I was using for the Shadow – however, in my initial testing with Oracle (which I have since also reproduced in SQL Server) this turned out to be a big no-no. The reason? When Flyway runs ANY command, it initializes the JDBC connection first – even for callbacks, and even if it’s just running a script that does nothing in the context of the database. This means I was effectively trying to drop the Shadow whilst connected to it – so depending on the RDBMS I experienced one of two scenarios:

  1. The command runs successfully and the Shadow database is replaced, which kills the JDBC connection, so Flyway stops and never migrates the new Shadow.
  2. The command doesn’t run successfully because “the database is in use”, so nothing happens at all.

This was… annoying, and it was thanks to a member of the development team that we were able to establish exactly what was happening. Fortunately, I work with some amazing people and we were able to come up with an ingeniously simple solution that all of a sudden creates a brand new realm of possibilities.

A new callback: beforeConnect

The new callback ‘beforeConnect‘, delivered by my rather excellent colleague, was released in Flyway 8.0.5 and will (as of writing) soon be released in Flyway Desktop, meaning that if you want to use a SQL Clone database as the Shadow for your SQL Server Flyway Desktop project, you can! If you want to use a Pluggable Database as the Shadow for your Oracle Flyway Desktop project, you can!

Note: These are the two I’ve tested; however, beforeConnect will run any and all scripts you give it prior to the JDBC connection being established, meaning you can use any method you like for replacing the Shadow, and you can also include it upstream in your pipeline in case you have any preparation steps you require pre-deployment.

One (well two) Solution(s)

Like I said, I initially tried this out with an Oracle pluggable database to test whether it was feasible – PDBs have been around for a long time and have only grown in popularity. I have a copy of the ACCEPTANCE PDB, which has the Flyway_Schema_History table in it, and I use it as the base PDB from which I copy each time. The script below will only run, though, if it doesn’t detect the Flyway_Schema_History table on the Shadow, because that means the Shadow has been cleaned – if the table is still present, there is no need to replace it.

Same story with SQL Server, though here I’m using SQL Clone to “reset” the cloned Shadow database – this works like a backup/restore but faster, AND it solves any pesky 3-part naming convention errors you might have in your baseline!

Oracle

beforeConnect.cmd

cd C:\[Your Local Repo]
echo @clone-shadow.sql | sqlplus -L -S [User]/[Password]@localhost:1521/ORCL AS SYSDBA

clone-shadow.sql

DECLARE
    HistoryTable INT;
BEGIN
    SELECT COUNT(*) INTO HistoryTable
    FROM CDB_TABLES t
    LEFT JOIN DBA_PDBS p ON p.CON_ID = t.CON_ID
    WHERE p.PDB_NAME = '[ShadowDBName]'
      AND t.OWNER = 'HR'
      AND t.TABLE_NAME = 'flyway_schema_history';
    IF HistoryTable = 0 THEN
        execute immediate 'alter pluggable database [ShadowDBName] close immediate';
        execute immediate 'drop pluggable database [ShadowDBName] including datafiles';
        execute immediate 'create pluggable database [ShadowDBName] from [BasePDBName]';
        execute immediate 'alter pluggable database [ShadowDBName] open';
    END IF;
END;
/

SQL Server

beforeConnect.ps1 (uses DBATools PowerShell module)

#Set Variables

$instance = '[Machine Name]'
$instanceName = '[Instance Name]'
$machinePlusInstance = $instance + "\" + $instanceName
$cloneServer = "http://" + $machinePlusInstance + ":14145"

# Query the Shadow DB to see if it has been cleaned

$SqlQuery = "SELECT COUNT(*) FROM [YourShadowDatabase].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'Flyway_Schema_History'"
$result = Invoke-DbaQuery -SqlInstance $machinePlusInstance -Query $SqlQuery

# If it has been cleaned, replace it

If($result.Column1 -eq 0) {
    Connect-SqlClone -ServerUrl $cloneServer
    $SqlServerInstance = Get-SqlCloneSqlServerInstance -MachineName $instance -InstanceName $instanceName
    $CloneToReset = Get-SqlClone -Name '[YourShadowDatabase]' -Location $SqlServerInstance
    Reset-SqlClone -Clone $CloneToReset
}

Setting it all up

The above callback works wonders when you’re simply replacing your copy of the Shadow DB instead of running the baseline every time, but how do you set it up in the first place? How do you set up the full project from scratch? Well, it’s actually pretty easy, step by step:

1 – Create the Dev and Shadow Databases

2 – Developer creates local git repo & creates new FWD project

3 – Developer links DEV database & commits schema model

4 – Developer “sets up” shadow database and generates Baseline migration

5 – !IMPORTANT! User does NOT hit finish (else you get an ugly error)

6 – Developer changes the flyway.conf file in the local repo to a) baseline on migrate and b) use the baseline script just generated (see the example conf snippet after these steps)

7 – Developer hits finish and the Flyway_Schema_History table is created, with the baseline marked as applied (no clean is ever run)

8 – Changes can now be made, scripts generated and put into Version Control as expected

9 – Add your callbacks (as above) to your repo to replace the Shadow and filter this file out with your gitignore*

*SQL Server Example given above

10 – Each user that pulls down the project will need their own beforeConnect callback to recreate their own Shadow DB, but once they’ve created it, the .gitignore will filter it out by default and they won’t need to create it again
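For reference, the flyway.conf change in step 6 only needs a couple of settings, something along these lines (the baseline version must match whatever version number Flyway Desktop gave your generated baseline script, so treat the value below as a placeholder):

# Mark the baseline as already applied on the first migrate, rather than failing or re-running it
flyway.baselineOnMigrate=true
# Must match the version of the baseline script that was just generated (placeholder value)
flyway.baselineVersion=001.20211122094355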

Done

SQL Data Catalog, Data Masker and your DevOps pipeline: How do I make sure everything is being masked?

“However fast regulation moves, technology moves faster. Especially as far as data is concerned.”
Elizabeth Denham

You’re probably sick of me constantly talking about how the cataloging of columns should be part of the DevOps upstream deployment process. I’ve blogged about it. I’ve even produced a video demonstrating this in action. But one question that this always throws up is:

If we include cataloging in the upstream process, how do we make sure our masking sets are also staying up to date?

The benefit of including the classifications in the upstream pipeline is that nothing ever gets to Production that hasn’t been classified – we constantly have a perfectly up-to-date picture of our structured data estate: what the nature of the data is, how it is distributed, what risk is associated with which systems, and so on. But one of the biggest wins is that we always know which fields need to be masked when we’re pulling copies back into non-Production.

Add a table? Add columns? We know about them, they’re classified, they’re deployed… so now they need to be masked on our next refresh. But how? Well it all depends on which approach we want to take:

  • Automated
  • Manual

Doesn’t it always boil down to those options? What I mean is that either we have an appetite to completely regenerate our masking set every single time based on our classifications (Automated), or we want to ensure that we configure each rule ourselves (Manual). Both have benefits and drawbacks.

Automated

How: We can generate a masking set using the SQL Data Catalog PowerShell module each and every time as part of our pipeline. Add a column, tag it in the pipeline and then simply wait for the necessary rulesets to be generated (perhaps as part of your build) for you to run when you bring a copy back. Check out my walkthrough for how to set this up.

Benefits: The process is automatic. It’s headless and you don’t need to think about it at all. As long as the classifications are provided (and if you follow the steps from the blog post and video, you should be providing them) then you’re always generating rules for every classified column.

Drawbacks: This process can be fragile. If we don’t classify correctly we can end up masking in the wrong way, or trying to mask the wrong field (e.g. a Primary Key, a constraint, an Identity column etc.), which can cause masking failures that leave you spending time fixing the pipeline/masking set. It also means that the nature of the masking is entirely dependent on your classifications, and the values masked into the columns will be less realistic as a result (for example, you can’t generate Row-Internal Sync Rules using the integration).

Drawback Mitigation: To avoid the process breaking, be sure to really focus on how you set up your API settings and how you pick which categories and tags are used to generate the masking rules (as I discussed here). This will at least help you make sure you map common data sets onto the right columns (and don’t hit columns that have constraints or keys).

Only columns marked with Static Masking as the Treatment Intent will get a rule created for them
An Information Type is given to every column we intend on masking – these are then mapped to templates in masker to ensure more realistic data

Manual

How: Either rely on developers to check in masking set changes alongside their code changes, or build a manual intervention step into your upstream process to ensure that someone opens the masking set, updates the schema and rules within it, and then puts it back into the pipeline.

Benefits: This results in more accurate masking sets that are more likely to run and that generate significantly more reliable and realistic data as an end result. Masked DB copies can be more easily used for anything beyond simple development changes, including analytics etc. You can include any specialist rules you need and apply your own understanding and knowledge of the database.

Drawbacks: This is obviously a manual process – less than ideal. Anything that involves a human can invariably go wrong, because we are humans and we make mistakes (which we learn from, of course). It also takes significantly more time as part of the process.

Drawback Mitigation: This is a harder one to mitigate as we’re reliant on manual intervention; however, the best check could be another team member reviewing the changes (maybe as part of a pull request), or you could include an automated PowerShell script to reconcile the columns marked for masking in Data Catalog against the columns currently covered by the Data Masker masking set – this would help you understand whether all the necessary columns have been updated or whether any were missed. I have included an example of this PowerShell below.

### CHANGE THIS ###

$MaskingSet = "yourmaskingset.DMSMaskSet" # Your masking set including the DMSMaskSet file extension
$instance = "yourinstance" # The Instance as it is shown in Data Catalog that hosts the database
$DatabaseName = "yourdatabase" # The DB you want classification info for
$CatalogServer="http://yourmachine:15156" # The location of your catalog server, ending on :15156
$authToken="redacted" # Your Data Catalog Auth token from the Settings page
$tagName = "Static Masking" # The tag you're using to identify which columns need to be masked

### DON'T CHANGE THIS ###

Invoke-WebRequest -Uri "$CatalogServer/powershell" -OutFile 'data-catalog.psm1' -Headers @{"Authorization"="Bearer $authToken"}
Import-Module .\data-catalog.psm1 -Force
Connect-SqlDataCatalog -ServerUrl $CatalogServer -AuthToken $authToken 
$ColumnsMarkedForMasking = Get-ClassificationColumn `
    -InstanceName $instance `
    -DatabaseName $DatabaseName | Where-Object {$_.tags.name -eq $tagName} 
$MaskingSetXML = [xml](Get-Content -Path $MaskingSet)
$subrules = $MaskingSetXML.SelectNodes('//DMSSetContainer_MaskingSet/DMSSetContainer/DMSRuleBindingList/RuleSubstitution')
$internalrules = $MaskingSetXML.SelectNodes('//DMSSetContainer_MaskingSet/DMSSetContainer/DMSRuleBindingList/RuleRowInternal')
$shufflerules = $MaskingSetXML.SelectNodes('//DMSSetContainer_MaskingSet/DMSSetContainer/DMSRuleBindingList/RuleShuffle')
$searchreplacerules = $MaskingSetXML.SelectNodes('//DMSSetContainer_MaskingSet/DMSSetContainer/DMSRuleBindingList/RuleSearchReplace')
$TablesAndColumns = @()

$subrules | ForEach-Object {`
    $CurrentTable = $_.TargetTableName.value
    $_.DMSPickedColumnAndDataSetCollection.DMSPickedColumnAndDataSet.N2KSQLServerEntity_PickedColumn.ColumnName.value | ForEach-Object {$TablesAndColumns+= $CurrentTable + "." + $_ }
}

$internalrules | ForEach-Object {`
    $TablesAndColumns+= $_.TargetTableName.value + "." + $_.TargetColumnName.value
}

$shufflerules | ForEach-Object {`
    $CurrentTable = $_.TargetTableName.value
    $_.DMSPickedColumnCollection.DMSPickedColumn.N2KSQLServerEntity_PickedColumn.ColumnName.value | ForEach-Object {$TablesAndColumns+= $CurrentTable + "." + $_ }
}

$searchreplacerules | ForEach-Object {`
    $TablesAndColumns+= $_.TargetTableName.value + "." + $_.TargetColumnName.value
}

$result = $TablesAndColumns | Sort -Unique
$ColumnsNeedingRules = $ColumnsMarkedForMasking | Where-Object {($_.tableName + "." + $_.columnName) -notin $result}

"`nThere are " + $ColumnsMarkedForMasking.count + " columns that require masking for database " + $DatabaseName + " in SQL Data Catalog."
"You are masking " + $result.count + " distinct columns in masking set: " + $MaskingSet
"`nThe columns that do not currently have a mask configured are:`n"

$ColumnsNeedingRules | ForEach-Object {$_.tableName + "." + $_.columnName + "     (" + $_.dataType + ")"}

$next = Read-Host -Prompt "`nWould you like to see the columns currently in your masking set? (Y/N)"
if ($next -in ("Y", "y")) {$result}

This code can also be found on my GitHub here.

The output of running the script – 13 columns in the masking set, 14 columns outstanding to be masked

You can technically even use this same check approach for the automated masking set generation option, to ensure that everything has been tagged correctly.
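And if you’d like that check to actively gate a pipeline rather than just report interactively, a minimal sketch reusing the $ColumnsNeedingRules variable from the script above might look like this (the non-zero exit code convention is an assumption about your CI tool):

# Fail the pipeline step if any classified column still has no masking rule configured
if ($ColumnsNeedingRules.Count -gt 0) {
    Write-Error ("{0} classified columns in {1} have no masking rule in {2}." -f $ColumnsNeedingRules.Count, $DatabaseName, $MaskingSet)
    exit 1
}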