Moving from Redgate SQL Source Control pipelines to Flyway Desktop with Redgate Deploy

“Like all magnificent things, it’s very simple.”
Natalie Babbitt

There has been a lot of change over the years in the Redgate solutions – I hasten to add this is a good thing. Back in my day it was SQL Source Control to store your database in Version Control; at the time it was probably a 50/50 split between people who used Git and people who used other systems like SVN, TFVC (TFS/VSTS), Vault or Mercurial, and you could then use DLM Automation to build and deploy this state-based database project to Test, Prod and so on.

SQL Source Control and DLM Automation (later SQL Change Automation) have formed the basis for many a pipeline for many, many years, and they have been reliable, in some cases life-changing, for those who have used them… but the times, they are a-changin'!

These technologies are still a great option and are still present in Redgate Deploy for those they work well for, however with the rise of ever more distributed computing topologies, and the dominance of cloud-hosted architecture and PaaS databases in today's world, something new is needed.

Enter Flyway Desktop.

As you’ve seen in some of my previous posts, Flyway Desktop is really really easy to get up and running with, not only that but it combines the State and Migrations models together creating one repo with ALL the benefits, and none of the deciding which model is best for you. It was architected from the ground up to be 3 things:

  • Ingeniously simple: to set up, to use, to everything.
  • Cloud ready: designed for use with IaaS and PaaS database options
  • A combination of the best of the best: all of the benefits of previous Redgate solutions, few to none of the drawbacks

...but what if you’re already using Redgate?

Yes, Flyway Desktop and Redgate Deploy in general are super easy to get up and running with for new databases, even difficult, monolithic databases (thank you, Clone as shadow!), but what about projects you already have under source control? Like I mentioned, SQL Source Control has been around for years and is beloved by many, and SQL Change Automation is still in use by thousands too. We want to maintain the history of our changes for reference, and we don't want to simply disregard the whole pipeline. So the big question is: how do we upgrade our state-based pipeline? Let's find out together!

Note: This post is for people who want to or are interested in moving to a newer solution (and to give them an idea of what to expect) and in no way reflects any level of urgency you should be feeling – I’m certainly not pushing you to move any of your pipelines now, especially if you’re happy with what you have!

Setup

For starters I set up an end-to-end SQL Source Control and SQL Change Automation pipeline in Azure DevOps – my understanding of the approach I'm going to take is that it should work wherever your pipeline is (TeamCity & Octopus Deploy, Bamboo, whatever), so don't feel that this post is not for you just because I used Azure DevOps.

I set up a copy of the DMDatabase on my local SQL Developer Instance, and then created an Azure DevOps repo and cloned it down to my machine:

I linked my database to the repo, created a filter to filter out users and committed it to my repo – then I set up the YAML for the build, and the Release steps for SQL Change Automation:

My SQL Source Control Project in Azure DevOps (Git)
The YAML to build my SQL Source Control Project
Release Steps in Azure DevOps
Deployment Steps

Everything seems to be deploying ok – I've even set up an Azure SQL Database as the target for my database changes. Now we have this SQL Source Control -> SQL Change Automation pipeline running, let's investigate replacing it.

SQL Source Control

The first thing I did was to open Flyway Desktop and create a new project – I pointed the project at my Dev DB and at the same local repo that I host my SQL Source Control files in:

and without committing the state to my schema-model folder, only linking to the Dev database, we end up with our repo looking like this:

I'm going to delete the Redgate.ssc file, because we're no longer in SQL Source Control, and I'm going to move every other file to the schema-model folder that is now under my project name (DMDatabase) – full-on Copy-Paste style:

…and then hit refresh in the Schema Model tab of Flyway Desktop:

and… nothing should happen. Absolutely nothing, because the state of your project, the Schema-Model folder should now exactly match the state of your development database (assuming you had everything committed to SQL Source Control!) – so now we come across to the version control tab aaaand…

WAIT!

If we commit now it will break our CI build, because when we trigger with a new push my YAML will be expecting $(Pipeline.Workspace)/s/Database as the input, but now that we have a slightly altered project we want to build from a slightly different path. I'm going to temporarily disable my CI trigger in the YAML pipeline:

and now I’m going to Pull (to get the YAML file in my local repo) and then commit and push my changes:

Now I'm going to change the path in my build YAML file to $(Pipeline.Workspace)/s/Database/DMDatabase/schema-model, then save and re-enable Continuous Integration:
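In other words, once re-enabled, the relevant fragment of the build YAML ends up looking something like this (a sketch only – projectPath is a hypothetical variable name here, so feed the path into whichever input your SQL Change Automation build task actually uses; while the repo was being restructured, the trigger was simply set to trigger: none):

trigger:
- main

variables:
  # relocated project folder, now under the Flyway Desktop project
  projectPath: '$(Pipeline.Workspace)/s/Database/DMDatabase/schema-model'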

et voila!

SQL Change Automation sees it as a regular state based repo and builds and deploys it with no issues whatsoever:

and just like that, SQL Source Control is replaced – our teams can now pull down the latest copy of the repo with the Flyway Desktop project in it and open it. All they will need to do is re-specify their Dev Database Connection. If you're only using SQL Source Control, or SQL Source Control with the SQL Compare GUI for more manual deployments, then you're done! When you want to extend your pipeline, read on below.

SQL Change Automation

This is the step where we have to fundamentally change the way the pipeline works. It's easy to switch across from SQL Source Control to Flyway Desktop, which means we get immediate upgrades in speed, reliability and stability in our development process, especially where we're working with cloud-hosted databases.

With Redgate Deploy though, we're fundamentally leveraging the Flyway command-line capability for smooth, incremental deployments, and this is always a migrations-only deployment – to move across to using Flyway, we're going to need to make a few alterations to how the pipeline works.

First-things-first: We need some migrations, more specifically: THE migration. When you create a Flyway Desktop project usually you create a Baseline script. This script is the state of your Production environment(s), or a copy of them, and is used to basically be the starting point for your incremental migration scripts in the pipeline. The Baseline, once generated, is run against an empty database referred to in Flyway Desktop as the Shadow Database, although this can of course be a Clone too. Not every developer necessarily needs this – only the ones who will be generating the deployable artifacts, the migrations themselves, and putting them into source control, but they are definitely needed for deployments.

Note: I have some clients I’m working with who want every developer to affect schema changes and then immediately generate the migration for this and share with the team, but equally I have others who want 10 or so developers to share the responsibility of schema changes, and then once they’ve reviewed at the end of a sprint, they generate the Migration for the changes, source control it and approve it.

So in Flyway Desktop we set up our erasable database, our Shadow DB:

I use an empty database I stood up quickly in the Azure Portal:

and on the Generate Migrations tab I’m now prompted to create a baseline script:

I’m going to create the Baseline from my “Prod” environment that I’ve been using for my SQL Source Control deployments and hit baseline:

When you save and finish, this will run the baseline against the Shadow DB to recreate everything – and this gives you a chance to detect any changes you still have outstanding in the schema model: Flyway Desktop will compare the environments and detect any outstanding Dev changes, allowing you to also produce a migration for them.

Note: If your plan is to use this process to capture any outstanding code in a V002 "Delta" script to bring all environments back into line, you absolutely can, but I would advise you to make the script idempotent – if you add all the necessary IF EXISTS statements for the deployment you should be ok, and it will only create or alter the objects that need to be, in order to sync all the environments up.
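For instance, a guarded CREATE plus CREATE OR ALTER keeps a delta script safely re-runnable – a minimal sketch, where dbo.DM_AUDIT and the view are hypothetical names used purely to illustrate the pattern:

-- Only create the table if it doesn't already exist on the target
IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE object_id = OBJECT_ID(N'dbo.DM_AUDIT'))
BEGIN
    CREATE TABLE dbo.DM_AUDIT
    (
        AuditID   INT IDENTITY(1,1) PRIMARY KEY,
        ChangedOn DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
END;
GO

-- CREATE OR ALTER is safe whether or not the view already exists
CREATE OR ALTER VIEW dbo.vw_DM_AUDIT
AS
SELECT AuditID, ChangedOn FROM dbo.DM_AUDIT;
GO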

First, pull any pending changes from your repo, then commit and push this into your Git remote:

and it should look a little like this:

Now for second-things-second, the build. This is actually going to be a very simple step, perhaps the easiest to change. We’re already using YAML, and as you know from previous posts it’s really very easy to leverage the Flyway command line as part of your YAML pipeline, so I’m going to simply swap out the SQL Change Automation build YAML with an updated version of the Flyway YAML from that post:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'
 
steps:
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'
  displayName: 'Install Docker'
 
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway clean -url=$(JDBC) -user=$(userName) -password=$(password)
  displayName: 'Clean build schema'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway migrate -url=$(JDBC) -user=$(userName) -password=$(password)
  displayName: 'Run flyway for build'

I'll hold back my password and username, but note that the JDBC connection variable needs to be encapsulated in quotes, to prevent it being escaped or only partially run because of the semicolons:

"jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build"

and the locations variable was my newly created migrations folder:

$(Pipeline.Workspace)/s/Database/DMDatabase/migrations

Fortunately these few changes mean that I now have a green build where I’m cleaning my Build DB and then building all of my files from there:

Deploying to Production is the only thing left. There's a decision to be made here – because we're just invoking the Flyway Docker container, and we already have the YAML pipeline set up for the build, we can:

  • As part of the build, zip up the migrations from the repo and publish them as an artifact, which we can then hand off to the Release portion of Azure DevOps, or indeed any other solution such as Octopus Deploy, and run the Flyway command line from there (there's a sketch of the publish step after this list)
  • OR we can simply expand out the YAML file – discard the "Release" pipeline and go FULL pipeline-as-code (which is also easier to audit changes on).
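If you were to go with the first option, the publish step is a one-task addition to the build YAML – a sketch, where the artifact name is arbitrary and $(locations) is the migrations-folder variable defined earlier:

- task: PublishPipelineArtifact@1
  inputs:
    targetPath: '$(locations)'
    artifact: 'migrations'
  displayName: 'Publish migrations folder'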

Given that we're modernizing our deployment pipeline and introducing lean deployments of these incremental migration scripts, I'm opting for the latter, so I disable and archive my Release pipeline and simply expand my YAML file with an additional step, and an additional variable for the ProdJDBC connection in place of the Build DB:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway migrate -url=$(ProdJDBC) -user=$(userName) -password=$(password) -baselineOnMigrate=true -baselineVersion=001.20211210091210
  displayName: 'Deploy to Prod'

And of course, in that YAML, don't forget the all-important -baselineOnMigrate and -baselineVersion switches (which I've kept forgetting) – these are important because we'll be marking the baseline script as deployed against our target and not actually running the baseline script – we don't want to try to recreate all of the objects that already exist there.

This is the result:

Successful deployment to Prod, successful move to Flyway Desktop

Pre- and Post- Deployment Scripts

You might leverage pre- and post-deployment scripts in your SQL Source Control pipeline – something that has to happen each time before or after a deployment. If you want to maintain these in your new repo moving forwards you'll need to make use of the Flyway callback functionality: take your pre-deployment scripts and turn them into a beforeMigrate callback, and turn your post-deployment scripts into an afterMigrate callback (there's a minimal example after this list). These can sit in your migrations folder, but:

  1. You may not need these now – because you have access to the migrations-first deployment model, most changes can now be tailor-made to your deployment needs, such as injecting DML statements in with your DDL scripts
  2. They will also run every time against your Shadow DB when you generate a new migration – just something to be aware of.
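As a minimal sketch, an afterMigrate callback is just a .sql file named for the callback, sitting in the migrations folder – the role and grant below are hypothetical, purely to show the shape:

-- afterMigrate__reapply_permissions.sql (hypothetical example)
-- Flyway runs this after every successful migrate – including, per point 2
-- above, migrates against the Shadow DB
GRANT SELECT ON SCHEMA::dbo TO [ReportingReader];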

Final Word

It was much much easier than I thought it would be to move across, but I by no means believe that this will be as easy for everyone who needs or wants to move in the medium-long term. I am always an advocate of testing things out prior to setting them up in earnest, and would encourage you to try this workflow out for yourself first, perhaps in tandem with your SQL Source Control pipeline against a dummy Prod DB temporarily to see how comfortable your team is with the process, and to give yourself the time to ask the questions you might have.

3 simple pipelines for database development with Redgate Deploy – Part 1: Setup & GitLab

Society must adapt to diamonds, diamonds don’t adapt to society.
Abhijit Naskar

The world is changed… I feel it in the water… I feel it in the earth… smell it in the air. On a totally unrelated note, did you know December 2021 marks the 20th Anniversary of the Lord of the Rings films? Just in case you were looking for your reminder to go and watch those masterpieces again, this is that sign!

Seriously though – gone are the days when I would demonstrate database pipelines on one or two different technologies. Over the last six years I have walked people through database deployments using an array of CI/CD options: Jenkins, TeamCity, Octopus Deploy, Bamboo… and most recently I've spent most of my time on Azure DevOps. At times it can even feel like Azure DevOps is the only solution you'll need, but increasingly it's becoming obvious that isn't the case, and there are new, shiny providers who offer some amazing experiences and awesome functionality.

Now seems like the best time to explore 3 of the ones I’m coming across more and more – CircleCI, GitLab and GitHub Actions.

The interesting part of this is that I genuinely believe this will be incredibly easy. Maybe I'm naïve, but from the looks of all three they seem straightforward, understandable… and of course I'll be using Flyway in my pipeline, which is the easiest, cross-platform-friendly solution to use for this.

Note: I will assume you have some familiarity with Flyway in this post; if you don't, read more about the capabilities of Redgate Deploy here.

The Setup

For this “challenge” (if I can call it that) I’m going to be using Flyway Desktop installed on my Windows laptop, GitHub as my Version Control system and 5 Azure SQL Databases: 2 for “Dev” & “Dev_Shadow” (from which I will generate 3 independent repos) and 3 environments for PROD_GitLab, PROD_CircleCI and PROD_GitHub respectively. The structure of the database will be the DMDatabase, unsurprisingly the database I use for pretty much everything I do on this blog.

Note: Everything I’m doing today uses SQL Server (well… Azure SQL Database) however everything here is cross OS – you can set up similar pipelines for everything from Oracle to PostgreSQL to CockroachDB if you would like!

5 Databases ready to go – as shown in the Azure Portal

Fortunately CTRL+C, CTRL+V exists, so I'll only have to set up once and then I'll just copy the files across into the other two repos; I set up a new private repo in my GitHub specifically for GitLab, but you could easily repeat the steps below separately for GitHub Actions or CircleCI:

GitLab repo in GitHub

I cloned this down onto my Windows machine using Git Bash and then created and linked my Flyway Desktop project (don't know how? Try this!):

Link the development database and the shadow, then generate the Schema Model and the Baseline Migration from DMDatabase_PROD_GitLab (I just grab the relevant JDBC connection strings from the Azure Portal – this makes it much easier!). Don't forget to specify the list of Schemas – I forgot, and it ain't pretty (but it is an easy enough fix).

Then I commit and push the schema model files and the baseline migration up into GitHub:

For good measure I also changed the DM_CUSTOMER table on the Dev environment and generated a new schema-model and migration change so I know what is going to be deployed to my “Prod” environments as part of this test:

Then after committing and pushing to my repo, I copied all of the files over to my GitHub Actions and CircleCI repositories too:

A quick check of my other repos and everything seems good to go!

Principles

I'm setting up 3 separate pipelines in this post which will all effectively do the same thing, but for different "Prod" copies of databases; however, when building and deploying in practice you will have a number of tasks you'll want to accomplish in and around the process itself (really useful things like Unit Tests, Code Analysis, etc.). To keep things simple I will be creating a 6th database – the "Build" database – which will act as our CI validation step. Our process for all 3 pipelines will be:

  • Invoking a Flyway Clean against the "Build" database – this step will remove every object on the database, leaving it empty
  • Invoking a Flyway Migrate against the "Build" database – this step will build the database from scratch to validate that our baseline script and any further migrations build successfully
  • Invoking a further Flyway Migrate against our respective “Prod” database, to deploy the latest scripts we have generated.

GitLab

After following the Setup instructions above, in GitLab I need to create a New Project and I want it to Build/Deploy from my GitHub repo, so I pick "Run CI/CD for external repository":

Fortunately it's very easy to connect directly from GitHub, but you will have to generate a Personal Access Token, which you can do by going to https://github.com/settings/tokens and then authorizing the main repo you want to build from – for me this is GitLab_Flyway:

Painless! From here I select a CI/CD template and, because I'm starting from scratch, I'm going to use the starter 3-stage template:

It has a rather neat layout and is pretty darn easy to get up and running with:

I may have tried several combinations to get the Flyway Docker container up and running but essentially the code I ended up running for my pipeline was:

stages:          # List of stages for jobs, and their order of execution
  - build
  - deploy

variables:
    userName: "MyUserName"
    password: "MyPassword"
    prodJDBC: "jdbc:[TheJDBCConnectionToTheProdDBYoureUsing]"
    ciJDBC: "jdbc:[TheJDBCConnectionToTheBuildDBYoureUsing]"
    migrationPath: $CI_PROJECT_DIR

build-job:       # This job runs in the build stage, which runs first.
  image:
    name: flyway/flyway:latest-alpine
    entrypoint: [""]
  stage: build
  script:
    - flyway clean -url="$ciJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath"
    - flyway migrate -url="$ciJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath"

deploy-job:      # This job runs in the deploy stage.
  image:
    name: flyway/flyway:latest-alpine
    entrypoint: [""]
  stage: deploy  # It only runs when the build stage completes successfully.
  script:
    - flyway migrate -url="$prodJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath" -baselineOnMigrate=true -baselineVersion="MyBaselineVersion"

It was actually quite easy to spot where I had failed in previous runs and GitLab breaks things up quite nicely for us to see:

Some interesting things I noted using this setup:

  • Because we are deploying upstream to an environment that hasn't been deployed to with Flyway before, you have to pass in the -baselineOnMigrate switch; what was interesting though is that I also needed to specify the -baselineVersion, otherwise it tried to baseline as version 1, which of course did not exist as my baseline was named something completely different (V001_DateTimeStamp_blahlablah)
  • The entrypoint is specified as it is because it drops you right at the Flyway executable so you can issue the Flyway commands – without this it doesn’t work
  • You can ABSOLUTELY pass your variables in (like *cough* password and username) in a much more secure way through GitLab using masked CI/CD variables, but this was a great start for me (see the sketch after this list)
  • To pass in the files using a hosted repo, I had to use the environment variable $CI_PROJECT_DIR and that’s where the repo is checked out to, where your migrations are
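For example – a sketch, using the same names as the pipeline above – once userName, password, prodJDBC and ciJDBC are defined as masked variables under Settings > CI/CD > Variables in the GitLab project, the variables block in the YAML shrinks to just:

variables:
    migrationPath: $CI_PROJECT_DIR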

But it ultimately ended up with what I was expecting – the database was migrated using the Flyway command steps:

Conclusion

Is it possible to setup a nice easy pipeline from Dev -> Prod with Redgate Deploy and GitLab? Yes, absolutely it is, and you can build out the pipeline in whatever fashion you want. Thankfully, the Docker container makes things much much easier!

Now, let’s see how we get on with GitHub Actions!

Flyway Desktop: Don’t be afraid of your own Shadow (DB)

Just don’t hold back. Don’t be afraid to make mistakes and stuff.
Kristen Stewart

<HolidayTalk>

Howdy folks! Welcome back! Well, I guess that should be aimed at me – it’s been a few weeks *cough* months *cough* since I last blogged anything on here and this is because I was on a sabbatical – I went to the Canary Islands with my wonderful wife for a few weeks and just spent the time doing what I do worst… relaxing. Anyway – enough about that and on to the post – but if you’re interested in seeing what we got up to while we were there, I took pictures every day and I’m PlantBasedSQL on Instagram too!

</HolidayTalk>

I’ve spent far too much time of late talking about Database Cataloging and Data Masking, and it occurred to me that it was about time for a new DevOps-y post, but the trouble was I had no idea what to write – and then something happened last week that I think could really help get people up and running, not just with Flyway Teams, but also with Flyway Desktop (formerly Redgate Change Control), which is the developer-assisting GUI found in Redgate Deploy.

Note: The problem that I’m going to describe below is universal with Flyway Desktop as of writing – whether you’re using it for SQL Server or Oracle etc. the solution I will describe is also universal, which is why I haven’t tailored this blog post to a specific RDBMS.

Flyway Desktop v5.0.682

The Tech

Flyway Desktop uses a principle called the Shadow Database; you have your dedicated development database (DEV), which you make database-first changes to, and the Shadow, which is an entirely separate database constructed by running your Baseline script against an empty database. Your Baseline is the script generated from an upstream environment like PROD or TEST containing… well, everything: all objects and the entire state of that database at that point in time. It's useful because once you've created that baseline and run it to create the Shadow, a comparison is carried out to detect pending changes in DEV (so you don't have to throw away any work that's not yet in PROD), and if some are found your initial V001 migration script is generated into your local repo. It's pretty neat.

What is also really neat is that in certain situations (like swapping branches and resolving local migrations against migrations on the database), Flyway Desktop cleans the Shadow DB and builds it again from scratch including everything from the baseline up, all the way to your new migration script – this is awesome because you’re effectively doing a full database build every time you generate a migration and testing that the script is buildable and the database deployable.

My Oracle Dev environment in SQL Developer & the Shadow DB
My SQL Server Dev environment in SSMS & the Shadow DB

The Problem

What is not awesome though, is that if you have a REALLY REALLY big database, with tens, if not hundreds, of thousands of objects, you might not want this baseline script to run every single time you create a new migration script – it could take minutes, even hours!

Following along with that radical school of thought that, shockingly, "not all databases are perfect", there will also be other occasions where the size of the baseline or number of objects is irrelevant; one example might be if you use 3- or 4-part naming conventions in your SQL Server databases. A backup and restore will work, but if you try to actively create a view, for instance, that references a cross-server or cross-database object that doesn't exist, then the script cannot run against the Shadow, and instead of hanging or taking forever it simply won't work. Caveat: if you're using Azure SQL Database then obviously this isn't going to be that much of a problem for you for obvious reasons, but invalid objects can still cause major problems both with the Shadow and your own databases later down the line!
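To illustrate with a hypothetical example: a view like the one below builds happily wherever OtherDB exists, but an empty Shadow database (or Azure SQL Database, which doesn't support cross-database references) will reject it:

-- A three-part name means this view cannot be created on an empty Shadow
CREATE VIEW dbo.vw_CrossDatabaseCustomers
AS
SELECT c.CustomerID, c.CustomerName
FROM OtherDB.dbo.Customers AS c; -- [database].[schema].[object]
GO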

The Solution

In previous situations like this, such as with SQL Change Automation, we were able to use a SQL Clone as the baseline instead of a full baseline script, and I have no doubt that that kind of functionality will be available in the future (you're possibly already reading this laughing to yourself thinking "Chris, it's been a feature for months now!", but right now it isn't a feature, so indulge me for now!).

But it got me thinking: is there a way to get around the “big or invalid Shadow DB” problem… now?

Running Flyway Desktop, there are quite a few things happening under the hood, but the method of "cleaning" and "updating" the Shadow database is, you guessed it… Flyway. Flyway has a number of callbacks which can run at any time during the cycle: between migrations, after cleaning, etc. – whatever you want.

Definition of Callbacks from the Flyway website

My assumption when first approaching the problem was that I could use a beforeMigrate or afterClean callback within my local repository to effectively swap out the database I was using for the Shadow – however, in my initial testing with Oracle (which I have since also proven in SQL Server) this turned out to be a big no-no. The reason? When Flyway runs ANY command, it initializes the JDBC connection first, even with callbacks, and even if it's just running a script that does nothing in the context of the database. This means I'm effectively trying to drop the Shadow whilst connected to it – so depending on the RDBMS I experienced one of two scenarios:

  1. The command runs successfully, the Shadow database is replaced and the JDBC connection crashes out, causing Flyway to stop and not migrate the new Shadow.
  2. The command doesn't run successfully because "the database is in use", so nothing happens.

This was… annoying, and it was thanks to a member of the development team that we were able to establish exactly what was happening. Fortunately, I work with some amazing people and we were able to come up with an ingeniously simple solution that all of a sudden creates a brand new realm of possibilities.

A new callback: beforeConnect

The new callback 'beforeConnect', delivered by my rather excellent colleague, was released in Flyway 8.0.5 and will (as of writing) soon be released in Flyway Desktop, meaning that if you want to use a SQL Clone database as a Shadow for your SQL Server Flyway Desktop project, you can! If you want to use a Pluggable Database as a Shadow for your Oracle Flyway Desktop project, you can!

Note: These are the two I’ve tested however beforeConnect will run any and all scripts you give it prior to the JDBC connection being established meaning you can use any methods you like for replacing the Shadow, and also include it within your pipeline upstream in case you have any preparation steps you require pre-deployment.

One (well two) Solution(s)

Like I said, I initially tried this out with an Oracle pluggable database to test if it was feasible – PDBs have been available for a long time and have only grown in popularity. I have a copy of the ACCEPTANCE PDB which has the Flyway_Schema_History table on it, and I'm using it as a base PDB from which I copy each time – the script will only run, though, if it doesn't detect the Flyway_Schema_History table on the Shadow, because this means it has been cleaned; if the table is still present, there is no need to replace it.

Same story with SQL Server, though I’m using SQL Clone to “reset” the cloned Shadow database – this will work like a backup/restore but faster AND solve any pesky 3-part naming convention errors you might have with your baseline!

Oracle

beforeConnect.cmd

REM Run the PDB-swap script as SYSDBA before Flyway opens its JDBC connection
REM (-L: attempt the logon once only, -S: silent mode)
cd C:\[Your Local Repo]
echo @clone-shadow.sql | sqlplus -L -S [User]/[Password]@localhost:1521/ORCL AS SYSDBA

clone-shadow.sql

DECLARE
    HistoryTable INT;
BEGIN
    -- Has the Shadow PDB been cleaned? (its flyway_schema_history table
    -- disappears when it has)
    SELECT COUNT(*) INTO HistoryTable
    FROM CDB_TABLES t
    LEFT JOIN DBA_PDBS p ON p.CON_ID = t.CON_ID
    WHERE p.PDB_NAME = '[ShadowDBName]'
      AND t.OWNER = 'HR'
      AND t.TABLE_NAME = 'flyway_schema_history';
    -- If so, drop the Shadow PDB and recreate it from the base PDB
    IF HistoryTable = 0 THEN
        execute immediate 'alter pluggable database [ShadowDBName] close immediate';
        execute immediate 'drop pluggable database [ShadowDBName] including datafiles';
        execute immediate 'create pluggable database [ShadowDBName] from [BasePDBName]';
        execute immediate 'alter pluggable database [ShadowDBName] open';
    END IF;
END;
/

SQL Server

beforeConnect.ps1 (uses DBATools PowerShell module)

#Set Variables

$instance = '[Machine Name]'
$instanceName = '[Instance Name]'
$machinePlusInstance = $instance + "\" + $instanceName
$cloneServer = "http://" + $machinePlusInstance + ":14145"

# Query the Shadow DB to see if it has been cleaned

$SqlQuery = "SELECT COUNT(*) FROM [YourShadowDatabase].INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'Flyway_Schema_History'"
$result = Invoke-DbaQuery -SqlInstance $machinePlusInstance -Query $SqlQuery

# If it has been cleaned, replace it

If($result.Column1 -eq 0) {
    Connect-SqlClone -Server $cloneServer
    $SqlServerInstance = Get-SqlCloneSqlServerInstance -MachineName $instance -InstanceName $instanceName
    $CloneToReset = Get-SqlClone -Name '[YourShadowDatabase]' -Location $SqlServerInstance
    Reset-SqlClone -Clone $CloneToReset
}

Setting it all up

The above callback works wonders when you're simply replacing your copy of the Shadow DB instead of running the baseline every time, but how do you set it up in the first place? How do we set up the full project from scratch? Well, it's actually pretty easy, step by step:

1 – Create the Dev and Shadow Databases

2 – Developer creates local git repo & creates new FWD project

3 – Developer links DEV database & commits schema model

4 – Developer “sets up” shadow database and generates Baseline migration

5 – !IMPORTANT! User does NOT hit finish (else you get an ugly error)

6 – Developer changes the Flyway.conf file in the local repo to a) baseline on migrate and b) baseline with the script just generated (see the sketch below this list)

7 – Developer hits finish; the Flyway_Schema_History table is created and the baseline marked as applied (no clean is ever run)

8 – Changes can now be made, scripts generated and put into Version Control as expected

9 – Add your callbacks (as above) to your repo to replace the Shadow, and filter this file out with your gitignore*

*SQL Server example given above

10 – Each user that pulls down the project will need their own beforeConnect callback to recreate their own Shadow DB, but once they've created it, the .gitignore will filter it out by default and they won't need to create it again
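For step 6, the Flyway.conf change is just two lines – a sketch, where the version value should match the timestamped name of the baseline script your own project generated (the value shown is illustrative):

flyway.baselineOnMigrate=true
flyway.baselineVersion=001.20211210091210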

Done