3 simple pipelines for database development with Redgate Deploy – Part 1: Setup & GitLab

Society must adapt to diamonds, diamonds don’t adapt to society.
Abhijit Naskar

The world is changed… I feel it in the water… I feel it in the earth… smell it in the air. On a totally unrelated note, did you know December 2021 marks the 20th anniversary of the Lord of the Rings films? Just in case you were looking for your reminder to go and watch those masterpieces again, this is it!

Seriously though – gone are the days when I would demonstrate database pipelines on just one or two different technologies. Over the last 6 years I have walked people through database deployments using an array of CI/CD options – Jenkins, TeamCity, Octopus Deploy, Bamboo – and most recently I’ve spent most of my time on Azure DevOps. At times it can even feel like Azure DevOps is the only solution you’ll need, but increasingly it’s becoming obvious that isn’t the case: there are new, shiny providers out there offering some amazing experiences and awesome functionality.

Now seems like the best time to explore 3 of the ones I’m coming across more and more – CircleCI, GitLab and GitHub Actions.

The interesting part of this is that I genuinely believe it will be incredibly easy. Maybe I’m naïve, but from the looks of all 3 they seem straightforward and understandable… and of course I’ll be using Flyway in my pipelines, which is the easiest, most cross-platform-friendly solution to use for this.

Note: I will assume you have some familiarity with Flyway in this post – if you don’t, you can read more about the capabilities of Redgate Deploy here.

The Setup

For this “challenge” (if I can call it that) I’m going to be using Flyway Desktop installed on my Windows laptop, GitHub as my Version Control system and 5 Azure SQL Databases: 2 for “Dev” & “Dev_Shadow” (from which I will generate 3 independent repos) and 3 “Prod” environments for PROD_GitLab, PROD_CircleCI and PROD_GitHub respectively. The structure of the database will be the DMDatabase – unsurprisingly, the database I use for pretty much everything I do on this blog.

Note: Everything I’m doing today uses SQL Server (well… Azure SQL Database) however everything here is cross OS – you can set up similar pipelines for everything from Oracle to PostgreSQL to CockroachDB if you would like!

5 Databases ready to go – as shown in the Azure Portal

Fortunately CTRL+C, CTRL+V exists, so I’ll only have to set things up once and then copy the files across into the other two repos; I set up a new private repo in my GitHub specifically for GitLab, but you could easily repeat the steps below separately for GitHub Actions or CircleCI:

GitLab repo in GitHub

I cloned this down onto my Windows machine using Git Bash and then created my Flyway Desktop project and linked it up (don’t know how? Try this!):

Link the development database and the shadow, and generate the Schema Model and the Baseline Migration from DMDatabase_PROD_GitLab (I just grab the relevant JDBC connection strings from the Azure Portal – this makes it much easier!). Don’t forget to specify the list of schemas – I did forget, and it ain’t pretty (but it is an easy enough fix).

Then I commit and push the schema model files and the baseline migration up into GitHub:

For good measure I also changed the DM_CUSTOMER table on the Dev environment and generated a new schema-model and migration change so I know what is going to be deployed to my “Prod” environments as part of this test:

Then after committing and pushing to my repo, I copied all of the files over to my GitHub and CircleCI repositories too:

A quick check of my other repos and everything seems good to go!

Principles

I’m setting up 3 separate pipelines in this post which will all effectively do the same thing, but against different “Prod” copies of the database. In practice, when building and deploying, you will have a number of tasks you’ll want to accomplish in and around the process itself (really useful things like unit tests, code analysis etc.). To keep things simple I will be creating a 6th database – the “Build” database – which will act as our CI validation step, and our process for all 3 pipelines will be:

  • Invoking a Flyway Clean against the “Build” database – this step will remove every object on the database, leaving it empty
  • Invoking a Flyway Migrate against the “Build” database – this step will build the database from scratch to validate that our baseline script and any further migrations build successfully
  • Invoking a further Flyway Migrate against our respective “Prod” database, to deploy the latest scripts we have generated (a command-line sketch of these three steps follows below)
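
Stripped of any CI syntax, here’s what each of those steps boils down to at the Flyway command line – a minimal sketch, with placeholder connection details rather than real ones:

# 1. Clean – empty the "Build" database entirely
flyway clean -url="jdbc:[TheJDBCConnectionToTheBuildDB]" -user="[User]" -password="[Password]" -locations="filesystem:[PathToMigrations]"
# 2. Migrate – rebuild the "Build" database from the baseline + migrations to prove they run end to end
flyway migrate -url="jdbc:[TheJDBCConnectionToTheBuildDB]" -user="[User]" -password="[Password]" -locations="filesystem:[PathToMigrations]"
# 3. Migrate – deploy the same, now-validated migrations to the respective "Prod" database
flyway migrate -url="jdbc:[TheJDBCConnectionToTheProdDB]" -user="[User]" -password="[Password]" -locations="filesystem:[PathToMigrations]"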

GitLab

After following the Setup instructions above, in GitLab I need to create a New Project, and I want it to Build/Deploy from my GitHub repo, so I pick “Run CI/CD for external repository”.

Fortunately it’s very easy to connect directly from GitHub, but you will have to generate a Personal Access Token – which you can do by going to https://github.com/settings/tokens – and then authorize the main repo you want to build from; for me this is GitLab_Flyway:

Painless! From here I select a CI/CD template, and because I’m starting from scratch I’m going to use the 3-stage starter template:

It has a rather neat layout and is pretty darn easy to get up and running with:

I may have tried several combinations to get the Flyway Docker container up and running, but essentially the code I ended up running for my pipeline was:

stages:          # List of stages for jobs, and their order of execution
  - build
  - deploy

variables:
    userName: "MyUserName"
    password: "MyPassword"
    prodJDBC: "jdbc:[TheJDBCConnectionToTheProdDBYoureUsing]"
    ciJDBC: "jdbc:[TheJDBCConnectionToTheBuildDBYoureUsing]"
    migrationPath: $CI_PROJECT_DIR

build-job:       # This job runs in the build stage, which runs first.
  image:
    name: flyway/flyway:latest-alpine
    entrypoint: [""]
  stage: build
  script:
    - flyway clean -url="$ciJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath"
    - flyway migrate -url="$ciJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath"

deploy-job:      # This job runs in the deploy stage.
  image:
    name: flyway/flyway:latest-alpine
    entrypoint: [""]
  stage: deploy  # It only runs once the build stage has completed successfully.
  script:
    - flyway migrate -url="$prodJDBC" -user="$userName" -password="$password" -locations="filesystem:$migrationPath" -baselineOnMigrate=true -baselineVersion="MyBaselineVersion"

It was actually quite easy to spot where I had failed in previous runs and GitLab breaks things up quite nicely for us to see:

Some interesting things I noted using this setup:

  • Because we are deploying upstream to an environment that hasn’t been deployed to with Flyway before, you have to pass in the -baselineOnMigrate switch. What was interesting, though, is that I also needed to specify -baselineVersion, otherwise it tried to baseline at V1, which of course did not exist as my baseline was named something completely different (V001_DateTimeStamp_blahlablah)
  • The entrypoint is overridden (set to "") because the image’s default entrypoint is the Flyway executable itself; clearing it lets GitLab run the job’s script commands in a shell, which then call Flyway – without this it doesn’t work
  • You can ABSOLUTELY pass your variables in (like *cough* password and username) in a much more secure way using GitLab’s CI/CD variables rather than hard-coding them in the .yml, but this was a great start for me (see the sketch just after this list)
  • To pass in the migration files from the hosted repo, I had to use the environment variable $CI_PROJECT_DIR – that’s where the repo is checked out to, i.e. where your migrations are
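
On the variables point above: here’s a rough sketch of the more secure approach. The variable names below are ones I’ve made up – they would be defined as masked variables under the GitLab project’s Settings > CI/CD > Variables, rather than written into the .gitlab-ci.yml itself:

# Sketch only – FLYWAY_PROD_URL, FLYWAY_USER and FLYWAY_PASSWORD are assumed to be masked GitLab CI/CD variables
deploy-job:
  image:
    name: flyway/flyway:latest-alpine
    entrypoint: [""]
  stage: deploy
  script:
    # the secret values never appear in the repo; the runner injects them as environment variables at run time
    - flyway migrate -url="$FLYWAY_PROD_URL" -user="$FLYWAY_USER" -password="$FLYWAY_PASSWORD" -locations="filesystem:$CI_PROJECT_DIR"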

But it ultimately ended up in what I was expecting – the database was migrated using the Flyway command steps:

Conclusion

Is it possible to set up a nice easy pipeline from Dev -> Prod with Redgate Deploy and GitLab? Yes, absolutely it is, and you can build out the pipeline in whatever fashion you want. Thankfully, the Docker container makes things much, much easier!

Now, let’s see how we get on with GitHub Actions!

Creating rollback scripts automatically with SQL Compare and SQL Data Compare in Azure DevOps (migrations approach)

“Life’s under no obligation to give us what we expect.”
Margaret Mitchell

Hello everyone and happy 2021! Wait, what? It’s FEBRUARY!?! Oops.

You may well have noticed I have not been around for a couple of months. That is because I’ve been struggling. A lot. We’ve had some home worries, and with the state of the world playing on our minds constantly, it has never felt right to sit down and blog anything other than some of the recipes I’ve been working on (purely to get out of my own head!). But this week I was asked about rollbacks when working with the migrations-based approach in SQL Change Automation, and I couldn’t resist writing something.

Before I get into this though, I want to be very clear about my preferences. I don’t believe database changes SHOULD ever be rolled back. Always forward. If for any reason you need to undo a change – whether for a disaster or just because you want to, and other code is being rolled back – then this should be a task: a rollback branch should be created, the respective changes should be made in dev and then, using your fabulous, flexible and repeatable pipeline, merged, built, checked, tested and deployed upstream to Prod. This rolls the database forwards, and does so in a way that is controlled, auditable and prevents data loss.


However there MAY be some scenarios where a rollback is unavoidable, for example if you are required to do so immediately, or you’re not sure which of the 3 changes that were just deployed actually caused something to go bang, so you want to blanket undo until you can investigate properly.

With this in mind, there are a couple of ways of rolling back in the migrations approach that can absolutely be part of your pipeline, one or two of which I want to try and flesh out in this post. This post would of course be much shorter if we were working with the state-based approach, as we would simply need to compare a previous artifact with our target and generate a down script – but as we know, the state-based method comes with its own challenges.

  1. Generate down scripts up front – this is an option that is described fairly well in the SQL Change Automation documentation, and may be preferable to some; however it is limited to Visual Studio, definitely involves some additional work for developers, and potentially generates quite a lot of scripts that may have to be rationalized
  2. Ad-hoc rollback with SQL Compare – again, this is described in the documentation, but the changes will then have to be recaptured in the development pipeline in some way; plus this method is ideal for programmable objects, but not so much for table changes.

But I think we’re onto something there with a combination of these 2 approaches. Is there a way to use SQL Compare and SQL Data Compare alongside SQL Change Automation, to automatically generate down scripts as artifacts, before each Production deployment? This is what I want to investigate.

Set Up

As usual, I set up an Azure DevOps repo (this time called RollBackPipe) and created myself a set of local databases that I’ve been using to record videos lately so I now have the MaskMePlease database in Dev, Test, Staging and Production flavors. I’m doing this locally on my machine with an Azure DevOps agent I had hanging around anyway – it saved me firing up a bunch of Azure SQL DBs, and I just used SQL Clone.

Implementation

I got the pipeline running straight through. I simply used YAML for this pipeline as I didn’t want to “faff” around with Release stages etc., but this could of course also have been achieved with Classic Pipelines.

Ignore the terrible stage names – essentially I was doing 3 things with SQL Change Automation:

  • Database Build: Producing a NuGet package to represent the successfully building files stored in my Git repo, to consume later in the pipeline
  • Create Release Artifact: This stage looks at the target environments (Test, STG and Production) and creates a number of reports in my local DatabaseDeploymentResources folder for me to review
  • Deploy From Release Artifact: Take the patch script that has been produced and run this against the target DB

So it was time to try and build in the SQL Compare command lines. Note: if I were doing this on a Microsoft-hosted Azure Pipelines agent I might need to run this in a container or programmatically install the command lines instead, but fortunately they’re already installed on my machine, so I could just add a command line stage.

There were actually a couple of ways to do this: I could pass the details of the 2 databases in completely via the command line (full documentation here), I could generate an XML config file to pass in and store that in version control, or I could pre-construct a SQL Compare / SQL Data Compare “Project” and just pass that in. The latter seemed easier, so of course that’s what I did!
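
For reference, the first (fully command-line) option would have looked something like the sketch below. Treat it as an illustration only – check the switch names against the SQL Compare command line documentation – and note that it compares Production to Staging in the same reversed direction as my saved project:

# Hypothetical sketch of option 1 – passing both databases straight in on the command line instead of using a saved project
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe" /Server1:".\TOOLS" /Database1:"MaskMePlease_Production" /Server2:".\TOOLS" /Database2:"MaskMePlease_STG" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql"'
  env:
    BuildId: $(Build.BuildId)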

I opened up SQL Compare and did a Database to Database comparison for Staging and Production, but in the reverse direction to ensure we create a down script and not an up script!

I then saved this as a project, and then, using the red SQL Compare logo in the top left, I was able to launch SQL Data Compare with the same project so it loaded the settings for me. The reason for including a data comparison is that if we undo anything that was created as part of a migration and then try to redeploy it, it won’t actually be recognized as missing, because the Migration Log table tracks all the migrations and will still say it has been deployed. So we need to compare the __MigrationLog tables too, to re-sync them back to the state they were in before we ran the migrations.

However, I DID make sure that before comparing I included only the __MigrationLog table in the comparison, in Tables and Mappings:

I then had to change the Comparison Key being used for the migration log table to [Custom], because the default comparison key includes the date/time executed, so this is what I went for:

I saved this project down with the Compare script:

and then following the rather helpful documentation on the command lines, I was able to make the following calls as part of my pipeline (YAML included):

# Generate a schema rollback (down) script by comparing Production to Staging with the saved SQL Compare project
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDownScript.scp" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
# Do the same for data (primarily to re-sync the __MigrationLog table) with the saved SQL Data Compare project
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Data Compare 14\sqldatacompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDataDownScript.sdc" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\DataDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)

and sure enough, having this run after the deployment to Staging but before the deployment to Production, I was able to get the reports produced into the artifacts folder with the relevant changes documented:

So it was time to try it out! I created a BadTable on Dev and generated the migration for it, and committed it to source control:

This then triggered the pipeline to run:

and produced the correct reports and .sql scripts:

Then I ran these scripts to roll back Production (of course I could have modified them to preserve any data I needed to retain – this actually works best with programmable objects like sprocs). Then I re-ran the deployment pipeline, and the BadTable lives once again!
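
If you did want to apply those generated down scripts from the pipeline rather than by hand, a hedged sketch using the same CmdLine task pattern might look like the below. To be clear: this is an illustration only, not a step in my actual pipeline, and it assumes the scripts need no manual editing (for example to preserve data) first:

# Hypothetical step – apply the generated down scripts with sqlcmd (not part of my actual pipeline)
- task: CmdLine@2
  inputs:
    script: |
      sqlcmd -S .\TOOLS -d MaskMePlease_Production -E -i "C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql"
      sqlcmd -S .\TOOLS -d MaskMePlease_Production -E -i "C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\DataDowngradeScript.sql"
  env:
    BuildId: $(Build.BuildId)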

So does this work with multiple changes too? I created a stored procedure called “deletemeplease” and modified our BadTable to add a “randomthing” column, to see if we could roll those back:

Production after deployment:

Production after rollback:

All I have to do to roll forwards again now is run my pipeline:

and they’re back:


Conclusions

Is it perfectly possible to have a version specific script output as an artifact as part of your Azure DevOps pipelines? Yes it absolutely is.

Is it perfectly possible to rollback changes that you’ve made when taking a Migrations-First approach to database development? Yes it absolutely is.

Would I recommend this approach? No. Absolutely not.

Just because this is possible doesn’t mean it is good process. I’ve been asked about it enough times that I was more curious than anything – however, after actually IMPLEMENTING this process, it is clear to see there are many aspects of it that can fall down, and it makes it clearer now, more than ever, that the best approach is either to roll forwards from Dev, through the pipeline, in an automated, auditable and controllable way, or, if we just need to roll back temporarily because other code is being rolled back, to use SQL Compare ad-hoc. This pipeline was very simple, with very few changes, and when I made mistakes I could rectify them easily myself – but with a larger dev team, under pressure to fix things, and with many changes to consider, this approach can very quickly fall apart.

Long story short, (I believe) we should follow and trust the documentation.

YAML from Azure DevOps:

trigger:
- main

pool: 'default'

steps:
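# Build the SQL Change Automation project from the repo and package it as a versioned NuGet artifact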
- task: RedgateSqlChangeAutomationBuild@4
  inputs:
    operation: 'Build SQL Change Automation Project'
    sqlProj: 'RollBackPipe\RollBackPipe.sqlproj'
    packageName: 'MaskMePlease.Database'
    tempServerTypeBuild: 'localDB'
    nugetPackageVersionSelector: 'Specific'
    nugetPackageVersion: '1.0'
    nugetPackageVersionUseBuildId: true
    requiredVersionOfDlma: 'latest'

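# Test: create the release artifact and reports for the Test database, then deploy from them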
- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Test'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Test'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Test'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Test'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

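# Staging: create the release artifact and reports for the Staging database, then deploy from them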
- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Staging'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_STG'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Staging'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_STG'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

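# Before touching Production: generate the schema and data rollback (down) scripts by comparing Production to Staging with the saved SQL Compare / SQL Data Compare projects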
- task: CmdLine@2
  inputs: 
    script: '"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDownScript.scp" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Data Compare 14\sqldatacompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDataDownScript.sdc" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\DataDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
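# Production: create the release artifact and reports for the Production database, then deploy from them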
- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Production'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Production'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Production'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Production'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

Reusable Schema Deployments with SQL Source Control

“That’s who you really like. The people you can think out loud in front of.”
John Green

Can we all take a moment to appreciate how awesome Kendra Little is? No really, go on over to her Twitter or something and remind her. Because not only is she a genius but she brings out the best in a lot of folk, and I don’t think she gets enough credit – so my quote above today is for her!

This is one of those times, though, where we stumbled on an idea, fleshed it out together, and – thanks to her ingenuity and straight-up desire to help – ended up with a full-on video about it! Thank you Kendra! So if you don’t want to sit here and read about Reusable Schema Deployments, take a look at the video below instead, where we cover everything from the key differences between State and Migrations to what a filter file is and how to use YAML in Azure DevOps!

For those of you who prefer a nice read, grab a coffee or tea and a biscuit (cookie) and read on!

The Problem

We find ourselves in the unenviable situation where we have a Production database that is delivered to customers to support the different applications that we provide to them. When a new customer comes on board and chooses any number of our products, we deliver those products along with a copy of the database containing the set of objects that are specific to their setup.

Example. We produce 3 applications for our customers; Beep, Derp and Doink. In the database that we supply with these applications, we have corresponding schemas ‘Beep’, ‘Derp’ and ‘Doink’, as well as ‘dbo’ which holds a number of objects common across all instances of the database.

The question then is: “How do we deploy only the combinations of these schemas to the relevant customers AND make sure there is as little down time as possible?”

I mean, besides magic, of course!


Solution 1: Less Work, More Schema

There’s a reason why, when you buy a new ERP or CRM system for your company, many times you will receive ALL of the schema! Just bought in a new system to help manage your General Ledger, Accounts Payable and Accounts Receivable, and those are the only modules you’re going to use?

Whooooo-boy-howdy you better believe you’re going to get schema objects for Asset Management, Billing and Risk Management too!

The reason for this is that it is much easier to deliver. It is a single package that needs to be deployed everywhere and if the customer already has the relevant objects in the database then it is MUCH easier to just turn on corresponding application functionality that starts to populate and use those objects!

The problem is, if EVERY customer gets EVERY object change across EVERY schema… well, that would be a lot of changes, and potentially some quite big changes, that could impact our customers – perhaps unnecessarily. This could easily be an argument for the migrations approach to source controlling and deploying changes, but that’s one for another day!

Solution 2: You get a filter, and you get a filter, EVERYBODY gets filters!


In the state-based way of deploying we can actually use filter files (.scpf), which allow us to filter at the point of creating a database release artifact. This is a game changer, because it means we can have the convenience of developing against a single source database in Dev and source controlling ALL object changes together, and it’s only once we actually get to the point of deploying to the customer that we include the filter file in the Create Release Artifact step, to include ONLY the schema objects that are relevant to them.

Now this is also a great way of doing it, because it means that everything in source control is still our single source of truth, we’re able to validate that everything builds together, and we can run unit tests broadly against all objects etc. However, it does also mean that we have to maintain separate filter files either for every customer, or for every combination of our schemas that a customer could receive, and update them as and when people add or remove certain applications. It also doesn’t give us any validation that THIS particular release artifact created for that customer actually works independently of the rest of the schema objects – so we’re deploying something that hasn’t actually been tested in isolation first!

Finally, the secondary problem with this approach is that it is SLOOOOOOW. Like, super slow. This is because the heaviest part of the state-based database deployment process is the creation of the release artifact – determining which changes should be included in the release that is going out of the door – and that work is being carried out, and adding overhead, independently for every. single. customer. Not fun.

Solution 3: Reduce, Reuse, Recycle.

If we take a step back from this problem and look at exactly what we’re trying to do, it boils down to this: we want to deliver ONLY the necessary changes to a particular schema in the database that supports that specific application.

But this means that there is a commonality across customers – if, for example, we assume that we have 30 customers with a variation of the Beep schema (I’m going to ignore dbo for now because everybody has that), they may also have Derp, or Doink, or no other schemas, but the point is that all 30 of those customers will require the exact same updates to their Beep schema, full stop.

This means, if we can generate a single artifact once, that can be used for all 30 customers receiving this schema, or indeed ANY schema, then we can:

a) Reduce the amount of comparisons taking place to create release artifacts
b) Reuse and Recycle the same release artifacts for multiple customers
c) TEST these artifacts before they actually go out the door!

This is effectively achieved by adding an additional layer on top of the development and deployment process:

An additional step is introduced to produce a single reusable artifact prior to the final Prod deployment. Pre-Prod all receives the same package, which contains every object regardless of schema; however, when a Production release needs to go out, the release artifact is built against the Beep database (which, in this case, only has the Beep and dbo schemas), so the pain of creating the artifact sits outside of the customer environment AND happens only once, allowing us to distribute that change to any customer who may require it to upgrade their Beep schema.

The same is done for each schema in turn, which means we then deploy the fast, reusable artifacts, and the only process change required is the step immediately before deploying to Production – almost as if the independent databases were an exact mirror.
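
To make that extra layer a little more concrete, here is a very rough sketch of what the “create the artifact once, against the schema mirror” step could look like in an Azure DevOps YAML pipeline using the SQL Change Automation PowerShell cmdlets. The database names, paths and in particular the -FilterPath parameter are assumptions on my part, so treat this purely as an illustration and check Kendra’s post (linked at the end) and the documentation for the real thing:

# Hypothetical sketch: build ONE filtered release artifact per schema against its Pre-Prod mirror, then reuse it for every customer on that schema
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # All names and paths below are made up for illustration
      $beepMirror = New-DatabaseConnection -ServerInstance ".\TOOLS" -Database "Beep_PreProd"
      # Compare the full build artifact (every schema in version control) to the Beep mirror,
      # filtering the release down to just the Beep + dbo objects
      $release = New-DatabaseReleaseArtifact -Source "$(Build.ArtifactStagingDirectory)\Database.nupkg" -Target $beepMirror -FilterPath "Filters\BeepOnly.scpf"
      # Export it once; the same artifact can then be deployed to every customer that has the Beep schema
      Export-DatabaseReleaseArtifact $release -Path "C:\ReleaseArtifacts\$(Build.BuildId)\Beep" -Format Folder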

Don’t get me wrong there are challenges with this model as well.

What if we want to deploy a completely new database with a set number of schemas? Well, you may have to do an ad-hoc deployment, or add an additional process to the pile that does that creation for you!

What if we create versions 10.4.1 and 10.4.2, but these only make it as far as one of these Pre-Prod mirrors, and then we want to push 10.4.3 to customer Production environments? Those customers will no longer receive the artifacts from .1 and .2, only .3! Here you will have to create the specific filtered artifacts only just prior to deploying to Production, so that EVERY change is captured. In the video above I used a Golden DB which had every deployment on it, and used this to test my schema-specific deployments prior to deployment – but it depends on what setup you want to adopt.

Conclusion

Filters are incredibly powerful, and if you have subtle differences in hosted environments across your customers, or even across your own DBs, they can be an ingenious way of keeping all core objects AND key variations within Dev, version control and Pre-Production, while making sure that each target ONLY receives what it needs.

But be aware, subtle variations can snowball and you do have to be careful how you handle them as it is not very easy to scale to multiple customers. Fortunately in this scenario 1 schema was mapped to 1 application being delivered which makes it easy to determine who gets what, but the more differences you have, the harder they will be to continuously integrate and deliver.

And FINALLY (that’s right I’m going to stop talking), if you want to read more about this model from Kendra’s perspective, she also wrote about it! You can read her account here: https://littlekendra.com/2020/04/08/make-database-code-reusable-in-sql-source-control-with-deployment-filters/

Have a great week, stay safe and stay well!