“That’s who you really like. The people you can think out loud in front of.”
– John Green
Can we all take a moment to appreciate how awesome Kendra Little is? No really, go on over to her Twitter or something and remind her. Because not only is she a genius but she brings out the best in a lot of folk, and I don’t think she gets enough credit – so my quote above today is for her!
This is one of those times, though, where we stumbled on an idea, fleshed it out together, and thanks to her ingenuity and straight-up desire to help we ended up with a full-on video about it! Thank you Kendra! So if you don’t want to sit here and read about Reusable Schema Deployments, take a look at the video below instead, where we cover everything from the key differences between State and Migrations, to what a filter file is, to how to use YAML in Azure DevOps!
For those of you who prefer a nice read, grab a coffee or tea and a biscuit (cookie) and read on!
The Problem
We find ourselves in the unenviable situation of having a Production database that is delivered to customers to support the different applications we provide. When a new customer comes on board and chooses any number of our products, we deliver those products along with a copy of the database containing the set of objects specific to their setup.
Example: we produce 3 applications for our customers; Beep, Derp and Doink. In the database we supply with these applications, we have corresponding schemas ‘Beep’, ‘Derp’ and ‘Doink’, as well as ‘dbo’, which holds a number of objects common across all instances of the database.
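To make the shape of the problem concrete, here is a tiny Python sketch of that product-to-schema mapping (the product and schema names come from the example above; the function is hypothetical, purely for illustration):

```python
# Shared objects that every instance of the database receives.
COMMON_SCHEMAS = {"dbo"}

# Each product a customer buys corresponds to one schema in their database.
PRODUCT_SCHEMAS = {
    "Beep": "Beep",
    "Derp": "Derp",
    "Doink": "Doink",
}

def schemas_for(products):
    """Return the set of schemas a customer's copy of the database should contain."""
    return COMMON_SCHEMAS | {PRODUCT_SCHEMAS[p] for p in products}

# A customer who bought Beep and Doink gets those two schemas plus dbo.
print(schemas_for(["Beep", "Doink"]))
```

So the deployment question boils down to: how do we ship changes for exactly this set of schemas, and no more?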
The question then is: “How do we deploy only the combinations of these schemas to the relevant customers AND make sure there is as little down time as possible?”
I mean, besides magic, of course!

Solution 1: Less Work, More Schema
There’s a reason why, when you buy a new ERP or CRM system for your company, many times you will receive ALL of the schema! Just bought in a new system to help manage your General Ledger, Accounts Payable and Accounts Receivable, and those are the only modules you’re going to use?
Whooooo-boy-howdy you better believe you’re going to get schema objects for Asset Management, Billing and Risk Management too!
The reason for this is that it is much easier to deliver. It is a single package that needs to be deployed everywhere and if the customer already has the relevant objects in the database then it is MUCH easier to just turn on corresponding application functionality that starts to populate and use those objects!
The problem is, if EVERY customer gets EVERY object change across EVERY schema… well, that’s a lot of changes, and potentially some quite big ones that could impact our customers unnecessarily. This could easily be an argument for the migrations approach to source controlling and deploying changes, but that’s one for another day!
Solution 2: You get a filter, and you get a filter, EVERYBODY gets filters!

In the state-based way of deploying we can actually use filter files (.scpf), which allow us to filter at the point of creating a database release artifact. This is a game changer, because it means we can have the convenience of developing against a single source database in Dev and source controlling ALL object changes together, and it’s only once we actually get to the point of deploying to a customer that we include the filter file in the Create Release Artifact step, to include ONLY the schema objects that are relevant to them.
Now this is also a great way of doing it, because everything in source control remains our single source of truth: we’re able to validate that everything builds together, and we can run unit tests broadly against all objects. However, it does also mean that we have to maintain separate filter files, either one per customer or one per combination of schemas a customer could receive, and update them as and when people add or remove applications. It also doesn’t give us any validation that THIS particular release artifact, created for that customer, actually works independently of the rest of the schema objects, so we’re deploying something that hasn’t been tested in isolation first!
The other problem with this approach is that it is SLOOOOOOW. Like, super slow. The heaviest part of the state-based database deployment process is the creation of the release artifact (determining what changes should be included in the release going out of the door), and that overhead is being incurred independently for every. single. customer. Not fun.
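The selection logic a filter file performs can be sketched in plain Python. To be clear, a real .scpf file is a Redgate SQL Compare XML filter consumed at artifact-creation time; this toy version, with made-up object names, just shows the idea of keeping only the schemas a given customer should receive:

```python
# Every object in source control lives in a schema.
# (schema, object_name) pairs; the names here are invented for illustration.
ALL_OBJECTS = [
    ("dbo", "usp_Audit"),
    ("Beep", "tbl_BeepOrders"),
    ("Derp", "tbl_DerpLedger"),
    ("Doink", "vw_DoinkSummary"),
]

def build_release_artifact(customer_schemas):
    """Keep only the objects whose schema this customer is entitled to,
    always including the shared dbo objects."""
    wanted = set(customer_schemas) | {"dbo"}
    return [obj for obj in ALL_OBJECTS if obj[0] in wanted]

# A customer with only the Beep application gets dbo + Beep objects.
print(build_release_artifact({"Beep"}))
```

The catch, as described above, is that in the per-customer model this filtering (and the expensive comparison behind it) runs once per customer, and the filtered result is never tested as a standalone unit before it ships.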
Solution 3: Reduce, Reuse, Recycle.
If we take a step back from this problem and look at exactly what we’re trying to achieve, we want to deliver ONLY the necessary changes to a particular schema in the database that supports that specific application.
But this means there is a commonality across customers. If, for example, we assume we have 30 customers with some variation of the Beep schema (I’m going to ignore dbo for now because everybody has that), they may also have Derp, or Doink, or no other schemas, but the point is that all 30 of those customers will require the exact same updates to their Beep schema, full stop.
This means, if we can generate a single artifact once, that can be used for all 30 customers receiving this schema, or indeed ANY schema, then we can:
a) Reduce the amount of comparisons taking place to create release artifacts
b) Reuse and Recycle the same release artifacts for multiple customers
c) TEST these artifacts before they actually go out the door!
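Point (a) is easy to put rough numbers on. With 30 customers and 3 product schemas, per-customer filtering means one expensive comparison per customer, while per-schema artifacts need only one comparison per schema (a small back-of-the-envelope calculation, nothing to do with the Redgate tooling itself):

```python
customers = 30
schemas = ["Beep", "Derp", "Doink"]

# Solution 2: one release-artifact comparison per customer.
comparisons_per_customer_filtering = customers

# Solution 3: one release-artifact comparison per schema, reused by everyone.
comparisons_per_schema_artifacts = len(schemas)

print(comparisons_per_customer_filtering)  # 30
print(comparisons_per_schema_artifacts)    # 3
```

The gap only widens as the customer count grows, since the per-schema cost stays fixed.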
This is effectively achieved by adding an additional layer on top of the development and deployment process:


An additional step is introduced to produce a single reusable artifact prior to the final Prod deployment. Pre-Prod still receives the same package containing every object regardless of schema; however, when a Production release needs to go out, the release artifact is built against the Beep database (in this case, which only has the Beep and dbo schemas). The pain of creating the artifact therefore sits outside the customer environment AND happens only once, allowing us to distribute that change to any customer who may require it to upgrade their Beep schema.
The same is done for each schema in turn, which means we then deploy the fast, reusable artifacts. The only process change required is the step immediately before deploying to Production, where the independent schema-specific databases act almost like exact mirrors of what each customer should receive.
Don’t get me wrong there are challenges with this model as well.
What if we want to deploy a completely new database with a set number of schemas? Well, you may have to do an ad-hoc deployment, or add an additional process to the pile which handles that creation for you!
What if we create versions 10.4.1 and 10.4.2, but these only make it as far as one of these Pre-Prod mirrors, and then we want to push 10.4.3 to customer Production environments? The customers will no longer receive the artifacts from .1 and .2, only .3! Here you will have to create the specific filtered artifacts only immediately prior to deploying to Production, so that EVERY change is captured. In the video above I used a Golden DB which had every deployment applied to it, and used this to test my schema-specific deployments prior to deployment, but it depends on what setup you want to adopt.
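A toy model makes the versioning pitfall easier to see: replaying only the newest per-version artifact misses anything the customer skipped, whereas a state-based comparison against the customer's current state picks up everything outstanding. The version contents below are invented purely to illustrate this:

```python
# Toy model: each release introduces some changes to the Beep schema.
changes_by_version = {
    "10.4.1": ["add column Beep.tbl_BeepOrders.Region"],
    "10.4.2": ["add index on Beep.tbl_BeepOrders"],
    "10.4.3": ["alter view Beep.vw_BeepTotals"],
}

def latest_artifact_only(version):
    """Shipping only the newest version's artifact misses earlier changes."""
    return changes_by_version[version]

def state_based_diff(customer_version, target_version):
    """Comparing the desired state to the customer's current state
    captures every change they have not yet received."""
    pending = [v for v in sorted(changes_by_version)
               if customer_version < v <= target_version]
    return [change for v in pending for change in changes_by_version[v]]

# A customer still on 10.4.0 being taken to 10.4.3:
print(len(latest_artifact_only("10.4.3")))        # 1 change: .1 and .2 are lost
print(len(state_based_diff("10.4.0", "10.4.3")))  # 3 changes: nothing is lost
```

This is exactly why the filtered artifact has to be built just before the Production deployment (or against something like the Golden DB), so the comparison sees the full cumulative state rather than a single version's delta.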
Conclusion
Filters are incredibly powerful. If you have subtle differences in hosted environments across your customers, or even across your own DBs, they can be an ingenious way to keep all core objects AND key variations in Dev, version control and Pre-Production, while making sure that each target ONLY receives what it needs.
But be aware: subtle variations can snowball, and you do have to be careful how you handle them, as they are not easy to scale across multiple customers. Fortunately, in this scenario one schema mapped to one application being delivered, which makes it easy to determine who gets what, but the more differences you have, the harder they will be to continuously integrate and deliver.
And FINALLY (that’s right I’m going to stop talking), if you want to read more about this model from Kendra’s perspective, she also wrote about it! You can read her account here: https://littlekendra.com/2020/04/08/make-database-code-reusable-in-sql-source-control-with-deployment-filters/
Have a great week, stay safe and stay well!