Moving from Redgate SQL Source Control pipelines to Flyway Desktop with Redgate Deploy

“Like all magnificent things, it’s very simple.”
Natalie Babbitt

There has been a lot of change over the years in the Redgate solutions – I hasten to add this is a good thing. Back in my day it was SQL Source Control to store your database in version control; at the time it was probably a 50/50 split between people who used Git and people who used other systems like SVN, TFVC (TFS/VSTS), Vault or Mercurial, and you could then use DLM Automation to build and deploy this state-based database project to Test, Prod and so on.

SQL Source Control and DLM Automation (later SQL Change Automation) have formed the basis for many a pipeline for many, many years, and they have been reliable, in some cases life-changing, for those who have used them… but the times, they are a-changing!

These technologies are still a great option and are still present in Redgate Deploy for those they work well for, however with the rise of ever more distributed computing topologies, and the dominance of cloud-hosted architecture and PaaS databases in today’s world, something new is needed.

Enter Flyway Desktop.

As you’ve seen in some of my previous posts, Flyway Desktop is really really easy to get up and running with, not only that but it combines the State and Migrations models together creating one repo with ALL the benefits, and none of the deciding which model is best for you. It was architected from the ground up to be 3 things:

  • Ingeniously simple: to set up, to use, to everything.
  • Cloud ready: designed for use with IaaS and PaaS database options
  • A combination of the best of the best: all of the benefits of previous Redgate solutions, few to none of the drawbacks

...but what if you’re already using Redgate?

Yes Flyway Desktop and Redgate Deploy in general are super easy to get up and running with for new databases, even difficult, monolithic databases (thank you Clone as shadow!), but what about projects you already have under source control? Like I mentioned, SQL Source Control has been around for years and is beloved by many, and SQL Change Automation is still in use by thousands too. We want to maintain the history of our changes for reference, and we don’t want to simply disregard the whole pipeline. So the big question is how do we upgrade our state-based pipeline? Let’s find out together!

Note: This post is for people who want to or are interested in moving to a newer solution (and to give them an idea of what to expect) and in no way reflects any level of urgency you should be feeling – I’m certainly not pushing you to move any of your pipelines now, especially if you’re happy with what you have!

Setup

For starters I set up an end-to-end SQL Source Control and SQL Change Automation pipeline in Azure DevOps – my understanding of the approach I’m going to take is that it should work wherever your pipeline is (TeamCity & Octopus Deploy, Bamboo, whatever), so don’t feel that this post is not for you just because I used Azure DevOps.

I set up a copy of the DMDatabase on my local SQL Developer Instance, and then created an Azure DevOps repo and cloned it down to my machine:

I linked my database to the repo, created a filter to exclude users and committed everything to the repo – then I set up the YAML for the build, and the Release steps for SQL Change Automation:

My SQL Source Control Project in Azure DevOps (Git)
The YAML to build my SQL Source Control Project
Release Steps in Azure DevOps
Deployment Steps

Everything seems to be deploying OK – I’ve even set up an Azure SQL Database as the target for my database changes. Now that we have this SQL Source Control -> SQL Change Automation pipeline running, let’s investigate replacing it.

SQL Source Control

The first thing I did was to open Flyway Desktop and create a new project – I pointed the project at my Dev DB and at the same local repo that I host my SQL Source Control files in:

and without committing the state to my schema-model folder, only linking to the Dev database, we end up with our repo looking like this:

I’m going to delete the Redgate.ssc file, because we’re no longer in SQL Source Control and I’m going to move every other file to the schema-model folder that is now under my project name (DMDatabase) – full on Copy Paste style:
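To make that concrete, here is a rough sketch of the before and after, assuming a fairly typical SQL Source Control layout with object-type folders (yours may contain different folders, and Flyway Desktop will also add its own project files alongside):

Before – SQL Source Control:

Database/
    Redgate.ssc
    Tables/
    Views/
    Stored Procedures/
    ...

After – Flyway Desktop:

Database/
    DMDatabase/
        schema-model/
            Tables/
            Views/
            Stored Procedures/
            ...
        migrations/    (appears later, once we start generating migrations)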

…and then hit refresh in the Schema Model tab of Flyway Desktop:

and… nothing should happen. Absolutely nothing, because the state of your project (the schema-model folder) should now exactly match the state of your development database (assuming you had everything committed to SQL Source Control!) – so now we come across to the version control tab aaaand…

WAIT!

If we commit now it will break our CI build, because when we trigger with a new push, my YAML will be expecting $(Pipeline.Workspace)/s/Database as the input, but now that we have a slightly altered project we want to build from a slightly different path. I’m going to temporarily disable my CI trigger in the YAML pipeline:

and now I’m going to Pull (to get the YAML file in my local repo) and then commit and push my changes:

Now I’m going to change my build YAML file to $(Pipeline.Workspace)/s/Database/DMDatabase/schema-model then save and re-enable Continuous Integration:
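For reference, the two YAML changes look something like this – treat the variable name as a placeholder, since the exact task and input your SQL Change Automation build uses will depend on how it was originally defined:

# temporarily disable the CI trigger while the repo is being restructured
trigger: none

variables:
  # placeholder name – point the build at the Flyway Desktop schema model
  # old value: $(Pipeline.Workspace)/s/Database
  databaseFolderPath: '$(Pipeline.Workspace)/s/Database/DMDatabase/schema-model'

# once the restructured repo is committed and pushed, re-enable CI:
# trigger:
# - main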

et voila!

SQL Change Automation sees it as a regular state based repo and builds and deploys it with no issues whatsoever:

and just like that, SQL Source Control is replaced – our teams can now pull down the latest copy of the repo with the Flyway Desktop project in it and open it. All they will need to do is re-specify their Dev database connection. If you are only using SQL Source Control, or you’re using SQL Source Control with the SQL Compare GUI for more manual deployments, then you’re done! When you want to extend your pipeline, read on below.

SQL Change Automation

This is the step where we have to fundamentally change the way the pipeline works. It’s easy to switch across from SQL Source Control to Flyway Desktop, which means we get immediate upgrades in speed, reliability and stability in our development process, especially where we’re working with cloud-hosted databases.

With Redgate Deploy though, we’re fundamentally leveraging the Flyway command line capability for smooth, incremental deployments, and this is always a migrations-only deployment – to move across to using Flyway we’re going to need to make a few alterations to how the pipeline works.

First things first: we need some migrations, more specifically THE migration. When you create a Flyway Desktop project, you usually create a Baseline script. This script is the state of your Production environment(s), or a copy of them, and essentially acts as the starting point for your incremental migration scripts in the pipeline. The Baseline, once generated, is run against an empty database referred to in Flyway Desktop as the Shadow Database, although this can of course be a Clone too. Not every developer necessarily needs this – only the ones who will be generating the deployable artifacts, the migrations themselves, and putting them into source control – but the baseline is definitely needed for deployments.

Note: I have some clients I’m working with who want every developer to make schema changes, immediately generate the migration for them and share it with the team, but equally I have others who want 10 or so developers to share the responsibility of schema changes, and then once they’ve reviewed at the end of a sprint, they generate the migration for the changes, source control it and approve it.

So in Flyway Desktop we set up our erasable database, our Shadow DB:

I use an empty database I stood up quickly in the Azure Portal:

and on the Generate Migrations tab I’m now prompted to create a baseline script:

I’m going to create the Baseline from my “Prod” environment that I’ve been using for my SQL Source Control deployments and hit baseline:

When you save and finish this will now run the baseline against the Shadow DB to recreate everything – and this is going to give you a chance to detect any changes you still have outstanding in the schema model – Flyway Desktop will compare the environments and detect any outstanding Dev changes, allowing you to also produce a migration for them.

Note: If your plan is to use this process to capture any outstanding code in a V002 “Delta” script to bring all environments back into line, you absolutely can, but I would advise you to make the script idempotent – if you add all the necessary IF EXISTS checks to the deployment, you should be OK, and it will only create or alter the objects that need to be changed in order to sync all the environments up.

First, pull any pending changes from your repo, then commit and push this into your Git remote:

and it should look a little like this:

Now for second-things-second, the build. This is actually going to be a very simple step, perhaps the easiest to change. We’re already using YAML, and as you know from previous posts it’s really very easy to leverage the Flyway command line as part of your YAML pipeline, so I’m going to simply swap out the SQL Change Automation build YAML with an updated version of the Flyway YAML from that post:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'
 
steps:
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'
  displayName: 'Install Docker'
 
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway clean -url=$(JDBC) -user=$(userName) -password=$(password)
  displayName: 'Clean build schema'
 
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway migrate -url=$(JDBC) -user=$(userName) -password=$(password)
  displayName: 'Run flyway for build'

My password and username I shall hold back, but the JDBC connection variable needs to be encapsulated in quotes, to prevent the command being escaped or only running partially because of the semi-colon:

"jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build"

and the locations variable was my newly created migrations folder:

$(Pipeline.Workspace)/s/Database/DMDatabase/migrations
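These are referenced as $(…) pipeline variables; if you wanted to define the non-secret ones in the YAML itself instead, it would look something like this (keep the username and password as secret pipeline variables, not in source):

variables:
  # the quotes are part of the value, so the semi-colon isn't treated as a command
  # separator when $(JDBC) is expanded inside the bash script
  JDBC: '"jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build"'
  locations: '$(Pipeline.Workspace)/s/Database/DMDatabase/migrations'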

Fortunately these few changes mean that I now have a green build where I’m cleaning my Build DB and then building all of my files from there:

Deploying to Production is the only thing left. There’s a decision to be made here – because we’re just invoking the Flyway Docker Container, and we already have the YAML pipeline set up for the build we can:

  • As part of the build, zip up the migrations from the repo and publish them as an artifact, which we can then hand off to the Release portion of Azure DevOps, or indeed any other solution such as Octopus Deploy and run Flyway command line from there
  • OR we can simply expand out the YAML file – discard the “Release” pipeline and go FULL pipeline as code (which is also easier to audit changes on).

Given that we’re modernizing our deployment pipeline and introducing lean deployments of these incremental migration scripts, I’m opting for the latter, so I disable and archive my Release pipeline specifically and simply expand my YAML file with an additional step and an additional variable for the ProdJDBC instead of the Build DB:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run -v $(locations):/flyway/sql flyway/flyway migrate -url=$(ProdJDBC) -user=$(userName) -password=$(password) -baselineOnMigrate=true -baselineVersion=001.20211210091210
  displayName: 'Deploy to Prod'

and of course in that YAML not forgetting the all-important -baselineOnMigrate and -baselineVersion switches (which I’ve always been forgetting) – these are important because we’ll be marking the baseline script as deployed against our target and not actually running the baseline script – we don’t want to try to recreate all of the objects that already exist there.

This is the result:

Successful deployment to Prod, successful move to Flyway Desktop

Pre- and Post- Deployment Scripts

You might leverage pre- and post-deployment scripts in your SQL Source Control pipeline – something that has to happen each time before or after a deployment. If you want to maintain these in your new repo going forwards you’ll need to make use of Flyway’s callback functionality: take your pre-deployment scripts and turn them into a beforeMigrate callback, and turn your post-deployment scripts into an afterMigrate callback. These can sit in your migrations folder (there’s an example layout after this list), but:

  1. You may not need these now – because you have access to the migrations-first deployment model, most changes can now be tailor-made to your deployment needs, such as injecting DML statements in with your DDL scripts
  2. They will also run every time against your Shadow DB when you generate a new migration – just something to be aware of.
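As a rough illustration (the descriptions after the double underscore are just examples, and your baseline’s exact file name will differ), a migrations folder with callbacks might look like this:

migrations/
    V001_20211210091210__baseline.sql
    beforeMigrate__pre_deployment.sql     (runs before every migrate, including against the Shadow DB)
    afterMigrate__post_deployment.sql     (runs after every migrate)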

Final Word

It was much, much easier than I thought it would be to move across, but I by no means believe that this will be as easy for everyone who needs or wants to move in the medium-to-long term. I am always an advocate of testing things out prior to setting them up in earnest, and would encourage you to try this workflow out for yourself first, perhaps in tandem with your SQL Source Control pipeline against a dummy Prod DB temporarily, to see how comfortable your team is with the process, and to give yourself the time to ask the questions you might have.

3 simple pipelines for database development with Redgate Deploy – Part 3: CircleCI

“There is no place to reach… only places to rest to carry on.”
Jaya Bhateja

SPOILER ALERT – This is part 3 of a 3 part series on enabling database deployments using Redgate Deploy, so if you have not read at least the Setup and Principles section of my previous post (Part 1 which you can find here, and if you’re interested Part 2 here for GitHub Actions) then I would strongly advise you do so! Thanks!

In my setup post we managed to get 3 Flyway Desktop repositories set up: 1 for each CICD system we’ll be using, and a number of Azure SQL Databases to use as “Dev”, “Build”, “PROD” etc. – I have never used CircleCI before, so this will be a new experience as I try to figure it out at the same time as setting up a database deployment pipeline… but just to recap the principles of what we’re trying to achieve:

Principles

I’m setting up 3 separate pipelines in this series which will all effectively do the same thing, but for different “Prod” copies of databases; however, when building and deploying in practice you will have a number of tasks you will want to accomplish in and around the process itself (such as really useful things like Unit Tests, Code Analysis etc.). To keep things simple I will be creating a 6th database – the “Build” database – which will act as our CI validation step. Our process for all 3 pipelines will be:

  • Invoking a Flyway Clean against the “Build” database – this step will remove every object on the database, leaving it empty
  • Invoking a Flyway Migrate against the “Build” database – this step will build the database from scratch to validate that our baseline script and any further migrations build successfully
  • Invoking a further Flyway Migrate against our respective “Prod” database, to deploy the latest scripts we have generated.

CircleCI

Ok I made my way into CircleCI and it was really easy to get up and running with (the free tier that is) and OHMYGOSH will you look at this sleek beauty:

So far so good – CircleCI seems to be even easier to understand so far than GitLab (and CONSIDERABLY easier than GitHub Actions) – I’m sure there are a lot of major differences (and GitLab was really easy to use) but I’m hoping for a similar experience here by the looks of it!

I create a new project pipeline where it asks me to select a repo for this “project”:

So I hit “Set UP Project” and then “build my own yml script” – now you would think this might just give me a blank script but no, just like GitLab they give us the option of a starter pipe:

I’m going to go ahead and choose the “Hello World” pipeline because normally that’s the easiest to cannibalize!

Much like GitLab it has an indicator to let us know whether our YAML is valid or not (I’m looking at YOU Azure DevOps!!!!) which is a massive help, and in general it’s just pretty easy to see what each step is doing. I built out an example YAML file using similar commands to my GitLab pipeline like so:

version: 2.1
parameters:
  ciJDBC:
    type: string
    default: jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build
  prodJDBC:
    type: string
    default: jdbc:sqlserver://dmproduction.database.windows.net:1433;database=DMDatabase_PROD_CircleCI
  userName:
    type: string
    default: username
  password:
    type: string
    default: password
  migrationPath:
    type: string
    default: .\

jobs:
  clean:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Clean Build Database"
          command: "flyway clean -url=${ciJDBC} -user=${userName} -password=${password} -locations=filesystem:${migrationPath}"

  build:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Migrate to Build Database"
          command: "flyway migrate -url=${ciJDBC} -user=${userName} -password=${password} -locations=filesystem:${migrationPath}"

  deploy:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Deploy to Prod"
          command: "flyway migrate -url=${prodJDBC} -user=${userName} -password=${password} -locations=filesystem:${migrationPath}"

workflows:
  database-deploy-workflow:
    jobs:
      - clean
      - build
      - deploy

and also used the variables reference from the CircleCI documentation which was pretty helpful. But it resulted in this:

Turns out I made a few boo-boos along the way. I was passing variables in like this: ${Variable}, but Circle only really seemed to like it when I used << pipeline.parameters.variable >>, because I had defined them at the beginning of the YAML file under parameters.
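So each run command needed to change to something like this (the quote marks around the JDBC URL come a little later in the story):

command: flyway clean -url=<< pipeline.parameters.ciJDBC >> -user=<< pipeline.parameters.userName >> -password=<< pipeline.parameters.password >> -locations=filesystem:<< pipeline.parameters.migrationPath >>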

I also had the jobs running in parallel because I hadn’t defined in my workflow which steps were dependent on which – a lesson I SHOULD really have remembered from GitHub… but oh well. I corrected that:

workflows:
  database-deploy-workflow:
    jobs:
      - clean
      - build:
          requires: 
            - clean
      - deploy:
          requires: 
            - build

Interestingly, everything was still failing and, although everything was being passed through correctly, the only thing that was ACTUALLY making it to the Flyway Docker container was the first part of the JDBC connection string:

Guess what? I had my quote marks in the wrong place.

destroy GIF

It’s ok though because 2 other things failed:

  1. The Prod deploy failed because it found a non-empty schema, a problem I seem to fall over EVERY SINGLE TIME, but which is easily remedied by providing the 2 switches to the Prod deployment: -baselineOnMigrate=true and -baselineVersion=[YourBaselineScriptVersion]
  2. The filepath specified wasn’t a valid path:

Yes, before anyone tells me I know my top level repo folder is still called “GitLab-Flyway“, I figured that out in the last post and I’m still face-palming. So I’m going to quickly alter the Prod Flyway migrate command and then play around with the filesystem locations first to see if I can find a value it likes…

Oh. It just needed a “.”… neat. Well here is the finished YAML that seems to work a treat:

version: 2.1
parameters:
  ciJDBC:
    type: string
    default: "jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build"
  prodJDBC:
    type: string
    default: "jdbc:sqlserver://dmproduction.database.windows.net:1433;database=DMDatabase_PROD_CircleCI"
  userName:
    type: string
    default: "username"
  password:
    type: string
    default: "password"
  migrationPath:
    type: string
    default: "./GitLab-Flyway/migrations"

jobs:
  clean:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Clean Build Database"
          command: flyway clean -url="<< pipeline.parameters.ciJDBC >>" -user=<< pipeline.parameters.userName >> -password=<< pipeline.parameters.password >> -locations=filesystem:<< pipeline.parameters.migrationPath >>

  build:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Migrate to Build Database"
          command: flyway migrate -url="<< pipeline.parameters.ciJDBC >>" -user=<< pipeline.parameters.userName >> -password=<< pipeline.parameters.password >> -locations=filesystem:<< pipeline.parameters.migrationPath >>

  deploy:
    docker:
      - image: flyway/flyway:latest-alpine
    steps:
      - checkout
      - run:
          name: "Deploy to Prod"
          command: flyway migrate -url="<< pipeline.parameters.prodJDBC >>" -user=<< pipeline.parameters.userName >> -password=<< pipeline.parameters.password >> -locations=filesystem:<< pipeline.parameters.migrationPath >> -baselineOnMigrate=true -baselineVersion=001.20211130101136

workflows:
  database-deploy-workflow:
    jobs:
      - clean
      - build:
          requires: 
            - clean
      - deploy:
          requires: 
            - build

and we have ourselves one nice, lean CircleCI build and deployment pipeline:

Baseline Script successfully marked as Deployed, and 2nd migration successfully deployed as shown by Flyway_Schema_History table on DMDatabase_Prod_CircleCI

Conclusion

Was the purpose of these three blog posts for me to build 3 perfect pipelines, with impeccable secrets handling, automated testing, code analysis and all the best practices that mean they can all be rolled out into Production deployment pipelines tomorrow with no editing?

No. No way. Far from it.

But the purpose was to prove something else – that it can be done. This is the bare-bones approach to enabling your database pipelines with Redgate Deploy and the Flyway Docker container in 3 different CICD systems – GitLab, GitHub and CircleCI – and what we hoped to observe was that they can all, in fact, be used with Redgate Deploy to deploy schema changes to any of the supported RDBMSs.

That is indeed what we did. Happy Migrating!

Thank you to everyone who has stuck it out through all 3 parts – trust me, I did an awful lot of learning here myself and made COUNTLESS YAML mistakes. Although I don’t class myself as a Level 20 Warlock-slash-CICD-Pipeline-Guru, it has been thoroughly interesting and I hope you can use these posts as the basis for success with your own pipelines! If you do – let me know, I love to hear from anyone who reads my posts!

3 simple pipelines for database development with Redgate Deploy – Part 2: GitHub Actions

“Finding new, relevant quotes for every blog post is hard”
Chris Unwin

SPOILER ALERT – This is part 2 of a 3 part series on enabling database deployments using Redgate Deploy, so if you have not read at least the Setup section of my previous post (Part 1 which you can find here) then I would strongly advise you do so! Thanks!

Welcome to another edition of “Chris stumbles around in the dark until eventually it starts working, but don’t question it because it might stop.” – today we’re taking on GitHub Actions and, unlike GitLab, I’ve never used Actions before. In my previous post we managed to get 3 repositories set up: 1 for each CICD system we’ll be using, and a number of Azure SQL Databases to use as “Dev”, “Build”, “PROD” etc. Just to recap the principles of what we’re trying to achieve though:

Principles

I’m setting up 3 separate pipelines in this series which will all effectively do the same thing, but for different “Prod” copies of databases; however, when building and deploying in practice you will have a number of tasks you will want to accomplish in and around the process itself (such as really useful things like Unit Tests, Code Analysis etc.). To keep things simple I will be creating a 6th database – the “Build” database – which will act as our CI validation step. Our process for all 3 pipelines will be:

  • Invoking a Flyway Clean against the “Build” database – this step will remove every object on the database, leaving it empty
  • Invoking a Flyway Migrate against the “Build” database – this step will build the database from scratch to validate that our baseline script and any further migrations build successfully
  • Invoking a further Flyway Migrate against our respective “Prod” database, to deploy the latest scripts we have generated.

GitHub Actions

Like I said, this is the first time I’ve tried to use Actions all by myself and I’m not 100% sure where to start – I’m familiar with the GitHub interface OBVIOUSLY but at first glance I wouldn’t say it’s immediately obvious… well not to a n00b like me anyway! There’s one little “Actions button” in the center and if you miss it you’ll spend far too long not seeing the wood for the trees!

My GitHub repo with just the Flyway Desktop files in

Well anyway, after some investigation it turns out that GitHub Actions is called that for a reason: because you build Actions into your YAML pipeline. Duh. Not sure how I missed that… but it kind of makes perfect sense, and on top of that it’s quite similar in a way to Azure DevOps or Jenkins plugins really.

Unfortunately though there were slim pickins’ on the GitHub “Marketplace” for Flyway Actions; the closest one, created by joshuaavalon, only migrates and was specifically put together for PostgreSQL deployments:

joshuaavalon’s Action usage example

So I did what any self-respecting person would do… I took the difficult route, forked Joshua’s repo and made the Action generic, so that you could pass in whatever command you wanted to, and changed it to use the latest Alpine version of the Flyway Docker container. I also bumped the version number tag and fixed the docs, which were using an old version of the Action and, by extension, the Flyway container.

Forked repo in GitHub for the Flyway Action

2 things on this:

  1. I would assume that there will be an official Flyway Action at some point in the future, and there’s an argument to be made that using different Flyway commands should be different Actions in themselves — so as a result, I’m not going to publish this Action to the marketplace nor am I going to raise a PR for Joshua’s repo. But it was good to test out the theory.
  2. Thanks to Matt and Andrew here at RG for helping me out with a rather annoying problem involving invoking the correct version – this was solved by them and saved me banging my head against a brick wall for MANY MANY hours.

Anyway once I had the Action itself working (and it turns out it just needs to be sat in a public repo for you to invoke it, which is pretty neat – you don’t HAVE to publish if you don’t want/need to) it was time to move on to properly testing my YAML to build and deploy the contents of GitHub_Flyway. This was what I started off with:

name: CI

on:

  push:
    branches: [ trunk ]
  pull_request:
    branches: [ trunk ]

  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ChrisUnwin/flyway-action@master
        with:
          url: jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build
          user: username
          password: password
          command: clean

…and everything seemed wonderful:

Successful “build” e.g. Flyway Clean in GitHub Actions
Console output of successful Flyway Clean

But then I tried to extend my YAML to also include the Build and Deploy components for the pipeline…

Build failure in GitHub Actions

I have 2 problems:

  1. It turns out it’s not looking at the correct filesystem location (e.g. “.\SQL” doesn’t exist), which is obvious, because I’m a numpty and forgot to change this to point at my “migrations” location instead. So I’ll fix that now.
  2. I forgot to include the Baseline on Migrate and Baseline Migration Version arguments like I did with GitLab. Flyway Desktop generates the baseline, mine being V001_20211130101136__baseline.sql so I need it to baseline (but not run) V001_20211130101136 when Flyway goes to deploy. Otherwise Flyway will, like I experienced, fail the deployment because there are existing objects in the PROD database… obviously.

The solution to #1 is SUPER easy fortunately, but the solution to #2 has me thinking. If I add -baselineOnMigrate=true to the Action, then it will baseline my Build database too, meaning it won’t run the baseline and my build will fail; so I will need to pass an additional argument through my YAML somehow… *sigh* back to the Action.

I also happened to notice, by luck, that all my Actions jobs were running at the same time – but thanks to this blog post from Bryan Hogan I was able to find the neat little “needs” keyword, so that fixed that issue…

So the final version of the Action was:

name: Flyway Pipeline Action
description: Migrate database in Github Action using latest Flyway Docker Container
branding:
  icon: database
inputs:
  url:
    description: The jdbc url to use to connect to the database
    required: true
  user:
    description: The user to use to connect to the database
    required: false
  password:
    description: The password to use to connect to the database
    required: false
  locations:
    description: Comma-separated list of locations to scan recursively for migrations
    required: true
    default: filesystem:./sql
  command:
    description: Which of the Flyway commands you would like Flyway to run
    required: true
    default: migrate
  disableclean:
    description: Would you like clean disabled (true or false)
    required: false
    default: false
  baselineonmigrate:
    description: Would you like to baseline on migrate (true or false)
    required: false
    default: false
  baselineversion:
    description: What is your baseline version
    required: false
    default: 0.0
runs:
  using: docker
  image: docker://flyway/flyway:latest-alpine
  env:
    FLYWAY_URL: ${{ inputs.url }}
    FLYWAY_USER: ${{ inputs.user }}
    FLYWAY_PASSWORD: ${{ inputs.password }}
    FLYWAY_LOCATIONS: ${{ inputs.locations }}
    FLYWAY_CLEAN_DISABLED: ${{ inputs.disableclean }}
    FLYWAY_BASELINE_ON_MIGRATE: ${{ inputs.baselineonmigrate }}
    FLYWAY_BASELINE_VERSION: ${{ inputs.baselineversion }}
  args:
    - ${{ inputs.command }} 

I just re-invented the wheel (which I know you shouldn’t do) but it was super easy – I added the environment variables for Flyway to provide the -baselineOnMigrate and -baselineVersion switches much like I had to in the last post.

I never thought I’d be so happy to see those little green dots:

This resulted from the following pipeline YAML:

name: CI

on:

  push:
    branches: [ trunk ]
  pull_request:
    branches: [ trunk ]

  workflow_dispatch:

jobs:
  clean-build-schema:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ChrisUnwin/flyway-action@master
        with:
          url: jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build
          user: username
          password: password
          command: clean
          
  build:
    needs: clean-build-schema
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ChrisUnwin/flyway-action@master
        with:
          url: jdbc:sqlserver://dmnonproduction.database.windows.net:1433;database=DMDatabase_Build
          user: username
          password: password
          locations: /GitLab-Flyway/migrations
          command: migrate
      
  deploy-to-prod:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: ChrisUnwin/flyway-action@v4
        with:
          url: jdbc:sqlserver://dmproduction.database.windows.net:1433;database=DMDatabase_PROD_GitHub
          user: username
          password: password
          locations: /GitLab-Flyway/migrations
          command: migrate
          disableclean: false
          baselineonmigrate: true
          baselineversion: 001.20211130101136

Can we take a minute to appreciate the fact that I have only just now realized that I accidentally named the top level folder GitLab-Flyway across all 3 of my repos? Well, this pipeline has nothing to do with GitLab, but it works and that’s what I care about!:

DMDatabase_Prod_GitHub successfully migrated to V002

Conclusion

Getting my head around how Actions work as part of the pipeline was definitely one of the hardest aspects of this challenge – however, given that yet again we can simply include the Docker container for Flyway as part of the pipeline, we can very quickly and easily get up and running with Flyway for our Redgate Deploy pipeline; you just need the right YAML for the job.

Automating best practice checks at build time using the SQL Code Analysis cmdline (and failing the build)

“Quality is not an act, it is a habit.”
Aristotle

I’ve always been thoroughly impressed with the static analysis code report that you can get from SQL Change Automation when it creates a Release Artifact prior to deploying upstream, and the fact we can use SQL Prompt to carry out on-the-fly static analysis as we write our T-SQL code in SSMS, but it has always struck me as odd that there doesn’t appear to be a way to include these checks at build time.

This seems like the perfect opportunity to build the DB from scratch (check), run unit tests (check) and check no poor coding practices have been checked in onto our branch (uh… not check?)

Enter SQL Code Analysis!

Don’t know what I’m talking about? I got you: https://documentation.red-gate.com/scg/sql-code-analysis-documentation/code-analysis-for-sql-server-command-line but to be fair I didn’t know this was a thing myself until yesterday!

One of my esteemed colleagues asked this question yesterday and, being me, I just couldn’t wait to have a go – is there a way to include this cmdline as part of an Azure DevOps build and FAIL the build if issues are found?

First things first: How does it work?

I downloaded the command line and it was pretty simple to get my head around. You can use Windows or SQL auth, you can point it at a scripts folder or a live DB, and you can output the results to XML, HTML or the console if you’d like. Simple.

SqlCodeGuard.Cmd.exe /s:localhost\TOOLS /d:DMDatabase2019 /out:helloworld.html
SQL Code Analysis Console Output: 4 issues with the DMDatabase2019 found
Code Analysis HTML Output: 4 issues found with the DMDatabase2019, but much easier to read this time

Next: Wrap it in some PowerShell

I am neither a windows command line nor a PowerShell guru, but my first instinct when I’m going to include something like this in a build or deployment is to use PowerShell. It’s easy and mostly non-confusing to pass variables through the pipeline to PowerShell, easy to customize scripts and include if/else logic and to capture exit codes.

So I wrapped the cmdline call in some PowerShell (take it, it’s all yours!) that made it easier to:

  • Import the XML output generated by Code Analysis
  • Count the number of issues generated
  • Exit with error code 1 (failure) if any issues were found
  • Exit clean with 0 if no issues are found

#Set Path for Code Guard, server/instance, database and output location for XML
$codeGuardPath = "C:\Users\chris.unwin\Downloads\SCG-2019-10-17-11-40-22-46"
$server = "REDACTED"
$database = "REDACTED"
$outLocation = "$codeGuardPath\myoutput.xml"
#$user = "REDACTED"
#$password = "REDACTED"

#Invoke SQL Code Guard against the DB (could be the Build Database)
& "$codeGuardPath\SqlCodeGuard.Cmd.exe" /s:$server /d:$database /out:$outLocation #/u:$user /p:$password

#Import output xml file and count contents
$blah = [xml](Get-Content -Path $outLocation)
$files = $blah.SelectNodes('//file') #Objects with issues
$issues = $blah.SelectNodes('//file/issue') #Total issues themselves

#If number of issues > zero, exit with non-zero exit code and output list of affected objects
if ( $issues.count -gt 0 ) {

    "You have: " + $files.count + " objects, containing a total of: " + $issues.count + " issues."
    $files.fullname
    "Please review the xml output for more information."

    exit 1

}

#Else continue with no issues
else {

    "No code issues discovered."

}

This works like a charm:

Static Analysis Output in PowerShell: 2 objects with 30 issues

Finally: Put it in a pipeline

Unsurprisingly, putting it in a pipeline was the easiest part. I took a pipeline I had that was already running a local Azure DevOps agent in my default pool, made sure SQL Code Analysis was present in the correct directory on that machine and voilà! The build fails if it finds any issues.

N.B. I just stuck the raw PowerShell in the pipeline, you would be better off passing connection and location variables to the PowerShell script using custom Azure DevOps environment (and secret) variables. Oh, and having a better install directory for Code Analysis than Downloads, my bad…
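As a minimal sketch of what that tidier version might look like – the script path here is a placeholder, not what I actually used:

- task: PowerShell@2
  displayName: 'Run SQL Code Analysis'
  inputs:
    targetType: 'filePath'
    # placeholder path – commit the script above into the repo and reference it here
    filePath: 'CodeAnalysis/Invoke-SqlCodeAnalysis.ps1'
  # a non-zero exit code (our 'exit 1' when issues are found) fails this step, and therefore the build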

Example YAML containing the PowerShell step
Code Analysis finds issues, so the PowerShell exits with code 1, causing the build to fail

Time to choose.

There are a couple of things I’ve assumed here – I’m running it locally on a server and running against a database, and that database could be one that I’ve just built during my CI pipeline, absolutely – but you could also run this against a scripts folder / set of scripts, so even if you don’t yet have a full build / deploy process, or you have a different process that works for you – you can still include SQL Code Analysis with fairly minimal overhead! Enjoy!

Creating rollback scripts automatically with SQL Compare and SQL Data Compare in Azure DevOps (migrations approach)

“Life’s under no obligation to give us what we expect.”
Margaret Mitchell

Hello everyone and happy 2021! Wait, what? It’s FEBRUARY!?! Oops.

You may well have noticed I have not been around for a couple of months. That is because I’ve been struggling. A lot. We’ve had some home worries, and with the state of the world playing on our minds constantly, it has never felt right to sit down and blog anything other than some of the recipes I’m working on (purely to get out of my own head!) but this week I was asked about rollbacks when working in the Migrations based approach with SQL Change Automation, and I couldn’t resist writing something.

Before I get into this though, I want to be very clear about my preferences. I don’t believe database changes SHOULD ever be rolled back. Always forward. If for any reason you need to undo a change, either for a disaster or just because you want to and other code is being rolled back, then this should be a task. A rollback branch should be created, the respective changes should be made in dev, and then, using your fabulous, flexible and repeatable pipeline, they should be merged, built, checked, tested and deployed upstream to Prod. This rolls the database forwards and does so in a way that is controlled, auditable and prevents data loss.

Jimmy Fallon Agree GIF

However there MAY be some scenarios where a rollback is unavoidable, for example if you are required to do so immediately, or you’re not sure which of the 3 changes that were just deployed actually caused something to go bang, so you want to blanket undo until you can investigate properly.

With this in mind then, there are a couple of ways of rolling back in the migrations approach that can absolutely be a part of your pipeline, one or two of which I want to try and flesh out in this post. This post would of course be much shorter if we were working with the State-Based approach, as we would simply need to compare a previous artifact with our target and generate a down script, but as we know, the state-based method does indeed come with its own challenges.

  1. Generate down scripts up front – this is an option that is described fairly well in the SQL Change Automation documentation, and may be preferable to some, however this is limited to VS and does definitely include some additional work for developers, and potentially generates quite a lot of scripts that may have to be rationalized
  2. Ad-hoc rollback with SQL Compare – again, this is described in the documentation but they will have to be changes that are then recaptured in the development pipeline in some way, plus this method is ideal for programmable objects but not so much for table changes.

But I think we’re onto something there with a combination of these 2 approaches. Is there a way to use SQL Compare and SQL Data Compare alongside SQL Change Automation, to automatically generate down scripts as artifacts, before each Production deployment? This is what I want to investigate.

Set Up

As usual, I set up an Azure DevOps repo (this time called RollBackPipe) and created myself a set of local databases that I’ve been using to record videos lately so I now have the MaskMePlease database in Dev, Test, Staging and Production flavors. I’m doing this locally on my machine with an Azure DevOps agent I had hanging around anyway – it saved me firing up a bunch of Azure SQL DBs, and I just used SQL Clone.

Implementation

I got the pipeline running straight through, I was simply using YAML for this pipeline as I didn’t want to “faff” around with the Release stages etc. but this could also have been achieved with Classic Pipelines of course.

Ignore the terrible stage names, essentially I was doing 3 things with SQL Change Automation:

  • Database Build: Producing a Nuget Package to represent the successfully building files stored in my Git repo, to consume later in the pipeline
  • Create Release Artifact: This stage looks at the target environments (Test, STG and Production) and creates a number of reports in my local DatabaseDeploymentResources folder for me to review
  • Deploy From Release Artifact: Take the patch script that has been produced and run this against the target DB

So it was time to try and build in the SQL Compare Command Lines. Note if I was doing this on an Azure Pipelines agent I might need to run this as a container or programmatically install the command lines instead, but fortunately they’re just installed on my machine so I could just add a command line stage.

There were actually a couple of ways to do this: I could either pass in the details of the 2 databases completely via the command line (full documentation here), or generate an XML config file to pass in and store in version control, or pre-construct a SQL Compare / Data Compare “Project” and just pass that in. The latter seemed easier, so of course I did that!

I opened up SQL Compare and did a Database to Database comparison for Staging and Production, but in the reverse direction to ensure we create a down script and not an up script!

I then saved this as a project, then using the red SQL Compare logo in the top left I was able to launch SQL Data Compare with the same project, so it loads the settings for you. The reason for doing this is that if we undo anything that was created as part of a migration, then try to redeploy it, it won’t actually recognize those features as missing, because the Migration Log table tracks all the migrations, and it will still be saying that it has been deployed. So we need to compare the MigrationLog tables to re-sync back to the previous state, before we ran the migrations.

However I DID make sure that, before comparing, I only included the __MigrationLog table in the comparison under Tables and Mappings:

I then had to change the Comparison Key being used for the migration log table to [Custom], because the default comparison key includes the date/time executed, so this is what I went for:

I saved this project down with the Compare script:

and then following the rather helpful documentation on the command lines, I was able to make the following calls as part of my pipeline (YAML included):

- task: CmdLine@2
  inputs: 
    script: '"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDownScript.scp" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Data Compare 14\sqldatacompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDataDownScript.sdc" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\DataDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)

and sure enough, having this run after the deployment to staging but before the deployment to Production I was able to get the reports being produced into the artifacts folder with the relevant changes document:

So it was time to try it out! I created a BadTable on Dev and generated the migration for it, and committed it to source control:

This then triggered the pipeline to run:

and produced the correct reports and .sql scripts:

Then I ran these scripts to rollback Production (of course I could have modified them to preserve any data I needed to retain, this actually works best with Programmable objects like sprocs). Then I re-ran the deployment pipeline and the BadTable lives once again!

So does this work with multiple changes too? I created a stored procedure called “deletemeplease” and modified our BadTable to add a “randomthing” column, to see if we could roll those back:

Production after deployment:

Production after rollback:

All I have to do to roll forwards again now is run my pipeline:

and they’re back:


Conclusions

Is it perfectly possible to have a version specific script output as an artifact as part of your Azure DevOps pipelines? Yes it absolutely is.

Is it perfectly possible to rollback changes that you’ve made when taking a Migrations-First approach to database development? Yes it absolutely is.

Would I recommend this approach? No. Absolutely not.

Just because this is possible doesn’t mean it is good process. I’ve been asked about it enough times that I was more curious than anything – however, after actually IMPLEMENTING this process, it is clear to see there are many aspects of it that can fall down, and it makes it clearer now, more so than ever, that the best approach would be either to roll forwards from Dev, through the pipeline, in an automated, auditable and controllable way, or, if we just need to roll back temporarily because of other code being rolled back, to use SQL Compare ad-hoc. This pipeline was very simple, with very few changes, and when I made mistakes I could rectify them easily myself – but with a larger dev team, when under pressure to fix things, and with many changes to consider, this approach can very quickly fall apart.

Long story short, (I believe) we should follow and trust the documentation.

YAML from Azure DevOps:

trigger:
- main

pool: 'default'

steps:
- task: RedgateSqlChangeAutomationBuild@4
  inputs:
    operation: 'Build SQL Change Automation Project'
    sqlProj: 'RollBackPipe\RollBackPipe.sqlproj'
    packageName: 'MaskMePlease.Database'
    tempServerTypeBuild: 'localDB'
    nugetPackageVersionSelector: 'Specific'
    nugetPackageVersion: '1.0'
    nugetPackageVersionUseBuildId: true
    requiredVersionOfDlma: 'latest'

- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Test'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Test'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Test'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Test'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Staging'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_STG'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Staging'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_STG'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

- task: CmdLine@2
  inputs: 
    script: '"C:\Program Files (x86)\Red Gate\SQL Compare 14\sqlcompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDownScript.scp" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\SchemaDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
- task: CmdLine@2
  inputs:
    script: '"C:\Program Files (x86)\Red Gate\SQL Data Compare 14\sqldatacompare.exe" /project:"C:\DatabaseDeploymentResources\MaskMePlease\_ComparisonResources\ProdToStagingDataDownScript.sdc" /scriptfile:"C:\DatabaseDeploymentResources\MaskMePlease\%BuildId%\DataDowngradeScript.sql" /include:Identical'
  env:
    BuildId: $(Build.BuildId)
- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'Create'
    NuGetFile: 'MaskMePlease.Database.1.0.$(Build.BuildID).nupkg'
    ExportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Production'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Production'
    TargetAuthMethod: 'windowsAuth'
    TransactionIsolationLevel: 'serializable'
    AbortOnWarning: 'none'
    requiredVersionOfDlma: 'latest'


- task: RedgateSqlChangeAutomationRelease@4
  inputs:
    Operation: 'DeployFromResources'
    ImportPath: 'C:\DatabaseDeploymentResources\MaskMePlease\$(Build.BuildID)\Production'
    AppendEnvironment: false
    TargetDatabaseConnectionType: 'explicit'
    TargetDatabaseServer: '.\TOOLS'
    TargetDatabaseName: 'MaskMePlease_Production'
    TargetAuthMethod: 'windowsAuth'
    requiredVersionOfDlma: 'latest'

Refreshing SQL Server Development workflows with Redgate SQL Provision

“If you quit on the process, you are quitting on the result.”
Idowu Koyenikan

SQL Provision is really cool. But you knew that didn’t you? It’s obvious – we get teeny-tiny clones, based on an image with completely sanitized data we can use for just about anything in dev and test, and if we break them? Boom! There’s a new one.

I’m not just talking about refreshing Dev & Test environments though, oh no! I’m talking:

  • Clones as baseline with SQL Change Automation – baseline scripts for projects are a thing of the past, goodbye invalid object headaches!
  • Clones every single time you switch a branch – keeping everything separate and not cross-pollinating database work between branches
  • Clones to check Pull Requests instead of relying solely on the code itself in Version Control

Watch my session on all 3 of these from Redgate Streamed back in August: https://www.red-gate.com/hub/events/redgate-events/redgate-streamed/redgate-streamed-global-august-26

But one question always comes up about clones in any workflow and that is – how often should I refresh Images and Clones?

This question obviously depends a lot on the process, but in reality I think the question should be less about clones and more about the images themselves. Clones are transient and can be flipped at a moment’s notice, but the image, or the “clone tax” as Steve Jones calls it, is the thing that takes time, resource and space.

I’m going to take my own go at answering this question as I would in any customer meeting or architecture session – but if you want some excellent detailed advice and examples, check out this awesome documentation page here: https://documentation.red-gate.com/clone/how-sql-clone-improves-database-devops/self-service-disposable-databases-for-development-and-testing

Q: So, how often should we refresh it?

A: It depends on your use of the Clone – how often do you need up to date data?

As a rule of thumb though, I tend to see the following behaviours:

  • Customer Support – overnight during the working week: Where you have data that needs people to troubleshoot customer issues, it always helps to have data as close to now as possible to help resolve issues. You want an image on standby ready so that at any second a member of support can pull down a copy to look through (if it NEEDS to have sensitive data for this purpose, you can restrict who can create clones from these images by using SQL Clone’s Teams functionality)
  • BI / MIS and Report testing – once a week (if not more often): Business Intelligence and reporting workflows can just mean that you’re reading a lot from your clones in which case they should stay small and you should be able to move seamlessly between clones. But. If your ETL process puts a very heavy load on your clones (like truncating and re-populating tables) you may cause bloat and need to rethink your refresh frequency to be more often where possible, perhaps overnight so that any transformations are captured in the new images, and clones by extension.
  • “BAU” Development (Schema and Static Data Changes) – Every 1 or 2 weeks: If you’re not affecting a large number of changes to your clone, or they are limited to schema and static data only then you should be absolutely fine with a wider refresh cadence – keeping the clones around for the whole sprint or only refreshing once during the sprint can mean everyone more easily stays up to date with the same environment consistently.
  • Ad-Hoc and Test workflows – once per month: There are going to be times where you occasionally need a copy of the live DB, but the fact it is 99% similar in terms of schema, and the data is a few weeks out of date isn’t a big deal. You can pull one down from this “cold copy” for any kind of test, destructive or even to validate certain behaviors / sense check if an update or query will work. It’s also handy to maintain a slightly older copy where possible if you need to start digging into failed updates made in development, so need to have a milestone to compare from.

Again – these workflows may vary and you may have needs to be more or less frequent based on differences being recorded, bloat, space available on the fileshare etc. but generally I find customers are pretty happy with this.

Q: Once we have our refresh rate in place – how do we move developers across?

This is a great question I get a lot of the time, and it stems from the fact a developer may have made a few dozen changes to a clone, and then the frequent refresh rate blows their clones away (and they forgot to commit to version control – D’oh!) – so it’s important to bear in mind that development work, and as a result the cloning of environments is not “cut and dried“. We should give developers a chance to move across as-and-when they’re ready, so I often end up recommending the below workflow, to ease this process.

For the sake of this proposed workflow I’m assuming a couple of things:

  1. The selected workflow is BAU Development and we want to refresh once per week
  2. We have enough space available on our fileshare to allow for 2 (or more) distinct copies of the primary image
  3. Clones are being delivered to jump boxes / VMs within the network that are always connected (and not developer machines), and we can control when they are deleted
  4. We operate on a standard western work schedule where the week begins on Sunday, Saturday and Sunday are considered non-working days and developers typically work anywhere between 8am and 6pm
  5. This can all be automated using SQL Clone’s PowerShell module

Week 1 – Sunday night

  • We create Image A of Primary Database from most recent backup file onto fileshare, applying data masking

Week 1 – Monday to Friday

  • Developers X, Y and Z create their own clones from Image A as they begin the working week
  • The clones are linked to a Git repo where, using SQL Change Automation, the developers commit all changes they make to their clones throughout the week
  • Developer X finishes with their changes, makes their final commit and push on Thursday and works on a different task on Friday

Week 2 – Sunday night

  • We create Image B of the primary database – with slightly more up to date (and sanitized) data, capturing any deployed changes the team committed and pushed to git previously
  • We retain Image A for now but do a check for which developers have clones remaining (Developers Y and Z) and either nudge them in the team stand up that they only have a few days left, or automate the sending of an email to those developers warning them their clones are now 1 week old

Week 2 – Monday morning

  • Developer X creates their new clone from Image B and links it to Git ready to start making changes

Week 2 – Tuesday to Friday

  • Gradually over the course of the week as Developers Y and Z finish with their tasks and commit their changes they remove their clones and create new ones from Image B
  • A final reminder, as an email or a notification in MS Teams / Slack, goes out on Friday morning that any clones of Image A will be deleted over the weekend

Week 3 – Sunday night

  • Image A with no clones remaining is deleted (or any remaining clones are deleted first) and Image C is created to begin the cycle again
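
As noted in the assumptions above, the whole cycle lends itself to automation with SQL Clone’s PowerShell module. Below is a minimal sketch of the Sunday-night step – the cmdlet names are taken from the SQL Clone PowerShell reference as I remember it, and the server URL, instance, share and image names are all placeholders, so treat this as a starting point rather than a finished script:

# Sunday night refresh – connect to the SQL Clone server (placeholder URL)
Connect-SqlClone -ServerUrl 'http://sqlclone.example.com:14145'

$instance    = Get-SqlCloneSqlServerInstance -MachineName 'CloneAgent01' -InstanceName 'SQL2019'
$destination = Get-SqlCloneImageLocation -Path '\\fileshare\CloneImages'

# Create the new weekly image (Image B) from the most recent backup of the primary database
# (a Data Masker masking set can also be applied at image creation time – omitted here)
New-SqlCloneImage -Name 'Image_B' `
                  -SqlServerInstance $instance `
                  -BackupFileName @('\\fileshare\Backups\PrimaryDatabase_Latest.bak') `
                  -Destination $destination |
    Wait-SqlCloneOperation

# Check whether last week's image still has clones attached and nudge their owners
$oldImage  = Get-SqlCloneImage -Name 'Image_A'
$oldClones = Get-SqlClone -Image $oldImage
if ($oldClones) {
    # Swap this for an email or Teams/Slack webhook in a real schedule
    Write-Warning "Image_A still has $($oldClones.Count) clone(s) attached - it will be removed next weekend."
}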

Conclusion

Although this workflow requires the duplication of the central image, it has a number of benefits:

  • It is easily automated using PowerShell
  • The source control process suffers minimal disruption and developers don’t need to rush to finish anything
  • We don’t accidentally destroy developer work – the onus is on the developer to ensure work is committed
  • If, for any reason, the image creation process fails, you still have a persisting image, so developers aren’t blocked from working while the image process is completed manually
  • Moving to newer clones is a more organic process
  • If you wanted to maintain an image throughout the week and refresh a second image overnight for more up to date data, you can simply re-purpose the above principles. This could then be used for a number of the different teams and workflows simultaneously

Bonus Point – Naming Conventions

Many people choose to append the images they create with a date stamp like Image_A_16102020 so we know when it was taken and which is the latest. This is good practice, but be warned: if you’re using clones as a baseline or for branch switching etc. you will need a persistent name, or that link will break. An alternative is to always use the same name for the most current image and simply rename the older image with the date stamp, e.g. Image_A is current, but before a new Image_A is created the old one is renamed to Image_A_16102020 – this won’t disrupt the clones that already exist on it, and it lets you always know which one is most recent.
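
If you go the renaming route, the date stamp itself is trivial to generate from a script – a tiny, purely illustrative PowerShell sketch (the image name is a placeholder, and the actual rename is still done through SQL Clone itself):

$dateStamp    = Get-Date -Format 'ddMMyyyy'     # e.g. 16102020, matching the example above
$archivedName = "Image_A_$dateStamp"            # name to give the outgoing image
Write-Host "Archive the previous image as $archivedName, then create the new Image_A"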

3 RDBMS’, 3 models, 3 end-to-end deployment pipelines with Azure DevOps and Redgate Deploy

“Choice is the most powerful tool we have. Everything boils down to choice. Every choice we make shuts an infinite number of doors and opens an infinite number of doors.”
– Lori Deschene (https://tinybuddha.com/)

PLEASE NOTE (edit 18/12/2021): Two of the three (and soon the third) models below have been superseded by Flyway Desktop – please see my posts on Flyway Desktop setup (first here) and shadow database setup here to set up any workflows past today’s date; the below should only be used for legacy purposes or with Flyway Teams for PostgreSQL.

Picking a Set-Up

One of the hardest parts of my job is that at any moment’s notice we could be asked to walk through better database change management processes. That in itself isn’t the challenge – the problem is that it could be with any kind of tech stack. I might need a Git repo of some shape or form (Azure DevOps, plain ol’ Git, Bitbucket etc.), then a CI server of some kind (Azure DevOps, GitLab, TeamCity, Bamboo etc.) and finally something to handle releases (Azure DevOps, Octopus Deploy, Bamboo etc.) – this is fairly easy to reproduce in multiple combinations with automation, Terraform etc., but when you’re actually helping someone set it up, you’ve got to know where all the bits go.

The Redgate tools work with all of these options and combinations so making sure we’re setting everything up right usually means questions about the Repo/CI/CD tooling people choose.

The commonality above, and the one I run into the most for all 3 stages, is Azure DevOps. It’s straightforward to understand, all in the same place and just plain fun to use (AND it supports emojis ^_^).

Finally now, we have to pick a Relational Database Management System (RDBMS) to use – Redgate Deploy is one of the newest offerings from Redgate and it comprises capabilities for “Database DevOps” across MS SQL Server, Oracle Database and 18 (well, actually 19 now thanks to Flyway v7!) other RDBMSs! So instead of choosing, I’m going to pick the two key ones there, plus one of the 18 others: MSSQL, Oracle DB and PostgreSQL.

One final question I had to ask of myself was what models I wanted to use. There are a couple of choices available within the Redgate solution, specifically for MSSQL and Oracle at the moment, so I decided that I would do State based deployments for Oracle and Hybrid deployments for MSSQL, given that PostgreSQL will have to be migrations anyway. Fear not though, the setup is not hugely dissimilar when it comes to the actual pipelines!

Setting up Azure DevOps Repos

This stage was relatively easy – I simply created 3 new projects in my DefaultCollection where I’m going to put the repos for each of the DBs.

and then I created 3 readme files, and cloned all 3 git repos down onto my machine as local repos:

and we’re ready to go!

A quick note: I’m using a mixture of Azure DevOps hosted (for PostgreSQL) and Azure DevOps Server locally installed on my Virtual Machine (for MSSQL/Oracle) with a local agent present to run everything below – you can adopt this methodology or you can use the hosted version, but for the Oracle solution below at least you will need a local agent available (unless you use the DockerHub Image for Schema/Data Compare).

Microsoft SQL Server

The first thing I need to do for all of these is to pick the databases I’ll be working on – for me I’m rather lucky as our demonstration environment has a rather nifty set of databases for me to choose from!

I’m going with SQL Source Control (the MSSQL State component in Redgate Deploy) and SQL Change Automation (the MSSQL Migrations component) both plugged into Management Studio (SSMS) with a set of databases called the ScaryDBA_Dev/Test/Prod environments (which I used SQL Clone to create the copies of), in homage to the wonderful Grant Fritchey.

So the first thing we need to do is get Dev under source control – we’ve refreshed back from Prod so there shouldn’t be any differences and we’re using the Hybrid model, so we’ll need to create the State first. I do this by going to SQL Source Control in SSMS, and linking my DB to Git, creating a State Folder in the top level of my local repo as I do so:

Then once linked I go ahead and source control the initial schema (not sure how? Watch the Redgate University videos here):

Next I set up my Migrations project using SQL Change Automation, creating the Migrations folder in the same top level of my local repo, but now instead of pointing to the database, I’m pointing to my SQL Source Control generated State folder:

Now at this point we get the options to choose filters and comparison options – if you’re not sure, I would recommend speaking to someone at Redgate or looking up the documentation – I often see people wanting to filter out Security/Users/Roles at this stage so it might be worth a look! I just carried on as I only have a few objects anyway!

Connect to the target and create a baseline script (i.e. what does Prod look like now?) – again, because I have a minimal setup, I’ll go straight from my “Prod” database:

Commit and push and we’re on our way – everything is in version control:

Now I may have cheated by doing MSSQL first – because now actually building and deploying the project is pretty straightforward – much like I have done in previous posts here and here, I just used the SQL Change Automation plugins from the Azure DevOps marketplace to first build:

and then deploy the project:

and it all succeeded… the 2nd time around when I remembered to specify which DB I was deploying to!

Oracle Database

The first thing I need to do for all of these is to pick the schemas I’ll be working on… wait, Deja Vu! – well once again I have a little set of schemas present on the demonstration machine that will serve me just fine!

Because we’re working in the State setup, out of Redgate Deploy I’m going to use Source Control for Oracle which allows me to specify the remote repo, the folder to create and even the fact I’m using Azure DevOps Git:

(Step 1 was simply providing the connection details to my Oracle Database, hence why I was on step 2!) – I select the Schema I’ll be putting in Source Control and even get a nifty run down of the structure:

Hit next and give a name to the Project (unsurprisingly I went with HR) and then check in all of your initial objects:

Now, one thing that you may have noticed if you’re following along, and which I should clarify (because I forgot it when setting up this blog post):

  1. You don’t need to specify the local repo you cloned down because Source Control for Oracle handles this itself in the back end, if you want it to be part of a local repo with other code in it, use the Working Folder instead
  2. If you are using Git and NOT the working folder, committing will also Push your objects to the remote – you’ve been warned!

As above, I now head over to Pipelines and hit Create New Pipeline! I check out my repo with the schema objects in it, and add a job to my agent. But what am I going to pick? Well, unlike SQL Change Automation there isn’t a plugin available on the Azure DevOps Marketplace, so we’ll need some good old-fashioned command line calls!

First, let’s clean out the CI schema. I’m going to use the drop-all-objects script from the Redgate documentation site and make a call to run it using sqlplus (I’m storing the file locally but you could even include it in your repo under a build folder, maybe?):

echo on
Call exit | sqlplus hr/[passwordredacted]@//localhost:1521/CI @C:\DemoFiles\DropAllObjects.sql
echo off

Next we’ll add a call to the cmdline of Schema Compare for Oracle to build the database from our repo, using the files that were checked out by the agent (an Azure DevOps pre-defined environment variable) – again we’re using a similar script from the Redgate DevOps for Oracle site, but because we’re deploying ALL objects from version control we don’t really need a report per se; this is just to test the schema can be built from the ground up:

"C:\Program Files\Red Gate\Schema Compare for Oracle 5\sco.exe" /deploy /source $(Build.SourcesDirectory)\Schema{HR} /target SYSTEM/[passwordredacted]@localhost:1521/CI{HR} AS SYSDBA /indirect 

echo Build database from state:%ERRORLEVEL%
 
rem IF ERRORLEVEL is 0 then there are no changes.
IF %ERRORLEVEL% EQU 0 (
    echo ========================================================================================================
    echo == Warning - No schema changes detected. ==
    echo ========================================================================================================
)
 
rem IF ERRORLEVEL is 61 there are differences, which we expect.
IF %ERRORLEVEL% EQU 61 (
    echo ========================================================================================================
    echo == Objects were found and built. ==
    echo ========================================================================================================
    rem Reset the ERRORLEVEL to 0 so the build doesn't fail 
    SET ERRORLEVEL=0
)

and assuming this all works, we’ll package up the files into a zip and publish them as an artifact so we can consume them at the release stage!
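
If you’d rather script the zip step than use the built-in Archive Files task, a PowerShell step along these lines would do it – the folder layout and zip name are placeholders for illustration, and the publish itself is still a standard Publish Build Artifacts task pointed at the staging directory:

# Zip the schema state that the agent checked out, ready to publish as a build artifact
Compress-Archive -Path "$(Build.SourcesDirectory)\Schema\*" `
                 -DestinationPath "$(Build.ArtifactStagingDirectory)\OracleSchemaState.zip" `
                 -Force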

and guess what? It all just worked *cough* on build #23 when I got the syntax right finally…

Of course we can add additional stages to the build as well, such as a check for Invalid Objects and some Unit Testing, but I’ll keep this pretty lean for now!

Now, just like we did for MSSQL we’re going to set up a new deployment pipeline, grab the artifact we’re publishing from the build, enable a CD trigger and we’re going to deploy to, in this case, Acceptance.

Let’s first create a job on the agent to unpack the zip file and see how far we get – I’m just going to dump them in a DeploymentState folder in the working directory:
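
For reference, a scripted version of that unpack step is just the reverse of the build-side zip – the artifact alias and paths below are placeholders:

# Unpack the published artifact into a DeploymentState folder in the working directory
$artifactZip = "$(System.DefaultWorkingDirectory)\_OracleBuild\OracleSchemaState.zip"   # placeholder artifact alias
Expand-Archive -Path $artifactZip `
               -DestinationPath "$(System.DefaultWorkingDirectory)\DeploymentState" `
               -Force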

and… awww thanks Azure DevOps, I needed to hear that!

and now we add yet another command line task, but this one is just going to do a comparison, it’s not actually going to deploy anything – because we’re going to add a manual intervention step to approve the deployment first! I had a little help again from the Redgate docs for this one, because I keep having to catch cmdline error codes – if I was wise like Alex Yates I probably would have just handled this with PowerShell…

echo off
rem  We generate the deployment preview script artifact here
"C:\Program Files\Red Gate\Schema Compare for Oracle 5\sco.exe" /abortonwarnings:high /b:hdre /i:sdwgvac /source $(System.DefaultWorkingDirectory)\DeploymentState\Schema{HR} /target SYSTEM/Redgate1@localhost:1521/Acceptance{HR} AS SYSDBA /indirect /report:$(System.DefaultWorkingDirectory)\DeploymentState\changes_report.html /scriptfile:$(System.DefaultWorkingDirectory)\DeploymentState\deployment_script.sql > $(System.DefaultWorkingDirectory)\DeploymentState\Warnings.txt

echo Warnings exit code:%ERRORLEVEL%
rem In the unlikely event that the exit code is 63, this means that a deployment warning has exceeded the allowable threshold (e.g. data loss may have been detected)
rem If this occurs it is recommended to review the script, customize it, and perform a manual deployment
 
IF %ERRORLEVEL% EQU 0 (
    echo ========================================================================================================
    echo == No schema changes to deploy
    echo ========================================================================================================

    GOTO END
)
 
IF %ERRORLEVEL% EQU 63 (
    echo ========================================================================================================
    echo == High Severity Warnings Detected! Aborting the build. 
    echo == Review the deployment script and consider deploying manually.
    echo ========================================================================================================
    rem Aborting deployment because high severity warnings were detected
        SET ERRORLEVEL=1
    GOTO END
)
 
rem This is the happy path where we've identified changes and not detected any high warnings
IF %ERRORLEVEL% EQU 61 (
    echo ========================================================================================================
    echo == Schema changes found to deploy - generating deployment script for review
    echo ========================================================================================================
    rem Set ERRORLEVEL to 0 so the build job doesn't fail
	SET ERRORLEVEL=0
    GOTO END
)
 
:END
EXIT /B %ERRORLEVEL%

I then throw in an agentless job (Manual Intervention Step) and then finally (once I have reviewed the deployment report that is produced) one further cmdline call to actually run the deployment script against my Acceptance target:

echo on
Call exit | sqlplus hr/[passwordRedacted]@//localhost:1521/Acceptance @$(System.DefaultWorkingDirectory)\DeploymentState\deployment_script.sql
echo off

I have saved my pipeline, now it’s time to test. So I’m going to make a very quick change (so that something is produced) and see what happens…

Boom. Pipeline done.

One word on this though – I haven’t included an awful lot of frills (error handling, checks, NuGet instead of Zip etc.) so you’re free to bulk this out how you see fit, but by golly it works! Also make sure you tick this on the second Agent Job, else it’ll wipe out your working directory – something that obviously definitely did not happen to me…

PostgreSQL

This one might be cheating a little. As you know I’ve already setup a CI pipeline with Flyway before, using Azure SQL DBs and the Flyway Docker container as part of the build, and in some cases even tSQLt for Unit Testing too! But this is PostgreSQL, and this is a new blog post, darn it!

Still getting your head around Flyway? Check out the Redgate University videos!

I started out by creating myself a PostgreSQL 10 server in the Azure Portal, because:

  • I can
  • I didn’t want a local install of PostgreSQL
  • I’m not self sabotaging

and I set up a Dev and Test database on it – that is once I remembered to allow my client IP address *sigh* and then connected from Azure Data Studio:

I already have some basic scripts from my last demo that I can use – so I pulled down the latest version of Flyway (V7) and unzipped it into my files:

Then I created a SQL folder in my local repository for the PostgreSQLPipeline and popped a couple of migrations in – I’m using the StackOverflow scripts, adapted for PostgreSQL from Kendra Little’s GitHub (thank you Kendra!). In the previous posts we’ve had to source control the state or initial baseline of the database, however as we’re using Flyway for PostgreSQL we create and name/order the migrations ourselves, so we have plenty of control over that – hence why we can jump straight into building some scripts this time around.

Finally, I pointed the config file for Flyway to that, also taking the opportunity to point it at my Dev DB using the PostgreSQL JDBC:
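
For anyone who hasn’t seen a Flyway config file before, mine ended up looking roughly like the below – these are the standard Flyway settings keys, but the URL, credentials and folder path are placeholders rather than my real values:

# flyway.conf (illustrative values only)
flyway.url=jdbc:postgresql://my-postgres-server.postgres.database.azure.com:5432/dmdatabase_dev
flyway.user=my_admin_user@my-postgres-server
flyway.password=<redacted>
flyway.licenseKey=<your Flyway license key>
flyway.locations=filesystem:C:\Git\PostgreSQLPipeline\SQL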

Now I didn’t really NEED to do this step and try things out against Dev, because I already have the scripts, so I could have just started building the pipeline – but it’s always worthwhile getting local validation first by running things against Dev and then migrating up!

A quick Flyway Info later and we were good to go – the scripts are recognized so we know we’ve set everything up correctly.

One git add / commit / push and everything is in my repo:

Now as you may know from my other post we can do 1 of 2 things here – we can now either build what we eventually push to the repo using a cmdline call (like we did with the Oracle build) to a machine where we have Flyway installed, or we can use the Docker image.

I’m actually going to use Docker again but this time, instead of specifying the various credentials in a config file that was getting passed to the container, I’m actually going to use Azure DevOps environment variables and build the connection string that way – it’s really easy to keep the variables secret in Pipelines, so I can pass my JDBC connection, complete with Username and Password, as well as my Flyway license key, without worrying someone might get hold of them!

I’m actually going to build against a live PostgreSQL database before deploying, so I also created another DB for me to use: demodb_ci

I actually stole the YAML from my previous pipeline (below) and updated the variables accordingly:

trigger:
- master
 
pool:
  vmImage: 'ubuntu-latest'
 
steps:
- task: DockerInstaller@0
  inputs:
    dockerVersion: '17.09.0-ce'
  displayName: 'Install Docker'

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run --rm -v $(FLYWAY_LOCATIONS):/flyway/sql flyway/flyway clean -url=$(JDBC) -licenseKey=$(licenseKey) -user=$(userName) -password=$(password) -enterprise 
  displayName: 'Clean build schema'
 
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run --rm -v $(FLYWAY_LOCATIONS):/flyway/sql flyway/flyway migrate -url=$(JDBC) -licenseKey=$(licenseKey) -user=$(userName) -password=$(password) -enterprise 
  displayName: 'Run flyway build'

and it ran just fine! Well actually it failed first, because I didn’t have permissions from the IP address that the container was running from, but fortunately Azure has a handy switch in the PostgreSQL Server settings to simply allow Azure Services traffic through the firewall:

Once that was sorted, the first stage (as always) is to download Docker and then we have 2 Flyway containers steps:

1 – Clean the schema and make sure the database is empty
2 – Migrate the schema changes

Then we have two options – we could do as we did in the Oracle pipeline and zip up the files, spit them out at the Release stage and consume them there by calling Flyway from the command line, or we can go ahead and promote our deployment using the same pipeline.

I’m lazy, so I’m going for the latter!

In a normal “production like” situation I would probably take the opportunity to test and check etc. like I did above, but let’s keep this super lean – if the build works, I trust the deployment. Let’s go ahead and deploy to Production – I’ll add this as an additional task in my YAML:

- task: Bash@3
  inputs:
    targetType: 'inline'
    script: docker run --rm -v $(FLYWAY_LOCATIONS):/flyway/sql flyway/flyway migrate -url=$(ProdJDBC) -licenseKey=$(licenseKey) -user=$(userName) -password=$(password) -enterprise 
  displayName: 'Promote to Production'

And the deployment was successful! Phew – I think I’ve earned a cup of tea!

Conclusion

In this blog post I have demonstrated 3 different (and initially very simple*) approaches to the source control and deployment of database changes – but there’s actually a much wider combination we could have adopted – all 3 models with MSSQL, all 3 models with Oracle, and Migrations for up to 18 other systems like DB2, Snowflake and even SAP HANA! But what did I need to do ALL of this? A single solution: Redgate Deploy**.

Thank you for stopping by! Have an amazing week!

*There is a lot missing from the code I have provided, like additional error handling, tests etc. and all of the above CAN be improved – but did we manage to build and deploy across three different systems all using Azure DevOps? Yes we did. If you intend on using any of the above, please ensure you build in the necessary controls and process around it and always pick what is best for you and your team.

**Redgate Deploy is going from strength to strength, expect to see a wide range of improvements made over the coming months – I won’t be surprised if this blog post is already out of date by the time I finish writing it – that’s how awesome the teams working on all of this are!

Automated Dev Database Branch-Switching with AzureSQL, PowerShell and GitHooks

“Keep it simple, stupid!”
– My year 12 & 13 English Lit. Teacher

Recently I’ve been fascinated with something really cool. A couple of my colleagues at Redgate wrote a GitHook which allows you to easily switch branches using Redgate SQL Clone. You can see the hook here with full instructions – and I thought it was pretty neat.

But it got me thinking – I’ve posted a lot about when people are using just Azure SQL DBs (PaaS), about Masking and DB Change Automation, but when you’re using AzureSQL for Dev and Test DBs as well as Prod, you still don’t have the same agility one would expect from a local copy, like a clone.

But, the above GitHook leverages PowerShell (among some other fancy wizardry) so, what if we could do this exact same thing, using the PowerShell Az module to dynamically create and switch Azure SQL DBs in our own private resource groups every time we checkout a branch?


I don’t imagine it would be fast because I’m restricted to using very low tier SQL DBs by my tiny allowance of (as Kendra Little calls them) “Azure Bucks”, but it should absolutely be possible!

So I decided to write a PowerShell script to do just that.

The first question I had to answer was: how do I replace the Clone “Image” in this process? I need something that is effectively a copy of our Production (or as near as possible) environment so we have something to base EVERY copy on – so I created the idea of a Golden Copy within the script. This golden copy could be created by masking and copying back down from Production using something like Redgate Data Masker and my scripts here in GitHub, but as a stop-gap, if the script doesn’t find one in the Resource Group and Server you select, it will simply create one from your Dev DB. Best answer I could think of – you’re welcome to improve it!

All that remained was effectively to go through and just mimic the functionality of the Clone script but using Az: so if you are switching to a new branch where you don’t already have an existing Dev DB, then you get a new copy of Golden. If you’re switching to a branch you’ve checked out before, it renames the DBs to swap you back to the correct branch.
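
To give you a flavour before you open the script itself (linked at the bottom of this post), the core of that logic boils down to a handful of Az.Sql cmdlets. The below is just a minimal sketch of the idea, not the actual GitHook script – the resource group, server and database names are placeholders, and you’d normally be handed the branch names by the hook rather than via parameters:

# Sketch only: swap an Azure SQL Dev DB when changing git branches
param(
    [string]$FromBranch,   # branch we are switching away from
    [string]$ToBranch      # branch we are switching to
)

$rg     = 'DMDatabase-RG'        # placeholder resource group
$server = 'dmdatabase-server'    # placeholder Azure SQL logical server
$devDb  = 'DMDatabase_Dev'

Connect-AzAccount | Out-Null

# Park the current Dev DB under the branch we are leaving, e.g. DMDatabase_Dev_master
# (Set-AzSqlDatabase -NewName renames the database in place)
Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
                  -DatabaseName $devDb -NewName "${devDb}_${FromBranch}"

$existing = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server |
            Where-Object DatabaseName -eq "${devDb}_${ToBranch}"

if ($existing) {
    # Seen this branch before - swap its copy back in as the Dev DB
    Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
                      -DatabaseName "${devDb}_${ToBranch}" -NewName $devDb
}
else {
    # First time on this branch - create a fresh copy from the golden copy
    New-AzSqlDatabaseCopy -ResourceGroupName $rg -ServerName $server `
                          -DatabaseName "${devDb}_Golden" `
                          -CopyResourceGroupName $rg -CopyServerName $server `
                          -CopyDatabaseName $devDb | Out-Null
}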

Here is an empty AzureSQL DB called DMDatabase_Dev:

When I now issue the git checkout “feature/newfeature” command it asks me to sign in to my Azure account:

and then gets to work:

And… that was it really.

I now have an Azure SQL DB called DMDatabase_Dev_master, as I switched from the master branch, and I have a brand new DMDatabase_Dev DB that I can use for my feature branch. You’ll notice I didn’t include -b in my git command – let’s assume a colleague is already working on this branch. I can now just update my copy (of my golden copy) with their work:

And we’re good to go!

But now if I switch back to my main branch, the object is gone and I can carry on with work on this branch:

It was really straightforward – I can’t believe I haven’t seen this in use in more places – but hey, guess what? The PowerShell is yours right here if you want it:

https://github.com/ChrisUnwin/PowerShell/blob/master/Demos/Redgate%20Demos/GitHookAzureSQL.ps1

The pre-requisites for it are:

  • You should have a Dev DB and you should update the values at the top of the script with the Dev DB name, server and resource group it is in
  • The script makes reference to, and creates, a Golden Copy DB so that you have something consistent to always create from – so when you get started, create your own “Golden Copy” back from Test/UAT or something if you can, maybe using the script mentioned above – it should be the name of your Dev DB appended with “_Golden”
  • If you want to change how it is authenticated so you don’t have to enter your credentials each time, then go for it – this was just the simplest method for me (and it’s currently 11:05pm so I’m going to bed!)

Feel free to improve it – I’m sure there are plenty of improvements that can be made – but it’s a starter for 10 for anyone out there just getting started with development in Azure SQL. Plus it’s kinda neat!

5 (Noticeable) Business Benefits of Secure Database Provisioning

“Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives.”
William A. Foster

I know what you’re thinking.

Chris. Your title looks like it was written to be a corporate whitepaper that I see ads for when I’m browsing social media; it should have a CLICK HERE button, a boilerplate photo of a smiling person holding a pen and it should say something like ‘executives hate them, find out their secret here!’

But something has become immediately obvious to me in the last few months: I still speak to people daily who are forced to:

  • Work in shared development models
  • Work on empty (schema-only) / heavily subset databases
  • Work on old, out of date and/or irrelevant data
  • Make decisions without knowing enough about their data or what they hold

When speaking to them, though, it becomes clear that the reason there is no dedicated option available for developers is actually not related to the “traditional” problems one would expect. You would naturally assume that the reason for not refreshing these environments is the large amount of space or time it takes to refresh often enough, or even that ‘we simply cannot due to sensitive PII and regulatory concerns‘.

No. In fact it comes down to, as all things do, time and money.


In the most recent State of Database DevOps report (2020 that is), a whopping 70% of 2000+ respondents replied that they were using a shared development database and this comes with a whole heap of associated problems, like poor code quality, looser controls around sensitive data and defective deployments. Just these figures alone already point to the solution being to spin up copies for developers on demand and it’s not like we can’t do that. There is SO much technology in the world, across almost all database platforms, that will allow us to virtualize, containerize, sanitize… (effectively all of the ‘izes‘) our databases so that we can have full, safe, realistic copies as frequently as we like. So what is stopping us?

From experience, it’s justification*. People going to senior stakeholders and saying “we need this technology” and hearing a cacophony of classic business challenges back: “but is it broken?”, “do we really NEED it?“, “it costs HOW much!?!“, “how much time will it take to implement?” etc… It’s dev and test hygiene, not a sexy major modernization project like using Azure Arc, using Blockchain or creating Artificial Intelligence. Who cares that developers have to share a database? We’ve got bigger Tofish to fry!

*Sometimes, but much less frequently, it’s down to complexity of implementation – but we’ll leave that one aside for now!

As you will know from my last post on why now is the time to adopt better working practices, it’s important for us to highlight the gains that can be made from newer, updated practices, and why now is not the time to be closing our minds off to a better way of life. It’s not going to be easy to sum this up in 5 points, and there are many other benefits to solid database provisioning, but these are, in my opinion, the ones that will revolutionize the way you develop.

Very important side note for this blog post: there are lots of subjective key practices, processes and tools that can form part of the “database provisioning process” specifically and they will vary wildly by experience, opinion and company – so for the purposes of the below I will be describing the benefits of a process that involves 3 primary components / steps, given these are the three I tackle most often:

  • Data Identification and Classification / Cataloging
  • Data De-Identification i.e. Data Masking
  • Data Provisioning i.e. Real Time Database Cloning / Provisioning

1 – Increase developer happiness / contentedness

Developers are employed to do 1 thing: innovate. It’s even in the name! Developers are on the cutting edge and are focused on providing value to end users as quickly and efficiently as possible, with shortened release cycles, incremental stories and optimized workflows they can produce this innovation. But a big part of the story is the setup.

Even if you’re working to a more agile methodology it is hard to deliver and test changes which are, in development environments, fundamentally destructive and experimental if you are sharing a workspace with multiple colleagues. Writing on shared Word documents can be frustrating at the best of times, so how can developers be expected to produce high-quality, rigorously tested, game-changing code when at any minute another developer can take the environment down, cause it to run slowly, or overwrite those changes with their own? When you cannot produce changes in an isolated, sandbox environment where they can be individually assessed, re-worked and improved then you have no guarantee that it should be promoted.

All of these sound like arguments focused on the production of code, but in fact these issues can all have a huge impact on something that is widely overlooked and rarely scrutinized: developer happiness.

Developers are the people who make stuff go, and without them feeling content and valued in their roles, we can’t expect our productivity and product quality to reflect that – so when developers witness the poor management of their code, something they have worked so hard on as it goes sliding down the priority list or gets rolled back or overwritten etc. they don’t feel motivated to continue doing the best that they can do.

With dedicated environments for dev and test, for different branches, pull requests etc. developers can finally work on innovative and exciting projects, and optimize the code that goes out the door to end users.

2 – Develop a common language about data & make better decisions

It’s very hard to speak about things when you use different language to describe the same thing. That much is obvious. In the United Kingdom alone we have many different words for bread rolls. So when someone comes into a sandwich shop in London and asks for a “Stotty”, can you guarantee that the person serving will know exactly what they mean, exactly when they say it?


No. There will be a gap where some translation will be required: some “down-time“, if you will. Now imagine taking something as simple as a bread roll and applying it to an enterprise data estate… you’re going to have a very bad time.

As I talked about in my blog posts here (importance of database classification) and here (classifications role in DevOps) before you can really make a fully informed decision about your data, you must know 2 simple things:

  1. What data you hold
  2. Where your data is

I should hurriedly add that I don’t just mean sensitive data now – all data deserves to be classified, because whether you’re a full stack developer adding a column to a table you’ve never used before, an auditor carrying out a Data Protection Impact Assessment (DPIA) and trying desperately to include the database, or a BI developer setting up some new reports or processes, you’re going to need to know about the data. This is where people have questions, and this is where you shouldn’t have to rely on anecdotal knowledge or be pushed around from one person to another at the company who supposedly “might be able to help“.

Better insights into data leads to better practices, less waiting (waste reduction) and greater insight. When we then act on this insight we move faster and deliver greater value in our pipelines.

Have you picked up on the trend yet? How all of these are going to end? Well, don’t spoil the ending for those who haven’t – they’ll have to wait for the conclusion!

3 – Move faster and better enable the DevOps pipeline

It’s apt that I’m listening to an amazing EDM remix of the Green Hill Zone from Sonic while writing this section, but isn’t this just what we need as a business? We want to be able to move faster or, to put it in more ‘agile’ terms, we need to be able to pivot and adapt at a moment’s notice. Until now, the database has been a monolithic and difficult-to-steer behemoth, and it shows in our processes.


A tangible example of what I mean when I say “move faster” is branching. It’s fairly commonplace now for a developer to be able to clone a repository, check out a specific branch, create new branches etc. without fear of what switching between those branches might entail. On a dev environment, especially when working database-first with your changes (it does make sense to know how the changes will impact the database first – that’s all I’m saying), it is, without a reasonable process in place, exceptionally difficult to switch easily between branches and keep work separate.

This often forces developers to stick to one environment when changes are all made in tandem and can play havoc when it comes to capturing those changes in the right place – a manual state-based comparison of a dev database with multiple branches of work on it to a target upstream could be disastrous.

This is why taking advantage of something like database virtualization, allowing you to spin up copies of databases in seconds, could be the answer. You can automate the provisioning of environments as githooks, during Pull Request automation or as release candidates and the experience will be exactly the same across the board – boom *code base*, fresh and ready to go. When developers can move fast, value comes through a whole lot faster.

4 – Minimize space constraints on new copies, on premise or in the cloud

Space is always a big player in these conversations, and for some it boils down to “well, just how much space can we save?” – that alone is enough to put a dollar value on the ROI, and people storm ahead with a solution (that’s not always right for them).

But space is a very real problem. Much as we (as technology professionals) like to believe otherwise in these modern times of cloud-native solutions, easily scaled serverless compute and Big Data Clusters, there are still a LOT of people out there firefighting legacy-but-necessary technology and wrestling with what they CAN get out of backups or their SAN tech.

Even using cloud providers costs money, data egress and ingress costs $, BLOB storage costs $, additional security measures cost $. So it’s really not ideal when our databases, for historical reasons or by virtue of the sheer AMOUNT of data we hold and process, are 5, 10, 50, 100TB+, because we’re going to be struggling with this Dev/Test issue still for years to come.

As with point 3, database virtualization has come of age and we now have a lot of different solutions, from containerization through to DBaaS, that can help us minimize the amount of space we ACTUALLY require – meaning less money poured into maintaining large, unwieldy Dev/Test environments, or paying a large bill for the privilege of doing so in the cloud (and when developers will be using their dev machines anyway, it just makes sense to see what we can do to leverage this existing hardware).

Whilst this one doesn’t directly add specific value to the end of the pipeline, or speed up this delivery, it can help reduce overhead costs associated with the infrastructure needed when providing this value.

5 – Work on realistic data without worrying about data breaches

This is probably one of the most obvious reasons given that I tend to blog about data regulations and compliance ALL THE TIME but I feel like I need to keep saying this.

If you remove all of the data from development and test database copies, this will not help with development and developers will have nothing meaningful to go on, nor any testing that isn’t limited to pre-defined values.

If you leave all of the data in development and test database copies, all you’re doing is duplicating your attack surface area and creating a lot of potential risks for that data to be surfaced where it shouldn’t be – on the internet, in screenshots, emails and of course, hacked.

So there needs to be a happy medium where we can have the useful data that gives us the insight and intelligence of a full data set – the business logic, trends, demographics etc. that we need during testing or analytics – but it should also be sanitized so that data subjects contained therein cannot be re-identified. Static masking, applied to lower environments, allows us to retain the usefulness of the data with none of the sensitive values.


Protective measures can be built into the DevOps process from the very beginning, as you’ve already seen right here on my blog; so as long as it is part of the process, and we have multiple controls (or guard rails) in place, we can operate safely and quickly without fearing that same speed will cause us to release any sensitive information – leaving us free to focus on one thing: value.

Conclusion

As you’ve seen above, it all comes down to time and money but there are many ways to save and speed up within a DevOps process by means of a good, solid database provisioning process. Whilst none of these reasons comes with a fixed ROI (unless you have ALL of your pre-prod database storage costs to hand) they contribute to something far better than that:

The ease of delivering value.

In a world where we can be concerned about everything, and where it’s hard to keep up with emerging technologies, it makes sense to start pruning away blockers to the process – the problems that are stopping us from delivering value faster. THAT is the theme and point of this blog post: our end users. We’re already delivering excellent value to them, we trust our developers and teams, but what’s stopping them from moving faster with database changes? Adopting a good provisioning process will mean you start to notice all of the above become true of your database development lifecycle.

Delays are not DevOps, delaying DevOps is worse: Why we need better working practices now more than ever.

“The time is always right to do what is right.”
– Martin Luther King Jr.

Over the past few months, we have been on lockdown – the product of a devastating and deadly disease that has well and truly stamped its legacy on human history forever. But it is out of these times that we receive a glimpse, a look into what is possible, and what humanity can do. It is out of this fight, out of these ever decreasing odds, that we finally see what a combined effort can do, and what a focus on our fellow people can bring about. Don’t know what I mean? Take a look at the Good News Network and subscribe (just like me) to see the best of us.

But it only works, we only triumph, when we work together.

Recent Example: Scientists at Oxford University have seen a tremendous breakthrough with their virus efforts and are making unprecedented strides towards a viable vaccine – but it involves an incredibly strong partnership with pharmaceutical companies and governments worldwide for staged testing, large scale results gathering and continuous improvement.

So. Why then in the last few months have I been speaking to people who say things like:

  • We have delayed our non-BAU process implementation, because we have seen a spike in usage, and we need all hands on deck.
  • We have been investigating tooling and processes to help our development teams, but this is on hold as we come to terms with this newer way of working.
  • Our teams are keen to adopt more agile ways of working, but they are overburdened at the moment, so we have decided to postpone any research into this for the next few months.

It is feasible, amid a global panic, that people and companies will do (and definitely have done) what comes naturally to us; that is, to “bunker down“. We believe that if we shift all efforts away from projects, ongoing testing and new processes, we can have all hands available to deal with anything that comes our way. Processes are established for a reason, right? Legacy methods of dealing with ad-hoc changes and semi-frequent deployments, waterfall-esque development cycles and decade(s)-old systems represent the familiar, the safe… supposedly.

Now, more than ever, it is time to actually change direction and to put more effort into some of the key principles and processes that will lead us to DevOps nirvana; it is this trinity of people, processes and tooling that can ultimately be the salvation for many global companies as they try to maintain their agility and competitiveness within an uncertain, shifting post-pandemic international market. There are many reasons why I, and many others, believe this, but I have detailed 3 key reasons below:

1 – Delaying DevOps creates waste and costs businesses money

DevOps is a culmination of learning, experience and effort and it cannot be classified as one single thing, however it is possible to define a number of things that DevOps is and what it most certainly is not. One of the things that DevOps is, is “the constant delivery of value to end users“; the idea that by adopting certain technical measures and working practices we can minimize the time to delivery of new features and functions, which equates to greater value for us, our end users and significantly more agility to shift in different directions as required.

These ideas of flow and value streams are covered quite nicely by Lean IT, which extended from Lean Manufacturing principles and it is nicely explained here (and I would highly recommend you read The Phoenix Project if you haven’t already) – but the purpose of it (in a similar vein to agile principles) in this sense, is that it defines a number of things that don’t add value to the resulting product or service. These ‘things’ are referred to as waste and this is precisely what we should be looking to remove from our existing, legacy processes because why would we work on anything that doesn’t deliver any value?

There are a few different types of waste but I want to highlight three important ones that often exist as a result of legacy processes still being in place:

  • Defects – includes lack of testing (poor execution) and hot-fixing environments (unauthorized changes)
  • Waiting – includes everything from waiting for refreshed environments to waiting for feedback/results, and even manual processes like deployment approvals
  • Motion (excess) – effectively doing the same thing over and over again, fire-fighting problems that arise on a near daily basis, engaging and monopolizing resources constantly who could otherwise be working on other, more important or value-add tickets (for those of you who HAVE read The Phoenix Project, see Brent as an example!)

The product of this waste is very simple and it fits into 3 main buckets: poor customer experience, increased costs and lost productivity. All of these things boil down to one fundamental truth – bad practices cost us money and reputation, transformation now could help us prevent this, and people will remember us for stepping up when we needed to.

2 – The workforce is increasingly distributed/remote and needs to collaborate better

At the beginning of 2020, one of the biggest questions faced by companies all around the globe was “how do I find and retain talent?”; this is not a new question and had already been around for years. Companies restricted to their offices (base OR satellite) realized they were increasingly fighting for candidates in one of the most competitive markets served by an ever dwindling local pool of options.

This was a situation which necessitated companies stretching beyond their existing capabilities to enable a better quality experience for remote and distance workers and/or teams – a situation which would later be exacerbated a hundredfold by the global pandemic crisis. Not only has this crisis confirmed that most companies, certainly those feeding software markets, can work remotely, it has also posed the question of whether we should work remotely more often, and has fundamentally changed the way we as a workforce will continue to work in the future.

One thing is for sure, whilst we will try to “return to normal” as much as possible, normal has been forever changed and remote working and collaboration is here to stay. That’s the important word in play here, collaboration.

The spirit of DevOps as I’ve mentioned before is good quality communication, collaboration and accountability. But at the heart of those three ideas is visibility. In an office we can over-hear, we can drop-in or bring things up “over the water cooler”/”at the coffee machine”. In a remote working context, that isn’t possible. So we have to adhere to 2 of the most important practices in modern day software development: transfer knowledge and record decisions.

When we work and communicate better in a remote/distributed workforce, and use tools and processes available to us, people don’t make unauthorized changes, or make decisions that affect you that you weren’t aware of until 3 weeks after they were made. It becomes easier to make decisions and generate better work faster, rather than being paralyzed by indecision and uncertainty as to whether you hold the latest version of the truth or if it is outdated. When we adopt the right processes and tools into our DevOps methodology, we know for certain what we should be doing, and why we should be doing it.

Process-wise this can easily take the form of common functions, many already at the disposal of remote teams: stand-ups, retrospectives, mob and pair programming, OKR and sprint planning – there are lots of different ways for us to work well and know what we’re supposed to be working on at all times (and why). Tooling-wise we can then match the how with a record of what is being done, which decisions have been made and, crucially, why. Using work management software like Trello, bug/feature tracking software like JIRA or Azure DevOps work items, source controlling everything (even having a strong branching and merging strategy to control workflow) with rigorous testing routines, policies and pull requests, plus automation, all leads to better informed, happier*, well-performing developers with a crucial sense of purpose.

*Important side note: It is also crucial that we don’t simply lose ourselves in the business benefits completely – developers, testers etc. are all human and we all crave job satisfaction and happiness in our roles. Yes you might be able to increase your deployment frequency, minimize costs or complaints, but nothing compares to a satisfied, motivated team, which DevOps can help breed and inspire.

3 – DevOps breeds innovation and improves company performance, with a tangible return on investment (and not just financial!)

Automation is one of the single greatest ways we can modernize our processes, and is often the first principle we think of when adopting DevOps practices; taking something that is manual or held together by legacy scripts that forever fail and cause outages, and instead continuously integrating, continuously testing and continuously improving using the latest processes and tooling available to us. Automation allows us to create high cycle rates, enhance and multiply the feedback options we have within our pipeline(s) and allows us to reduce manual concerns and issues, to dedicate teams to the very thing they were employed to do: innovate. We only need watch how Netflix does DevOps to realize what we can unlock.

On a weekly basis I discuss existing processes with developers all over the world and one key trend always emerges that we need to focus on: they have a process that is currently manual, and it needs improving. I have lost track of the number of times I’ve been told about a process where developers generate scripts, manually test themselves and then “do x” with it, whether that be just deploying to Production themselves (without review), or putting it in an ever mounting pile of scripts in a folder on a file share for someone to sift through periodically.

Across every single one of the conversations mentioned above that I have, there is not a single discussion that doesn’t include some kind of quantifiable cost to the business, whether that be downtime, customer refunds, regulatory penalties and even in extreme cases, high developer churn. All can still be expressed in terms of 2 things: Time and Money.

If we remove those roadblocks for our developers, if we give them tools to enable them to more easily do their jobs, we put in place processes that allow them to more easily deliver that innovation, and tight automated controls to remove error-prone, manual jobs – we end up with something more akin to harmony. By this of course I mean the “constant delivery of value to end users“. This creates a more positive user experience, allows us to respond more quickly in an uncertain market and make decisions on what we should or, just as importantly, shouldn’t do, faster.

The world at the moment is a very uncertain place and has destroyed jobs, companies and whole industries. We should expect that consumer confidence is at an all time low and as we all come out of lock-down across the globe, we should be prepared to metaphorically “put our best foot forward” to help our developers believe in what we’re doing, stimulate faith in our industries and ultimately deliver more value bidirectionally.

But this idea only works if we do this now.

Conclusion

DevOps is more than just “picking up some new tools” or “rolling out agile” to development teams. It is fundamentally a mindset change that can drastically and fundamentally alter the underlying motivations and thinking within an organisation, allowing you to focus on the most important thing – delivering value, faster.

There are always times where delaying large scale roll-outs is a pertinent decision to make, and a hard one at that! But DevOps seeks to unify every part of the development cycle; giving you greater visibility, communication, accountability and control, with maximum flexibility to test, validate and even pivot where needed.

But the time to do DevOps is not “once everything improves“, “once we get back to normal” or “when we have more time“… it is now. Now is the opportunity we all have to capitalize on the wave of change we have been consistently waiting to implement, to strengthen our position and future growth in our markets. DevOps is how we can come out on the other side of this disaster ready to embrace new technologies and ways of thinking, to respond to our customers needs, and deliver value and speed at scale.