“You never know what you can do until you try, and very few try unless they have to.”
– C.S. Lewis
Well I don’t have to, but many of the people I speak to on a daily basis are moving into GitLab, so it’s about time I tried it! What you’ll find here is a testament to the mistakes I make as I try to set up a full end-to-end database change management process with SQL Change Automation and GitLab.
PLEASE NOTE (edit 18/12/2021): If you are just starting out with Redgate source control and deployment processes and are looking into using GitLab for database deployments, please read my updated blog post here, which uses the newest source control and deployment technology, Flyway Desktop.
Will it all work perfectly? I don’t doubt that everything will fall over at some point, but let’s see how we get on all the same, and hopefully if you’re setting up this same pipeline, you’ll be able to avoid the errors and failings I inevitably cause! So here we go!
Jan 2021 Edit: Hey guess what!? There’s a video of this too! Check it out on YouTube.

Let’s set up a GitLab Project (and rename the default branch)
Naturally, I didn’t have a GitLab account, so I had to set one up. I’m assuming that if you’re using it already or you’ve just started using it you’re taking advantage of the more business features but I’ve just stuck with the good ol’ free account for now! It was remarkably simple, sign up, email address, confirm and here we are:

Ok, there is something very cool I like about setting up a new project. Can you tell what it is?

You can set up a completely blank new project, but they also have templates, you can import projects, OR (and I love this) you can set up a full CI/CD pipeline from another repo! Having done this before in Azure DevOps, it was not easy, let me tell you. It really seems like Azure DevOps hates you for setting up CI/CD from an external repo, even though it has plenty of helpful ways of doing so!
So I initialized my repository with a README and updated it:

Don’t ever say I’m not descriptive enough!
The first thing I did was rename my default branch to ‘trunk’ by going to Branches, creating the new branch, then in Settings > Repository changing it to the default, and swapping the protected status over from the outdated master:


Then finally delete the old default in Repository > Branches:

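If you prefer the command line, the same shuffle can mostly be done with git itself; a rough sketch, assuming your remote is named origin (you’d still flip the default and protected branch in Settings > Repository in between):

# Rough command-line equivalent of the rename (remote name 'origin' assumed)
git branch -m master trunk     # rename the local branch
git push -u origin trunk       # push trunk and track it
# ...switch the default/protected branch in Settings > Repository, then:
git push origin --delete master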
Excellent. Now it’s time to clone trunk onto my machine as we will need the local repository to put our change automation project in!
I created a folder called GitLab test and cloned the mostly empty repository into it:

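If you’re following along at home, the clone itself is just standard git; the URL below is a placeholder, yours comes from the Clone button on the project page:

# Folder name from above; the clone URL is a placeholder - grab yours from the Clone button
cd "C:\GitLab test"
git clone https://gitlab.com/<your-account>/<your-project>.git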
Simple!
Create a new SQL Change Automation project and push it to trunk
In SSMS I opened up the most recent version of SQL Change Automation and created a new project called “DoggosAreCoolDB”, using a copy of a Dev database I had lying around from a previous demonstration (BlogsDotRedgate):

Then I created my baseline as a migration script against the up-stream copy, BlogsDotRedgate_Integration, because who has access to Prod for this? Am I right? *cough & shifty eyes* not me!

I successfully generated my baseline and a change script (I added a column to a table, nice and simple) and then committed them to my local repo, and pushed! Forget branching, today isn’t about that, we’re just PUSHING TO TRUNK, WOO-HOO!
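If you’re doing that bit from the command line rather than the SSMS plugin, it’s nothing fancier than:

# Commit the SCA project files and push straight to trunk (no branching today!)
git add .
git commit -m "Baseline and first migration for DoggosAreCoolDB"
git push origin trunk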



Setting up the CI/CD Pipeline
Now that we have our project and migrations in GitLab we can build out a pipeline! So first stop, I went straight to CI/CD > Pipelines and hit “Get Started”:

They immediately throw you into a Quick Start “Help”-style guide, which is a little unintuitive but surprisingly helpful if you read the whole thing. Effectively, we need a YAML file called .gitlab-ci.yml that will store our pipeline as code, telling it how and where to build, and we need a runner to actually fire up and execute these steps.
In my experience with some other CI/CD tools, it’s been advantageous to create the Runner / Agent first on the machines you’re going to be using, and as I just have my laptop to do this on, I set one up on there! I found the full documentation for a Windows Runner here, and followed it just so I would have it available.
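For reference, the installation itself boils down to a couple of commands in an elevated PowerShell prompt (this assumes you’ve downloaded gitlab-runner.exe into C:\GitLab-Runner, per the docs):

# Install and start the runner as a Windows service
cd C:\GitLab-Runner
.\gitlab-runner.exe install
.\gitlab-runner.exe start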


The GitLab Runner was up and running in my services, but I’ll be darned if I could see it anywhere in GitLab…

Aha! So it turns out, after a bit of digging, that you need to register the runner specifically, using the CI/CD section in the project settings. That was probably my bad for not reading the documentation thoroughly, but my counter-argument is… who actually does? So I issued the register command, applied tags and a description, and chose my runner type. I chose shell because I need to be able to run PowerShell on the machine (I’ll need the SQL Change Automation PowerShell components available on the machine where the Build and Deployment are happening, of course):

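If you’d rather not answer the interactive prompts, the same registration can be done in one go; a sketch, where the token and description are placeholders you’d grab from Settings > CI/CD > Runners:

# Non-interactive registration sketch - token and description are placeholders
.\gitlab-runner.exe register `
  --non-interactive `
  --url "https://gitlab.com/" `
  --registration-token "YOUR-REGISTRATION-TOKEN" `
  --executor "shell" `
  --shell "powershell" `
  --tag-list "sql" `
  --description "Laptop runner for database builds"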
I’ve got the runner on the machine and an instance of SQL Server to build against; now I just need the YAML file (fortunately GitLab has full documentation for how to structure this as well!).
So that I can build my project, I’m going to need to know where the repo is cloned to during the process (i.e. to find the .sqlproj file). Taking a look around, I managed to find a list of environment variables that can be used in the YAML file. Just to be sure though, I created and committed the most basic YAML file, one that would just echo back the location of the cloned files:
stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags:
    - sql
  script:
    - echo $env:CI_PROJECT_DIR
Once this confirmed that the environment variable worked and that the build pipeline was firing up correctly on my private runner, I tried something a little more ambitious: building the .sqlproj file, using the cmdlet reference from the SQL Change Automation documentation for help:
stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags:
    - sql
  script:
    - $projectPath = $env:CI_PROJECT_DIR
    - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
    - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"
and we successfully built a database!

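As an aside, if the build step ever misbehaves, it’s worth sanity-checking the same cmdlet locally in PowerShell first, outside the pipeline; a minimal sketch, assuming the SqlChangeAutomation module is installed and the path points at wherever your clone lives:

# Run the same build locally to rule out pipeline problems
Import-Module SqlChangeAutomation
$project = "C:\GitLab test\<your-repo>\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
$validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"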
All that’s left to do now is 2 things:
- Create a NuGet package as part of the CI build
- Release the database changes to the target DB
I’m still using the same machine for the release portion too, so naturally I can use the same runner for this; if you have other servers you’re deploying to, you will of course need additional runners.
We can very easily extend what we already have in our YAML file by telling the process to create and export a new build artifact. I’m going to name it the same as everything else, and then append the BuildId to the end of the file name so we always get something unique:
stages:
  - DatabaseBuild

Build:
  stage: DatabaseBuild
  tags:
    - sql
  script:
    - $projectPath = $env:CI_PROJECT_DIR
    - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
    - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"
    - $buildArtifact = $validatedProject | New-DatabaseBuildArtifact -PackageId DoggosAreCool.Database -PackageVersion 1.$env:CI_JOB_ID
    - $buildArtifact | Export-DatabaseBuildArtifact -Path "$env:CI_PROJECT_DIR\Export"
  artifacts:
    paths:
      - $env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_JOB_ID.nupkg
    expire_in: 1 week
You’ll notice how I’m exporting the NuGet package to the project directory and then uploading it. This is so that we’ll have access to it for the release, but also so that we can use the artifacts argument in our YAML to upload the file and make it a downloadable package through the GitLab interface (if you go to that SPECIFIC job):

Whilst we’re on a roll here (and things haven’t gone wrong for a while), I’m going to add 2 additional stages ALL AT ONCE: “Create a Database Release Artifact” and “Deploy from a Database Release Artifact”, using, once again, the SQL Change Automation PowerShell cmdlets.
Woo-Hoo! I’m invincible!

I broke it.
Can you see what I did wrong? The error is:
New-DatabaseReleaseArtifact : The specified value for the Source parameter is neither a valid database connection string nor a path to an existing NuGet package file or scripts folder: 'CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.725147351.nupkg'
So, 2 fun things. 1) I forgot to mark one value as an environment variable ($env:) at one point, so it was just looking for the literal name of the variable in the path, and 2) it keeps erroring out saying my NuGet file isn’t a NuGet file, weird right?

On further inspection, it is yet another mistake I made. I’m using the job ID to name the NuGet package, which means that when the release stage tries to find the file, the ID is already 2 steps ahead, because each stage is counted as a different job! Duh!

A few quick changes should hopefully sort this out! I’m going to use the pipeline ID instead ($env:CI_PIPELINE_ID) and see if that makes a difference!
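To spell it out, the offending artifact path in the CreateRelease script went from (roughly, the first line is reconstructed from the error above) to the second:

# Before: literal variable name in the path, plus a job ID that differs per stage
- $buildArtifact = "CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_JOB_ID.nupkg"
# After: proper $env: prefix, and the pipeline ID, which is the same across all stages
- $buildArtifact = "$env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_PIPELINE_ID.nupkg"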

Wait. Did it just say the pipeline ran? SUCCESSFULLY? That’s exactly what it said! We can verify that this actually happened by checking the DatabaseDeploymentResources folder for the Release Artifact for Integration:

And everything is there! Note you won’t have a changes.html report just yet, because this is the first time we’ve successfully deployed to Integration; however, if we run one more change through (I’ll add a stored procedure):

Boom.

Now of course we can add additional stages to this, for manual intervention or to promote to other environments, but I’m going to call it a win here and retire gracefully (until the next post). I’m sure you’re all wondering what my final YAML file looked like too; well (counterintuitively) I’ve popped it all into GitHub for you and pasted it below. Enjoy!
stages:
  - DatabaseBuild
  - CreateRelease
  - DeployToIntegration

Build:
  stage: DatabaseBuild
  tags:
    - sql
  script:
    - $projectPath = $env:CI_PROJECT_DIR
    - $project = "$projectPath\DoggosAreCoolDB\DoggosAreCoolDB.sqlproj"
    - echo "Building project $project"
    - $validatedProject = $project | Invoke-DatabaseBuild -TemporaryDatabaseServer "Data Source=PSE-LT-CHRISU\"
    - $buildArtifact = $validatedProject | New-DatabaseBuildArtifact -PackageId DoggosAreCool.Database -PackageVersion 1.$env:CI_PIPELINE_ID
    - echo "Exporting artifact to $env:CI_PROJECT_DIR\Export"
    - $buildArtifact | Export-DatabaseBuildArtifact -Path "$env:CI_PROJECT_DIR\Export"
  artifacts:
    paths:
      - $env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_PIPELINE_ID.nupkg
    expire_in: 1 week

CreateRelease:
  stage: CreateRelease
  tags:
    - sql
  script:
    - $integrationDB = New-DatabaseConnection -ServerInstance "PSE-LT-CHRISU\" -Database "BlogsDotRedgate_Integration"
    - $buildArtifact = "$env:CI_PROJECT_DIR\Export\DoggosAreCool.Database.1.$env:CI_PIPELINE_ID.nupkg"
    - echo "Creating Release Artifact for DoggosAreCuteDB - check C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration for more information"
    - $releaseArtifact = New-DatabaseReleaseArtifact -Source $buildArtifact -Target $integrationDB
    - $releaseArtifact | Export-DatabaseReleaseArtifact -Path "C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration" -Format Folder

Integration:
  stage: DeployToIntegration
  tags:
    - sql
  script:
    - $integrationDB = New-DatabaseConnection -ServerInstance "PSE-LT-CHRISU\" -Database "BlogsDotRedgate_Integration"
    - echo "Deploying changes to Integration"
    - Import-DatabaseReleaseArtifact -Path "C:\DatabaseDeploymentResources\DoggosAreCuteInc\ReleaseArtifacts\$env:CI_PIPELINE_ID\Integration" | Use-DatabaseReleaseArtifact -DeployTo $integrationDB