Oh boy, now this has been truly a pain.

I was first introduced to the idea of Continuous Integration/Continuous Deployment (CI/CD) all the way back in 2018, and I've only just been able to create my own runner.

For context, I self-host GitLab on a private server.
I don't use any other tools like Jenkins, or sites like DeployHQ; instead, I use the CI/CD feature that GitLab provides. Why muddy the waters?

I've worked with agencies which do use those services, and they made things look so easy; just commit the changes to whichever branch and boom, the site is updated in a matter of minutes (depending on the size).

But there is a lot to learn about Gitlab pipelines, runners, and CI/CD.

Where to begin?

Firstly, you need to make sure that the service is enabled on your GitLab instance.
I believe it is by default, but if you are like me and found that warning message for every project annoying ("hey, you've got this thing but it doesn't work - do something about it"), you may have found a way to disable it site-wide.

I believe I followed these instructions on how to enable or disable GitLab CI/CD, and opted for the project-by-project basis, as I also manage other non-DevOps work there.

Once that is in place, you should notice a new CI/CD menu at the side.
If it's already there, great! GitLab is rather forthcoming with their alerts and messages, so what you need to do next will be obvious.

What to do next?

So you click through and it says "create a pipeline".
This is a file (.gitlab-ci.yml) that is part of the repo; inside it is a list of instructions for the runner to carry out once certain conditions are met.

So you can set some instructions for the master branch, production branch, developer branches, or branches with certain tags, etc.
The commands can range from SSH commands to running scripts and grabbing variables, and probably a lot more (I've only needed the simplest tasks).

Essentially, all my runners do is check whether their branch has been updated, and copy the files to whichever server they need to go to. This makes it easier for me, because I have a separate machine to do all the "sitting and waiting" instead of my computer (which I need for work).
There is more to it than that, such as backups, file permissions, etc., but the point is to automate the process.

What is this "Runner"?

Right, yes, runners.
A runner is a service that acts as a user for you. Think of it like a computerised servant, a robot butler (hey, "Jenkins"), that will casually wait around, checking on their repos, and once something happens, they will read through the list of instructions and carry them out.

Why hire someone to do that when you can build a machine to do it for you?

It is recommended that you have a separate machine to do this (it doesn't take a lot of processing power), but if you want to use the same server that the repo is on, or even your own computer, you can do that too.
You just need to make sure you install and activate the runner service.

I found these instructions on installing a GitLab Runner worked well, although I ran into issues getting it to work on Windows and PowerShell.
So I used Ubuntu (I am quite familiar with Linux now... I think I actually prefer it).
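For reference, once the runner package is installed on Ubuntu, registering it goes roughly like this (it will prompt for your instance's URL and a registration token, which lives in the project's CI/CD settings):

```shell
# Register this machine against your GitLab instance; it will prompt for
# the instance URL, a registration token, and which executor to use
sudo gitlab-runner register

# Check the service is up and ready to pick up jobs
sudo gitlab-runner status
```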

Here's a bit of free advice: make sure you give the appropriate permissions to the machines you use.
I was stuck for such a long time because I didn't think about the SSH keys.
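To save you the same head-scratching, here's a rough sketch of that key setup - the runner needs a key pair whose public half lives on the server it deploys to (the bracketed placeholders are yours to fill in):

```shell
# On the runner machine: generate a key pair with no passphrase,
# so the runner can use it unattended
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Put the public half on the server you'll be deploying to
ssh-copy-id -p [port number] [user]@[ip]

# Confirm the runner can get in without a password prompt
ssh -p [port number] [user]@[ip] "echo ok"
```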

Creating the YAML file

This is the list of instructions I mentioned earlier.

For me, I simply wrote down what I usually do when deploying a site.
Some transposing is needed (it's not quite so straightforward), and there are certain rules you need to follow.

There are quite a few tutorials and guides out there, and with a great amount of luck, I found that this offers the basics for what I need (if you've got any suggestions, please let me know).
Oh, and in the example below, I use GitLab CI/CD variables to store the keys I need for remote access. I used ssh-keyscan to find the host keys (if you don't have them saved elsewhere).

# What stages we have (think of this like tasks)
stages:
  - deploy

# The job name (can have multiple)
deploy_staging:
  # What task we want to do
  stage: deploy

  # The things we actually want to do (aka list of instructions)
  script:
    # Adds the SSH Keys to Remote so we can access via the Runner
    - eval $(ssh-agent -s)
    - echo "$SSH_REPO_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_STAGING" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts

    # SSH Commands, can have multiple, and different commands
    - ssh -p[port number] [user]@[ip] "[the command you would normally use manually]"

  # Only run on this branch, tag, etc.
  only:
    - [branch]
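As for the $SSH_KNOWN_STAGING variable I mentioned, ssh-keyscan against the staging server gives you the host keys to paste in:

```shell
# Print the staging server's host keys; copy the output into a GitLab
# CI/CD variable (Settings > CI/CD > Variables) named SSH_KNOWN_STAGING
ssh-keyscan -p [port number] [ip]
```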

And that is about it.
You can add so many more tasks, jobs, and instructions - all sorts. I would just recommend keeping it simple, as it uses processing power on both the runner and the remote; you can run out of RAM.

If you've got any questions, suggestions, or just want to chat, you can email me using the form below, or find me on Twitter (I'm usually hanging out there).

A little secret

Okay, today I'm going to tell you a nice little method I've been wanting to try out for ages.

If you are unaware of CI/CD, Jenkins, Auto Deployment, DeployHQ, etc., they're essentially methods to push changes to a site with as little interruption as possible.

Instead of you manually uploading via FTP, these services use a number of methods to update your live site while avoiding down-time.
This way the transfer happens between the two servers, not my computer, which I am using for work and which can have uploading issues.

This doesn't stop programmer errors, but it checks that the files don't break the site (and makes sure they all get uploaded).

What are the options

There are a number of options, as mentioned previously. I've had the most success with DeployHQ, but that is a paid-for service.
GitLab has a nice little feature built-in, but I've not been able to (con)figure it out*.

* see, that's a joke because you need to configure the settings

So, after scratching my head for a while, I had an idea. I understand Git pretty well, so why not just pull from the repo to the Live site?

Well, I've just tried it now, and it works!


Okay, so my site had been live for quite a while, but I needed to use the Git repo. The easiest way was to make a copy of my site's directory under another name as a backup.

Then, once I was happy, I cloned the repo to the server. In this instance, I cloned it under a different name (because safety), but you could clone it straight into place if it's a fresh site (or you are feeling daring).

Once cloned, all the files will be on the site and good to go (or ready to be renamed if you are being cautious like me).

Now whenever I make changes to my site, I can test locally, see that they work, and push up as normal.
I then log into my server and simply fetch the changes and pull them down for the site.
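In shell terms, the whole dance looks something like this (the paths and repo URL are made up; swap in your own):

```shell
# One-time setup on the server: keep the current live directory as a backup
mv /var/www/mysite /var/www/mysite-backup

# Clone the repo into the live location
git clone git@gitlab.example.com:me/mysite.git /var/www/mysite

# Every deploy after that is just a pull on the live directory
cd /var/www/mysite
git fetch
git pull
```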

The process is no different than if you are working on two different machines and need to keep the code up to date.

Final thoughts

I'm pretty sure there are quite a few redundancies in place using other, more fleshed out, processes, but as I keep a close eye on what is going on and know what should happen, this suffices (for now).

Using a similar "cautious" method, I could clone into a separate directory, make sure it has worked correctly, then rename the folder (deleting the old/current).

But hey, this is just a shot in the dark to try and make my life a bit easier.

After all, it's better to expend more energy to automate a process than have to do it manually all the time.

A little history

I run a GitLab server from home. It's quite cost-effective, as the hardware requirements are rather low, and I can easily upgrade the storage space as and when I need it.
It's also a lot cheaper than hosting a server or VPS elsewhere.

However, the problem lies in accessing the server from somewhere outside of my home. My work takes me all across the country, and that means needing access to the repos remotely.

What I usually did was get the server's public IP and either use that, or update DNS records manually.

Using DigitalOcean, I set up an A Record with the public IP so I can access the server via a subdomain; it's quite handy, but the record would need to be updated whenever my IP changes at home (because a static IP is very costly).

So I created a nice little bash script to run whenever I've lost access due to the IP changing.

The script

#!/bin/bash

# Grab the current public IP
IP=$(curl -s https://ipinfo.io/ip)

# Update the existing A Record on DigitalOcean with that IP
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [API KEY]" \
  -d '{"data":"'"${IP}"'"}' \
  "https://api.digitalocean.com/v2/domains/[DOMAIN]/records/[RECORD ID]"

To begin with, that's the whole script.
All you need to do is copy it into a new .sh file (I've called mine DynDNS.sh) and make a few changes, including setting the permissions to 744 (chmod 744 DynDNS.sh).

You need your API Key, Domain and Record ID for this script to work. Simply switch out the values (e.g. "[DOMAIN]" would become "classicniall.co.uk").

Getting API Key

It's very straightforward. Go to DigitalOcean API Tokens, and generate a new token. I'd recommend saving it in a secure file somewhere.
Copy the Token and replace [API KEY] with it.

Getting Domain

This one is rather straightforward: it's the domain that the subdomain hangs off. So, if you want "repo.domain.com", enter "domain.com" here.

Getting the Record ID

Okay, this one is not so straightforward.
Just run this command:

curl -X GET -H 'Content-Type: application/json' -H 'Authorization: Bearer [API KEY]' "https://api.digitalocean.com/v2/domains/[DOMAIN]/records"

That will spit out all the records for the selected domain. What you need to do is find the A Record for your subdomain, and copy the id field (yes, it's lowercase).

NB: I ran this command in Git Bash, so the results weren't formatted, but it's the only way I knew to get the ID for the record.
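If your machine happens to have jq on it, you can skip the eyeballing; something like this should filter the response down to just the id (assuming the subdomain is "repo"):

```shell
# Same request as above, piped through jq to pull out just the id of
# the A Record for the "repo" subdomain
curl -s -X GET \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer [API KEY]' \
  "https://api.digitalocean.com/v2/domains/[DOMAIN]/records" \
  | jq '.domain_records[] | select(.type == "A" and .name == "repo") | .id'
```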

If you don't have an A Record for the subdomain, you will need to manually create the A Record first (if you want to follow my guide), and run the command again.

Running the command

So now that the script has been created, modified and with the correct permissions, you should be able to run it by calling the file itself. In this instance, I simply use "./DynDNS.sh" to run the command, and it executes rather quickly (probably about a second).
I've tested it quite a few times, and it always grabs the correct public IP and updates the record.

Creating a Bash Command

To make things easier, you may want to create a Command in order to run the script. Sure, it may not take much to type out the command above, but it would be easier to type in a single word for the command. I use DynDNS.

Go into your ~/.bash_aliases file and simply add:

alias DynDNS="./DynDNS.sh"

alias marks it out as an alias, DynDNS is the command word, and everything between the speech marks is the command itself. Running DynDNS basically says "run this file". (Note: as written, the alias only works from the directory the script lives in; use the full path if you want it to work from anywhere.)

Save it, log out of the server/session and log back in, and the alias will work (or run source ~/.bash_aliases to load it without logging out).

Taking things further

There is definitely room for improvement here.
For starters, it still needs to be run manually whenever the home server's public IP changes. What we need is to make it check for a change itself.

One way is to set up a cron job or scheduled task to run the file at a set interval. That wouldn't be the most efficient, but it would be automated. Or we could take things further and run a check to see whether the public IP has changed (because it could happen at any moment).

If it changes, then run DynDNS.
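For the cron option, a single crontab entry (added via crontab -e) pointing at the script would do it; the path here is an assumption:

```shell
# Run the update script every five minutes
*/5 * * * * /home/[user]/DynDNS.sh
```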

However, I haven't wired this up yet, but it is on my radar.
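For what it's worth, here's a rough sketch of how that check might look - a wrapper that remembers the last IP it saw and only calls DynDNS.sh when it differs (the file locations are assumptions):

```shell
#!/bin/bash
# check_ip.sh - only update DNS when the public IP has actually changed

# Where we remember the last IP we pushed to DigitalOcean
CACHE_FILE="$HOME/.last_public_ip"

# needs_update OLD NEW -> succeeds (exit 0) only when the two differ
needs_update() {
  [ "$1" != "$2" ]
}

CURRENT_IP=$(curl -s https://ipinfo.io/ip)
LAST_IP=$(cat "$CACHE_FILE" 2>/dev/null)

if needs_update "$LAST_IP" "$CURRENT_IP"; then
  ~/DynDNS.sh                         # push the new IP to DigitalOcean
  echo "$CURRENT_IP" > "$CACHE_FILE"  # remember it for next time
fi
```

Point the cron entry at this wrapper instead of DynDNS.sh directly, and the record only gets touched when it needs to be.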

Classic Niall Limited © 2022
