Oh boy, now this has been truly a pain.

I was first introduced to the idea of Continuous Integration/Continuous Deployment all the way back in 2018, and I've only just been able to create my own runner.

For context, I self-host GitLab on a private server.
I don't use any other tools like Jenkins, or sites like DeployHQ, but instead use the feature provided by GitLab. Why muddy the waters?

I've worked with agencies which do use those services, and they made things look so easy; just commit the changes to whichever branch and boom, the site is updated in a matter of minutes (depending on the size).

But there is a lot to learn about Gitlab pipelines, runners, and CI/CD.

Where to begin?

Firstly, you need to make sure that the service is enabled on your GitLab instance.
I believe it is by default, but if you are like me and found the warning message on every project annoying ("hey, you've got this thing but it doesn't work - do something about it"), you may well have found a way to disable it site-wide.

I believe I followed these instructions on how to enable or disable GitLab CI/CD, and opted to enable it on a project-by-project basis, as I also manage other non-DevOps work there.

Once that is in place, you should notice a new CI/CD menu at the side.
If it's already there, great! GitLab is rather forthcoming with their alerts and messages, so what you need to do next will be obvious.

What to do next?

So you click through and it says "create a pipeline".
This is a file that lives in the repo (the .gitlab-ci.yml file); inside it is a list of instructions for the runner to carry out once certain conditions are met.

So you can set some instructions for the Master branch, the Production branch, developer branches, branches with certain tags, etc.
And the commands can range from SSH commands to running scripts, grabbing variables, and probably a lot more (I've only needed the most simple tasks).

Essentially, all my runners do is check whether their branch has been updated, and copy the files to whichever server they need to go to. This makes it easier for me, because I have a separate machine to do all the "sitting and waiting" instead of my computer (which I need to use to work).
There is more to it than that, such as backups, file permissions, etc., but the point is to automate the process.

What is this "Runner"?

Right, yes, runners.
A runner is a service that acts as a user for you. Think of it like a computerised servant, a robot butler (hey, "Jenkins"), that will casually wait around, checking on their repos, and once something happens, they will read through the list of instructions and carry them out.

Why hire someone to do that when you can build a machine to do it for you?

It is recommended that you have a separate machine to do this for you (it doesn't take a lot of processing power), but if you want to use the same server that the repo is on, or even your own computer, you can.
You just need to make sure you turn on the runners and activate the service.

I found that these instructions on installing a GitLab Runner worked well, although I had issues getting it to work with Windows and PowerShell.
So I used Ubuntu (I am quite familiar with Linux now... I think I actually prefer it).
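For reference, the Ubuntu route looks roughly like this; it's a sketch based on GitLab's official apt repository, so double-check against the current install docs before relying on it:

# Add GitLab's official runner repository and install the service
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner

# Register the runner against your GitLab instance
# (it asks for the URL and registration token from the project's CI/CD settings)
sudo gitlab-runner register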

Here's a bit of free advice: make sure you give the appropriate permissions to the machines you use.
I was stuck for such a long time because I didn't think about the SSH Keys.
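What that means in practice is something like the following: a sketch that creates a key pair on the runner machine and puts the public half on the server it will deploy to. The file name, user, and address are placeholders, and in my set-up the private half then goes into a GitLab CI/CD variable, as in the example further down.

# On the runner machine: create a key pair with no passphrase, so the runner can use it unattended
ssh-keygen -t ed25519 -f ~/.ssh/runner_key -N ""

# Put the public key on the server the runner will deploy to
ssh-copy-id -i ~/.ssh/runner_key.pub deploy-user@your-server-ip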

Creating the YAML file

This is the list of instructions I mentioned earlier.

For me, I simply wrote down what I usually do when deploying a site.
Some transposing is needed (it's not quite so straightforward), and there are certain rules you need to follow.

There are quite a few tutorials and guides out there, and with a great amount of luck, I found that this offers the basics for what I need (if you've got any suggestions, please, let me know).
Oh, in the example below, I use GitLab CI/CD variables to store the keys I need for remote access. I used the ssh-keyscan command to find the host keys (if you don't have them saved elsewhere).
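For example, running something like this prints a host's keys ready to paste into one of those variables (the hostname and port here are placeholders):

# Print the staging server's host keys so they can be stored in a GitLab variable
ssh-keyscan -p 22 staging.example.com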

# What stages we have (think of this like tasks)
stages:
  - deploy

# The job name (can have multiple)
deploy_staging:
  # What task we want to do
  stage: deploy

  # The things we actually want to do (aka list of instructions)
  script:
   # Load the SSH key into the agent and trust the remote host so the Runner can connect
   - eval $(ssh-agent -s)
   - echo "$SSH_REPO_KEY" | tr -d '\r' | ssh-add - > /dev/null
   - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
   - echo "$SSH_KNOWN_STAGING" >> ~/.ssh/known_hosts
   - chmod 644 ~/.ssh/known_hosts

  # SSH Commands, can have multiple, and different commands
   - ssh -p[port number] [user]@[ip] "[the command you would normally use manually]"

  # Only run on this branch, tag, etc.
  only:
   - [branch]

And that is about it.
You can add so many more tasks, jobs, instructions, all sorts. I would only recommend keeping it simple, as it does use processing power on both the Runner and the remote server; you can run out of RAM.

If you've got any questions, suggestions, or just want to chat, you can email me using the form below, or find me on Twitter (I'm usually hanging out there).

A little secret

Okay, today I'm going to tell you a nice little method I've been wanting to try out for ages.

If you are unaware of CI/CD, Jenkins, Auto Deployment, DeployHQ, etc., they're essentially methods to push changes to a site with as little interruption as possible.

Instead of you manually uploading via FTP, these services use a number of methods to update your live site while avoiding down-time.
This way the transfer happens between the two servers, and not my computer, which I am using for other work and which can have uploading issues.

This doesn't stop programmer errors, but it checks that the files don't break the site (and makes sure they all get uploaded).

What are the options

There are a number of options, as mentioned previously. I've had the most success with DeployHQ, but that is a paid-for service.
GitLab has a nice little feature built-in, but I've not been able to (con)figure it out*.

* see, that's a joke because you need to configure the settings

So, after scratching my head for a while, I had an idea. I understand Git pretty well, so why not just pull from the repo to the Live site?

Well, I've just tried it now, and it works!

Method

Okay, so I have had my site live for quite a while now, but I needed it to use the Git repo. The easiest way was to make a copy of my site's directory and give it another name as a backup.

Then, once I was happy, I cloned the repo to the server. For this instance, I cloned it into a different name (because safety), but you could just clone it directly down if it's a fresh site (or you are feeling daring).

Once cloned, all the files will be on the server and good to go (or renamed to the correct directory name if you are being cautious like me).

Now whenever I make changes to my site, I can test locally, see that they work, and push up as normal.
I then log into my server and simply fetch the changes and pull them down for the site.

The process is no different than if you are working on two different machines and need to keep the code up to date.
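To put the whole method in one place, here's a rough sketch of the commands I'm describing; the paths and repo URL are made up, so swap in your own:

# Take a safety copy of the current live directory
cp -r /var/www/mysite /var/www/mysite_backup

# Clone the repo into a differently named directory, then swap it in
git clone git@gitlab.example.com:me/mysite.git /var/www/mysite_new
mv /var/www/mysite /var/www/mysite_old
mv /var/www/mysite_new /var/www/mysite

# From then on, deploying is just a case of logging in and pulling
cd /var/www/mysite
git fetch
git pull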

Final thoughts

I'm pretty sure there are quite a few redundancies in place using other, more fleshed out, processes, but as I keep a close eye on what is going on and know what should happen, this suffices (for now).

Using a similar "cautious" method, I could clone into a separate directory, make sure it has worked correctly, then rename the folder (deleting the old/current).

But hey, this is just a shot in the dark to try and make my life a bit easier.

After all, it's better to expend more energy to automate a process than have to do it manually all the time.

Where to begin

This may be related to my previous post about Regaining Access to DigitalOcean Droplets, but I think I'll write up the whole experience, as there seem to be many different issues I encountered (almost at random).

I stated in that post that I had to reformat my main PC and set up the SSH Keys again in order to access the droplets. But it turns out there was a lot more going on.

As all good developers should, I encrypt my websites with an SSL certificate; it's just good and common practice nowadays, and it increases the security of the sites. This also means reviewing and editing file permissions, and setting up the correct keys and passwords to gain access to the server.

SSL Certificates

There are numerous different providers out there to certify your site; some are free, some are paid, and some may even come from your hosting provider. But that's not what I'm getting into today.

What I'd like to tell you is to make sure that your web server is pointing to the correct directory for your site.

It sounds simple, I know, but I was having issues with redirecting my site to the secure version, and it turned out to be a simple matter of my Apache2 config files pointing to the wrong directory.
It was a quick fix: some tweaking in the config files to make sure every attempt to access the site went to the correct place.
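If you want to check the same thing yourself, something like this does it on a Debian-style Apache set-up (a sketch, not gospel):

# See which directory each enabled virtual host is actually serving
grep -R "DocumentRoot" /etc/apache2/sites-enabled/

# Test the config for errors, then apply any changes
sudo apache2ctl configtest
sudo systemctl reload apache2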

Granted, if you're not one who plays around with the config files, you should be able to get support from your service provider (if not, feel free to get in touch and I'll see if I can point you in the right direction).

File Permissions

More often than not, my clients want "WordPress sites", and sometimes it requires working with other developers or hosting providers who may or may not have things set up correctly. This can be a nightmare at times.

After a lot of research, testing, and hair pulling, I think I've got the access permissions set up in a way which is secure and functional.

Ownership of the files

Most servers use www-data as the user and group for their sites. Let's assume this is the same for you.
Your "website" folder needs to have its owner and group set to www-data. This allows your website to own these files and folders, and to view and edit them (if you've allowed it).

You can limit www-data to view only, but then how could you upload files via a media library, for example?
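Assuming your site lives in /var/www/mysite (yours will differ), setting that ownership is a one-liner:

# Hand ownership of the site's files and folders to the web server user and group
sudo chown -R www-data:www-data /var/www/mysite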

Read/Write Permissions

Here comes the more infuriating part. The read/write permissions for files and folders.

Again, in your website's folder, I would recommend that the permissions are set to 755 for folders, and 644 for all files within the main site directory.

It would also be beneficial to "hide" any config files (the files that hold connection details) from the public. The www-data user will still be able to access them, but not some randomer who's stumbled across your site.
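As a sketch of both points, again assuming /var/www/mysite and a WordPress-style wp-config.php, that works out as:

# Folders get 755, files get 644
sudo find /var/www/mysite -type d -exec chmod 755 {} \;
sudo find /var/www/mysite -type f -exec chmod 644 {} \;

# Tighten the config file that holds the connection details
# (www-data still owns it, so the site can read it; other users can't)
sudo chmod 640 /var/www/mysite/wp-config.php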

In Summary

This all came about because I wanted to make my site secure with a certificate. I never gave it much thought for myself, because I was just running a "portfolio site", but I wanted a way for people to communicate with me.

So I had a contact form, which quickly became bombarded with spam, so enter reCAPTCHA, which then needed to be on a secure site.
So I installed a certificate, only to find it wasn't working, that I couldn't make changes, and what-not.

None of these were major issues, but all added up, they can cause a lot of work.

There was also the fact that I couldn't use my IDE to access my server using the SSH Keys; I in fact had to convert the key into a PPK file and use that!
It wasn't a huge problem, just an inconvenience caused by a bug in PhpStorm.
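If you're curious, the conversion itself is a one-liner on Linux with the putty-tools package installed (the key path here is just an example):

# Convert an OpenSSH private key into PuTTY's .ppk format
puttygen ~/.ssh/id_rsa -O private -o ~/.ssh/id_rsa.ppk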

Just some FYI there.

I'm pretty sure I'll find more bugs as time goes on; from new and old features alike, but hey, who said the life of a developer was boring?

Introduction

Running a cost-effective business is always important, and for a few years I had a re-seller account with a UK company.
They were a brilliant company to work with, always on hand to provide support, but they couldn't provide me with exactly what I needed, nor could I justify the large expense when I couldn't recoup it from my clients.

I had a few clients who used/needed dedicated servers, so I had gained a fair amount of experience through this, and after some research and trial runs, I finally settled on DigitalOcean because it was the easiest and cheapest service out there.

Disclaimer

I know there are cheaper and "better" providers out there, but I tried so many and a lot had hidden costs, support was terrible, and quite frankly made working with the servers more of a chore. DigitalOcean, in my opinion, just made it a whole lot easier.

I'm not paid for this article, but if you do want to set up your own DigitalOcean account, you can use this link and I'll get a kickback from it.

Or contact me and I'll manage it all for you.

What was the problem

After trying to help a friend fix their computer, I think I may have inadvertently infected my own computer with the same problem. It could have been a coincidence, but I had to reinstall the OS on my main system, and with that, I lost the SSH Keys I used to access my servers.

So, here I am... locked out of my own systems, and the only access I had was through an in-browser console/terminal, which was sluggish and not very reliable.

My services were up and everything was running fine, but I could not access the files on the server; it wasn't a good situation.

DigitalOcean has a section where you can paste your SSH Keys, so whenever you start a new Droplet, they are automatically added to the Droplet.

DigitalOcean also, by default, allows you to login via a username and password, which a) isn't the most secure method, and b) needs to be reset.

Now, I'm not sure if I had forgotten my password or if I was ever given the chance to set one, but when I lost access using SSH, I needed to reset the password in order to log into the in-browser console.

You may be thinking "wait, why did you lose access if you can login with a username and password?"; well, simply put, I disabled that method of access to increase security, so the only way was using SSH Keys.

So, the solution?

After hours of research and trial and error, I came across this article which helped me figure out a way to gain easy access to the droplets again.

The in-browser console was not fun to use; practically unusable because of the glitches and bugs. So the main focus was to enable access remotely once again.

Here are the steps which I follow:
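Roughly speaking, it boils down to temporarily re-enabling password logins from the in-browser console; this is a sketch assuming a standard Ubuntu Droplet where you've already reset the root password from the DigitalOcean control panel:

# In the in-browser console, edit the SSH daemon config
sudo nano /etc/ssh/sshd_config

# Find the PasswordAuthentication line and set it to yes:
#   PasswordAuthentication yes

# Restart the SSH service so the change takes effect
sudo systemctl restart ssh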

This will allow you to log in remotely using the username and password. Now I can copy and paste my SSH Key(s) directly into the server.
This was impossible using the in-browser console as the key was not copied over correctly.
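With password logins back on, something like this gets the key onto the Droplet from your own machine (the key path and address are placeholders):

# Copy your public key into the Droplet's authorized_keys
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@your-droplet-ip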

You will now be able to log into the Droplet remotely with the SSH Key, even once we disable password access again.

To do that, follow the first set of steps, but this time set PasswordAuthentication to No.
Once reloaded, you should be able to access the Droplet again, securely, using the SSH Key only.
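Same file and same restart as before (still assuming the standard Ubuntu set-up); only the value changes:

# In /etc/ssh/sshd_config:
#   PasswordAuthentication no
sudo systemctl restart ssh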

Why restrict access with SSH Keys?

Although no solution is perfect, disabling password authentication restricts anyone with the password from gaining access to the server. I believe that DigitalOcean is secure enough as to stop people from gaining access via the in-browser console, but say someone finds out my password for a Droplet; they can simply log in from anywhere and do anything.

SSH Keys create a link between the computer and the server, so as long as there is a link there, the server will allow access from that computer. If there is no link, access is denied.
So, even if someone gets the password, there is no link between computer and server.
