Oh boy, now this has been truly a pain.

I was first introduced to the idea of Continuous Integration/Continuous Deployment all the way back in 2018, and I've only just been able to create my own runner.

For context, I self-host GitLab on a private server.
I don't use any other tools like Jenkins, or sites like DeployHQ; instead I use the feature provided by GitLab itself. Why muddy the waters?

I've worked with agencies which do use those services, and they made things look so easy; just commit the changes to whichever branch and boom, the site is updated in a matter of minutes (depending on the size).

But there is a lot to learn about Gitlab pipelines, runners, and CI/CD.

Where to begin?

Firstly, you need to make sure that the service is enabled on your GitLab.
I believe it is by default, but if you are like me and found that warning message on every project annoying ("hey, you've got this thing but it doesn't work - do something about it"), you may have found a way to disable it site-wide.

I believe I followed these instructions on how to enable or disable GitLab CI/CD, and opted for a project-by-project basis, as I also manage other non-DevOps work there.

Once that is in place, you should notice a new CI/CD menu at the side.
If it's already there, great! GitLab is rather forthcoming with its alerts and messages, so what you need to do next will be obvious.

What to do next?

So you click through and it says "create a pipeline".
The pipeline is defined in a file that is part of the repo (.gitlab-ci.yml); inside it is a list of instructions for the runner to carry out once certain conditions are met.

So you can set some instructions for the master branch, a production branch, developer branches, branches with certain tags, etc.
And the commands can range from running SSH commands and scripts to grabbing variables, and probably a lot more (I've only needed the simplest tasks).

Essentially, all my runners do is check whether their branch has been updated, and copy the files to whichever server they need to go to. This makes it easier for me, because a separate machine does all the "sitting and waiting" instead of my computer (which I need for actual work).
There is more to it than that, such as backups, file permissions, etc., but the point is to automate the process.

What is this "Runner"?

Right, yes, runners.
A runner is a service that acts as a user for you. Think of it like a computerised servant, a robot butler (hey, "Jenkins"), that will casually wait around, checking on their repos, and once something happens, they will read through the list of instructions and carry them out.

Why hire someone to do that when you can build a machine to do it for you?

It is recommended that you have a separate machine to do this for you (it doesn't take a lot of processing power), but if you want to use the same server that the repo is on, or even your own computers, you can too.
You just need to make sure you turn on the runners/activate the service.

I found these instructions on installing a GitLab Runner worked well, although I had issues getting it to work on Windows and PowerShell.
So I used Ubuntu (I am quite familiar with Linux now... I think I actually prefer it).
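
To give a rough idea of what registering looks like on Ubuntu (the exact prompts may vary with your GitLab version and install method, so treat this as a sketch):

# Register the runner with your GitLab instance (interactive prompts follow);
# you'll be asked for the GitLab URL, the registration token from
# Settings > CI/CD > Runners in the project, a description, tags, and an executor
# (the "shell" executor is enough for simple deploy jobs like mine).
sudo gitlab-runner register

# Make sure the service is up and running
sudo gitlab-runner start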

Here's a bit of free advice: make sure you give the appropriate permissions to the machines you use.
I was stuck for such a long time because I didn't think about the SSH Keys.

Creating the YAML file

This is the list of instructions I mentioned earlier.

For me, I simply wrote down what I usually do when deploying a site.
Some transposing is needed (it's not quite so straightforward), and there are certain rules you need to follow.

There are quite a few tutorials and guides out there, and with a great amount of luck, I found that this offers the basics for what I need (if you've got any suggestions, please, let me know).
Oh, and in the example below I use GitLab variables to store the keys I need for remote access. I used the ssh-keyscan command to find the host keys (if you don't have them saved elsewhere); see the note after the example.

# What stages we have (think of this like tasks)
stages:
  - deploy

# The job name (you can have multiple)
deploy_staging:
  # Which stage this job belongs to
  stage: deploy

  # The things we actually want to do (aka the list of instructions)
  script:
    # Load the private key into the SSH agent so the Runner can reach the remote server
    - eval $(ssh-agent -s)
    - echo "$SSH_REPO_KEY" | tr -d '\r' | ssh-add - > /dev/null
    # Add the staging server's host key so SSH doesn't prompt us
    - mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    - echo "$SSH_KNOWN_STAGING" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts

    # SSH commands; you can have multiple, with different commands
    - ssh -p [port number] [user]@[ip] "[the command you would normally use manually]"

  # Only run on this branch, tag, etc.
  only:
    - [branch]
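
A quick note on those variables: $SSH_REPO_KEY holds the private key the runner uses, and $SSH_KNOWN_STAGING holds the staging server's host key entry. If you don't already have the host key saved somewhere, running ssh-keyscan against the server (same placeholders as above) will print the lines to paste into the GitLab variable:

ssh-keyscan -p [port number] [ip]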

And that is about it.
You can add so many more tasks, jobs, instructions, all sorts. I would only recommend keeping it simple, as it does use processing power on both the Runner and the remote server; you can run out of RAM.

If you've got any questions, suggestions, or just want to chat, you can email me using the form below, or find me on Twitter (I'm usually hanging out there).

A little secret

Okay, today I'm going to tell you a nice little method I've been wanting to try out for ages.

If you are unaware of CI/CD, Jenkins, Auto Deployment, DeployHQ, etc., they're essentially methods to push changes to a site with as little interruption as possible.

Instead of you manually uploading via FTP, these services use a number of methods to update your live site while avoiding downtime.
This way the transfer happens between the two servers, and not my computer, which I am actively using and which can have upload issues.

This doesn't stop programmer errors, but it checks that the files don't break the site (and makes sure they all get uploaded).

What are the options

There are a number of options, as mentioned previously. I've had the most success with DeployHQ, but that is a paid-for service.
GitLab has a nice little feature built-in, but I've not been able to (con)figure it out*.

* see, that's a joke because you need to configure the settings

So, after scratching my head for a while, I had an idea. I understand Git pretty well, so why not just pull from the repo to the Live site?

Well, I've just tried it now, and it works!

Method

Okay, so I'd had my site live for quite a while, but I needed it to use the Git repo. The easiest way was to make a copy of my site's directory and give it another name as a backup.

Then, once I was happy, I cloned the repo to the server. For this instance, I cloned it into a different name (because safety), but you could just clone it directly down if it's a fresh site (or you are feeling daring).

Once cloned, all the files will be on the server and good to go (or ready to be renamed to the correct directory name if you are being cautious like me).

Now whenever I make changes to my site, I can test locally, see that they work, and push up as normal.
I then log into my server and simply fetch the changes and pull them down for the site.

The process is no different than if you are working on two different machines and need to keep the code up to date.
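
To put the whole flow into commands, here is a minimal sketch; the paths and repo URL are made up, so swap in your own:

# One-time setup on the web server
cd /var/www
mv example example-backup
git clone git@gitlab.example.com:me/site.git example

# Then, for each update
cd /var/www/example
git fetch
git pull    # pull on its own is enough; fetch first if you want to review what's coming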

Final thoughts

I'm pretty sure there are quite a few redundancies in place using other, more fleshed out, processes, but as I keep a close eye on what is going on and know what should happen, this suffices (for now).

Using a similar "cautious" method, I could clone into a separate directory, make sure it has worked correctly, then rename the folder (deleting the old/current).

But hey, this is just a shot in the dark to try and make my life a bit easier.

After all, it's better to expend more energy to automate a process than have to do it manually all the time.

A little history

I run a GitLab server from home. It's quite cost-effective, as the hardware requirements are rather low, and I can easily upgrade the storage space as and when I need it.
It's also a lot cheaper than hosting a server or VPS elsewhere.

However, the problem lies in accessing the server from somewhere outside of my home. My work takes me all across the country, and that means needing access to the repos remotely.

What I usually did was get the server's public IP and either use that, or update DNS records manually.

Using DigitalOcean, I set up an A Record with the public IP so I can access the server via a subdomain; it's quite handy, but the record would need to be updated whenever my IP changes at home (because a static IP is very costly).

So I created a nice little bash script to run whenever I've lost access due to the IP changing.

The script

#!/bin/bash

# Grab the server's current public IP address
IP=$(curl -s https://ipinfo.io/ip)

# Update the existing A record on DigitalOcean with that IP
curl -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer [API KEY]" -d '{"data":"'"${IP}"'"}' "https://api.digitalocean.com/v2/domains/[DOMAIN]/records/[RECORD ID]"

To begin with, that's the whole script.
All you need to do is copy it, create a new .sh file (I've called mine DynDNS.sh), make a few changes, and set the permissions to 744.
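
For example, from the directory you saved it in:

chmod 744 DynDNS.sh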

You need your API Key, Domain and Record ID for this script to work. Simply switch out the values (e.g. "[DOMAIN]" would become "classicniall.co.uk").

Getting API Key

It's very straightforward. Go to DigitalOcean API Tokens, and generate a new Token. I'd recommend saving it in a secure file somewhere.
Copy the Token and replace [API KEY] with it.

Getting Domain

I think this one is rather straightforward: it's the domain that your subdomain sits under. So, if you want "repo.domain.com", then enter "domain.com" here.

Getting the Record ID

Okay, this one is not so straightforward.
Just run this command:

curl -X GET -H 'Content-Type: application/json' -H 'Authorization: Bearer [API KEY]' "https://api.digitalocean.com/v2/domains/[DOMAIN]/records"

That will spit out all the records for the selected domain. What you need to do is find the A Record for your subdomain, and copy the id field (yes, it's lowercase).

NB: I ran this code in Git Bash, so the results weren't formatted, but it's the only way I knew how to get the ID for the record.
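
If you happen to have jq available, you can make that output much easier to read, for example (same placeholders as before):

curl -X GET -H 'Content-Type: application/json' -H 'Authorization: Bearer [API KEY]' "https://api.digitalocean.com/v2/domains/[DOMAIN]/records" | jq '.domain_records[] | {id, name, type, data}'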

If you don't have an A Record for the subdomain, you will need to manually create the A Record first (if you want to follow my guide), and run the command again.

Running the command

So now that the script has been created, modified and given the correct permissions, you should be able to run it by calling the file itself. In this instance, I simply use "./DynDNS.sh" to run the command, and it executes rather quickly (probably about a second).
I've tested it quite a few times, and it always grabs the correct public IP and updates the record.

Creating a Bash Command

To make things easier, you may want to create a Command in order to run the script. Sure, it may not take much to type out the command above, but it would be easier to type in a single word for the command. I use DynDNS.

Go into your .bash_aliases file and simply add:

alias DynDNS="./DynDNS.sh"

alias marks it out as an alias, DynDNS is the command word, and everything between the speech marks is the command itself. Running DynDNS basically says "run this file". Note that the path is relative, so the alias only works from the directory the script lives in; use the full path if you want to run it from anywhere.

Save it, log out of the server/session and log back in, and the alias will now work (or run "source ~/.bash_aliases" to load it straight away).

Taking things further

There is definitely room for improvement here.
For starters, this still needs to be run manually whenever the home server's public IP changes. What we need is something more dynamic that checks for a change itself.

One way is to set up a cron job or scheduled task to run the file at a set interval, but that wouldn't be very efficient (although it would at least be automated). Or we could take things further and first check whether the public IP has actually changed (because it could happen at any moment).

If it changes, then run DynDNS.

I don't yet know exactly how to do this, but it is on my radar.
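
For anyone who wants to experiment, here is a rough, untested sketch of one approach; it assumes the update script lives at ~/DynDNS.sh and caches the last-seen IP in ~/.last_ip (both paths are made up):

#!/bin/bash
# Only update the DNS record when the public IP has actually changed
CURRENT_IP=$(curl -s https://ipinfo.io/ip)
LAST_IP=$(cat ~/.last_ip 2>/dev/null)

if [ "$CURRENT_IP" != "$LAST_IP" ]; then
    ~/DynDNS.sh
    echo "$CURRENT_IP" > ~/.last_ip
fi

Save that as its own script and point a cron entry at it (e.g. every five minutes with */5 * * * *), and the record should quietly keep itself up to date.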

Where to begin

This may be related to my previous post about Regaining Access to DigitalOcean Droplets, but I think I'll write up the whole experience, as there seems to be many different issues I encountered (almost randomly).

I stated in that post that I had to reformat my main PC and set up the SSH Keys again in order to access the droplets. But it turns out there was a lot more going on.

As all good developers should, I encrypt my websites with an SSL certificate; it's just good and common practice nowadays, and it increases the security of the sites. This means reviewing and editing file permissions, and setting up the correct keys and passwords to gain access to the server.

SSL Certificates

There are numerous different providers out there to certify your site; some are free, some are paid, and some may even come from your hosting provider. But that's not what I'm getting into today.

What I'd like to tell you is to make sure that your server is pointing to the correct directory for your site.

It sounds simple, I know, but I was having issues with redirecting my site to the secure version, and it turned out to be a simple matter of my Apache2 config files pointing to the wrong directory.
It was a quick fix: some tweaking in the config files to make sure every attempt to access the site went to the correct place.
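
If you want to check where Apache thinks your sites live, something along these lines works (the paths assume a standard Ubuntu/Apache2 layout):

# See which directories the enabled vhosts point at
grep -R "DocumentRoot" /etc/apache2/sites-enabled/

# After editing a config, check the syntax and reload
sudo apache2ctl configtest
sudo systemctl reload apache2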

Granted, if you're not one who plays around with the config files, you should be able to get support from your service provider (if not, feel free to get in touch and I'll see if I can point you in the right direction).

File Permissions

More often than not, my clients want "WordPress sites", and sometimes it requires working with other developers or hosting providers who may or may not have things set up correctly. This can be a nightmare at times.

After a lot of research, testing, and hair pulling, I think I've got the access permissions set up in a way which is secure and functional.

Ownership of the files

Most servers use www-data for the user and group for their sites. Let's assume this is the same for you.
Your "website" folder needs to have the owner and group set to www-data. This will allow your website to own these files and folders, to view and make editions (if you've allowed it).

You can limit www-data to view only, but then how could you upload files via a media library, for example?

Read/Write Permissions

Here comes the more infuriating part. The read/write permissions for files and folders.

Again, in your website's folder, I would recommend that the permissions are set to 755 for folders, and 644 for all files within the main site directory.

It would also be beneficial to "hide" any config files (the files that hold connection details) from the public. The www-data user will still be able to access them, but not some randomer who's stumbled across your site.
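
Putting the ownership and permission advice together, here is a minimal sketch; /var/www/example is a made-up path, and wp-config.php stands in for whatever config file holds your connection details:

# Give ownership of the site to the web server's user and group
sudo chown -R www-data:www-data /var/www/example

# 755 for folders, 644 for files
sudo find /var/www/example -type d -exec chmod 755 {} \;
sudo find /var/www/example -type f -exec chmod 644 {} \;

# Lock down the config file holding connection details
sudo chmod 600 /var/www/example/wp-config.php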

In Summary

This all came about because I wanted to make my site secure with a certificate. I never gave it much thought for myself, because I was just running a "portfolio site", but I wanted a way for people to communicate with me.

So I had a contact form, which quickly became bombarded with spam, so enter reCAPTCHA, which then needed to be on a secure site.
So I installed a certificate, only to find it wasn't working, that I couldn't make changes, and what-not.

None of these were major issues, but added all up, they caused a lot of work.

There's also the fact that I couldn't use my IDE to access my server using the SSH Keys; I in fact had to convert the key into a PPK file and use that!
It wasn't a huge problem, just an inconvenience caused by a bug with PhpStorm.

Just some FYI there.

I'm pretty sure I'll find more bugs as time goes on; from new and old features alike, but hey, who said the life of a developer was boring?

Introduction

Running a cost-effective business is always important, and for a few years I had a reseller account with a UK company.
They were a brilliant company to work with, always on hand to provide support, but they couldn't provide me with exactly what I needed, nor could I justify the large expense when I couldn't recoup it from my clients.

I had a few clients who used/needed dedicated servers, so I had gained a fair amount of experience through this, and after some research and trial runs, I finally settled on DigitalOcean because it was the easiest and cheapest service out there.

Disclaimer

I know there are cheaper and "better" providers out there, but I tried so many and a lot had hidden costs, support was terrible, and quite frankly made working with the servers more of a chore. DigitalOcean, in my opinion, just made it a whole lot easier.

I'm not paid for this article, but if you do want to set up your own DigitalOcean account, you can use this link and I'll get a kickback from it.

Or contact me and I'll manage it all for you.

What was the problem

After trying to help a friend fix their computer, I think I may have inadvertently infected my own computer with the same problem. It could have been a coincidence, but I had to reinstall the OS on my main system, and with that, I lost the SSH Keys I used to access my servers.

So, here I am... locked out of my own systems, and the only access I had was through an in-browser console/terminal, which was sluggish and not very reliable.

My services were up and everything was running fine, but I could not access the files on the server; it wasn't a good situation.

DigitalOcean has a section where you can paste your SSH Keys, so whenever you start a new Droplet, they are automatically added to the Droplet.

DigitalOcean also, by default, allows you to log in via a username and password, which a) isn't the most secure method, and b) needs to be reset.

Now, I'm not sure if I had forgotten my password or if I was ever given the chance to set one, but when I lost access using SSH, I needed to reset the password in order to log into the in-browser console.

You may be thinking "wait, why did you lose access if you can log in with a username and password?"; well, simply put, I disabled that method of access to increase security, so the only way in was using SSH Keys.

So, the solution?

After hours of research and trial and error, I came across this article which helped me figure out a way to gain easy access to the droplets again.

The in-browser console was not fun to use; practically unusable because of the glitches and bugs. So the main focus was to enable access remotely once again.

Here are the steps I followed:
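
Roughly, they boil down to temporarily allowing password logins over SSH; a sketch, assuming a standard Ubuntu droplet and run from the in-browser console after resetting the password:

# Find PasswordAuthentication in the SSH daemon config and set it to yes
sudo nano /etc/ssh/sshd_config

# Reload SSH so the change takes effect
sudo systemctl reload ssh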

This will allow you to log in remotely using the username and password. Now I can copy and paste my SSH Key(s) directly into the server.
This was impossible using the in-browser console as the key was not copied over correctly.
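
One way to get the key onto the server from your own machine, for example, assuming your public key is at ~/.ssh/id_rsa.pub:

ssh-copy-id -i ~/.ssh/id_rsa.pub [user]@[droplet ip]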

You will now be able to log into the Droplet remotely with the SSH Key, even once we disable access via password again.

To do that, follow the first set of steps, but this time set PasswordAuthentication to No.
Once reloaded, you should be able to access the Droplet again, securely, using the SSH Key only.

Why restrict access with SSH Keys?

Although no solution is perfect, disabling password authentication stops anyone who knows the password from gaining access to the server. I believe that DigitalOcean is secure enough to stop people from gaining access via the in-browser console, but say someone finds out my password for a Droplet; they could simply log in from anywhere and do anything.

SSH Keys create a link between the computer and the server, so as long as there is a link there, the server will allow access from that computer. If there is no link, access is denied.
So, even if someone gets the password, there is no link between their computer and the server, and they still can't get in.

Intro

A brief disclaimer; I didn't like Gutenberg when it was first announced. I honestly thought it was going down the route of all the other WYSIWYG editors on WordPress.
And for a while, that's what it looked like.

However, now that it has been released and I've had a chance, albeit brief, to play with it, I am pleasantly surprised.

WYSIWYG

In my opinion, Gutenberg simply prettifies the whole page/post writing process. It's no secret that all previous versions of WordPress essentially pulled from the design and mechanics of document processors (i.e. Microsoft Word, etc.), and that is fine; it brings familiarity and makes writing so much simpler.

With Gutenberg, they've done away with the traditional "kitchen sink" and now everything is in "blocks", where each has its own formatting area.

So, instead of writing up a huge article filled with different formats, it's essentially divided up into more manageable chunks. Rather than copying and pasting this paragraph somewhere else on the page, I can simply drag and drop it; a small detail, but one that comes in handy.

Not only that, but whenever I press the Return key to start a new paragraph, it essentially creates a new block. In coding, this would simply be the paragraph tag <p>. If I wanted to create a header, I could simply format that block to be a heading <h1> to <h6>, or anything else for that matter.

Overall, I prefer this method to the previous version because it does away with messy custom HTML scattered all over an article and brings a form of uniformity to the site.

Development

Now here's what I love the most about Gutenberg: it finally tackles the problem of all those third-party WYSIWYG editors. You know the ones I'm talking about, where you actually need a level of skill in order to use them.

All the drag and drop features, having images and articles colliding with one another, the padding and margins screwing up the whole site. It's safe to say that I am not a fan of them, and any developer worth their salt would agree with me. They're messy and complicated and often need more work than they're worth. I had one client spend a whole week trying to write an article using one such editor.

I believe Gutenberg, as previously stated, will simplify the whole process, and you won't get caught up with all those complicated layouts and formats, having to tweak every little block to be "just right".

Speaking with other developers, it looks like it will be a very straightforward process to design and develop your own custom blocks too.
I've yet to try this, so I cannot say for certain how, but from what I've seen it will be a simple matter of creating a block yourself whilst writing an article and then saving it as a template, called a "Reusable Block".

I'm assuming that, along these lines, my fellow developers and I can create custom blocks built straight into the template, such as a "Call to action" block. Currently my go-to method is either an option to display some custom HTML or using shortcodes.
I believe creating custom blocks will make this much more manageable and reusable throughout the site.

Templating

As a developer, I often use Custom Post Types and templates. Since switching over to WordPress 5, I've noticed that CPTs don't use Gutenberg by default, but the old editor instead; which is great because it doesn't break all the work I've done previously!

Kudos to the team at WordPress for this.

I'm sure there is a way to enable it, but I've not had the need to, so I won't expend energy trying to figure it out just yet; all future development will include it as an option, though.
It's a tricky one because not everyone has moved over to WordPress 5, so as with anything, whatever projects I carry out need to be "backward compatible". I'm hoping it's not a huge mess.

Conclusion

In conclusion, I've come to realise that my writing style has not changed since I was in high school.

Apart from that, I'd recommend people look into using WordPress 5. If it's a new project, jump straight in; if you're upgrading, use a dev site first.

Gutenberg is a very handy feature, and one which I am surprisingly happy with. Not only is it "the future" of WordPress, but it makes writing a more pleasant experience.

If you really don't like Gutenberg, there are plugins out there which will enable you to use the old editor, but as for me, I think I'll be looking forward.

Let me tell you a bit about hosting.
It should be easy.

Recently I was testing out a VPS provider (who I won't name because they don't deserve the advertising), and it has been the biggest regret of the past few months.
Not only was the service appalling, but they added a lot of hidden charges and extra costs, far exceeding their advertised pricing.

After weeks of battling them, trying to get the information to understand what's going on, I decided to just close my account with them and move on to a far better service.

What is "Hosting"

Hosting is essentially a computer which stores your website.  Just like your computer holds all your files for you to view and use, that is what a server is: a public computer for people to view the files.
It just so happens that these files are used to make a website.

However, "hosting" comes in many flavours.

Dedicated Server

We might as well start with the big-boy in the room.

Buying a dedicated server is essentially buying a whole computer to run your services from.  It's as simple as that.

These machines are designed to pack some power, and to be used for dedicated purposes.  As discussed in my previous post, my home server is a dedicated server; one machine dedicated to doing a specific task.  You can run the server from home, from your office, or from a server farm/company, but the idea is that it is one machine controlled by you and to do whatever you want with.

It is yours.

Shared Hosting

Now, the flip side (and a far cheaper option) is shared hosting.

To put it simply, this is a dedicated server (or servers) where you share the resources with other people.

This type of hosting has multiple sites running from it; think like a computer with many users who use it all at the same time.  Resources are allocated to each user/site.

This is usually very low-cost hosting, providing you with a friendly control panel and support system, although a vulnerability in one site or in the server itself can mean that everyone's sites suddenly disappear.  What you can do is also heavily restricted; certain apps or programs will never be possible.

If money is really tight, or you're just getting into the web-game, this would be a good option, but I much prefer the next.

VPS

VPS, or Virtual Private Server, is a nice combination of Dedicated and Shared.  It is still one machine, but each user is given resources that are specifically theirs.  The main server creates little mini-servers inside of it, which are essentially their own standalone machines.  This keeps the cost of running a server down (because there are not thousands of tiny computers running web services).

But it essentially is just that.  You could buy several small, lightweight computers to run a different site off each, but you'd need many cables.
The easiest thing is to get one powerful machine and split it up into smaller machines; similar to what I attempted with Docker (I should really have another crack at that).

This is essentially what I offer my clients, as I know what a pain it can be to share resources, and the cost of running a "basic site" is not much higher than with Shared Hosting.
Plus, who doesn't like having their own unrestricted server?

Cloud Hosting

Okay... now... hmm...

Cloud hosting is a series of dedicated machines, which create a massive hosting platform.

Think of all your office computers linked up to one another to create one super computer.  This is what Cloud Hosting is, to an extent.

Hosting from here can vary, but essentially provides you with a VPS which is scalable, so if you need more space, memory, or speed, the hosting scales it up to suit your requirements.

You're probably thinking this is the best option, right?  Well, it kinda is, having the resources there to meet your requirements, and not having to worry about running out of space, or whatever, except this comes at a heavy cost.

From my experience, Cloud Hosting is one of the most expensive options, due to the nature of it; resources are at a premium, because who knows when they'll be gone; plus think of the cost just to run all those machines, and keep them maintained.

However, the redundancy is top quality here: if the server your site is currently on fails, it automatically switches to a new one, without you having to worry.
With all our other options, if the server goes offline, your site goes offline (although that is becoming less common as technology progresses).

So, what should you choose?

Personally, I will always choose a VPS or a Dedicated Server, depending on your requirements.  Most sites I manage use a VPS, with a few larger, more resource-heavy sites using a dedicated server or two.

You can also choose to manage your server yourself, or find someone *cough* to manage it for you.

Generally you can find help and support from your host, and once everything is ready, it's fairly straightforward; just hope you don't break something, as some of my clients have done in the past.

If you want to talk to me about hosting options, do feel free to drop me an email and we can discuss your requirements.
We can find the right plan for you, and get you online in next to no time.

Oh, it'd help if I left you my email: niall@classicniall.co.uk

I run my own home server

It's a powerful enough little machine.

Currently it runs Ubuntu 18.04 on a dual-core Intel Celeron processor, 8GB of RAM and a 500GB 2.5" HDD.
What does all that mean?  Nothing, really.  Just wanted to share it.

The main reason I got this machine was to host my own repos for all the work I do.  Version control is such a lifesaver, and it can store a whole range of different information.  I use it to store my web work and my game work, and it makes it easier to work on different computers.

But I'm sure that's for another post.

Around Christmas time, 2018, I was thinking about growing the business's infrastructure and looking at hosting my own "Office" software; I found OnlyOffice, which is a G Suite/Office 365 alternative.  I was really excited at this prospect, being able to host and control all my own resources, but there was one issue I faced from the outset; I only had the one server.

This is where Docker came in

What Docker does is create containers, which act like their own mini-servers, similar to how virtual machines work.
Essentially, you can quickly spin up a container with whatever you need and start working on it; you can even have multiple containers running at the same time.
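
For example, a throwaway container can be spun up in one line (nginx here is just an arbitrary image for illustration):

# Run an nginx container in the background, mapping host port 8080 to the container's port 80
docker run -d --name test-web -p 8080:80 nginx:alpine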

So this is what I'd try to do with my home server: use Docker to host my repos and my office.

Except it's not that straightforward

The thing with Docker is that it uses a whole bunch of different ports, and it isn't so straightforward to get the ports to play well with one another.

I am used to using physical machines, with their own IPs, ports, and whatever else, so when I am shoe-horned into using a shared IP:Port situation, and those ports not being the default, it irks me most troublesome (read: It really annoys me).
That's why I enjoyed running dev sites on VirtualBox, because it was like having my own server there.  I guess that is essentially like a VPS.

This ultimately led me down the rabbit hole of finding a way to share ports, or just host multiple containers.

Let me introduce Compose

Compose (or Docker Compose for the sake of argument) is essentially a way to set out all your container configuration in a file, so rather than typing it all out each time, you put it into a YML file and just tell Compose to run that.  It's very handy, and I did indeed fall in love with it once I got it to work.
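
As a tiny illustration of the idea (the service and ports are made up, and depending on your Docker version the command is docker-compose or docker compose):

# Write the configuration once...
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# ...then bring everything up in the background with one command
docker-compose up -d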

This also helped a great deal with sharing ports, running multiple "sites", and for a while I thought it would answer all my problems.

But after two or three months of pure struggle, I decided to give up.  Why?  It ultimately wasn't worth the hassle.

The biggest caveats

The main reason I tried all this was to run multiple services from one machine.  I wasn't expecting it to be such a hassle, as "I've done this type of thing before".  It was a great learning experience, and perhaps I'll come back to it, but the main reasons I gave up were simply the port situation and the dynamic IP.

I like to use SSH, and I couldn't directly tunnel into a container from my desktop PC, which is what I like to do.  There were some forum posts about how to do this, like creating a special user which does this, blah blah, but I had already wasted so much time on this, repeating the same stuff and even having to rebuild the server several times.  From that, I think I've learnt that I bought a dodgy SATA cable (not impressed).

But none of that compares to the fact that every so often, my IP changes.  The joys of not being able to have a static IP.  It's kinda essential to running a server.
Without a static IP, I'd have to manually change the IP address of the apps and services whenever it changed, and that just isn't on.

OpenVPN to the rescue

Except, not.

I thought that I could run a cheap VPS (costing like $2 USD per month) to be my VPN server, and have my home server tunnel into that.
It took a while, following some guides and tutorials (which rarely work IMO) and I managed to tunnel into the VPN server from my Desktop, but not my server... and I never could.

I also ended up needing to run my own Certificate Authority server, so that doubled the cost.

So my quick fire, low-cost solution just wasn't going to work.

What am I doing now?

So I've gone back to the basics, even more so.  I've got my server set up as previously stated, and running GitLab CE (well technically EE, but that's the Omnibus for you).

I'm only using one HDD now; before, I was running RAID 10, but again, that was just a hassle.  I've got 1TB of storage, so I don't think I'll run out any time soon.

I am still using G Suite Free, as I've been using it for years, and TBH, it's very useful.  I may return to the OnlyOffice option later on in life, as and when I need it (and have the resources available), but as I am still focusing on growing my business, where I am at right now suits me just perfectly.

Perhaps one day I will employ people and need a more robust infrastructure.
Luckily I enjoy SysAdmin.
