I run a GitLab server from home. It's quite cost-effective, as the hardware requirements are rather low, and I can easily upgrade the storage space as and when I need it.
It's also a lot cheaper than hosting a server or VPS elsewhere.
However, the problem lies in accessing the server from somewhere outside of my home. My work takes me all across the country, and that means needing access to the repos remotely.
What I usually did was get the server's public IP and either use that, or update DNS records manually.
Using DigitalOcean, I set up an A record with the public IP so I can access the server via a subdomain. It's quite handy, but the record needs updating whenever my IP at home changes (because a static IP is very costly).
So I created a nice little bash script to run whenever I've lost access due to the IP changing.
#!/bin/bash
# Grab the current public IP, then PUT it into the DigitalOcean A record
IP=$(curl -s https://ipinfo.io/ip)
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [API KEY]" \
  -d '{"data":"'"${IP}"'"}' \
  "https://api.digitalocean.com/v2/domains/[DOMAIN]/records/[RECORD ID]"
That's the whole script. All you need to do is copy it into a new .sh file (I've called mine DynDNS.sh), make a few changes, and set its permissions to 744.
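For reference, setting those permissions from a terminal looks like this (assuming the file is in your current directory):

```shell
# Owner can read/write/execute; group and others can only read
chmod 744 DynDNS.sh
```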
You need your API Key, Domain and Record ID for this script to work. Simply switch out the values (e.g. "[DOMAIN]" would become "classicniall.co.uk").
Getting the key is straightforward: go to the DigitalOcean API Tokens page and generate a new token. I'd recommend saving it in a secure file somewhere.
Copy the Token and replace [API KEY] with it.
The domain is straightforward too: it's the domain the subdomain hangs off. So if you want "repo.domain.com", enter "domain.com" here.
Okay, this one is not so straightforward.
Just run this command:
curl -X GET \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer [API KEY]' \
  "https://api.digitalocean.com/v2/domains/[DOMAIN]/records"
That will spit out all the records for the selected domain. What you need to do is find the A Record for your subdomain, and copy the id field (yes, it's lowercase).
NB: I ran this code in Git Bash, so the results weren't formatted, but it's the only way I knew how to get the ID for the record.
If you don't have an A Record for the subdomain, you will need to manually create the A Record first (if you want to follow my guide), and run the command again.
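If scanning raw JSON is a pain, you can pipe the response through a pretty-printer. Python's built-in json.tool works anywhere Python is installed; this is just the record-listing call from above with the pipe added:

```shell
# Pretty-print the records so the "id" fields are easy to spot
curl -s -X GET \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer [API KEY]' \
  "https://api.digitalocean.com/v2/domains/[DOMAIN]/records" \
  | python3 -m json.tool
```

If you happen to have jq installed, appending `| jq '.domain_records[] | select(.name == "repo") | .id'` instead (swapping "repo" for your own subdomain) will pull out just the ID.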
So now that the script has been created, modified, and given the correct permissions, you should be able to run it directly. In this instance, I simply use "./DynDNS.sh", and it executes rather quickly (probably about a second).
I've tested it quite a few times, and it always grabs the correct public IP and updates the record.
To make things easier, you may want to create a shell alias to run the script. Sure, it doesn't take much to type out the command above, but a single word is easier still. I use DynDNS.
Open your ~/.bash_aliases file (create it if it doesn't exist) and simply add:
alias DynDNS="./DynDNS.sh"
alias marks it out as an alias, DynDNS is the command word, and everything between the quotes is the command itself. Note that "./DynDNS.sh" only works if you're in the directory containing the script; use the full path (e.g. "$HOME/DynDNS.sh") if you want it to work from anywhere.
Save it, then log out of the server/session and log back in (or source the file to load it immediately). The alias will now work.
There is definitely room for improvement here.
For starters, this still has to be run manually whenever the home server's public IP changes. What we need is a way to detect the change automatically.
One way is to set up a cron job or scheduled task to run the file at a set interval. That would automate things, but it wouldn't be very efficient; better would be to check whether the public IP has actually changed (because it could happen at any moment), and only run DynDNS when it has.
I haven't built that yet, but it's on my radar.
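For what it's worth, here's a rough sketch of how that check could look; the cache-file location, script path, and schedule below are all assumptions, so adjust them to your own setup:

```shell
#!/bin/bash
# CheckIP.sh -- only call DynDNS.sh when the public IP has actually changed.
CACHE="$HOME/.last_ip"                 # where we remember the previous IP
IP=$(curl -s https://ipinfo.io/ip)     # current public IP
LAST=$(cat "$CACHE" 2>/dev/null)       # last known IP (empty on first run)

if [ -n "$IP" ] && [ "$IP" != "$LAST" ]; then
    "$HOME/DynDNS.sh"                  # update the DNS record
    echo "$IP" > "$CACHE"              # remember the new IP
fi
```

A crontab entry such as `*/5 * * * * /home/youruser/CheckIP.sh` would run the check every five minutes; since it only touches the API when the IP has changed, the frequent schedule costs next to nothing.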
Running a business cost-effectively always matters, and for a few years I had a reseller account with a UK company. They were brilliant to work with, always on hand to provide support, but they couldn't give me exactly what I needed, nor could I justify the large expense when I couldn't recoup it from my clients.
I had a few clients who used/needed dedicated servers, so I had gained a fair amount of experience through this, and after some research and trial runs, I finally settled on DigitalOcean because it was the easiest and cheapest service out there.
I know there are cheaper and "better" providers out there, but I tried so many and a lot had hidden costs, support was terrible, and quite frankly made working with the servers more of a chore. DigitalOcean, in my opinion, just made it a whole lot easier.
I'm not paid for this article, but if you do want to set up your own DigitalOcean account, you can use this link and I'll get a kickback from it.
Or contact me and I'll manage it all for you.
After trying to help a friend fix their computer, I think I may have inadvertently infected my own computer with the same problem. It could have been a coincidence, but I had to reinstall the OS on my main system, and with that, I lost the SSH Keys I used to access my servers.
So, here I am... locked out of my own systems, and the only access I have is through an in-browser console/terminal, which is sluggish and not very reliable.
My services were all up and running fine, but I could not access the files on the server; it wasn't a good situation.
DigitalOcean has a section where you can paste your SSH Keys, so whenever you start a new Droplet, they are automatically added to the Droplet.
DigitalOcean also, by default, allows you to log in via a username and password, which a) isn't the most secure method, and b) means there's a password that needs to be reset.
Now, I'm not sure if I had forgotten my password or if I was ever given the chance to set one, but when I lost access using SSH, I needed to reset the password in order to log into the in-browser console.
You may be thinking "wait, why did you lose access if you can login with a username and password?"; well, simply put, I disabled that method of access to increase security, so the only way was using SSH Keys.
After hours of research and trial and error, I came across this article which helped me figure out a way to gain easy access to the droplets again.
The in-browser console was not fun to use; practically unusable because of the glitches and bugs. So the main focus was to enable access remotely once again.
Here are the steps I followed:
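Judging by what follows, the change boils down to re-enabling password logins in the SSH daemon config from the in-browser console, then reloading SSH. A sketch, assuming a standard Ubuntu Droplet (paths and service name may differ on other distros):

```shell
# Re-enable password logins (keeps a .bak backup of the config)
sudo sed -i.bak 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
# Reload the SSH daemon so the change takes effect
sudo systemctl reload ssh
```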
This will allow you to log in remotely using the username and password. Now I can copy and paste my SSH Key(s) directly into the server.
This was impossible using the in-browser console as the key was not copied over correctly.
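With password logins temporarily enabled, the key can be pushed from your local machine rather than pasted by hand; the key path and root user below are assumptions, so swap in your own:

```shell
# Easiest route, if ssh-copy-id is available on your machine
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@[DROPLET IP]

# Manual equivalent: append the public key to authorized_keys over SSH
cat ~/.ssh/id_ed25519.pub | ssh root@[DROPLET IP] \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
```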
With the key in place, you'll be able to log into the Droplet remotely using it; the last step is to disable password access again.
To do that, follow the first set of steps, but this time set PasswordAuthentication to No.
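In config terms, that's the mirror image of the earlier edit; as a sketch (again assuming an Ubuntu Droplet):

```shell
# Turn password logins back off, then reload SSH
sudo sed -i.bak 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh
```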
Once reloaded, you should be able to access the Droplet again, securely, using the SSH Key only.
Although no solution is perfect, disabling password authentication stops anyone who knows the password from getting into the server. I believe DigitalOcean is secure enough to stop people gaining access via the in-browser console, but say someone finds out my password for a Droplet: they could simply log in from anywhere and do anything.
SSH keys come as a pair: a private key that stays on your computer and a public key stored on the server. The server only grants access when your computer can prove it holds the matching private key; otherwise, access is denied.
So, even if someone gets the password, without the private key on their machine they can't get in.