Configuring SSL and Gitlab through an Apache Reverse Proxy

I've recently started to use Gitlab as an alternative to a paid Github account for projects I don't wish to make public. I wanted to install Gitlab on a server which hosts a few other applications, all of which use Apache. While Gitlab is really easy to install, it bundles nginx by default and expects to run on port 80. Normally in this situation I would configure nginx to listen on a non-standard port, proxy through Apache on the same server, and terminate the SSL at Apache; however, there are some quirks in Gitlab which make this difficult. In this post I'll describe how to proxy Gitlab through Apache using SSL.

The Problem

While Gitlab can be manually installed to work with Apache, this makes upgrades and changes difficult. It comes with a very nice Chef-based installer, but that installer assumes Gitlab is the only thing running on the server. If a simple HTTPS proxy is configured (terminating the SSL at Apache), Gitlab will still mix in some non-SSL URLs because it thinks it's serving an unencrypted connection. While not a huge risk, this is untidy and annoyed me.

The Solution

The solution is to configure Gitlab to use SSL too and enable an SSL proxy in Apache. This involves defining options in two files: Gitlab's /etc/gitlab/gitlab.rb and the Apache vhost.

Gitlab configuration (/etc/gitlab/gitlab.rb)

external_url 'https://<url>:4443'
nginx['ssl_certificate'] = "/etc/ssl/localcerts/<certname>.crt"
nginx['ssl_certificate_key'] = "/etc/ssl/localcerts/<keyname>.key"

After which don't forget to run:

sudo gitlab-ctl reconfigure

to push the changes into the nginx config.
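To sanity-check that the bundled nginx picked up the change, something like the following should do (this assumes the port configured above and a self-signed or local certificate, hence the -k):

```shell
# Check something is now listening on the new HTTPS port
sudo netstat -tlnp | grep 4443

# -k skips certificate verification, -I fetches headers only;
# a 200 or 302 response means Gitlab is answering over SSL
curl -kI https://localhost:4443
```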

Apache vhost

<VirtualHost <ip>:443>

    ServerName <server url>
    SSLEngine on
    SSLCertificateFile /etc/ssl/localcerts/<certname>.crt
    SSLCertificateKeyFile /etc/ssl/localcerts/<keyname>.key

    <Proxy *>
        Order deny,allow
        Allow from all
    </Proxy>

    SSLProxyEngine on
    ProxyRequests Off
    ProxyPass / https://<url>:4443/
    ProxyPassReverse / https://<url>/

    Header edit Location ^http://<url>/ https://<url>/
    RequestHeader set X-Forwarded-Proto "https"

</VirtualHost>

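For this vhost to work, Apache needs the relevant modules enabled. On Debian/Ubuntu (where the a2enmod helper exists) that would be something like:

```shell
# mod_ssl for HTTPS, mod_proxy / mod_proxy_http for the reverse proxy,
# mod_headers for the Header and RequestHeader directives above
sudo a2enmod ssl proxy proxy_http headers
sudo service apache2 restart
```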

Dynamic DNS with the Linode CLI - Version 2

A while back I posted a method of creating your own Dynamic DNS server using the Linode API; shortly after, they tweeted me a tip which greatly simplifies the code. Now no remote service is required to provide your external IP address, and it becomes an elegant one-liner:

linode domain record-update -l <domain> -t A -m lan -T 5 -R [remote_addr]

Dynamic DNS with the Linode CLI

I've posted an improved method here.

For a while I've been looking for an elegant (and free) solution to mapping a custom DNS record for domains I own to the dynamic IP address of my ADSL connection, mainly for convenient remote access when traveling. I use Linode for my web hosting and DNS, so it seemed logical to try and find a solution there. When Linode released their new CLI tool today, it provided the inspiration I needed.

In this post I'll show how I wrote a bash script to get my external IP address and update the Linode DNS A record pointing to my home network.

Getting the IP

As the external IP address of most ADSL connections is not the same as the one mapped to the machine you'll be running this script from, an external service is required. A quick Google search reveals many APIs which simply echo back the IP address from which the request came; this saves parsing a page like whatismyip. A list is available here. Curl can then be used to get this address into the script.
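As a sketch, using icanhazip.com as the echo service (any service that returns the bare address will do), with a quick check that the response actually looks like an IPv4 address before using it:

```shell
# Fetch the external IP from an echo service (icanhazip.com is one example)
ip=$(curl -s https://icanhazip.com)

# Sanity check: make sure the response is IPv4-shaped before using it anywhere
if echo "$ip" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'; then
    echo "Your current IP Address is: $ip"
else
    echo "Could not determine external IP" >&2
fi
```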

Updating the DNS

Most decent DNS hosting services have an API from which DNS records can be modified. Linode have a conventional HTTP API, but the new CLI tool makes it even easier to work with. The API / CLI tools can view and modify most settings of your VMs but in this case the command to update an A record named lan is:

linode domain record-update -l <domain> -t A -m lan -T 5 -R <new ip>

This command looks for the lan A record of <domain> and updates the IP it points to, to <new ip>, with a TTL of 5 minutes. A low TTL will stop DNS servers caching an incorrect IP address for more than 5 minutes. Full API documentation can be found on the Linode Github.

Note: I thought it best to manually configure the A record first and then update from there; creation is slightly different.


These two simple tools can be combined into a bash script, which can then be set to run periodically as a cron job.


#!/bin/bash

# Get the IP address from anywhere that will echo it
ip=$(curl -s <ip echo service url>)
echo "Your current IP Address is: $ip"

linode domain record-update -l <domain> -t A -m lan -T 5 -R $ip

It's a good idea to be respectful to the nice people who provide these free services and poll only a few times an hour, not every second! If you require more frequent updates it'd be easy to add a script on a web server to show you the current external IP.
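For example, assuming the script above was saved as update-dns.sh (a hypothetical name), a crontab entry (crontab -e) to run it every 15 minutes might look like:

```shell
# m h dom mon dow command
*/15 * * * * /home/<user>/update-dns.sh >/dev/null 2>&1
```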

Fixing Email Addresses in Git Repos after migration from Mercurial using Fast Export

Migrating repos from Mercurial to Git can be achieved by a variety of methods. The best method I've found is to use fast-export (not HgGit); however, regardless of the method, they all borked the importing of my email address on commits. In this post I'll detail how to fix this.

First I performed the conversion as detailed here.

After this all my commits were shown in gitk as devnull@localhost, although this only came to my attention when I tried to push to github and got an invalid-email-address error.

This can be easily fixed using the git filter-branch command:


git filter-branch -f --env-filter '

# Repeat this block for each user / email which needs fixing
if [ "$GIT_AUTHOR_NAME" = "<Name used on commit>" ]
then
    an="<Name used on commit>"
    am="<New email address>"
    cn="<Name used on commit>"
    cm="<New email address>"

    export GIT_AUTHOR_NAME="$an"
    export GIT_AUTHOR_EMAIL="$am"
    export GIT_COMMITTER_NAME="$cn"
    export GIT_COMMITTER_EMAIL="$cm"
fi
' -- --all

Obviously the placeholders need to be replaced with your values.

This code is based on a stackoverflow answer, but that version only works for the current branch; mine applies to all branches.
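Before pushing, it's worth checking that the rewrite caught everything; listing every distinct author and committer across all branches is enough:

```shell
# List every unique author and committer in the repo;
# any remaining devnull@localhost entries will show up here
git log --all --format='%an <%ae>%n%cn <%ce>' | sort -u
```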

Cropping videos using ffmpeg / libav / avconv

Explanatory note:

Ubuntu (my distro of choice) and others are transitioning from ffmpeg to libav. libav is a fork of ffmpeg and most tools are drop-in compatible; the method described in this post should work with recent versions of either, and the command line tools ffmpeg and avconv are interchangeable.

Old Method

Historically ffmpeg had -croptop, -cropleft etc. parameters for cropping videos. These have now been replaced by the -vf (video filter) option, which is a little more complex.

Current Method

The -vf option can be used to select a section of the source video for the output by specifying the size and position of a rectangle to crop to. It takes the argument crop=out_w:out_h:x:y; for example, to create a new video file output.mpeg cropped to 720px x 600px and offset 240px from the top:

avconv -i input.webm -vf crop=720:600:0:240 output.mpeg

In the example I'm also converting a WebM video to MPEG as well as cropping it; to convert WebM to MPEG at the same dimensions, just remove the -vf cropping option.
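The crop filter also accepts expressions based on the input size (in_w and in_h), which saves working out absolute output dimensions by hand. For example, the same 240px trim off the top, keeping the full width, could be written as:

```shell
# in_w / in_h are the input width and height: keep the full width,
# drop the top 240px (y offset 240, output height in_h-240)
avconv -i input.webm -vf crop=in_w:in_h-240:0:240 output.mpeg
```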