Dynamic DNS with the Linode CLI - Version 2

A while back I posted a method of creating your own Dynamic DNS server using the Linode API. Shortly afterwards Linode tweeted me a tip which greatly simplifies the code: no remote service is needed to provide your external IP address, and the whole thing becomes an elegant one-liner:

linode domain record-update -l oliversmith.io -t A -m lan -T 5 -R [remote_addr] 

Dynamic DNS with the Linode CLI

I've posted an improved method here.

For a while I've been looking for an elegant (and free) way of mapping a custom DNS record on a domain I own to the dynamic IP address of my ADSL connection, mainly for convenient remote access when traveling. I use Linode for my web hosting and DNS, so it seemed logical to look for a solution there, and today's release of Linode's new CLI tool provided the inspiration I needed.

In this post I'll show how I wrote a bash script to get my external IP address and update the Linode DNS A record pointing to my home network.

Getting the IP

As the external IP address of most ADSL connections is not the same as the one assigned to the machine you'll be running this script from, an external service is required to discover it. A quick Google search reveals many APIs which simply echo back the IP address the request came from, which saves parsing a page like whatismyip. A list is available here. Curl can then be used to get this address into the script.
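
For example, from a shell (ipecho.net is just one of many such services; any from the list will do):

# Prints your current external IP address, e.g. 203.0.113.5
curl -s http://ipecho.net/plain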

Updating the DNS

Most decent DNS hosting services have an API through which DNS records can be modified. Linode have a conventional HTTP API, but the new CLI tool makes it even easier to work with. The API / CLI tools can view and modify most settings of your VMs, but in this case the command to update an A record named lan is:

linode domain record-update -l oliversmith.io -t A -m lan -T 5 -R 1.1.1.1

This command finds the lan A record of oliversmith.io and updates the IP address it points to (1.1.1.1 here) with a TTL of 5 minutes. A low TTL stops DNS servers caching a stale IP address for more than 5 minutes. Full API documentation can be found on the Linode GitHub.

Note: I thought it best to manually create the A record first and update it from there, as creating a record via the CLI is slightly different.

Code

These two simple tools can be combined together into a bash script. This can then be set to run periodically as a cron job.

#!/bin/bash

# Get the IP address from anywhere that will echo it
ip=$(curl -s http://ipecho.net/plain)
echo "Your current IP address is: $ip"

# Update the A record with the address we just fetched
linode domain record-update -l oliversmith.io -t A -m lan -T 5 -R "$ip"

It's a good idea to be respectful to the nice people who provide these free services and poll only a few times an hour, not every second! If you require more frequent updates it'd be easy to add a script on a web server to show you the current external IP.
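
For example, a crontab entry (added with crontab -e) to run the script every 20 minutes; the path is just a placeholder for wherever you save the script:

# m h dom mon dow  command
*/20 * * * * /home/oliver/bin/linode-ddns.sh > /dev/null 2>&1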

Fixing Email Addresses in Git Repos after migration from Mercurial using Fast Export

Migrating repos from Mercurial to Git can be achieved by a variety of methods. The best method I've found is to use fast-export (not HgGit); however, regardless of the method, they all borked the import of my email address on commits. In this post I'll detail how to fix this.

First I performed the conversion as detailed here.
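
For reference, the conversion itself looks roughly like this (a sketch of the usual fast-export workflow with placeholder paths; follow the linked instructions for the details):

# Create an empty Git repo and import the Mercurial history into it
mkdir converted-repo && cd converted-repo
git init
/path/to/fast-export/hg-fast-export.sh -r /path/to/mercurial-repo
git checkout HEAD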

After this, all my commits were shown in gitk as devnull@localhost, although this only came to my attention when I tried to push to GitHub and got an invalid-email-address error.

This can be easily fixed using the git filter-branch command:

#!/bin/bash

git filter-branch -f --env-filter '

an="$GIT_AUTHOR_NAME"
am="$GIT_AUTHOR_EMAIL"
cn="$GIT_COMMITTER_NAME"
cm="$GIT_COMMITTER_EMAIL"

# Repeat this block for each user / email which needs fixing
if [ "$GIT_AUTHOR_NAME" = "<Name used on commit>" ]
then
    cn="<Name used on commit>"
    cm="<New email address>"
    an="<Name used on commit>"
    am="<New email address>"
fi

export GIT_AUTHOR_NAME="$an"
export GIT_AUTHOR_EMAIL="$am"
export GIT_COMMITTER_NAME="$cn"
export GIT_COMMITTER_EMAIL="$cm"
' -- --all

Obviously the placeholders need to be replaced with your values.

This code is based on a Stack Overflow answer, but that only works for the current branch; the -- --all at the end makes mine apply to all branches.
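
Before pushing, it's worth double-checking that the rewrite took effect everywhere; a quick sanity check (not part of the original fix) is to list the distinct author and committer identities across all branches:

# List every unique author / committer identity in the rewritten history
git log --all --format='%an <%ae> / %cn <%ce>' | sort -u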

Cropping videos using ffmpeg / libav / avconv

Explanatory note:

Ubuntu (my distro of choice) and others are transitioning from ffmpeg to libav. libav is a fork of ffmpeg and most of its tools are drop-in compatible, so the method described in this post should work with recent versions of either; the command line tools ffmpeg and avconv are interchangeable here.

Old Method

Historically ffmpeg had -croptop, -cropleft etc. parameters for cropping videos. These have now been replaced by the -vf (video filter) option, which is a little more complex.

Current Method

The -vf option can be used to select a section of the source video for the output by specifying the size and position of a rectangle to crop to.

The crop filter takes the argument crop=out_w:out_h:x:y. To create a new video file output.mpeg cropped to 720px x 600px and offset 240px from the top:

avconv -i input.webm -vf crop=720:600:0:240 output.mpeg

In this example I'm also converting a webm video to mpeg as well as cropping it; to convert webm to mpeg at the original dimensions, just remove the -vf option.
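
If you omit the x and y values, recent ffmpeg builds centre the crop region in the source frame for you (worth confirming against your version's filter documentation, as ffmpeg and libav occasionally differ):

avconv -i input.webm -vf crop=640:480 output.mpeg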

Atomic Counters using MongoDB's findAndModify with PHP

A common problem when developing web applications is generating unique sequential numbers. My recent use case was an API which generated an order number on receiving an order, but before the order had been stored in a database. Normally I would use MySQL's auto-incrementing keys, but I needed to send the order number back to the client long before the order was stored. As MongoDB was already in the application stack, it seemed the natural place to generate a persistent source of sequential numbers.

While MongoDB doesn't support transactions, it provides a useful command, findAndModify, which allows a value to be retrieved and updated atomically. That makes it ideal for this situation, where I wanted the order number to increment by one every time it was used.

For those who're in the dark about why you'd need to retrieve and update a value in one operation: the advantage over separate find and update operations is that it avoids the race condition where two clients could receive the same value, if one reads the counter before the other's update has taken place.

Example

The findAndModify command is no use without something to find, so I started with this document (in a collection named 'Counters'):

db.Counters.save({name : 'order_no', value : 1});

Now we have a starting point we can go on to retrieving and modifying the value. From the MongoDB command line client this can be done quite simply:

db.Counters.findAndModify({query : { name : 'order_no'}, update : { $inc : {value : 1}}})

The query section is the find and the update section is the modification; here I've used the $inc operator to increment the value by the specified amount. By default findAndModify returns the document as it was before the update; passing new : true returns the updated document instead.

As expected this query then returns the document and updates the value each time:

{
    '_id' : ObjectId('4ff9f8d43ddba5fc637aade3'),
    'name' : 'order_no',
    'value' : 1
}

then next time....:

{
    '_id' : ObjectId('4ff9f8d43ddba5fc637aade3'),
    'name' : 'order_no',
    'value' : 2
}

etc....
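
If you'd rather get the post-increment value back straight away, the shell command also accepts the new option mentioned above (worth confirming against your MongoDB version's documentation):

db.Counters.findAndModify({query : { name : 'order_no'}, update : { $inc : {value : 1}}, new : true})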

This command is currently not supported by the PHP driver found in the PECL repository, so it must be run via the execute() command. In my case I'm using this inside a Silex-based app with the Doctrine MongoDB abstraction layer (not the ODM), so the syntax may be slightly different to the raw PHP Mongo library:

$record = $database->execute(
    'db.Counters.findAndModify({
        query : { name : "'.$counter_name.'"},
        update : { $inc : {value : 1}}
    })'
);

There is currently an open issue on the MongoDB PHP driver requesting findAndModify support: https://jira.mongodb.org/browse/PHP-117

Limitations

One thing to be careful of, which applies to auto-incrementing in most databases, is to make sure any replication is correctly configured so there is no chance of reading an old value after the counter has been updated elsewhere.