BitTorrent Sync

A month ago I wrote about file transfers for mobile phones. There I described how I think a solution for file transfer should work. A few days later I found BitTorrent Sync. Problem solved!

Use ZeroMQ to control a receipt printer

A customer needed a solution to the following problem. There are multiple web POS systems that need to print to a receipt printer. The web server is not in the same place as the POS terminals, but they are connected to a network.

To solve the problem I used ZeroMQ to connect the web server to the receipt printers. The current solution consists of four parts.

  1. Web app
  2. Web server (connects a PUSH socket to a PULL socket)
  3. ZeroMQ device (a multiplexer that receives on a PULL socket and forwards the messages to a PUB socket)
  4. Computer with receipt printer (a sink that receives on a SUB socket subscribed to its own address)

First a user in a browser clicks the button to print a receipt.

This command is converted into a data packet formatted for the receipt printer.

This data packet is sent to a small program that connects the web server and the point of sale systems. The first frame of the data packet contains the name of the POS system that should print the receipt.

The command to print the receipt contains the address of the terminal that requested to print.

The two small programs are written in Go. The multiplexer program opens two sockets, reads from one socket and sends to the other. The sink program (part 4) opens a socket and reads from it. It passes the packets that it receives to the WinAPI printer functions.

The nice thing about this solution is that the multiplexer program doesn't need to know how many printers are connected. The customer can just start a sink program for every new terminal that is added.

File transfers for mobile phones

Sometimes I need to copy a file from my computer to my phone. That should be really easy. But it's not. I would like it to work like this:

  1. I put a file in a folder.
  2. If my phone is on the same network, the file should be copied from the computer to my phone, without further actions from me.

And that's it. The file shouldn't be copied to a server somewhere else. It should only move from my computer to my phone and never touch the internet.

It also means that this functionality won't work over the internet. If my phone is not connected to the local wifi, the file stays on the computer.

The other way around: if I take a few photos with my phone, these photos should be copied automatically to a directory on my computer when I come home.

Why doesn't it work like this?

Redis queue design choices

In a previous post I wrote about a simple job queue in Redis. In this post I'd like to write a bit about the design choices. I think these choices illustrate a point about design.

Job handle

The job handle in the code is created by the client (with help from the Redis server). This job handle is passed to the queue in Redis and picked up on the other side by the worker.

The worker doesn't need to know the content of the key. The key is opaque. This means that the key could be changed to be something else. In the current case the handle is built up out of the queue name and a unique id.

The worker only needs to know which commands it can use with that job handle. In this case the commands are HGETALL and DEL. If the job handle changes, but still refers to a hash type value, the workers don't need to change.

After the worker gets the hash value with the job from Redis, it can perform its job. The structure of the value is specific to the worker, so this shouldn't change. As long as the content of the value uses the same keys and value types, the workers keep working. Upgrading the content can be done separately for the client and the worker: if you keep the same keys in the job and only add keys, the workers can be upgraded later to use the newer fields. This could also be done the other way around, as long as the new worker understands the old job data.

The keys here are similar to URIs in HTTP (and REST). The value of the URI is opaque. As a client you shouldn't construct URIs yourself, but follow the URIs from the server according to its media type.

Incrementing the id of the job

The handle of the job consists of the name of the queue and a unique id. We get this id from Redis using the INCR command. The command increments the stored value and returns the result. We use this exact value in the handle. In a way we use the value in a pre-increment fashion: we increment and then use the value. This means that we can use the exact value that we got back from Redis.

Another way is to say that the value in Redis is the id of the next job. By incrementing the value we say that we used this value. The only problem is that the returned value is one higher than the value we would like to use. So we need to decrement the returned value by 1.

The design choice here was to use the exact value from Redis. In a way this is the same as the job handle example: there the worker uses the exact value that it got. This choice makes things easier, because no calculation is needed.

Redis is a flexible solution to a lot of problems, but as always you need to make choices to get a solution that works for your problem.

I rewrote my blog software in Perl

In the last few weeks I have rewritten my blog software in Perl. The old version used Ruby. I wrote that version, because I wanted to try out Ruby. The software is about as old as the blog itself.

The new software is written in Perl. I use many new modules in Perl. For example Moo, Path::Tiny and Carton.

Moo is an OO library for Perl. It allows me to write OO code in Perl without any boilerplate. One of the nice features of Moo is the 'lazy' setting for the is attribute. This makes the attribute lazy and calls a builder the first time the attribute is read.

Path::Tiny is a small abstraction that simplifies all code that works with files and directories. The relative method finds the relative part of a filename within a directory. It becomes trivial to copy a file from one directory to another while keeping the same structure. I use this to copy CSS and JS assets to the final output.

Carton gives me control over the modules that are installed. I write a small cpanfile with the dependencies of my program and carton installs those modules.

Not all features of the old software are rewritten, but they're not really needed.

Simple queue in Redis with Perl

Sometimes you need an asynchronous worker written in Perl. The small script here takes a job from the queue and executes the code. It only takes a few lines of Perl.

use Redis;
my $redis = Redis->new(encoding => undef);

my $queue_name = 'q';
my $timeout    = 10;

for (;;) {
    my ($queue, $job_id) = $redis->blpop(join(':', $queue_name, 'queue'), $timeout);
    if ($job_id) {

        my %data = $redis->hgetall($job_id);

        # do something with data...
        # ...

        # remove the data for this job
        $redis->del($job_id);
    }
}
The client would look like this:

use Redis;
my $redis = Redis->new(encoding => undef);

my $queue_name = 'q';

# Create the next id
my $id = $redis->incr(join(':',$queue_name, 'id'));
my $job_id = join(':', $queue_name, $id);

my %data = ();

# Set the data first 
$redis->hmset($job_id, %data);

# Then add the job to the queue
$redis->rpush(join(':', $queue_name, 'queue'), $job_id);

This type of queue is simple to create. This version just takes a few lines of code. It only depends on the Redis module and a running Redis server. The Redis module connects to $ENV{REDIS_SERVER}, so this could be changed before running the worker script.

Dockerfile for Pinto server

Yesterday I created a Pinto server on a server with Debian. Today I did the same, but with Docker. Docker is a way to run processes in their own sandbox. Docker uses a Dockerfile to create an image that runs a certain process. The Dockerfile that I used to run pintod looks like this:

FROM ubuntu
RUN apt-get -y install curl perl build-essential
RUN curl -L | bash
RUN mkdir /var/pinto
VOLUME /var/pinto
RUN adduser --system --home /opt/local/pinto --shell /bin/false --disabled-login --group pinto
ENV PINTO_HOME /opt/local/pinto
RUN /opt/local/pinto/bin/pinto init
CMD /opt/local/pinto/bin/pintod

I created a volume for /var/pinto that contains the repository. The pinto init doesn't initialize it, it seems. If I later mount the volume on the host, the directory is empty, because the host directory was empty.

It would be great if I could create a volume with files with the Dockerfile and later mount them in a directory on the host system.

Dist Milla and Pinto

Today I was working on a private module that I use to write web applications. A few weeks ago I found out about Milla. Milla is a plugin bundle for Dist::Zilla, which makes it easy to create Perl modules.

When you're ready, Milla automatically releases your new module to PAUSE. This is really useful for public modules, but not so useful for private modules.

Pinto is like CPAN in a way. It's really easy to create a private CPAN with it. That's what I did today. It contains the pintod script that starts a webserver, which is compatible with the CPAN infrastructure. It allows you to install modules from it with cpanm.

With cpanm you use the following command line arguments: --mirror <url> --mirror-only. This way cpanm only uses your mirror.

Milla allows me to automatically release my modules to my private CPAN. The required changes were small and really easy to implement.

I changed my dist.ini to:

[@Filter]
-bundle = @Milla
-remove = UploadToCPAN

[Pinto::Add]
root          = <my private CPAN URL>
With this config milla creates a release and uploads it to my Pinto server. Now I can use these new modules and install them with cpanm.

Create your own YouTube playlists

A few days ago Mindcrack Ultra Hardcore season 11 started. In Ultra Hardcore the players play by special rules that are different from vanilla Minecraft. The biggest difference is that players don't regenerate health automatically. They need to eat golden apples to restore their health. These apples are harder to make, because you need gold ingots instead of gold nuggets.

Each season players record videos of their perspective of the match and post these videos to their YouTube channel. This season these videos are posted every other day around 22:00 UTC. This is mostly to give viewers the opportunity to watch multiple perspectives. The total running time for the first episode was about 7 hours. That's a lot of video to watch.

At the moment the best way to watch these videos is to subscribe to a channel and follow along whenever a video is posted. The problem (for me) is that these players post many other videos of games that I don't want to watch. There also isn't one place where all these videos are linked. To remedy this problem I wrote a few programs that create the playlists automatically.

Let's start with the output. The programs create a few playlists on YouTube from the videos that I gathered from the channels. I created two views into the videos.

The first view is the list of videos for each player. Each player has a playlist with all their videos in episode order, for example the playlist for Zisteau. There are also playlists for the other players.

The second view is a list of all videos in each episode. The videos in such a playlist cover roughly the same 20 minutes of play time, for example all videos from episode 1. The other episodes are also available.

To create these playlists I wrote three Perl scripts. The first one gets the latest videos for each player. This way I get a list of the most recent videos of each channel. This list contains all videos, including videos that I don't want to add to the playlists.

In the next step (with the other two scripts) I parse the titles of the videos and find the videos that match the regular expressions for UHC or Ultra Hardcore and Season 11. Here I also try to find the episode number of the video.

Next I create a playlist or find the playlist_id of the episode (or player) and add all the videos that haven't been added yet.

Now we're done. I used the WebService::GData::YouTube module for calling the YouTube API, as it removes most of the dirty work.


My name is Peter Stuifzand. You're reading my personal website.