Hosting Upsource with Docker – DNS Dilemmas

Currently at work we are using an open source source code management tool called Kallithea. Unfortunately it doesn’t seem to be under active development any longer and is in general a bit unstable and lacking the features we need in a growing development team. For me the biggest pain point was not having a nice web interface to browse and review code. We’re currently evaluating other options (BitBucket, GitHub, VSO/TFS etc.) and trying to decide whether to self-host or not. This process is taking a bit of time, so I went looking for something to tide us over until we came up with a more permanent solution. This led me to Upsource, one of JetBrains’ latest incarnations.

Upsource is a web-based tool for browsing and reviewing code. The handy thing with Upsource is that it tacks onto your source code hosting tool, rather than being an all-in-one like the systems we are looking at moving to. This allowed me to quietly install it without ruffling any feathers and let members of the team decide whether or not they wanted to use it. Luckily I had a spare Linux box running Ubuntu on which I was quickly able to get it installed and hooked up with LDAP.

The interesting part came a month or so later when the next version of Upsource was released (February 2017). As well as a bunch of handy new features (full-text search FTW) they also announced that new versions were being published as Docker images. This sounded like a good idea and one which would make future updates easier, so I followed the instructions to migrate my Upsource instance to being hosted under Docker. Unfortunately I found that after starting up my new version of Upsource inside a Docker container, it could no longer resolve internal URLs; neither those pointing to the source code repositories nor those pointing to the LDAP server.

A bit of Googling revealed that this was a known issue with Docker on recent versions of Ubuntu: https://github.com/docker/docker/issues/23910. It sounds like it’s resolved in the latest version of Docker, but I couldn’t work out whether that had been released yet.

Luckily someone had already written up a handy blogpost showing how to get around the issue: https://robinwinslow.uk/2016/06/23/fix-docker-networking-dns/#the-permanent-system-wide-fix.

I went with the ‘quick fix’ approach described there:

  1. I ran this command to find the IP address of the DNS server running inside my company’s network. This spat out two contiguous IPs for me, so I just chose the first one.
    $ nmcli dev show | grep 'IP4.DNS'
    IP4.DNS[1]:                             10.0.0.2
    IP4.DNS[2]:                             10.0.0.3
  2. Added a --dns 10.0.0.2 argument to the docker run command I used to start the Upsource container.
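Putting the two steps together, the resulting command looked something like the following. The image tag, port mapping and volume path here are illustrative assumptions, not my exact setup:

```shell
# Extract the first internal DNS server reported by NetworkManager
# (matches the 'IP4.DNS[1]:' output lines shown above).
DNS_IP=$(nmcli dev show | grep 'IP4.DNS' | head -n 1 | awk '{print $2}')

# Start Upsource, telling Docker's resolver to forward to that server.
# Image tag, port and volume path are illustrative only.
docker run -d --name upsource \
  --dns "$DNS_IP" \
  -p 8080:8080 \
  -v /srv/upsource/data:/opt/upsource/data \
  jetbrains/upsource:2017.1
```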

Problem solved!

TradeMe to Trello Chrome Extension

It can be tricky keeping track of a bunch of listings when you’re looking to join a new flat. Have I already contacted that person? When am I viewing this place? Have I heard back from them yet? The built-in functionality on TradeMe (the auction site just about everyone in New Zealand uses to list and look for flats) is just not up to the task.

Trello is a web application which allows you to create a custom set of lists and to move ‘cards’ back and forth between them. Many developers and others working in the tech industry are likely to be familiar with it already.

I’ve created a Chrome extension to link TradeMe and Trello together, making moving flats that little bit easier. Using this extension, it’s as simple as clicking the icon when you are on a TradeMe listing to have a card automatically created in Trello.

Trello board with TradeMe listings

May your flat hunting be forever more organised!

Get the extension: https://chrome.google.com/webstore/detail/trademe-to-trello/eapogjcjbcgaoocipcfcnedibnfdmlng?hl=en&gl=NZ.

The code is open source, and up on GitHub: https://github.com/nick-nz/trademe-to-trello.

Is there more to Googling than you think?

As a developer I Google stuff. A lot. It almost happens automatically:

  1. Working on something.
  2. Unfamiliar error appears.
  3. Google it.
  4. Choose the first StackOverflow link in the results.
  5. Problem solved (usually).

That trivialises the process somewhat. Most decent developers will spend some time considering the error and trying a few things to fix it before resorting to searching. And of course we don’t just seek help when errors occur: looking for best practices during the design phase of a project, finding a more concise way of implementing some logic and learning from the mistakes of others are all common use cases.

So I was surprised when recently I was tutoring at a ‘bootcamp’ style programming course and noticed many of the students struggled to construct useful search queries. They would do things like searching for an error verbatim, including their custom variable names and data. They struggled to abstract and generalize a problem. They also struggled to use the results of a first query to make improvements to subsequent queries.

It turns out being a good Googler is a skill many of us have subconsciously built up over years of work. Is my problem language or framework specific? Do I need to widen or narrow my search? Is this even a problem that the wider developer community would be able to help with or is it an issue specific to my company’s codebase?

Wisdom of the Ancients


So, if you’re involved in teaching programming or mentoring junior developers, consider working with them to construct useful searches. You may already know the answer and be tempted to go straight to fixing the problem, but teaching them to find the solution themselves may actually be more beneficial.

Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.

Final year project – A remote laboratory

Intro

As part of my final (3rd Professional) year of Computer Engineering at the University of Canterbury, I have been working on a full year project. The college views these projects as the capstone of the degree program. They are designed to allow students to focus on a specific area, working at their own pace under the guidance of an academic supervisor.

My project is to design a remote laboratory system to aid teaching of Embedded Software in the Electrical and Computer Engineering department. I’ll explain exactly what that means soon, but first some background.

In 2012 students in ENCE361 were assigned a project which involved writing an embedded program to control a helicopter. The helicopter was to move up/down and left/right in response to button presses and to maintain robust behaviour at all times. The helicopter is fixed in a stand which uses a light sensor to output an analogue voltage proportional to its height. Students are required to read this value using an ADC and to control the helicopter with PWM signals.

Students enjoyed this project, however there were problems with access to the helicopters and with breakages. It was hard to ensure each group had equal opportunity to use a helicopter stand.

Around the same time, my supervisor, Dr Steve Weddell, was in communication with the University of Technology Sydney (UTS) and had learnt about the concept of remote labs. He figured the helicopter project would be a suitable candidate to be converted to a remote lab format.

The Project

For my project to be successful it would have to provide the following features:

  • Two functioning helicopter rigs.

  • Ability to respond to ‘virtual’ button presses.

  • Ability to upload programs onto the microcontroller remotely.

  • Ability to view the helicopter on a webcam.

I’m pleased to say that all of these requirements have been met. The video below shows how students might use the system (best viewed full screen):

So, how does it all work?

The key to the whole system is SAHARA Labs, a set of software packages which provide a framework for developing custom remote laboratory setups. SAHARA is open source, released under a BSD license. To view and download the most up-to-date code, head to the project’s GitHub repositories.

SAHARA

SAHARA consists of three main components:

  1. Web Interface – this is the component that students (or other users of the system) are presented with. It provides the facility to log in and access rigs, and to queue or make reservations if all rigs are in use. Academics are also able to monitor student usage and download reports through the web interface. Rig pages can be customized with buttons and other control elements.
  2. Rig Client – provides various functions to interact with hardware. It is written in Java and requires further development to provide the final, lowest layer of abstraction to a specific rig.
  3. Scheduling Server – ties multiple rigs together and coordinates user access through the web interface. It has the ability to tie into a university’s existing authentication system such as LDAP.

I installed all three of these components on an Ubuntu machine. The next step was to extend the Rig Client and to choose hardware to interact with the helicopter and Stellaris development board.

UTS had recently developed a rig with a number of similarities to our planned rig, and they were kind enough to provide us their source code as an example to work from. Their rig involved students programming a Digilent Nexys FPGA, whereas ours uses a Texas Instruments Stellaris EKS-LM3S1968 development board.

Buttons

I modified the web interface using HTML5 and JS to include the required buttons. When these are pressed, they fire Rig Client methods which are routed to a custom class. The next decision to make was how to send these logic signals to the microcontroller, preferably using a USB device. I investigated a number of options, including an Arduino board, but ended up choosing an FT245R FTDI device. This provides a bit bang mode which was perfect for this application. The standard way of talking to one of these devices is to write C code, using the libFTDI library. In order to achieve this from the Rig Client (which is written in Java) I used the Java Native Interface (JNI).

The following code snippet shows how pins are asserted in response to button presses routed from the web interface:


jboolean Java_au_edu_uts_eng_remotelabs_heli_HeliIO_setByte(JNIEnv *env, jobject thiz, jint addr) {
  if (!deviceExists) {
    // PRINTDEBUG("Cannot set data byte when not connected to Heli");
    return JNI_FALSE;
  }

  /* Map the virtual button address to the corresponding FTDI pin. */
  int pin;
  if (addr == 0) {
    pin = UP_PIN;
  } else if (addr == 1) {
    pin = DOWN_PIN;
  } else if (addr == 2) {
    pin = SELECT_PIN;
  } else if (addr == 3) {
    pin = RESET_PIN;
  } else {
    // Unknown address; do something sensible.
    return JNI_FALSE;
  }

  /* Enable bitbang mode with a single output line */
  ftdi_set_bitmode(&ftdic, pin, BITMODE_BITBANG);

  /* Pulse the pin: drive it low, wait, then drive it high again.
   * ftdi_write_data() returns a negative value on error. */
  unsigned char c = 0;
  if (ftdi_write_data(&ftdic, &c, 1) < 0) {
    innerDisconnect();
    return JNI_FALSE;
  }

  usleep(200);
  c ^= pin;

  if (ftdi_write_data(&ftdic, &c, 1) < 0) {
    innerDisconnect();
    return JNI_FALSE;
  }

  return JNI_TRUE;
}

Code Upload

The other major bit of functionality required was to provide a way for students to upload binaries of their programs and to automatically program them onto the microcontroller for testing.

Luckily OpenOCD plays nicely with our chosen microcontroller. The Java Rig Client communicates with the OpenOCD daemon by spawning a Python script, which in turn makes use of the pexpect library. This is best understood by looking at the source code below:


import pexpect
import argparse
import os
import sys

def main(**kwargs):
    if kwargs['format'] == 'bin':
        upload_program(kwargs['program'])
    else:
        sys.exit(2)

def upload_program(program):
    child = pexpect.spawn('telnet localhost 4444')

    child.sendline('reset')
    child.expect('>')

    child.sendline('halt')
    child.expect('>')

    child.sendline('flash write_image erase ' + program)
    child.expect('>')
    child.sendline('sleep 5')
    child.expect('>')

    child.sendline('reset run')
    child.expect('>')

    child.sendline('exit')

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Flash a bitfile to the Stellaris')
    parser.add_argument('program', type=str, help='Program name')
    parser.add_argument('format', type=str, choices=['bin'], help='Specify the file format')
    args = parser.parse_args()
    main(**vars(args))
    sys.exit()

Webcam

Finally, the whole system is not of much use if students are unable to see the helicopter in action. A Logitech C920 is connected to the rig computer for this purpose. I had envisaged video streaming to be one of the simpler aspects of this project, but unfortunately it was a pain in the ass to get working! The team at UTS said they used ffserver/ffmpeg, however I had no luck with the version in Ubuntu’s apt-get repository. It turned out building the latest version from source was the only way to get it working:

sudo git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg/
sudo ./configure
sudo make
sudo make install
usermod -a -G video username

I was then able to stream SWF, Flash and Motion JPEG using the following configuration file:

# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
Port 7070

# Address on which the server is bound. Only useful if you have
# several network interfaces.
BindAddress 0.0.0.0

# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
# MaxClients maximum limit.
MaxHTTPConnections 200

# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 100

# This the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 100000

# Access log file (uses standard Apache log file format)
# '-' is the standard output.
CustomLog -

# Suppress that if you want to launch ffserver as a daemon.
#NoDaemon

##################################################################
# Definition of the live feeds. Each live feed contains one video
# and/or audio sequence coming from an ffmpeg encoder or another
# ffserver. This sequence may be encoded simultaneously with several
# codecs at several resolutions.

<Feed feed1.ffm>

</Feed>

<Stream status.html>
 Format status
</Stream>

<Stream camera1.swf>
 Feed feed1.ffm
 Format swf
 VideoFrameRate 15
 VideoSize 320x240
 VideoBitRate 250
 VideoQMin 3
 VideoQMax 10
 NoAudio
</Stream>

<Stream camera1.flv>
 Feed feed1.ffm
 Format flv
 VideoFrameRate 15
 VideoSize 320x240
 VideoBitRate 250
 VideoQMin 3
 VideoQMax 10
 NoAudio
</Stream>

<Stream camera1.mjpg>
 Feed feed1.ffm
 Format mpjpeg
 VideoFrameRate 15
 VideoIntraOnly
 VideoSize 320x240
 VideoBitRate 500
 VideoQMin 3
 VideoQMax 10
 NoAudio
 Strict -1
</Stream>
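For completeness, ffserver also needs ffmpeg pushing webcam frames into the feed defined above. A minimal invocation might look like the following; the V4L2 device path is an assumption, while the port and feed name come from the configuration file:

```shell
# Capture from the webcam (device path is an assumption) and push the
# frames to ffserver's feed1.ffm input, matching 'Port 7070' above.
ffmpeg -f video4linux2 -s 320x240 -r 15 -i /dev/video0 \
  http://localhost:7070/feed1.ffm
```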

With these tasks complete, the basic system works! A second rig client has also been added; this involves installing another copy of the Rig Client on a second machine, which talks to the Scheduling Server over the network. A number of other features have been added since and I might detail these in a future post.

I have written a paper on this project and will present this at the 2013 Electronics New Zealand Conference (ENZCON) in September. More complete details can be found in my Engineering Report.