Final year project – A remote laboratory

Intro

As part of my final (3rd Professional) year of Computer Engineering at the University of Canterbury, I have been working on a full-year project. The college views these projects as the capstone of the degree programme. They are designed to allow students to focus on a specific area, working at their own pace under the guidance of an academic supervisor.

My project is to design a remote laboratory system to aid teaching of Embedded Software in the Electrical and Computer Engineering department. I’ll explain exactly what that means soon, but first some background.

In 2012, students in ENCE361 were assigned a project which involved writing an embedded program to control a helicopter. The helicopter had to move up/down and left/right in response to button presses and to maintain robust behaviour at all times. It is fixed in a stand which uses a light sensor to output an analogue voltage proportional to the helicopter’s height. Students were required to read this value using an ADC and to control the helicopter with PWM signals.

Students enjoyed this project; however, there were problems with access to the helicopter stands and with breakages. It was hard to ensure each group had an equal opportunity to use a stand.

Around the same time, my supervisor, Dr Steve Weddell, was in communication with the University of Technology Sydney (UTS) and had learnt about the concept of remote labs. He figured the helicopter project would be a suitable candidate for conversion to a remote lab format.

The Project

For my project to be successful it would have to provide the following features:

  • Two functioning helicopter rigs.

  • Ability to respond to ‘virtual’ button presses.

  • Ability to upload programs onto the microcontroller remotely.

  • Ability to view the helicopter on a webcam.

I’m pleased to say that all of these requirements have been met. The video below shows how students might use the system (best viewed full screen):

So, how does it all work?

The key to the whole system is SAHARA Labs, a set of software packages which provide a framework for developing custom remote laboratory setups. SAHARA is open source, released under a BSD licence. To view and download the most up-to-date code, head to the project’s GitHub repositories.

SAHARA

SAHARA consists of three main components:

  1. Web Interface – this is the component that students (or other users of the system) are presented with. It provides facilities to log in and access rigs, and to queue or make a reservation if all rigs are in use. Academics are also able to monitor student usage and download reports through the web interface. Rig pages can be customised with buttons and other control elements.
  2. Rig Client – provides various functions to interact with hardware. It is written in Java and requires further development to provide the final, lowest layer of abstraction for a specific rig.
  3. Scheduling Server – ties multiple rigs together and coordinates user access through the web interface. It can tie into a university’s existing authentication system, such as LDAP.

I installed all three of these components on an Ubuntu machine. The next step was to extend the Rig Client and to choose hardware to interface with the helicopter and the Stellaris development board.

UTS had recently developed a rig with a number of similarities to our planned rig, and they were kind enough to provide us with their source code as an example to work from. Their rig involved students programming a Digilent Nexys FPGA board, whereas ours uses a Texas Instruments Stellaris EKS-LM3S1968 development board.

Buttons

I modified the web interface using HTML5 and JavaScript to include the required buttons. When these are pressed, they fire Rig Client methods which are routed to a custom class. The next decision was how to send these logic signals to the microcontroller, preferably using a USB device. I investigated a number of options, including an Arduino board, but ended up choosing an FTDI FT245R device. This provides a bit-bang mode which was perfect for this application. The standard way of talking to one of these devices is to write C code using the libFTDI library. To call this code from the Rig Client (which is written in Java), I used the Java Native Interface (JNI).
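On the Java side, the native call is declared in a small wrapper class. The sketch below is reconstructed from the JNI symbol name in the C code that follows – the library name and button constants are my assumptions, not the actual source:

package au.edu.uts.eng.remotelabs.heli;

/*
 * Thin Java wrapper around the native FTDI bit-bang code. Only the
 * package, class and method names are implied by the JNI symbol;
 * everything else is an assumption for illustration.
 */
public class HeliIO {
  static {
    System.loadLibrary("heliio"); // assumed native library name
  }

  // Button addresses routed from the web interface (assumed mapping,
  // matching the address checks in the native code).
  public static final int UP = 0, DOWN = 1, SELECT = 2, RESET = 3;

  /** Pulses the FTDI pin for the given button; returns false on failure. */
  public native boolean setByte(int addr);
}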

The following code snippet shows how pins are asserted in response to button presses routed from the web interface:


jboolean Java_au_edu_uts_eng_remotelabs_heli_HeliIO_setByte(JNIEnv *env, jobject thiz, jint addr) {
  if (!deviceExists) {
    // PRINTDEBUG("Cannot set data byte when not connected to Heli");
    return JNI_FALSE;
  }

  /* Map the button address from the web interface to an FTDI pin. */
  int pin;
  switch (addr) {
    case 0: pin = UP_PIN; break;
    case 1: pin = DOWN_PIN; break;
    case 2: pin = SELECT_PIN; break;
    case 3: pin = RESET_PIN; break;
    default: return JNI_FALSE; /* Unknown button address. */
  }

  /* Enable bitbang mode with the selected pin as the single output line. */
  ftdi_set_bitmode(&ftdic, pin, BITMODE_BITBANG);

  /* Pull the pin low. ftdi_write_data() returns the number of bytes
   * written, or a negative value on error. */
  unsigned char c = 0;
  if (ftdi_write_data(&ftdic, &c, 1) < 0) {
    innerDisconnect();
    return JNI_FALSE;
  }

  /* Hold briefly, then drive the pin high again to complete the pulse. */
  usleep(200);
  c ^= pin;

  if (ftdi_write_data(&ftdic, &c, 1) < 0) {
    innerDisconnect();
    return JNI_FALSE;
  }

  return JNI_TRUE;
}

Code Upload

The other major bit of functionality required was to provide a way for students to upload binaries of their programs and to automatically program them onto the microcontroller for testing.

Luckily, OpenOCD plays nicely with our chosen microcontroller. The Java Rig Client communicates with the OpenOCD daemon by invoking a Python script, which in turn uses the pexpect library. This is best understood by looking at the source code below:


import pexpect
import argparse
import sys

def main(**kwargs):
    if kwargs['format'] == 'bin':
        upload_program(kwargs['program'])
    else:
        sys.exit(2)

def upload_program(program):
    # Attach to the OpenOCD daemon's telnet interface.
    child = pexpect.spawn('telnet localhost 4444')

    # Halt the target so its flash can be written.
    child.sendline('reset')
    child.expect('>')

    child.sendline('halt')
    child.expect('>')

    # Erase and write the student's binary, then give the flash time to settle.
    child.sendline('flash write_image erase ' + program)
    child.expect('>')
    child.sendline('sleep 5')
    child.expect('>')

    # Restart the target so the new program runs.
    child.sendline('reset run')
    child.expect('>')

    child.sendline('exit')

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Flash a binary image to the Stellaris')
    parser.add_argument('program', type=str, help='Program name')
    parser.add_argument('format', type=str, choices=['bin'], help='Specify the file format')
    args = parser.parse_args()
    main(**vars(args))
    sys.exit()
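
On the Java side, the Rig Client only needs to run this script and check its exit status. A minimal sketch, assuming the script is saved as flash.py – the path, class and method names here are mine, not the actual Rig Client code:

import java.io.File;

public class Flasher {
  /** Runs the flashing script; returns true if the upload succeeded. */
  public static boolean flash(String binaryPath) throws Exception {
    // Assumed script location and interpreter.
    ProcessBuilder pb = new ProcessBuilder("python", "flash.py", binaryPath, "bin");
    pb.directory(new File("/opt/rigclient/scripts")); // assumed install path
    pb.inheritIO(); // forward the script's output for logging
    Process p = pb.start();
    // The script calls sys.exit(2) on an unsupported format, so a
    // non-zero exit status indicates failure.
    return p.waitFor() == 0;
  }
}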

Webcam

Finally, the whole system is not of much use if students are unable to see the helicopter in action. A Logitech C920 is connected to the rig computer for this purpose. I had envisaged video streaming being one of the simpler aspects of this project, but unfortunately it was a real pain to get working! The team at UTS said they used ffserver/ffmpeg, however I had no luck with the version in Ubuntu’s apt repositories. In the end, building the latest version from source was the only way to get it working:

git clone git://source.ffmpeg.org/ffmpeg.git
cd ffmpeg/
./configure
make
sudo make install
sudo usermod -a -G video username

I was then able to stream SWF, FLV and Motion JPEG using the following configuration file:

# Port on which the server is listening. You must select a different
# port from your standard HTTP web server if it is running on the same
# computer.
Port 7070

# Address on which the server is bound. Only useful if you have
# several network interfaces.
BindAddress 0.0.0.0

# Number of simultaneous HTTP connections that can be handled. It has
# to be defined *before* the MaxClients parameter, since it defines the
# MaxClients maximum limit.
MaxHTTPConnections 200

# Number of simultaneous requests that can be handled. Since FFServer
# is very fast, it is more likely that you will want to leave this high
# and use MaxBandwidth, below.
MaxClients 100

# This is the maximum amount of kbit/sec that you are prepared to
# consume when streaming to clients.
MaxBandwidth 100000

# Access log file (uses standard Apache log file format)
# '-' is the standard output.
CustomLog -

# Suppress that if you want to launch ffserver as a daemon.
#NoDaemon

##################################################################
# Definition of the live feeds. Each live feed contains one video
# and/or audio sequence coming from an ffmpeg encoder or another
# ffserver. This sequence may be encoded simultaneously with several
# codecs at several resolutions.

<Feed feed1.ffm>

</Feed>

<Stream status.html>
 Format status
</Stream>

<Stream camera1.swf>
 Feed feed1.ffm
 Format swf
 VideoFrameRate 15
 VideoSize 320x240
 VideoBitRate 250
 VideoQMin 3
 VideoQMax 10
 NoAudio
</Stream>

<Stream camera1.flv>
 Feed feed1.ffm
 Format flv
 VideoFrameRate 15
 VideoSize 320x240
 VideoBitRate 250
 VideoQMin 3
 VideoQMax 10
 NoAudio
</Stream>

<Stream camera1.mjpg>
 Feed feed1.ffm
 Format mpjpeg
 VideoFrameRate 15
 VideoIntraOnly
 VideoSize 320x240
 VideoBitRate 500
 VideoQMin 3
 VideoQMax 10
 NoAudio
 Strict -1
</Stream>
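
Note that ffserver only serves the streams – a separate ffmpeg process has to capture from the webcam and push frames into feed1.ffm. The exact options depend on the build, but the command looks roughly like this (the device path is an assumption):

ffmpeg -f video4linux2 -s 320x240 -r 15 -i /dev/video0 http://localhost:7070/feed1.ffm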

With these tasks complete, the basic system works! A second rig client has also been added – this involved installing another copy of the Rig Client on a second machine, which talks to the Scheduling Server over the network. A number of other features have been added since, and I might detail these in a future post.

I have written a paper on this project and will present it at the 2013 Electronics New Zealand Conference (ENZCON) in September. More complete details can be found in my Engineering Report.