LightTraceMemory part 2 – steps and missteps

Getting to hello world – a video that dims live

We had a previous example of a Processing script that took the brightest pixel from a video (with a reliable, known frame rate); the next challenge was to replicate this for frames grabbed from a webcam, and to introduce dimming.

Raspberry Pi Camera, a bad start

Initial attempts were made using a Raspberry Pi with the Raspberry Pi Camera V2. The first few steps were very easy: taking a photo, saving it to the drive, showing a video preview on the screen. However, we completely failed to install and run Processing at an acceptable speed on the pi. We could have moved to Python computer-vision libraries (OpenCV and the like), but that seemed like the wrong path: there were lots of issues getting the right dependencies installed to get video working at all, and the general rate of progress was slow because of the computer’s speed. Working with live video looked like it would be completely impossible, so I moved to the Windows laptop and its webcam, with a view to moving to a different and better quality camera later. This reflects a theme I have encountered in the past (and again later in this project): Raspberry Pis are very frustrating to work with – they seem to be remarkably unforgiving. In future I will try to do all experimentation on a larger computer, then move to a Raspberry Pi later on if that is what the project needs.

Switching to webcam, getting past minimum levels of quality

Initial examples in Processing, like those in https://processing.org/tutorials/video/, worked well on the laptop. There were a few hiccups in selecting the correct camera, and in loading, manipulating and then showing the pixels correctly, but good progress was made. Having fulfilled these initial steps, I was fairly confident that I could start on my own path and produce the project.
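Something like the minimal capture sketch below (adapted from the ideas in that tutorial, not the project’s exact code) was the starting point:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // printArray(Capture.list()) lists the available cameras –
  // picking the right one here was the first hiccup
  cam = new Capture(this, width, height);
  cam.start();
}

void captureEvent(Capture c) {
  c.read();  // grab each new frame as it becomes available
}

void draw() {
  image(cam, 0, 0);  // show the latest frame
}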

One of the very first successes with a dimming video

A first use of the dimming video with a light source

First play with filters to reduce the presence of the background; this was taken in quite bright conditions with a phone screen

However, improving the quality beyond this baseline later proved remarkably difficult. It appears that any image coming out of the webcam (and every other camera we tried) is limited to around 800×400 pixels at 60Hz; above that, each frame arrives encoded in a .h264 format, which is hard to read, especially for someone unfamiliar with video. This severely limited the quality of the output.

Switching to bitwise arithmetic, and learning about colour in ARGB

Manipulating the colours used in the images was very easy – just pull out the RGB values, manipulate them as integers, then put them back into the pixel – repeated for every pixel in the image. However, because this loop runs for every pixel of every frame, each operation needs to be as low-level as possible to keep it quick. This meant moving from functions such as:

r = red(img.pixels[loc]);    // Processing's accessor functions
g = green(img.pixels[loc]);  // are convenient but slow
b = blue(img.pixels[loc]);
// do something to these values
c = color(r, g, b);
pixels[loc] = c;

to ones like

r = (c >> 16) & 0xFF;  // shift red down, mask off the rest
g = (c >> 8) & 0xFF;   // same for green
b = c & 0xFF;          // blue is already in the lowest byte
// do something to these values
c = 0xFF000000 | (r << 16) | (g << 8) | b;  // repack, alpha forced opaque

Each pixel is stored in ARGB (Alpha Red Green Blue) format, so shifting the bits along and then masking off everything but the lowest byte (& 0xFF) gives the r, g and b values as quickly as possible. This increased the framerate of my program from around 20Hz up to about 58-60Hz, against a pure video framerate of about 60-62Hz.
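Put together, the fast per-pixel dimming loop looks something like the sketch below, where camPixels would come from the camera’s pixel array – the fade amount and the clamp to zero are illustrative, not the exact values used:

int[] dim;  // the persistent, slowly fading trace

void setup() {
  size(640, 480);
  dim = new int[width * height];
}

// call once per frame with the camera's pixel array
void dimAndOverlay(int[] camPixels) {
  loadPixels();
  for (int loc = 0; loc < pixels.length; loc++) {
    int c = dim[loc];
    int r = (c >> 16) & 0xFF;      // unpack with shifts and masks,
    int g = (c >> 8) & 0xFF;       // not red()/green()/blue()
    int b = c & 0xFF;
    r = max(r - 2, 0);             // fade, clamping at zero so the
    g = max(g - 2, 0);             // value cannot wrap negative
    b = max(b - 2, 0);
    int n = camPixels[loc];
    r = max(r, (n >> 16) & 0xFF);  // keep whichever is brighter:
    g = max(g, (n >> 8) & 0xFF);   // the fading trace or the
    b = max(b, n & 0xFF);          // new camera frame
    dim[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
    pixels[loc] = dim[loc];
  }
  updatePixels();
}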

As the ARGB values are being altered directly, there are some unexpected effects.

From Google’s colour picker – 255,0,0 produces pure red

There is some error for small numbers, though I thought it was largely solved by setting colours that were nearly black to exactly black. The error came from subtracting small numbers in binary, with some becoming negative. Under some binary systems a negative number is marked by setting the first bit: zero is 00000000, one is 00000001, two is 00000010, but negative one is 10000001 under sign-and-magnitude and 11111111 under two’s complement (which Java uses). Either way, dimming past black without stopping wraps round to a very large value – pure white or a strong colour – producing a strange ‘decay’, which then decays a second time in red or some other colour. I still don’t quite understand some of the behaviours seen – why don’t the colours come back again and again?
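A quick way to see the wrap-around: if a channel goes slightly negative and is then masked back into its 8-bit slot, it reappears as a large value. This snippet is illustrative, not project code:

int r = 1;
r = r - 2;                                // r is now -1
int c = 0xFF000000 | ((r & 0xFF) << 16);  // masking -1 gives 255
println(hex(c));                          // FFFF0000 – pure red, not black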

Strange ‘decay’ can be seen in the noise around the image

An example of something going wrong in the dimming process

Saving down videos, working with multiple videos simultaneously

A key advantage of working in code – as opposed to some other medium – is the ability to quickly make similar copies of what you have produced. This allowed two videos to be created simultaneously from the same webcam footage: one for the live screen, which dimmed with time, and one for an overlaid photo and video, which did not dim. This was as simple as creating two video variables and saving them separately. However, the amount of work done per frame doubled, and saving frames (and, when triggered, whole videos) to the hard drive was quite slow – saving a video took a couple of seconds, and the frame rate dropped from about 57Hz to around 40Hz. I also needed to delete competing .jar files between the video and video export libraries to make them work simultaneously.
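A sketch of the two-buffer idea, assuming the com.hamoid VideoExport library – the PGraphics constructor and method names are from my reading of that library’s examples, so check them before relying on this:

import processing.video.*;
import com.hamoid.*;

Capture cam;
PGraphics live;     // dims over time, shown on the projector
PGraphics archive;  // never dims, kept for the overlays
VideoExport liveOut, archiveOut;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  live = createGraphics(width, height);
  archive = createGraphics(width, height);
  liveOut = new VideoExport(this, "live.mp4", live);
  archiveOut = new VideoExport(this, "archive.mp4", archive);
  liveOut.startMovie();
  archiveOut.startMovie();
}

void draw() {
  if (cam.available()) cam.read();
  live.beginDraw();
  live.noStroke();
  live.fill(0, 8);          // translucent black wash = the dimming
  live.rect(0, 0, width, height);
  live.blend(cam, 0, 0, width, height, 0, 0, width, height, LIGHTEST);
  live.endDraw();
  archive.beginDraw();      // same brightest-pixel overlay,
  archive.blend(cam, 0, 0, width, height, 0, 0, width, height, LIGHTEST);
  archive.endDraw();        // but no fade
  image(live, 0, 0);
  liveOut.saveFrame();      // twice the per-frame work,
  archiveOut.saveFrame();   // hence the drop to ~40Hz
}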

Switching to a new camera

To try to increase the quality of the live footage, a new camera was used: the SJ4000, produced by SJCAM. It could capture to its own memory at 1080p very easily, but as a webcam it was limited in the same way as the integral one – at larger resolutions each frame was encoded, and it was hard to get the higher resolutions working at all, as they weren’t offered as defaults. For lower resolutions, however, the camera’s ‘PC Camera’ mode worked quickly and easily.

Another downside was a complete switch of library to extract frames from video – the standard video library in Processing was not compatible. I ended up using the https://github.com/sarxos/webcam-capture library instead, though getting all the right dependencies in place was difficult – you need to add several .jar files to the Processing sketch, which I hadn’t done before. A further (small) complication was a change in how pixels were returned: the r, g and b values came back as a single array three times the number of pixels long. This meant the ‘guts’ of all the algorithms had to be changed to accommodate the new camera – you now step along the array three values at a time rather than extracting rgb values from, and returning them to, an individual pixel colour.
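The change in the loop ‘guts’ looks roughly like this – a flat [r, g, b, r, g, b, …] byte array stepped through three values at a time (the layout is as described above; how you fetch the array depends on the library):

// dim a flat r, g, b array in place, with the same clamp as before
void dimFlat(byte[] rgb) {
  for (int i = 0; i < rgb.length; i += 3) {
    int r = rgb[i] & 0xFF;      // Java bytes are signed, so mask
    int g = rgb[i + 1] & 0xFF;  // back to the 0-255 range first
    int b = rgb[i + 2] & 0xFF;
    rgb[i] = (byte) max(r - 2, 0);
    rgb[i + 1] = (byte) max(g - 2, 0);
    rgb[i + 2] = (byte) max(b - 2, 0);
  }
}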

Issues with the SJ4000

Video taken with the SJ4000, a clear ‘grid’ pattern can be seen

Despite some initially promising results, the quality from the SJ4000 was ultimately disappointing and we returned to using the webcam on the laptop. The image often had an ‘overlaid grid’ and was extremely grainy in low light. The camera seemed to be very aggressive in changing contrast/brightness in response to changes in light level, leading to sudden bleaching of the image or a total blackout. This caused havoc with the dimming of the image over time. The image was also very noisy – the camera had difficulty ‘holding’ the edge of objects, especially in low light.

Getting computers to talk to each other, without interfering with one another

When saving videos, the MP4 file becomes visible immediately, but it is only completed and released once the video save function has finished running in Processing; only then can further work be done with the saved video. This was solved simply: upon saving, a text file with the same name as the video was created, and a small Python script polled the folder for these text files, using them as the trigger to start processing. There are probably better ways to do this, but it was quick, cheap and easy – good enough. The text files had the added benefit of being able to carry extra information for the later processing, the number of frames in a video for example.
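The watcher was a small Python script in the project; here is the same polling idea, sketched in Processing for consistency with the other examples (the folder and file names are illustrative):

import java.io.File;

String watched = "//LAPTOP/shared/videos";  // a Windows shared folder
ArrayList<String> seen = new ArrayList<String>();

void setup() {
  frameRate(2);  // poll a couple of times a second
}

void draw() {
  File[] files = new File(watched).listFiles();
  if (files == null) return;
  for (File f : files) {
    String name = f.getName();
    if (!name.endsWith(".txt") || seen.contains(name)) continue;
    seen.add(name);
    // the trigger file is only written once the video is complete
    // and released, so the matching .mp4 is now safe to open
    println("ready to process: " + name.replace(".txt", ".mp4"));
  }
}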

These ‘trigger’ files were used in several parts of the process, both to pass information through and to make sure tasks did not try to open the same file simultaneously, which led to errors. This seems similar to the ‘.lok’ files used by some programs to prevent multiple users accessing the same file.

I could have gone further with the passing of information through the trigger files, noting down diagnostic information to help find any likely problems, but the code wasn’t complex enough, or going to be used by others, to require that – I knew it back-to-front on the evening and could find problems quickly.

All file sharing was done over a series of Windows shared folders, with all computers on the same wifi network (one that I provided, not connected to the internet). This allowed me to test the network as carefully as I could, reducing the number of things that would be different on the evening, and reducing the risk to the event from poor, overly constrained wifi at the location.

Trigger files used to pass information and make sure computers did not try to access the same image and video files simultaneously

Overlaying videos, video speeds and how videos compress – why it is hard to get good results

It would seem that videos are far more complicated than they first appear – not just a series of images – compression sees to that. Further issues complicate things: the variable rate of frame capture from the live video (frames are simply processed as fast as they can be) means that a time needs to be attached to each frame to keep the rate of playback consistent, not only between videos but also within one video. Creating an overlay would then have required saving the rendered video frames at different points in time, processing those pixels, then recreating the video – all of which was beyond my ability to pull together in time, and the compromise that would have made it easier, an artificial cap on the framerate during capture, was not acceptable.
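The fix I understood but never finished was to record a timestamp against every captured frame, so a later pass could resample the frames to a constant rate – something like this (illustrative, not project code):

PrintWriter times;
int frameIndex = 0;

void setup() {
  times = createWriter("capture_times.txt");
}

// call for every frame actually captured, however irregularly
void recordFrame(PImage frame) {
  frame.save("frames/frame_" + nf(frameIndex, 5) + ".png");
  times.println(frameIndex + " " + millis());  // index and capture time
  times.flush();
  frameIndex++;
}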

Automating cameras – return of the Raspberry Pi, again

The initial quality of frame grabs was not nearly good enough either to show as stills or to produce the final images for the Kickstarter backers. The image wasn’t good enough, and interrupting the webcam footage to take a better quality still wasn’t possible – it would stop the live footage for the duration of the shot, and the webcam didn’t seem able to ‘open its shutter’ for an extended period, limiting quality to that of the video footage. Fortunately, a good camera was lent to us by Thomas.Matthews – a Canon EOS 5D Mark III, one of the best on the market a couple of years ago and still an excellent camera. Its shutter needed to be triggered by a computer on command, and the images passed to the outside computer for overlaying and display.

A starting point was gphoto2 http://www.gphoto.org/, which can operate a large number of cameras (including the Canon) with a high degree of precision. However, I struggled to make it work on Windows, and my only route to Linux was my Raspberry Pi. Installing gphoto2 on the pi was possible, but triggering the camera was very sporadic – for some reason I seemed to need to restart the pi between each shot, which was unacceptable. Furthermore, I was struggling to make the pi access the Windows shared folders, both to save files and to poll for the trigger files that told it to take a shot. This was a low point in the project – I couldn’t reliably take photos, and couldn’t save them to the network anyway.

Automating cameras – going to other pieces of software

A way forward was found with digicamControl http://digicamcontrol.com/ – it worked well on Windows, and although it didn’t seem to provide quite as much control as gphoto2, the level of control through its Command Line Utility http://digicamcontrol.com/doc/userguide/cmd was fine – saving to wherever I specified, with control over ISO, shutter time and aperture. I was then able to use a command call from Python to take photos with the Canon, triggered by writing a trigger text file from the Processing script – a process so fast that it didn’t cause any jumps in the playback.
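In the project the call was made from Python; the same idea can be sketched from Processing using its exec() function – the digicamControl path and flags here are taken from my reading of the linked docs and should be treated as an illustration, not the project’s code:

// hypothetical trigger for one long-exposure shot
void takePhoto(String path) {
  exec("C:/Program Files (x86)/digiCamControl/CameraControlCmd.exe",
       "/filename", path,
       "/iso", "800",
       "/shutter", "2s",
       "/aperture", "5.6",
       "/capture");
}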

For displaying the images, FastStone Image Viewer had all the features I needed – in particular, updating the slideshow as new images were added to the folder.

Great hardware makes a real difference

Initial testing of the Canon with overlaid photographs

Test overlay at the venue with an early trace

It’s always the stuff you forget – we needed a stand to put the cameras on, we made it on the day, and it was bad

A lot of work and experimentation went into the programs that allowed LightTraceMemory to function; nearly none went into the design of the stand the projector sat on – just a trolley with a cloth over the top, with holes cut on the day for the cameras. The stand was therefore completed in a rush and failed regularly (though never too severely), with pieces of cloth drifting into the frame, and the camera sat on a pile of teabag boxes so it could see over the laptop. In the light it looked like a misshapen dalek – a weird and malign presence in the room, and right in the middle of it. I fear it was the ‘worst object in the LFA’. The slow-down in frame rates might even be explained by the computer getting overly hot from the projector below it.

The saving grace of the camera dalek

Bizarrely, no-one noticed the giant camera dalek – in the dark it became nearly invisible. The bright screen beyond, the pendulums in the foreground and the link between them completely drew the attention in the room. An amazing number of people asked where the camera and projector were ‘hidden’.

Simplify, simplify – get rid of everything extraneous

and make it work on the night.

Many pieces of code were omitted from the final event, with only those that were reliable taken forward. Overlaying video on photos was replaced by overlaying high quality images, and video saving was not used – this left the project surprisingly close to its original intention, and on the whole at a good level of quality and reliability. The frame rates on the preview were lower than we wanted and almost no video footage was saved, but the photos were of excellent quality, and are excellent records of the event.

Reasons not to use the video functions during the evening: a slowdown to 10-15Hz leads to very strange results
