Noise-meter – with raspberry pi, and visualising its output

Results:

I was interested in the noise levels in a space, and wanted to visualise how they changed over time. However, I wanted to know more about the distribution of noise levels over time, something that any single measure could not really provide, and even tools like a series of box plots felt clumsy. This is because the noises I am interested in are not the loudest noises, they typically increase the 60th-80th percentiles, though not always. I don’t have a strong intuition about an appropriate model for the noise levels, so choosing a model for its representation didn’t feel appropriate, so creating a graphic that moves me as close to the raw data as possible seemed the best idea.

I drew inspiration from Edward Tufte’s wavefields (and here), spectrograms, drawing histograms over time, and my previous experimentation with violin plots. I also like the Tensorflow Summary Histograms – though I think my solution is a bit lighter and more intuitive to read for my data, which is very unlikely to be multi-modal.

Tensorflow Summary Histogram

A summary histogram clearly showing a bifurcation in a distribution over some third variable.

I wanted to take the spectrum of noise levels for each minute and plot that distribution in a fairly continuous way over time. In the end I cheated to get the effect I wanted – I drew a line graph with 60 series (the quietest reading in each minute forms the first series, then the second quietest and so on), and the rasterization process when this is saved as an image makes it appear like a spectrum – but the results seem effective, giving an intuitive sense of how the distribution of noise levels has varied over time, with a minimum of interpretation forced by the visualisation – I feel quite close to the data.

24 hours of data viewed as a spectrum

Plot of 24 hours of noise levels (y-axis is noise level in dBA) – changes in the distribution of noise levels are immediately obvious – from a reduction in the variance overnight, to occasional increases in the volume (at all percentiles) during the day when there is activity near the sound-meter.
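For anyone wanting to reproduce the effect, a minimal sketch of the plotting trick is below, assuming the per-minute readings have already been loaded into a dict of timestamp → list of dB values (the full script I actually run on the pi is further down):

import pandas as pd
import matplotlib
matplotlib.use('Agg')          # headless-friendly backend (see the note on backends further down)
import matplotlib.pyplot as plt

# minutes: {timestamp: [60 dB readings]} - assumed to be loaded from the CSV files elsewhere
minutes = {}
rows = {ts: sorted(vals) for ts, vals in minutes.items() if len(vals) == 60}
spectrum = pd.DataFrame.from_dict(rows, orient='index').sort_index()

for rank in spectrum.columns:  # one series per rank: quietest, second quietest, ...
    plt.plot(spectrum.index, spectrum[rank], linewidth=0.5)
plt.ylabel('dBA')
plt.savefig('spectrum.png', dpi=200)  # rasterisation blends the 60 lines into a 'spectrum'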

I wanted to be able to see this distribution online at any time, so just set the raspberry pi to generate and upload a graph each hour, using the previous 24 hours’ data – not fancy, but does what I want! I will make more graphics when I get round to it, increasing interactivity on the web page, taking longer and shorter periods, plotting previous mean values of noise at that time over the last week, and so on.

Getting it done:

I used a sound level meter and connected this to a raspberry pi; the raspberry pi queries the meter for a reading each second, as per instructions from http://www.swblabs.com/article/pi-soundmeter and https://codereview.stackexchange.com/questions/113389/read-decibel-level-from-a-usb-meter-display-it-as-a-live-visualization-and-sen. These readings are then saved each minute into a CSV file on the raspberry pi. This script is started automatically on startup of the pi, so should run whenever the pi has power.


#!/usr/bin/python

import sys
import usb.core
import requests
import time
import datetime
import subprocess

streams="Sound Level Meter:i"
tokens=""

dev=usb.core.find(idVendor=0x16c0,idProduct=0x5dc)

assert dev is not None

print dev

print hex(dev.idVendor)+','+hex(dev.idProduct)

#create the first file in which to save the sound level readings
sound_level_filepath = "/home/pi/Documents/sound_level_records/"
now_datetime_str = time.strftime("%Y_%m_%d_%H_%M",datetime.datetime.now().timetuple())
sound_level_file = open(sound_level_filepath + now_datetime_str,"w")

while True:
    #every minute create a new file in which to save the sound level readings
    now_datetime = datetime.datetime.now()
    if (now_datetime.second == 0): #(now_datetime.minute == 0) and:
        sound_level_file.close()
        now_datetime_str = time.strftime("%Y_%m_%d_%H_%M",now_datetime.timetuple())
        sound_level_file = open(sound_level_filepath + now_datetime_str,"w")
    time.sleep(1)
    ret = dev.ctrl_transfer(0xC0,4,0,0,200) #query the meter over USB (vendor-specific control transfer, as per the linked articles)
    dB = (ret[0]+((ret[1]&3)*256))*0.1+30 #convert the raw two-byte reading to a dB value
    print time.strftime("%Y_%m_%d_%H_%M_%S",now_datetime.timetuple()) + "," + str(dB)
    sound_level_file.write(time.strftime("%Y_%m_%d_%H_%M_%S",now_datetime.timetuple()) + "," + str(dB) + "\n")

Each hour, a scheduled task on the raspberry pi (using cron) creates a graph of the previous 24 hours of data and uploads it to my website, behind a username and password, so I can see the results by visiting the page.
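The cron entry looks something like the line below – the script path here is illustrative rather than my exact one:

# run the graph-and-upload script at the top of every hour
0 * * * * /usr/bin/python /home/pi/Documents/make_and_upload_graph.py

The graphing and uploading code itself is below.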


#!/usr/bin/python

print "also hello there!"
import time

import numpy as np
import seaborn as sns
import pandas as pd
print "importing matplotlib"
import matplotlib
print "finished importing matplotlib"
print "importing pylab"
import pylab
print "finished importing pylab"
from os import listdir as listdir
from datetime import datetime
from datetime import timedelta
from dateutil.parser import parse
import glob
import os
import ftplib

def myFormatter(x,pos):
    return pd.to_datetime(x)

current_time = datetime.now()

#combined_dataframe = pd.DataFrame(columns=np.arange(60).tolist())
combined_dataframe = pd.DataFrame()
x_index = []
list_of_filenames = []
print combined_dataframe

sns.set(color_codes=True)

list_of_filenames.append(glob.glob('/home/pi/Documents/sound_level_records/' + time.strftime("%Y_%m_%d",current_time.timetuple()) + '*'))
list_of_filenames.append(glob.glob('/home/pi/Documents/sound_level_records/' + time.strftime("%Y_%m_%d",(current_time-timedelta(days=1)).timetuple()) + '*'))
list_of_filenames = [item for sublist in list_of_filenames for item in sublist]

print len(list_of_filenames)

#import data from each minute
for filename in list_of_filenames:
    x_ordered_title = datetime.strptime(os.path.basename(filename), '%Y_%m_%d_%H_%M')
    time_difference = current_time-x_ordered_title
    if time_difference.days*86400+time_difference.seconds<86400: #24 hours
        x = pd.read_csv(filename, header=None, names=['timestamp','dB'])
        x_ordered = x.sort('dB')
        x_ordered_data = x_ordered['dB'].tolist()
        if len(x_ordered_data) == 60:
            x_dataframe = pd.DataFrame(np.reshape(x_ordered['dB'].tolist(),(1,60)))
            x_index.append(x_ordered_title)
            combined_dataframe = combined_dataframe.append(x_dataframe)
combined_dataframe.index = x_index
combined_dataframe = combined_dataframe.sort()
combined_dataframe.sort_index(inplace = True)

fig = matplotlib.pyplot.figure(dpi = 200, figsize = (10,10))
jet = matplotlib.pyplot.get_cmap('jet')
cNorm = matplotlib.colors.Normalize(vmin=1, vmax=60)
scalarMap = matplotlib.cm.ScalarMappable(norm=cNorm, cmap=jet)

for count in range(2,58):
    colorVal = scalarMap.to_rgba(count)
    fig = matplotlib.pyplot.plot(combined_dataframe.index,combined_dataframe.xs(count,axis=1), linewidth = 0.5, color=colorVal)

pylab.savefig('/home/pi/Documents/Sound_meter_graphs/test.png')
try:
    ftp = ftplib.FTP("davidjohnhewlett.co.uk","user","password")
    ftp.set_pasv(False)
    ftp.cwd("/public_html/sound_levels/")
    f_file = open('/home/pi/Documents/Sound_meter_graphs/test.png','rb')
    ftp.storbinary('STOR test.png', f_file)
    ftp.close()
except:
    print "oh dear"

Looking back at this project, it has taken a surprisingly familiar form – I have largely constructed it out of existing pieces of code, bolting them together. Even my previous practice with connecting to the raspberry pi using SSH has been helpful, transferring code to and from the raspberry pi whilst it is running in headless mode.

One particular issue I had not come across before, and which was quite difficult to diagnose, was the raspberry pi not creating graphs when in headless mode, even though it produced the graphs whenever I tested it plugged in. Perhaps unsurprisingly, the difference between the two was that I had a screen connected when I was testing the pi, but not when the pi was in working mode. Having a screen connected was significant because the backend for matplotlib was not loaded when no screen was connected – I needed to change the backend.

Interestingly:

import matplotlib
matplotlib.use('Agg')

did not work as it had for others. After a lot of frustration, changing the backend in the configuration file for matplotlib to ‘Agg’ seemed to work, as discussed here.
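For anyone hitting the same problem: the file to edit is matplotlibrc (its location can be found with matplotlib.matplotlib_fname()), and the relevant line is simply:

backend : Agg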

Some visualisations of the catastrophic surface described by the Swallow Tail Pavilion

Just for interest, I reused parts of the code for LightTraceMemory to animate the catastrophes for the Swallow Tail Pavilion for the Chelsea Flower Show. Results below:

Icosahedron v3: More available – and an IABSE Future of Design Finalist

The third version of the icosahedron moves towards a more typical size, but puts its complexity in different places to make it more available and accessible for someone who wanted to build their own. All the geometric complexity (including the precious ‘compass walk’ for marking out the locations of the magnets on the wooden spheres for the nodes, and the end details of the rods) is taken into the computer, and output as a 3D print. You can see the whole icosahedron here and the individual nodes here. For both, user: Rudolf; password: Laban.

These 3D prints can be ordered from any 3D print shop with ease – marketplaces like 3Dhubs provided a good place to start – so no in-depth understanding of 3D printing is needed. The icosahedron has been reduced to a similar level of complexity in construction as a piece of flat pack furniture.

The ‘noding out’ of forces in the nodes, and the brittleness of the magnets limiting the force on each of the nodes, mean a clear resin – an unusual choice for structural components due to its brittleness – works well. The resin’s translucency reveals the magnets nicely, and its detailed prints and small layer thicknesses reduce the required tolerances, as well as producing a beautiful and nearly imperceptible ‘grain’ to the nodes as the layers reach the top of the sphere. Unfortunately, the tolerances are not small enough to create an acceptable interference fit with the magnets, so the icosahedron needs some of its magnets glued in place.

Drying the glue attaching the rod end connections to the rods – what else would such a chair have been designed for?

Rod with glued attachment onto a flat end and magnet (a true interference fit in the fitting, no glue needed, it just relies on the expansion of the attachment)

3D printed node with magnets inserted

3D printed node, rods and rod end attachments forming part of the constructed icosahedron

Other improvements have been made: most noticeably a change in the balancing system for the icosahedron, with the ‘feet’ of previous models replaced with extremely fine thread, a magnet and a steel sphere. This change has been a great step forward, for perhaps surprising reasons:

  • the icosahedron is more platonic as the threads do not ‘read’ at any distance
  • the new system isn’t immediately perceptible, giving a sense of magic until this small puzzle about how the icosahedron stands up is solved
  • they reduce the length of the package needed to carry the icosahedron by about 300mm, making it easier to fit into small cars
  • the strings let the icosahedron adjust to slightly non-flat surfaces with ease

The constructed icosahedron, note the new balancing system at the base

Constructing the v3 icosahedron

Deconstructing the v3 icosahedron

Another change has been attaching each of the nodes directly to a rod, trying to attach in places where tension will be present (some of the forces are larger due to the change in stability system). The nodes at each corner of the ‘table plane’ have been attached to the horizontal rod that defines the side of the door plane, carrying the tension across a glued joint rather than a more brittle magnet. This attachment also stops stray nodes running across the floor when the icosahedron is collapsed! The pre-attachment of the nodes makes construction of the icosahedron a little easier.

This icosahedron was provided with an ultimate test of its portability – I took it on the plane to Edinburgh to present in the IABSE Future of Design Competition as a finalist – the icosahedron behaved perfectly, surviving the trip, coming together at the end of the lunch break and collapsing (when I hit it hard) mid-presentation! This felt quite daring at the time! It was beaten by some really fantastic work on pre-cast concrete joints inspired by traditional Korean joinery in my half of the presenters – but I was really pleased just to show the icosahedron to other engineers, despite it really being a bit of fun – special thanks go to Eva MacNamara at Expedition for reviewing and helping me improve my entry, as well as pointing the icosahedron in my direction in the first place. My entry paper is here: David Hewlett Laban Dance Icosahedron IABSE Future of Design 2018.

Following its trip to Edinburgh, the icosahedron was handed over to the Keep Fit Association at a small training session.

v3 Icosahedron with the Keep Fit Association

 

The Modern Timber House in the UK, New Paradigms and Technologies, by Peter Wilson

I attended a discussion at the Building Centre based around Peter Wilson’s book ‘The Modern Timber House in the UK’. It was completely packed – so I could never see the speakers – and the high level display had some unhelpful slides on a loop – so I was lucky copies of the book were handed out for free! I might question the commercial wisdom of handing out a free copy of your new book to the 200 or so people most likely to buy it, but thanks!

As the speakers and reviewer pointed out, timber construction has moved forward enormously in the UK over the last 10 years. Previous editions of this book probably would have had to look far beyond the UK to find suitable examples, and it would seem like future editions could feasibly focus on individual cities and areas. It seems like London (home of the tall timber house [roughly 6 stories and up and ‘affordable’ timber housing]) and the Isle of Skye [small thermally efficient homes in exposed and inaccessible locations] could merit books of their own some day.

The key benefits of timber seem to be: improved quality through extensive prefabrication (leading to the relative ease of tricky geometries and easily achieved airtightness), lightness (reducing required foundation capacity and traffic to site), speed of construction on site, and carbon sequestration capacity. Solid timber was noted for its good acoustic insulation properties – one guest at the talk told of its use in flats next to rail lines, allowing better utilisation of these sites with quality housing. For those with the ability to take a project from start to end as a 3D model with complete confidence, many of these ‘wins’ become quite standard. Though they aren’t mentioned in this book, which openly tries to persuade in favour of timber, drawbacks seem to be: a lagging insurance and building standards industry, economy at small scale, careful detailing to avoid continuously wet wood, and lower structural strengths limiting development heights and increasing the volume of structure. Slender structure is often curtailed by fire regulations, even when disproportionate collapse requirements are satisfied.

Many of the houses in the book feel very ‘grand designs’ – there isn’t much optimisation to be seen, and a lot of personal effort. To create something worthwhile, a system that offers the right level of optimisation for a variety of sites seems necessary, at least for the London market. This would offer 3-8 stories with a variety of plan geometries, and completely standardised details. For small dwellings the Isle of Skye and the Scottish Highlands seem to have found something of a similar level of repetition and customisation through companies like Heb Homes.

A few favourites:

Overall – worth a look through to identify the best projects and what makes them different from the mediocre ones. The book would have benefitted from many more diagrams: many passages of text are technical and hard to follow in writing, and a few diagrams would have helped these passages greatly. I feel the future of timber is in its ability to be easily manipulated by CNC machines, allowing high quality projects with a high degree of automated design – much as opendesk do with furniture today – so a chapter on the use of CNC to develop increasingly complex junctions, and increase the proportion of projects automatically designed, would have been of interest to me.

Laban Dance Icosahedron – bigger and better

I was approached to produce another icosahedron, with a few small alterations. This one was to be larger – just under 2m tall at the top nodes – than the prototype, but the same in as many other ways as possible.

This greater size meant the weight of the icosahedron was going to be greater, by about 15%, and the critical buckling load of the rods was going to be lower, reducing to around 63% of the previous iteration’s – therefore I expected the rods to be around twice as close to their buckling load as found previously. For robustness this required a move to 12mm diameter rods for the bottom members of the icosahedron, which has a minimal visual impact. The icosahedron stands fairly robustly with ‘all 9mm diameter’ rods, but the softening due to axial loading leads to significant vibration when it is touched – hardly ideal for long exposure photography!
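As a rough sanity check (treating the rods as pin-ended Euler struts of unchanged section – my assumption rather than a full analysis), the buckling capacity scales with the inverse square of the rod length:

$$P_{\mathrm{cr}} = \frac{\pi^2 E I}{L^2} \quad\Rightarrow\quad \frac{P_{\mathrm{cr,new}}}{P_{\mathrm{cr,old}}} = \left(\frac{L_{\mathrm{old}}}{L_{\mathrm{new}}}\right)^{2} \approx \frac{1}{1.26^{2}} \approx 0.63$$

so rods roughly a quarter longer give about 63% of the previous buckling capacity.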

Aside from the size of the rods, a couple of other changes were made: smaller magnets were used in the nodes – the prototype demonstrated the larger magnets weren’t needed. The smaller magnets helped to reduce the congestion on the interior of the nodes, and allowed for a better quality of drill-hole without a pilot hole. The benefits of the reduced congestion were particularly apparent in the nodes mounted on the vertical legs, where room needs to be left for a rod to pass through as well. A further alteration from the prototype was that every magnet was glued in place, rather than allowing those that could to rely on friction – the prototype had occasionally ‘shed’ magnets as the wooden rods changed shape slightly, stopping the friction grip, so magnets that were unremovable at manufacture became loose over time.

A final change was placing the nodes at the top of the vertical elements directly on the rods rather than using a magnetic connection. This increased the robustness of the icosahedron but also had a key manufacturing benefit – I didn’t need to drill an end hole in a very long rod, reducing the risk of spoiling rods and also removing the need to re-jig. The large holes in the top nodes were made using the same setup as the ones that lie along the vertical rods.

The icosahedron was delivered on time and budget, and looks good! See below:

A final improvement for the icosahedron was finding a new way to construct it with two people. To start, as many rods as possible were laid out on the floor:

Then one person provides support at the top of the icosahedron, holding the top rod in place, while the other person builds up the rest of the structure, with the benefit of two more ‘anchored’ points at the top of the icosahedron to hold everything in place – so this is quite easy.

I am looking to develop the design again – maybe experimenting with 3D printing or CNC manufacture to increase the build quality beyond what I can do by hand, so if you would like me to make you an icosahedron – let me know on david@davidjohnhewlett.co.uk.

Offloading computing tasks to another computer with SSH – getting started using a raspberry pi

SSH into a raspberry pi – why bother?

I am interested in learning how to upload to, run programmes on, and download files from other computers – it is a common situation at work that long analyses are run, clogging up a computer that could be used for other tasks whilst that analysis is running. This could be solved by commandeering a spare computer across the office…

However, the ability to push files across the network, run analysis on them, then receive the results for post processing has much greater power and interest – many analyses could be run simultaneously for a grid-search optimisation, increasing their value in parametric design, or much larger analyses passed to faster computers. In both cases this communication with the raspberry pi is a stepping stone to communication with computers in the cloud – either very many small computers or a single very large computer. Amazon Web Services offers small computers (about the power of an average desktop) for about £0.02-£0.20/hour. The fastest computers, at about £1-£5/hour, appear to be about 10-20 times faster than a typical desktop for applications that use many cores effectively. These prices can be significantly discounted for ‘spot pricing’ and ‘reserved instances’, where immediate use of computing power isn’t guaranteed. This seems like a great way of not only getting bigger jobs done a little faster, but getting much more detailed results from grid-searches and other optimisations – where single programs are run many times with different inputs. This is standard practice in many engineering and scientific environments, where most of the work is performed on large servers – with the human interface being a low-power laptop in a coffee shop.

Image from my final year project on the indentation of tubes – a grid search of different failure mechanisms of tubes subject to an indenting force. All of the processing of these models was performed on a server overnight, and the models generated parametrically.

Result of grid-search, not as dense as I would have liked. This was run on a server using a similar technique, but ran sequentially rather than using many machines at once.

As an aside – this method of sending files was considered for the LightTraceMemory project, but was rejected in favour of shared windows folders which were easier and faster to set up.

Connecting to the Raspberry Pi

I got started with the Raspberry Pi 3 and the Raspberry Pi Zero W. I installed Raspbian on both using NOOBS, then used the system configuration to turn on SSH and connected to the internet, either via an Ethernet cable or using the built-in wifi. Finding the IP address of the Raspberry Pis just needed ‘ifconfig’ in the terminal on the pi – n.b. NOT ‘ipconfig’ – that is for windows.

At this point ‘ping 192.168.1.XX’ from my PC was working when I pinged the Pi’s address.

To find the IP address of a Pi when I didn’t have a way of finding it from the Pi itself (so when I was running the pi ‘headless’) I used ‘Zenmap’. This program pings every IP address in a range to find the ones that respond, though I’m sure I will find more features of how it works over time. Zenmap is from here: https://nmap.org/zenmap/. As I am working on my home network, every IP address takes the form 192.168.1.XX where XX is between 1 and 255 inclusive (I think). To search this range, I searched 192.168.1.1/24 in zenmap. The ‘ping scan’ option worked fine and was fairly quick to run, about 5 seconds – it returns the names of the computers on the network and their IP addresses.
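Zenmap is just a GUI front end for nmap, so I believe the equivalent scan from a plain command line would be something like:

nmap -sn 192.168.1.1/24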

To establish a connection with the Raspberry Pi I used PuTTY – which allows you to log onto another computer on the network. In particular, it allows you to communicate with a computer running linux when you are running windows. I have had some experience with PuTTY during my final year university project, where I needed to run large ABAQUS simulations on the server, but I was blindly following instructions to make it work, rather than trying to understand so I could use it to its full potential for other uses. Typing in the IP address of the pi and opening a window gives you a terminal window onto the pi, where you can execute commands – so this is just like operating the pi in terminal mode.

Nmap window showing computers on the network

 

PuTTY setup window ready to connect to the Raspberry Pi


PuTTY window for Raspberry Pi

Sending and receiving files to and from the Raspberry Pi:

Use PSCP (Putty Secure Copy Program) in windows with the format:

pscp C:\Users\david\OneDrive\Projects\SSHTesting\example.txt pi@192.168.1.106:example2.txt

then using ‘ls’ in the Raspberry Pi’s PuTTY window will list the directory, and show the file has been sent. Without more complicated setup, the password of the raspberry pi will need to be typed each time – I solve this later by setting up keys that allow secure authentication.

To retrieve a file from the Raspberry Pi and copy it onto the local computer, pscp is used again, for example:

pscp pi@192.168.1.106:write_example2.txt C:\Users\david\OneDrive\Projects\SSHTesting\write_example.txt

So it is the same pscp – source – target structure; when the source or the target isn’t on your own computer, you need to specify who you are and where – so user@ipaddress.

Another method that is easier, but doesn’t build towards the final goal of uploading files, running programs on them, and downloading the results, is an FTP programme like Filezilla, which allows transfer of files using a drag and drop interface.

Running a program on the raspberry pi to manipulate a file:

We have now achieved the minimum behaviour wanted – we have a way of sending files to and from the pi, and running commands on the pi. Therefore, we can upload an input file and a program to manipulate it, run that program in the pi, and download the file output. There are several refinements we might want to add to this – running the programme from the same window or batch file. A start is with ‘plink’ [PuTTY Link] which is part of PuTTY.

For example we can upload a text file: example.txt.

Then manipulate it with the following python script which just removes half the letters and saves a file – sys.argv[1] gives the first argument provided after a call, the zeroth is the name of the python file itself:


import sys
readfilepath = sys.argv[1]
writefilepath = sys.argv[2]
readfile = open(readfilepath, 'r')
writefile = open(writefilepath, 'w')
letters = readfile.read()
for i,letter in enumerate(letters):
    if i%2==0:
        #print letter
        writefile.write(letter)
readfile.close()
writefile.close()

Then download the written file back to the PC.

The following commands do that all from one window:


pscp C:\Users\david\OneDrive\Projects\SSHTesting\example.txt pi@192.168.1.106:example2.txt

plink pi@192.168.1.106 python text_halver.py example2.txt write_example2.txt

pscp pi@192.168.1.106:write_example2.txt C:\Users\david\OneDrive\Projects\SSHTesting\write_example.txt

There are still issues before this can become one batch file: the computer still asks for the Raspberry Pi’s password at every command. One way around this is to pass the password as an argument to plink – but this isn’t recommended for obvious reasons. Therefore, a more permanent connection to the Pi needs to be made so that plink can give commands as part of a windows batch file.

Creating a secure connection to the Raspberry Pi:

A way to create a more secure connection between the PC and the Raspberry Pi is with a private–public key pair – something closely integrated with PuTTY through PuTTYgen (the PuTTY Key Generator) (https://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter8.html#pubkey) and Pageant (which stores keys in memory to avoid typing in passwords several times over a session) (https://the.earth.li/~sgtatham/putty/0.58/htmldoc/Chapter9.html#pageant).

Firstly, make the private and public keys in PuTTYgen, then save each in the .ssh folder in windows (User/David/.ssh in my case – I had already made this from a previous experiment with encrypted email), for example as myprivatekey.ppk and mypublickey.pub. Copy the text of the public key into the authorized_keys file in the ~/.ssh folder on the Raspberry Pi – if you don’t want to do this on the pi itself, then copy it in using vim (itself an education) over the PuTTY command line, or transfer it as a file through Filezilla or pscp (I had issues getting into the .ssh folder). https://dvpizone.wordpress.com/2014/03/02/how-to-connect-to-your-raspberry-pi-using-ssh-key-pairs/ was very useful and I followed its instructions on changing the authorized_keys file and the sshd_config file on the pi. https://askubuntu.com/questions/204400/ssh-public-key-no-supported-authentication-methods-available-server-sent-publ was also helpful – I had saved directly from PuTTYgen and ended up with a key in the wrong format for this task. It needs to be in the format “ssh-rsa AAAAB3yc2EA…kqal= rsa-key-20121022”, for example – as per the linked page.

Once the public key was on the pi, the private key loaded into Pageant (with Pageant running), the private key location and username entered into PuTTY, and password login turned off on the pi, logging in was straightforward – just open PuTTY and you are straight in, securely!

Successful login to Raspberry using keys

This clears the way for plink and other commands to run without requiring a password, which in turn clears the path for them to run in a batch file.

Testing with plink and other commands went well – as expected pageant needs to have been started, but PuTTY doesn’t seem to need to have already been used to make it work.

Finally, I can put those three commands into a batch file (just save a text file as test.bat containing the commands as you would type them), then run the batch file to upload a text document to the raspberry pi, run the program on the file to create a new one, then download the result – which is what was wanted.
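For example, test.bat can simply contain the three commands from earlier (with Pageant running so that no passwords are requested):

REM upload the input file, process it on the pi, then download the result
pscp C:\Users\david\OneDrive\Projects\SSHTesting\example.txt pi@192.168.1.106:example2.txt
plink pi@192.168.1.106 python text_halver.py example2.txt write_example2.txt
pscp pi@192.168.1.106:write_example2.txt C:\Users\david\OneDrive\Projects\SSHTesting\write_example.txt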

More can be done to create more complex batch files, or using the capabilities of windows powershell, to automate the passing and processing of files.


 

LightTraceMemory part 2 – steps and missteps

Getting to hello world – a video that dims live

We had a previous example of a processing script that took the brightest pixel from a video (with reliable and known frame rates etc.) – the next challenge was to make sure that this could be replicated for frames grabbed from a webcam, and to introduce dimming.

Raspberry Pi Camera, a bad start

Initial attempts were made using a Raspberry Pi with the Raspberry Pi Camera V2. The first few steps were very easy: taking a photo, saving a photo to the drive, showing a video preview on the computer screen. However, I completely failed to install and run Processing at an acceptable speed on the pi – I could have moved to Python with OpenCV and similar libraries, but it seemed like the wrong path, as there were lots of issues getting the right dependencies installed to get video working at all, and the general rate of progress was slow due to the computer speed. Working with live video looked like it would be completely impossible – I moved to the windows laptop and its webcam, with a view to moving to a different and better quality camera later. This reflects a theme I have encountered in the past (and later in this project) – Raspberry Pis being very frustrating to work with – they seem to be remarkably unforgiving… In the future I will try to do all experimentation on a larger computer, then move that to a Raspberry Pi later on if that is what the project needs.

Switching to webcam, getting past minimum levels of quality

Initial examples in processing like those in https://processing.org/tutorials/video/ worked well on the laptop. There were a few hiccups making sure I called the correct camera, and in making sure I loaded, manipulated and then showed the pixels correctly, but good progress was made. Having fulfilled these initial steps, I was fairly confident that I could start on my own path and produce the project.

One of the very first successes with a dimming video

A first use of the dimming video with a light source

First play with filters to reduce presence of background, this was taken in quite bright conditions with a phone screen

However, improving the quality beyond this baseline later proved remarkably difficult. It appears that any image coming out of the webcam (and any other camera) is limited to around 800×400 pixels at 60Hz before each frame is encoded in a .h264 format, making it hard to read, especially for someone unfamiliar with video. This severely limited the quality of the output.

Switching to bitwise arithmetic, and learning about colour in ARGB

Manipulating the colours used in the images was very easy – just pull out the RGB values, manipulate those as integers, then put them back into the pixel – this is repeated for every pixel in the image. However, all functions need to be at the lowest level possible to make this loop as quick as it can be. Moving from functions such as:

r = red(img.pixels[loc]);
g= green(img.pixels[loc]);
b= blue(img.pixels[loc]);
//do something to these values
c = color(r,g,b);
pixels[loc] = c;

to ones like

r=(c>>16)&0xFF;
g=(c>>8 )&0xFF;
b=c&0xFF;

c = 0xFF000000 | (r << 16) | (g << 8) | b;

Each pixel is in ARGB (Alpha Red Green Blue) format, so shifting the bits across, then masking off all but the final byte (& 0xFF), gives the r, g, b values as quickly as possible. This increased the framerate of my program from around 20Hz up to about 58-60Hz – from a pure video framerate of about 60-62Hz.

As the ARGB values are being altered directly – there are some unexpected effects.

From google’s colour picker – 255,0,0 produces pure red

There is some error for small numbers – though I thought it was largely solved by setting colours that were nearly black to exactly black. This error came from the subtraction of small numbers in binary, with some becoming negative. Depending on the representation, a negative number can look like a large positive one – zero is 00000000, one is 00000001, two is 00000010, but negative one is 10000001 in a sign-magnitude system and 11111111 in two’s complement – therefore, dimming past black without stopping leads to a very large value – pure white or a strong colour – producing a strange ‘decay’ – this then leads to a second ‘decay’ in red or some other colour. I still don’t quite understand some of the behaviours seen – why don’t the colours come back again and again?
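A quick illustration of the wrap-around (in Python/numpy rather than the Processing used in the project, purely because it is easy to run): subtracting past zero in unsigned 8-bit arithmetic jumps back up to a large value, which is exactly the sudden bright ‘decay’ described above.

import numpy as np

channel = np.array([2, 1, 0], dtype=np.uint8)  # three nearly-black channel values
dimmed = channel - np.uint8(1)                 # dim each one by a single step
print(dimmed)                                  # prints [  1   0 255] - the 0 wraps around to 255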

Strange ‘decay’ can be seen in the noise around the image

An example of something going wrong in the dimming process

Saving down videos, working with multiple videos simultaneously

A key advantage of working in code – as opposed to some other medium – is the ability to quickly make similar copies of the code produced. This allowed the creation of two videos from the same webcam footage simultaneously: one for the live screen which dimmed with time, and one for an overlaid photo and video which would not dim over time. This was as simple as creating two variables for videos and saving them separately. However, the amount of work being done per frame doubled, and saving frames (and, upon saving, videos) to the hard drive was quite slow – saving a video took a couple of seconds, and the frame rate reduced from about 57Hz to around 40Hz. I needed to delete competing .jar files for the video and video export libraries to make them work simultaneously.

Switching to a new camera

To try to increase the quality of the live camera footage, a new camera, the SJ4000 produced by SJCAM, was used. The camera was able to capture to its own memory at 1080p resolutions very easily, but was limited as a webcam in the same way as the integral webcam – it encoded each frame at larger resolutions, and it was also hard to get it working at higher resolutions as they weren’t offered as a default. However, for lower resolutions the ‘PC Camera’ mode on the camera worked very quickly and easily.

Other downsides were a complete switch of library to extract frames from videos – the standard video library in processing was not compatible. I ended up using the https://github.com/sarxos/webcam-capture library instead, though making sure I had all the right dependencies installed was difficult – you need to add several files to the processing script, which I hadn’t done before. A further (small) complication was a change in the returned pixels – r, g and b values were returned as a 3 × (number of pixels) array. This meant that the ‘guts’ of all the algorithms needed to be changed to accommodate the new camera – you now move along the array of channel values rather than extracting and returning rgb values from an individual pixel colour.

Issues with the SJ4000

Video taken with the SJ4000, a clear ‘grid’ pattern can be seen

Despite some initially promising results, the quality of results from the SJ4000 was ultimately disappointing and we returned to using the webcam on the laptop. The image often had an ‘overlaid grid’ and was extremely grainy in low light. The camera seemed to be very aggressive in changing the contrast/brightness due to changes in light levels, leading to sudden bleaching of the image, or a total blackout. This caused havoc with the dimming of the image over time. The image was also very noisy – the camera had difficulty ‘holding’ the edge of objects, especially in low light.

Getting computers to talk to each other, without interfering with one another

When saving videos, the MP4 file becomes visible immediately, but it is only completed and released once the video save function has run in processing. Once this is saved, further work can be done using the saved video. This was solved simply by creating a text file with the same name as the video upon saving; a small python script polled the folder for these text files and used this as a trigger to start the processing. There are probably other and better ways to do this but this was quick and cheap and easy – good enough. The text files had the added benefit of being able to store more information that informed the later processing – the number of frames in a video, for example.

These ‘trigger’ files were used in several parts of the process to pass information through, and to make sure tasks did not try to open the same file simultaneously, leading to errors. This seems to be similar to the ‘.lok’ files used by some programs to prevent multiple users from accessing the same file.

I could have gone further with the passing of information through the trigger file, noting down diagnostic information allowing me to find any likely problems, but the code wasn’t complex enough or going to be used by others to require that – I knew it back-to-front on the evening and could find problems quickly.

All file sharing was done over a series of Windows shared folders, with all computers on the same wifi network (one that I provided and that was not connected to the internet). This allowed me to test the network as carefully as I could, reducing the number of things that were different on the evening, and reducing the risk to the event from poor, overly constrained wifi on location.

Trigger files used to pass information and make sure computers did not try to access the same image and video files simultaneously

Overlaying videos, video speeds and how videos compress – why it is hard to get good results

It would seem that videos are far more complicated than they first appear – not just series of images – compression sees to that. Further issues complicate things: the variable rate of frame capture from the live video (frames are just processed as fast as they can be) means that a time needs to be attached to each frame to ensure the rate of playback is consistent, not only between videos but also within one video. Creating an overlay then required saving the rendered video frames at different points in time, processing these pixels, then recreating the video – all of this was beyond my ability to pull together in time, and the compromises to make it easier, such as putting an artificial cap on the framerate during capture, were not acceptable.

Automating cameras – return of the raspberry pi, again

The initial quality of screenshots was not nearly good enough to either show as stills or to use to produce the final images for the Kickstarter backers. The image wasn’t good enough, and interrupting the webcam footage to take a better quality still photo wasn’t possible – it would stop the live footage for the duration of the shot, and it didn’t seem that the webcam would be able to ‘open a shutter’ for an extended period of time – limiting the quality to that seen in the video footage. Fortunately, a good camera was lent to us by Thomas.Matthews – a Canon EOS 5D Mark III – which was one of the best in the market a couple of years ago and still an excellent quality camera. This camera needed to have a shutter that was triggered by a computer on command and the images passed to the outside computer for overlaying and display.

A starting point was gphoto2 http://www.gphoto.org/, which is able to operate a large number of cameras (including the Canon) with a high degree of precision. However, I struggled to make it work on Windows, and moving to Linux my only choice was my Raspberry Pi. Installing gphoto2 was possible on the pi, but triggering the camera was very sporadic – I seemed to need to restart the pi between each shot for some reason, which was unacceptable. Furthermore, I was struggling to make the pi access the Windows shared folders to save files, and poll for the trigger files to take a shot. This was a low point in the project – I couldn’t reliably take photos, and couldn’t save them to the network anyway.

Automating cameras – going to other pieces of software

A way forward was found with digicamControl http://digicamcontrol.com/ – it worked well with windows, and although it didn’t seem to provide quite as much control as gphoto2, the level of control through its Command Line Utility http://digicamcontrol.com/doc/userguide/cmd was fine – saving to where I specified, and control over iso, shutter time and aperture. I was then able to use a command call from python to take photos with the Canon – this was triggered by writing a trigger text file from the processing script, a process so fast that it didn’t lead to any jumps in the playback.

For displaying the images, FastStone Image Viewer had all the features I needed – in particular updating the photos in the slideshow as images were added to the folder.

Great hardware makes a real difference

Initial testing of the Canon with overlaid photographs

Test overlay at the venue with an early trace

It’s always the stuff you forget – we needed a stand to put the cameras on, we made it on the day, and it was bad

A lot of work and experimentation went into the programs that allowed LightTraceMemory to function; however, nearly none went into the design of the stand that the projector sat on – just a trolley with a cloth over the top and holes cut on the day for the cameras. This meant that the stand was completed in a rush and failed regularly (though not too severely), with pieces of cloth coming into the frame, and the camera sat on a pile of teabag boxes to see over the laptop. In the light it looked like a misshapen dalek – a weird and malign presence in the room, and right in the middle. I fear it was the ‘worst object in the LFA’. It might also be an explanation for the slow-down of the frame rates that the computer was overly hot from the projector below.

The saving grace of the camera dalek

Bizarrely, no-one noticed the giant camera dalek – in the dark it became nearly invisible. The bright screen beyond and the pendulums in the foreground and the link between them completely drew the attention in the room. An amazing number of people asked where the camera and projector were ‘hidden’.

Simplify, simplify – get rid of everything extraneous

and make it work on the night.

Many pieces of code were omitted for the final event – with only those that were reliable being taken forward. Overlay of video over photos was replaced by overlay of high quality images, video saving was not used – this left the project surprisingly close to its original intention – and on the whole to a good level of quality and reliability. The frame rates on the preview were lower than we wanted and almost no video footage was saved, but the photos were of excellent quality, and are excellent records of the event.

Reasons not to use the video functions during the evening, a slowdown to 10-15Hz leads to very strange results

LightTraceMemory – a write up part 1: the code used on the day

The over-arching concept for LightTraceMemory, swing the pendulum, see your pattern, see your contribution to the patterns made over the course of the whole evening

Image from: http://lightmemory.net/About

 

A second interpretation of the event – showing overall outputs as well as experiences on the day

Image from: https://www.kickstarter.com/projects/1188160556/tracing-the-future-we-shape

LightTraceMemory was an event that took place as part of the London Festival of Architecture 2017 (LFA). The event used a spherical pendulum, which was attached to a pulley so its radius could be changed by the users, coupled with a series of different presentations of that pendulum – these different presentations used inputs from different time periods to show the pendulum in different ways, tying in with the LFA theme of ‘Memory’ remarkably well. The three presentations of the pendulum were:

  • A presentation completely in the moment: the pendulum itself
  • A presentation with a ‘memory’ of around 1 minute: a slowly dimming display showing the path of the pendulum
  • A presentation with a ‘memory’ of the whole evening: series of overlaid long exposure photographs of different people’s swings of the pendulums.

 

Video saving preview – not used on the day but a couple of results from earlier testing

Example of individual image – memory over the course of about 1 minute. To look at all of them, go to https://goo.gl/photos/Quf6YgqkRZebG6dYA

Random overlay of about 5 images – memory over the course of an evening. To see all of them go to https://goo.gl/photos/SUXkvsMsk4bVT3fXA

The origins of LightTraceMemory come from a group task at a company away day, where the same techniques were used to convey the idea of a bowl (focusing on the shape of the pendulum) rather than the idea of memory (which uses the long exposure aspect of the photography). A real variety of shapes were considered in the ‘form generation’ process aside from the spherical pendulum – but time and again the spherical pendulum seemed like the best.

My input on the project was largely through programming and IT – getting a series of cameras working on command, getting computers to talk to one another and exchange and overlay images and videos – so that is what I am recording here. I might have had input on other aspects, but the skills and interests of the rest of the team clearly left a gap in the technical aspects of the project and not in others. Creating the right experience, branding, and blacking out the space (which became building a portal frame) were well covered without me – though I did help minimise the amount of bracing and haunching in the portal frame. What I did also aligns with my personal interest in images and cameras: a key driver of change in the world is the ability of computers to extract meaningful information from images (self-driving cars, image recognition systems), and working with cameras in this project has helped me gain a bit more experience in the nuts and bolts of how systems like this work – in short, live video is extremely unforgiving. Images, which can be processed over a longer period of time, are relatively easy once you have captured them.

The final result

Live ‘dimming’ display

The dimming display uses a simple processing script (www.processing.org) for the majority of the time – it takes each frame from the laptop webcam (the best quality we found in the end), compares this to a dimmed version of the previous frame, keeps the brighter of the two at each pixel, then displays that frame. This creates a display that slowly dims from previous frames to black. A few alterations improved performance in practice. Performance was reasonably good in the lead up to the event – around 50-57Hz frame rate (down from 60-62Hz without any processing required). However, on the day, the frame rate reduced to around 30-40Hz, without any apparent cause – perhaps overheating, but this seems unlikely. Using the video-save features, which reduced the frame rate to around 30-40Hz in the lead up to the event, reduced the frame rate to around 15Hz at the event, so these weren’t used on the day.

The key part of the processing code for the dimming display is the per-pixel loop in the draw() function of the full script below; the rest of the code covers unused or lesser-used features.
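A simplified sketch of that dim-and-compare step (written in Python/numpy for readability, rather than the Processing actually used on the day; frame and previous are H × W × 3 arrays of 8-bit channel values):

import numpy as np

def dim_and_merge(frame, previous, dim_step=1):
    # fade the remembered frame slightly towards black
    faded = np.where(previous > dim_step, previous - dim_step, 0).astype(np.uint8)
    # rough brightness weighting, as in the Processing code: 3*r + 4*g + 1*b
    weights = np.array([3, 4, 1])
    new_brightness = (frame.astype(int) * weights).sum(axis=2)
    old_brightness = (faded.astype(int) * weights).sum(axis=2)
    # keep whichever pixel is brighter, new or faded-old
    keep_new = (new_brightness > old_brightness)[..., None]
    return np.where(keep_new, frame, faded).astype(np.uint8)

The actual script below does the same thing pixel by pixel inside draw(), with extra bookkeeping for recording, mirroring the image, and a brightness threshold that cuts dim pixels to black.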

Overlaid photographs

The saving grace of the overlaid photographs has been the loan of a high quality camera (Canon EOS 5D Mark III) and display from Thomas.Matthews. The camera was linked to a laptop using digicamControl’s command line functionality to allow automatic shooting. Images were passed to another computer using a shared folder over a private wifi network, alongside some ‘trigger’ txt files which instructed the other computer to combine photos using processing and display the combined photos. The FastStone Image Viewer (http://www.faststone.org/) turned out to be what I needed for the slideshow: it automatically updates the slideshow images as new photos are added to the folder.

The key pieces of code are below, there is lots of ‘setup’ required for shared folders etc that isn’t included here:

The code that creates the dimming display, and also takes commands to create the other functions used:


//run this file from \\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing_copy\processing-3.3.3\processing.exe
//make sure you have included the following files:
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\webcam-capture-0.3.12-20161206.184756-3.jar
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\libs\bridj-0.6.2.jar
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\libs\slf4j-api-1.7.2.jar

import processing.video.*;
import com.hamoid.*;
int numberOfPixels;
int[] oldpixels;
int[] undimmedpixels;
int[] oldundimmedpixels;
byte[] oldundimmedscreenpixels;
int deltar = 0;
int deltag = 0;
int deltab = 0;
float video2FrameRate = 28.0;//29.0;
long videoname, photoname;
int currentbrightness, oldbrightness;
int r, g, b, oldr, oldg, oldb, oldundimmedr, oldundimmedg, oldundimmedb;
int oldundimmedbrightness, undimmedbrightnessdifference;
int i;
PGraphics undimmedpixelsimage;
color oldundimmedc;
Capture video;
VideoExport videoExport1;
VideoExport videoExport2;
int videoExportNumberOfFrames = 0;
boolean recording = false;
import java.util.Date;
Date timestamp;

import com.github.sarxos.webcam.Webcam;
import com.github.sarxos.webcam.WebcamPanel;
import com.github.sarxos.webcam.WebcamResolution;
import java.awt.Dimension;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
Webcam webcam;
byte[] screenpixels;
int[] oldscreenpixels;
int pixellength;
int pixel;


void setup() {
//if attached to a 1280x720 screen, use full screen
//fullScreen();
size(1280, 720);
noStroke();
frameRate(60);
println("Press s to start video, r to toggle recording, q to save the movie, x to exit the program");

for (Webcam camlist : Webcam.getWebcams ()) {
println(camlist.getName());
//for (Dimension res : camlist.getViewSizes()) println(res.toString());
}

webcam = Webcam.getWebcamByName("Microsoft LifeCam Front 1");

Dimension[] nonStandardResolutions = new Dimension[]{
new Dimension(1280,720),
};

webcam.setCustomViewSizes(nonStandardResolutions);
webcam.setViewSize(new Dimension(1280, 720));
webcam.open(true);

numberOfPixels = 1280 * 720;
oldpixels = new int[numberOfPixels];
undimmedpixels = new int[numberOfPixels];
oldundimmedpixels = new int[numberOfPixels*3];
screenpixels = new byte[numberOfPixels*3];
oldscreenpixels= new int[numberOfPixels*3];
loadPixels();
videoExport1 = new VideoExport(this);

undimmedpixelsimage = createGraphics(1280,720);
videoExport2 = new VideoExport(this,"test_pgraphics_2.mp4",undimmedpixelsimage);
videoExport2.setFrameRate(video2FrameRate);
videoExportNumberOfFrames = 0;

color black = 0xFF000000;
pixel = 0;
for (int i = 0; i<numberOfPixels; i++){
pixels[i] = black ;
oldpixels[i] = black;
oldundimmedpixels[i] = black;
oldscreenpixels[pixel] = byte(0);
oldscreenpixels[pixel+1]=byte(0);
oldscreenpixels[pixel+2] = byte(0);
pixel = pixel+3;
}
updatePixels();

undimmedpixelsimage.beginDraw();
undimmedpixelsimage.loadPixels();
for (int loc=0; loc<numberOfPixels; loc++) {
undimmedpixelsimage.pixels[loc] = black;
}
undimmedpixelsimage.updatePixels();
undimmedpixelsimage.endDraw();

}

void draw() {
//println(frameRate);
loadPixels();
BufferedImage cap = webcam.getImage();
screenpixels = ((DataBufferByte) cap.getRaster().getDataBuffer()).getData();
//this is a byte array of screen pixels

pixellength = 3 ;
pixel = 0;

for (int loc=0; loc<numberOfPixels; loc++) {

//create the dimmed video for display and to save
if (frameCount%3 == 0) {deltar=1;} else {deltar=0;}
if (frameCount%3 == 0) {deltag=1;} else {deltag=0;}
if (frameCount%3 == 0) {deltab=1;} else {deltab=0;}
oldr = oldscreenpixels[pixel] - deltar;
oldg = oldscreenpixels[pixel+1] - deltag;
oldb = oldscreenpixels[pixel+2] - deltab;
oldbrightness = 3*oldr + oldb + 4*oldg;
if(oldbrightness<100){oldr=0x00; oldg=0x00; oldb=0x00; oldbrightness=0;}

r = int(screenpixels[pixel]);
g = int(screenpixels[pixel+1]);
b = int(screenpixels[pixel+2]);
currentbrightness = 3*r + b + 4*g;
//make any darker areas in the current pixels black
if (currentbrightness<500) {r=0x00; g=0x00; b=0x00; currentbrightness=0;}

int brightnessdifference = currentbrightness - oldbrightness;
//so if new pixel is brighter, then brightnessdifference positive, use new value
//https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color

if (brightnessdifference > 0){ //i.e. new pixel is brighter
//the function on loc creates a flip about a vertical line on the video
pixels[(loc/1280)*1280+1279-(loc%1280)] = 0xFF000000 | (r << 16) | (g << 8) | b;
oldscreenpixels[pixel] = r;
oldscreenpixels[pixel+1] = g;
oldscreenpixels[pixel+2] = b;
} else {
pixels[(loc/1280)*1280+1279-(loc%1280)] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
oldscreenpixels[pixel] = oldr;
oldscreenpixels[pixel+1] = oldg;
oldscreenpixels[pixel+2] = oldb;
}

pixel = pixel + pixellength;
} //end of for each pixel in frame

updatePixels();
pixel = 0;
if(recording) {
videoExport1.saveFrame(); //dimmed live and video
//println("hello");

undimmedpixelsimage.beginDraw();
undimmedpixelsimage.loadPixels();
for (int loc=0; loc<numberOfPixels; loc++) {

r = int(screenpixels[pixel]);
g = int(screenpixels[pixel+1]);
b = int(screenpixels[pixel+2]);
currentbrightness = 3*r + b + 4*g;
if (currentbrightness<100) {r=0x00; g = 0x00; b=0x00; currentbrightness=0;}

//get previous color
oldundimmedr = oldundimmedpixels[pixel];
oldundimmedg = oldundimmedpixels[pixel+1];
oldundimmedb = oldundimmedpixels[pixel+2];
oldundimmedbrightness = 3*oldundimmedr + oldundimmedb + 4*oldundimmedg;

undimmedbrightnessdifference = currentbrightness - oldundimmedbrightness;

if (undimmedbrightnessdifference>0) { //i.e. the new pixel is brighter than the old one
undimmedpixelsimage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
oldundimmedpixels[pixel] = r;
oldundimmedpixels[pixel+1] = g;
oldundimmedpixels[pixel+2] = b;
} else {
undimmedpixelsimage.pixels[loc] = 0xFF000000 | (oldundimmedr << 16) | (oldundimmedg << 8) | oldundimmedb;
oldundimmedpixels[pixel] = oldundimmedr;
oldundimmedpixels[pixel+1] = oldundimmedg;
oldundimmedpixels[pixel+2] = oldundimmedb;
}
pixel = pixel+pixellength;
}
undimmedpixelsimage.updatePixels();
undimmedpixelsimage.endDraw();

videoExport2.saveFrame(); //undimmed video - want this to save the contents of undimmed, not the current display
videoExportNumberOfFrames = videoExportNumberOfFrames +1;
} //end of if recording

} //end of draw function

void keyPressed() {

if(key == 's' || key == 'S'){
videoname = new Date().getTime();
videoExport1.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/dimmed/"+ videoname + ".mp4");
videoExport1.startMovie();
videoExport2.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/undimmed/"+ videoname + ".mp4");
videoExport2.startMovie();
println("Movie is started");
}

if(key == 'r' || key == 'R') {

recording = !recording;

println("Recording is " + (recording ? "ON" : "OFF"));

}

if (key == 'q' || key == 'Q') {
recording = false;
videoExport1.endMovie();
videoExport2.endMovie();
//undimmedpixelsimage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed/"+ videoname + ".png");
//saveStrings("//DESKTOP-G142UH9/LightTraceMemory/Trigger/"+ videoname + ".txt",split("NumberOfFrames " + str(videoExportNumberOfFrames),' '));
videoExportNumberOfFrames = 0;
videoname = new Date().getTime();
videoExport1.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/dimmed/"+ videoname + ".mp4");
videoExport2.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/undimmed/"+ videoname + ".mp4");
videoExport1.startMovie();
videoExport2.startMovie();
//https://github.com/hamoid/video_export_processing/issues/38
//go to video library and delete the jna.jar file
println("Movie is saved, next movie started");
}

if (key == 'x' || key=='X') {
recording = false;
videoExport1.endMovie();
videoExport2.endMovie();
undimmedpixelsimage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed/"+ videoname + ".png");
saveStrings("//DESKTOP-G142UH9/LightTraceMemory/Trigger/"+ videoname + ".txt",split("NumberOfFrames " + str(videoExportNumberOfFrames),' '));
println("Exiting the program");
exit();
}
if (key == 'p' || key == 'P') {
//grabs a picture from the canon
photoname = new Date().getTime();
saveStrings("//DESKTOP-G142UH9/LightTraceMemory/EOSTrigger/"+ photoname + ".txt",split("hello there"," "));
println("Canon should take photo here");
}
if (key == 'b' || key == 'B') {
//sets the screen to black
color black = 0xFF000000;
pixel = 0;
for (int i = 0; i<numberOfPixels; i++) {
pixels[i] = black;
oldpixels[i] = black;
oldundimmedpixels[i] = black;
oldscreenpixels[pixel] = byte(0);
oldscreenpixels[pixel+1]=byte(0);
oldscreenpixels[pixel+2] = byte(0);
pixel = pixel+3;
}
}
}

The code that sits on the first computer, waiting for a trigger file to appear and then telling the Canon camera to take a photo (n.b. this is nearly identical to the piece of code on the other computer that triggers the photo-combining Processing script):

import os, time

path_to_watch = "//DESKTOP-G142UH9/LightTraceMemory/EOSTrigger" #note to self - use forward slashes not backslashes here
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    time.sleep(2)
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    removed = [f for f in before if not f in after]
    if added:
        print "Added: ", ", ".join(added)
        filenames = added[0][:-4] #the trigger file's name (a timestamp) without its .txt suffix
        processingScriptCall = "C:/Users/david/OneDrive/Projects/LightTraceMemory/Scripts/digiCamControl/CameraControlCmd.exe /capturenoaf /iso 100 /aperture 14.0 /shutter 20s /ec +0.0 /filename C:/Users/david/OneDrive/Projects/LightTraceMemory/Images/Undimmed/" + filenames + ".JPG"
        print processingScriptCall
        os.system(processingScriptCall)
        print("Took photo " + filenames)
        print("Saving trigger file")
        triggerfile = open("C:/Users/david/OneDrive/Projects/LightTraceMemory/Trigger/" + filenames + ".txt", "w")
        triggerfile.close()
        print("Saved trigger file")
    if removed:
        print "Removed: ", ", ".join(removed)
    before = after

The code that sits on the second computer, waiting for a trigger file to appear and then starting the Processing scripts that combine the photos:


import os, time

path_to_watch = "//DESKTOP-G142UH9/LightTraceMemory/Trigger"
#note to self - use forward slashes not backslashes here
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    time.sleep(2)
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    removed = [f for f in before if not f in after]
    if added:
        print "Added: ", ", ".join(added)
        filenames = added[0][:-4]
        print("copying image to new image location")
        os.system("copy %s %s" % ('''\\\\DESKTOP-G142UH9\\LightTraceMemory\\Images\\Undimmed\\''' + filenames + ".JPG", '''\\\\DESKTOP-G142UH9\\LightTraceMemory\\Images\\RecentCombined\\NewImage.JPG'''))
        print("copied image to new image location")
        processingScriptCall = '''B:/processing-3.3.3/processing-java.exe --sketch="//DESKTOP-G142UH9/LightTraceMemory/Scripts/VideoCombine" --run ''' + filenames
        print processingScriptCall
        os.system(processingScriptCall)
        print("Processed photo " + filenames)
        for i in range(1,3):
            processingScriptCall = '''B:/processing-3.3.3/processing-java.exe --sketch="//DESKTOP-G142UH9/LightTraceMemory/Scripts/ImageCombineRandom" --run ''' + filenames + " " + str(i)
            print processingScriptCall
            os.system(processingScriptCall)
        print("Produced " + filenames)
    if removed:
        print "Removed: ", ", ".join(removed)
    before = after
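
Since the two watcher scripts are nearly identical apart from the folder they watch and what they do when a file appears, the common polling loop could be pulled out into one reusable function. A minimal sketch of that idea, assuming the same Python environment as the scripts above (the watch_folder and take_photo names and the on_added callback are my own, not part of the project code):

import os, time

def watch_folder(path_to_watch, on_added, poll_seconds=2):
    #poll a folder and call on_added(filename_without_suffix) for each newly appeared file
    before = dict([(f, None) for f in os.listdir(path_to_watch)])
    while 1:
        time.sleep(poll_seconds)
        after = dict([(f, None) for f in os.listdir(path_to_watch)])
        added = [f for f in after if not f in before]
        removed = [f for f in before if not f in after]
        if added:
            print "Added: ", ", ".join(added)
            on_added(added[0][:-4]) #strip the .txt suffix, as the original scripts do
        if removed:
            print "Removed: ", ", ".join(removed)
        before = after

def take_photo(filenames):
    #what the first computer does when a trigger file appears (same digiCamControl command as above)
    processingScriptCall = "C:/Users/david/OneDrive/Projects/LightTraceMemory/Scripts/digiCamControl/CameraControlCmd.exe /capturenoaf /iso 100 /aperture 14.0 /shutter 20s /ec +0.0 /filename C:/Users/david/OneDrive/Projects/LightTraceMemory/Images/Undimmed/" + filenames + ".JPG"
    os.system(processingScriptCall)

watch_folder("//DESKTOP-G142UH9/LightTraceMemory/EOSTrigger", take_photo)

The version on the second computer would simply pass a different callback that copies the image and runs the two Processing sketches.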

This is the Processing script called to combine the most recent image into the combined image, building a constantly updating composite of all the images taken through the evening:

import java.util.ArrayList;

String path = "//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed";
int videoWidth = 5760;
int videoHeight = 3840;
int numberOfPixels = videoWidth * videoHeight;
int loc = 0;
color c, oldc;
int i;
int r, g, b, oldr, oldg, oldb;
int numberOfFrames, numberOfFramesRead;
int brightness, oldbrightness, brightnessdifference;
boolean namequalitytest;
String[] filenames;
String filename;
String imagename;
PImage combinedImage;
PImage oldImage;

void setup() {
size(5760,3840);
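//overall flow: start from a black image, keep the brighter pixel from the previous combined image
//(RecentCombined.JPG), then keep the brighter pixel from the newly captured photo (NewImage.JPG),
//and save the result as both the named combined image and the new RecentCombined.JPG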
filenames = listFileNames(path);
filename = args[0];
combinedImage = createImage(5760, 3840, ARGB);

combinedImage.loadPixels();
//set every pixel to black to start
color black = 0xFF000000;
for (loc=0; loc<numberOfPixels; loc++) {
combinedImage.pixels[loc] = black;
}

oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/RecentCombined.JPG");
oldImage.loadPixels();

for (loc=0; loc<numberOfPixels; loc++) {
c = combinedImage.pixels[loc];
r = (c >> 16) & 0xFF;
g = (c >> 8) & 0xFF;
b = c & 0xFF;
brightness = 3*r + b + 4*g;
oldc = oldImage.pixels[loc];
oldr = (oldc >> 16) & 0xFF;
oldg = (oldc >> 8) & 0xFF;
oldb = oldc & 0xFF;
oldbrightness = 3*oldr + oldb + 4*oldg;
brightnessdifference = brightness - oldbrightness;

if (brightnessdifference>0) {
combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
} else {
combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
}
} //end of for each pixel

oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/NewImage.JPG");
oldImage.loadPixels();

for (loc=0; loc<numberOfPixels; loc++) {
c = combinedImage.pixels[loc];
r = (c >> 16) & 0xFF;
g = (c >> 8) & 0xFF;
b = c & 0xFF;
brightness = 3*r + b + 4*g;
oldc = oldImage.pixels[loc];
oldr = (oldc >> 16) & 0xFF;
oldg = (oldc >> 8) & 0xFF;
oldb = oldc & 0xFF;
oldbrightness = 3*oldr + oldb + 4*oldg;
brightnessdifference = brightness - oldbrightness;

if (brightnessdifference>0) {
combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
} else {
combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
}
} //end of for each pixel
//} //end of check for using the just added image
//} //end of for each image

combinedImage.updatePixels();
combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Combined/" + filename + ".JPG");
combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/RecentCombined.JPG");
println("Saved combined image");

exit();
} //end of setup

// This function returns all the files in a directory as an array of Strings
String[] listFileNames(String dir) {
File file = new File(dir);
if (file.isDirectory()) {
String names[] = file.list();
ArrayList<String> names2 = new ArrayList<String>();
for (String name : names) {
String imagesuffix = name.substring(name.length()-4, name.length());
//println(imagesuffix);
if (imagesuffix.equals(".JPG")) {
names2.add(name);
//println(name);
}
}
String[] names3 = names2.toArray(new String[0]);
return names3;
} else {
// If it's not a directory
return null;
}
}

A few tweaks to how the images to overlay are selected create the random combined images, with sets of (on average) five images chosen each time:


//import processing.video.*;
//import com.hamoid.*;
import java.util.ArrayList;

String path = "//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed2";
int videoWidth = 5760;
int videoHeight = 3840;
int numberOfPixels = videoWidth * videoHeight;
int loc = 0;
int iteration = 0;
color c, oldc;
int i,j;
int r, g, b, oldr, oldg, oldb;
int rreduction, greduction, breduction;
int numberOfFrames, numberOfFramesRead;
int brightness, oldbrightness, brightnessdifference;
boolean namequalitytest, randomtest;
String[] filenames;
String filename;
String imagename;
PImage combinedImage;
PImage oldImage;

void setup() {
size(5760,3840);
filenames = listFileNames(path);
//filename = args[0];
filename = "hello";
iteration = 1; //int(args[1]);
combinedImage = createImage(5760, 3840, ARGB);

//combinedImage.beginDraw();
combinedImage.loadPixels();
//set every pixel to black to start
color black = 0xFF000000;
for (loc=0; loc<numberOfPixels; loc++) {
combinedImage.pixels[loc] = black;
}

//go through each image (except the current one) and update if the new pixel is brighter
for (int j=0; j<100; j=j+1) { //produce 100 random combinations, each saved with its index j
for (String name : filenames) {
println(name);
imagename = name.substring(0, name.length()-4);
println(imagename);
namequalitytest = imagename.equals(filename);
randomtest = (random(1)<5.0/filenames.length); //each image passes with probability 5/N, so about five images are overlaid per combination
//randomtest = true;
if (namequalitytest || randomtest) { //n.b. don't use == here - looking for same object not same value
println("passed one of the tests");
oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed2/" + name); //n.b. name includes the suffix
oldImage.loadPixels();
rreduction = int(random(20));
greduction = int(random(20));
breduction = int(random(20));
for (loc=0; loc<numberOfPixels; loc++) {
c = combinedImage.pixels[loc];
r = constrain(((c >> 16) & 0xFF)-rreduction,0,255);
g = constrain(((c >> 8) & 0xFF)-greduction,0,255);
b = constrain((c & 0xFF)-breduction,0,255);
brightness = 3*r + b + 4*g;
oldc = oldImage.pixels[loc];
oldr = (oldc >> 16) & 0xFF;
oldg = (oldc >> 8) & 0xFF;
oldb = oldc & 0xFF;
oldbrightness = 3*oldr + oldb + 4*oldg;
brightnessdifference = brightness - oldbrightness;

if (brightnessdifference>0) {
combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
} else {
combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
}
} //end of for each pixel
} //end of check for using the just added image
} //end of for each image
combinedImage.updatePixels();
combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/ImageCombineRandom2/" + filename + "_" + j + ".JPG");
println("Saved combined image");
}
exit();
}

// This function returns all the files in a directory as an array of Strings
String[] listFileNames(String dir) {
File file = new File(dir);
if (file.isDirectory()) {
String names[] = file.list();
ArrayList<String> names2 = new ArrayList<String>();
for (String name : names) {
String imagesuffix = name.substring(name.length()-4, name.length());
//println(imagesuffix);
if (imagesuffix.equals(".JPG")) {
names2.add(name);
//println(name);
}
}
String[] names3 = names2.toArray(new String[0]);
return names3;
} else {
// If it's not a directory
return null;
}
}

Peter Rice: An Engineer Imagines

Peter Rice is possibly the best-known structural engineer of the 20th century – the projects he worked on are among the most famous today: the Sydney Opera House, Beaubourg (the Pompidou Centre, Paris), the Pavilion of the Future at the Seville Expo, and Lloyd’s of London.

It is hard to work out whether he achieves mastery of each material, or is just clever enough to push his way to remarkable and interesting results in whatever problem he chooses. I hope it is the former, though the quotes sometimes suggest otherwise. Is he ‘a juggernaut, crushing the obstacles of practicality and cost, for us to build what we liked’, someone who sees the built world as a place for exploitation, a place that ‘will have to become more complex to absorb our energies and occupy us fully’? Or is he someone who shares Ove Arup’s view of man’s place within the environment, and of how man’s power should best be harnessed?

Rice never mentions real issues with cost or time on his projects, a constant backdrop to almost any modern work. It seems the engineering world has ‘tightened up’ a lot since his day. However, I don’t think his power to place himself at the centre of some of the most exciting projects, largely through his relationships with several of the leading architects of his day, should be underestimated (and the book is, after all, supposed to be a celebration of an engineering life). Looking at the cross-section through the roof of the Menil art gallery, there is a sumptuous level of detail that I cannot believe would survive a modern project – though the situation of the project (a wealthy and interested private client, an influential and trusting architect) is probably more important than its time period. Rice’s ability to persuade others to work in a new material was clearly impressive – though I am not convinced he really needed to work in cast iron for the Menil Gallery. A similar predicament occurs with the Pavilion of the Future – a little fussy in steel and stone – though he certainly met the ‘spectacular’ part of the design brief. And again on the Pompidou Centre: was the optimisation of column thickness and the use of unusual centrifugally spun steel sections really beneficial?

Rice really seems to design from the ‘material up’ – he takes all his details from the particular abilities of each material and works outwards from there. Glass is strong enough to take carefully designed point loads – so hang it in sheets from the top corners and support them against the wind with a separate structure. Polycarbonate is flexible and has little strength – so hold it with large glued contact areas and carefully clamped details, and limit its structural role to a shear link between timber chords.

Do read, though skip the chapter on horse racing.