
The overarching concept for LightTraceMemory: swing the pendulum, see your pattern, and see your contribution to the patterns made over the course of the whole evening.
Image from: http://lightmemory.net/About
Image from: https://www.kickstarter.com/projects/1188160556/tracing-the-future-we-shape
LightTraceMemory was an event that took place as part of the London Festival of Architecture 2017 (LFA). The event used a spherical pendulum, attached to a pulley so that users could change its radius, coupled with a series of different presentations of that pendulum. Each presentation drew on inputs from a different time period to show the pendulum in a different way, tying in remarkably well with the LFA theme of ‘Memory’. The three presentations of the pendulum were:
- A presentation completely in the moment: the pendulum itself
- A presentation with a ‘memory’ of around 1 minute: a slowly dimming display showing the path of the pendulum
- A presentation with a ‘memory’ of the whole evening: a series of overlaid long-exposure photographs of different people’s swings of the pendulum.
Video-saving preview – not used on the day, but a couple of results from earlier testing.

Example of an individual image – memory over the course of about 1 minute. To look at all of them, go to https://goo.gl/photos/Quf6YgqkRZebG6dYA

Random overlay of about 5 images – memory over the course of an evening. To see all of them, go to https://goo.gl/photos/SUXkvsMsk4bVT3fXA
The origins of LightTraceMemory lie in a group task at a company away day, where the same techniques were used to convey the idea of a bowl (focusing on the shape traced by the pendulum) rather than the idea of memory (which relies on the long-exposure aspect of the photography). A real variety of shapes was considered in the ‘form generation’ process aside from the spherical pendulum – but time and again the spherical pendulum seemed like the best option.
My input to the project was largely through programming and IT – getting a series of cameras working on command, and getting computers to talk to one another to exchange and overlay images and videos – so that is what I am recording here. I might have contributed to other aspects, but the skills and interests of the rest of the team clearly left a gap in the technical side of the project and not elsewhere. Creating the right experience, branding, and blacking out the space (which became building a portal frame) were well covered without me – though I did help minimise the amount of bracing and haunching in the portal frame. The work also aligns with my personal interest in images and cameras: a key driver of change in the world is the ability of computers to extract meaningful information from images (self-driving cars, image recognition systems), and working with cameras on this project has given me a bit more experience in the nuts and bolts of how systems like this work. In short, live video is extremely unforgiving; images, which can be processed over a longer period of time, are relatively easy once you have captured them.
The final result
Live ‘dimming’ display
The dimming display uses a simple Processing script (www.processing.org) for the majority of the work. It takes each frame from the laptop webcam (the best quality we found in the end), compares it to a dimmed version of the previous frame, chooses the brighter pixel of each pair, and displays the resulting frame. This creates a display that slowly dims from previous frames to black. A few alterations improved performance in practice. Performance was reasonably good in the lead-up to the event – around 50-57Hz frame rate (down from 60-62Hz with no processing required). On the day, however, the frame rate dropped to around 30-40Hz without any apparent cause – perhaps overheating, though this seems unlikely. The video-saving features, which had reduced the frame rate to around 30-40Hz in the lead-up to the event, reduced it to around 15Hz at the event, so they weren’t used on the day.
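The per-pixel rule can be sketched in a few lines of Python. This is a minimal illustration rather than the production code: the frame data, the dimming step, and the cutoff value are all illustrative, and the 3r+4g+b weighting is the same cheap brightness approximation the Processing sketch uses.

```python
DIM_STEP = 1       # amount each channel fades per dimming tick (illustrative)
DARK_CUTOFF = 100  # weighted brightness below which a pixel snaps to black

def weighted_brightness(r, g, b):
    # cheap integer approximation of perceived brightness
    return 3 * r + 4 * g + b

def blend_frame(new_frame, old_frame):
    """Return the next display frame: per pixel, the brighter of the fresh
    camera pixel and the dimmed copy of the previous output pixel."""
    out = []
    for (r, g, b), (old_r, old_g, old_b) in zip(new_frame, old_frame):
        # fade the remembered pixel one step towards black
        old_r = max(old_r - DIM_STEP, 0)
        old_g = max(old_g - DIM_STEP, 0)
        old_b = max(old_b - DIM_STEP, 0)
        # snap near-black remembered pixels fully to black
        if weighted_brightness(old_r, old_g, old_b) < DARK_CUTOFF:
            old_r = old_g = old_b = 0
        # keep whichever pixel is brighter
        if weighted_brightness(r, g, b) > weighted_brightness(old_r, old_g, old_b):
            out.append((r, g, b))
        else:
            out.append((old_r, old_g, old_b))
    return out
```

Because bright pixels only ever lose one step per tick, a light trace persists on screen for roughly a minute before fading to black.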
The key part of the Processing code for the dimming display is shown below; the rest of the code, covering unused or lesser-used features, follows later.
Overlaid photographs
The saving grace of the overlaid photographs was the loan of a high-quality camera (a Canon EOS 5D Mark III) and a display from Thomas.Matthews. The camera was linked to a laptop using digiCamControl’s command-line functionality to allow automatic shooting. Images were passed to another computer via a shared folder over a private WiFi network, alongside ‘trigger’ .txt files which instructed the other computer to combine the photos using Processing and display the results. FastStone Image Viewer (http://www.faststone.org/) turned out to be exactly what I needed for the slideshow: it automatically updates the slideshow as new photos are added to the folder.
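The trigger-file handshake both watcher scripts rely on can be sketched in outline (modern Python 3 here for brevity; the folder path and file names are illustrative). One machine drops an empty `<timestamp>.txt` into a shared folder, and the other polls the folder and acts on any name it has not seen before:

```python
import os

def poll_for_new_files(folder, seen, handler):
    """One polling pass: call handler for each file name not in `seen`,
    and return the updated set of known names."""
    current = set(os.listdir(folder))
    for name in sorted(current - seen):
        handler(name)  # e.g. fire the camera, or kick off the combine script
    return current
```

In the real scripts this sits inside a `while` loop with a `time.sleep(2)` between passes; polling a shared folder is crude but needs no networking code beyond the Windows share itself.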
The key pieces of code are below; there is a lot of ‘setup’ required for shared folders etc. that isn’t included here:
The code that creates the dimming display, and also takes keyboard commands to trigger the other functions used:
//run this file from \\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing_copy\processing-3.3.3\processing.exe
//make sure you have included the following files:
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\webcam-capture-0.3.12-20161206.184756-3.jar
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\libs\bridj-0.6.2.jar
//\\DESKTOP-G142UH9\LightTraceMemory\Scripts\processing-3.3.3\webcam\libs\slf4j-api-1.7.2.jar
import processing.video.*;
import com.hamoid.*;
import java.util.Date;
import com.github.sarxos.webcam.Webcam;
import com.github.sarxos.webcam.WebcamPanel;
import com.github.sarxos.webcam.WebcamResolution;
import java.awt.Dimension;
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

int numberOfPixels;
int[] oldpixels;
int[] undimmedpixels;
int[] oldundimmedpixels;
byte[] oldundimmedscreenpixels;
int deltar = 0;
int deltag = 0;
int deltab = 0;
float video2FrameRate = 28.0; //29.0;
long videoname, photoname;
int currentbrightness, oldbrightness;
int r, g, b, oldr, oldg, oldb, oldundimmedr, oldundimmedg, oldundimmedb;
int oldundimmedbrightness, undimmedbrightnessdifference;
int i;
PGraphics undimmedpixelsimage;
color oldundimmedc;
Capture video;
VideoExport videoExport1;
VideoExport videoExport2;
int videoExportNumberOfFrames = 0;
boolean recording = false;
Date timestamp;
Webcam webcam;
byte[] screenpixels;
int[] oldscreenpixels;
int pixellength;
int pixel;

void setup() {
  //if attached to a 1280x720 screen, use full screen
  //fullScreen();
  size(1280, 720);
  noStroke();
  frameRate(60);
  println("Press s to start video, r to toggle recording, q to save the movie, x to exit the program");
  for (Webcam camlist : Webcam.getWebcams()) {
    println(camlist.getName());
    //for (Dimension res : camlist.getViewSizes()) println(res.toString());
  }
  webcam = Webcam.getWebcamByName("Microsoft LifeCam Front 1");
  Dimension[] nonStandardResolutions = new Dimension[]{ new Dimension(1280, 720), };
  webcam.setCustomViewSizes(nonStandardResolutions);
  webcam.setViewSize(new Dimension(1280, 720));
  webcam.open(true);
  numberOfPixels = 1280 * 720;
  oldpixels = new int[numberOfPixels];
  undimmedpixels = new int[numberOfPixels];
  oldundimmedpixels = new int[numberOfPixels*3];
  screenpixels = new byte[numberOfPixels*3];
  oldscreenpixels = new int[numberOfPixels*3];
  loadPixels();
  videoExport1 = new VideoExport(this);
  undimmedpixelsimage = createGraphics(1280, 720);
  videoExport2 = new VideoExport(this, "test_pgraphics_2.mp4", undimmedpixelsimage);
  videoExport2.setFrameRate(video2FrameRate);
  videoExportNumberOfFrames = 0;
  color black = 0xFF000000;
  pixel = 0;
  for (int i = 0; i < numberOfPixels; i++) {
    pixels[i] = black;
    oldpixels[i] = black;
    oldundimmedpixels[i] = black;
    oldscreenpixels[pixel] = byte(0);
    oldscreenpixels[pixel+1] = byte(0);
    oldscreenpixels[pixel+2] = byte(0);
    pixel = pixel + 3;
  }
  updatePixels();
  undimmedpixelsimage.beginDraw();
  undimmedpixelsimage.loadPixels();
  for (int loc = 0; loc < numberOfPixels; loc++) {
    undimmedpixelsimage.pixels[loc] = black;
  }
  undimmedpixelsimage.updatePixels();
  undimmedpixelsimage.endDraw();
}

void draw() {
  //println(frameRate);
  loadPixels();
  BufferedImage cap = webcam.getImage();
  //this is a byte array of screen pixels
  screenpixels = ((DataBufferByte) cap.getRaster().getDataBuffer()).getData();
  pixellength = 3;
  pixel = 0;
  for (int loc = 0; loc < numberOfPixels; loc++) {
    //create the dimmed video for display and to save
    //dim each channel by one step every third frame
    if (frameCount%3 == 0) { deltar = 1; } else { deltar = 0; }
    if (frameCount%3 == 0) { deltag = 1; } else { deltag = 0; }
    if (frameCount%3 == 0) { deltab = 1; } else { deltab = 0; }
    oldr = oldscreenpixels[pixel] - deltar;
    oldg = oldscreenpixels[pixel+1] - deltag;
    oldb = oldscreenpixels[pixel+2] - deltab;
    oldbrightness = 3*oldr + oldb + 4*oldg;
    if (oldbrightness < 100) { oldr = 0x00; oldg = 0x00; oldb = 0x00; oldbrightness = 0; }
    r = int(screenpixels[pixel]);
    g = int(screenpixels[pixel+1]);
    b = int(screenpixels[pixel+2]);
    currentbrightness = 3*r + b + 4*g;
    //make any darker areas in the current pixels black
    if (currentbrightness < 500) { r = 0x00; g = 0x00; b = 0x00; currentbrightness = 0; }
    //so if the new pixel is brighter, then brightnessdifference is positive: use the new value
    //https://stackoverflow.com/questions/596216/formula-to-determine-brightness-of-rgb-color
    int brightnessdifference = currentbrightness - oldbrightness;
    if (brightnessdifference > 0) { //i.e. new pixel is brighter
      //the function on loc creates a flip about a vertical line on the video
      pixels[(loc/1280)*1280 + 1279 - (loc%1280)] = 0xFF000000 | (r << 16) | (g << 8) | b;
      oldscreenpixels[pixel] = r;
      oldscreenpixels[pixel+1] = g;
      oldscreenpixels[pixel+2] = b;
    } else {
      pixels[(loc/1280)*1280 + 1279 - (loc%1280)] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
      oldscreenpixels[pixel] = oldr;
      oldscreenpixels[pixel+1] = oldg;
      oldscreenpixels[pixel+2] = oldb;
    }
    pixel = pixel + pixellength;
  } //end of for each pixel in frame
  updatePixels();
  pixel = 0;
  if (recording) {
    videoExport1.saveFrame(); //dimmed live video
    undimmedpixelsimage.beginDraw();
    undimmedpixelsimage.loadPixels();
    for (int loc = 0; loc < numberOfPixels; loc++) {
      r = int(screenpixels[pixel]);
      g = int(screenpixels[pixel+1]);
      b = int(screenpixels[pixel+2]);
      currentbrightness = 3*r + b + 4*g;
      if (currentbrightness < 100) { r = 0x00; g = 0x00; b = 0x00; currentbrightness = 0; }
      //get previous color
      oldundimmedr = oldundimmedpixels[pixel];
      oldundimmedg = oldundimmedpixels[pixel+1];
      oldundimmedb = oldundimmedpixels[pixel+2];
      oldundimmedbrightness = 3*oldundimmedr + oldundimmedb + 4*oldundimmedg;
      undimmedbrightnessdifference = currentbrightness - oldundimmedbrightness;
      if (undimmedbrightnessdifference > 0) { //i.e. the new pixel is brighter than the old one
        undimmedpixelsimage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
        oldundimmedpixels[pixel] = r;
        oldundimmedpixels[pixel+1] = g;
        oldundimmedpixels[pixel+2] = b;
      } else {
        undimmedpixelsimage.pixels[loc] = 0xFF000000 | (oldundimmedr << 16) | (oldundimmedg << 8) | oldundimmedb;
        oldundimmedpixels[pixel] = oldundimmedr;
        oldundimmedpixels[pixel+1] = oldundimmedg;
        oldundimmedpixels[pixel+2] = oldundimmedb;
      }
      pixel = pixel + pixellength;
    }
    undimmedpixelsimage.updatePixels();
    undimmedpixelsimage.endDraw();
    //undimmed video - want this to save the contents of undimmed, not the current display
    videoExport2.saveFrame();
    videoExportNumberOfFrames = videoExportNumberOfFrames + 1;
  } //end of if recording
} //end of draw function

void keyPressed() {
  if (key == 's' || key == 'S') {
    videoname = new Date().getTime();
    videoExport1.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/dimmed/" + videoname + ".mp4");
    videoExport1.startMovie();
    videoExport2.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/undimmed/" + videoname + ".mp4");
    videoExport2.startMovie();
    println("Movie is started");
  }
  if (key == 'r' || key == 'R') {
    recording = !recording;
    println("Recording is " + (recording ? "ON" : "OFF"));
  }
  if (key == 'q' || key == 'Q') {
    recording = false;
    videoExport1.endMovie();
    videoExport2.endMovie();
    //undimmedpixelsimage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed/" + videoname + ".png");
    //saveStrings("//DESKTOP-G142UH9/LightTraceMemory/Trigger/" + videoname + ".txt", split("NumberOfFrames " + str(videoExportNumberOfFrames), ' '));
    videoExportNumberOfFrames = 0;
    videoname = new Date().getTime();
    videoExport1.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/dimmed/" + videoname + ".mp4");
    videoExport2.setMovieFileName("//DESKTOP-G142UH9/LightTraceMemory/Videos/undimmed/" + videoname + ".mp4");
    videoExport1.startMovie();
    videoExport2.startMovie();
    //https://github.com/hamoid/video_export_processing/issues/38
    //go to video library and delete the jna.jar file
    println("Movie is saved, next movie started");
  }
  if (key == 'x' || key == 'X') {
    recording = false;
    videoExport1.endMovie();
    videoExport2.endMovie();
    undimmedpixelsimage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed/" + videoname + ".png");
    saveStrings("//DESKTOP-G142UH9/LightTraceMemory/Trigger/" + videoname + ".txt", split("NumberOfFrames " + str(videoExportNumberOfFrames), ' '));
    println("Exiting the program");
    exit();
  }
  if (key == 'p' || key == 'P') { //grabs a picture from the Canon
    photoname = new Date().getTime();
    saveStrings("//DESKTOP-G142UH9/LightTraceMemory/EOSTrigger/" + photoname + ".txt", split("hello there", " "));
    println("Canon should take photo here");
  }
  if (key == 'b' || key == 'B') { //sets the screen to black
    color black = 0xFF000000;
    pixel = 0;
    for (int i = 0; i < numberOfPixels; i++) {
      pixels[i] = black;
      oldpixels[i] = black;
      oldundimmedpixels[i] = black;
      oldscreenpixels[pixel] = byte(0);
      oldscreenpixels[pixel+1] = byte(0);
      oldscreenpixels[pixel+2] = byte(0);
      pixel = pixel + 3;
    }
  }
}
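One detail worth pulling out of the sketch above is the index arithmetic `(loc/1280)*1280+1279-(loc%1280)`, which mirrors each row of the frame about a vertical axis so the display behaves like a mirror for the person swinging the pendulum. The same arithmetic checked in Python on a toy 4-pixel-wide frame (the width is illustrative):

```python
WIDTH = 4  # toy frame width; the sketch uses 1280

def flipped_index(loc):
    # same expression as pixels[(loc/1280)*1280 + 1279 - (loc%1280)]:
    # keep the row, reverse the column
    return (loc // WIDTH) * WIDTH + (WIDTH - 1) - (loc % WIDTH)
```

Each row maps onto itself with its columns reversed, which is why the expression never moves a pixel between rows.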
The code that sits on the first computer, waiting for a trigger file to appear and then starting the process of taking a photo with the Canon camera. N.b. this is nearly identical to the piece of code on the other computer that triggers the combine-photo Processing script:
import os, time

path_to_watch = "//DESKTOP-G142UH9/LightTraceMemory/EOSTrigger"
#note to self - use forward slashes not backslashes here
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    time.sleep(2)
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    removed = [f for f in before if not f in after]
    if added:
        print "Added: ", ", ".join(added)
        filenames = added[0][:-4]
        processingScriptCall = "C:/Users/david/OneDrive/Projects/LightTraceMemory/Scripts/digiCamControl/CameraControlCmd.exe /capturenoaf /iso 100 /aperture 14.0 /shutter 20s /ec +0.0 /filename C:/Users/david/OneDrive/Projects/LightTraceMemory/Images/Undimmed/" + filenames + ".JPG"
        print processingScriptCall
        os.system(processingScriptCall)
        print("Took photo " + filenames)
        print("Saving trigger file")
        triggerfile = open("C:/Users/david/OneDrive/Projects/LightTraceMemory/Trigger/" + filenames + ".txt", "w")
        triggerfile.close()
        print("Saved trigger file")
    if removed:
        print "Removed: ", ", ".join(removed)
    before = after
The code that sits on the second computer, waiting for a trigger file to appear, and starting the processing scripts to combine photos:
import os, time

path_to_watch = "//DESKTOP-G142UH9/LightTraceMemory/Trigger"
#note to self - use forward slashes not backslashes here
before = dict([(f, None) for f in os.listdir(path_to_watch)])
while 1:
    time.sleep(2)
    after = dict([(f, None) for f in os.listdir(path_to_watch)])
    added = [f for f in after if not f in before]
    removed = [f for f in before if not f in after]
    if added:
        print "Added: ", ", ".join(added)
        filenames = added[0][:-4]
        print("copying image to new image location")
        os.system("copy %s %s" % ('''\\\\DESKTOP-G142UH9\\LightTraceMemory\\Images\\Undimmed\\''' + filenames + ".JPG",
                                  '''\\\\DESKTOP-G142UH9\\LightTraceMemory\\Images\\RecentCombined\\NewImage.JPG'''))
        print("copied image to new image location")
        processingScriptCall = '''B:/processing-3.3.3/processing-java.exe --sketch="//DESKTOP-G142UH9/LightTraceMemory/Scripts/VideoCombine" --run ''' + filenames
        print processingScriptCall
        os.system(processingScriptCall)
        print("Processed photo " + filenames)
        for i in range(1, 3):
            processingScriptCall = '''B:/processing-3.3.3/processing-java.exe --sketch="//DESKTOP-G142UH9/LightTraceMemory/Scripts/ImageCombineRandom" --run ''' + filenames + " " + str(i)
            print processingScriptCall
            os.system(processingScriptCall)
            print("Produced " + filenames)
    if removed:
        print "Removed: ", ", ".join(removed)
    before = after
This code is the Processing script called to combine the most recent image into the combined image, creating a compound, constantly updating image of all the images taken through the evening:
import java.util.ArrayList;

String path = "//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed";
int videoWidth = 5760;
int videoHeight = 3840;
int numberOfPixels = videoWidth * videoHeight;
int loc = 0;
color c, oldc;
int i;
int r, g, b, oldr, oldg, oldb;
int numberOfFrames, numberOfFramesRead;
int brightness, oldbrightness, brightnessdifference;
boolean namequalitytest;
String[] filenames;
String filename;
String imagename;
PImage combinedImage;
PImage oldImage;

void setup() {
  size(5760, 3840);
  filenames = listFileNames(path);
  filename = args[0];
  combinedImage = createImage(5760, 3840, ARGB);
  combinedImage.loadPixels();
  //set every pixel to black to start
  color black = 0xFF000000;
  for (loc = 0; loc < numberOfPixels; loc++) {
    combinedImage.pixels[loc] = black;
  }
  //merge in the running combined image, keeping the brighter pixel
  oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/RecentCombined.JPG");
  oldImage.loadPixels();
  for (loc = 0; loc < numberOfPixels; loc++) {
    c = combinedImage.pixels[loc];
    r = (c >> 16) & 0xFF;
    g = (c >> 8) & 0xFF;
    b = c & 0xFF;
    brightness = 3*r + b + 4*g;
    oldc = oldImage.pixels[loc];
    oldr = (oldc >> 16) & 0xFF;
    oldg = (oldc >> 8) & 0xFF;
    oldb = oldc & 0xFF;
    oldbrightness = 3*oldr + oldb + 4*oldg;
    brightnessdifference = brightness - oldbrightness;
    if (brightnessdifference > 0) {
      combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
    } else {
      combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
    }
  } //end of for each pixel
  //merge in the newly taken image the same way
  oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/NewImage.JPG");
  oldImage.loadPixels();
  for (loc = 0; loc < numberOfPixels; loc++) {
    c = combinedImage.pixels[loc];
    r = (c >> 16) & 0xFF;
    g = (c >> 8) & 0xFF;
    b = c & 0xFF;
    brightness = 3*r + b + 4*g;
    oldc = oldImage.pixels[loc];
    oldr = (oldc >> 16) & 0xFF;
    oldg = (oldc >> 8) & 0xFF;
    oldb = oldc & 0xFF;
    oldbrightness = 3*oldr + oldb + 4*oldg;
    brightnessdifference = brightness - oldbrightness;
    if (brightnessdifference > 0) {
      combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
    } else {
      combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
    }
  } //end of for each pixel
  combinedImage.updatePixels();
  combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/Combined/" + filename + ".JPG");
  combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/RecentCombined/RecentCombined.JPG");
  println("Saved combined image");
  exit();
}

//This function returns all the files in a directory as an array of Strings
String[] listFileNames(String dir) {
  File file = new File(dir);
  if (file.isDirectory()) {
    String names[] = file.list();
    ArrayList<String> names2 = new ArrayList<String>();
    for (String name : names) {
      String imagesuffix = name.substring(name.length()-4, name.length());
      //println(imagesuffix);
      if (imagesuffix.equals(".JPG")) {
        names2.add(name);
        //println(name);
      }
    }
    String[] names3 = names2.toArray(new String[0]);
    return names3;
  } else {
    //If it's not a directory
    return null;
  }
}
A few tweaks to the way images are selected for overlay create the random combined images, with sets of around 5 selected each time:
//import processing.video.*;
//import com.hamoid.*;
import java.util.ArrayList;

String path = "//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed2";
int videoWidth = 5760;
int videoHeight = 3840;
int numberOfPixels = videoWidth * videoHeight;
int loc = 0;
int iteration = 0;
color c, oldc;
int i, j;
int r, g, b, oldr, oldg, oldb;
int rreduction, greduction, breduction;
int numberOfFrames, numberOfFramesRead;
int brightness, oldbrightness, brightnessdifference;
boolean namequalitytest, randomtest;
String[] filenames;
String filename;
String imagename;
PImage combinedImage;
PImage oldImage;

void setup() {
  size(5760, 3840);
  filenames = listFileNames(path);
  //filename = args[0];
  filename = "hello";
  iteration = 1; //int(args[1]);
  combinedImage = createImage(5760, 3840, ARGB);
  combinedImage.loadPixels();
  //set every pixel to black to start
  color black = 0xFF000000;
  for (loc = 0; loc < numberOfPixels; loc++) {
    combinedImage.pixels[loc] = black;
  }
  //go through each image (except the current one) and update if the new pixel is brighter
  for (int j = 0; j < 100; j = j+1) {
    for (String name : filenames) {
      println(name);
      imagename = name.substring(0, name.length()-4);
      println(imagename);
      //n.b. use .equals rather than == here - we want the same value, not the same object
      namequalitytest = imagename.equals(filename);
      randomtest = (random(1) < 5.0/filenames.length);
      //randomtest = true;
      if (namequalitytest || randomtest) {
        println("passed one of the tests");
        //n.b. name includes the suffix
        oldImage = loadImage("//DESKTOP-G142UH9/LightTraceMemory/Images/Undimmed2/" + name);
        oldImage.loadPixels();
        rreduction = int(random(20));
        greduction = int(random(20));
        breduction = int(random(20));
        for (loc = 0; loc < numberOfPixels; loc++) {
          c = combinedImage.pixels[loc];
          r = constrain(((c >> 16) & 0xFF) - rreduction, 0, 255);
          g = constrain(((c >> 8) & 0xFF) - greduction, 0, 255);
          b = constrain((c & 0xFF) - breduction, 0, 255);
          brightness = 3*r + b + 4*g;
          oldc = oldImage.pixels[loc];
          oldr = (oldc >> 16) & 0xFF;
          oldg = (oldc >> 8) & 0xFF;
          oldb = oldc & 0xFF;
          oldbrightness = 3*oldr + oldb + 4*oldg;
          brightnessdifference = brightness - oldbrightness;
          if (brightnessdifference > 0) {
            combinedImage.pixels[loc] = 0xFF000000 | (r << 16) | (g << 8) | b;
          } else {
            combinedImage.pixels[loc] = 0xFF000000 | (oldr << 16) | (oldg << 8) | oldb;
          }
        } //end of for each pixel
      } //end of check for using the just added image
    } //end of for each image
    combinedImage.updatePixels();
    combinedImage.save("//DESKTOP-G142UH9/LightTraceMemory/Images/ImageCombineRandom2/" + filename + "_" + j + ".JPG");
    println("Saved combined image");
  }
  exit();
}

//This function returns all the files in a directory as an array of Strings
String[] listFileNames(String dir) {
  File file = new File(dir);
  if (file.isDirectory()) {
    String names[] = file.list();
    ArrayList<String> names2 = new ArrayList<String>();
    for (String name : names) {
      String imagesuffix = name.substring(name.length()-4, name.length());
      //println(imagesuffix);
      if (imagesuffix.equals(".JPG")) {
        names2.add(name);
        //println(name);
      }
    }
    String[] names3 = names2.toArray(new String[0]);
    return names3;
  } else {
    //If it's not a directory
    return null;
  }
}
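The selection step in the script above keeps each image with probability 5/N, where N is the number of images on disk, so each random overlay contains about five swings on average (the exact count varies from overlay to overlay). A small Python sketch of that selection step, with the random source injectable so the behaviour can be checked deterministically (file names are illustrative):

```python
import random

def pick_for_overlay(names, target=5, rng=None):
    """Select each name independently with probability target/len(names),
    so on average `target` images are chosen per overlay."""
    draw = rng if rng is not None else random.random
    p = min(float(target) / len(names), 1.0)
    return [n for n in names if draw() < p]
```

Independent per-image draws keep the script simple at the cost of a variable count; drawing a fixed-size random sample (e.g. `random.sample(names, 5)`) would pin the count at exactly five.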