Graphic statics in Geogebra and RhinoVault

Graphic statics is a powerful method of determining the load path through a structure, with the ‘form diagram’ (showing the forces and structural geometry in space) and the ‘force diagram’ (linking those forces together topologically) directly affecting one another. A geometry optimised for a particular criterion for forces (e.g. a constant tension force in the bottom chord) can be found easily.

Graphic statics is typically restricted to work performed by hand; traditional structural analysis software is almost invariably based on matrices, which are hard to translate into a force diagram. This has meant that, for many structures, the force diagram is effectively ignored.

Geogebra, by creating a parametric paper space, allows the development of graphic statics solutions that can be immediately varied and explored, increasing their usefulness beyond pen and paper, as well as making constructions quicker and more accurate. It is used by the Block Research Group to help teach graphic statics, and I have found their materials useful in creating a couple of examples of my own.

The first is made by directly following these instructions:

Although this example is taken from a much older version of Geogebra, both the example template and the example itself worked fine for me in Geogebra Classic version 6.0.574.

I ended up here:

I then progressed to a (very slightly) more complicated example, with three loads rather than two. Here, I needed to take two steps to find the overall resultant force, rather than one, which is why there is an intermediate resultant force (in dotted blue). Once I had completed the first example, Geogebra started showing its strength over pen and paper – at least for me. I was able to work more quickly, more cleanly, and with fewer distractions than I would have faced on paper.
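The construction above amounts to adding the load vectors head-to-tail in the force diagram; a quick numeric sketch (the loads here are made-up values, not those from my diagrams):

```python
# Resultant of three point loads by head-to-tail vector addition,
# mirroring the force-diagram construction (illustrative values only).

def resultant(loads):
    """Sum a list of (Fx, Fy) force vectors."""
    fx = sum(f[0] for f in loads)
    fy = sum(f[1] for f in loads)
    return (fx, fy)

loads = [(0.0, -10.0), (5.0, -20.0), (-3.0, -15.0)]

# Intermediate resultant of the first two loads (this step corresponds
# to the dotted blue line in my diagram)...
intermediate = resultant(loads[:2])
# ...then combine with the third load for the overall resultant.
overall = resultant([intermediate, loads[2]])

print(intermediate)  # (5.0, -30.0)
print(overall)       # (2.0, -45.0)
```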

A final advantage is the ability to create simple animations of the force and form diagrams, exploring the design space within the constraints of the initial set-up of the model. Geogebra seemed to struggle with exporting anything but the smallest .gif files. I ended up just moving a slider manually, using Greenshot’s ‘capture previous region’ tool to get a series of images, and then stitching these together using PhotoScape to make the gif below:

This can be expanded to arches and systems of arches, as below, and as shown in the examples of the Block Research Group.

Form diagram for a system of arches
Force diagram for this system of arches

Here, the limits of Geogebra start to become clearer (at least at my level of understanding). The program starts to slow significantly, and the lack of any easy way to add layers and manage the increasing complexity of the diagram leads to messiness. Geogebra does seem to be an extension of pen and paper rather than a tool that really allows new forms to be explored.

Building from here, more sophisticated versions are possible in Geogebra in 3D – although I haven’t attempted this. The interface in Geogebra isn’t really sophisticated enough to allow easy input – I would switch to Rhino+Grasshopper+RhinoVault at this point.

From here, work in 3D is best attempted in Rhino+RhinoVault, an extension developed by the Block Research Group which is freely available (note, however, that it only runs in Rhino 5, and installing its dependencies was quite fiddly). Its outputs need the experience gained at the lower level offered by Geogebra and pen-and-paper work – otherwise you risk garbage in, garbage out (GIGO), making something that looks good but makes no sense. It is also aimed at producing vault-like geometries, rather than solving previously defined topologies (such as optimising a truss, as described in Bill Baker of SOM’s Structural Optimisation Using Graphic Statics).

After just a couple of hours of work I found RhinoVault easy enough to use, but controlling it well required a greater level of skill and understanding than that time could give me. Even so, I was able to create some quick models and (very attractive) diagrams.

Form generated using RhinoVault
Left: form diagram (just the structure viewed in plan). Right: force diagram, for the vault form above
Modified force diagram (trying to put a series of ridges in the diagram) – N.B. This can’t work!

By manipulating the force diagram, ridges can be introduced such as:

Left: form diagram. Right: force diagram with ridges introduced. Rerun rvHorizontal and rvVertical in RhinoVault to introduce these changes.
Vault formed with three ridges visible
Elevation of vault with three ridges

Similarly, the form diagram can be altered to change the boundary conditions. In this case, I have removed the support from three of the six outer edges:

Support removed from three of outer edges – some remeshing needed…
Perspective view of vault with open sides

In short, RhinoVault is clearly a very powerful tool in the right hands; with the geometry already in Rhino, export to analysis packages for post-processing and more detailed analysis is straightforward. However, until you have the right level of skill, making fine adjustments is difficult – and can lead to significant amounts of trial and error. It is quite hard to define what ‘good design’ is using this tool, and it is very easy to skip past forms that do not solve fully. As far as I can tell, RhinoVault is no longer being maintained (very sadly, its principal author seems to have died a couple of years ago) and Rhino 5 will slowly but surely become harder to run over time.

The Block Research Group and its collaborators have done an amazing job creating and then distributing this tool! RhinoVault is pretty well documented, although the tutorials are now out of date – in particular some of the rvModification commands are quite hard to understand without an up to date tutorial, but basic usage is still the same. Their other work is also really well documented, with most papers and articles freely available on their website.

IABSE Future of Design Conference 2019 poster

Our poster:

A colleague at WSP and I entered the design competition at the IABSE Future of Design conference. We were really pleased to be shortlisted (though gutted to fall at the final hurdle, obviously!) and it was interesting to see the other shortlisted entries – it is always fascinating to see how others have responded to the same brief. Our poster is here, and images from it are below:

Elevation – bridge span is 25m
Plan – bridge span is 25m
Render of typical voussoir – width is 4m

Our poster put quite a few ideas to the test; some worked well, some not so well. The good:

  • Using photoshop to transform sketches – neither my collaborator nor I are good ‘presentation quality’ sketchers (although we are good enough ‘conversational sketchers’), and would usually turn to a rendered model for a competition. Photoshop quickly turned rough sketches into something of acceptable quality for the competition. There was no 3D model of the whole bridge to produce, which sped things up (but do see ‘the bad’, point 1). Most of the sketches were on top of some rough lines produced in Rhino to keep things to scale. I think the sketches are simple and clear, seem unlaboured, and are good enough quality for a poster.
  • Producing a better render in Rhino – with a bit of effort, I was able to preserve the linework in Rhino alongside the rendered surfaces, this produced a better render (after some playing with the lighting) than I have managed before. I hate Rhino renders without the edge lines, so this was a step up from a Rhino screenshot without losing its clarity.
  • Using PowerPoint to lay out the poster was just fine – no need for anything more sophisticated
  • Graphic statics – the justification for the design was developed using a reciprocal diagram, demonstrating that the slightly counter-intuitive ‘down-pulling’ cable worked as intended, and why it was necessary. It provided a way to put calculation onto a poster without it seeming out of place – it’s just a diagram. It was great to enter using a structural system that felt genuinely unusual and used the canyon’s sides to achieve what wouldn’t have been possible in another situation.
  • UHPC – the bridge used my collaborator’s knowledge of ultra-high performance concrete to slim it down even further compared to my original suggestion of stone, without just treating it as a ‘magic’ material
  • The inspirational projects section – it felt like a great way to draw on other great projects and ideas whilst avoiding any awkwardness of appropriation or similarity

The bad:

  • the renders and sketches weren’t strong enough compared to the competition. Producing a 3D view of the whole bridge would have taken more time, and additional space on the poster (carrying an opportunity cost), but not having one marked us out as less polished than the other shortlisted teams.
  • the text was too small and there was too much of it – I don’t think anyone read much and the quantity probably put some people off even starting. The whole poster was just too busy.
  • I’m not sure many at the conference (and perhaps even some of the judges) both read and really understood the graphic statics part of the poster; it isn’t a part of mainstream engineering education and can seem a bit unorthodox if you don’t know about it already. For this particular form, the reciprocal diagram isn’t especially pretty or eye-catching.
  • the poster was produced over the final weekend before entries closed – there wasn’t much time for reflection or iteration

For interest, other shortlisted posters from the day: team 3, team 4 and team 7 (we were team 17). Team 3 were the judges’ winners, and the audience vote winners.

A wider view:

The brief was quite interesting – put a footbridge about half way down a canyon, spanning between its two sides. Almost any form of bridge is possible, you have an endless supply of ‘skyhooks’ above and below your structure that can carry either compression or tension. As long as the bridge is light enough, you can just lift it in from the top, so you can prefabricate.

Some wilder ideas I had included:

  • Have a suspended path that runs all the way up the centre of the canyon, rather than a path dug into the sides
  • Just have people walk along the top of the canyon, and leave the place undisturbed – the experience of seeing the canyon from the top, using a bridge, is probably adequate
  • Build a tunnel (down one side, below the river and back up the other), so the crossing between the sides is completely invisible from the canyon
  • Transporter bridges (would need to be staffed, but carry a toll/suggested donation and that should cover it)
  • Move the picnic site to the other side of the canyon, so no crossing is required, and neither is a path up the canyon
  • Don’t have the picnic area at all, and do something else

So whilst I enjoyed responding to the brief given (it’s always fun to sketch out the design of a small footbridge), I do feel it misses an important part of being an engineer – the creation of a path up each side of the canyon needed real challenge, and there wasn’t space to do that within the competition. I don’t think a path dug in like this is appropriate for a national park, in particular a special location like a canyon:

  • It is irreversible for future generations; we should give them the opportunity to remove the path if they want
  • It would damage the river and canyon below during its construction, which would generate significant dust, noise and vibration
  • Rock fall protection along the path would probably require the installation of nets above the path further spoiling the canyon
  • The omission of edge protection for a path like this is strange – perhaps I and the other entrants should have omitted edge protection from our bridges, which are the same height above a drop, and are wider (albeit potentially more crowded), or at least shown the handrail continuing from our bridges on plan to show it connecting with a footpath edge protection.

An example I like is the canyon walk in Dollar, Scotland. For the majority of its length, the path winds over the ground on bridges, leaving the carpet of vegetation largely undisturbed, and stopping visitors from straying from the path and causing additional damage.

If this were a real project, I believe the responsible thing to do would be to press the client and architect on why they were constructing this path, and what other options had been explored – not just design the bridge.

It was interesting to note that the 2015 winner challenged the brief, re-siting the proposed bridge down the Thames to re-use an existing bridge. However, the best strategy, should you want to progress to the shortlist and beyond, still seems to be to respond to the brief in a more straightforward manner. In future years, it would be nice to see an honourable mention for an entry that challenges some aspect of the brief convincingly, or even a separate prize category.

More tests with the Anycubic Photon S

Continuing printing with the Anycubic Photon S, some issues seem to be working themselves out.

Caps: the magnet recesses still seem to be undersized, with an internal diameter between 5.0mm and 5.5mm creating a push fit for this component. A top-hole diameter between 1.0mm and 1.5mm (depth of 2mm, fillet radius 0.2mm at each end) leads to a hole that just opens up.

Models of caps to test opening of top hole for thread and fit of magnet in recess, all dimensions in image in millimetres
Test caps, from left to right: 0.5mm diameter top hole, 1.0mm diameter top hole, 1.5mm top hole; 5.0mm diameter magnet recess, 5.5mm diameter recess, 6.0mm magnet recess

Nodes: leaving the printer alone during the print (not using the pause function, and making sure I did not knock it) removed the z-wobble issues; using side-cutters removed the pock-marks in the sides of the node, with any remaining bumps removed with sandpaper. Working through increasing sandpaper grits should improve the surface until it cannot be differentiated from the parts of the node printed without supports.

Second printed node: leaving the printer alone during the print removed z-wobble issues. Again, very good and consistent fit of the magnets in their recesses.
Using side cutters to cut away support structure, and using a coarse sandpaper to remove any final bumps. A higher grit of sandpaper should reduce the appearance of the scratches to an acceptable level.

Getting started with the Anycubic Photon S

Why I have (reluctantly) bought a 3D printer:

I always thought I would never own my own 3D printer – I believed that the quality of results from desktop printing was too poor to be of use, and that the cost, space and maintenance requirements were too high. 3D printers in themselves don’t interest me much, only the prints do, so I wouldn’t take printing up as a hobby. Low-cost 3D print marketplaces allowed me to order prints by uploading .stl files, at very reasonable prices and lead times, so the fuss of owning my own printer never seemed worth it.

However, things have changed over the last couple of years, so I have changed my mind:

  • The marketplaces for 3D printing by amateur ‘desktop’ print shops have disappeared, replaced by higher-end shops (one marketplace, for example, has hidden its individual suppliers behind a single pricing system and increased the required quality of the shops it uses, both of which have led to increases in prices). This has made it harder to procure 3D printed parts of good desktop quality at a low price.
  • Desktop SLA printers have come onto the market at much lower price points. These have small build volumes but high levels of accuracy and detail (x = 47 microns, y = 47 microns, z > 10 microns for the Anycubic Photon S), which are great for my needs of model making and making small components.

I have purchased one of the cheapest SLA printers, the Anycubic Photon S – the prints are very good quality, but it is hard work (and exactly the kind of work I try to avoid)!

The printer that arrived only printed with half the UV screen working, producing the print shown below:

First test model: neatly divided in two due to screen error

Fixing the printer:

The first suggestion from Anycubic’s support was to detach and reattach the cable from the motherboard to the UV screen; this didn’t work. After that, I agreed to attempt a motherboard replacement (the motherboard took about 10 days to arrive) in return for a free 1 litre bottle of resin (worth about £30); this did the trick and the printer now functions properly. In all, these issues and Anycubic’s reaction to them were disappointing. The printer comes with a neat card saying it has passed through quality control, and yet it still had a fundamental issue. I felt that Anycubic relied too much on my goodwill and technical skill to fix a defective product, expecting me to perform a motherboard replacement (incurring up to two weeks’ delay and a couple of hours’ work, with no guarantee of a fix) without any form of compensation – I had to threaten to return the printer to Amazon before the free resin was offered. In my view, Anycubic should have immediately offered a replacement or refund upon learning of the issue, alongside offering the motherboard replacement with an incentive.

First few prints:

Installing the slicer and importing the .stl files generated in Rhino3D was quick and easy. The supports auto-generated by the slicer were obviously inappropriate, so I added supports manually, which was easy. Slicing a file takes around 5 minutes, and the process of saving to a USB stick and plugging this into the printer is simple (and much less potential hassle than some kind of WiFi connection).

I started by printing some caps to go at the base of an icosahedron (these terminate a thin thread and hold a magnet). The general quality of the prints was very good: nearly invisible layer lines and few visible printing artefacts. Issues to resolve: the dimensional accuracy of the inside of the cup – the printer seems to have over-cured slightly on the inside, meaning that the magnets don’t quite fit – and the hole at the top of the cup, whose diameter I had to increase significantly to make sure it didn’t close up. I need to find ways to print smaller holes reliably.

Printed caps x2: note manually added support structure, plug for scale

First three versions of caps printed; note the increased size of the centre hole, until the large hole in v3 actually gives an opening

v3 cap modelled in Rhino. The internal reverse fillet doesn’t seem to have had an effect, so I will omit it in future models.

Pausing the print to check on progress (and probably damaging the prints in the process)

First few nodes:

The first node I printed detached from its supports during the print (I think at the exact moment I paused the print to inspect it and it was lifted out of the pool of uncured resin). I increased the level of support for the second print and this wasn’t an issue. Both prints gave a good initial impression of the accuracy of the machine, showing excellent levels of detail.

Unlike the caps, I seemed to hit the dimensions of the holes for the magnets perfectly here: each one was a consistent push fit, which bodes well for the accuracy of the machine. This level of fit was as good as anything I have had professionally printed. The green resin was a little darker than I had expected, but that is just a note; I have clear resin to try after the green is used up (I could try mixing them?). There are small issues on the node, with pockmarks left where the supports were snapped off – I will need to find ways to improve this over time and with practice. There are two instances of z-axis wobble visible in the node; one, I believe, occurred when I paused the print to inspect its progress.

First and second nodes, first print failed by detaching from supports during the print

Version 2 printed nodes prior to magnet insertion, note pockmarks on top surface where supports have been removed, and ring showing z-axis wobble when print paused for inspection

Printed version 2 node, note excellent fit of magnets and layer lines visible by top magnet

Magnets inserted into version 2 node, note layer lines by magnet and compared to finger print for scale

Other issues noted during printing:

  • SLA printing is surprisingly wasteful – so far (over about 4 prints) I have used over 10 pairs of nitrile gloves, a lot of paper towels, about 300ml of isopropyl alcohol and several resin-straining sieves. The waste generated is contaminated by sticky uncured resin, which is harmful.
  • Avoiding contamination of surrounding areas by uncured resin required discipline and patience.
  • Processing prints (e.g. decanting resin) during the day can lead to a skin forming on the resin which then needs to be removed; this is messy and annoying. I recommend processing prints at night, or with the curtains shut, to reduce the UV light present.
  • Print processing needs to take place within a few hours of the print completing (i.e. the morning after the print is run), and you need to plan for this.
  • The print bed often slips during removal of prints, so it needs re-levelling after every use; this takes about 3 minutes.
  • Filtering the resin back into the bottle often leads to spillage, use a funnel.

In all:

Overall, the experience of owning a 3D printer has been much as I expected: messy, hard work, quite unreliable, but also rewarding, with the occasional result that is much better than expected. If to-order 3D print prices had remained low I would have been right not to purchase one, but times and markets change. However, I really look forward to being able to iterate models over time to perfect small details, and learn a little about the art of producing good results reliably.

Working with Aconex

Aconex is a document management system commonly used in the construction industry – it is used primarily by large developers and clients to manage construction documents (for example drawings, calculation packs, risk assessment method statements, material approval forms and so on).

Each of these documents passes through a series of checks before becoming ‘approved’ in some way. These checks usually culminate in approval from one or more engineers, managers, or, for more specialised and technical documents, ‘appointed persons’ (someone nominated with a particular level of responsibility or skill set – e.g. the person responsible for electrical safety on a project). Once the document has been fully approved it may be used, enabling the activity it describes to proceed; this might be the construction of the structure shown on the approved drawing, or the activity described in the approved method statement.

Aconex allows these approval processes to be described using its workflow processes, which help to automate approvals and feedback in the case of non-approval, but the approvals process can be completed manually if needed (although this is very labour intensive and pretty soul destroying – computers should be allowed to do what they do best!).

However, the key disadvantage of a more manual approvals process is that the exact status of a document can be hard to discern without several minutes of effort. Even the best trackers, known to be complete and regularly updated, cannot give you an up-to-the-minute document status. Every time a status is needed, you consult the tracker to find the status at the time of the last update, then check for any correspondence issued since then that updates this understanding. In my experience, developing and maintaining a fully up-to-date tracker for a significant set of documents (say 100–200 documents) can be a full-time job. If you have no technical input on the documents in the tracker, then you could probably manage about 5–10 times as many.

Fortunately, Aconex has a well developed API that allows querying of the database of documents, as well as each document and its history, including where necessary the ability to download the document itself. This allows each document to be tracked, using some simple questions, so that either the tracker can be updated automatically or the user alerted to activity involving a document, indicating the tracker is now out of date and action needs to be taken. Other uses of the API include creating an up to date folder of every document of a particular type that is very easily accessible by the team, and helping to discern the status of a whole project (for example – how many documents have been sent to this person for approval but have not been returned by them?). Whilst implementing this ‘flow chart’ of decisions to find the status of a document cannot be completely accurate, it can help the engineer significantly by making sure all the simple questions about a document’s status have been asked – in particular, increasing confidence about whether a document has changed since the tracker was last updated.
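As a sketch of that ‘flow chart’ of simple questions (the field names, statuses and dates here are hypothetical, not Aconex’s actual API fields), a small classifier can flag whether a tracker entry can still be trusted:

```python
from datetime import datetime

# Hypothetical sketch of the tracker-checking logic: compare the date a
# document's tracker entry was last updated against the latest activity
# seen for that document in Aconex (all names and statuses are made up).

def classify(tracker_updated, last_activity, tracker_status):
    """Return a rough status flag for one tracked document."""
    if last_activity is None:
        # No activity found at all - the document may not exist yet.
        return "not found"
    if last_activity > tracker_updated:
        # Something has happened since the tracker was updated:
        # the engineer needs to review the correspondence.
        return "tracker out of date"
    if tracker_status in ("approved", "approved with comments"):
        return "approved - ok to use"
    return "awaiting approval"

print(classify(datetime(2019, 6, 1), datetime(2019, 6, 5), "submitted"))
# -> tracker out of date
```

The real versions of these checks query the Aconex API for each document’s history rather than taking dates as arguments, but the decision logic is of this shape.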

As part of my work, I created a few small programs to explore these possibilities:

  • One creates a query and sends it to Aconex (a small C# program, compiled to create an .exe)
  • One downloads documents given certain search parameters (creating an up to date folder of documents for team use) – in Python 2.7
  • One downloads all Aconex Mail items
  • One searches fairly exhaustively for indications a document has been updated, allowing an engineer to see whether a tracker is out of date – in Python 2.7

C# code (compiled to .exe) to call Aconex API and receive .xml file of search results or document metadata:

C# code (compiled to .exe) to call Aconex API and receive other files:

Python script to download all documents returned by a search with wildcards:

Python code to classify document statuses:

Photospheres in dark locations

The original intention of using photospheres was to make photographs, especially site photographs, more navigable, and capture as much contextual information as possible. On the whole, this has proved really effective for certain kinds of projects where site access is more difficult, or spending extended periods of time at a site isn’t desirable.

However, dark basements were foiling this approach. I wasn’t able to light the whole scene simultaneously, so the result was a series of partially lit photospheres (with 75% or so of each photosphere in complete darkness). Each of these was no more useful than a non-contextualised photograph; I have several photospheres that I cannot accurately place.

Fortunately, I was able to draw on my previous work with LightTraceMemory. There, a multiple-exposure technique was used (employing Processing) to overlay several images. If the camera is kept exactly still on a monopod, and several photographs are taken with the lighting cast around the camera so that the entire scene is lit in at least one photograph of the series, then these can be overlaid to create a single, fully lit, photosphere. To overlay, the same pixel is compared across all the images, and the brightest one chosen for the final image.
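The brightest-pixel overlay can be sketched as follows (my original implementation was in Processing; this is a numpy rewrite using a tiny made-up 2×2 ‘image’):

```python
import numpy as np

# Brightest-pixel overlay: for each pixel location, keep the value from
# whichever exposure was brightest there (brightness taken as the sum of
# the RGB channels).

def overlay(images):
    """images: list of HxWx3 uint8 arrays taken from a fixed camera."""
    stack = np.stack(images).astype(np.int32)   # N x H x W x 3
    brightness = stack.sum(axis=3)              # N x H x W
    best = brightness.argmax(axis=0)            # H x W: index of brightest exposure
    h, w = best.shape
    rows, cols = np.mgrid[0:h, 0:w]
    return stack[best, rows, cols].astype(np.uint8)

dark = np.zeros((2, 2, 3), dtype=np.uint8)      # fully unlit exposure
lit = np.full((2, 2, 3), 200, dtype=np.uint8)   # mostly lit exposure...
lit[0, 0] = 0                                   # ...with one unlit corner
print(overlay([dark, lit])[1, 1])               # [200 200 200]
```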

However, my PPE (orange high-visibility shirt and trousers) was causing an issue. Frequently, the brightest of the pixels would be my, or someone else’s, orange PPE, from a photograph where we were accidentally partially lit. This led to useful detail being obscured behind bright orange streaks. As I come across bright orange objects quite infrequently in basements, a good way to differentiate between useful and non-useful data is how orange it is: useful information is rarely orange, non-useful information frequently is. Therefore, I needed a way to filter out pixels that were ‘too orange’ from each image. I achieved this crudely, using the dot product of the normalised RGB vector of the pixel colour and a normalised RGB vector for pure orange, and setting a limit beyond which the pixel would be rejected – 0.9 to 0.95 seemed to work well.
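A sketch of that ‘too orange’ test (the reference orange RGB value and the sample pixels here are illustrative, not the exact values I used):

```python
import numpy as np

# Reject pixels that are 'too orange': compare each pixel's normalised
# RGB direction with a normalised reference orange via a dot product,
# and reject anything above a threshold (0.9-0.95 worked well for me).

ORANGE = np.array([255.0, 100.0, 0.0])
ORANGE = ORANGE / np.linalg.norm(ORANGE)

def too_orange(pixel, threshold=0.95):
    """pixel: RGB triple. True if its hue direction is close to orange."""
    v = np.asarray(pixel, dtype=float)
    n = np.linalg.norm(v)
    if n == 0:
        return False               # pure black carries no hue information
    return float(v / n @ ORANGE) > threshold

print(too_orange([250, 95, 10]))   # True  (hi-vis PPE)
print(too_orange([120, 90, 60]))   # False (brownish brick)
```

Normalising both vectors first means only the hue direction matters, not the brightness, so dim PPE is rejected just as readily as brightly lit PPE.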

Taking a typical scene (outside this time, as there are a few more orange objects to choose from), and replacing any ‘too orange’ pixels with pure black (note the barriers, my PPE, and the yellow on the Legible London sign):

Typical outdoor scene with ‘too orange’ pixels removed (as part of testing)

This technique proved good enough, although it did tend to ‘rub out’ brickwork that was quite orange in colour. Therefore, I added one more step: accumulating the rejected ‘too orange’ pixels onto another overlaid image (again taking the brightest for each pixel location), and adding in pixels from this ‘too orange’ overlay, as a final step, wherever the existing overlay is near-black (meaning no pixel data made it through the ‘too orange’ filter for any of the photos, or that area remained unlit in every photo, so carries no useful information anyway). This helps to fill in some of the areas of brickwork and other fairly orange items, without allowing PPE to obscure detail in other photographs.
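The final add-back step can be sketched as (the threshold and pixel values are illustrative):

```python
import numpy as np

# Add-back step: where the filtered overlay stayed near-black (no
# non-orange pixel ever lit that location), fall back to the overlay
# of rejected 'too orange' pixels.

def add_back(filtered, orange_overlay, black_threshold=10):
    """Both inputs are HxWx3 uint8 overlays of the same scene."""
    near_black = filtered.sum(axis=2) < black_threshold   # H x W mask
    result = filtered.copy()
    result[near_black] = orange_overlay[near_black]
    return result

filtered = np.zeros((1, 2, 3), dtype=np.uint8)
filtered[0, 1] = (120, 90, 60)        # brick that survived the filter
orange = np.full((1, 2, 3), (250, 95, 10), dtype=np.uint8)
out = add_back(filtered, orange)
print(out[0, 0])   # [250  95  10] - filled in from the orange overlay
print(out[0, 1])   # [120  90  60] - left untouched
```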

Basement example 1: 5 photos are taken with different parts of the scene lit by a lamp:

For each pixel location, the ‘too orange’ pixels are filtered out and the brightest remaining pixel kept, creating an overlay:

Overlay, without any ‘too orange’ pixels

Overlay of the ‘too orange’ pixels

Where the original overlay is black and there is information in the ‘too orange’ overlay, the orange overlay is added back in, and a little more detail is preserved. Note that the original overlay may not have been dark enough in some areas (e.g. the inner parts of the ladder), so the ‘too orange’ overlay has not been used there. This can be corrected, where useful, by increasing the threshold at which the ‘too orange’ overlay is used (for this overlay it was set to almost completely black, so quite strict).

These overlaid images can be run through the previous photosphere scripts to create a 360 view using krpano. The photospheres shown in this post are all available at this link [User: UsefulSimple, Password: RrlMrKArf2eP], where the different images and overlays can be seen.

Other technical issues, before I forget that I dealt with these:

  • the Hugin scripts used to process the Samsung ‘2 circles’ photosphere format into the equirectangular projection used by krpano require the exif data for each image to list the camera model as a Gear 360. This meant that, initially, none of the photos produced by the overlay script would pass through the existing scripts. I added a check within the Python script to ensure every image passed to the Hugin scripts had its exif data set correctly.
  •  the process of overlaying the photos usually took place off the memory card of the camera, so the suggested directories for the photo download locations did not work as expected when I ran the code without the memory card in place
  • when taking the photos (e.g. in stairwells and basements), the photos typically came in a series of bursts (e.g. 5 photos at approximately 3-second spacing, followed by at least 15 seconds whilst I moved the monopod to the next location). I used sklearn’s implementation of the k-means algorithm to group the photos automatically using their exif timestamps. This can easily be taken much further if I want to do so later, for example:
    • taking the distance from the group mean for each point, decreasing the number of means from the number of photos until the largest and second largest group mean to point distances increase very suddenly (as the k-means will suddenly be forced to combine two bursts of photos into one scene). Use this to count the number of scenes rather than doing this manually.
    • making a small GUI that allows the user to change the number of scenes manually (maybe with some help from the above program), and manually assign images to scenes if needed
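The burst-grouping step can be sketched as follows (the timestamps are made up, but follow the real pattern of short bursts separated by longer gaps):

```python
import numpy as np
from sklearn.cluster import KMeans

# Group photos into scenes by their exif timestamps: photos arrive in
# bursts a few seconds apart, with much longer gaps between scenes, so
# 1-D k-means on the timestamps (in seconds) separates the bursts.

timestamps = np.array([0, 3, 6, 9, 12,        # burst 1
                       40, 43, 46, 49, 52])   # burst 2, after moving the monopod

km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(timestamps.reshape(-1, 1))

print(labels[:5], labels[5:])   # each burst shares a single label
```

In the real script the timestamps come from each photo’s exif data, and the number of clusters is currently chosen manually (hence the ideas above for automating it).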


Prickly growth spurt

My cactus has put on a growth spurt over the last few weeks – following a disturbing ‘drooping’ episode where its two branches changed angle significantly over the period of about a month.

In early July the cactus was showing early signs of growing its smallest arm:

Six weeks later, its growth has been well beyond my expectations: the smaller arm is now almost as large as the other one, having grown at least another two areoles along and almost doubled its length:

I have enjoyed watching how it grows, the areoles appearing at the tip and unfurling outwards as the arm pushes them out of the way. For the small arm, the process has been surprisingly violent, with the unfurling of the arm from the end almost seeming to rip the ridges apart, and leaving a ‘tide mark’ of newer stem.

Tidemark passing between two ridges that previously joined at the end of the small arm. Note the change in texture of the stem before and after the tidemark, with a (more hydrophobic) dottier and darker appearance in the older flesh on the right-hand side.

Another, even clearer, change in texture at the growth spurt line, with a smoother texture for the newer flesh (on the left hand side)

The spines grow much like small branches (though I believe they are modified leaves), shrinking, drying and changing colour once each section is about a week old. The wetting qualities of the cactus have changed too: the older stem seems to be the most hydrophobic, followed by the newer stems (a slightly lighter green, if that is perceptible), followed by the older spines and the newer spines. The areoles, especially the quite furry new areoles, are positively hydrophilic, the water clinging to them across the whole surface. Reading Superhydrophobic and superhydrophilic plant surfaces: an inspiration for biomimetic materials, it seems most super-hydrophobic plant surfaces are found in wetlands, where keeping water away from the plant helps to prevent bacterial attack and stops dirt accumulating on the leaves. In the case of the cactus, I suspect the hydrophobic nature of the stems helps increase the proportion of any rain reaching the root system, and helps to prevent burning of the plant caused by lensing through attached water droplets after rain.

Note the wetting of the areoles, and the droplets that seem to have formed preferentially on the newer stems (even accounting for the confounding factor that the droplets might be more likely to form on the finer spines as they can fully enclose these more easily)

New arm growth a few minutes after water spraying

Growth at top of main stem, a few minutes after watering

The order of growth of the stems seems to be dominated by the availability of space at the end of each stem. Looking down from above the main stem, it is clear that the 7 ridges alternate somehow in growing a new areole and bump, though as there are 7 ridges it is not clear quite how this order is determined.

Looking down at the growth of the main stem

Noise-meter – with raspberry pi, and visualising its output



I was interested in the noise levels in a space, and wanted to visualise how they changed over time. In particular, I wanted to know about the distribution of noise levels over time, something that no single measure could really provide; even tools like a series of box plots felt clumsy. This is because the noises I am interested in are not the loudest ones: they typically raise the 60th-80th percentiles, though not always. I don’t have a strong intuition about an appropriate model for the noise levels, so choosing a model for their representation didn’t feel right; creating a graphic that keeps me as close to the raw data as possible seemed the best idea.


I drew inspiration from Edward Tufte’s wavefields (and here), spectrograms, histograms drawn over time, and my previous experimentation with violin plots. I also like the Tensorflow Summary Histograms, though I think my solution is less heavyweight and more intuitive to read for my data, which is very unlikely to be multi-modal.

Tensorflow Summary Histogram

A summary histogram clearly showing a bifurcation in a distribution over some third variable.

I wanted to take the spectrum of noise levels for each minute and plot that distribution in a fairly continuous way over time. In the end I cheated to get the effect I wanted – I drew a line graph with 60 series (the quietest reading in each minute forms the first series, then the second quietest and so on), and the rasterization process when this is saved as an image makes it appear like a spectrum – but the results seem effective, giving an intuitive sense of how the distribution of noise levels has varied over time, with a minimum of interpretation forced by the visualisation – I feel quite close to the data.
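The sorted-series trick can be sketched with synthetic data; the array shapes, file name and styling below are placeholders for illustration, not the script I actually ran:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless-safe backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# fake 24 hours of readings: one row per minute, 60 one-second dBA samples
readings = 45 + 10 * rng.random((24 * 60, 60))

# sort each minute's readings, so column k is the k-th quietest second
sorted_readings = np.sort(readings, axis=1)

fig, ax = plt.subplots(figsize=(10, 4))
for k in range(60):
    ax.plot(sorted_readings[:, k], linewidth=0.3)  # 60 thin overlapping series
ax.set_xlabel('minute of day')
ax.set_ylabel('noise level (dBA)')
fig.savefig('noise_spectrum.png', dpi=200)  # rasterization blends the lines
```

When saved at a modest dpi, the 60 thin lines merge into a continuous band whose density shows the shape of each minute’s distribution.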

24 hours of data viewed as a spectrum

Plot of 24 hours of noise levels (y-axis is noise level in dBA) – changes in the distribution of noise levels are immediately obvious, from a reduction in variance overnight to occasional increases in volume (at all percentiles) during the day when there is activity near the sound-meter.

I wanted to be able to see this distribution online at any time, so I just set the raspberry pi to generate and upload a graph each hour using the previous 24 hours of data – not fancy, but it does what I want! I will make more graphics when I get round to it: increasing interactivity on the web page, taking longer and shorter periods, plotting the mean noise level at that time of day over the last week, and so on.

Getting it done:

I used a sound level meter and connected this to a raspberry pi; the raspberry pi queries the meter for a reading each second. These readings are then saved each minute into a csv file on the raspberry pi. This script is started automatically on startup of the pi, so should run whenever the pi has power.


import datetime
import time

import usb.core

# find the USB sound level meter; these vendor/product IDs are for a common
# WENSN-style meter and are an assumption -- check yours with `lsusb`
dev = usb.core.find(idVendor=0x16c0, idProduct=0x05dc)
assert dev is not None

print(dev)
print(hex(dev.idVendor) + ',' + hex(dev.idProduct))

# create the first file in which to save the sound level readings
sound_level_filepath = "/home/pi/Documents/sound_level_records/"
now_datetime_str = time.strftime("%Y_%m_%d_%H_%M")
sound_level_file = open(sound_level_filepath + now_datetime_str, "w")

while True:
    now_datetime = datetime.datetime.now()
    # every minute, close the old file and start a new one
    if now_datetime.second == 0:
        sound_level_file.close()
        now_datetime_str = time.strftime("%Y_%m_%d_%H_%M", now_datetime.timetuple())
        sound_level_file = open(sound_level_filepath + now_datetime_str, "w")
    # the meter returns two bytes encoding the level in tenths of a dB
    ret = dev.ctrl_transfer(0xC0, 4, 0, 0, 200)
    dB = (ret[0] + ((ret[1] & 3) * 256)) * 0.1 + 30
    timestamp = time.strftime("%Y_%m_%d_%H_%M_%S", now_datetime.timetuple())
    print(timestamp + "," + str(dB))
    sound_level_file.write(timestamp + "," + str(dB) + "\n")
    sound_level_file.flush()
    time.sleep(1)  # one reading per second

Each hour, a scheduled task on the raspberry pi (set up using cron) creates a graph of the previous 24 hours of data and uploads it to my website, behind a username and password, so I can see the results by visiting the page. The code is below.


from datetime import datetime, timedelta
import ftplib
import glob
import os
import time

import numpy as np
import pandas as pd
import matplotlib  # backend set to 'Agg' in matplotlib's config file for headless use
import matplotlib.cm
import matplotlib.colors
import matplotlib.pyplot as plt

current_time = datetime.now()

combined_dataframe = pd.DataFrame()
x_index = []
list_of_filenames = []

# collect today's and yesterday's minute files
list_of_filenames.append(glob.glob('/home/pi/Documents/sound_level_records/' + time.strftime("%Y_%m_%d", current_time.timetuple()) + '*'))
list_of_filenames.append(glob.glob('/home/pi/Documents/sound_level_records/' + time.strftime("%Y_%m_%d", (current_time - timedelta(days=1)).timetuple()) + '*'))
list_of_filenames = [item for sublist in list_of_filenames for item in sublist]

print(len(list_of_filenames))

# import data from each minute, keeping only the last 24 hours
for filename in list_of_filenames:
    file_time = datetime.strptime(os.path.basename(filename), '%Y_%m_%d_%H_%M')
    time_difference = current_time - file_time
    if time_difference.days * 86400 + time_difference.seconds < 86400:  # 24 hours
        x = pd.read_csv(filename, header=None, names=['timestamp', 'dB'])
        x_ordered_data = x['dB'].sort_values().tolist()
        if len(x_ordered_data) == 60:  # skip incomplete minutes
            x_dataframe = pd.DataFrame(np.reshape(x_ordered_data, (1, 60)))
            combined_dataframe = pd.concat([combined_dataframe, x_dataframe])
            x_index.append(file_time)
combined_dataframe.index = x_index
combined_dataframe.sort_index(inplace=True)

fig = plt.figure(dpi=200, figsize=(10, 10))
jet = plt.get_cmap('jet')
cNorm = matplotlib.colors.Normalize(vmin=1, vmax=60)
scalarMap = matplotlib.cm.ScalarMappable(norm=cNorm, cmap=jet)

# one thin line per sorted series (the extreme percentiles are left out)
for count in range(2, 58):
    colorVal = scalarMap.to_rgba(count)
    plt.plot(combined_dataframe.index, combined_dataframe[count], linewidth=0.5, color=colorVal)

plt.savefig('/home/pi/Documents/Sound_meter_graphs/test.png')

# upload the finished graph to the website over FTP
ftp = ftplib.FTP("", "user", "password")
ftp.set_pasv(False)
with open('/home/pi/Documents/Sound_meter_graphs/test.png', 'rb') as f_file:
    ftp.storbinary('STOR test.png', f_file)
print("oh dear")
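The hourly schedule mentioned above can be set with a crontab entry along these lines (edit with crontab -e on the pi; the script path and name here are assumptions):

```shell
# run the graphing and upload script at the top of every hour
0 * * * * /usr/bin/python /home/pi/Documents/sound_graph_upload.py
```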

Looking back at this project, it has taken a surprisingly familiar form – I have largely constructed it out of existing pieces of code, bolting them together. Even my previous practice with connecting to the raspberry pi using SSH has been helpful, transferring code to and from the raspberry pi whilst it is running in headless mode.

One particular issue I had not come across before, and which was quite difficult to diagnose, was the raspberry pi not creating graphs when running headless, even though it produced them whenever I tested it with a screen plugged in. Perhaps unsurprisingly, the difference between the two situations was the connected screen: with no screen attached, matplotlib’s default backend could not load, so I needed to change the backend.


The usual fix of selecting the backend immediately after importing matplotlib did not work as it had for others. After a lot of frustration, changing the backend to ‘Agg’ in the configuration file for matplotlib seemed to work, as discussed here.
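For reference, the standard in-code way of selecting the backend looks like this (in my case it was the config file change that worked, but this is the usual first thing to try):

```python
import matplotlib
matplotlib.use('Agg')  # select the non-interactive backend before pyplot is imported
import matplotlib.pyplot as plt

# a minimal plot saved straight to file: works with no display attached
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig('headless_test.png')
```

The key constraint is ordering: the backend must be chosen before pyplot is first imported, otherwise the default (display-dependent) backend has already been loaded.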

Some visualisations of the catastrophic surface described by the Swallow Tail Pavilion

Just for interest, I reused parts of the code for LightTraceMemory to animate the catastrophes for the Swallow Tail Pavilion for the Chelsea Flower Show. Results below:

Icosahedron v3: More available – and an IABSE Future of Design Finalist

The third version of the icosahedron moves towards a more typical size, but puts its complexity in different places to make it more available and accessible for anyone who wants to build their own. All the geometric complexity (including the precious ‘compass walk’ for marking out the locations of the magnets on the wooden spheres for the nodes, and the end details of the rods) is taken into the computer, and output as a 3D print. You can see the whole icosahedron here and the individual nodes here. For both, user: Rudolf; password: Laban.

These 3D prints can be ordered from any 3D print shop with ease – marketplaces like 3Dhubs provide a good place to start – so the need for an in-depth understanding of 3D printing is eliminated. The icosahedron has been reduced to a similar level of complexity in construction as a piece of flat-pack furniture.

The ‘noding out’ of forces in the nodes, and the brittleness of the magnets limiting the force on each node, mean a clear resin – an unusual choice for structural components due to its brittleness – works well. The resin’s translucency reveals the magnets nicely, and its detailed prints and small layer thicknesses reduce the required tolerances, as well as producing a beautiful, nearly imperceptible ‘grain’ to the nodes as the layers reach the top of the sphere. Unfortunately, the tolerances are not small enough to create an acceptable interference fit with the magnets, so the icosahedron needs some of its magnets glued in place.

Drying the glue attaching the rod end connections to the rods – what else would such a chair have been designed for?

Rod with glued attachment onto a flat end and magnet (a true interference fit in the fitting, no glue needed, it just relies on the expansion of the attachment)

3D printed node with magnets inserted

3D printed node, rods and rod end attachments forming part of the constructed icosahedron

Other improvements have been made: most noticeably a change in the balancing system for the icosahedron, with the ‘feet’ of previous models replaced with extremely fine thread, a magnet and a steel sphere. This change has been a great step forward, for perhaps surprising reasons:

  • the icosahedron is more platonic as the threads do not ‘read’ at any distance
  • the new system isn’t immediately perceptible, giving a sense of magic until this small puzzle about how the icosahedron stands up is solved
  • they reduce the length of the package needed to carry the icosahedron by about 300mm, making it easier to fit into small cars
  • the strings make it easier for the icosahedron to adjust to slightly non-flat surfaces with ease

The constructed icosahedron, note the new balancing system at the base

Constructing the v3 icosahedron

Deconstructing the v3 icosahedron

Another change has been attaching each of the nodes directly to a rod, aiming to attach in places where tension will be present (some of the forces are larger due to the change in stability system). The nodes at each corner of the ‘table plane’ have been attached to the horizontal rod that defines the side of the door plane, carrying the tension across a glued joint rather than a more brittle magnet. This attachment also stops stray nodes running across the floor when the icosahedron is collapsed! The pre-attachment of the nodes makes construction of the icosahedron a little easier.

This icosahedron was given the ultimate test of its portability: I took it on the plane to Edinburgh to present in the IABSE Future of Design Competition as a finalist. The icosahedron behaved perfectly, surviving the trip, coming together at the end of the lunch break and collapsing (when I hit it hard) mid-presentation! This felt quite daring at the time. It was beaten, in my half of the presenters, by some really fantastic work on pre-cast concrete joints inspired by traditional Korean joinery, but I was really pleased just to show the icosahedron to other engineers, despite it really being a bit of fun. Special thanks go to Eva MacNamara at Expedition for reviewing and helping me improve my entry, as well as pointing the icosahedron in my direction in the first place. My entry paper is here: David Hewlett Laban Dance Icosahedron IABSE Future of Design 2018.

Following its trip to Edinburgh, the icosahedron was handed over to the Keep Fit Association at a small training session.

v3 Icosahedron with the Keep Fit Association