Harmonograph from spherical pendulums

Spherical pendulums produce some attractive forms on their own, but they have long been combined (it is quite a Victorian thing, apparently) to create compound shapes. The machines that make these shapes are called harmonographs, so called for their use in helping to visualise musical harmonies. As far as I know, harmonographs have only ever been curiosities, without any real practical application beyond the production of attractive postcards.

Harmonographs use pendulums in a variety of configurations, dragging a pen over paper to produce different shapes. Some use simple pendulums in combination with spherical pendulums. In every case, the complexity and beauty of the resulting form is far greater than the crude contraption it has come from.

A few harmonographs have been built using light and long-exposure photography, producing very beautiful images:

One artist (artist collective) has already done some of this before, though without interaction between the physical and the digital. I hear they are a big deal, so I am very happy to follow in their footsteps.

Someone online has made a 3D harmonograph using Python (as I hope to do very soon!) but has used plain sinusoidal patterns rather than proper decay functions to model a spherical pendulum – I am not even sure they used a model of a spherical pendulum rather than a series of simple pendulums. I wonder how mine might be different.

The solution for a simple harmonograph’s motion is here:

Download (PDF, 516KB)
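The closed-form solution in that pdf boils down to damped sinusoids, one pair per axis. As a quick illustrative sketch (not the derivation from the pdf – the amplitudes, frequencies, phases and damping constants below are invented for the picture, not taken from any real pendulum):

```python
import numpy as np

def harmonograph(t, terms):
    # each term is (amplitude, frequency, phase, damping): A*sin(f*t + p)*exp(-d*t)
    return sum(A * np.sin(f * t + p) * np.exp(-d * t) for A, f, p, d in terms)

t = np.linspace(0, 60, 20000)
# two pendulums contributing to each axis; near-ratio frequencies give the
# slowly precessing 'almost Lissajous' figures typical of harmonographs
x = harmonograph(t, [(1.0, 2.00, 0.0, 0.02), (1.0, 3.01, np.pi / 2, 0.02)])
y = harmonograph(t, [(1.0, 2.00, np.pi / 2, 0.02), (1.0, 2.99, 0.0, 0.02)])
# plotting x against y (e.g. with matplotlib) traces the harmonograph curve
```

The decay terms are what make the trace spiral inwards rather than simply retrace itself.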

Spherical Pendulums

Finding the governing equations, and forming a strategy for creating a really basic solver given some initial conditions:

Download (PDF, 122KB)

Then using this code to draw some pendulums and their movements through time:

For a simple pendulum, plotting its x and y coordinates against time, without any air resistance:


Then including some air resistance (see the pdf for detail) shows a roughly exponential decay of the amplitude of the oscillations over time, as would be expected:
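This decay can be checked in isolation with a few lines. A minimal standalone sketch of a damped simple pendulum, using the same position-then-velocity update as the full code below, but with linear rather than quadratic drag (a simplifying assumption – linear drag gives an exactly exponential envelope, exp(-c·t/2)):

```python
import numpy as np

g, l = 9.81, 1.0
c = 0.1            # linear drag coefficient (assumed value, not the pdf's alpha)
dt, n = 0.001, 60000
theta, thetadot = np.pi / 12, 0.0
trace = np.empty(n)
for i in range(n):
    thetadotdot = -(g / l) * np.sin(theta) - c * thetadot
    theta += thetadot * dt + 0.5 * thetadotdot * dt * dt
    thetadot += thetadotdot * dt
    trace[i] = theta
# for small angles the swing amplitude shrinks by roughly exp(-c*t/2),
# i.e. to about 5% of its starting value over these 60 simulated seconds
```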


The same approach is then taken with the more complicated spherical pendulum, which, with no air resistance considered, has a path that looks like this:


Adding in a small air resistance of alpha = 0.01 (see the pdf for an explanation) produces a slightly different path, as would be expected. The no-air-resistance path is in blue, the small-air-resistance path in green:


Following the green path for a longer period of time, we see the following pattern (now plotted in blue):


Setting alpha=0.05 leads to the following difference from the no air resistance path (no air resistance in blue, with air resistance in green):


Following this green path for a longer period of time we see the following pattern (now plotted in blue rather than green):


Perhaps most pleasingly, the later, gentle spiralling behaviour is clear here.


The code used to generate these plots is below; for detail on how it was derived, see the pdf at the top of the post:

The simple pendulum:

import numpy
import matplotlib
import pandas
import matplotlib.pyplot as plt
import time
from mpl_toolkits.mplot3d import Axes3D

def inputInitialConditions():
    print("initialising arrays and inputting initial conditions")
    t_initial = 0
    theta_initial = numpy.pi/5
    thetadot_initial = 0
    psi_initial = 0
    psidot_initial = 4
    numberOfTimeSteps = 20000
    lengthOfTimeStep = 0.001
    m = 1
    l = 1
    g = 9.81
    alpha = 0.04
    timeSteps = numpy.arange(numberOfTimeSteps)
    columnTitles = ['t', 'theta', 'thetadot', 'thetadotdot', 'psi', 'psidot', 'psidotdot', 'x', 'y', 'z']
    constants = {'m': m, 'l': l, 'g': g, 'lengthOfTimeStep': lengthOfTimeStep, 'numberOfTimeSteps': numberOfTimeSteps, 't_initial': t_initial, 'alpha': alpha}
    course = pandas.DataFrame(numpy.zeros([numberOfTimeSteps, 10]), index=timeSteps, columns=columnTitles)
    course.loc[0] = [t_initial, theta_initial, thetadot_initial, 0, psi_initial, psidot_initial, 0, 0, 0, 0]
    return [course, constants, timeSteps]

def completeInitialTimeStep(course, constants, timeSteps):
    g = constants['g']
    l = constants['l']
    m = constants['m']
    alpha = constants['alpha']
    t_initial = constants['t_initial']
    theta = course.loc[0, 'theta']
    psi = course.loc[0, 'psi']
    if numpy.absolute(theta) < 0.000001:
        theta = 0.000001
        print('Important Warning: Very small theta on timestep 0 generating errors, use simple pendulum program')
    if numpy.absolute(theta) > numpy.pi/2:
        theta = numpy.pi/2
        print('Important Warning: Very large theta, potential errors or unphysical behaviour')
    sintheta = numpy.sin(theta)
    costheta = numpy.cos(theta)
    tantheta = sintheta / costheta
    sinpsi = numpy.sin(psi)
    cospsi = numpy.cos(psi)
    psidot = course.loc[0, 'psidot']
    thetadot = course.loc[0, 'thetadot']

    # accelerations use the same expressions as the main loop
    course.loc[0, 'thetadotdot'] = ((psidot**2)*costheta - g/l)*sintheta - alpha*l/m*numpy.absolute(thetadot)*thetadot
    course.loc[0, 'psidotdot'] = -2*psidot*thetadot/tantheta - alpha*l/m*numpy.absolute(psidot)*psidot*sintheta
    course.loc[0, 't'] = t_initial
    course.loc[0, 'x'] = l*sintheta*cospsi
    course.loc[0, 'y'] = l*sintheta*sinpsi
    course.loc[0, 'z'] = l*costheta
    return course

def calculatePendulumCourse(course, constants, timeSteps):
    g = constants['g']
    l = constants['l']
    m = constants['m']
    alpha = constants['alpha']
    lengthOfTimeStep = constants['lengthOfTimeStep']
    previoustheta = course.loc[0, 'theta']
    previousthetadot = course.loc[0, 'thetadot']
    previousthetadotdot = course.loc[0, 'thetadotdot']
    previouspsi = course.loc[0, 'psi']
    previouspsidot = course.loc[0, 'psidot']
    previouspsidotdot = course.loc[0, 'psidotdot']
    for timeStep in timeSteps[1:]:
        theta = previoustheta + (previousthetadot * lengthOfTimeStep) + ((previousthetadotdot * lengthOfTimeStep**2)/2)
        course.loc[timeStep, 'theta'] = theta
        sintheta = numpy.sin(theta)
        costheta = numpy.cos(theta)
        tantheta = numpy.tan(theta)
        if numpy.absolute(theta) < 0.000001:
            sintheta = 0.000001
            tantheta = sintheta
            print('Important Warning: Very small theta on timestep ' + str(timeStep))
        psi = previouspsi + (previouspsidot * lengthOfTimeStep) + ((previouspsidotdot * lengthOfTimeStep**2)/2)
        course.loc[timeStep, 'psi'] = psi
        sinpsi = numpy.sin(psi)
        cospsi = numpy.cos(psi)
        if numpy.absolute(psi) < 0.000001:
            sinpsi = 0.000001
            print('Important Warning: Very small psi on timestep ' + str(timeStep))
        thetadot = previousthetadot + previousthetadotdot * lengthOfTimeStep
        course.loc[timeStep, 'thetadot'] = thetadot
        psidot = previouspsidot + previouspsidotdot * lengthOfTimeStep
        course.loc[timeStep, 'psidot'] = psidot
        thetadotdot = ((psidot**2)*costheta - g/l)*sintheta - alpha*l/m*numpy.absolute(thetadot)*thetadot
        course.loc[timeStep, 'thetadotdot'] = thetadotdot
        psidotdot = -2*psidot*thetadot/tantheta - alpha*l/m*numpy.absolute(psidot)*psidot*sintheta
        course.loc[timeStep, 'psidotdot'] = psidotdot
        course.loc[timeStep, 't'] = timeStep * lengthOfTimeStep
        x = l*sintheta*cospsi
        y = l*sintheta*sinpsi
        z = l*costheta
        course.loc[timeStep, 'x'] = x
        course.loc[timeStep, 'y'] = y
        course.loc[timeStep, 'z'] = z
        previoustheta = theta
        previousthetadot = thetadot
        previousthetadotdot = thetadotdot
        previouspsi = psi
        previouspsidot = psidot
        previouspsidotdot = psidotdot
    return course

def plotPendulumCourse(course):
    xs = course['x']
    ys = course['y']
    zs = course['z']
    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.plot(xs, ys, zs)
    plt.show()

def main():
    print("Starting spherical pendulum")
    t0 = time.time()
    [course, constants, timeSteps] = inputInitialConditions()
    course = completeInitialTimeStep(course, constants, timeSteps)
    course = calculatePendulumCourse(course, constants, timeSteps)
    print(time.time() - t0)
    print("Completed spherical pendulum")
    plotPendulumCourse(course)

if __name__ == '__main__':
    main()
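As a sanity check on the scheme above (a standalone re-implementation of the same update rule, not part of the original script): with no air resistance the spherical pendulum should conserve both its energy and the vertical angular momentum sin²(θ)·ψ̇, so both should drift only slightly over a couple of simulated seconds:

```python
import numpy as np

g, l = 9.81, 1.0
dt, n = 0.001, 2000
theta, thetadot = np.pi / 5, 0.0
psi, psidot = 0.0, 4.0

def energy(theta, thetadot, psidot):
    # energy per unit mass; theta measured from the downward vertical
    return 0.5 * l**2 * (thetadot**2 + np.sin(theta)**2 * psidot**2) - g * l * np.cos(theta)

E0 = energy(theta, thetadot, psidot)
Lz0 = np.sin(theta)**2 * psidot  # conserved vertical angular momentum (per unit m*l^2)
for _ in range(n):
    thetadotdot = (psidot**2 * np.cos(theta) - g / l) * np.sin(theta)
    psidotdot = -2.0 * psidot * thetadot / np.tan(theta)
    theta += thetadot * dt + 0.5 * thetadotdot * dt * dt
    psi += psidot * dt + 0.5 * psidotdot * dt * dt
    thetadot += thetadotdot * dt
    psidot += psidotdot * dt
# energy(theta, thetadot, psidot) and sin(theta)**2 * psidot should now be
# within a few percent of E0 and Lz0
```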


The comparison plot:

import numpy
import matplotlib
import pandas
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

course0 = pandas.read_csv(r'C:\Users\david\Desktop\spherical_pendulum_no_air_resistance.csv')
course1 = pandas.read_csv(r'C:\Users\david\Desktop\spherical_pendulum_0.01_air_resistance.csv')
course5 = pandas.read_csv(r'C:\Users\david\Desktop\spherical_pendulum_0.05_air_resistance.csv')
course10 = pandas.read_csv(r'C:\Users\david\Desktop\spherical_pendulum_0.1_air_resistance.csv')

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.plot(course0['x'], course0['y'], course0['z'], label='alpha = 0')
ax.plot(course1['x'], course1['y'], course1['z'], label='alpha = 0.01')
ax.plot(course5['x'], course5['y'], course5['z'], label='alpha = 0.05')
ax.plot(course10['x'], course10['y'], course10['z'], label='alpha = 0.1')
ax.set_xlabel('x value')
ax.set_ylabel('y value')
ax.set_zlabel('z value')
ax.legend()
plt.show()

The world is inside out


Another day, another door on the tube. But Donald J. Trump is now president-elect of the United States, so I can’t be more odd than the median(ish) American.

The inside out

Door on the tube

Today I carried an eBayed door across London. The second door can wait; my forearms hurt.

But so many people engaged with it – mostly with jokes, knocking on it, or making up stories as to why it was there.

There was something about this door, ripped from its doorway, that was strange and intriguing – the mundanity heightening the strangeness.

I agree – it is an unsettling thing to find. The inside has been taken out, literally: something usually pinned down has escaped, offering a small insight into personal taste that is normally left unshown, and bringing forward the doors other people have known.

Everyone liked doorways; not everyone liked the door, or doors.

Getting started with Grasshopper, working with set data

Sets are a subset of lists in grasshopper, with some useful restrictions, though on the whole their usefulness is quite limited.

Sets are limited to more primitive data types, such as numbers, strings and vectors, where there is an easy test for equality; they do not allow more complex objects like curves and breps. A set does not allow the same element to appear more than once, though note that items that look identical are not necessarily the same element – an integer and a float with the same value, for example, are treated as distinct.
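The int-versus-float point is worth a quick test in whatever scripting environment sits alongside, because the behaviour differs. In Python, for instance, 1 and 1.0 compare equal and hash identically, so a set collapses them rather than keeping them distinct; pairing each value with its type restores the Grasshopper-style distinction:

```python
s = {1, 1.0, "1"}
print(len(s))   # → 2: the int 1 and the float 1.0 collapse into one element

# pairing each value with its type keeps same-valued ints and floats apart
typed = {(type(v), v) for v in [1, 1.0, "1"]}
print(len(typed))   # → 3
```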

There are many functions in the sets tab in grasshopper that seem better suited to lists – for example, components that assume there might be identical elements in a set.

An introduction to the different containers for data available in grasshopper is here: http://www.grasshopper3d.com/forum/topics/lists-sets-strings-trees

Download (PDF, 621KB)

Getting started with Grasshopper: Working neatly, working with list data

First, good sources of information:







AAD_Algorithms-Aided Design by Fulvio Wirz


I am starting to use Grasshopper in anger now – and am managing to create things more quickly, and to explore ideas that would usually be very hard to reach when modelling by hand. I am taking a different approach to others who have been learning the software – I am looking at how it manipulates data and then thinking about how this could be applied, rather than approaching problems and trying to solve them directly. This is probably a reflection of my background as a more theoretically driven engineer, but I feel it has real benefits: I learn how the program works, and is supposed to work, rather than developing bad habits and workarounds. I will need to move on to creating real items eventually – though hopefully, when I do, I will be creating efficiently and working with the program, rather than against it.


There are several data primitives in grasshopper which closely align with the geometric primitives found in Rhino3D – points, vectors, curves of various types, rectangular surfaces and their breps, and fields. There are other primitives, not seen in Rhino, that control the flow of data: booleans, numbers, strings, colours, domains, matrices and file paths. Often these are properties of a Rhino primitive – the length of a curve expressed as a float, for example.


There seem to be significant differences between working in Rhino and in Grasshopper when it comes to ‘modelling with precision’ – in Grasshopper, precision is guaranteed by the environment (or at least mistakes tend to have catastrophic, and therefore very apparent, results – usually…). In comparison, when modelling in Rhino, discipline is required to keep a model working well and with good associative relationships, for example via the use of compound points and the appropriate object snaps.


See pdf for examples of items listed below:

Download (PDF, 914KB)


Working neatly in grasshopper:

Clustering: the most powerful way of working neatly – it combines many components into one, and the cluster can then be saved, reused and shared elsewhere.

Grouping – put a box round things

Aligning – make boxes line up nicely, not associative, which might be a mercy


Working with list data:

Lists from panels: type in data with a new line between each item, and turn off ‘multiline data’ in the options (right-click on the panel object).

Whilst lists of simple primitives are common (e.g. a range of numbers), the most common use of lists is for geometric primitives – for example, ‘set multiple lines’ on a line primitive will create a list of lines, and this list of objects can then be manipulated like any other list.

There are many different ways to create, interrogate and manipulate lists, some much more useful than others. See the pdf for examples.


Next up, working with tree data structures.

Thoughts on: The Selection of Design, Gordon L. Glegg

The Selection of Design is short – really short, taking maybe an hour to read through. This is understandable: looking at the timeline of Glegg’s books, this is the second, written four years after The Design of Design, which I enjoyed greatly. Unfortunately it suffers the fate of many second outputs – it fails to live up to the first because it contains only a few years of wisdom, whereas the first contained the best ideas of most of a career.

Many of the ideas are interesting and some are thought-provoking, though a few do not feel fully thought through.

  • There is little simple engineering left – or at least there shouldn’t be… I disagree with this – changes in opportunity and materials allow simple solutions where previously a workaround was required
  • Try to focus your thought, when starting to solve a problem, at the ‘interfaces’, to make these often-hard-to-analyse parts simple – in the case of a manufacturing run this is often the interface where, up to that point, value is measured in weight (to help account for variation in density) and afterwards in volume. For other systems the interfaces can often be seen at points where the type of energy changes radically. There is usually more than one interface in each system. When designing, start from each of these interfaces and work outwards. Agree with this.
  • As a strategy when faced with difficult and highly constrained problems, try solving the problem several times, each time with one of the constraints removed – what is common between these solutions? Try to reimpose the final condition by adjusting one of the partial designs. Agree with this.
  • Think about the context of the thing you are designing in its entirety and think about ways it might vary from your common experience – an example given is moisture in the air condensing on cool objects, making them wet, another that vibrations make nuts fall off bolts, changing the conditions of the machine.
  • When choosing between designs of seemingly equal merit, give benefit to the one that is statically determinate (or easy to analyse accurately), this reduces the risk by allowing a more informed design. If both are determinate and easy enough to analyse accurately, choose the one that is “complicated*” and “could not have been done 5 years ago”. I disagree with the latter – the assumption that all neat design solutions have already been found is certainly not true in structural engineering.
    • * Complicated is justified later – something with unneeded complications is bad, something with appropriate complexity tends to be good. Glegg is still advocating the ‘divide and tidy up’ approach from The Design of Design, but now pushes towards a fairly direct mapping of functions to components; this is advocated over a many-functions-to-few-components approach largely to increase the effectiveness of any one component – I agree with this. However, in the context of another constraint, combining functions might well be worthwhile despite the impact on individual components’ efficiency – for example, reducing the weight of a racing car by using the block of the engine as part of the structure of the car.
  • Differentiating between possible impossibilities and intrinsic impossibilities – use basic principles, not only from physics and engineering, but also from accounting, psychology etc.
  • When faced with an impossibility do not try to negotiate a compromise – the right solution will not lie there, instead, try to make some of the impossibility dissolve through a clever solution, or one that moves a part of the impossibility elsewhere where it can be easily managed.
  • Value simplicity only for its more direct virtues, such as reliability and cheapness. There is no intrinsic value in simplicity itself. Furthermore, an oversimplified design can be vulnerable to small variations between the model and the real world which a more complicated design is able to accommodate.
  • Work with the material and its effects wherever you can – use expansion to create beneficial internal stresses, change the material (e.g. heating or cooling it) so it works for you rather than fighting it


Now with complimentary bookmark – good choice Keith Blacks. Who knew the Spice Girls were still touring together in 2008?

Arduino progress

I am slowly reaching the point where I can use the technical information about the Arduinos (and clones) to progress beyond ‘mashups’ of previous and example code. This makes progress far faster and more satisfying – my confidence that any particular decision will lead to the desired effect has increased dramatically. Getting acquainted with the sources of information for the Arduinos has also made porting functions from one board to another much faster – I can predict whether the smaller version (Uno to Nano to Gemma) will function satisfactorily: does it have enough pulse width modulation pins to run a servo (or must the load go on the processor via a soft servo), and can it receive I2C signals?


Pinout diagrams:

For the Arduino Uno: http://pighixxx.com/unov3pdf.pdf

Arduino Nano: http://pighixxx.com/nanopdf.pdf

Adafruit Gemma: https://learn.adafruit.com/introducing-gemma/pinouts


I2C attachment:

No need to specify which pins I2C attaches to – there is only one possible pin for SDA (which carries the data stream) and one for SCL (which carries the clock ‘beat’, so the processor knows at what rate to listen). These pins are fixed by the controller, and it is not surprising that there is only one possible pin for each, since I2C can support many devices off one pair of pins.

For my purposes for now, interpreting I2C is started by including the appropriate library and attaching a device to I2C (usually any number will do).


Servo attachment and more general attachment:

Most servos are driven from a high voltage pin, a ground pin and a pulse width modulation signal – the position of the servo is determined by the duty cycle. Attaching a servo on digital pin “D5” of the Arduino Nano requires

myServo.attach(5) //the number on the board

and not

myServo.attach(11) //the physical pin number

and also not

myServo.attach(1) //the first PWM pin

For attachment to the analogue pins, attach to

myServo.attach(Ay) //where y is the number on the board of the analogue pin (e.g. A3)


Use of the serial:

Use the serial port wherever possible, and move to a board that does not have one (i.e. the Adafruit Gemma) at the last possible moment. Checking for errors without the serial port – especially errors you have not investigated before and do not yet know the rough causes of – is much trickier.

Current simplified code to allow movement of servo based upon laser distance measurement:

#include <Wire.h>
#include <SparkFun_VL6180X.h>
#include <Servo.h>
#include <PID_v1.h> //not used in this simplified version

#define VL6180X_ADDRESS 0x29
//0x29 is the sensor's I2C address, not a pin on the board - the board's
//single SDA/SCL pair carries the bus, so no pin position needs defining
#define SERVO1PIN 5

int x;
Servo myServo1;
VL6180xIdentification identification;
VL6180x sensor(VL6180X_ADDRESS);

void setup() {
  x = 0;
  Wire.begin();
  sensor.VL6180xInit();
  sensor.VL6180xDefautSettings(); //the (misspelt) default setup from the SparkFun library
  myServo1.attach(SERVO1PIN);
}

void loop() {
  x = sensor.getDistance();
  x = constrain(x, 17, 50);
  x = map(x, 17, 50, 0, 180);
  myServo1.write(x);
}

Adafruit Gemma in action – note it is a little more sluggish due to the constant servo refresh required by the soft servo library:

Arduino Nano in action

Thoughts on: Freakonomics Self-improvement Month – Peak, Grit, the Mundanity of Excellence, Tim Ferriss

I’m not really one for self-improvement; glossy books and lofty goals you never quite stick to have always seemed tacky. Freakonomics has taken a bit of a step down by hosting a self-improvement series, inviting in people with a product to sell and allowing them to promote their wares at will. However, the guests’ involvement has worked really well: I have bought two of the books that were promoted, read them (twice – a self-improvement tactic of my own) and am now sketching out my opinion of them (another revolting self-improvement tactic). Perhaps I really am one for naff self-improvement?


The Freakonomics self-improvement month focusses on productivity, by far the most popular topic amongst voting listeners. The guest in the introductory show, Charles Duhigg, suspects this popularity is because “our experience matches so poorly with our expectation”. My personal frustration at the mismatch between possibility and reality does not really stem from a lack of productivity – I am reasonably good at producing things when I need to – but from my inability to ingrain and sustain good ideas and create good habits over time; perhaps I should read Duhigg’s books on habit. My retention of knowledge and skill is poor. In all, this limits my ability to benefit from learning new things, as I simply can’t do them a few weeks later and have to relearn. Reading books twice and writing up my thoughts on them are attempts to remedy this and, hopefully, I will remember and act on the things I have liked from these books the better for doing so. This will be a small piece of self-improvement in itself. As a hopeful start, I can remember the names of the authors and their basic concepts with confidence (pretty rare for me after a single read of a book), so perhaps the additional effort of a double read and write-up has been worth it.


The books and podcasts could be boiled down into 100 words for each contributor, so let’s do that:


Peak by Anders Ericsson, Freakonomics episode

Peak, Secrets from the New Science of Expertise, primarily exists to sell the idea of ‘deliberate practice’. This is practice, usually alone or with an instructor, with clearly defined, small goals, a known effective training regime, maximal concentration, deliberate change, a performance demand slightly beyond your current ability and fast, accurate feedback. ‘Focus, feedback, fix-it’. Deliberate practice aims for skills rather than knowledge. With plenty of deliberate practice, you can become an expert in almost any area, your initial aptitude, as long as it meets a fairly low baseline, is relatively unimportant.


The mundanity of excellence by Daniel Chambliss

Coming from years of observing swimmers, Daniel Chambliss summarises why some are faster than others. Qualitative differentiation is seen between levels, quantitative differentiation within levels; these differences improve race times. The qualitative factors come from a great training culture that strives for continuous improvement. Most importantly, outstanding performance is the combination of many technical factors, each isolated, honed and reintroduced in practice, that come together consistently. Excellence is hard to understand when seen as a complete product, but there is no magic. ‘Practice not until you can get it right, but until you cannot get it wrong’.


Grit by Angela Duckworth, Freakonomics episode

Angela Duckworth is a pretty serious overachiever, and has a theory as to why some people reach great levels of performance: they practice intensively and effectively (see Anders Ericsson) and they keep on practicing over long periods of time. They show ‘stick-to-it-iveness’, ‘follow-through’ – or grit. They are not distracted by other potential goals, always stepping towards their main goal. Talent × effort = skill; skill × effort = achievement.


Tim Ferriss, Freakonomics episode

He has no evidence for what he says – why did Freakonomics give him such an easy time? There is no grit or excellence here, just a good salesman.


Will anything I do be different as a result of reading these books? Probably not in the long term – I will almost certainly forget their ideas over time, unless they keep coming up. This is probably the way forward: if the ideas are good and stick around in the marketplace in some form, I will assimilate them; if they don’t, they will be forgotten. Excitingly, Freakonomics is looking for volunteers to engage in deliberate practice to improve some skill over the course of a year or so, so perhaps this will remain on my radar for a while yet – or be killed off as those volunteers fail in their ambitions.