Thoughts on: The Design of Design, Gordon Glegg

ISBN 0 521 07447 9

The Design of Design is the genuine advice of someone who has seen (and made, analysed and corrected) his fair share of errors. Some of the advice I know and accept – for example, that creative thought is best nurtured through alternating periods of extreme focus and relaxation, with inspiration usually striking whilst you relax. Some advice runs against what I have experienced and been taught – for example, I generally feel that concentrating a design’s complexity in one place is the safest path, whereas Glegg advocates a ‘complicate to simplify’ approach that spreads his complexity more evenly. By taking this approach he seems at risk of unexpected interactions between seemingly unrelated items ruining several of his solutions; however, he insures himself with full-scale testing.

The most useful advice for me concerns his sense of ‘style’ in a design solution and ways of describing it – these form useful general guidelines that, once you start looking, crop up everywhere. They include a sense of working with, rather than fighting against, your forces and materials – do your designs have positive geometric stiffness (they get stiffer as they deform, because the geometry becomes more favorable)? Do you use the properties of your materials, such as their plasticity, to provide safety mechanisms, or rely on complex later additions to make them safe? Perhaps the most important rule of style is that the number of parameters needed to define the solution and how it behaves should be as small as possible – the complexity of the solution should be ruthlessly pruned in its final stages. The same idea appears when learning about 3D modelling – ‘fairness’, usually a marine term, is taken and used as general advice.

The meaning of “fair” is much debated in the marine industry. No one can define it, but they know when they see it. Although fairing a surface is traditionally associated with hull surfaces, all visible surfaces on any object can benefit from this process. In Rhino, the first cue for fairness in a surface is the spacing of the surface display isocurves. There are other characteristics of fair curves and surfaces. Although a curve or surface may be fair without exhibiting all of the characteristics, they tend to have these characteristics. If you keep these in mind while modeling, you will end up with a better final product.
The guidelines for creating a fair surface include:
● Use the fewest possible control points to get the curve shape.
● Use the fewest possible curves to get the surface shape.

Rhino 5 User Guide

Glegg looks at solving engineering design problems in two main areas: the design of the problem and the design of the designer, the latter subdivided into the inventive, artistic and rational components of thinking.

When designing the problem, Glegg recommends:

  1. Check for intrinsic impossibilities – are the forces so large as to crush any material that could fit in the load path, are you destroying or creating energy in order for your solution to work?
  2. Beware of myths and unchecked assumptions – the stories you hear are often wrong or outdated
  3. Define problems in figures and quantified levels; avoid definitions made only in words

The design of the designer:

Three key areas of thinking for the designer: the inventive, the artistic, the rational. A lack of any leads to a poor design, but strength in one is often enough:

The inventive:

  1. Concentration and relaxation around your work
  2. Scepticism of tradition and folklore
  3. Complicate to simplify – usually a small change to one small part can bring rewards
  4. Make the most of your material and processes, rather than fighting their properties – they will not give in without a fight
  5. Divide and tidy up your solution – get to the end with a solution, however ugly, that works, then refine

The artistic:

  1. Aim at continuity of energy – I extend this to include the continuity of forces through structures, are your forces changing directions? Are forces that tend to occur simultaneously taking short paths to cancel one another when they can?
  2. Avoid over-designing, take an idea to extremes and pull back into the sweet spot where your innovation is well used
  3. Choose a rational basis for what you want to achieve – know success and completion when you see it
  4. Find the appropriate medium – if you are fighting your processes and materials to find a solution, then you are likely to have made a mistake very early in the design process
  5. Avoid perpetuating arts and crafts – an inappropriate, though well optimised, process might be easy to procure, but it should be carefully considered before it is chosen; there might be good economic reasons for a particular solution.

The rational:

  1. Think logically – find the right analytical tool for any given task, back up your more complicated analysis with simpler approaches
  2. Design for a predictable life – something that demands regular replacement will be maintained, and other faults are more likely to be found. Designing for an indefinite life increases the risk of sudden and catastrophic failure. Design modes of failure that are predictable and easy to fix
  3. Watch for disguised assumptions – it is easy to come up with analytical stories to convince yourself of a position, and for someone else to tell a different story and reach a very different solution. Solving for strength is easy – find any load path and the lower bound theorem works hard to fill in the gaps. Solving for stiffness is hard – you need to find the true solution to make sure you don’t accidentally break something else on the way to your strength solution…
  4. Safety is found in absorbing energy, not transferring it
  5. Overpaying is generally cheaper than failure

Well worth a read for the young engineer starting out in practice. It only takes a couple of hours to read and has some nice examples to think about.


Thoughts on: The Signal and the Noise, the art and science of prediction by Nate Silver

Nate Silver has recently lost his sheen, having been a consistent Trump doubter throughout the Republican primaries – his piece ‘Donald Trump’s Six Stages of Doom’ felt like a ray of hope last year, predicting that Trump lacked the breadth of support to continue his rise as the Republican field narrowed. FiveThirtyEight remained highly sceptical of Trump’s ability to win the nomination until far too late – until the nomination had effectively been secured. In a return to form, Nate Silver has looked at his failed predictions more closely in ‘How I acted like a pundit and screwed up on Donald Trump’ – maybe next time he will be a little less wrong.

The Signal and the Noise has been a strange book to read twice – the first read was very easy, the second dragged. Large passages of the book that flow well on a first reading feel frustratingly shallow when read again. However, much of the book is excellent and shows a deep knowledge not only of statistics and the modelling of systems but also of the nuances of his subjects, in particular baseball, poker and American politics. I would give the chapters on climate change, flu and terrorism a miss.

The real strength of the book is its accessible introduction to some of the language around forecasting and examples of its use. Some of these concepts include:

Forecast: a probabilistic statement, usually over a longer time scale, e.g. there is a 60 percent chance of an earthquake in Southern California over the next thirty years. There are many things that are very hard to predict but fairly easy to forecast once longer term data is available. An upgrade to a forecast is a ‘time-dependent forecast’ – where the probability of an event occurring varies over time according to some variables, for example the forecast for a thunderstorm in London over the next week shows a higher probability when it is hot and humid.

Prediction: a definitive and specific statement about when and where something will happen, e.g. there will be a riot next Friday afternoon in Lullingstone – a good prediction is testable. Hypotheses live by the quality of their predictions.

The signal: the signal indicates the true underlying relationship between variables, in its ultimate form it is Laplace’s demon, where given the position and velocity of every particle in existence and enough calculation power, past, present and future are all the same to you.

The noise: the data that obscures the signal; it has a wide variety of sources

Overfitting: finding a model (often extremely well calibrated against the past) that fits the noise rather than the signal (the underlying relationship). An overfitted model ‘predicts’ the past well but the future extremely poorly – it is taking the wrong things into account and failing to find the parts of the model that matter. Superstitions are often examples of overfitting – finding false patterns in the shadows. The most famous example is probably the Super Bowl stock market indicator. With the growth of available data and computing power it is increasingly easy to create a falsely convincing overfitted model: you can search millions of potential correlations, many of which will appear to be significantly correlated purely by chance. To avoid overfitting, theory can help, showing which variables might be the most useful in any given model. I am fairly certain I have been guilty of overfitting in various pieces of work in the past.
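
As a toy illustration of how spurious ‘indicators’ appear when enough candidates are tried (my sketch, not one of Silver’s – the numbers are invented), the snippet below generates a random up/down market history and ten thousand equally random predictors, then counts how many of them happen to ‘call’ the market in at least 24 of 30 years – agreement that carries no predictive value at all:

// Sketch: spurious correlations appear by chance when many candidates are tried.
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);

    const int years = 30;          // length of the "market" history
    const int candidates = 10000;  // number of random indicators tried

    // Random market: up (true) or down (false) each year.
    std::vector<bool> market(years);
    for (int y = 0; y < years; ++y) market[y] = coin(rng);

    int suspicious = 0;
    for (int c = 0; c < candidates; ++c) {
        int agree = 0;
        for (int y = 0; y < years; ++y) {
            bool indicator = coin(rng);  // the indicator is pure noise
            if (indicator == market[y]) ++agree;
        }
        // 24 out of 30 looks impressive, but a handful of indicators will
        // reach it purely by chance when this many are tried.
        if (agree >= 24) ++suspicious;
    }
    std::cout << suspicious << " of " << candidates
              << " random indicators 'called' the market in at least 24 of "
              << years << " years\n";
}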

Calibration of a model: a well calibrated model gives accurate probabilities of events occurring across the range it predicts – events it predicts with X% likelihood of occurring actually occur X% of the time over the long term.
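
A calibration check can be sketched in a few lines: bucket the forecasts by their stated probability and compare each bucket’s average forecast with the fraction of events that actually happened. The forecast/outcome records below are invented purely to show the mechanics:

// Sketch: checking calibration by bucketing forecasts and comparing the
// stated probability with the observed frequency in each bucket.
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // Invented (forecast probability, did the event happen?) pairs.
    std::vector<std::pair<double, bool>> records = {
        {0.1, false}, {0.1, false}, {0.2, true},  {0.3, false},
        {0.4, false}, {0.5, true},  {0.6, true},  {0.7, true},
        {0.8, true},  {0.8, false}, {0.9, true},  {0.9, true},
    };

    const int buckets = 5;  // 0-20%, 20-40%, ...
    std::vector<double> sumForecast(buckets, 0.0);
    std::vector<int> hits(buckets, 0), counts(buckets, 0);

    for (const auto& r : records) {
        int b = std::min(buckets - 1, static_cast<int>(r.first * buckets));
        sumForecast[b] += r.first;
        hits[b] += r.second ? 1 : 0;
        counts[b] += 1;
    }

    // A well calibrated forecaster has the mean forecast close to the
    // observed frequency in every bucket (given enough data).
    for (int b = 0; b < buckets; ++b) {
        if (counts[b] == 0) continue;
        std::printf("bucket %d: mean forecast %.2f, observed frequency %.2f (n=%d)\n",
                    b, sumForecast[b] / counts[b],
                    static_cast<double>(hits[b]) / counts[b], counts[b]);
    }
}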

Wet bias: when the calibration of a model is deliberately poor, for example to provide more positive surprises than negative surprises, so the forecast is more pessimistic than would be accurate. This is taken from weather forecasting, where rain is given a higher than accurate probability as the public notice a false negative (i.e. low probability of rain when there actually is rain) more than a false positive.

Discrimination of a model: a model is capable of discriminating between different events to give different probabilities – a model that does not discriminate can still be very well calibrated, but is not useful. Saying 1/6th of rolls of a fair die will come out as a 6 is well calibrated, but not very useful if you need an advantage over someone in a game.

Goodhart’s law: the predictive value of an economic variable decreases when it is measured and targeted, in particular by government policy

Bayesian prior, posterior probability and Bayes’ theorem: a simple identity that helps to update the probability of something being true given a prior belief that it is true, the probability of observing the evidence seen given that the belief is true, and the overall probability of observing that evidence. The outcome (the posterior probability) can then be used as the prior for the next piece of data, to update the probability of something being true again.

Derivation of Bayes theorem:

P(A|B)P(B) = P(B|A)P(A)

P(Rain|Humid)P(Humid) = P(Humid|Rain)P(Rain)

Then divide by P(B), or P(Humid). This gives Bayes’ theorem, and allows us to use it as a tool for updating our ‘degree of belief’ when provided with new evidence:

P(Rain|Humid) = P(Rain) * (P(Humid|Rain)/P(Humid))

Posterior = Prior * Bayes factor

The same can be applied to continuous distributions.
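
As a worked example with invented numbers: say the prior probability of rain on any given day is 20%, humidity is observed on 70% of rainy days, and humidity is observed on 30% of all days. The snippet below applies the update above and prints the posterior, which would then become the prior when the next piece of evidence arrives:

// Sketch: a single discrete Bayesian update, using invented numbers.
#include <iostream>

// P(Rain | Humid) = P(Rain) * P(Humid | Rain) / P(Humid)
double bayesUpdate(double prior, double pEvidenceGivenTrue, double pEvidence) {
    return prior * (pEvidenceGivenTrue / pEvidence);
}

int main() {
    double pRain = 0.2;            // prior belief: 20% chance of rain
    double pHumidGivenRain = 0.7;  // humidity observed on 70% of rainy days
    double pHumid = 0.3;           // humidity observed on 30% of all days

    double posterior = bayesUpdate(pRain, pHumidGivenRain, pHumid);
    std::cout << "P(Rain | Humid) = " << posterior << "\n";  // about 0.47

    // The posterior becomes the prior for the next observation.
}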

 

Frequentism: the interpretation of probability as the limit of an event’s relative frequency as the number of opportunities for it to occur tends to infinity

The role of assumptions in models: often the assumptions made in a model dominate its behavior; to test an idea, try breaking the assumptions and check for changes in the output. To get an idea of the sensitivity of your model, make small changes to the inputs, run it many times and plot the range of outputs as a probability distribution.
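
A minimal Monte Carlo sensitivity sketch of that last suggestion, using a toy model of my own choosing (a simply supported beam deflection formula) with a few percent of random noise on each input; the spread of the outputs gives a feel for how much the assumptions matter:

// Sketch: Monte Carlo sensitivity check on a toy model.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Toy model: midspan deflection of a simply supported beam,
// delta = 5 w L^4 / (384 E I). Any model with numeric inputs would do.
double model(double w, double L, double E, double I) {
    return 5.0 * w * std::pow(L, 4) / (384.0 * E * I);
}

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> noise(1.0, 0.05);  // roughly +/-5% on each input

    std::vector<double> outputs;
    for (int run = 0; run < 10000; ++run) {
        double w = 5e3 * noise(rng);    // load per metre (N/m)
        double L = 8.0 * noise(rng);    // span (m)
        double E = 200e9 * noise(rng);  // Young's modulus (Pa)
        double I = 8e-5 * noise(rng);   // second moment of area (m^4)
        outputs.push_back(model(w, L, E, I));
    }

    // Summarise the spread of outputs rather than quoting a single number.
    double mean = 0.0;
    for (double d : outputs) mean += d;
    mean /= outputs.size();
    double var = 0.0;
    for (double d : outputs) var += (d - mean) * (d - mean);
    double sd = std::sqrt(var / outputs.size());
    std::cout << "mean deflection " << mean * 1e3 << " mm, "
              << "standard deviation " << sd * 1e3 << " mm\n";
}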

Accuracy and precision: accuracy is when the predictions fall, on average, on the correct values; a precise prediction has a small range of expected values

Fitting the noise, correlation without causation: similar to overfitting

Nearest neighbor analysis: finding the results of similar individuals from the past, and using these to build a probabilistic view of the potential of the individual in question. Of those who have been here in the past, what actually happened to them?
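
A toy nearest-neighbour sketch (the player records are invented): given a few past individuals with a known outcome, look up the k most similar ones and report what happened to them:

// Sketch: nearest-neighbour lookup over a tiny, invented data set.
#include <algorithm>
#include <iostream>
#include <vector>

struct Player {
    double age;
    double battingAverage;
    double nextSeasonRuns;  // the outcome we are interested in
};

int main() {
    // Invented historical records.
    std::vector<Player> history = {
        {23, 0.280, 75}, {31, 0.250, 40}, {27, 0.300, 90},
        {24, 0.270, 70}, {35, 0.240, 30}, {26, 0.290, 85},
    };

    Player query{25, 0.285, 0};  // the individual we want a forecast for
    const int k = 3;

    // Crudely scaled squared distance from the query.
    auto dist = [&](const Player& p) {
        double dAge = (p.age - query.age) / 10.0;
        double dAvg = (p.battingAverage - query.battingAverage) / 0.05;
        return dAge * dAge + dAvg * dAvg;
    };

    // Sort history by similarity to the query.
    std::sort(history.begin(), history.end(),
              [&](const Player& a, const Player& b) { return dist(a) < dist(b); });

    // Report the outcomes of the k most similar past individuals.
    double total = 0.0;
    for (int i = 0; i < k; ++i) {
        std::cout << "neighbour " << i + 1 << ": " << history[i].nextSeasonRuns
                  << " runs next season\n";
        total += history[i].nextSeasonRuns;
    }
    std::cout << "average over " << k << " neighbours: " << total / k << "\n";
}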

What makes a forecast good: accuracy, honesty, economic value: accuracy is about whether the forecast is correct over the long term (is accuracy the same as calibration?), honesty is whether the model is the best the analyst could produce given the information available to them, economic value is whether the model allows those who use it to make better decisions, so takes their biases into account

What makes a forecast of use: persistence, climatology: persistence is a model that assumes a result will be the same as previously (i.e. the temperature today will be the same as yesterday), climatology (for a weather forecast) takes a slightly longer view, looking at the average behavior previously with only the most basic inputs (for example the day of the year for weather).

Results oriented thinking, process oriented thinking: acknowledging that your results may not be an indication of the quality of your predictions over the short term; it does, however, require confidence that you are on the right track

Theory, correlations and causation: where many variables are available, theory is needed to help separate the relationships with true predictive value from those that are mere correlation and will not be of long term value.

Initial condition uncertainty, structural uncertainty and scenario uncertainty: when making a prediction about events, a lack of information about the current state may dominate very short term predictions – this is initial condition uncertainty; scenario uncertainty dominates over the longer term, as the behavior of the system may drift away from our current model; and at any time scale there will be structural uncertainty – uncertainty about our current understanding of the system

Log-log charts: make it much easier to observe relationships between variables that follow a power law, for example those that obey Zipf’s law. Log-log charts can reveal otherwise hard to see relationships between frequency of occurrence and rank – and sometimes reveal where the limits of the data might be; if the size of events and their frequency follow a straight line on the plot, then it is not unimaginable that something an order of magnitude (or two) larger might occur in the future.
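
A small sketch of the idea, with invented word counts: take logs of rank and frequency and fit a straight line. A slope near -1 is the classic Zipf pattern, and the fitted line can be read out well beyond the data – which is exactly the kind of extrapolation described above:

// Sketch: fitting a straight line to log(rank) vs log(frequency).
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Invented frequencies, already sorted from most to least common.
    std::vector<double> freq = {1000, 480, 350, 240, 190, 170, 140, 130, 115, 100};

    // Least-squares fit of log(freq) = intercept + slope * log(rank).
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    int n = static_cast<int>(freq.size());
    for (int i = 0; i < n; ++i) {
        double x = std::log(i + 1.0);  // log rank
        double y = std::log(freq[i]);  // log frequency
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double intercept = (sy - slope * sx) / n;

    std::cout << "fitted slope " << slope
              << " (Zipf's law corresponds to a slope of about -1)\n";
    std::cout << "implied frequency at rank 100: "
              << std::exp(intercept + slope * std::log(100.0)) << "\n";
}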


Adaptive thing desk model

With its profits UST funds several EngD students, and a byproduct of one EngD (by Gennaro Senatore) has been the adaptive truss. It is quite a sophisticated bit of kit, covered in sensors and actuators and with some proper programming behind it, which allow it to detect its movements and compensate by changing shape to produce a flat surface under a variety of loads. Without the help of its actuators, it moves quite a bit:

Aside from being an interesting machine and a demonstration of Expedition’s ability to create things, the adaptive truss has a more fundamental lesson to teach – that when designing against permanent and variable loads you need not always combine them and then let the design be dominated by stiffness.

An adaptive structure will always need to be strong enough to sustain the loads that will be placed on it, but it doesn’t need to be so stiff as to deflect less than some strict limit under every load case, including ones that are quite rare. For the most extreme loads an adaptive structure can change shape, compensating for large movements. A structure that adapts like this will need less material but adds complexity and power usage in compensation. Like so many things, the initial gains from small amounts of adaptability are large and the costs small – for example the retensioning of cables in a pre-stressed slab (one-time adaptation) as additional floors are added, or mechanically operated sun-screens on the sides of buildings (daily adaptation). The marginal benefit of added adaptability quickly decreases and becomes negative, with the most extreme cases relying on adaptive mechanisms to prevent excessive vibration – like a pair of noise cancelling headphones. This results in systems where the overall energy use and cost are greater than with no adaptability at all – the lower weight of material is more than offset by the energy consumed in use.

The adaptive truss has now done the rounds –  ‘almost everyone we really want to see it has seen it’. It is looking for a more permanent home, and Expedition is looking for a way to convey its message to visitors to the office in a more manageable form.
The desk model needs to be conducive to play and experimentation by visitors – and to demonstrate its ideas to sometimes unconvinced audiences. It also needs to show something beyond what has been shown with the larger adaptive truss, chiefly that adaptive structures could be beneficial in many areas – it could be a bike suspension system, a facade support system, or anything between and beyond.

The first model has been made as simple as possible: it has a small, simple structure, cheap and basic sensors and actuators, and the minimum of complexity in its programming. Hopefully, the process of developing it will show potential pitfalls in producing more sophisticated models and give others an idea of what can be quickly and cheaply achieved.

The basic design involves a very soft, movable truss (powered by a small stepper motor), an ultrasonic distance sensor and an Arduino Uno to co-ordinate the two – it also has a small dot matrix display, mostly to help diagnose problems.

The truss is a three-pin arch: two compression members with an offset cable placed below. The cable changes length as a stepper motor winds it in, allowing the bridge to move up and down. An elastic band built into the cable makes the entire structure very soft, with large changes in position resulting from roughly the weight of a mobile phone being applied and taken away.

The basic structure, loaded by blackberry
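
As a rough back-of-the-envelope sketch of why it is so soft (my simplification – it ignores the cable offset and the elastic band, and the dimensions are invented): treat the two compression members as rigid bars of equal length hinged at the crown, with the cable acting as a tie between their feet. Winding the cable in shortens the tie and lifts the crown, and near the flat position a small change in cable length produces a large change in height:

// Sketch: simplified three-pin arch geometry - crown height as the tie shortens.
// Ignores the cable offset and the elastic band in the real model.
#include <cmath>
#include <cstdio>

// Two rigid members of length memberLength metres meet at the crown;
// the tie (cable) of length tieLength metres joins their feet.
double crownHeight(double memberLength, double tieLength) {
    double halfSpan = tieLength / 2.0;
    return std::sqrt(memberLength * memberLength - halfSpan * halfSpan);
}

int main() {
    const double memberLength = 0.30;  // 300 mm members (invented dimensions)
    for (int i = 0; i <= 4; ++i) {
        double tie = 0.58 - 0.02 * i;  // wind the cable in 20 mm at a time
        std::printf("tie %.2f m -> crown height %.0f mm\n",
                    tie, crownHeight(memberLength, tie) * 1000.0);
    }
}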

The Arduino Uno makes the electrical aspect of the circuit extremely easy, with each component individually wired to input and output pins – it is really a combination of around three circuits in parallel, connected by a program. There weren’t any complications due to interactions between the circuits, so I could reuse existing circuits. The program controlling the truss was kept as simple as possible: it checks the distance to the bridge from the sensor every second; if the bridge is too far away it lowers the bridge, and vice-versa.

Creating the program was also relatively straightforward, especially at such a low level of sophistication. Each second the ultrasonic sensor is pinged, it returns the duration for the ping to bounce off the bridge and return, and the Arduino instructs the motor to wind appropriately for a period of slightly under a second.

#include <LedControlMS.h>
#include <NewPing.h>
#include <Stepper.h>

//create the various components

//locate these on the arduino board
LedControl lc = LedControl(12, 11, 10, 64);  //dot matrix display driver
Stepper myStepper(200, 4, 6, 5, 7);          //200-step motor
#define trigPin 2                            //ultrasonic trigger pin
#define echoPin 3                            //ultrasonic echo pin

String digits = "0123456789";

int duration = 1;
int distance = 1;
int number = 1;
int oldnumber = 5;

//ping the ultrasonic sensor and convert the echo time into a number
//from 0 to 9 describing the bridge position (5 = on target)
int ultrasonicnumber() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  Serial.print(duration);
  int height = 1080;     //target echo time - larger numbers are higher
  int sensitivity = 20;  //smaller numbers are more sensitive
  number = constrain(((duration - height) / sensitivity) + 5, 0, 9);
  return number;
}

//show the position number on the led dot matrix
void displaynumberondot(int number) {
  lc.displayChar(0, lc.getCharArrayPosition(digits[number]));
}

//wind the cable in or out depending on which side of the target (5)
//the bridge is sitting
void shiftstepper(int number) {
  if (number > 5) {
    myStepper.setSpeed(25);
    myStepper.step(200);
  }
  if (number < 5) {
    myStepper.setSpeed(25);
    myStepper.step(-200);
  }
  delay(1000);
}

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);  //set the ultrasonic pins once, at start-up
  pinMode(echoPin, INPUT);
  for (int i = 0; i < 2; i++) {
    lc.shutdown(i, false);
    lc.setIntensity(i, 8);
    lc.clearDisplay(i);
  }
}

void loop() {
  //returns number that describes bridge position
  number = ultrasonicnumber();

  //display this number on the led dot matrix
  displaynumberondot(number);

  //move the bridge up or down towards the target position
  shiftstepper(number);
}

Complications arising from more sophisticated control mechanisms – for example some type of proportional control, which would require interrupts – have been avoided for now.
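
For reference, a minimal sketch of what a proportional version might look like – this is a guess at a next iteration rather than part of the current model, and it is still polled from loop() rather than interrupt-driven. It is a drop-in replacement for shiftstepper() in the sketch above (it uses the same myStepper object), with the number of steps simply scaled by how far the reading is from the target; the stepsPerUnit constant is assumed and would need tuning:

//Sketch: proportional control of the stepper (hypothetical next iteration).
//Drop-in replacement for shiftstepper() - relies on the myStepper object
//declared in the sketch above.
void shiftstepperproportional(int number) {
  const int target = 5;         //the 'bridge on level' reading
  const int stepsPerUnit = 50;  //tuning constant - assumed, needs testing
  int error = number - target;

  if (error != 0) {
    myStepper.setSpeed(25);
    myStepper.step(error * stepsPerUnit);  //signed: winds the cable in or out
  }
  delay(200);  //shorter pause than the on/off version
}
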
Initially the ultrasonic sensor was a great success – it seemed to detect objects quickly and reliably:

However, the ultrasonic sensor had much greater difficulty detecting the underside of the bridge – perhaps it was not large enough, or the wooden structure below the main deck was confusing to the chip packaged with the ultrasonic sensor. It was also hard to diagnose issues as it was not at all obvious what the sensor was detecting when it gave bad readings. For the next iteration a laser distance finder seems the most appropriate tool, though they are quite expensive.
Most other issues were down to my own lack of competence, but they were eventually solved.
A small custom set of characters was made for the dot matrix display: the digits, each with one half of an arrow next to them – the intention being to reflect the arrow in a small mirror. This was achieved with small alterations to an existing arduino library.
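
To give an idea of the sort of change involved (the actual library edit isn’t reproduced here), a custom 8×8 glyph is just eight row bytes. The sketch below draws a digit with half an arrow beside it directly with LedControl’s setRow – the bit pattern is one I have made up for illustration, not the pattern used in the model:

//Sketch: drawing a custom 8x8 glyph directly with setRow().
//The bit pattern is made up for illustration.
#include <LedControlMS.h>

LedControl lc = LedControl(12, 11, 10, 1);  //one 8x8 matrix for this demo

//one byte per row; each bit is one led in that row
const byte digitWithHalfArrow[8] = {
  B11110001,
  B10000011,
  B10000111,
  B11110001,
  B00010001,
  B00010001,
  B11110001,
  B00000000,
};

void setup() {
  lc.shutdown(0, false);
  lc.setIntensity(0, 8);
  lc.clearDisplay(0);
  for (int row = 0; row < 8; row++) {
    lc.setRow(0, row, digitWithHalfArrow[row]);
  }
}

void loop() {}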

All in all the truss is quite striking, and the design choices to keep all complexity off the main span of the bridge seem to have been effective. In reality, it is probably advantageous to keep the equipment used to produce the adaptiveness away from the permanent structure anyway, for ease of maintenance.
The truss is also a good proof of concept that a small adaptive truss model can be made quickly and cheaply, paving the way for more sophisticated models in later versions. Improvements needed to reach a ‘minimum viable product’ are probably: a smooth, reasonably fast control system, much greater reliability and robustness, greater [any] attention to structural detail and manufacture, and a way for the model to explain itself quickly and clearly. Suggestions include 3D printed components, hydraulic linear actuators, vertically aligned trusses, augmented reality goggles (ambitious) and proper control systems with more sophisticated visual feedback – we shall see.

Oh, and here it is in action:

Great Stellated Dodecahedron decorations

I tried making a great dodecahedron as a kid – though no trace of it remains, sadly. The book I copied the net from, Mathematical Models by Cundy and Rollett, still lives amongst the cookbooks and its shapes and nets remain tempting.

UST [employer] allowed me to be in charge of this year’s Christmas decorations [why?], so I decided to outdo my 11-year-old self, go one stellation further, and produce (with the help of some lunch-goers, motivated only by free sandwiches and a love of polyhedra) a series of great stellated dodecahedrons to go on the tree. Unlike my childhood self, I didn’t try to scale up the tiny drawings of nets from the book; I just used one of the free nets online which, being a vector image, scaled up perfectly.

Don’t get this reference? Don’t worry.

Polyhedra seem to be at the edge of what is more simply defined by a process than by its geometry. In the case of the great stellated dodecahedron:

  • take regular pentagons and arrange them so that each edge is shared between two pentagons and three pentagons meet at every corner – you make a dodecahedron
  • extend each face until it meets the extensions of other faces (this is a stellation – in 2D a pentagram is a stellation of a pentagon: you extend each side until it meets another side’s extension) – you should be imagining a spiky shape with five-sided spikes on it here
  • stellate two more times (to the great dodecahedron then to the great stellated dodecahedron)

This ‘process rather than geometry’ paradigm seems to be increasingly important, with parametric design, and a wide range of ways to generate designs and to search and order design spaces, becoming popular. This allows humans to spend more time thinking about the aims of a design and its basic parameters, rather than its precise geometry. Thinking about an object like the great stellated dodecahedron in terms of a process is an entry point to this, and there is a whole family of increasingly spiky shapes that result from increasing the number of stellations beyond the three used here. Other families occur when you stellate other shapes such as the icosahedron.
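
As a toy example of defining a shape by a process, the sketch below builds the 2D analogue mentioned above: it starts from a regular pentagon and extends each edge until it meets the extension of a non-adjacent edge, producing the tips of a pentagram rather than reading their coordinates in from anywhere:

// Sketch: a pentagram defined by a process - extend each edge of a regular
// pentagon until it meets the extension of a non-adjacent edge.
#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// Intersection of the infinite lines through (a1,a2) and (b1,b2).
Pt intersect(Pt a1, Pt a2, Pt b1, Pt b2) {
    double d1x = a2.x - a1.x, d1y = a2.y - a1.y;
    double d2x = b2.x - b1.x, d2y = b2.y - b1.y;
    double denom = d1x * d2y - d1y * d2x;
    double t = ((b1.x - a1.x) * d2y - (b1.y - a1.y) * d2x) / denom;
    return {a1.x + t * d1x, a1.y + t * d1y};
}

int main() {
    const double pi = 3.14159265358979323846;

    // The starting point of the process: five points equally spaced on a circle.
    Pt pentagon[5];
    for (int i = 0; i < 5; ++i) {
        double angle = pi / 2 + 2 * pi * i / 5;
        pentagon[i] = {std::cos(angle), std::sin(angle)};
    }

    // The stellation: the line through edge i meets the line through edge i+2.
    for (int i = 0; i < 5; ++i) {
        Pt tip = intersect(pentagon[i], pentagon[(i + 1) % 5],
                           pentagon[(i + 2) % 5], pentagon[(i + 3) % 5]);
        std::printf("pentagram tip %d: (%.3f, %.3f)\n", i, tip.x, tip.y);
    }
}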

I digress – let’s get back to making the polyhedra. One trial version is constructed from 5 sheets of thick sketching paper:


Trial run – 5 sheets of A4, 2 sheets of A4 uncompleted in background

Another, much less successful one was produced from two pieces of thick paper (its net is in the background). The ratio of card thickness to length was too great, and it came out uncrisp and fiddly.

Smaller polyhedra, paper too thick

Following these trials, a few improvements were made, mainly: making the thickness/size ratio smaller (so large sheets of thin paper), building in prettiness without using acres of virgin material, and finding a better system than ‘glue dots’. Printing the net on the back of the paper is a huge time saver and isn’t visible on the final product – it also makes production less frustrating. Improvements missed were using a laser cutter to get really accurate cutting and scoring, and developing a tab system to remove the need for glue or tape in construction.


Nets printed on back of old maps, no tessellation yet

For built-in interest and reasonably stiff, high quality paper for the printed nets, I used old maps, printing on the back of them with UST’s A0 plotter – note to self, this is a nightmare; try to get a flatbed next time. By chance the maps I had spare were of South London and Elgin, Scotland. Local maps seem to be really popular – people love seeing places they recognise and have stories to tell about them (holidays, birthplaces, homes, distilleries and others), which makes the making process much more fun.


A grid of spikes, the first half of the build process completed

With the net cut out, pre-folded and the first round of joints taped up, the spikes are created; each spike has a triangular base, and these triangular bases form an icosahedron. Also for future reference, there is a real desire for normal instructions rather than laying down rules – in my case I said that ‘when cutting out the shape, whenever two long lines are coincident, or two short lines are coincident, you do not cut; when a long line and a short line are coincident you do cut’. This did not come across as clearly as I would have liked. However, an instruction like this, once you have seen the final shape (where on the shape do long lines and short lines meet?), gives much more freedom and insight than ‘cut along these lines in this order’ type instructions. There were plenty of improvements to my making techniques during the session.


The final product

The making session.


Output of the workshop

There are a few more around the office which I need to chivvy people to get finished off.

For future Christmas decorations, these seem to work nicely, they are large enough for about 10 to fill a medium sized tree comfortably, and are (I think) inoffensive and just unusual enough to provide interest. However, not many guests seem to play with them or inspect them when they are sat at the sofas – I had thought that they might. On the other hand, the ones made at the lunchtime session have been named, so maybe they will be taken home to be used and enjoyed again? Hopeful.