Arduino progress

I am slowly reaching the point where I can use the technical information about the Arduinos (and clones) to move beyond ‘mashups’ of previous and example code. Progress is now far faster and more satisfying – my confidence that any particular decision will lead to the desired effect has increased dramatically. Getting acquainted with the sources of information for the Arduinos has also made porting functions from one board to another much quicker – I can predict whether the smaller version (Uno to Nano to Gemma) will function satisfactorily: does it have enough pulse width modulation pins to run a servo (or must the load be put on the processor with a soft-servo library), and can it receive I2C signals?

 

Pinout diagrams:

For the Arduino Uno: http://pighixxx.com/unov3pdf.pdf

Arduino Nano: http://pighixxx.com/nanopdf.pdf

Adafruit Gemma: https://learn.adafruit.com/introducing-gemma/pinouts

 

I2C attachment:

No need to specify which pins I2C attaches to – there is only one possible pin for SDA (which carries the data stream) and one for SCL (which carries the clock ‘beat’, so the processor knows at what rate to listen). These pins seem to be fixed by the controller, which is not surprising, as I2C can support many devices off one pair of pins, so only one set is needed.

For my purposes, for now, interpreting I2C starts with including the appropriate library and attaching a device to the I2C pins (usually any number of devices will do)
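
As a rough sketch of this pattern (Wire is the standard Arduino I2C library; the address here is just a placeholder – a real device defines its own):

#include <Wire.h>

#define DEVICE_ADDRESS 0x29 //example address only - check your device's datasheet

void setup() {
  Serial.begin(9600);
  Wire.begin(); //join the I2C bus - SDA and SCL are fixed, so there are no pins to choose
}

void loop() {
  Wire.requestFrom(DEVICE_ADDRESS, 1); //ask the device for a single byte
  if (Wire.available()) {
    Serial.println(Wire.read()); //print whatever came back
  }
  delay(500);
}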

 

Servo attachment and more general attachment:

Most servos are driven from a voltage supply pin, a ground pin and a pulse width modulation signal – the position of the servo is determined by the width of the pulses (the duty cycle). Attaching a servo to digital pin “D5” on the Arduino Nano requires

myServo.attach(5); //the number printed on the board

and not

myServo.attach(11); //the physical pin number

and also not

myServo.attach(1); //the first PWM pin

For attachment to the analogue pins, use

x.attach(Ay)

where y is the number printed next to the analogue pin on the board.
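
A minimal sketch of those attachment rules (the pin choices here are arbitrary examples):

#include <Servo.h>

Servo digitalServo;
Servo analogueServo;

void setup() {
  digitalServo.attach(5);   //the pin marked D5 on the board
  analogueServo.attach(A3); //the pin marked A3 on the board
  digitalServo.write(90);   //send both servos to their mid positions
  analogueServo.write(90);
}

void loop() {
}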

 

Use of the serial:

Use the serial port wherever possible, and move to a board that does not have one (e.g. the Adafruit Gemma) at the latest possible opportunity. Checking for errors without the serial port, especially errors you have not investigated before and do not know the rough causes of, is much trickier.

Current simplified code to allow movement of servo based upon laser distance measurement:

#include <Wire.h>
#include <SparkFun_VL6180X.h>
#include <PID_v1.h> //included but not yet used in this simplified sketch
#include <Servo.h>

#define VL6180X_ADDRESS 0x29 //the sensor's own I2C address, nothing to do with the board
//no need to define where the sensor sits - there is only one SDA and one SCL
//on the board, so it must be attached to those
#define SERVO1PIN 5 //servo signal on digital pin D5

int x; //latest distance reading from the sensor
Servo myServo1;
VL6180xIdentification identification;
VL6180x sensor(VL6180X_ADDRESS);

void setup() {
  Serial.begin(9600);
  x = 0;
  Wire.begin(); //join the I2C bus
  delay(100);
  sensor.getIdentification(&identification); //read the sensor's identification registers
  sensor.VL6180xDefautSettings(); //load the library's recommended settings
  myServo1.attach(SERVO1PIN);
  myServo1.write(90); //start at the mid position
  delay(10);
}

void loop() {
  x = sensor.getDistance(); //distance in mm
  Serial.println(x);
  x = constrain(x, 17, 50); //only respond to the 17-50 mm range
  x = map(x, 17, 50, 0, 180); //convert that range to a servo angle
  myServo1.write(x);
  delay(10);
}

Arduino Gemma in action – note it is a little more sluggish due to the constant servo refresh required by the soft servo library:

Arduino Nano in action

Thoughts on: Freakonomics Self-improvement Month – Peak, Grit, the Mundanity of Excellence, Tim Ferriss

I’m not really one for self-improvement: glossy books and lofty goals you never quite stick to have always seemed tacky. Freakonomics has taken a bit of a step down by hosting a self-improvement series, inviting in people with a product to sell and allowing them to promote their wares at will. However, the guests’ involvement has worked really well: I have bought two of the books that were promoted, read them (twice – a self-improvement tactic of my own) and am now sketching out my opinions on them (another revolting self-improvement tactic). Perhaps I really am one for naff self-improvement?

 

The Freakonomics self-improvement month focusses on productivity, by far the most popular topic amongst voting listeners. The guest in the introductory show, Charles Duhigg, suspects this popularity is because “our experience matches so poorly with our expectation”. My personal frustration at the mismatch between possibility and reality does not really stem from a lack of productivity – I am reasonably good at producing things when I need to – but from my inability to ingrain and sustain good ideas and create good habits over time; perhaps I should read Duhigg’s books on habit. My retention of knowledge and skill is poor. In all, this limits my ability to benefit from learning new things, as I simply can’t do them a few weeks later and have to relearn. Reading books twice and writing my thoughts on them are attempts to remedy this and, hopefully, I will remember and act on the things I have liked from the books the better for doing so. This will be a small piece of self-improvement in itself. As a hopeful start, I remember the names of the authors and the basic concepts with confidence (pretty rare for me after a single read of a book), so perhaps the additional effort of a double read and write-up has been worth it.

 

The books and podcasts could be boiled down into 100 words for each contributor, so let’s do that:

 

Peak by Anders Ericsson, Freakonomics episode

Peak: Secrets from the New Science of Expertise primarily exists to sell the idea of ‘deliberate practice’. This is practice, usually alone or with an instructor, with clearly defined, small goals, a known effective training regime, maximal concentration, deliberate change, a performance demand slightly beyond your current ability and fast, accurate feedback. ‘Focus, feedback, fix it’. Deliberate practice aims for skills rather than knowledge. With plenty of deliberate practice you can become an expert in almost any area; your initial aptitude, as long as it meets a fairly low baseline, is relatively unimportant.

 

The mundanity of excellence by Daniel Chambliss

Coming from years of observing swimmers, Daniel Chambliss summarises why some are faster than others. Qualitative differentiation is seen between levels, quantitative differentiation within levels, and these differences improve race times. The qualitative factors come from a great training culture that strives for continuous improvement. Most importantly, outstanding performance is the combination of many technical factors, each isolated, honed and reintroduced in practice, that come together consistently. Excellence is hard to understand when seen as a complete product, but there is no magic. ‘Practice not until you can get it right, but until you cannot get it wrong’.

 

Grit by Angela Duckworth, Freakonomics episode

Angela Duckworth is a pretty serious overachiever, and has a theory as to why some people reach great levels of performance: they practice intensively and effectively (see Anders Ericsson) and they keep on practicing over long periods of time. They show ‘stick-to-it-iveness’, ‘follow-through’ – or grit. They are not distracted by other potential goals, always stepping towards their main goal. Talent × effort = skill; skill × effort = achievement.

 

Tim Ferriss, Freakonomics episode

Has no evidence for what he says – why do Freakonomics give him an easy time? There is no grit or excellence here, just a good salesman.

 

Will anything I do be different as a result of reading these books? Probably not in the long term: I will almost certainly forget about their ideas over time – unless they keep on coming up. This is probably the way forward – if the ideas are good and stick around in the marketplace in some form, then I will assimilate them; if they don’t, they will be forgotten. Excitingly, Freakonomics are looking for volunteers to engage in deliberate practice to improve some skill over the course of a year or so, so perhaps it will remain on my radar for a while yet, or be killed off as those volunteers fail in their ambitions.

Thoughts on: The Design of Design, Gordon Glegg

ISBN 0 521 07447 9

The Design of Design is the genuine advice of someone who has seen (and made, analysed and corrected) his fair share of errors. Some of the advice I know and accept – for example, that creative thought is best nurtured through alternating periods of extreme focus and relaxation, with inspiration usually striking whilst you relax. Some advice runs against what I have experienced and been taught – for example, I generally feel that extreme concentration of complexity in a design is the safest path, whereas Glegg advocates a ‘complicate to simplify’ approach, spreading his complexity more evenly. By taking this approach he seems at risk of unexpected interactions between seemingly unrelated items ruining several of his solutions; however, he insures himself with full-scale testing.

The most useful advice for me concerns his sense of ‘style’ in a design solution and his ways of describing it – these form useful general guidelines that, once you look, crop up everywhere. They include a sense of working with, rather than fighting against, your forces and materials – do your designs have positive geometric stiffness (they get stiffer as they deform because the geometry becomes more favourable)? Do you use the properties of your materials, such as their plasticity, to provide safety mechanisms, or rely on later complex additions to make them safe? Perhaps the most important rule of style is that the number of parameters needed to define the solution and how it behaves should be as small as possible – the complexity of the solution should be ruthlessly pruned in its final stages. The same idea appears when learning about 3D modelling – ‘fairness’, usually a marine term, is taken and used as general advice.

The meaning of “fair” is much debated in the marine industry. No one can define it, but they know when they see it. Although fairing a surface is traditionally associated with hull surfaces, all visible surfaces on any object can benefit from this process. In Rhino, the first cue for fairness in a surface is the spacing of the surface display isocurves. There are other characteristics of fair curves and surfaces. Although a curve or surface may be fair without exhibiting all of the characteristics, they tend to have these characteristics. If you keep these in mind while modeling, you will end up with a better final product.
The guidelines for creating a fair surface include:
● Use the fewest possible control points to get the curve shape.
● Use the fewest possible curves to get the surface shape

Rhino 5 User Guide

Glegg looks at solving engineering design problems in two main areas: the design of the problem and the design of the designer, the latter subdivided into the analytical, artistic and inventive components of thinking.

When designing the problem, Glegg recommends:

  1. Check for intrinsic impossibilities – are the forces so large as to crush any material that could fit in the load path, are you destroying or creating energy in order for your solution to work?
  2. Beware of myths and unchecked assumptions – the stories you hear are often wrong or outdated
  3. Define problems in figures and quantified levels; avoid definitions made only in words

The design of the designer:

Three key areas of thinking for the designer: the inventive, the artistic, the rational. A lack of any leads to a poor design, but strength in one is often enough:

The inventive:

  1. Concentration and relaxation around your work
  2. Scepticism of tradition and folklore
  3. Complicate to simplify – usually a small change to one small part can bring rewards
  4. Make the most of your material and processes, rather than fighting their properties – they will not give in without a fight
  5. Divide and tidy up your solution – get to the end with a solution, however ugly, that works, then refine

The artistic:

  1. Aim at continuity of energy – I extend this to include the continuity of forces through structures, are your forces changing directions? Are forces that tend to occur simultaneously taking short paths to cancel one another when they can?
  2. Avoid over-designing – take an idea to extremes and pull back into the sweet spot where your innovation is well used
  3. Choose a rational basis for what you want to achieve – know success and completion when you see it
  4. Find the appropriate medium – if you are fighting your processes and materials to find a solution, then you are likely to have made a mistake very early in the design process
  5. Avoid perpetuating arts and crafts – an inappropriate, though well optimised, process might be easy to procure, but should be carefully considered before it is chosen; there might be good economic reasons for a particular solution.

The rational:

  1. Think logically – find the right analytical tool for any given task, and back up your more complicated analysis with simpler approaches
  2. Design for a predictable life – something that demands regular replacement will be maintained, and other faults are more likely to be found. Designing for an indefinite life increases the risk of sudden and catastrophic failure. Design modes of failure that are predictable and easy to fix
  3. Watch for disguised assumptions – it is easy to come up with analytical stories to convince yourself of a position, and for someone else to tell a different story and reach a very different solution. Solving for strength is easy – find any load path and the lower bound theorem works hard to fill in the gaps. Solving for stiffness is hard – you need to find the true solution to make sure you don’t accidentally break something else on the way to your strength solution…
  4. Safety is found in absorbing energy, not transferring it
  5. Overpaying is generally cheaper than failure

Well worth a read for the young engineer starting out in practice. It only takes a couple of hours to read and has some nice examples to think about.


Thoughts on: The Signal and the Noise, the art and science of prediction by Nate Silver

Nate Silver has recently lost his sheen, having been a consistent Trump doubter throughout the Republican primaries – his piece ‘Donald Trump’s Six Stages of Doom’ felt like a ray of hope last year, predicting that Trump lacked the breadth of support to continue his rise as the Republican field began to narrow. FiveThirtyEight remained highly sceptical of Trump’s ability to win the nomination until far too late – until, effectively, the nomination had been secured. In a return to form, Nate Silver has looked at his failed predictions more closely in ‘How I acted like a pundit and screwed up on Donald Trump’ – maybe next time he will be a little less wrong.

The Signal and the Noise has been a strange book to read twice – the first read was very easy, the second dragged. Large passages that flow well on first reading feel frustratingly shallow when read again. However, much of the book is excellent and shows a deep knowledge not only of statistics and the modelling of systems but also of the nuances of his subjects, in particular baseball, poker and American politics. I would give the chapters on climate change, flu and terrorism a miss.

The real strength of the book is its accessible introduction to some of the language around forecasting, with examples of its use. Some of these concepts include:

Forecast: a probabilistic statement, usually over a longer time scale, e.g. there is a 60 percent chance of an earthquake in Southern California over the next thirty years. There are many things that are very hard to predict but fairly easy to forecast once longer-term data is available. An upgrade on this is a ‘time-dependent forecast’, where the probability of an event occurring varies over time according to some variables – for example, the forecast for a thunderstorm in London over the next week shows a higher probability when it is hot and humid.

Prediction: a definitive and specific statement about when and where something will happen, e.g. there will be a riot next Friday afternoon in Lullingstone – a good prediction is testable. Hypotheses live by the quality of their predictions.

The signal: the signal indicates the true underlying relationship between variables; in its ultimate form it is Laplace’s demon, where, given the position and velocity of every particle in existence and enough calculation power, past, present and future are all the same to you.

The noise: the data that obscures the signal; it has a wide variety of sources.

Overfitting: finding a model (often extremely well calibrated against the past) that fits the noise rather than the signal (the underlying relationship). An overfitted model ‘predicts’ the past well but the future extremely poorly – it is taking the wrong things into account and failing to find the parts of the model that matter. Superstitions are often examples of overfitting – finding false patterns in the shadows. The most famous example is probably the Super Bowl stock market indicator. With the growth of available data and computing power it is increasingly easy to create a falsely convincing overfitted model: you can search millions of potential correlations, many of which will appear to be significant purely by chance. To avoid overfitting, theory can help, showing which variables might be the most useful in any given model. I am fairly certain I have been guilty of overfitting in various pieces of work in the past.
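
A toy demonstration of that last point (no real data involved – it generates a random ‘target’ series and a large pile of equally random ‘indicators’, then counts how many appear to predict the target well purely by chance):

#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);

    const int years = 20;         //length of each series, e.g. 20 'years' of up/down results
    const int indicators = 10000; //number of candidate indicators searched

    //a purely random target series - there is nothing here to predict
    std::vector<int> target(years);
    for (int& t : target) t = coin(rng);

    int spurious = 0;
    for (int i = 0; i < indicators; ++i) {
        int matches = 0;
        for (int y = 0; y < years; ++y) {
            if (coin(rng) == target[y]) ++matches; //each indicator is also pure noise
        }
        //17 or more matches out of 20 looks impressively predictive for any single
        //indicator, but turns up reliably when thousands are tried
        if (matches >= 17) ++spurious;
    }
    std::cout << spurious << " of " << indicators
              << " random indicators 'predicted' at least 17 of "
              << years << " years\n";
}

With these numbers, around a dozen of the 10,000 noise-only indicators typically pass the threshold – exactly the sort of ‘signal’ an overfitted model latches onto.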

Calibration of a model: a well calibrated model gives accurate probabilities of events occurring across the range it predicts – events it predicts with X% likelihood of occurring actually occur X% of the time over the long term.
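
A rough sketch of how such a check might look (everything here is simulated – the ‘forecasts’ are random and the ‘events’ are generated so that the forecaster is perfectly calibrated, so the observed frequencies should track the forecast probabilities):

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> forecastDist(0.0, 1.0);

    //10 bins: forecasts of 0-10%, 10-20%, ... 90-100%
    std::vector<int> count(10, 0), occurred(10, 0);

    for (int i = 0; i < 100000; ++i) {
        double p = forecastDist(rng);                      //the forecast probability
        bool event = std::bernoulli_distribution(p)(rng);  //a perfectly calibrated world
        int bin = std::min(9, int(p * 10));
        ++count[bin];
        if (event) ++occurred[bin];
    }

    for (int b = 0; b < 10; ++b) {
        std::cout << "forecasts of " << b * 10 << "-" << (b + 1) * 10
                  << "%: event occurred " << 100.0 * occurred[b] / count[b]
                  << "% of the time\n";
    }
}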

Wet bias: when the calibration of a model is deliberately poor, for example to provide more positive surprises than negative surprises, so the forecast is more pessimistic than would be accurate. This is taken from weather forecasting, where rain is given a higher than accurate probability as the public notice a false negative (i.e. low probability of rain when there actually is rain) more than a false positive.

Discrimination of a model: a model is capable of discriminating between different events to give different probabilities – a model that does not discriminate can still be very well calibrated, but is not useful. Saying 1/6th of rolls of a fair dice will come out as a 6 is well calibrated, but not very useful if you need an advantage over someone in a game.

Goodhart’s law: the predictive value of an economic variable decreases when it is measured and targeted, in particular by government policy

Bayesian prior, posterior probability and Bayes’ theorem: a simple identity that updates the probability of something being true given a prior belief that it is true, the probability of observing the evidence seen if the belief is true, and the overall probability of observing that evidence. The outcome (the posterior probability) can then be used as the prior for the next piece of data, updating the probability of the thing being true again.

Derivation of Bayes theorem:

P(A|B)P(B) = P(B|A)P(A)

P(Rain|Humid)P(Humid) = P(Humid|Rain)P(Rain)

Then divide by P(B), or P(Humid); this gives Bayes’ theorem and allows us to use it as a tool for updating our ‘degree of belief’ when provided with new evidence:

P(Rain|Humid) = P(Rain) * (P(Humid|Rain)/P(Humid))

Posterior = Prior * Bayes factor
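
As a quick worked example (numbers invented purely for illustration): if P(Rain) = 0.2, P(Humid|Rain) = 0.9 and P(Humid) = 0.3, then P(Rain|Humid) = 0.2 × (0.9/0.3) = 0.6 – observing humidity raises the degree of belief in rain from 20% to 60%, and 0.6 becomes the prior for the next piece of evidence.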

The same can be applied to continuous distributions.

 

Frequentism: the interpretation of the probability of an event as the limit of the proportion of times it occurs as the number of opportunities for it to occur tends to infinity

The role of assumptions in models: often the assumptions made in a model dominate its behaviour; to generate ideas, try breaking the assumptions and check for changes in the output. To get an idea of the sensitivity of your model, make small changes to the inputs, run it many times and plot the range of outputs as a probability distribution.
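
A quick sketch of that kind of sensitivity check (the ‘model’ here is a made-up toy function, purely to show the perturb, re-run and collect pattern):

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

//a stand-in for 'the model' - any function of its inputs would do
double model(double a, double b) {
    return a * a + 10.0 * b;
}

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> perturb(0.0, 0.05); //small input changes

    const double baseA = 2.0, baseB = 1.0; //nominal inputs
    std::vector<double> outputs;
    for (int run = 0; run < 10000; ++run) {
        outputs.push_back(model(baseA + perturb(rng), baseB + perturb(rng)));
    }

    //the spread of these outputs indicates the model's sensitivity to its inputs
    std::sort(outputs.begin(), outputs.end());
    std::cout << "nominal output: " << model(baseA, baseB) << "\n"
              << "5th to 95th percentile of perturbed runs: "
              << outputs[500] << " to " << outputs[9500] << "\n";
}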

Accuracy and precision: accuracy is when the predictions fall, on average, on the correct values; a precise prediction has a small range of expected values

Fitting the noise, correlation without causation: similar to overfitting

Nearest neighbor analysis: finding the results of similar individuals from the past, and using these to build a probabilistic view of the potential of the individual in question. Of those who have been here in the past, what actually happened to them?

What makes a forecast good: accuracy, honesty, economic value. Accuracy is about whether the forecast is correct over the long term (is accuracy the same as calibration?); honesty is whether the model is the best the analyst could produce given the information available to them; economic value is whether the model allows those who use it to make better decisions, and so takes their biases into account

What makes a forecast of use: persistence, climatology. Persistence is a model that assumes a result will be the same as the previous one (i.e. the temperature today will be the same as yesterday’s); climatology (for a weather forecast) takes a slightly longer view, looking at the average behaviour previously with only the most basic inputs (for example the day of the year).

Results-oriented thinking, process-oriented thinking: acknowledging that your results may not be an indication of the quality of your predictions over the short term; it does, however, require confidence that you are on the right track

Theory, correlations and causation: where many variables are available, theory is needed to help separate the relationships with true predictive value from those that are mere correlation and will not be of long-term value.

Initial condition uncertainty, structural uncertainty and scenario uncertainty: when making a prediction about events, a lack of information about the current conditions may dominate near-term predictions – this is initial condition uncertainty; scenario uncertainty dominates over the longer term, as the behaviour of the system may drift away from our current model; and at any time period there will be structural uncertainty – uncertainty about our current understanding of the system

Log-log charts: make it much easier to observe relationships between variables that are related by power laws, for example those that obey Zipf’s law. Log-log charts can reveal otherwise hard-to-see relationships between frequency of occurrence and rank – and sometimes reveal where the limits of the data might be: if the size of events and their frequency follow a straight line on the plot, then it is not unimaginable that something an order of magnitude (or two) larger might occur in the future.


Adaptive thing desk model

With its profits UST funds several EngD students, and a byproduct of one EngD (by Gennaro Senatore) has been the adaptive truss. It is quite a sophisticated bit of kit, covered in sensors and actuators and with some proper programming behind it, which allows it to detect its movements and compensate by changing shape to produce a flat surface under a variety of loads. Without the help of its actuators, it moves quite a bit:

Aside from being an interesting machine and a demonstration of Expedition’s ability to create things, the adaptive truss has a more fundamental lesson to teach – that when designing against permanent and variable loads we need not always combine them and then allow the design to be dominated by stiffness.

An adaptive structure will always need to be strong enough to sustain the loads placed on it, but it does not need to be so stiff as to deflect less than some strict limit under every load case, including ones that are quite rare. For the most extreme loads an adaptive structure can change shape, compensating for large movements. A structure that adapts like this needs less material, but adds complexity and power usage in compensation. Like so many things, the initial gains from small amounts of adaptability are large and the costs small – for example the retensioning of cables in a pre-stressed slab as additional floors are added (one-time adaption), or mechanically operated sun-screens on the sides of buildings (daily adaption). The marginal benefit of added adaptability quickly decreases and becomes negative, with the most extreme cases relying on adaptive mechanisms to prevent excessive vibration – like a pair of noise-cancelling headphones. This results in systems where the overall energy use and cost are greater than with no adaptability at all – the lower weight of material is more than compensated for by the energy consumed in use.

The adaptive truss has now done the rounds – ‘almost everyone we really want to see it has seen it’. It is looking for a more permanent home, and Expedition is looking for a way to convey its message to visitors to the office in a more manageable form.
The desk model needs to be conducive to play and experimentation by visitors – and to demonstrate its ideas to sometimes unconvinced audiences. It also needs to show something beyond what has been shown with the larger adaptive truss, namely that adaptive structures could be beneficial in many areas – it could be a bike suspension system, a facade support system, or anything between and beyond.

The first model has been made as simple as possible: it has a small, simple structure, cheap and basic sensors and actuators, and the minimum of complexity in its programming. Hopefully, the process of developing it will show potential pitfalls in producing more sophisticated models and give others an idea of what can be quickly and cheaply achieved.

The basic design involves a very soft, movable truss (powered by a small stepper motor), an ultrasonic distance sensor and an Arduino Uno to co-ordinate the two – it also has a small dot matrix display, mostly to help diagnose problems.

The truss is a three-pin arch: two compression members with an offset cable placed below. This cable changes length as a stepper motor winds it in, allowing the bridge to move up and down. An elastic band built into the cable makes the entire structure very soft, with large changes in position resulting from a load of about the weight of a mobile phone being applied and taken away.

The basic structure, loaded by a BlackBerry

The Arduino Uno makes the electrical aspect of the circuit extremely easy, with each component individually wired to input and output pins – it is really a combination of around three circuits in parallel, connected by a program. There weren’t any complications due to interactions between the circuits, so I could reuse existing circuits. The program controlling the truss is as simple as possible: it checks the distance to the bridge from the sensor every second; if the bridge is too far away it lowers the bridge, and vice-versa.

Creating the program was also relatively straightforward, especially at such a low level of sophistication. Each second the ultrasonic sensor is pinged; it returns the duration for the ping to bounce off the bridge and return, and the Arduino instructs the motor to wind appropriately for a period of slightly under a second.

#include <LedControlMS.h>
#include <NewPing.h>
#include <Stepper.h>

//create the various components

//locate these on the arduino board
LedControl lc = LedControl(12, 11, 10, 64); //dot matrix display driver on pins 12, 11, 10
Stepper myStepper(200, 4, 6, 5, 7);         //200-step motor on pins 4 to 7
#define trigPin 2                           //ultrasonic sensor trigger
#define echoPin 3                           //ultrasonic sensor echo

String digits = "0123456789";

long duration = 1; //echo time in microseconds (long, as pulseIn can return large values)
int distance = 1;
int number = 1;
int oldnumber = 5;

//ping the ultrasonic sensor and convert the echo time into a number from 0 to 9
//describing the bridge position, where 5 is the target position
int ultrasonicnumber() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
  Serial.print(duration);
  int height = 1080;    //target echo time - larger numbers put the target position higher
  int sensitivity = 20; //smaller numbers are more sensitive
  number = constrain(((duration - height) / sensitivity) + 5, 0, 9);
  return number;
}

//show the bridge position number on the led dot matrix
void displaynumberondot(int number) {
  lc.displayChar(0, lc.getCharArrayPosition(digits[number]));
}

//wind the stepper one way or the other depending on which side of the target
//position (5) the bridge sits; do nothing if it is already there
void shiftstepper(int number) {
  if (number > 5) {
    myStepper.setSpeed(25);
    myStepper.step(200);
  }
  if (number < 5) {
    myStepper.setSpeed(25);
    myStepper.step(-200);
  }
  delay(1000);
}

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT); //set up the ultrasonic sensor pins before the first ping
  pinMode(echoPin, INPUT);
  for (int i = 0; i < 2; i++) {
    lc.shutdown(i, false); //wake the display
    lc.setIntensity(i, 8); //medium brightness
    lc.clearDisplay(i);
  }
}

void loop() {
  number = ultrasonicnumber();

  //returns a number that describes the bridge position

  displaynumberondot(number);

  //display this number on the led dot matrix
  shiftstepper(number);
}

Complications arising from more sophisticated control mechanisms – for example some type of proportional control, which would require interrupts – have been avoided for now.
Initially, the ultrasonic sensor was a great success; it seemed to detect objects quickly and reliably:

However, the ultrasonic sensor had much greater difficulty detecting the underside of the bridge – perhaps it was not large enough, or the wooden structure below the main deck was confusing the chip packaged with the ultrasonic sensor. It was also hard to diagnose issues, as it was not at all obvious what the sensor was detecting when it gave bad readings. For the next iteration a laser distance finder seems the most appropriate tool, though they are quite expensive.
Most other issues were down to my own lack of competence, but they were eventually solved.
A small custom set of characters was made for the dot matrix display – the digits, each with one half of an arrow next to them – the intention being to reflect the arrow in a small mirror. This was achieved with small alterations to an existing Arduino library.

In all, the truss is quite striking, and the design choice to keep all complexity off the main span of the bridge seems to have been effective. In reality, it is probably advantageous to keep the equipment used to produce adaptiveness away from the permanent structure anyway, for ease of maintenance.
The truss is also a good proof of concept that a small adaptive truss model can be made quickly and cheaply, paving the way for more sophisticated models in later versions. Improvements needed to reach a ‘minimum viable product’ are probably: a smooth, reasonably fast control system; much greater reliability and robustness; greater [any] attention to structural detail and manufacture; and finding a way for the model to explain itself quickly and clearly. Suggestions include 3D printed components, hydraulic linear actuators, vertically aligned trusses, augmented reality goggles (ambitious) and proper control systems with more sophisticated visual feedback – we shall see.

Oh, and here it is in action: