Getting started with Grasshopper: Working neatly, working with list data

First, good sources of information:

http://grasshopperprimer.com/

http://wiki.bk.tudelft.nl/toi-pedia/Grasshopper

http://www.grasshopper3d.com/

Meta-source:

https://explodebreps.wordpress.com/grasshopper-resources/

Book:

AAD_Algorithms-Aided Design by Arturo Tedeschi

 

I am starting to use Grasshopper in anger now – I am managing to create things more quickly and to explore ideas that would usually be very hard to reach by modelling by hand. I am taking a different approach from others who have been learning the software: I am looking at how it manipulates data and then thinking about how that could be applied, rather than approaching problems and trying to solve them directly. This is probably a reflection of my background as a more theoretically driven engineer, but I feel it has real benefits in understanding how the program works, and is supposed to work, rather than developing bad habits and workarounds. I will need to move on to creating real items eventually – hopefully by then I will be creating efficiently and working with the program, rather than against it.

 

There are several data primitives in Grasshopper which closely align with the geometric primitives found in Rhino3D – points, vectors, curves of various types, surfaces and breps, and fields. There are other primitives, not seen in Rhino, used to control the flow of data: booleans, numbers, strings, colours, domains, matrices and file paths. Often these are properties of a Rhino primitive – the length of a curve expressed as a number, for example.

 

There seem to be significant differences between Rhino and Grasshopper when it comes to ‘modelling with precision’. In Grasshopper, precision is largely guaranteed by the environment (or at least mistakes tend to have catastrophic, and therefore very apparent, results – usually…). In Rhino, by comparison, discipline is required to keep a model working well and with good associative relationships, for example by using compound points and the appropriate object snaps.

 

See pdf for examples of items listed below:

Download (PDF, 914KB)

 

Working neatly in Grasshopper:

Clustering: the most powerful way of working neatly – it combines many components into one, and the resulting cluster can be saved and reused elsewhere.

Grouping – put a box round things.

Aligning – make boxes line up nicely; not associative, which might be a mercy.

 

Working with list data:

Lists can be made from panels: type in data with a new line between each item and turn off ‘Multiline Data’ in the options (right-click on the panel object).

Whilst lists of simple primitives are common (e.g. a range of numbers), the most common use of lists is lists of geometric primitives – for example, ‘Set Multiple Lines’ on a line primitive will create a list of lines, and this list of objects can then be manipulated like any other list.

Many different ways to create, interrogate, manipulate lists. Some much more useful than others. See pdf for examples.

 

Next up, working with tree data structures.

Thoughts on: The Selection of Design, Gordon L. Glegg

The Selection of Design is short – really short; it takes maybe an hour to read through. This is understandable: looking at the timeline of books written by Glegg, this is the second, written four years after The Design of Design, which I enjoyed greatly. Unfortunately it suffers the fate of many second outputs and fails to live up to the first, as it only contains a few years of wisdom, whereas the first contained the best ideas of most of a career.

Many of the ideas are interesting and some are thought provoking, though there are some that do not feel fully thought through.

  • There is little simple engineering left – or at least there shouldn’t be… I disagree with this – changes in opportunity and materials allow simple solutions where previously a workaround was required
  • When starting to solve a problem, try to focus your thought at the ‘interfaces’ to make these often hard-to-analyse parts simple – in the case of a manufacturing run this is often the interface up to which value is measured by weight (to help account for variation in density) and after which it is measured by volume. For other systems the interfaces can often be seen at points where the types of energy change radically. There is usually more than one interface in each system. When designing, start from each of these interfaces and work outwards. Agree with this.
  • As a strategy when faced with difficult and highly constrained problems, try solving the problem several times, each time with one of the constraints removed – what is common between these solutions? Try to reimpose the final condition by adjusting one of the partial designs. Agree with this.
  • Think about the context of the thing you are designing in its entirety and think about ways it might vary from your common experience – an example given is moisture in the air condensing on cool objects, making them wet, another that vibrations make nuts fall off bolts, changing the conditions of the machine.
  • When choosing between designs of seemingly equal merit, give benefit to the one that is statically determinate (or easy to analyse accurately), this reduces the risk by allowing a more informed design. If both are determinate and easy enough to analyse accurately, choose the one that is “complicated*” and “could not have been done 5 years ago”. I disagree with the latter – the assumption that all neat design solutions have already been found is certainly not true in structural engineering.
    • * Complicated is justified later – something with unneeded complications is bad, something with appropriate complexity tends to be good. Glegg is still advocating the ‘divide and tidy up’ approach from The Design of Design, but is now pushing towards a fairly direct mapping of functions to components; this is advocated over a many-functions-to-few-components approach largely to increase the effectiveness of any one component – I agree with this. However, under another constraint the combining of functions might well be worthwhile despite the impact on individual components’ efficiency – for example, reducing the weight of a racing car by using the engine block as part of the structure of the car.
  • Differentiating between possible impossibilities and intrinsic impossibilities – use basic principles, not only from physics and engineering, but also from accounting, psychology etc.
  • When faced with an impossibility do not try to negotiate a compromise – the right solution will not lie there, instead, try to make some of the impossibility dissolve through a clever solution, or one that moves a part of the impossibility elsewhere where it can be easily managed.
  • Value simplicity only for its more direct virtues, such as reliability and cheapness. There is no intrinsic value in simplicity itself. Furthermore, an oversimplified design can be vulnerable to small variations between the model and the real world which a more complicated design is able to accommodate.
  • Work with the material and its effects wherever you can – use expansion to create beneficial internal stresses, change the material (e.g. heating or cooling it) so it works for you rather than fighting it


Now with complimentary bookmark – good choice Keith Blacks. Who knew the Spice Girls were still touring together in 2008?

Arduino progress

I am slowly reaching the point where I can use the technical information about the Arduinos (and clones) to move beyond ‘mashups’ of previous and example code. This makes progress far faster and more satisfying – my confidence that any particular decision will lead to the desired effect has increased dramatically. Getting acquainted with the sources of information for the Arduinos has also made porting functions from one board to another much quicker – I can predict whether the smaller version (Uno to Nano to Gemma) will be able to function satisfactorily: does it have enough pulse width modulation pins to run a servo (or must the load go on the processor via a soft servo library), and is it capable of receiving I2C signals?

 

Pinout diagrams:

For the Arduino Uno: http://pighixxx.com/unov3pdf.pdf

Arduino Nano: http://pighixxx.com/nanopdf.pdf

Adafruit Gemma: https://learn.adafruit.com/introducing-gemma/pinouts

 

I2C attachment:

No need to specify which pins I2C attaches to – there is only one possible pin for SDA (which carries the data stream) and one for SCL (which provides the clock ‘beat’ so the processor knows at what rate to listen). These pins are fixed by the microcontroller, which is not surprising as I2C can support many devices off one pair of pins.

For my purposes for now, interpreting I2C starts with including the appropriate library and attaching a device to I2C (usually any number will do).
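
A quick way to check that a device really is visible on the bus is an I2C scanner – a minimal sketch of my own (not part of the project code below, and assuming a board with hardware I2C such as the Uno or Nano) that tries every address and reports which ones acknowledge:

#include <Wire.h>

void setup() {
  Serial.begin(9600);
  Wire.begin();
}

void loop() {
  for (byte address = 1; address < 127; address++) {
    Wire.beginTransmission(address);
    if (Wire.endTransmission() == 0) {   // 0 means the device acknowledged
      Serial.print("Device found at address 0x");
      Serial.println(address, HEX);
    }
  }
  Serial.println("Scan complete");
  delay(5000);
}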

 

Servo attachment and more general attachment:

Most servos are driven from a supply voltage pin, a ground pin and a pulse width modulation signal – the position of the servo is determined by the duty cycle. Attaching a servo on digital pin “D5” of the Arduino Nano requires

myServo.attach(5); // the number printed on the board

and not

myServo.attach(11); // the physical pin number

and also not

myServo.attach(1); // the first PWM pin

For attachment to the analogue pins use

myServo.attach(Ay);

where y is the number printed on the board next to the analogue pin.
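
Pulling that together, a minimal sketch (my own, wired as described above for the Nano – adjust the pin to your own board):

#include <Servo.h>

Servo myServo;

void setup() {
  myServo.attach(5);     // the "D5" printed on the Nano, not the physical leg number
  // myServo.attach(A0); // or attach to analogue pin A0 in the same way
}

void loop() {
  myServo.write(0);      // sweep between the two extremes, angle in degrees
  delay(1000);
  myServo.write(180);
  delay(1000);
}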

 

Use of the serial:

Use the serial port wherever possible, and move to a board that does not have a serial port (e.g. the Adafruit Gemma) at the latest possible opportunity. Checking for errors without the serial port, especially errors you have not investigated before and do not know the rough causes of, is much trickier.
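
When there is no serial port, one crude fallback (a sketch of my own, assuming the Gemma’s onboard red LED sits on digital pin 1 – check your own board) is to blink the onboard LED a set number of times to signal how far the code has got:

#define LED_PIN 1   // onboard LED pin – an assumption, adjust for your board

void blinkCode(int count) {
  for (int i = 0; i < count; i++) {
    digitalWrite(LED_PIN, HIGH);
    delay(150);
    digitalWrite(LED_PIN, LOW);
    delay(150);
  }
  delay(600);   // gap so separate codes can be told apart
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
  blinkCode(1);   // one blink: reached the end of setup
}

void loop() {
  blinkCode(2);   // two blinks: reached the top of loop
  delay(2000);
}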

Current simplified code to allow movement of a servo based upon a laser distance measurement:

#include <Wire.h>
#include <SparkFun_VL6180X.h>
#include <PID_v1.h>   // not used in this simplified version
#include <Servo.h>

#define VL6180X_ADDRESS 0x29   // the sensor's default I2C address – which pins it sits on
                               // is fixed by the board's single SDA/SCL pair
#define SERVO1PIN 5

int x;
Servo myServo1;
VL6180xIdentification identification;
VL6180x sensor(VL6180X_ADDRESS);

void setup() {
  Serial.begin(9600);
  x = 0;
  Wire.begin();
  delay(100);
  sensor.getIdentification(&identification);   // check the sensor responds
  sensor.VL6180xDefautSettings();              // load the sensor's default settings
  myServo1.attach(SERVO1PIN);
  myServo1.write(90);                          // start at mid travel
  delay(10);
}

void loop() {
  x = sensor.getDistance();       // distance in mm
  Serial.println(x);
  x = constrain(x, 17, 50);       // clip to the usable range
  x = map(x, 17, 50, 0, 180);     // rescale to a servo angle in degrees
  myServo1.write(x);
  delay(10);
}

Arduino Gemma in action – note it is a little more sluggish due to the constant servo refresh required by the soft servo library:
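
The Gemma cannot use the hardware Servo library, so a soft servo is needed – I assume here that Adafruit’s Adafruit_SoftServo library is the one meant, and the pin number is my own choice. A minimal sketch showing the constant refresh that causes the sluggishness:

#include <Adafruit_SoftServo.h>

#define SERVOPIN 0   // D0 on the Gemma – an assumption, match your own wiring

Adafruit_SoftServo myServo;
int angle = 0;

void setup() {
  myServo.attach(SERVOPIN);
}

void loop() {
  myServo.write(angle);        // set the target angle as usual
  myServo.refresh();           // but the pulse must be resent by us...
  delay(20);                   // ...roughly every 20 ms, hence the sluggish feel
  angle = (angle + 1) % 181;   // slow sweep from 0 to 180 degrees
}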

Arduino Nano in action

Thoughts on: Freakonomics Self-improvement Month – Peak, Grit, the Mundanity of Excellence, Tim Ferriss

I’m not really one for self-improvement; glossy books and lofty goals you never quite stick to have always seemed tacky. Freakonomics has taken a bit of a step down by hosting a self-improvement series, inviting in people with a product to sell and allowing them to promote their wares at will. However, the guests’ involvement has worked really well: I have bought two of the books that were promoted, read them (twice – a self-improvement tactic of my own) and am now sketching out my opinion on them (another revolting self-improvement tactic). Perhaps I really am one for naff self-improvement?

 

The Freakonomics self-improvement month focusses on productivity, by far the most popular topic amongst voting listeners. The guest in the introductory show, Charles Duhigg, suspects this popularity is because “our experience matches so poorly with our expectation”. My personal frustration at the mismatch between possibility and reality does not really stem from a lack of productivity – I am reasonably good at producing things when I need to – but from my inability to ingrain and sustain good ideas and create good habits over time (perhaps I should read Duhigg’s books on habit). My retention of knowledge and skill is poor. In all, this limits my ability to benefit from learning new things, as I simply can’t do them a few weeks later and have to relearn. Reading books twice and writing up my thoughts on them are attempts to remedy this; hopefully I will remember and act on the things I have liked from the books the better for doing it. This will be a small piece of self-improvement in itself. As a hopeful start, I remember the names of the authors and the basic concepts with confidence (pretty rare for me after a single read of a book), so perhaps the additional effort of a double read and write-up has been worth it.

 

The books and podcasts could be boiled down into 100 words for each contributor, so let’s do that:

 

Peak by Anders Ericsson, Freakonomics episode

Peak: Secrets from the New Science of Expertise primarily exists to sell the idea of ‘deliberate practice’. This is practice, usually alone or with an instructor, with clearly defined, small goals, a known effective training regime, maximal concentration, deliberate change, a performance demand slightly beyond your current ability and fast, accurate feedback. ‘Focus, feedback, fix it.’ Deliberate practice aims for skills rather than knowledge. With plenty of deliberate practice you can become an expert in almost any area; your initial aptitude, as long as it meets a fairly low baseline, is relatively unimportant.

 

The mundanity of excellence by Daniel Chambliss

Coming from years of observation of swimmers, Daniel Chambliss summarises why some are faster than others. Qualitative differentiation is seen between levels, quantitative differentiation within levels, and it is these differences that improve race times. The qualitative factors come from a great training culture that strives for continuous improvement. Most importantly, outstanding performance is the combination of many technical factors – each isolated, honed and reintroduced in practice – that come together consistently. Excellence is hard to understand when seen as a complete product, but there is no magic. ‘Practice not until you can get it right, but until you cannot get it wrong.’

 

Grit by Angela Duckworth, Freakonomics episode

Angela Duckworth is a pretty serious overachiever, and has a theory as to why some people reach great levels of performance: they practice intensively and effectively (see Anders Ericsson) and they keep on practicing over long periods of time. They show ‘stick-to-it-iveness’, ‘follow-through’ – or grit. They are not distracted by other potential goals, always stepping towards their main goal. Talent × effort = skill; skill × effort = achievement.

 

Tim Ferriss, Freakonomics episode

Has no evidence for what he says – why do Freakonomics give him an easy time? There is no grit or excellence here, just a good salesman.

 

Will anything I do be different as a result of reading these books? Probably not in the long term – I will almost certainly forget about their ideas over time, unless they keep on coming up. This is probably the way forward: if the ideas are good and stick around in the marketplace in some form, then I will assimilate them; if they don’t, they will be forgotten. Excitingly, Freakonomics are looking for volunteers to engage in deliberate practice to improve some skill over the course of a year or so, so perhaps it will remain on my radar for a while yet, or be killed as those volunteers fail in their ambitions.

Thoughts on: The Design of Design, Gordon Glegg

ISBN 0 521 07447 9

The Design of Design is the genuine advice of someone who has seen (and made, analysed and corrected) his fair share of errors. Some of the advice I know and accept – for example, that creative thought is best nurtured through alternating periods of extreme focus and relaxation, with inspiration usually striking whilst you relax. Some advice runs against what I have experienced and been taught – for example, I generally feel that extreme concentration of complexity in a design is the safest path, whereas Glegg advocates a ‘complicate to simplify’ approach, spreading his complexity more evenly. By taking this approach he seems at risk of unexpected interactions between seemingly unrelated items ruining several of his solutions; however, he insures himself against this with full scale testing.

The most useful advice for me concerns his sense of ‘style’ in a design solution and his ways of describing it – these form useful general guidelines that, once you look, crop up everywhere. They include a sense of working with, rather than fighting against, your forces and materials – do your designs have positive geometric stiffness (getting stiffer as they deform because the geometry becomes more favourable)? Do you use the properties of your materials, such as their plasticity, to provide safety mechanisms, or rely on later complex additions to make them safe? Perhaps the most important rule for style is that the number of parameters needed to define the solution and how it behaves should be as small as possible – a ruthless approach to the complexity of the solution should be taken in its final stages. The same idea appears when learning about 3D modelling – ‘fairness’, usually a marine term, is taken and used as general advice.

The meaning of “fair” is much debated in the marine industry. No one can define it, but they know when they see it. Although fairing a surface is traditionally associated with hull surfaces, all visible surfaces on any object can benefit from this process. In Rhino, the first cue for fairness in a surface is the spacing of the surface display isocurves. There are other characteristics of fair curves and surfaces. Although a curve or surface may be fair without exhibiting all of the characteristics, they tend to have these characteristics. If you keep these in mind while modeling, you will end up with a better final product.

The guidelines for creating a fair surface include:

● Use the fewest possible control points to get the curve shape.
● Use the fewest possible curves to get the surface shape.
Rhino 5 User Guide

Glegg looks at solving engineering design problems in two main areas, the design of the problem and the design of the designer, which is then subdivided into the analytical, artistic and inventive components of thinking.

When designing the problem, Glegg recommends:

  1. Check for intrinsic impossibilities – are the forces so large as to crush any material that could fit in the load path, are you destroying or creating energy in order for your solution to work?
  2. Beware of myths and unchecked assumptions – the stories you hear are often wrong and outdated
  3. Define problems in figures and quantified levels; avoid definitions made only in words

The design of the designer:

Three key areas of thinking for the designer: the inventive, the artistic, the rational. A lack of any leads to a poor design, but strength in one is often enough:

The inventive:

  1. Concentration and relaxation around your work
  2. Scepticism of tradition and folklore
  3. Complicate to simplify – usually a small change to one small part can bring rewards
  4. Make the most of your material and processes, rather than fighting their properties – they will not give in without a fight
  5. Divide and tidy up your solution – get to the end with a solution, however ugly, that works, then refine

The artistic:

  1. Aim at continuity of energy – I extend this to include the continuity of forces through structures: are your forces changing direction? Are forces that tend to occur simultaneously taking short paths to cancel one another when they can?
  2. Avoid over-designing, take an idea to extremes and pull back into the sweet spot where your innovation is well used
  3. Choose a rational basis for what you want to achieve – know success and completion when you see it
  4. Find the appropriate medium, if you are fighting your processes and materials to find a solution, then you are likely to have made a mistake very early in the design process
  5. Avoid perpetuating arts and crafts – an inappropriate, though well optimised, process might be easy to procure, but it should be carefully considered before it is chosen; there might be good economic reasons for a particular solution.

The rational:

  1. Think logically – find the right analytical tool for any given task, back up your more complicated analysis with simpler approaches
  2. Design for a predictable life – something that demands replacement regularly will be maintained, and other faults are more likely to be found. Designing for an indefinite life increases the risk of sudden and catastrophic failure. Design modes of failure that are predictable and easy to fix
  3. Watch for disguised assumptions – it is easy to come up with analytical stories to convince yourself of a position, and for someone else to tell a different story and reach a very different solution. Solving for strength is easy – find any load path and the lower bound theorem works hard to fill in the gaps. Solving for stiffness is hard – you need to find the true solution to make sure you don’t accidentally break something else on the way to your strength solution…
  4. Safety is found in absorbing energy, not transferring it
  5. Overpaying is generally cheaper than failure

Well worth a read for the young engineer starting out in practice. It only takes a couple of hours to read and has some nice examples to think about.


Thoughts on: The Signal and the Noise, the art and science of prediction by Nate Silver

Nate Silver has recently lost his sheen, having been a consistent Trump doubter throughout the Republican primaries – his piece ‘Donald Trump’s Six Stages of Doom’ felt like a ray of hope last year, predicting that Trump lacked the breadth of support to continue his rise as the Republican field began to narrow. FiveThirtyEight remained highly sceptical of Trump’s ability to win the nomination until far too late, by which point the nomination had effectively been secured. In a return to form, Nate Silver has looked at his failed predictions more closely in ‘How I acted like a pundit and screwed up on Donald Trump’ – maybe next time he will be a little less wrong.

The Signal and the Noise has been a strange book to read twice – the first read was very easy, the second dragged. Large passages that flow well on a first reading feel frustratingly shallow when read again. However, much of the book is excellent and shows a deep knowledge not only of statistics and the modelling of systems but also of the nuances of his subjects, in particular baseball, poker and American politics. I would give the chapters on climate change, flu and terrorism a miss.

The real strength of the book is its accessible introduction to some language around forecasting and examples of its use, some of these concepts include:

Forecast: a probabilistic statement, usually over a longer time scale, e.g. there is a 60 percent chance of an earthquake in Southern California over the next thirty years. There are many things that are very hard to predict but fairly easy to forecast once longer term data is available. An upgrade to a forecast is a ‘time-dependent forecast’, where the probability of an event occurring varies over time according to some variables – for example, the forecast for a thunderstorm in London over the next week shows a higher probability when it is hot and humid.

Prediction: a definitive and specific statement about when and where something will happen, e.g. there will be a riot next Friday afternoon in Lullingstone – a good prediction is testable. Hypotheses live by the quality of their predictions.

The signal: the signal indicates the true underlying relationship between variables, in its ultimate form it is Laplace’s demon, where given the position and velocity of every particle in existence and enough calculation power, past, present and future are all the same to you.

The noise: the data that obscures the signal; it has a wide variety of sources.

Overfitting: finding a model (often extremely well calibrated against the past) that fits the noise rather than the signal (the underlying relationship). An overfitted model ‘predicts’ the past well but the future extremely poorly – it is taking the wrong things into account and failing to find the parts of the model that matter. Superstitions are often examples of overfitting – finding false patterns in the shadows; the most famous example is probably the Super Bowl stock market indicator. With the growth of available data and computing power it is increasingly easy to create a falsely convincing overfitted model by searching millions of potential correlations, many of which will appear significant purely by chance. To avoid overfitting, theory can help by showing which variables might be the most useful in any given model. I am fairly certain I have been guilty of overfitting in various pieces of work in the past.

Calibration of a model: a well calibrated model gives accurate probabilities of events occurring across the range it predicts – events it predicts with X% likelihood of occurring actually occur X% of the time over the long term.

Wet bias: when the calibration of a model is deliberately poor, for example to provide more positive surprises than negative surprises, so the forecast is more pessimistic than would be accurate. This is taken from weather forecasting, where rain is given a higher than accurate probability as the public notice a false negative (i.e. low probability of rain when there actually is rain) more than a false positive.

Discrimination of a model: a model is capable of discriminating between different events to give different probabilities – a model that does not discriminate can still be very well calibrated, but is not useful. Saying 1/6th of rolls of a fair dice will come out as a 6 is well calibrated, but not very useful if you need an advantage over someone in a game.

Goodhart’s law: the predictive value of an economic variable decreases when it is measured and targeted, in particular by government policy

Bayesian prior, posterior probability and Bayes’ theorem: a simple identity that updates the probability of something being true given a prior belief that it is true, the probability of observing the evidence seen given the belief is true, and the probability of observing the evidence seen given the belief is not true (which together give the overall probability of the evidence). The outcome (the posterior probability) can then be used as the prior for the next piece of data, to update the probability of something being true again.

Derivation of Bayes theorem:

P(A|B)P(B) = P(B|A)P(A)

P(Rain|Humid)P(Humid) = P(Humid|Rain)P(Rain)

Then divide both sides by P(B), or P(Humid); this gives Bayes’ theorem and allows us to use it as a tool for updating our ‘degree of belief’ when provided with new evidence:

P(Rain|Humid) = P(Rain) * (P(Humid|Rain)/P(Humid))

Posterior = Prior * Bayes factor

The same can be applied to continuous distributions.
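
A quick worked example, with numbers of my own rather than the book’s: suppose the prior probability of rain on any given day is P(Rain) = 0.2, that P(Humid) = 0.3 of mornings are humid, and that P(Humid|Rain) = 0.9 of rainy days start humid. Waking up to a humid morning updates the belief to P(Rain|Humid) = 0.2 × (0.9/0.3) = 0.6 – the evidence has tripled the degree of belief in rain, and this posterior becomes the prior when the next piece of evidence arrives.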

 

Frequentism: the interpretation of probability as the limit of an event’s relative frequency as the number of opportunities for it to occur tends to infinity.

The role of assumptions in models: often the assumptions made in a model dominate its behaviour. To test this, try breaking the assumptions and checking for changes in output. To get an idea of the sensitivity of your model, make small changes to the inputs, run it many times and plot the range of outputs as a probability distribution.
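
As a toy illustration of that last point (a sketch of my own, nothing from the book): perturb the inputs of a simple model with random noise, run it many times and look at the spread of outputs. Here the ‘model’ is just a journey time from an assumed distance and speed.

#include <cstdio>
#include <random>

// A toy model: predicted journey time (minutes) from a distance and an assumed speed.
double model(double distance_km, double speed_kmh) {
  return 60.0 * distance_km / speed_kmh;
}

int main() {
  std::mt19937 rng(42);
  std::normal_distribution<double> distance(10.0, 0.5);   // small perturbations around the inputs
  std::normal_distribution<double> speed(30.0, 3.0);

  const int runs = 10000;
  int bins[12] = {0};                                      // 5-minute bins covering 0 to 60 minutes
  for (int i = 0; i < runs; ++i) {
    double t = model(distance(rng), speed(rng));
    int b = static_cast<int>(t / 5.0);
    if (b >= 0 && b < 12) ++bins[b];                       // outputs outside the range are dropped
  }
  for (int b = 0; b < 12; ++b)
    std::printf("%2d-%2d min: %5.1f%%\n", b * 5, b * 5 + 5, 100.0 * bins[b] / runs);
}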

Accuracy and precision: accuracy is when the predictions fall, on average, on the correct values; a precise prediction has a small range of expected values.

Fitting the noise, correlation without causation: similar to overfitting

Nearest neighbor analysis: finding the results of similar individuals from the past, and using this to find a probabilistic analysis of the potential of that individual. Of those who have been here in the past, what actually happened to them?

What makes a forecast good – accuracy, honesty, economic value: accuracy is whether the forecast is correct over the long term (is accuracy the same as calibration?); honesty is whether the model is the best the analyst could produce given the information available to them; economic value is whether the model allows those who use it to make better decisions, and so takes their biases into account.

What makes a forecast of use – persistence, climatology: persistence is a model that assumes a result will be the same as it was previously (i.e. the temperature today will be the same as yesterday); climatology (for a weather forecast) takes a slightly longer view, looking at the average behaviour previously with only the most basic inputs (for example the day of the year).

Results-oriented thinking, process-oriented thinking: acknowledging that your results may not be an indication of the quality of your predictions over the short term; this requires confidence that you are on the right track, however.

Theory, correlations and causation: where many variables are available, theory is needed to help separate the relationships with true predictive value from those that are mere correlation and will not be of long term value.

Initial condition uncertainty, structural uncertainty and scenario uncertainty: when making a prediction about events, a lack of information about the current condition may dominate very short term predictions – this is initial condition uncertainty. Scenario uncertainty dominates over the longer term, as the behaviour of the system may change from our current model. At any time period there will be structural uncertainty – uncertainty about our current understanding of the system.

Log-log charts: make it much easier to observe relationships between variables that follow a power law, for example those that obey Zipf’s law. Log-log charts can reveal otherwise hard to see relationships between frequency of occurrence, size and rank – and sometimes reveal where the limits of the data might be: if the size of events and their frequency follow a straight line on such a plot, then it is not unimaginable that something an order of magnitude (or two) larger might occur in the future.
