Decent User Experience – A New Human Right?

We digital designers, of all hues, generally like to tell ourselves that we're doing more than simply earning a wage by building wireframes and pushing pixels: we're making people's lives better.

We also tend to believe that the businesses that ultimately foot our bills take a more prosaic view: it's all about the brand and the bottom line.

I think that there is something more profound going on here; I'm arguing that we should start to see ourselves as playing a small, but active, part in one of history's grand narratives.  A big claim, I know, for a UX blog post.  But bear with me; I guarantee you'll be rewarded, and hopefully even convinced.

I’ve recently been reading Steven Pinker’s excellent book on the little-known fact that we’re currently enjoying the results of a centuries-long decline in violence, in all its forms: in crime, in war, in murder, in abuse, in discrimination.  In fact, in pretty much any form of oppression and violation of another human being’s experience of life.  The book goes into the details of this happy decline in countless different areas.  Here’s an example: a chart showing the massive decline in murder rates in Europe, in comparison to those of non-state societies.


Homicide rates in Western Europe, 1300-2000 and in nonstate societies.  From p459 of The Better Angels of Our Nature

The past was indeed a very different country – and I’d much rather be living in the here and now.

In the dim and, sadly, not so distant past, life was cheap, authoritarian hierarchies were unassailable and invulnerable, divisions between nations and social groups were impenetrable and, perhaps most importantly, liberal enlightenment philosophy had not yet infected our psyche with its celebration of the ‘individual’.  Today we live in a world of ‘individualism’, a world suffused with information bearing other people’s perspectives; a world in which it is ever harder to be completely blind and unsympathetic to the experiences, and suffering, of others.

But what, I hear you say, does this have to do with software user experiences?  Well, one of the driving forces behind this incredible change in the fabric of our culture, over the past half-century, is what Pinker refers to as the ‘Rights Revolutions’.  I believe that the value we place on good user experience should be seen in the context of these revolutions.

I am asking the question: is a decent user experience a new human right?

The Rights Revolutions

So, what are these ‘Rights Revolutions’? And what do they have to do with user experience?  Well, to quote Pinker:

“The efforts to stigmatize, and in many cases criminalize, temptations to violence have been advanced in a cascade of campaigns for ‘rights’ – civil rights, women’s rights, children’s rights, gay rights and animal rights”
From p458 of The Better Angels of Our Nature


Proportion of books mentioning rights-related phrases, from 1945.  From p459 of The Better Angels of Our Nature

This cascade is nicely illustrated by the graph above, which shows the sequence in which these phrases have become popular.  There’s a momentum behind these changes; each rights revolution raises the bar for what is acceptable, setting the stage for the next move forward.  If it’s not right to discriminate by race, how can we tolerate discrimination by sex?  If we can no longer mistreat children, why should we be able to mistreat animals?

These changes, Pinker suggests, are primarily driven by “the technologies that made ideas and people increasingly mobile”.  As people were increasingly brought into contact with other people’s perspectives, through fiction, through travel, through the sharing of ideas, it became increasingly hard to maintain that other people’s experiences didn’t matter.  We increasingly came to value the rights of the individual and to sympathise with their experience.  That has driven, and continues to drive, important and largely positive changes to the world we live in.

The right to a decent user experience?

Nothing has been as powerful as the web in driving large-scale improvements to the quality of user experience.  Businesses got the ability to make user experience changes relatively rapidly and see their big impact on bottom-line metrics.  Perhaps more importantly, users got the ability to easily switch over to whichever site offered them the best experience for getting what they wanted.

This commercial dynamic has, sadly, been far less successful at driving substantial improvement to the user experience of specialist business software, an area that RMA specialises in.  Business software is still generally sold through bulk licensing arrangements and bought by IT managers or business folk more interested in ticking boxes than in the merits of good design, or the experiences of users.

I think this is something that needs to, and is about to, change.  Just because someone isn’t immediately visible as a monetizable metric on a web analytics dashboard doesn’t mean they don’t matter.  People who work at a job day in, day out to get important things done matter.  Their experience is important; perhaps more important than those of the hordes of debt-laden e-shoppers.

Let’s face it, badly designed software can make people’s lives pretty miserable: unnecessarily hard-to-learn tools that make you feel stupid, inefficient experiences that frustratingly waste time, confused designs that hinder rather than help you get things done, ugly, jarring experiences that make life just that little bit greyer.

Thanks to the web and mobile app ecosystems, people are almost universally exposed to good user experience; they know what they are missing in the software that plagues their work lives.  At RMA we are witnessing this rising tide of intolerance to bad design in enterprise software.  People are rising up against it; we have seen them demanding more from their bosses, their IT departments and their service providers.  They are starting to sidestep IT restrictions by bringing in their own devices (BYOD) and using well-designed cloud-based offerings.  B2B software houses are starting to increase their investments in UX and visual design.  The tide is starting to turn.

There are many powerful, financially driven arguments for investing in better user experiences; they increase productivity while decreasing support and training costs.  But I believe that we can perhaps help the tide turn faster if we start to reframe the need for change.

We need to help shape and nurture, in all those millions of business software users, an awareness of their right to a decent user experience.  And, perhaps more importantly, those of us who work on business products need to build a culture in which it is simply not acceptable to ship bad user experiences.  Just as it isn’t acceptable to discriminate against employees, or to cheat your customers, it shouldn’t be acceptable to subject people to bad user experiences.

For better or worse, most of us live much of our lives in software; just as life is precious, so are experiences.

A better kind of Visualisation Book (by Mischa Weiss-Lijn)

In a recent blog post, the invariably interesting Robert Kosara points out that:

“After you’ve seen one visualization book, you’ve seen them all.”

And, having read quite a few, I must say: man, is he right.  They tend to be big on beautiful full-colour reproductions, and short on insights and techniques.  You generally emerge from reading a visualisation book none the wiser about how to actually conceive, design or implement compelling and, more importantly, useful visualisations.  Okay, in most visualisation books you can pull out a few useful tips and find some inspirational designs, but generally it’s pretty slim pickings.

Recently though, I’ve hit a rather rich vein, and I’m hoping I can find more of the same.  At RMA we’ve been doing some fascinating work with the insurance industry on visualising natural catastrophe risks (think hurricanes, floods and the like) and exposure to insurance liabilities.  This has led me away from the usual bevy of beautifully illustrated generalist visualisation books to ostensibly drier specialist ones.

Here are a couple, catchy titles, fancy coffee-table-ready cover designs, ’n’ all:

Thematic Cartography and Geovisualization, by Terry A. Slocum et al.

Visualizing Data, by William S. Cleveland

The “Thematic Cartography and Geovisualization” book is a breath of fresh air.  It’s poorly designed and has hardly any nice-looking visualisations in it.  But once you actually start reading, you find a treasure trove of concretely useful, mature and validated visualisation techniques.  It serves as a stark contrast to the rest of the data and information visualisation field.  Here is a textbook with a solid chapter on each of the key techniques for visualising data on maps, such as the choropleth.  It goes into detail about how and when each technique should be used, how the data should be prepared, its inherent weaknesses, and the pros and cons of the various workarounds.  This is invaluable information, condensing the collective knowledge of the GIS community into an accessible and re-usable form.  If only there were more textbooks of this kind for non-geo visualisations!

An example of a choropleth visualisation of an insurer’s ‘Aggregate’ exposure
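To give a flavour of the kind of technique the book drills into, here’s a minimal Python sketch (with invented exposure figures) of the data-preparation step behind a choropleth: classifying normalised values into quantile bins and mapping each class onto a sequential colour ramp.  Quantile classing is just one of the schemes such textbooks weigh up; equal intervals and natural breaks behave differently on skewed data.

```python
import numpy as np

# Hypothetical exposure rates for 12 regions; for a choropleth the values
# should already be normalised (e.g. losses per unit of insured value),
# since raw totals mostly just re-draw a map of region size.
exposure = np.array([0.2, 1.4, 0.9, 3.1, 2.2, 0.5, 4.8, 1.1, 2.9, 0.3, 3.7, 1.8])

# Classify into 5 quantile bins, so each class holds roughly equal numbers
# of regions. Equal-interval or natural-breaks classing are the usual
# alternatives, with different trade-offs on skewed data.
edges = np.quantile(exposure, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(exposure, edges)  # class index 0..4 per region

# Map each class to a light-to-dark sequential ramp, the convention for
# ordered data on a choropleth.
ramp = ["#fee5d9", "#fcae91", "#fb6a4a", "#de2d26", "#a50f15"]
for value, cls in zip(exposure, classes):
    print(f"exposure {value:.1f} -> class {cls} -> colour {ramp[cls]}")
```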

“Visualizing Data” by William Cleveland is actually a rather famous, and oft-cited, book.  But I’ll wager that not many people have actually read it.  Rather like the last book, its subject matter is narrowly focused: visualisation techniques for statistical analysis; characterising distributions and the relationships between the properties of a system.

Again, while elegantly minimal, the visualisations certainly aren’t the eye-candy we’re used to seeing in a visualisation book.  The prose is terse and there’s quite a bit of maths.  But again, it’s filled with analytical visualisation techniques that clearly deliver insights.  Interestingly, for each technique Cleveland works through a series of real data sets, using the technique to analyse the data and drive out insight.  It is perhaps telling how rarely this approach is taken by authors of visualisation books.

A scatterplot matrix from “Visualizing Data”

Basically, the overall learning for me is to read more narrowly focused visualisation books: books from domains where visualisation has been intensively used for driving out insight; where the visualisations have been honed and matured to the point where they demonstrably do work.

VisWeek 2012 – a UX designer’s view of the year’s biggest Visualisation conference (by Mischa Weiss-Lijn)

At RMA we’ve always tended to focus on projects that involve quite a bit of Data and Information Visualisation work (or ‘VisualiZation’ for those on the other side of the pond).  While we’ve become known for delivering to a very high quality in this area, we’ve drawn on our skills as Information and Interaction Designers to create our solutions.  While we’ve all read Tufte, Yau  and many others, we haven’t tended to connect deeply with the Information and Data Visualisation research communities.

So, we decided that one of us should tentatively dip our designerly toes in the rarefied waters of VisWeek 2012, the world’s biggest Visualisation conference, and get a taste for what the larger visualisation community has to offer folks like us; folks who design and build visualisations that people actually use, day in, day out.

And… off I went.  Now that I’m back, here is a subjective view on what a largely research-oriented visualisation conference has to offer those working on designing interactive visualisations for use in real-world settings.

It’s big and getting bigger

Having done my PhD in Info Vis many years ago, what impressed me right off the bat is how much the field has grown.  There are pretty big and well-established groups focused on information visualisation across the world.  Here in Europe, Germany seems to be a particularly bright spot (e.g. the University of Konstanz), while the UK also has quite a bit going on, with hotspots at City University and Oxford, among others.

The field has been getting increasingly fashionable over the last few years, and it seems to be reaching a tipping point where, I believe, interactive visualisations will enter the mainstream of digital experience and thus become ever more relevant for designers of all stripes.

This may be old news to some, but there are a number of forces at work here:

  1. Big data: tech catchphrase of the moment, which basically boils down to data being available in unprecedented volumes and varieties, and accumulating at ever-increasing velocity.
  2. Better tools: back when I was doing my info vis research, you had to build every visualisation pretty much from scratch.  Now there are lots of great tools to get you started (more on those later).
  3. Infographics in the media: the recent surge in the production of infographics has brought the possibilities of visualising data into the public imagination.  Pioneering publications such as the New York Times, the Guardian and Wired, and folk such as David McCandless, have popularised visualisations that convey powerful narratives using data.

All this means that while visualisation work has always been important here at RMA, it’s likely to start becoming something that designers everywhere will be encountering more and more.

More grown up, less core innovation?

While information visualisation has gotten ‘bigger’, the research seems to have changed somewhat in character.  The work seems to be focused more on evaluating and refining existing visualisation techniques and applying them to new and challenging domains.  That’s all good, and important, but the flip side (and this is probably a bit controversial) is that, from what I saw at VisWeek, there seems to be less valuable creative innovation around visualisation techniques themselves.

Let’s dive into each of these topics in turn.

Domain focused visualisation

The research work has diversified into looking in detail at how visualisation can support a host of important new application areas, from the somewhat unapproachable visualisations done for cyber security and bioscience to the more comprehensible work in the medical and finance domains.

Here are a few choice examples from the conference.

Visualizing Memes on Twitter

A lot of people hope to transform the firehose of Twitter activity into something intelligible.  This could have important applications in lots of areas where people stand to gain from a real-time understanding of consumer and industry sentiment, an area of considerable interest in financial markets.  Another area where this could be important is in detecting major events as they happen; think earthquakes, bush fires and the like.

Whisper is a nice piece of work that allows you to search for particular types of event, to see where the discussion, and thus the event, originates, where it flows thereafter, and how people feel about it as time progresses (positive = green, negative = red).

Leadline is more focused on allowing people to detect significant new events as they happen.  The ‘signal strength’ of automatically clustered ‘topics’ is visualised.  You can filter on the person, time range, event magnitude, or location, to focus on an event and understand it.

Medical Visualisation

There was a bunch of work around specialist information visualisations for the medical profession.  As medical providers release more and more open-access data (e.g. http://data.medicare.gov), this is an area that is bound to keep on growing.

MasterPlan is a visualisation tool that was custom-built to help architectural planners, working on the renewal of a rather old Austrian hospital, understand how patients flow between the different wards and units.  The visual analysis afforded by the tool allowed them to see the key patterns and identify a better way to cluster and place things.

The OutFlow visualisation, from IBM, shows the outcomes (good = green, bad = red) of successive treatments administered to a patient (and a very ill one in this case!).

Less core innovation

I stand to be corrected here, but it felt to me that I didn’t see visualisation designs that were applicable outside of their narrow solution area.  There is innovation to be sure, but it’s generally more evolution than revolution.

For example, Facettice was a rather beautiful bit of work for navigating faceted categorical data sets.  While lovely-looking, it’s rather impractical in terms of readability, interaction and real-estate consumption.

Another example would be Contingency Wheel++, a rather impressive and powerful tool for exploring datasets with quite a few columns (say 20) and massive numbers of rows.  While it’s great work in many ways (and, I’m afraid, a little too complex to explain here), I wonder how broadly something like this could be used.

Of course, innovation doesn’t need to be game-changing; small incremental improvements are generally what move us forward.  One bit of work that was particularly nice in this regard was a paper on ‘sketchy rendering’: a new library for the Processing language (see below for more on that) that allows you to render interactive charts and visualisations in a sketchy style; something that could be quite handy for testing early prototypes.

More and better evaluations

It was great to see that there seems to be a much more robust emphasis on evaluating the value of visualisation techniques and applications.  Back when I was doing research in this area (a decade ago) people were busy churning out new ways to visualise things, without generally stopping to check if they were any use.  Nowadays the balance seems to have tipped away from pure invention to more of a focus on application and evaluation.

Automated user testing on the cheap

One particularly interesting development was the very prevalent use of Amazon’s Mechanical Turk service for doing evaluations.  The basic idea is that you set up your evaluation and software experience so that it is totally automated and available online (so, a web application with accompanying forms).  You then recruit ‘users’ from the hordes of ‘Mechanical Turk Workers’ waiting to complete tasks for relatively small amounts of cash.  You can even insist that your workers have certain qualifications (i.e. have passed particular tests), to ensure you get users able to complete your tasks.
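For the curious, here’s roughly what that setup looks like in code: a minimal sketch using Amazon’s present-day boto3 MTurk API (which post-dates the conference).  The evaluation URL, reward, participant count and qualification threshold are all invented for illustration.

```python
import boto3

# Minimal sketch: publish a fully automated, web-hosted evaluation as an
# MTurk task. All values below are illustrative, not from any paper.
mturk = boto3.client("mturk", region_name="us-east-1")

# Point workers at your self-contained online evaluation.
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/vis-evaluation</ExternalURL>
  <FrameHeight>800</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Answer questions about a data visualisation",
    Description="Read a chart and answer a few short questions.",
    Reward="0.50",                        # small cash payment per completion
    MaxAssignments=30,                    # number of participants to recruit
    LifetimeInSeconds=7 * 24 * 3600,      # keep the task live for a week
    AssignmentDurationInSeconds=20 * 60,  # time allowed per participant
    Question=question_xml,
    # Insist on a track record, one crude way to filter worker quality.
    QualificationRequirements=[{
        "QualificationTypeId": "000000000000000000L0",  # % assignments approved
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    }],
)
print(hit["HIT"]["HITId"])
```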

Despite the obvious attractions, there are definitely some issues here, in particular around the limitations of the Mechanical Turk Worker population (mostly US and India based) and the ‘quality’ of response you get.  There was one paper in particular that claimed, in contradiction of previous work, that people performed very poorly on a task (Bayesian probability judgements) despite having visualisation support; I suspect that the Mechanical Turk workers weren’t trying quite as hard as the students typically employed in previous experiments.

Isn’t it obvious?

Some of the evaluation papers had the unfortunate effect of forcing me to crack a wry smile and ask myself:

Why, oh why.  Why bother at all?

Let’s pick one example to illustrate the point.  It was an evaluation of the relative efficiency of radial and cartesian charts.  Paraphrasing the question: does wind direction have an effect on efficiency and, if so, which direction leads to greater efficiency (btw: fewer minutes between wheel events means higher efficiency)?

So what was the answer?

Yes, you guessed it: the cartesian charts took the prize.

While this work was partly about evaluation methodology, from the point of view of the visualisation I’d say… isn’t it obvious?  Basic design sense, or a basic understanding of vision, would tell you that it’s far easier to scan horizontally to compare values.  Do we really need to run empirical evaluations for this sort of thing?
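If you want to see the effect for yourself, here’s a quick Python/matplotlib sketch that draws the same (invented) wind-direction data both ways; comparing values radially really is harder.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical "minutes between wheel events" by wind direction
# (lower = more efficient); the numbers are made up for illustration.
directions = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
minutes = np.array([4.2, 3.1, 2.6, 3.8, 5.0, 4.4, 2.9, 3.5])
angles = np.linspace(0, 2 * np.pi, len(directions), endpoint=False)

fig = plt.figure(figsize=(9, 4))

# Radial version: values are encoded as bar length from the centre,
# so nearby bars share no common baseline to compare against.
ax_polar = fig.add_subplot(1, 2, 1, projection="polar")
ax_polar.bar(angles, minutes, width=0.6)
ax_polar.set_xticks(angles)
ax_polar.set_xticklabels(directions)

# Cartesian version: a shared baseline lets the eye scan and compare directly.
ax_cart = fig.add_subplot(1, 2, 2)
ax_cart.barh(directions, minutes)
ax_cart.set_xlabel("minutes between wheel events")

plt.tight_layout()
plt.show()
```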

There was quite a bit of work at VisWeek where some design training and capability could go a long way towards making the research output a whole lot more useful out in the real world.

Better tools

There are a bunch of great tools out there for designers and creators of visualisations; here’s a quick rundown.

Visual analytics packages

There are a bunch of software packages out there competing for the custom of the emerging discipline of data scientists, as well as less well-versed journalists and business folk who are grappling with crippling amounts of data.  These can be super handy for any designer who is trying to get a handle on a data set before launching into Illustrator.

Perhaps the most accessible is Tableau (PC only, sadly), which allows you to build up relatively complex interactive visualisations semi-automatically.  They have a free version of the software that anyone can use, as long as you don’t mind publishing your visualisations out in the open.

Better development tools

On the technical end, a bunch of languages and frameworks have emerged that can be leveraged to rapidly create performant visualisations.  The two main contenders are Processing and D3.  They are both open source efforts, sired by MIT and Stanford respectively, with very active communities and lots of shared code for you to build on.

Processing is an open source Java-like language, aimed at promoting software literacy within the visual arts, which has been designed for quickly creating visuals and visualisations.  People have used Processing to create a wide array of experiences, from sound sculptures, data art and infographics, to full-on applications.

D3 is a JavaScript library that is more narrowly scoped around creating visualisations on the web.  You can use it to manipulate data and create interactive, animated SVG visualisations.

So… What’s the verdict?

I have to say I learnt a lot by attending VisWeek; it was particularly valuable for me to get a sense of where the field is at and where it’s going.  The more focused sessions, in particular, helped me get useful footholds into areas of work relevant to some of the projects here at RMA.

However, I wonder whether there is a place for a more industry- or practitioner-focused visualisation conference, where the papers and presentations (from researchers and practitioners alike) could focus on innovations in visualisation that are more likely to be adopted outside of a research context.

Another big takeaway is that the field is still sorely lacking integration with the design community, and in particular the visual design community.  The researchers are nearly all computer scientists by training; and it really shows.

Designing for fluency – Part 2 (by Mischa Weiss-Lijn)

Fluency, cognitive choreography and designing better workflows

If you’ve read part 1, you’ll know all about what Fluency is, and how understanding it subtly, but importantly, changes the way you think about usability.  Now I’d like to take this one step further by introducing a couple of other concepts from Kahneman’s book on the fascinating world of modern cognitive psychology, as well as one that I’ve made up all on my own: ‘Cognitive Choreography’.

Let’s start by briefly explaining what I mean by the term cognitive choreography.  The workflows that we are called on to design can place varied and diverse demands on our darling users.  It’s often about going through the motions: form-fillin’ payment details, skimming content, navigating.  But people are often also being asked to make critical decisions and perform complex tasks.  These different types of engagement require very different types of cognition (as we’ll see later).  In this article I’ll go through some relatively new research that points towards ways in which designers can encourage the right type of cognition for the right moment; what I’ve called Cognitive Choreography.

The two Systems: Thinking fast and thinking slow

Kahneman’s book centres on what he calls the ‘two systems’; two modes of thinking, one fast, System 1, and the other slow, System 2.  These have very different capabilities and constraints, and as a result some important implications for design.

System 2 is what does the conscious reasoning; it is the deliberate, rational voice in your head that you like to think is in control.  System 1 is the automatic, largely unconscious, part of your mind where most of the work actually gets done.  Although the reality is inevitably rather more complicated, it’s helpful to adopt Kahneman’s conceit of these systems as two distinct characters.

System 1: The associative machine

  • Fast
  • Effortless
  • Automatic and involuntary
  • Can do countless things in parallel
  • Slow learning
  • Generates our intuitions
  • Driven by the associations of our memories and emotions
  • Uses heuristics (rules of thumb), that are often right, but sometimes very wrong

System 2: The lazy controller

  • Slow
  • Effortful
  • Selective (lazy) and limited in capacity
  • Does things in serial
  • Flexible
  • Uses reason, logic and rationalisation

System 1 effortlessly generates impressions and feelings (“xyz link looks most relevant”) that are the main sources of the explicit beliefs and deliberate choices of System 2 (“I’ll click on xyz link”).  The problem here is that System 1 is error prone and System 2 is lazy.  System 2 can overcome System 1’s shortcomings, by critically examining the intuitions System 1 generates, but often won’t.  I think that as designers we should think about how we can help System 2 spring into action when the moment is right.

Before looking at how we can help the right system spring into action, let’s look at how System 1 can sometimes lead users astray.

Biases: Thinking fast and wrong

System 1 has evolved to be quick and get things mostly right, most of the time, for your average hunter-gatherer in the long-gone Pleistocene (i.e. before we got all civilised, started farming and building urban jungles).  As a result it doesn’t adhere to the tenets of logical and statistical reasoning that underpin what we think of as ‘rational’ thought; it uses heuristics, rough rules of thumb, that are easily computed and generally work.  And that leads to errors, which, in our newfangled not-too-much-hunting-or-gathering-needs-doing kind of world, are more problematic than they used to be.

Here is a brief listing of some of the things that can go wrong.  If you want to really learn the slightly scary truth about how rubbish we (and yes, that includes you) are at making judgements and choices, then I refer you to Kahneman’s book, or perhaps take a look at this scary Wikipedia list of cognitive biases.

System 1 is biased to believe and confirm what it has previously seen, or is initially presented with (luckily for the advertising industry).  So it tends to be:

  1. Overconfident in beliefs based on small amounts of evidence (“The site would be better if it was purple.  My wife said so.”).
  2. Very vulnerable to framing effects (“90% fat free” vs “10% fat”)
  3. Blind to base-rates (i.e. a thing’s general prevalence).  Insurance sales and the tabloid press play off this all the time; it is the gravity of the event that registers, while the fact that it’s very, very unlikely doesn’t have nearly as much impact as it should.  So, for example, you may be tempted to insure your brand new fridge against breakdown in its first year, because you’re so dependent on having it work, even though it’s extremely unlikely that anything will go wrong (the rough sketch below makes the arithmetic plain).
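Here’s that fridge example as a quick expected-value sketch (all numbers invented for illustration); it’s exactly the comparison System 1 never makes:

```python
# A rough expected-value sketch of the fridge insurance example.
# All numbers are hypothetical, for illustration only.
p_breakdown = 0.02   # base rate: chance the new fridge fails in year one
repair_cost = 300.0  # what you'd pay if it did fail
premium = 40.0       # what the breakdown cover costs

expected_loss = p_breakdown * repair_cost  # 0.02 * 300 = 6.0
print(f"Expected loss: {expected_loss:.2f} vs premium: {premium:.2f}")
# The cover costs several times the risk it removes; System 1 weighs the
# vividness of a broken fridge, not the base rate.
```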

This is because System 1:

  1. Focuses on the evidence presented and ignores what isn’t
  2. Neglects ambiguity and suppresses doubt
  3. Exaggerates the emotional consistency of what’s new with what’s already known

System 1 infers and invents causes, even when something was just down to chance.  So, for example, in The Black Swan Nassim Taleb relates that when bond prices initially rose the day Saddam Hussein was captured, Bloomberg ran with “Treasuries rise: Hussein capture may not curb terrorism”.  Then, half an hour later, the prices fell, and the headline changed: “Treasuries fall: Hussein capture boosts allure of risky assets”.  The same event can’t explain bond prices going both up and down; but because it was the major event of the day, System 1 automatically creates a causal narrative, satisfying our need for coherence.

System 1 will dodge a difficult question and instead substitute the answer to an easier one.  So, for example, in predicting the future performance of a firm, one instinctively relies on its past performance.  When assessing the strength of a candidate, we instinctively rely on whether we liked them or not.

System 1 does badly with sums, because it only deals with typical exemplars, or averages.  So, for example, when researchers asked how much people would be willing to pay to save either 2,000, 20,000 or 200,000 birds after an oil disaster, people suggested very similar sums of money.  It wasn’t the number of birds that was driving them, but the exemplar: the image of a bird soaked in oil.  Similarly, with visual displays people can very easily tell you the average length of a bunch of visual elements, but not the sum of their lengths.

Let’s not forget, though, that while it has its failings, System 1 does a pretty impressive job of things most of the time, for most people.  In fact System 1 is crucial to the kind of deep creativity that we designers pride ourselves on.  It’s what helps you get things done fast, and well.  It’s what results from practice and is the basis of most forms of expertise; you wouldn’t want to drive your car without it!

As a result, a lot of the time it’s appropriate, and indeed better, for System 2 to put its lazy feet up and give System 1 the reins.

So what’s the overall takeaway for designers?  Well, one is that if users are at a point where it’s important that they critically inspect the facts, and overcome their preconceptions and first impressions, then System 2 needs to be on the job.  Otherwise, we can leave System 1 in the driving seat.

So how can designers engage in Cognitive Choreography, and help ensure users have the right System in the driving seat at the right time?  Well, one approach is to use Fluency.

Cognitive Choreography

Fluency and switching users between Systems

I would guess that there are many things designers can do to knowingly encourage users to engage the right cognitive faculties for the task at hand; but one interesting and counterintuitive approach is to use Fluency.  That’s what I’m going to focus on here.

To recap from my previous post on the subject, Fluency is your brain’s intuitive sense of how hard it is being asked to work on something.  Lots of things will impact your sense of fluency, as illustrated in the graphic below.

As well as being the key to really understanding what is going on behind users’ perceptions of usability and beauty, it just so happens that we can use the Fluency of our designs to engage System 2.  From an evolutionary perspective, the reason we have this intuitive sense of Fluency is to have an alarm bell that will wake System 2 up when things aren’t going smoothly and we need more careful, bespoke thought.  When an experience is Disfluent and creates what Kahneman calls “Cognitive Strain”, System 2 is mobilised.  Thus we, as designers, can actively engage, and disengage, System 2, by controlling the many levers we have at our disposal to change the Fluency of a UI.

So, in case you’re not convinced, here’s one of the experiments that demonstrates this sort of effect in action.  A bunch of Princeton students were given a set of 3 short brain-twisting problems to solve.  These problems were designed to have an answer that would seem obvious to System 1, but was in fact wrong.  To get the right answer, you’d need to have gotten System 2 in the game.  Here’s an example:

If it takes 5 machines, 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

100 minutes OR 5 minutes

When students were shown the problems presented in an ordinary, legible font, 90% of them got at least one problem wrong.  When the problems were presented in a small, poorly contrasted font, only 35% of them did.  Yep, that’s right: making the font less legible cut the error rate from 90% to 35%.  (btw, the answer was ‘5’: each machine makes one widget in 5 minutes, so 100 machines make 100 widgets in those same 5 minutes.)

Low Fluency creates cognitive strain, which encourages the user to activate System 2, which thinks things through and gets to the right answer.  High Fluency does the opposite, encouraging the user to leave System 1 in control.

So the design implication here is that when you come to design a portion of your flow where it’s critical that System 2 be fully engaged, it may be time to purposely create a low-Fluency experience, using the array of tools at your disposal (e.g. font legibility, contrast, layout, motion, copy etc).

When to use Fluency and Cognitive Choreography

So, we have a bunch of ways to make an experience more, or less, Fluent; now we need to understand when to use them to encourage users to apply the right kind of cognition to the task at hand.

Perhaps the first, and most important, thing to say is that you will need to be sparing and purposeful with low-Fluency UIs.  Low Fluency is by its nature unpleasant and, on top of that, using System 2 takes effort, so it’s no fun.

However, the science of System 1’s failings gives us some clear pointers as to where we should consider putting the brakes on Fluency.

You should consider engaging System 2, with a low Fluency UI when:

  1. Critical, high-risk decisions are being made
  2. The user is engaged in a task that you know has features which will lead System 1 astray, so the user will be asked to:
     1. Draw inferences based on very small sample sizes
     2. Draw inferences based on incomplete information; e.g. they are given a small part of the story, or no information about base rates (i.e. the general prevalence of a thing)
     3. Make decisions based on potentially framed and biased messages from an interested party
     4. Mentally work with sums, rather than averages or prototypes

Wrapping it up

Most of the time you will want to maximise Fluency, and thus usability, encouraging users to coast along primarily on the rapid, intuitive outputs of the tireless System 1.  But you can reduce critical errors, and increase the quality of significant decisions and judgements, if you selectively lower the Fluency of your UI to make sure the lazy, but smart, System 2 is fully engaged.

Of course, there is a balance to be struck here; as designers we will be hard-pressed to make the experience as disfluent as psychologists can when doing experiments.  In fact, it would be super valuable if people in the HCI community would take a closer look at this, and measure the effectiveness of lowering Fluency to within the bounds of commercial acceptability.

Another interesting tension here is that many of us designers are working on systems primarily aimed at realising and increasing sales.  So even if users are making critical, and expensive, decisions based on incomplete information, it’s in your client’s interest that System 2 stays as lazy and disengaged as possible.  However, there are countless digital experiences that support productive and often critical processes or decision making.  That’s what we focus on here at RMA, and that’s where doing a little Cognitive Choreography could come in handy.

Designing for fluency – Part 1 (by Mischa Weiss-Lijn)

From the new psychology of Fluency to usability, beauty and beyond

Having had a background in (proper) psychology before emerging as a designer, I’ve often been dubious about the value of this fascinating ‘science of the mind’ for the practice of design.  Every now and then you’ll hear some strained reference to Fitts’s law from a recent MSc graduate; but let’s face it, in our day-to-day, very little psychology is actually brought to bear.

So, it came as a surprise to find a treasure trove of design insights when reading the excellent “Thinking, Fast and Slow” by the Nobel-prize-winning psychologist Daniel Kahneman.  Let me be clear; it’s not a book about design, it’s a pretty hardcore psychology book.  It’s about how we think and reason; not how we would like to think we think, but how we actually think.  Warts and all.  My hunch is that really understanding that, and the warts especially, could be a valuable tool for designers.

Fluency

One example I’d like to pick out is Kahneman’s treatment of a phenomenon generally termed ‘Processing Fluency’, which he calls ‘Cognitive Ease’.  Like it or not, and more ‘usability’-minded designers may well not, the perception of usability, trustworthiness and beauty is partially dependent on a myriad of superficially unrelated factors that drive the fluency of cognitive processing (don’t worry, I’m about to explain).

Yes, that’s right.  You can do things to make people think a design is more trustworthy or beautiful, without actually making it more trustworthy or beautiful.  At all.

So let me explain.

As Kahneman explains it, our brain has a number of built-in dials (you could think of them as a bunch of sixth senses) that are constantly, effortlessly and unconsciously updating us on (evolutionarily) important aspects of our environment.  So, for example: “What’s the current threat level?”, “Is anything new going on?”.  One of these is Processing Fluency, which is basically a measure of how hard your poor brain is being asked to work.  Its basic raison d’être is to let you know when you need to redirect attention or make more effort.  However, interestingly for us designers, it ends up having a much broader impact on the way we evaluate things and make decisions.  Anything that increases fluency (and there are lots of things that do) will positively bias many, and perhaps all, types of judgement.

This is a somewhat scarily broad phenomenon.  Who would have thought that:

Rhyming statements seem truer than equivalent non-rhyming ones

Shares with more easily pronounced names outperform on the stock market

Text written with simpler words is judged to have been written by a more intelligent author

To usability, beauty and beyond

But let’s focus on how this relates to design.

It turns out that anything that increases fluency will positively affect many aspects of the way people perceive, judge, and presumably experience, something.  Fluency will make people trust something more, make it feel more familiar, more effortless, more aesthetically pleasing, more valuable; fluency will even make people feel more confident in their own ability to engage with the experience.  And these effects can all potentially be brought to bear independently of, and on top of, the actual content of the experience.

What’s powerful here is that there are lots of ways in which you can increase the fluency of your experience; ‘manipulations’ in the parlance of psychologists.  I’ve tried to summarise what I’ve been able to glean from the psychology literature around this in the infographic below.  The thing to remember is that any of these manipulations will positively impact people’s perceptions of your experience.

You can make your copy more fluent, your visual design more fluent, and your flows more fluent.

Making your copy more fluent

Let’s start with copy.  A bunch of the things we normally think of as best practice, such as using simple straightforward language and uncomplicated syntax, increase fluency.  It’s interesting to realise that such simple things could end up impacting how much people will trust the experience!

Looking at copy from the perspective of fluency gives weight to more flippant techniques, such as the use of rhyme and alliteration.  It guides us to think carefully about how easy copy is to say out loud.  All these things improve what is called ‘Phonological Fluency’, i.e. how easy something is to say; how easily it rolls off the tongue.  If it’s easier to say, it’s easier to think.

Then consider ‘Orthographic Fluency’, i.e. how easily one can translate written text into spoken words and meaning.  This guides us to avoid creative spellings (e.g. “Tumblr 4ever”).  It gives a clear rationale for always using the most direct, succinct and approachable notation available (e.g. “1” not “one”, “%” not “percent”).
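To make that concrete, here’s a toy sketch (my own illustration, not a technique from the fluency literature) of a copy pass that prefers the most direct notation:

```python
import re

# A toy "orthographic fluency" pass over UI copy: swap spelled-out
# notation for the most direct form. Crude by design; a human still edits.
REWRITES = [
    (r"\bper\s?cent\b", "%"),
    (r"\bone\b", "1"),
    (r"\btwo\b", "2"),
    (r"\bthree\b", "3"),
]

def make_copy_fluent(text: str) -> str:
    for pattern, replacement in REWRITES:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

print(make_copy_fluent("Step two of three"))  # -> "Step 2 of 3"
```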

Making your visual designs more fluent

Font designers will be happy to hear that there have been lots (and lots) of experiments showing the impact of the clarity and readability of a font on fluency, with all the manifold benefits this brings.  Readability is not just about readability; it’s about fluency.

Font selection is one thing that contributes towards ‘Physical Perceptual Fluency’, and psychologists have shown that having a good level of contrast does too (for fonts in particular, but presumably it will be just as important for UI elements).  Of course, that’s not where it ends; even if psychologists haven’t really looked much deeper, many of the principles behind good, functional visual design, such as leveraging Gestalt grouping principles, must surely drive this Physical Perceptual Fluency.

There’s also been a bunch of work looking at how the length of time people have to see and absorb a display impacts fluency; they call it Temporal Perceptual Fluency.  The less time, the less fluent.  This probably doesn’t have too much impact on most design applications unless you are presenting stuff for less than a second.  But my hunch is that judicious use of motion design will also contribute to this type of fluency.

Making your flows more fluent

There has been a bunch of work looking at the role memory plays in fluency.

Most obviously, using common UI patterns will create a more fluent experience by virtue of their familiarity.  Similarly, when experiences are designed to be easier to learn and remember, they are going to be more fluent.  But you could have guessed that.

Something you might not have guessed is that you can use ‘Priming’ to make an experience more fluent.  Priming is a psychological technique that basically boils down to exposing people to related stimuli before showing them the experience you’re interested in.  This activates the relevant areas of your brain, making it easier to process the experience once it comes along.  Is this something we could use as designers?  Perhaps we can.  For example, we could sequence content and interactions to prime parts of the experience that we expect to be challenging.

What else?

While psychologists have already discovered lots of ways to manipulate fluency, I’d guess that there are many more waiting to be discovered.  Psychologists haven’t been thinking about design, so they’ve not really been looking in all the right places.  In fact, I’ve taken a couple of liberties and added some obvious candidates to my graphic which are not (yet) grounded in empirical evidence (motion design and visual hierarchy).  One area that doesn’t seem to have been explored at all is how to make interactions more fluent.  And there is surely much more we can do to create fluency in user journeys and IA.  Perhaps someone should look into it!

And so… what?

Is this just dressing up our time-honoured notions of usability in fancy new scientific jargon?  Or does it give us a genuinely new and useful conceptual tool for creating better experiences?  After all, we’ve had related concepts before, for example Cooper’s ‘Cognitive Friction’ in his classic book The Inmates Are Running the Asylum.  Making experiences as easy and frictionless as possible is at the heart of all good digital design techniques.

To be honest, I’ve only just started thinking about this, and so haven’t yet been able to put it into practice.  But my hunch is that there are a couple of key things the concept of fluency offers which are interesting, and potentially useful.  Firstly, there is the evidence that a broader set of qualities goes into making an experience frictionless, or fluent, than we’ve traditionally allowed for.  Secondly, and more importantly, there is the discovery that any and all of these impact the full range of people’s perception and memory of an experience.  We want to create experiences that people feel good about.  Depending on the experience, we want our users to come away persuaded, happy and confident.  An understanding of how to create fluency gives us a new way of thinking about how to get the design outcomes we’re after.

Further reading for the curious

Part 2 of this fine blog: Fluency, cognitive choreography and designing better workflows, which looks at how we should be using our grip on Fluency to help users think in the right way, depending on the experiences they are engaged in.

Alter, Adam L., and Daniel M. Oppenheimer. “Uniting the Tribes of Fluency to Form a Metacognitive Nation.” Personality and Social Psychology Review 13.3 (2009): 219–235.

Daniel Kahneman “Thinking, Fast and Slow” 2011


Musical interactions – part 2 (by Jason Mesut)

I was really (pleasantly) surprised to get so much audience participation and questioning after my talk at the Digital Brand Jam event at Brunel – especially after I shot down one of the questions around the music industry and how easy it is to distribute music these days.

The questions were incredibly pertinent to my professional interests so I thought I’d share some of them (and my responses). I got a lot of interest in my thoughts on the following areas:

Should it be digital-with-physical, or physical-with-digital?

I experienced a lot of turmoil deciding which way round this should be (I was still wrangling with it on the train journey to the event).  I concluded that the digital stuff has become the brains, and the physical aspects have become the interface.  So, you could keep the same interface parts for longer periods of time and get familiar with how they feel.

How does expertise come into the frame, and how is it considered in the design of tools, services and systems?

I have paraphrased here but the questions were around designing something that could be used for the first time by novices or something that would be used regularly by experts. There’s a whole series of blog posts in me about this one, but my answer was along these lines…

When you design for a novice user using cookie-cutter, ‘clip-art’ UI paradigms to make something seem ‘intuitive’ (my most hated word in UX), you can actually disempower the expert user who needs more flexibility and control.  Not many systems, products or services manage to cater for both ends of the user spectrum, and this makes it harder for a novice user to rapidly become an expert user.

Investment Bankers, for example, compete over their knowledge of Bloomberg short codes. Clinical consultants like to demonstrate their expertise and expose the complexity that underlies their practice, rather than dumbing it down.


Traders pride themselves on their expert use of short codes to quickly call up financial market data in Bloomberg

At the IxDA London Movie Night (movie clips coming soon), we discussed the tension between designing for novice vs. expert users.  It was generally felt that when you design for one side, you might detract from the other.  However, I think there are examples where this hasn’t necessarily been the case, notably console games (most famously Nintendo’s flagship Mario series) and the Tenori-On, which, after about 10-15 minutes of guided play, allows you to make beautiful music and grow your skill level very quickly.

How does motivation affect interaction?


Not my legs…

I explained my personal motivation to buy technology (well, gadgets) as making up for a lack of skill or willingness to do something.  I mentioned my interest in buying the excellent Fiskars weed puller and how it got me to do my first bit of weeding for about 15 years.  It’s a fantastic product – easy, satisfying, and incredibly empowering, like a tool should be.  I talked about how games help to motivate people by easing them in through play and increasing the level of challenge until addictive interaction and obsession begin.

How can sound be used in design and interaction?

One of my favourite subjects.  I believe that sound is a relatively untouched area in design (some good thoughts around it here).  We focus so much on physical forms, the way something might look, or even how it smells, but much less on how it sounds (car door and electric-car engine sonic aesthetics aside).

I mentioned my interest in using sound more effectively within investment banking scenarios, as part of better alert systems or to allow people to concentrate better in times of stress.  I also made reference to a friend’s research into info sonification, where messages, alerts and feedback that would normally be visual are made audible instead, using another channel of communication that could augment our lives better than our current obsession with looking at screens.


Musical Interactions – part 1 (by Jason Mesut)

I’ve known for a while about the Strategy and Innovation MA that John Boult runs at Brunel, and I collaborate with John as part of Big Potatoes.  He recently asked me to come and talk at the Digital Brand Jam event at Brunel (30th of May 2012).

When he learned that I was deeply interested in musical technology, John suggested it might be relevant to some conversations he was having around experience design and brand.  Either he was incredibly insightful and in tune with some of my raw thinking, or he was just looking for some random spice.  Either way, I thought it would be good to do a rapid re-hash of one of my music tech interaction design talks.  I did the first one at MEX in November 2010, as a 20-minute rollercoaster ride through physical music tech interaction, and extended it to a 1-hour geekfest at London IA in January 2012.

Getting even 20 minutes down to 10 minutes was always going to be hard for me – I have too much to say on most subjects! It wasn’t made any easier by having to juggle this with exciting new business activities, large account management challenges, my ever-tiring recruiting mission and the need to solidify a plan for my UX practice with my team.

Even so, I gave it a shot, using some of the videos and imagery I had dug out previously, and giving thought to a new framework of discussion that seemed worth sharing.

My main premises:

  • The ways in which we interact with technology are always changing, but some things always stay the same: our physiology, our need to show off, and our need to control multiple parameters at once
  • Music technology innovates and lasts in this space, standing the test of time and forging new paths

I showed how this is demonstrated through four areas of transition for interaction design, relating to analogue, physical and digital electronics and systems design.

Physical Analogue: Electronics with fluid paths

Technics 1210 turntable – the DJ’s instrument.


one of my most prized possessions

The theremin – an early example of free-space gestural interaction, exhibiting the same imprecision as Kinect, Wii and similar free-space gestural interaction systems.

The Roland TB-303, without which there would be no acid house or techno.  After 20+ years these units sell for £1500+ despite their flaws.  They have lasted well and retain a set of avid fans.

Physical-to-Digital


The cumbersome Fairlight CMI (Computer Musical Instrument)

In the move from tape machines to Fairlight sequencing and sampling, cumbersome workflows and sonic flaws were replaced by more convenient, but arguably colder, interactions and sounds.

As we move to fully software-based powerful Digital Audio Workstations (DAWs) like Ableton Live or Logic, we can use our laptops to make orchestral masterpieces or ear-piercing dubstep soundclashes wherever and whenever we like.

Digital-in-Physical


TENORI-ON – A 16×16 matrix of LED switches creates a “visible music” interface

Moving from Technics 1210 turntables to CDJs meant that DJs could take their whole collection with them without fear of luggage handlers at Heathrow nicking their most prized vinyl.

When synthesisers began to hide their power behind small-screen menus with limited controls, others re-exposed the innards in software editors controlled by a mouse.  These days a VST software editor is almost a de facto complement to the hardware synthesiser, allowing more flexibility in programming.

As new instruments like the Tenori-On and the Monome (see my talk from a few years back, here) emerged, they brought with them newer and more flexible ways to compose and create music, without having to know much music theory or go through the rigmarole of learning a new instrument.

At the same time, new products like the Teenage Engineering OP-1 combine aircraft-quality industrial design and engineering with powerful digital synthesis and sequencing, to give us a powerful audio workstation in a unit smaller than half a MacBook Air.

Digital-to-Physical


ARC music controller – simple, elegant, and expensive

Synthesisers are starting to expose hardware controls again, but these controllers manipulate digital electronic signal paths rather than analogue ones.  We are empowered by this hands-on control.

Increasingly we are seeing the use of cheaper and dumber control surfaces and devices. It’s not uncommon to see banks of linear faders or rotaries (knobs) that can be easily programmed to control a number of parameters on other physical devices or within musical software. You can even get plugins for controlling Adobe’s Lightroom these days.

And people are making their own controllers, including the biggest knob I have ever seen, and some of the most gorgeous and polished knobs ever to be created.

Digital-with-Physical

Scratch DJs always struggled with the idea of CDJs; they just don’t have the tactility of vinyl on turntables.  Serato, Final Scratch and Traktor have been working hard to fill this gap by using real vinyl with time-coded high-pitch signals to control MP3s.  Mind-blowingly cool as this is (even 5+ years on), you need bulky turntables to play with it.

Recently, Numark created some CDJs with real vinyl platters, to better reflect the tactility of vinyl but with the power and convenience of hooking up to an MP3 collection on a laptop, or CDs with MP3s or higher-quality recordings.

Meanwhile, increasing numbers of cheaper and dumber controllers are helping to better control the digital brains behind the glass screens of iPads and laptops.  For a mind-blowing example of this, check out Korg’s iMS-20 iPad app, which uses the Korg MS20ic MIDI controller keyboard with patch cables.  Just move the patch cables on the physical device and it will connect them on the iPad.  I bought one straight away on eBay when I saw this.

Then you get things like the Reactable, where the digital brain complements physical objects, allowing collaborative Simultaneous Multi-User Interface (SMUI) interaction.

Expanding this topic for forthcoming IxDA London

I am looking to curate an event pretty soon for IxDA London on Music Tech interaction design. I believe this subject needs some full-on airing. I am, however, very biased.

In the meanwhile, please check out some of the presentations and videos around this subject: