Decent User Experience – A New Human Right?

We digital designers, of all hues, generally like to tell ourselves that we’re doing more than simply earning a wage by building wireframes and pushing pixels. We’re making people’s lives better.

We also tend to believe that the businesses that ultimately foot our bills take a more prosaic view: it’s all about the brand and the bottom line.

I think that there is something more profound going on here; I’m arguing that we should start to see ourselves as playing a small, but active, part in one of history’s grand narratives.  A big claim, I know, for a UX blog post.  But bear with me; I guarantee you’ll be rewarded, and hopefully even convinced.

I’ve recently been reading Steven Pinker’s excellent book on the little-known fact that we’re currently enjoying the results of a centuries-long decline in violence – in all its forms: in crime, in war, in murder, in abuse, in discrimination.  In fact, in pretty much any form of oppression and violation of another human being’s experience of life.  The book goes into the details of this happy decline in countless different areas – here’s an example, a chart looking at the massive decline of murder rates in Europe in comparison to those of non-state societies.


Homicide rates in Western Europe, 1300-2000 and in nonstate societies.  From p459 of The Better Angels of Our Nature

The past was indeed a very different country – and I’d much rather be living in the here and now.

In the dim and, sadly, not so distant past, life was cheap, authoritarian hierarchies were unassailable and invulnerable, divisions between nations and social groups were impenetrable and, perhaps most importantly, liberal Enlightenment philosophy had not yet infected our psyche with its celebration of the ‘individual’.  Today we live in a world of ‘individualism’, a world suffused by information bearing other people’s perspectives; a world in which it is ever harder to be completely blind and unsympathetic to the experiences, and suffering, of others.

But what does this have to do with software user experiences? I hear you say.  Well, one of the driving forces behind this incredible change in the fabric of our culture over the past half-century is what Pinker refers to as the ‘Rights Revolutions’.  I believe that the value we place on good user experience should be seen in the context of these revolutions.

I am asking the question: is a decent user experience a new human right?

The Rights Revolutions

So, what are these ‘Rights Revolutions’? And what do they have to do with user experience?  Well, to quote Pinker:

“The efforts to stigmatize, and in many cases criminalize, temptations to violence have been advanced in a cascade of campaigns for ‘rights’ – civil rights, women’s rights, children’s rights, gay rights and animal rights”
From p458 of The Better Angels of Our Nature


Proportion of books mentioning rights-related phrases, from 1945 onwards. From p459 of The Better Angels of Our Nature

This cascade is nicely illustrated by the graph above, which shows the sequence in which these phrases have become popular.  There’s a momentum behind these changes: each rights revolution raises the bar as to what is acceptable, setting the stage for the next move forward.  If it’s not right to racially discriminate, how can we tolerate sexual discrimination?  If we can no longer mistreat children, why should we be able to mistreat animals?

These changes, Pinker suggests, are primarily driven by “the technologies that made ideas and people increasingly mobile”.  As people were increasingly brought into contact with other people’s perspectives – through fiction, through travel, through the sharing of ideas – it became increasingly hard to maintain that other people’s experiences didn’t matter.   We increasingly came to value the rights of the individual and to sympathise with their experience. That has driven, and continues to drive, important and largely positive changes to the world we live in.

The right to a decent user experience?

Nothing has been as powerful as the web in driving large-scale improvements to the quality of user experience.  Businesses got the ability to make user experience changes relatively rapidly and to see their big impact on bottom-line metrics.  Perhaps more importantly, users got the ability to easily switch over to whichever site offered them the best experience for getting what they wanted.

This commercial dynamic has, sadly, been far less successful at driving substantial improvement to the user experience of specialist business software – an area that RMA specialises in.  Business software is still generally sold through bulk licensing arrangements and bought by IT managers or business folk more interested in ticking boxes than in the merits of good design, or the experiences of users.

I think this is something that needs to, and is about to, change.  Just because someone isn’t immediately visible as a monetizable metric on a web analytics dashboard doesn’t mean they don’t matter.  People who work at a job day-in, day-out to get important things done matter.  Their experience is important; perhaps more important than that of the hordes of debt-laden e-shoppers.

Let’s face it, badly designed software can make people’s lives pretty miserable: unnecessarily hard-to-learn tools that make you feel stupid, inefficient experiences that frustratingly waste time, confused designs that hinder rather than help you get things done, ugly jarring experiences that make life just that little bit greyer.

Thanks to the web and mobile app ecosystems, people are almost universally exposed to good user experience – they know what they are missing in the software that plagues their work lives.  At RMA we are witnessing this rising tide of intolerance to bad design in enterprise software. People are rising up against it; we have seen them demanding more from their bosses, their IT departments and their service providers.  They are starting to sidestep restrictive IT policies by bringing in their own devices (BYOD) and using decently designed cloud-based offerings.  B2B software houses are starting to increase their investments in UX and visual design. The tide is starting to turn.

There are many powerful, financially driven arguments for investing in better user experiences; they increase productivity while decreasing support and training costs.  But I believe that we can perhaps help the tide turn faster if we start to reframe the need for change.

We need to help shape and nurture an awareness of the right to a decent user experience in all those millions of business software users. And, perhaps more importantly, those of us who work on business products need to build a culture in which it is simply not acceptable to ship bad user experiences.  Just as it isn’t acceptable to discriminate against employees, or to cheat your customers, it shouldn’t be acceptable to subject people to bad user experiences.

For better or worse, most of us live much of our lives in software; just as life is precious, so are experiences.

A better kind of Visualisation Book (By Mischa Weiss-Lijn)

In a recent blog post, the invariably interesting Robert Kosara points out that:

“After you’ve seen one visualization book, you’ve seen them all.“

And, having read quite a few, I must say: man, is he right. They tend to be big on beautiful full-colour reproductions, and short on insights and techniques.  You generally emerge from reading a visualisation book none the wiser on how actually to conceive, design, or implement compelling, and more importantly, useful visualisations.  Okay, in most visualisation books you can pull out a few useful tips and find some inspirational designs – but generally, it’s pretty slim pickings.

Recently though, I’ve hit a rather rich vein, and I’m hoping I can find more of the same.  At RMA we’ve been doing some fascinating work with the insurance industry on visualising natural catastrophe risks (think hurricanes, floods and the like) and exposure to insurance liabilities.  This has led me away from the usual bevy of beautifully illustrated generalist visualisation books to ostensibly drier specialist visualisation books.

Here are a couple – catchy titles, fancy coffee-table-ready cover designs, ’n’ all:

Thematic Cartography and Geovisualization – Terry A. Slocum et al.

Visualizing Data – William S. Cleveland

The “Thematic Cartography and Geovisualization” book is a breath of fresh air.  It’s poorly designed and has hardly any nice-looking visualisations in it.  But once you actually start reading it you find a treasure trove of concretely useful, mature and validated visualisation techniques.  It serves as a stark contrast to the rest of the data and information visualisation field.  Here is a textbook with a solid chapter on each of the key techniques for visualising data on maps – e.g. the choropleth. It goes into detail about how and when the techniques should be used, how the data should be prepared, their inherent weaknesses, and the pros and cons of the various workarounds.  This is invaluable information, condensing the collective knowledge of the GIS community into an accessible and re-usable form.  If only there were more textbooks of this kind for non-geo visualisations!

An example of a choropleth visualisation of an insurer’s ‘Aggregate’ exposure
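To make the technique concrete, here’s a minimal sketch of what a choropleth amounts to in code, using a recent version of D3.  The GeoJSON file, the ‘exposure’ property and the five-class colour scheme are illustrative stand-ins of my own, not anything taken from the book or from our client work.

```js
// Minimal choropleth sketch (illustrative only). Assumes a GeoJSON file of
// regions, each with a numeric `exposure` property to be mapped to colour.
import * as d3 from "d3";

const width = 800, height = 500;

const svg = d3.select("body").append("svg")
  .attr("width", width)
  .attr("height", height);

d3.json("regions.geojson").then(geo => {
  // Choropleths are usually classed rather than continuous: bin the values
  // into a small number of colour classes so differences stay legible.
  const values = geo.features.map(f => f.properties.exposure);
  const colour = d3.scaleQuantize()
    .domain(d3.extent(values))
    .range(d3.schemeBlues[5]);

  // Fit a projection to the data and draw one filled path per region.
  const projection = d3.geoMercator().fitSize([width, height], geo);
  const path = d3.geoPath(projection);

  svg.selectAll("path")
    .data(geo.features)
    .enter().append("path")
    .attr("d", path)
    .attr("fill", d => colour(d.properties.exposure))
    .attr("stroke", "#fff");
});
```

The interesting design decisions – how the values are normalised and how the class breaks are chosen – are exactly the kind of preparation questions the book works through in detail.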

“Visualizing Data” by William Cleveland is actually a rather famous, and oft-cited, book.  But I’ll wager that not many people have actually read it.  Rather like the last book, it’s a very focused, subject-matter-specific book.  It’s about visualisation techniques for statistical analysis; characterising distributions and the relationships between the properties of a system.

Again, while elegantly minimal, the visualisations certainly aren’t the eye-candy we’re used to seeing in a visualisation book.  The prose is terse and there’s quite a bit of maths.  But again, it’s filled with analytical visualisation techniques that clearly deliver insights.  Interestingly, for each technique Cleveland works through a series of interesting data sets, using the techniques to analyse the data and drive out insight.  It is perhaps telling how rarely this approach is taken by authors of visualisation books.

A scatterplot matrix from “Visualizing Data”

Basically, the overall learning for me is to read more narrowly focused visualisation books – books from domains where visualisation has been intensively used for driving out insight; where the visualisations have been honed and matured to the point where they demonstrably do work.

VisWeek 2012 – a UX designer’s view of the year’s biggest Visualisation conference (By Mischa Weiss-Lijn)

At RMA we’ve always tended to focus on projects that involve quite a bit of Data and Information Visualisation work (or ‘VisualiZation’ for those on the other side of the pond).  While we’ve become known for delivering to a very high quality in this area, we’ve drawn on our skills as Information and Interaction Designers to create our solutions.  And while we’ve all read Tufte, Yau and many others, we haven’t tended to connect deeply with the Information and Data Visualisation research communities.

So, we decided that one of us should tentatively dip our designerly toes in the rarefied waters of VisWeek 2012, the world’s biggest Visualisation conference, and get a taste for what the larger visualisation community has to offer folks like us; folks that design and build visualisations that people actually use day-in, day-out.

And off I went. Now that I’m back, here is a subjective view on what a largely research-oriented visualisation conference has to offer those working on designing interactive visualisations for use in real-world settings.

It’s big and getting bigger

Having done my PhD in Info Vis many years ago, what impressed me right off the bat is how much the field has grown.  There are pretty big and well-established groups focused on information visualisation across the world.  Here in Europe, Germany seems to be a particularly bright spot (e.g. the University of Konstanz), while the UK also has quite a bit going on, with hotspots at City University and Oxford, among others.

While the field has been getting increasingly fashionable over the last few years, it seems to be reaching a tipping point, where I believe interactive visualisations will enter the mainstream of digital experience and thus become ever more relevant for designers of all stripes.

This may be old news to some, but there are a number of forces at work here:

  1. Big data: the tech catchphrase of the moment, which basically boils down to data being available in unprecedented volumes and varieties, and accumulating at ever-increasing velocities.
  2. Better tools: back when I was doing my info vis research, you had to build every visualisation pretty much from scratch. Now there are lots of great tools to get you started (more on those later).
  3. Infographics in the media: the recent surge in the production of infographics has brought the possibilities of visualising data to the public imagination.  Pioneering publications such as the New York Times, the Guardian and Wired, and folk such as David McCandless, have popularised visualisations that convey powerful narratives using data.

All this means that while visualisation work has always been important here at RMA, it’s likely to start becoming something that designers everywhere will be encountering more and more.

More grown up, less core innovation?

While information visualisation has gotten ‘bigger’, the research seems to have changed somewhat in character.  The work seems to be focused more on evaluating and refining existing visualisation techniques and applying them to new and challenging domains.  That’s all good, and important, but the flip side (and this is probably a bit controversial) is that, from what I saw at VisWeek, there seems to be less valuable creative innovation around visualisation techniques.

Let’s dive into each of these topics in turn.

Domain focused visualisation

The research work has diversified into looking in detail at how visualisation can support a host of important new application areas, from the somewhat unapproachable visualisations done for cyber security and bioscience to the somewhat more comprehensible work in the medical and finance domains.

Here are a few choice examples from the conference.

Visualizing Memes on Twitter

A lot of people hope to transform the firehose of Twitter activity into something intelligible.  This could have important applications in lots of areas where people stand to gain from a real-time understanding of consumer and industry sentiment – an area of considerable interest in financial markets.   Another area where this could be important is in being able to detect major events as they happen; think earthquakes, bush fires and the like.

Whisper is a nice piece of work that allows you to search for particular types of event, to see where the discussion, and thus the event, originates, where the flow goes thereafter and how people feel about it as time progresses (positive = green, negative = red).

Leadline is more focused on allowing people to detect significant new events as they happen.  The ‘signal strength’ of automatically clustered ‘topics’ is visualised.  You can filter on the person, time range, event magnitude or location, to focus on an event and understand it.

Medical Visualisation

There was a bunch of work around specialist information visualisations for the medical profession.  As medical providers release more and more open-access data (e.g. http://data.medicare.gov), this is an area that is bound to keep on growing.

MasterPlan is a visualisation tool that was custom-built to help architectural planners, working on the renewal of a rather old Austrian hospital, understand how patients flow between the different wards and units.  The visual analysis afforded by the tool allowed them to see the key patterns and identify a better way to cluster and place things.

The OutFlow visualisation, from IBM, shows the outcomes (good = green, bad = red) of successive treatments administered to a patient (and a very ill one in this case!).

Less core innovation

I stand to be corrected here, but it felt to me that, while there was lots of innovation, I didn’t see visualisation designs that were applicable outside of their narrow solution area.  There is innovation to be sure, but it’s generally more evolution than revolution.

For example, Facettice was a rather beautiful bit of work for navigating faceted categorical data sets.  While lovely looking, it’s rather impractical in terms of readability, interaction and real-estate consumption.

Another example would be Contingency Wheel++, a rather impressive and powerful tool for exploring datasets with quite a few columns (say 20) and massive numbers of rows.  While it’s great work in many ways (and I’m afraid a little too complex to explain here), I wonder how broadly something like this could be used.

Of course, innovation doesn’t need to be game changing; small incremental improvements are generally what move us forward.  One bit of work that was particularly nice in this regard was a paper on ‘sketchy rendering’: a new library for the Processing language (see below for more on that) that allows you to render interactive charts and visualisations in a sketchy style; something that could be quite handy for testing early prototypes.

More and better evaluations

It was great to see that there seems to be a much more robust emphasis on evaluating the value of visualisation techniques and applications.  Back when I was doing research in this area (a decade ago) people were busy churning out new ways to visualise things, without generally stopping to check if they were any use.  Nowadays the balance seems to have tipped away from pure invention to more of a focus on application and evaluation.

Automated user testing on the cheap

One particularly interesting development was the very prevalent use of Amazon’s Mechanical Turk for doing evaluations. The basic idea is that you set up your evaluation and software experience so that it is totally automated and available online (so a web application with accompanying forms).  You then recruit ‘users’ from the hordes of ‘Mechanical Turk Workers’ waiting to complete tasks for relatively small amounts of cash.  You can even insist that your workers have certain qualifications (i.e. have passed particular tests), to ensure you get users able to complete your tasks.

Despite the obvious attractions, there are definitely some issues here – in particular around the limitations of the Mechanical Turk Worker population (mostly US- and India-based) and the ‘quality’ of response you get. There was one paper in particular that claimed, in contradiction of previous work, that people performed very poorly on a task (Bayesian probability judgements) despite having visualisation support; I suspect that the Mechanical Turk workers weren’t trying quite as hard as the students typically employed in previous experiments.

Isn’t it obvious?

Some of the evaluation papers had the unfortunate effect of forcing me to crack a wry smile and ask myself:

Why, oh why, bother at all?

Let’s pick one example to illustrate the point.  It was an evaluation of the relative efficiency of radial and Cartesian charts.  Paraphrasing the question for the chart below: does wind direction have an effect on efficiency and, if so, which direction leads to greater efficiency (btw: fewer minutes between wheel events means higher efficiency)?

So what was the answer?

Yes, you guessed it: the Cartesian charts took the prize.

While this work was partly about evaluation methodology, from the point of view of the visualisation I’d say… isn’t it obvious?  Only a basic design sense or understanding of vision would tell you that it’s far easier to scan horizontally to compare values.  Do we really need to run empirical evaluations for this sort of thing?

There was quite a bit of work at VisWeek where some design training and capabilities could go a long way to making the research output a whole lot more useful out in the real world.

Better tools

There are a bunch of great tools out there for designers and creators of visualisations; here’s a quick rundown.

Visual analytics packages

There are a bunch of software packages out there competing for the custom of the emerging discipline of data scientists, as well as the less well-versed journalists and business folk who are grappling with crippling amounts of data.  These can be super handy for any designer who is trying to get a handle on a data set before launching into Illustrator.

Perhaps the most accessible is Tableau (PC only, sadly), which allows you to build up relatively complex interactive visualisations semi-automatically.  They have a free version of the software that’s open to all to use, as long as you don’t mind publishing your visualisations out in the open.

Better development tools

On the technical end, a bunch of languages and frameworks have emerged that can be leveraged to rapidly create performant visualisations.  The two main contenders are Processing and D3.  They are both open-source efforts, sired by MIT and Stanford respectively, with very active communities and lots of shared code for you to build on.

Processing is an open-source Java-like language aimed at promoting software literacy within the visual arts, which has been designed for quickly creating visuals and visualisations.  People have used Processing to create a wide array of experiences, from sound sculptures, data art and infographics, to full-on applications.

D3 is a JavaScript library that is more narrowly scoped around creating visualisations on the web.  You can use it to manipulate data and create interactive, animated SVG visualisations.
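To give a flavour of what that looks like in practice, here’s a minimal sketch of D3’s core idiom – binding an array of data to SVG elements.  The numbers are made up and I’ve used a recent release of the library; it’s an illustration of the data-join pattern rather than anything shown at the conference.

```js
// Minimal D3 sketch: one bar (and one label) per datum, sized by its value.
import * as d3 from "d3";

const data = [4, 8, 15, 16, 23, 42];   // illustrative values

const svg = d3.select("body").append("svg")
  .attr("width", 320)
  .attr("height", data.length * 22);

// The data join: for each value without a matching rect, create one.
svg.selectAll("rect")
  .data(data)
  .enter().append("rect")
  .attr("x", 0)
  .attr("y", (d, i) => i * 22)
  .attr("height", 20)
  .attr("width", d => d * 6)
  .attr("fill", "steelblue");

// Label each bar with its value.
svg.selectAll("text")
  .data(data)
  .enter().append("text")
  .attr("x", d => d * 6 + 4)
  .attr("y", (d, i) => i * 22 + 15)
  .text(d => d);
```

Because the elements are bound to the data, updates and transitions can be expressed against the same join – which is what makes D3 well suited to interactive, animated visualisation.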

So… What’s the verdict?

I have to say I learnt a lot by attending VisWeek; it was particularly valuable for me to get a sense for where the field is at and where it’s going.  The more focused sessions, in particular, helped me get valuable footholds into areas of work relevant to some of the projects here at RMA.

However, I wonder whether there is a place for a more industry- or practitioner-focused visualisation conference, where the papers and presentations (from researchers and practitioners alike) could be focused on innovations in visualisation that are more likely to be adopted outside of a research context.

Another big takeaway is that the field is still sorely lacking integration with the design community, and in particular the visual design community.  The researchers are nearly all computer scientists by training; and it really shows.

Designing for fluency – Part 2 (by Mischa Weiss-Lijn)

Fluency, cognitive choreography and designing better workflows

If you’ve read part 1, you’ll know all about what Fluency is and how understanding it subtly, but importantly, changes the way you think about usability.  Now I’d like to take this one step further by introducing a couple of other concepts from Kahneman’s book on the fascinating world of modern cognitive psychology, as well as one that I’ve made up all on my own: ‘Cognitive Choreography’.

Let’s start by briefly explaining what I mean by the term cognitive choreography.  The workflows that we are called to design can place varied and diverse demands on our darling users.  It’s often about going through the motions: filling in payment details, skimming content, navigating.  But people are often also being asked to make critical decisions and perform complex tasks.  These different types of engagement require very different types of cognition (as we’ll see later). In this article I’ll go through some relatively new research that points towards ways in which designers can encourage the right type of cognition for the right moment; what I’ve called Cognitive Choreography.

The two Systems: Thinking fast and thinking slow

Kahneman’s book centres on what he calls the ‘two systems’: two modes of thinking, one fast, System 1, and the other slow, System 2.  These have very different capabilities and constraints, and as a result some important implications for design.

System 2 is what does the conscious reasoning; it is the deliberate, rational voice in your head that you like to think is in control.  System 1 is the automatic, largely unconscious, part of your mind where most of the work actually gets done.  Although the reality is inevitably rather more complicated, it’s helpful to adopt Kahneman’s conceit of these systems as two distinct characters.

System 1: The associative machine

  • Fast
  • Effortless
  • Automatic and involuntary
  • Can do countless things in parallel
  • Slow learning
  • Generates our intuitions
  • Driven by the associations of our memories and emotions
  • Uses heuristics (rules of thumb), that are often right, but sometimes very wrong

System 2: The lazy controller

  • Slow
  • Effortful
  • Selective (lazy) and limited in capacity
  • Does things in serial
  • Flexible
  • Uses reason, logic and rationalisation

System 1 effortlessly generates impressions and feelings (“xyz link looks most relevant”) that are the main sources of the explicit beliefs and deliberate choices of System 2 (“I’ll click on xyz link”).  The problem here is that System 1 is error prone and System 2 is lazy.  System 2 can overcome System 1’s shortcomings by critically examining the intuitions System 1 generates, but often it doesn’t.  I think that as designers we should think about how we can help System 2 spring into action when the moment is right.

Before looking at how we can help the right system spring into action, let’s look at how System 1 can sometimes lead users astray.

Biases: Thinking fast and wrong

System 1 has evolved to be quick and get things mostly right, most of the time, for your average hunter-gatherer in the long-gone Pleistocene (i.e. before we got all civilized, started farming and building urban jungles).  As a result it doesn’t adhere to the tenets of logical and statistical reasoning that underpin what we think of as ‘rational’ thought; it uses heuristics, rough rules of thumb, that are easily computed and generally work.  And that leads to errors, which, in our newfangled not-too-much-hunting-or-gathering-needs-doing kind of world, are more problematic than they used to be.

Here is a brief listing of some of the things that can go wrong.  If you want to really learn the slightly scary truth about how rubbish we (and yes, that includes you) are at making judgements and choices, then I refer you to Kahneman’s book, or perhaps take a look at this scary wikipedia list of cognitive biases.

System 1 is biased to believe and confirm what it has previously seen, or is initially presented with (luckily for the advertising industry). So it tends to be:

  1. Overconfident in beliefs based on small amounts of evidence (“The site would be better if it was purple.  My wife said so.”).
  2. Very vulnerable to framing effects (“90% fat free” vs “10% fat”)
  3. Prone to ignoring base-rates (i.e. a thing’s general prevalence).   Insurance sales and the tabloid press play off of this all the time; it is the gravity of the event that matters, and the fact that it’s very very unlikely doesn’t have nearly as much impact as it should.  So, for example, you may be tempted to insure your brand new fridge against breakdown in its first year, because you’re so dependent on having it work, even though it’s extremely unlikely that anything will go wrong.

This is because System 1:

  1. Focuses on the evidence presented and ignores what isn’t
  2. Neglects ambiguity and suppresses doubt
  3. Exaggerates the emotional consistency of what’s new with what’s already known

System 1 infers and invents causes, even when something was just down to chance. So, for example, in The Black Swan Nassim Taleb relates that when bond prices initially rose the day Saddam Hussein was captured, Bloomberg ran with “Treasuries rise: Hussein capture may not curb terrorism”.  Then, half an hour later the prices fell, and the headline changed: “Treasuries fall: Hussein capture boosts allure of risky assets”.  The same event can’t explain bond prices going both up and down; but because it was the major event of the day, System 1 automatically creates a causal narrative; satisfying our need for coherence.

System 1 will dodge a difficult question and instead substitute the answer to an easier one.  So, for example, in predicting the future performance of a firm, one instinctively relies on its past performance.  When assessing the strength of a candidate, one instinctively relies on whether we liked them or not.

System 1 does badly with sums, because it only deals with typical exemplars, or averages. So, for example, when researchers asked how much people would be willing to pay to save either 2,000, 20,000 or 200,000 birds after an oil disaster, people suggested very similar sums of money.  It wasn’t the number of birds that was driving them, but the exemplar: the image of a bird soaked in oil.  Similarly with visual displays, people can very easily tell you the average length of a bunch of visual elements, but not the sum of their lengths.

Let’s not forget though, that while it has its failings, System 1 does a pretty impressive job of things most of the time, for most people.  In fact System 1 is crucial to the kind of deep creativity that we designers pride ourselves on.  It’s what helps you get things done fast, and well.  It’s what results from practice and is the basis of most forms of expertise; you wouldn’t want to drive your car without it!

As a result, a lot of the time it’s appropriate, and indeed better, for System 2 to put its lazy feet up and give System 1 the reins.

So what’s the overall takeaway for designers?  Well, one is that if users are at a point where it’s important that they critically inspect the facts and overcome their preconceptions and first impressions, then System 2 needs to be on the job.  Otherwise, we can leave System 1 in the driving seat.

So how can designers engage in Cognitive Choreography, and help ensure users have the right System in the driving seat, at the right time?  Well, one approach is to use Fluency.

Cognitive Choreography

Fluency and switching users between Systems

I would guess that there are many things designers can do to knowingly encourage users to engage the right cognitive faculties for the task at hand; but one interesting and counterintuitive approach is to use Fluency.  That’s what I’m going to focus on here.

To recap from my previous post on the subject, Fluency is the brain’s intuitive sense of how hard your poor brain is being asked to work on something.  Lots of things will impact your sense of fluency as illustrated in the graphic below.

As well as being the key to really understanding what is going on behind users’ perceptions of usability and beauty, it just so happens that we can use the Fluency of our designs to engage System 2.  From an evolutionary perspective, the reason we have this intuitive sense of Fluency is to have an alarm bell that will wake System 2 up when things aren’t going smoothly and we need more careful, bespoke thought. When an experience is Disfluent and creates what Kahneman calls “Cognitive Strain”, System 2 is mobilised.   Thus we, as designers, can actively engage, and disengage, System 2 by controlling the many levers we have at our disposal to change the Fluency of a UI.

So, in case you’re not convinced, here’s one of the experiments that demonstrate this sort of effect in action.  A bunch of Princeton students were given a set of three short brain-twisting problems to solve.  These problems were designed to have an answer that would seem obvious to System 1, but was in fact wrong. To get the right answer, you’d need to have gotten System 2 in the game.  Here’s an example:

If it takes 5 machines, 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

100 minutes OR 5 minutes

When students were shown the problems presented in an ordinary, legible font, 90% of them got at least one problem wrong.  When the problems were presented using a small, poorly contrasted font, only 35% of them did.  Yep, that’s right: making the font less legible more than halved the error rate.  (btw, the answer was ‘5’)
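For the record, the arithmetic System 2 needs to do is short: 5 machines make 5 widgets in 5 minutes, so each machine makes one widget every 5 minutes, and 100 machines working in parallel therefore make 100 widgets in the same 5 minutes.  The intuitive ‘100 minutes’ is just System 1 pattern-matching on the shape of the question.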

Low Fluency creates cognitive strain, which encourages the user to activate System 2, which thinks things through and gets to the right answer.  High fluency does the opposite, encouraging the user to leave System 1 in control.

So the design implication here is that when you come to designing a portion of your flow where it’s critical that System 2 be fully engaged, it may be time to purposely create a low fluency experience, using the array of tools at your disposal (e.g. font legibility, contrast, layout, motion, copy etc).

When to use Fluency and Cognitive Choreography

So we have a bunch of ways we can make an experience more, or less, Fluent; now we need to understand when to use this to encourage users to apply the right kind of cognition to the task at hand.

Perhaps the first, and most important, thing to say is that you will need to be sparing and purposeful with low Fluency UIs.  Low Fluency is by its nature unpleasant, and on top of that using System 2 takes effort, so it’s no fun.

However, the science of System 1’s failings gives us some clear pointers as to where we should consider putting the brakes on Fluency.

You should consider engaging System 2, with a low Fluency UI when:

  1. Critical, high-risk decisions are being made
  2. The user is being engaged in a task that you know has features which will lead System 1 astray.  So the user will be asked to:
     1. Draw inferences based on very small sample sizes
     2. Draw inferences based on incomplete information; e.g. they are given a small part of the story, or no information about base rates (i.e. the general prevalence of a thing)
     3. Make decisions based on potentially framed and biased messages from an interested party
     4. Mentally work with sums, rather than averages or prototypes

Wrapping it up

Most of the time you will want to maximise Fluency, and thus usability, encouraging users to coast along primarily using the rapid, intuitive outputs of tireless System 1.  But you can reduce critical errors, and increase the quality of significant decisions and judgements, if you selectively lower the Fluency of your UI to make sure the lazy, but smart, System 2 is fully engaged.

Of course, there is a balance to be struck here; as designers we will be hard pressed to make the experience as disfluent as psychologists can when doing experiments.  In fact it would be super valuable if people in the HCI community would take a closer look at this, and measure the effectiveness of lowering Fluency to within the bounds of commercial acceptability.

Another interesting tension here is that many of us designers are working on systems primarily aimed at realising and increasing sales.  So even if users are making critical, and expensive, decisions based on incomplete information, it’s in your client’s interest that System 2 stays as lazy and disengaged as possible.  However, there are countless digital experiences that support productive and often critical processes or decision making.  That’s what we focus on here at RMA, and that’s where doing a little Cognitive Choreography could come in handy.

Designing for fluency – Part 1 (by Mischa Weiss-Lijn)

From the new psychology of Fluency to usability, beauty and beyond

Having had a background in (proper) psychology before emerging as a designer, I’ve often been dubious about the value of this fascinating ‘science of the mind’ for the practice of design.  Every now and then, you’ll hear some strained reference to Fitts’ law from a recent MSc graduate; but let’s face it, in our day to day, very little psychology is actually brought to bear.

So, it came as a surprise to find a treasure trove of design insights when reading the excellent “Thinking, Fast and Slow” by the Nobel prize-winning psychologist Daniel Kahneman.  Let me be clear; it’s not a book about design, it’s a pretty hardcore psychology book.  It’s about how we think and reason; not how we would like to think we think, but how we actually think.  Warts and all.  My hunch is that really understanding that, and the warts especially, could be a valuable tool for designers.

Fluency

One example I’d like to pick out is Kahneman’s treatment of a phenomenon generally termed ‘Processing Fluency’, which he calls ‘Cognitive Ease’.  Like it or not, and more ‘usability’ minded designers may well not, the perception of usability, trustworthiness and beauty is partially dependent on the myriad of superficially unrelated factors that drive the fluency of cognitive processing (don’t worry, I’m about to explain).

Yes, that’s right.  You can do things to make people think a design is more trustworthy or beautiful, without actually making it more trustworthy or beautiful.  At all.

So let me explain.

As Kahneman explains it, our brain has a number of built-in dials (you could think of them as a bunch of sixth senses) that are constantly, effortlessly and unconsciously updating us on (evolutionarily) important aspects of our environment.  So, for example: “What’s the current threat level?”, “Is anything new going on?”.  One of these is Processing Fluency, which is basically a measure of how hard your poor brain is being asked to work.  Its basic raison d’être is to let you know when you need to redirect attention or make more effort.  However, interestingly for us designers, it ends up having a much broader impact on the way we evaluate things and make decisions.   Anything that increases fluency (and there are lots of things that do) will bias many types of (and perhaps all) judgements positively.

This is a somewhat scarily broad phenomenon.  Who would have thought that:

Rhyming statements seem truer than equivalent non-rhyming ones

Shares with more easily pronounced names outperform on the stock market

Text written with simpler words is judged to have been written by a more intelligent author

To usability, beauty and beyond

But let’s focus on how this relates to design.

It turns out that anything that increases fluency will positively affect many aspects of the way people perceive, judge, and presumably experience, something.  Fluency will make people trust something more, make it feel more familiar, more effortless, more aesthetically pleasing, more valuable; fluency will even make people feel more confident in their own ability to engage with the experience.  And these effects can all potentially be brought to bear independently of, and on top of, the actual content of the experience.

What’s powerful here is that there are lots of ways in which you can increase the fluency of your experience; ‘manipulations’ in the parlance of psychologists.  I’ve tried to summarise what I’ve been able to glean from the psychology literature around this in the infographic below.  The thing to remember is that any of these manipulations will positively impact people’s perceptions of your experience.

You can make your copy more fluent, your visual design more fluent, and your flows more fluent.

Making your copy more fluent

Let’s start with copy.  A bunch of the things we normally think of as best practice, such as using simple straightforward language and uncomplicated syntax, increase fluency.  It’s interesting to realise that such simple things could end up impacting how much people will trust the experience!

Looking at copy from the perspective of fluency gives weight to more flippant techniques, such as the use of rhyme and alliteration.  It guides us to think carefully about how easy copy is to say out loud.  All these things improve what is called ‘Phonological Fluency’, i.e. how easy something is to say; how easily it rolls off the tongue.  If it’s easier to say, it’s easier to think.

Then, consider ‘Orthographic Fluency’, i.e. how easily one can translate written text into spoken words and meaning.  This guides us to avoid creative spellings (e.g. “Tumblr 4ever”).  It gives a clear rationale for always using the most direct, succinct and approachable notation available (e.g. “1” not “one”, “%” not “percent”).

Making your visual designs more fluent

Font designers will be happy to hear that there have been lots (and lots) of experiments that show the impact of the clarity and readability of a font on fluency, with all the manifold benefits this brings.  Readability is not just about being able to read the words – it’s about fluency.

Font selection is one thing that contributes towards ‘Physical Perceptual Fluency’, and psychologists have shown that having a good level of contrast does too (for fonts in particular, but presumably it will be just as important for UI elements).  Of course that’s not where it ends; even if psychologists haven’t really looked much deeper, many of the principles behind good, functional visual design, such as leveraging Gestalt grouping principles, must surely drive this Physical Perceptual Fluency.

There’s also been a bunch of work looking at how the length of time people have to see and absorb a display impacts fluency; they call it Temporal Perceptual Fluency.  The less time, the less fluent.  This probably doesn’t have too much impact on most design applications unless you are presenting stuff for less than a second.  But my hunch is that judicious use of motion design will also contribute to this type of fluency.

Making your flows more fluent

There has been a bunch of work looking at the role memory plays in fluency.

Most obviously, using common UI patterns will create a more fluent experience by virtue of their familiarity.  Similarly, when experiences are designed to be easier to learn and remember they are going to be more fluent. But you could have guessed that.

Something you might not have guessed is that you can use ‘Priming’ to make an experience more fluent.  Priming is a psychological technique that basically boils down to exposing people to related stimuli before showing them the experience you’re interested in.  This activates the relevant areas of your brain, making it easier to process the experience once it comes along.  Is this something we could use as designers?  Perhaps we can.  For example, we could sequence content and interactions to prime parts of the experience that we expect to be challenging.

What else?

While psychologists have already discovered lots of ways to manipulate fluency, I’d guess that there are many more waiting to be discovered.  Psychologists haven’t been thinking about design, so they’ve not really been looking in all the right places.  In fact, I’ve taken a couple of liberties to add some obvious candidates to my graphic which are not (yet) grounded in empirical evidence (motion design and visual hierarchy). One area that doesn’t seem to have been explored at all is how to make interactions more fluent.  And there is surely much more we can do to create fluency in user journeys and IA. Perhaps someone should look into it!

And so… what?

Is this just dressing up our time-honoured notions of usability in fancy new scientific jargon?   Or does it give us a genuinely new and useful conceptual tool for creating better experiences? After all, we’ve had related concepts before, for example Cooper’s ‘cognitive friction’ in his classic The Inmates Are Running the Asylum.  Making experiences as easy and frictionless as possible is at the heart of all good digital design techniques.

To be honest, I’ve only just started thinking about this, and so haven’t yet been able to put it into practice.  But my hunch is that there are a couple of key things that the concept of fluency offers which are interesting, and potentially useful.  Firstly, there is the evidence that a broader set of qualities go into making an experience frictionless or fluent than we’ve traditionally allowed for.  Secondly, and more importantly, there is the discovery that any and all of these impact the full range of people’s perception and memory of an experience. We want to create experiences that people feel good about.  Depending on the experience, we want our users to come away persuaded, happy and confident.  An understanding of how to create fluency gives us a new way of thinking about how to get the design outcomes we’re after.

Further reading for the curious

Part 2 of this fine blog: Fluency, cognitive choreography and designing better workflows, which looks at how we should be using our grip on Fluency to help users think in the right way, depending on the experiences they are engaged in.

Alter, Adam L., and Daniel M. Oppenheimer. “Uniting the Tribes of Fluency to Form a Metacognitive Nation.” Personality and Social Psychology Review 13.3 (2009): 219–235.

Daniel Kahneman “Thinking, Fast and Slow” 2011


Sketchy behaviour at Google

Google logo above reception

Cropped image because my iPhone camera just couldn’t do justice to the view over London from reception at Google towers

Jason and Sam visited the big G on Tuesday to run a workshop building on the hugely popular Sketching Interfaces session they did at Interactions12 (Dublin) and HCID (London), earlier this year.

Even though the Googlers rated themselves a mixed bunch, in terms of confidence and ability, there was some impressive sketch action on display throughout the session.

Post-it notes showing how workshop participants rated their confidence and ability (before the session)

Despite the spread, the standard of sketching at the session was pretty high

We had designers battling with producers and developers in live sketch-offs and the smiles even lasted when Sam sent them all back to school and gave them lines (to practice their labelling skills).

The tips and tricks were popular as ever and Sam’s custom rubber stamps (sketch it nicely once, stamp it as many times as you like) were a real hit.


2 of Sam’s custom gesture stamps (great for cheating at sketching and user-testing sessions alike)

Huge thanks to all the guys at Google who made it a really fun session to be part of.

Imagine the form of the future

If you had a time machine (and didn’t have anything better to do with it) and travelled back to, say, 1997 and brought back a bunch of forms from webpages and applications, and then compared them to the forms we’re using today, it might be difficult to tell the two sets apart.

Branding aside, forms haven’t moved on much in the last decade or so

Fifteen years down the line and the way we capture data online and in applications hasn’t fundamentally changed all that much. We’ve refined the process bit by bit, and thanks to folks like Luke Wroblewski and Chui Chui Tan we understand more about how to design more effective forms, but in terms of innovation the form remains firmly stuck in a circa 1997 rut.

The form is one of the basic building blocks that underpins the way we use websites and applications. It’s the gateway that allows us to build interactive systems and services on the web rather than limiting us to just consuming content. But the way in which we use the web is in the process of fundamentally changing, and this has to force us into looking at the way in which we engage with the web around data capture.

As mobile devices, and the infrastructure that they rely on, continue to improve and tempt more and more of us away from the desktop and laptop in our use of the web (through browsers and native apps alike), many of the lessons we’ve learned and relied on are having to be re-learned to work on devices with smaller screens and no physical keyboard or mouse.

Part of the problem is that we’re still using design solutions based on systems that were developed a decade and a half ago on devices that are not the same as those that we’ve ‘learned the rules’ for. Navigating and using a traditional form on a mobile device is a clunky experience at best. Putting aside the problems associated with non-physical keyboards, reduced screen real estate and touch-based navigation, many of the lessons we have learned about form design don’t work for mobile.  This hasn’t gone unnoticed and there are some additions to the established learning that take into account mobile device form factors and mobile user behaviour, but essentially these are just tweaks to the same old form paradigm.

Let’s face it, forms aren’t sexy. What they are is frustrating. No one relishes the prospect of filling in forms; in fact, they are regularly cited as points of pain in user feedback. So why is it that something that has been so fundamental to our use of the web, and something that causes so much frustration, hasn’t seen any real innovation in over a decade and a half?

As someone who works as part of a design team that specialises in complex, day-in, day-out applications I come up against a lot of form-based interaction. I’ve read Luke’s book and Chui’s excellent blog posts and try to incorporate the benefit of their learning and experience into the forms that sit within the systems I design. But a recent project prompted us as a team to think about breaking away from the traditional and thinking about how we might capture data in the future – on the assumption that surely we won’t still be using the same old forms in another 15 years.

We spent time thinking about alternative means of capturing data that weren’t necessarily tied to keyboard entry. We covered our studio walls in sketches and ideas that used voice recognition, gesture, 2D barcodes and QR codes – even ‘magic’ pens. But we also tried to look at how we could improve the data capture experience without relying on these things.

Amongst the more future-looking concepts, we spent time refining several ideas based on how the form itself might be improved, as well as the means of entering data. After all, not everyone has an iPad or a Wacom Inkling and, regardless of how groundbreaking our ideas are, the majority of people will still be using a keyboard (or touchscreen) to fill in forms for a while yet.

We wanted to design a form that wasn’t tied to a linear process and that allowed the user to enter data at any point and in any order. We had seen similar form-based interfaces as part of field-based observation, but for all the benefits that the structure of the form gave, the fact that every single field was displayed on a single screen, with no visual hierarchy and the most basic of styling, convinced us we could do better.

From many concepts, we are in the process of refining an idea that allows complex forms to be created in the back-end, but that strips out the complexity in the front-end and places the focus on just the field that is being used (a rough sketch of the idea follows the list below).

We want to show that it is possible to create a usable and beautiful form that doesn’t fit the traditional mould. Our thinking is based on:

  • Focus – stripping out any distracting or irrelevant content and only showing what is required at each stage
  • Flexibility – creating a structure that isn’t reliant on the user following a prescribed, linear process
  • Working with the user, not against them – inspired by old-fashioned greenscreen systems that seemed to be well-suited to data entry
  • Shortcutting – predictive data entry, short codes and keyboard shortcuts that support expert users without compromising the novice user experience
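To make that less abstract, here’s a very rough sketch of the front-end idea – one field in focus at a time, with the user free to jump between fields in any order. The field definitions, names and markup are purely illustrative; it’s a thought experiment, not our production code.

```js
// Illustrative sketch only: a form defined as data, rendered one field at a time.
const fields = [
  { id: "name",  label: "Full name",     type: "text"  },
  { id: "dob",   label: "Date of birth", type: "date"  },
  { id: "email", label: "Email address", type: "email" },
];

const answers = {};   // captured values, keyed by field id
let current = 0;      // the single field currently in focus

function render(container) {
  const field = fields[current];

  // Focus: only the current field is shown, plus a simple way to jump elsewhere.
  container.innerHTML = `
    <label for="${field.id}">${field.label}</label>
    <input id="${field.id}" type="${field.type}" value="${answers[field.id] || ""}" />
    <nav>
      ${fields.map((f, i) => `<button type="button" data-index="${i}">${f.label}</button>`).join("")}
    </nav>`;

  // Capture the answer whenever it changes.
  container.querySelector("input").addEventListener("change", e => {
    answers[field.id] = e.target.value;
  });

  // Flexibility: any field can be jumped to, in any order.
  container.querySelectorAll("nav button").forEach(button => {
    button.addEventListener("click", () => {
      current = Number(button.dataset.index);
      render(container);
    });
  });
}

render(document.getElementById("form"));
```

The point of the sketch is the shape of the thing: the complexity lives in the field definitions, while the screen itself only ever shows the field being worked on, so shortcuts and predictive entry can be layered on per field without overwhelming the novice user.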