Fixed Price vs. T&M


I don’t like fixed price contracts, or at least not in my industry – software design and build. It’s not that I don’t like constraints and deadlines – on the contrary, I think they help one focus and sometimes accomplish more than complete freedom of action would allow. It is, however, artificial constraints that urge me to consider alternatives.

The most prominent alternative to fixed price is the Time & Materials, or simply T&M, type of contract. I will assume that the differences between the two are obvious and won’t bore you with explanations of what they are. However, I will put them head to head to show why my feeling, which has gradually turned into a strong opinion through experience and analysis, points to the fixed price model as the inferior one.

A couple of considerations before we get to it:

  • This comparison is done in the context of the software delivery (design, development) services industry. This is not to say that the arguments would not apply to other industries, only that I am most confident talking about the former.

  • I am making an assumption, albeit a strong one, that a fixed price contract necessarily implies upfront requirements that allow for a decent indication of expected effort. After all, if the requirements are not clear, how would someone then be able to size work and thus accept obligations of a fixed price contract?

  • The opposite, though only optionally, is true for T&M. This means that there can be a level of upfront requirements – for instance, enough to gauge an expected budget – but they are not required, because a budget can also be set on the mere availability of financial resources or a simple spending appetite.

With this in mind, here’s why I believe clients would be much better off with T&M contracts versus fixed price.

Fixed price contract myths

To begin, let’s look at the main reasons behind choosing a fixed price contract over a T&M one. The very word ‘fixed’ implies better (perceived) clarity around the object of the contract, which in turn provides stronger guarantees relating to:

  • Total cost of delivery

  • Definition of final deliverable

  • Timeline of delivery

Let’s look at each one separately.

Myth #1. A fixed price contract will help me manage costs better.

It will help you manage costs. However, it won’t let you do it better than a T&M contract would, as it’s just as easy, and just as common, to set up a budget for the latter.

Even if we assume that maximum spend can be equally well controlled by both, I would argue that business value received for the budget would be lower in a fixed price engagement, and here’s why:

  • A vendor will always overestimate the effort required to deliver the outcome in order to insure against the risk of not doing so, and the client ends up paying more for the equivalent amount of work, or receives less for the equivalent budget.

  • Part of that budget will also be spent on change management overhead, as opposed to delivering additional business value through software.

Change management is an important consideration here. Since a fixed price contract shifts all the risk to the vendor, change management is, after the contingency premium, the vendor’s primary tool for mitigating that risk. And since the purpose of change management is to identify any deviation from the baseline requirements, price it and charge for it as additional work, it more often than not results in the final cost of delivery ending up higher than the one in the original contract, thus hindering the effort to control total spend.

Myth #2. Fixed price defines final deliverables better.

Even though the name refers to a fixed price, or fixed fee, what really sets it apart from a T&M contract is a fixed scope. However, I think it’s needless to say that in most projects, especially software projects, defining exhaustive, detailed requirements prior to commencement of delivery is very impractical – not impossible, but impractical. What makes it so:

  • While people are really good at saying what they like or don’t like about something they see (one of the fundamental ideas behind agile delivery), we are quite bad at saying in sufficient detail what we want upfront. This translates into incomplete or inaccurate requirements and the necessity for continuous requirements refinement.

  • Our own experience with upfront detailed requirements shows that, due to changes in direction and priorities, as well as feedback on product deliveries, up to 40% of original requirements (including designs) do not make it into the final product. This is pure wasted effort.

Say you do decide to define full requirements upfront. This requires a very concentrated effort and exceptional analytical skills. However, even if it is done, the following financial implications have to be considered:

  • Postponed revenue. Gathering and refining requirements is a time-consuming activity, and an attempt to accomplish it before development begins comes at the cost of delayed development and thus delayed time-to-market.

  • Higher cost. If you contract specific resources, such as visual designers, you would be forced to use their services longer, because not only would they need to work on requirements, but also to provide support to developers during delivery, as requirements are never good and clear enough on their own.

Therefore, a fixed price contract type is only applicable in a situation that is impractical to begin with.

Myth #3. A fixed price contract provides me with guarantees of when the final product is to be delivered.

This is true. If a vendor is falling behind a delivery schedule, the team can be ramped up, or development speed increased in other ways, if necessary, to meet the delivery deadlines. However, the same can be achieved with a T&M contract when working towards a milestone in a release or delivery plan. After all, a T&M contract often only defines the daily rate and total budget with sufficient flexibility in budget burn rate while working towards delivery goals.

Furthermore, the fixed price approach implies that, in order to deliver a fixed scope in a fixed amount of time, one needs to identify, evaluate and contain any project-external influences and dependencies on any third party or the client’s own internal resources that can impede a fluent delivery of the final solution – an extremely difficult task that often ends up with inflated SLAs and other commitments designed to act as insurance against non-delivery, as opposed to reasonable coordination of efforts. With a ticking T&M meter, on the other hand, it is in everyone’s best interest – both the vendor’s (smooth, uninterrupted delivery) and the client’s (efficient spending, fast delivery) – to ensure the actions of different parties are orchestrated within a reasonable planning horizon to minimise or eliminate any downtime of project resources.

Product & requirements management

OK, so if we take away all the ‘comforts’ of the ‘clarity’ of the fixed price contract, and move all the risk back to the client, how can I possibly say that a client would be much better off with a T&M contract? Because, believe it or not, this is exactly what I am saying.

Well, the biggest issue, which I have witnessed all too often, is to do with the mentality behind a fixed price approach. Once a fixed price project is signed and in the drawer, the client goes into a state of mind which is something like ‘Ah… I can finally kick back, relax and just wait until my product is delivered to me’. And while this might be true for when you order a pizza, this is never the case when you order custom-made software.

What is the single most important input into software delivery? It’s requirements. There’s a reason why Scrum identifies the Product Owner as indisputably the most important role in a project. And while development, QA, design and a number of other skill-sets are extremely important, it is the business value delivered through well-conceived and prioritised requirements that determines the success or failure of a project to the business.

And get this – product/requirements management is always (or should always be) the function of the client. And while the T&M approach leaves all the risk with the client, it is an illusion that this risk can be outsourced or sold at a premium. The only way to deal with this risk is to manage it, and the best and only tool to do that is good requirements management.

I have already made the point that it makes little sense to prepare upfront full-detail requirements, but more often than not one would want to do some upfront requirements definition/design – only enough to understand the scope of the project, inform architectural considerations and allow for very high-level sizing and estimation – never a commitment. Instead, as requirements are being produced and refined, they are also prioritised to make sure that more important functionality will be implemented before the more trivial.

So, where the fixed price approach tries to box requirements in with time and cost constraints, T&M merely uses either or both of those constraints as a line which can easily be crossed (through trade-offs) while performing the task of continuous (re)prioritisation with the goal of delivering the highest business value within the constraint.

So… what’s so great about T&M?

I believe that what a T&M contract does is create one simple thing, and that is a healthy collaborative environment that is critical to any project’s success. And it does so by correctly defining the two key ingredients of a good project:

  • Skill-set. The very reason for a client to consider a service provider is the need for specific skills (design, technological, IT architecture, project/programme management, etc.) that it lacks within its own organisation. And a vendor provides these skills at an agreed price – the daily rate.

  • Leadership in the form of requirements management, as it not only entails setting the direction of the product to be built, but also bears the responsibility for the success or failure of the project. I believe it is self-deceit to think that this responsibility can be outsourced, or concentrated into a short effort before the beginning of the project.

No project will succeed without these two components. And if T&M provides a great basis for these working together, why the overhead of a fixed price?

Conclusion

This article is, without a doubt, very one-sided in favour of T&M contracts. And for a number of good reasons which, if I did a good or at least a decent job, are clear and indisputable. However, it would be wrong to say that there is no place for fixed price contracts – there certainly is.

What kind of project could qualify for a fixed price? A small one. How small is small enough? I believe the best way to gauge that is to see what project management approach you would feel more comfortable applying:

  • If the requirements are small, clear and detailed enough so that you can construct a robust waterfall plan that you are confident you will deliver with little slippage, then there’s no harm in going fixed price.

  • However, if you think that too much time would be wasted on getting a deep understanding of requirements at the outset of the project, and a resulting sequential plan still urges you to slap a big fat contingency on the price, then an agile approach with a T&M contract seems a better candidate.

In any case, when you think about what services we normally buy at a fixed price, it’s usually something relatively simple, something repetitive – a shoe sole to be repaired, a haircut, even a wedding photographer hire. However, you rarely expect a price upfront for fixing a car after an accident, or for legal services to draw up a proprietary contract without a raft of ‘if’ clauses. Less so should we expect to give a fixed price for building software, which is neither simple nor repetitive. The agile approach to software delivery was invented precisely because it is so difficult (virtually impossible) to plan out software from A to Z – and if you can’t plan it, you can’t reliably estimate and cost it.

Decent User Experience – A New Human Right?

We digital designers, of all hues, generally like to tell ourselves that we’re doing more than simply earning a wage building wireframes, pushing pixels and the like. We’re making people’s lives better.

We also tend to believe that the businesses that ultimately foot our bills take a more prosaic view: it’s all about the brand and the bottom line.

I think that there is something more profound going on here; I’m arguing that we should start to see ourselves as playing a small, but active, part in one of history’s grand narratives. A big claim, I know, for a UX blog post. But bear with me – I guarantee you’ll be rewarded, and, hopefully, even convinced.

I’ve recently been reading Steven Pinker’s excellent book on the little-known fact that we’re currently enjoying the results of a multi-century decline in violence – in all its forms: in crime, in war, in murder, in abuse, in discrimination. In fact, in pretty much any form of oppression and violation of another human being’s experience of life. The book goes into the details of this happy decline in countless different areas – here’s an example, a chart looking at the massive decline of murder rates in Europe, in comparison to those of non-state societies.


Homicide rates in Western Europe, 1300-2000 and in nonstate societies.  From p459 of The Better Angels of Our Nature

The past was indeed a very different country – and I’d much rather be living in the here and now.

In the dim and, sadly, not so distant past, life was cheap, authoritarian hierarchies were unassailable and invulnerable, divisions between nations and social groups were impenetrable and, perhaps most importantly, liberal enlightenment philosophy had not yet infected our psyche with its celebration of the ‘individual’. Today we live in a world of ‘individualism’, a world suffused with information bearing other people’s perspectives; a world in which it is ever harder to be completely blind and unsympathetic to the experiences, and suffering, of others.

But what does this have to do with software user experiences? I hear you say.  Well, one of the driving forces behind this incredible change in the fabric of our culture, over the past half a century, is what Pinker refers to as the ‘Rights Revolutions’.  I believe that the value we place on good user experience should be seen in the context of these revolutions.

I am asking the question: is a decent user experience a new human right?

The Rights Revolutions

So, what are these ‘Rights Revolutions’? And what do they have to do with user experience?  Well to quote Pinker:

“The efforts to stigmatize, and in many cases criminalize, temptations to violence have been advanced in a cascade of campaigns for ‘rights’ – civil rights, women’s rights, children’s rights, gay rights and animal rights”
From p458 of The Better Angels of Our Nature


Proportion of books mentioning phrases from 1945. From p459 of The Better Angels of Our Nature

This cascade is nicely illustrated by the graph above, which shows the sequence in which these phrases have become popular. There’s a momentum behind these changes: each rights revolution raises the bar as to what is acceptable, setting the stage for the next move forward. If it’s not right to racially discriminate, how can we tolerate sexual discrimination? If we can no longer mistreat children, why should we be able to mistreat animals?

These changes, Pinker suggests, are primarily driven by “the technologies that made ideas and people increasingly mobile”. As people were increasingly brought into contact with other people’s perspectives – through fiction, through travel, through the sharing of ideas – it became increasingly hard to maintain that other people’s experiences didn’t matter. We increasingly came to value the rights of the individual and to sympathise with their experience. That has driven, and continues to drive, important and largely positive changes to the world we live in.

The right to a decent user experience?

Nothing has been as powerful in driving the large-scale improvement in the quality of user experience as the web. Businesses got the ability to make user experience changes relatively rapidly and see the big impacts on bottom-line metrics. Perhaps more importantly, users got the ability to easily switch over to whichever site offered them the best experience for getting what they wanted.

This commercial dynamic has, sadly, been far less successful at driving substantial improvement to the user experience of specialist business software – an area that RMA specialises in. Business software is still generally sold through bulk licensing arrangements and bought by IT managers or business folk more interested in ticking boxes than in the merits of good design, or the experiences of users.

I think this is something that needs to, and is about to, change. Just because someone isn’t immediately visible as a monetizable metric on a web analytics dashboard doesn’t mean they don’t matter. People who work at a job day in, day out to get important things done matter. Their experience is important; perhaps more important than those of the hordes of debt-laden e-shoppers.

Let’s face it, badly designed software can make people’s lives pretty miserable: unnecessarily hard-to-learn tools that make you feel stupid, inefficient experiences that frustratingly waste time, confused designs that hinder rather than help you get things done, ugly jarring experiences that make life just that little bit greyer.

Thanks to the web and mobile app ecosystems, people are almost universally exposed to good user experience – they know what they are missing in the software that plagues their work lives. At RMA we are witnessing this rising tide of intolerance to bad design in enterprise software. People are rising up against it; we have seen them demanding more from their bosses, their IT departments and their service providers. They are starting to sidestep restrictive IT policies by bringing in their own devices (BYOD) and using decently designed cloud-based offerings. B2B software houses are starting to increase their investments in UX and visual design. The tide is starting to turn.

There are many powerful, financially driven, arguments for investing in better user experiences; they increase productivity while decreasing support and training costs.  But I believe that we can perhaps help the tide turn faster if we start to reframe the need for change.

We need to help shape and nurture awareness of the right to a decent user experience in all those millions of business software users. And, perhaps more importantly, those of us who work on business products need to build a culture in which it is simply not acceptable to ship bad user experiences. Just as it isn’t acceptable to discriminate against employees, or to cheat your customers, it shouldn’t be acceptable to subject people to bad user experiences.

For better or worse, most of us live much of our lives in software; just as life is precious, so are experiences.

Lessons-learned learned


Like any self-respecting and forward-looking organisation involved with project management, we want to improve our practices and learn from our mistakes. Therefore we gladly embrace the concept of capturing lessons-learned during and after each project. We try to keep it simple in a basic form of a list of insights, remarks, failings as well as ideas on how we could perhaps do things better.

Capturing lessons-learned, however, is just part of the task at hand. How do we make sure we don’t just leave the valuable lessons… well, unlearned? The insights are usually captured in separate documents for each project, and then they are placed somewhere. And then the next project comes, and you don’t really have the time or desire to look through each of them hoping that maybe you will find something useful there.

Our first idea was that we need a central master list of lessons-learned. Yes, some solution where lessons can be organised and categorised, ideally even tagged, and easily searchable. That way, if one wanted to see what we have done wrong with, say, estimation in our previous projects, one could easily find a list of our past stumbles on that topic.

The first step in everything we do nowadays is, of course, to google it. We tried looking for some software that handles our challenge nicely, or perhaps some tips on how to do it ourselves utilising Google Docs or other popular collaborative cloud tools. However, we were surprised to find very little information on this topic, most of it very hazy, with generic advice on building your own database that supports tagging, search, etc. No samples, no demos.

OK, so there’s no quick fix for this. And when you have to build your own software, you think twice. Which we did. We started at the basics and looked at what the solution has to do for us. Here are the criteria we came up with:

  • Lessons-learned should be located on the project’s ‘critical path’, i.e. a project manager should naturally come to learn from past experiences, as opposed to pausing and stepping sideways to look at past lessons – if one remembers to do that in the first place.
  • They must be easily accessible and searchable, i.e. you should be able to quickly find relevant insights.
  • Ideally, they should be processed, which means that a lesson shouldn’t just say that we did something wrong, but rather suggest an alternative approach or a solution, so one doesn’t spend time trying to come up with one.

Having considered these criteria, we decided that the best lessons log to have is… none at all. What we do instead is process each captured lesson after each project, with only two possible outcomes for each:

1. Discard, which means that there’s nothing you can do about a particular insight. Perhaps it was very specific to a project situation, perhaps something highly unlikely happened, or maybe it’s a kind of risk you simply cannot insure against.
2. Action, and then discard. To action a lesson-learned means to update our processes and practices in a way that would ensure we don’t repeat those mistakes again.

It’s a simple and great solution that satisfies all the criteria we raised earlier. Since learning is woven into our core processes, lessons are on a project’s ‘critical path’. Also, by updating the respective parts of the process, we make learnings relevant to whatever stage of the project we are in (say, altered estimation practices at the beginning of the project). And finally, since we turn each captured lesson into a solution, they are processed and actionable when one encounters them.

So, instead of maintaining a log of captured lessons, we process and then discard each one of them. It’s similar to a to-do list – once you have completed a task, you don’t retain it, do you? (Do you?) Done is done.

Plus, this is a great way to naturally keep our processes updated, which on its own is hardly ever a fun and inspiring task.

This is not to say that our project management approach is process-heavy. On the contrary, it’s rather lightweight and efficient. However, we still have light process descriptions or useful checklists for different stages of a project. For instance, one lesson that was captured in a project of ours was a slower than expected start in team communication dynamics, as we had a couple of new contractors join the team. Having processed this insight we decided that it would be a good idea to organise a team social (translate – drinks) at the project outset. So we added an item to cover this to a project initiation checklist. It is not a prescriptive task, but rather one that urges the project manager to think about this, gauge the necessity for such an event and organise one if seen as needed.

So, if you are thinking about a lessons-learned log, don’t. If you have one, we suggest you have a few sessions to go through it, process each lesson with the goal of emptying the list, and then discard the log and never look back.

A better kind of Visualisation Book (By Mischa Weiss-Lijn)

In a recent blog post, the invariably interesting Robert Kosara points out that:

“After you’ve seen one visualization book, you’ve seen them all.”

And, having read quite a few, I must say, man is he right. They tend to be big on beautiful full-colour reproductions, and short on insights and techniques. You generally emerge from having read a visualisation book none the wiser on how to actually conceive, design, or implement compelling, and more importantly, useful visualisations. Okay, in most visualisation books you can pull out a few useful tips and find some inspirational designs – but generally, it’s pretty slim pickings.

Recently though, I’ve hit a rather rich vein, and I’m hoping I can find more of the same. At RMA we’ve been doing some fascinating work with the insurance industry on visualising natural catastrophe risks (think hurricanes, floods and the like) and exposure to insurance liabilities. This has led me away from the usual bevy of beautifully illustrated generalist visualisation books to ostensibly drier specialist visualisation books.

Here are a couple – catchy titles, fancy coffee-table-ready cover designs, ’n’ all:

Thematic Cartography and Geovisualization, by Terry A. Slocum et al.

Visualizing Data, by William S. Cleveland

The “Thematic Cartography and Geovisualization” book is a breath of fresh air. It’s poorly designed and has hardly any nice-looking visualisations in it. But once you actually start reading it, you find a treasure trove of concretely useful, mature and validated visualisation techniques. It serves as a stark contrast to the rest of the data and information visualisation field. Here is a textbook with a solid chapter on each of the key techniques for visualising data on maps – e.g. the choropleth. It goes into detail about how and when the techniques should be used, how the data should be prepared, their inherent weaknesses, and the pros and cons of the various workarounds. This is invaluable information, condensing the collective knowledge of the GIS community into an accessible and re-usable form. If only there were more textbooks of this kind for non-geo visualisations!

An example of a choropleth visualisation of an insurer’s ‘Aggregate’ exposure
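Out of curiosity, I tried boiling the basic technique down to code. Here’s a minimal choropleth sketch using D3 – my own illustration, not something from the book – assuming a hypothetical regions.geojson file with a numeric exposure property on each feature:

```typescript
// Minimal choropleth sketch with D3 (v7). The data file and its
// `exposure` property are hypothetical stand-ins.
import * as d3 from "d3";

async function drawChoropleth(): Promise<void> {
  const geo: any = await d3.json("regions.geojson"); // hypothetical data

  const width = 800;
  const height = 500;
  const svg = d3
    .select("body")
    .append("svg")
    .attr("width", width)
    .attr("height", height);

  // Fit a projection to the data, then derive a path generator from it.
  const projection = d3.geoMercator().fitSize([width, height], geo);
  const path = d3.geoPath(projection);

  // Classed (quantised) colour scale – the classification step the book
  // devotes whole chapters to.
  const values: number[] = geo.features.map((f: any) => f.properties.exposure);
  const colour = d3
    .scaleQuantize<string>()
    .domain([d3.min(values)!, d3.max(values)!])
    .range(d3.schemeBlues[5]);

  // One filled path per region, coloured by its data value.
  svg
    .selectAll("path")
    .data(geo.features)
    .join("path")
    .attr("d", path as any)
    .attr("fill", (f: any) => colour(f.properties.exposure))
    .attr("stroke", "#fff");
}

drawChoropleth();
```

The interesting design decision is the quantised scale: classing continuous values into a handful of bins is exactly the kind of trade-off the book discusses at length.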

“Visualising Data” by William Cleveland is actually a rather famous and oft-cited book. But I’ll wager that not many people have actually read it. Rather like the last book, it’s actually a very focused, subject-matter-specific book. It’s about visualisation techniques for statistical analysis; characterising distributions and the relationships between the properties of a system.

Again, while elegantly minimal, the visualisations certainly aren’t the eye-candy we’re used to seeing in a visualisation book. The prose is terse and there’s quite a bit of maths. But again, it’s filled with analytical visualisation techniques that clearly deliver insights. Interestingly, for each technique, Cleveland works through a series of interesting data sets, using the techniques to analyse the data and drive out insight. It is perhaps telling how rarely this approach is taken by authors of visualisation books.

A scatterplot matrix from “Visualising Data”

Basically, the overall lesson for me is to read more narrowly focused visualization books – books from domains where visualisation has been intensively used for driving out insight; where the visualisations have been honed and matured to the point where they demonstrably do work.

VisWeek2012 – a UX designer’s view of the year’s biggest Visualisation conference (By Mischa Weiss-Lijn)

At RMA we’ve always tended to focus on projects that involve quite a bit of Data and Information Visualisation work (or ‘VisualiZation’ for those on the other side of the pond).  While we’ve become known for delivering to a very high quality in this area, we’ve drawn on our skills as Information and Interaction Designers to create our solutions.  While we’ve all read Tufte, Yau  and many others, we haven’t tended to connect deeply with the Information and Data Visualisation research communities.

So, we decided that one of us should tentatively dip our designerly toes in the rarefied waters of VisWeek 2012, the world’s biggest Visualisation conference, and get a taste for what the larger visualisation community has to offer folks like us; folks that design and build visualisations, that people actually use day-in-day-out.

And… off I went. Now that I’m back, here is a subjective view on what a largely research-oriented visualisation conference has to offer those working on designing interactive visualisations for use in real-world settings.

It’s big and getting bigger

Having done my PhD in Info Vis many years ago, what impressed me right off the bat is how much the field has grown.  There are pretty big and well established groups focused on information visualisation across the world.  Here in Europe, Germany seems to be a particularly bright spot (e.g. University of Konstanz), while the UK also has quite a bit going on, with hotspots at City University, and Oxford among others.

While the field has been getting increasingly fashionable over the last few years, it seems to be reaching a tipping point, where I believe interactive visualisations will enter the mainstream of digital experience and thus become ever more relevant for designers of all stripes.

This may be old news to some, but there are a number of forces at work here:

  1. Big data: tech catch phrase of the moment, that basically boils down to data being available in unprecedented volumes, varieties and accumulating at ever increasing velocities.
  2. Better tools: back when I was doing my info vis research, you had to build every visualisation, pretty much, from scratch. Now there are lots of great tools to get you started (more on those later)
  3. Infographics in the media: The recent surge in the production of infographics has brought the possibilities of visualising data to the public imagination.  Pioneering publications such as the New York Times, the Guardian, Wired and folk such as David McCandless have popularised  visualisations that convey powerful narratives using data.

All this means that while visualisation work has always been important here at RMA, it’s likely to start becoming something that designers everywhere will be encountering more and more.

More grown up, less core innovation?

While information visualisation has gotten ‘bigger’, the research seems to have changed somewhat in character. The work seems to be focused more on evaluating and refining existing visualisation techniques and applying them to new and challenging domains. That’s all good, and important, but the flip side is (and this is probably a bit controversial) that from what I saw at VisWeek, there seems to be less valuable creative innovation around visualisation techniques.

Let’s dive into each of these topics in turn.

Domain focused visualisation

The research work has diversified into looking in detail at how visualisation can support a host of important new application areas from the somewhat unapproachable visualisations done for cyber security and bioscience to the somewhat more comprehensible work in the medical and finance domains.

Here are a few choice examples from the conference.

Visualizing Memes on Twitter

A lot of people hope to transform the firehose of Twitter activity into something intelligible. This could have important applications in lots of areas where people stand to gain from a real-time understanding of consumer and industry sentiment – an area of considerable interest in financial markets. Another area where this could be important is in being able to detect major events as they happen; think earthquakes, bush fires and the like.

Whisper is a nice piece of work that allows you to search for particular types of event, to see where the discussion – and thus the event – originates, where it flows thereafter and how people feel about it as time progresses (positive = green, negative = red).

Leadline is more focused on allowing people to detect significant new events as they happen. The ‘signal strength’ of automatically clustered ‘topics’ is visualised. You can filter on the person, time range, event magnitude, or location, to focus on an event and understand it.

Medical Visualisation

There was a bunch of work around specialist information visualisations for the medical profession. As medical providers publish more and more open-access data (e.g. http://data.medicare.gov), this is an area that is bound to keep on growing.

MasterPlan is a visualisation tool that was custom built to help architectural planners, in the renewal of a rather old Austrian hospital, understand how patients flow between the different wards and units.  The visual analysis afforded by the tool allowed them to see the key patterns and identify a better way to cluster and place things.

The OutFlow visualisation, from IBM, shows the outcomes (good = green, bad = red) of successive treatments administered to a patient (and a very ill one in this case!).

Less core innovation

I stand to be corrected here, but it felt to me that while there was lots of innovation, I didn’t see visualisation designs that were applicable outside of their narrow solution area. There is innovation to be sure, but it’s generally more evolution than revolution.

For example, Facettice, was a rather beautiful bit of work for navigating faceted categorical data sets.  While lovely looking, it’s rather impractical in terms of readability, interaction and real-estate consumption.

Another example, would be Contingency Wheel++, a rather impressive, and powerful tool for exploring datasets with quite a few columns (say 20) and massive numbers of rows.  While it’s great work in many ways (and I’m afraid a little too complex to explain here), I wonder how broadly something like this could be used.

Of course, innovation doesn’t need to be game changing; small incremental improvements are generally what move us forward. One bit of work that was particularly nice in this regard was a paper on ‘sketchy rendering’ – a new library for the Processing language (see below for more on that) that allows you to render interactive charts and visualisations in a sketchy style; something that could be quite handy for testing early prototypes.

More and better evaluations

It was great to see that there seems to be a much more robust emphasis on evaluating the value of visualisation techniques and applications.  Back when I was doing research in this area (a decade ago) people were busy churning out new ways to visualise things, without generally stopping to check if they were any use.  Nowadays the balance seems to have tipped away from pure invention to more of a focus on application and evaluation.

Automated user testing on the cheap

One particularly interesting development was the very prevalent use of Amazon’s Mechanical Turk capability for doing evaluations. The basic idea is that you set up your evaluation and software experience so that it is totally automated and available online (so a web application with accompanying forms). You then recruit ‘users’ from the hordes of ‘Mechanical Turk Workers’ waiting to complete tasks for relatively small amounts of cash. You can even insist that your workers have certain qualifications (i.e. have passed particular tests), to ensure you get users able to complete your tasks.
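To make that concrete, here’s a hypothetical sketch of posting such a task – written against today’s AWS SDK for JavaScript rather than anything shown at the conference; the study URL and all the parameters are made up for illustration:

```typescript
// Hypothetical sketch: posting an automated, web-hosted study to
// Mechanical Turk via the AWS SDK for JavaScript (v3).
import { MTurkClient, CreateHITCommand } from "@aws-sdk/client-mturk";

// An ExternalQuestion points workers at your own automated study site.
const externalQuestion = `
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/vis-study</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>`;

async function postStudy(): Promise<void> {
  const client = new MTurkClient({ region: "us-east-1" });
  await client.send(
    new CreateHITCommand({
      Title: "Read a chart and answer three questions",
      Description: "A short visualisation comprehension task",
      Reward: "0.50",                   // dollars per completed assignment
      MaxAssignments: 100,              // how many 'users' to recruit
      AssignmentDurationInSeconds: 600,
      LifetimeInSeconds: 86400,         // keep the task up for a day
      Question: externalQuestion,
      // QualificationRequirements could be added here to filter workers.
    })
  );
}

postStudy();
```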

Despite the obvious attractions, there are definitely some issues here – in particular around the limitations of the Mechanical Turk Worker population (mostly US- and India-based) and the ‘quality’ of response you get. There was one paper in particular that claimed, in contradiction of previous work, that people performed very poorly on a task (Bayesian probability judgements) despite having visualisation support; I suspect that the Mechanical Turk workers weren’t trying quite as hard as the students typically employed in previous experiments.

Isn’t it obvious?

Some of the evaluation papers had the unfortunate effect of forcing me to crack a wry smile and ask myself:

Why oh why. Why bother at all?

Let’s pick one example to illustrate the point. It was an evaluation of the relative efficiency of radial and cartesian charts. Paraphrasing the question posed for one such chart: does wind direction have an effect on efficiency, and if so, which direction leads to greater efficiency (btw: fewer minutes between wheel events means higher efficiency)?

So what was the answer?

Yes, you guessed it: the cartesian charts took the prize.

While this work was partly about evaluation methodology, from the point of view of the visualisation I’d say… isn’t it obvious? Even a basic design sense or understanding of vision would tell you that it’s far easier to scan horizontally to compare values. Do we really need to run empirical evaluations for this sort of thing?

There was quite a bit of work at VisWeek, where some design training and capabilities could go a long way to making the research output a whole lot more useful out in the real world.

Better tools

There are a bunch of great tools out there for designers and creators of visualisations; here’s a quick run down.

Visual analytics packages

There are a bunch of software packages out there competing for the custom of the emerging discipline of data scientists, as well as of less well-versed journalists and business folk who are grappling with crippling amounts of data. These can be super handy for any designer who is trying to get a handle on a data set before launching into Illustrator.

Perhaps the most accessible is Tableau (PC only, sadly) which allows you to build up relatively complex interactive visualisations semi-automatically.  They have a free version of the software that’s open to all to use, as long as you don’t mind publishing your visualisations out in the open.

Better development tools

On the technical end, a bunch of languages and frameworks have emerged that can be leveraged to rapidly create performant visualisations. The two main contenders are Processing and D3. They are both open source efforts, sired by MIT and Stanford respectively, with very active communities and lots of shared code for you to build on.

Processing is an open source Java-like language aimed at promoting software literacy within the visual arts, which has been designed for quickly creating visuals and visualisations.  People have used Processing to create a wide array of experiences, from sound sculptures, data art and info graphics, to full on applications.

D3 is a Javascript library that is more narrowly scoped around creating visualisations on the web.  You can use it to manipulate data and create interactive, animated SVG visualisations.
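For a flavour of what that looks like in practice, here’s a minimal sketch of D3’s data-join and transition idioms – my own illustration, with made-up numbers – binding an array of values to SVG bars and animating an update:

```typescript
// A minimal D3 (v7) sketch: bind an array of numbers to SVG bars,
// then animate an update – the data-join and transition idioms the
// library is built around.
import * as d3 from "d3";

const svg = d3.select("body").append("svg")
  .attr("width", 400)
  .attr("height", 120);

function render(data: number[]): void {
  svg.selectAll<SVGRectElement, number>("rect")
    .data(data)
    .join("rect")                  // enter/update/exit in one call
    .transition().duration(750)    // animate any changes to the bars
    .attr("x", (_, i) => i * 45)
    .attr("y", d => 120 - d)
    .attr("width", 40)
    .attr("height", d => d);
}

render([30, 80, 55, 100, 20]);
// Re-rendering with new data animates the bars to their new heights.
setTimeout(() => render([60, 20, 95, 40, 110]), 1500);
```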

So… What’s the verdict?

I have to say I learnt a lot by attending VisWeek; it was particularly valuable for me to get a sense for where the field is at and where it’s going.  The more focused sessions, in particular, helped me get valuable footholds into areas of work relevant to some of the projects here at RMA.

However, I wonder whether there is a place for a more industry- or practitioner-focused visualisation conference, where the papers and presentations (from researchers and practitioners alike) could be focused on innovations in visualisation that are more likely to be adopted outside of a research context.

Another big takeaway is that the field is still sorely lacking integration with the design community, and in particular the visual design community. The researchers are nearly all computer scientists by training; and it really shows.

Designing for fluency – Part 2 (by Mischa Weiss-Lijn)

Fluency, cognitive choreography and designing better workflows

If you’ve read part 1, you’ll know all about what Fluency is and how understanding it subtly, but importantly, changes the way you think about usability. Now I’d like to take this one step further by introducing a couple of other concepts from Kahneman’s book on the fascinating world of modern cognitive psychology, as well as one that I’ve made up all on my own: ‘Cognitive Choreography’.

Let’s start by briefly explaining what I mean by the term cognitive choreography. The workflows that we are called to design can place varied and diverse demands on our darling users. It’s often about going through the motions: filling in payment details, skimming content, navigating. But people are often also being asked to make critical decisions and perform complex tasks. These different types of engagement require very different types of cognition (as we’ll see later). And in this article I’ll go through some relatively new research that points towards ways in which designers can encourage the right type of cognition for the right moment; what I’ve called Cognitive Choreography.

The two Systems: Thinking fast and thinking slow

Kahneman’s book centres on what he calls the ‘two systems’; two modes of thinking, one fast, System 1, and the other slow, System 2.  These have very different capabilities and constraints, and as a result some important implications for design.

System 2 is what does the conscious reasoning; it is the deliberate, rational voice in your head that you like to think is in control. System 1 is the automatic, largely unconscious, part of your mind where most of the work actually gets done. Although the reality is inevitably rather more complicated, it’s helpful to adopt Kahneman’s conceit of these systems as two distinct characters.

System 1: The associative machine

  • Fast
  • Effortless
  • Automatic and involuntary
  • Can do countless things in parallel
  • Slow learning
  • Generates our intuitions
  • Driven by the associations of our memories and emotions
  • Uses heuristics (rules of thumb), that are often right, but sometimes very wrong

System 2: The lazy controller

  • Slow
  • Effortful
  • Selective (lazy) and limited in capacity
  • Does things in serial
  • Flexible
  • Uses reason, logic and rationalisation

System 1 effortlessly generates impressions and feelings (“xyz link looks most relevant”) that are the main sources of explicit beliefs and deliberate choices of System 2 (“I’ll click on xyz link”).  The problem here is that System 1 is error prone and System 2 is lazy.  System 2 can overcome System 1’s shortcomings, by critically examining the intuitions System 1 generates, but will often not.  I think that as designers we should think about how we can help System 2 spring into action when the moment is right.

Before looking at how we can help the right system spring into action, let’s look at how System 1 can sometimes lead users astray.

Biases: Thinking fast and wrong

System 1 has evolved to be quick and get things mostly right, most of the time, for your average hunter-gatherer in the long-gone Pleistocene (i.e. before we got all civilized, started farming and building urban jungles). As a result it doesn’t adhere to the tenets of logical and statistical reasoning that underpin what we think of as ‘rational’ thought; it uses heuristics – rough rules of thumb that are easily computed and generally work. And that leads to errors, which, in our newfangled not-too-much-hunting-or-gathering-needs-doing kind of world, are more problematic than they used to be.

Here is a brief listing of some of the things that can go wrong. If you want to really learn the slightly scary truth about how rubbish we (and yes, that includes you) are at making judgements and choices, then I refer you to Kahneman’s book, or perhaps take a look at this scary wikipedia list of cognitive biases.

System 1 is biased to believe and confirm what it has previously seen, or is initially presented with (luckily for the advertising industry). So it tends to be:

  1. Overconfident in beliefs based on small amounts of evidence (“The site would be better if it was purple. My wife said so.”).
  2. Very vulnerable to framing effects (“90% fat free” vs “10% fat”).
  3. Insensitive to base-rates (i.e. a thing’s general prevalence). Insurance sales and the tabloid press play off of this all the time; it is the gravity of the event that matters – the fact that it’s very very unlikely doesn’t have nearly as much impact as it should. So, for example, you may be tempted to insure your brand new fridge against breakdown in its first year, because you’re so dependent on having it work, even though it’s extremely unlikely that anything will go wrong.

This is because System 1:

  1. Focuses on the evidence presented and ignores what isn’t
  2. Neglects ambiguity and suppresses doubt
  3. Exaggerates the emotional consistency of what’s new with what’s already known

System 1 infers and invents causes, even when something was just down to chance. So, for example, in The Black Swan Nassim Taleb relates that when bond prices initially rose the day Saddam Hussein was captured, Bloomberg ran with “Treasuries rise: Hussein capture may not curb terrorism”.  Then, half an hour later the prices fell, and the headline changed: “Treasuries fall: Hussein capture boosts allure of risky assets”.  The same event can’t explain bond prices going both up and down; but because it was the major event of the day, System 1 automatically creates a causal narrative; satisfying our need for coherence.

System 1 will dodge a difficult question and instead substitute the answer to an easier one. So, for example, in predicting the future performance of a firm, one instinctively relies on its past performance. When assessing the strength of a candidate, one instinctively relies on whether we liked them or not.

System 1 does badly with sums, because it only deals with typical exemplars, or averages. So, for example, when researchers asked how much people would be willing to pay to save either 2,000, 20,000 or 200,000 birds after an oil disaster, people suggested very similar sums of money. It wasn’t the number of birds that was driving them, but the exemplar: the image of a bird soaked in oil. Similarly, with visual displays people can very easily tell you the average length of a bunch of visual elements, but not the sum of their lengths.

Let’s not forget though, that while it has its failings, System 1, does a pretty impressive job of things most of the time, for most people.  In fact System 1 is crucial to the kind of deep creativity that us designers pride ourselves on.  It’s what helps you get things done fast, and well.  It’s what results from practice and is the basis of most forms of expertise; you wouldn’t want to drive your car without it!

As a result, a lot of the time it’s appropriate, and indeed better, for System 2 to put its lazy feet up and give System 1 the reins.

So what’s the overall takeaway for designers?  Well, one is that, if users are at a point where it’s important that they critically inspect the facts, and overcome their pre-conceptions and first impressions; then System 2 needs to be on the job.  Otherwise, we can leave System 1 in the driving seat.

So how can designers engage in Cognitive Choreography, and help ensure users have the right System in the driving seat, at the right time? Well, one approach is to use Fluency.

Cognitive Choreography

Fluency and switching users between Systems

I would guess that there are many things designers can do to knowingly encourage users to engage the right cognitive faculties for the task at hand; but one interesting and counterintuitive approach is to use Fluency.  That’s what I’m going to focus on here.

To recap from my previous post on the subject, Fluency is the brain’s intuitive sense of how hard your poor brain is being asked to work on something.  Lots of things will impact your sense of fluency as illustrated in the graphic below.

As well as being the key to really understanding what is going on behind users’ perceptions of usability and beauty, it just so happens that we can use the Fluency of our designs to engage System 2. From an evolutionary perspective, the reason we have this intuitive sense of Fluency is to have an alarm bell that will wake System 2 up when things aren’t going smoothly and we need more careful, bespoke, thought. When an experience is Disfluent and creates what Kahneman calls “Cognitive Strain”, System 2 is mobilised. Thus we, as designers, can actively engage, and disengage, System 2, by controlling the many levers we have at our disposal to change the Fluency of a UI.

So, in case you’re not convinced, here’s one of the experiments that demonstrate this sort of effect in action. A bunch of Princeton students were given a set of 3 short brain-twisting problems to solve. These problems were designed to have an answer that would seem obvious to System 1, but was in fact wrong. To get the right answer, you’d need to have gotten System 2 in the game. Here’s an example:

If it takes 5 machines, 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

100 minutes OR 5 minutes

When students were shown the problems presented in an ordinary, legible font, 90% of them got at least one problem wrong. When the problems were presented using a small, poorly contrasted font, only 35% of them did. Yep, that’s right: making the font less legible resulted in an almost 200% uplift in performance. (btw, the answer is ‘5 minutes’ – each machine makes one widget in 5 minutes, so 100 machines make 100 widgets in those same 5 minutes.)

Low Fluency creates cognitive strain, which encourages the user to activate System 2, which thinks things through and gets to the right answer. High fluency does the opposite, encouraging the user to leave System 1 in control.

So the design implication here is that when you come to designing a portion of your flow where it’s critical that System 2 be fully engaged, it may be time to purposely create a low fluency experience, using the array of tools at your disposal (e.g. font legibility, contrast, layout, motion, copy etc.).

When to use Fluency and Cognitive Choreography

So we have a bunch of ways we can make an experience more, or less, Fluent; now we need to understand when to use this to encourage users to apply the right kind of cognition to the task at hand.

Perhaps the first, and most important, thing to say is that you will need to be sparing and purposeful with low Fluency UIs. Low Fluency is by its nature unpleasant, and on top of that using System 2 takes effort, so, it’s no fun.

However, the science of System 1’s failings gives us some clear pointers as to where we should consider putting the brakes on Fluency.

You should consider engaging System 2, with a low Fluency UI when:

  1. Critical, high risk, decisions are being made
  2. The user is being engaged in a task that you know has features which will lead System 1 astray. So the user will be asked to:
     a. Draw inferences based on very small sample sizes
     b. Draw inferences based on incomplete information; e.g. they are given a small part of the story, or no information about base rates (i.e. general prevalence of a thing)
     c. Make decisions based on potentially framed and biased messages from an interested party
     d. Mentally work with sums, rather than averages or prototypes

Wrapping it up

Most of the time you will want to maximise Fluency and thus usability, encouraging users to coast along primarily using the rapid, intuitive outputs of tireless System 1. But you can reduce critical errors, and increase the quality of significant decisions and judgements, if you selectively lower the Fluency of your UI to make sure the lazy, but smart, System 2 is fully engaged.

Of course, there is a balance to be struck here, as designers we will be hard pressed to make the experience as disfluent as psychologists can when doing experiments.  In fact it would be super valuable if people in the HCI community would take a closer look at this, and measure the effectiveness of lowering Fluency to within the bounds of commercial acceptability.

Another interesting tension here is that many of us designers are working on systems primarily aimed at realising and increasing sales. So even if users are making critical, and expensive, decisions based on incomplete information, it’s in your client’s interest that System 2 stays as lazy and disengaged as possible. However, there are countless digital experiences that support productive and often critical processes or decision making. That’s what we focus on here at RMA, and that’s where doing a little Cognitive Choreography could come in handy.

Designing for fluency – Part 1 (by Mischa Weiss-Lijn)

From the new psychology of Fluency to usability, beauty and beyond

Having had a background in (proper) psychology before emerging as a designer, I’ve often been dubious about the value of this fascinating ‘science of the mind’ for the practice of design. Every now and then, you’ll hear some strained reference to Fitts’s law from a recent MSc graduate; but let’s face it, in our day to day, very little psychology is actually brought to bear.

So, it came as a surprise to find a treasure trove of design insights, when reading the excellent “Thinking fast and slow” by the Nobel prize winning psychologist Daniel Kahneman.  Let me be clear; it’s not a book about design, it’s a pretty hardcore psychology book.  It’s about how we think and reason; not how we would like to think we think, but how we actually think.  Warts and all.  My hunch is that really understanding that, and the warts especially, could be a valuable tool for designers.

Fluency

One example I’d like to pick out is Kahneman’s treatment of a phenomenon generally termed ‘Processing Fluency’, which he calls ‘Cognitive Ease’. Like it or not – and more ‘usability’-minded designers may well not – the perception of usability, trustworthiness and beauty is partially dependent on the myriad of superficially unrelated factors that drive the fluency of cognitive processing (don’t worry, I’m about to explain).

Yes, that’s right. You can do things to make people think a design is more trustworthy or beautiful, without actually making it more trustworthy or beautiful. At all.

So let me explain.

As Kahneman explains it, our brain has a number of built-in dials (you could think of them as a bunch of sixth senses) that are constantly, effortlessly and unconsciously updating us on (evolutionarily) important aspects of our environment. So, for example: “What’s the current threat level?”, “Is anything new going on?”. One of these is Processing Fluency, which is basically a measure of how hard your poor brain is being asked to work. Its basic raison d’être is to let you know when you need to redirect attention or make more effort. However, interestingly for us designers, it ends up having a much broader impact on the way we evaluate things and make decisions. Anything that increases fluency (and there are lots of things that do) will bias many types of (and perhaps all) judgements positively.

This is a, somewhat scarily, broad phenomenon.  Who would have thought that:

  • Rhyming statements seem truer than equivalent non-rhyming ones
  • Shares with more easily pronounced names outperform on the stock market
  • Text written with simpler words is judged to have been written by a more intelligent author

To usability, beauty and beyond

But let’s focus on how this relates to design.

It turns out that anything that increases fluency will positively affect many aspects of the way people perceive, judge, and presumably experience, something. Fluency will make people trust something more, make it feel more familiar, more effortless, more aesthetically pleasing, more valuable; fluency will even make people feel more confident in their own ability to engage with the experience. And these effects can all potentially be brought to bear independently of, and on top of, the actual content of the experience.

What’s powerful here is that there are lots of ways in which you can increase the fluency of your experience; ‘manipulations’ in the parlance of psychologists.  I’ve tried to summarise what I’ve been able to glean from the psychology literature around this in the infographic below.  The thing to remember is that any of these manipulations will positively impact people’s perceptions of your experience.

You can make your copy more fluent, your visual design more fluent, and your flows more fluent.

Making your copy more fluent

Let’s start with copy.  A bunch of the things we normally think of as best practice, such as using simple straightforward language and uncomplicated syntax, increase fluency.  It’s interesting to realise that such simple things could end up impacting how much people will trust the experience!

Looking at copy from the perspective of fluency gives weight to more flippant techniques, such as the use of rhyme and alliteration. It guides us to think carefully about how easy copy is to say out loud. All these things improve what is called ‘Phonological Fluency’, i.e. how easy something is to say; how easily it rolls off the tongue. If it’s easier to say, it’s easier to think.

Then, consider ‘Orthographic Fluency’, i.e. how easily one can translate written text into spoken words and meaning. This guides us to avoid creative spellings (e.g. “Tumblr 4ever”). It gives a clear rationale for always using the most direct, succinct and approachable notation available (e.g. “1” not “one”, “%” not “percent”).

Making your visual designs more fluent

Font designers will be happy to hear that there have been lots (and lots) of experiments that show the impact of the clarity and readability of a font on fluency, with all the manifold benefits this brings. Readability is not just about readability – it’s about fluency.

Font selection is one thing that contributes towards ‘Physical Perceptual Fluency’, and psychologists have shown that having a good level of contrast does too (for fonts in particular, but presumably it will be just as important for UI elements). Of course that’s not where it ends; even if psychologists haven’t really looked much deeper, many of the principles behind good, functional visual design, such as leveraging Gestalt grouping principles, must surely drive this Physical Perceptual Fluency.

There’s also been a bunch of work looking at how the length of time people have to see and absorb a display impacts fluency; they call it Temporal Perceptual Fluency. The less time, the less fluent. This probably doesn’t have too much impact on most design applications unless you are presenting stuff for less than a second. But my hunch is that judicious use of motion design will also contribute to this type of fluency.

Making your flows more fluent

There has been a bunch of work looking at the role memory plays in fluency.

Most obviously using common UI patterns will create a more fluent experience by virtue of their familiarity.  Similarly when experiences are designed to be easier to learn and remember they are going to be more fluent. But you could have guessed that.

Something you might not have guessed is that you can use ‘Priming’ to make an experience more fluent. Priming is a psychological technique that basically boils down to exposing people to related stimuli before showing them the experience you’re interested in. This activates the relevant areas of your brain, making it easier to process the experience once it comes along. Is this something we could use as designers? Perhaps we can. For example, we could sequence content and interactions to prime parts of the experience that we expect to be challenging.

What else?

While psychologists have already discovered lots of ways to manipulate fluency, I’d guess that there are many more waiting to be discovered. Psychologists haven’t been thinking about design, so they’ve not really been looking in all the right places. In fact, I’ve taken a couple of liberties to add some obvious candidates to my graphic which are not (yet) grounded in empirical evidence (motion design and visual hierarchy). One area that doesn’t seem to have been explored at all is how to make interactions more fluent. And there is surely much more we can do to create fluency in user journeys and IA. Perhaps someone should look into it!

And so… what?

Is this just dressing up our time-honoured notions of usability in fancy new scientific jargon? Or does it give us a genuinely new and useful conceptual tool for creating better experiences? After all, we’ve had related concepts before, for example Cooper’s ‘Cognitive friction’ in his classic book “The Inmates Are Running the Asylum”. Making experiences as easy and frictionless as possible is at the heart of all good digital design techniques.

To be honest, I’ve only just started thinking about this, and so haven’t yet been able to put it into practice. But my hunch is that there are a couple of key things that the concept of fluency offers which are interesting, and potentially useful. Firstly, there is the evidence that a broader set of qualities goes into making an experience frictionless, or fluent, than we’ve traditionally allowed for. Secondly, and more importantly, there is the discovery that any and all of these impact the full range of people’s perception and memory of an experience. We want to create experiences that people feel good about. Depending on the experience, we want our users to come away persuaded, happy and confident. An understanding of how to create fluency gives us a new way of thinking about how to get the design outcomes we’re after.

Further reading for the curious

Part 2 of this fine blog: Fluency, cognitive choreography and designing better workflows, which looks at how we should be using our grip on Fluency to help users think in the right way, depending on the experiences they are engaged in.

Alter, Adam L., and Daniel M. Oppenheimer. “Uniting the tribes of fluency to form a metacognitive nation.” Personality and Social Psychology Review 13.3 (2009): 219-235.

Kahneman, Daniel. “Thinking, Fast and Slow.” 2011.