VisWeek2012 – a UX designer’s view of the year’s biggest Visualisation conference (By Mischa Weiss-Lijn)

At RMA we’ve always tended to focus on projects that involve quite a bit of Data and Information Visualisation work (or ‘VisualiZation’ for those on the other side of the pond). While we’ve become known for delivering to a very high quality in this area, we’ve drawn on our skills as Information and Interaction Designers to create our solutions. While we’ve all read Tufte, Yau and many others, we haven’t tended to connect deeply with the Information and Data Visualisation research communities.

So, we decided that one of us should tentatively dip our designerly toes in the rarefied waters of VisWeek 2012, the world’s biggest Visualisation conference, and get a taste for what the larger visualisation community has to offer folks like us: folks that design and build visualisations that people actually use day in, day out.

And so, off I went. Now that I’m back, here is a subjective view of what a largely research-oriented visualisation conference has to offer those working on designing interactive visualisations for use in real-world settings.

It’s big and getting bigger

Having done my PhD in Info Vis many years ago, I was impressed right off the bat by how much the field has grown. There are pretty big and well-established groups focused on information visualisation across the world. Here in Europe, Germany seems to be a particularly bright spot (e.g. the University of Konstanz), while the UK also has quite a bit going on, with hotspots at City University and Oxford, among others.

While the field has been getting increasingly fashionable over the last few years, it seems to be reaching a tipping point, where I believe interactive visualisations will enter the mainstream of digital experience and thus become ever more relevant for designers of all stripes.

This may be old news to some, but there are a number of forces at work here:

  1. Big data: the tech catchphrase of the moment, which basically boils down to data being available in unprecedented volumes and varieties, and accumulating at ever-increasing velocities.
  2. Better tools: back when I was doing my info vis research, you had to build pretty much every visualisation from scratch. Now there are lots of great tools to get you started (more on those later).
  3. Infographics in the media: the recent surge in the production of infographics has brought the possibilities of visualising data to the public imagination. Pioneering publications such as the New York Times, the Guardian and Wired, and folk such as David McCandless, have popularised visualisations that convey powerful narratives using data.

All this means that while visualisation work has always been important here at RMA, it’s likely to start becoming something that designers everywhere will be encountering more and more.

More grown up, less core innovation?

While information visualisation has gotten ‘bigger’, the research seems to have changed somewhat in character. The work seems to be focused more on evaluating and refining existing visualisation techniques and applying them to new and challenging domains. That’s all good, and important, but the flip side (and this is probably a bit controversial) is that, from what I saw at VisWeek, there seems to be less valuable creative innovation around visualisation techniques.

Let’s dive into each of these topics in turn.

Domain focused visualisation

The research work has diversified into looking in detail at how visualisation can support a host of important new application areas, from the rather unapproachable visualisations done for cyber security and bioscience to the somewhat more comprehensible work in the medical and finance domains.

Here are a few choice examples from the conference.

Visualizing Memes on Twitter

A lot of people hope to transform the firehose of Twitter activity into something intelligible. This could have important applications in lots of areas where people stand to gain from a real-time understanding of consumer and industry sentiment – an area of considerable interest in financial markets. Another area where this could be important is being able to detect major events as they happen; think earthquakes, bush fires and the like.

Whisper is a nice piece of work that allows you to search for particular types of event, to see where the discussion, and thus the event, originates, where the flow goes thereafter, and how people feel about it as time progresses (positive = green, negative = red).

Leadline is more focused on allowing people to detect significant new events as they happen. The ‘signal strength’ of automatically clustered ‘topics’ is visualised. You can filter on person, time range, event magnitude or location to focus on an event and understand it.

Medical Visualisation

There was a bunch of work around specialist information visualisations for the medical profession. As medical providers publish more and more open-access data (e.g. http://data.medicare.gov), this is an area that is bound to keep on growing.

MasterPlan is a visualisation tool that was custom-built to help architectural planners, working on the renewal of a rather old Austrian hospital, understand how patients flow between the different wards and units. The visual analysis afforded by the tool allowed them to see the key patterns and identify a better way to cluster and place things.

The OutFlow visualisation, from IBM, shows the outcomes (good = green, bad = red) of successive treatments administered to a patient (and a very ill one in this case!).

Less core innovation

I stand to be corrected here, but it felt to me that, while there was plenty of innovation, I didn’t see visualisation designs that were applicable outside of their narrow solution area. There is innovation to be sure, but it’s generally more evolution than revolution.

For example, Facettice was a rather beautiful bit of work for navigating faceted categorical data sets. While lovely looking, it’s rather impractical in terms of readability, interaction and real-estate consumption.

Another example would be Contingency Wheel++, a rather impressive and powerful tool for exploring datasets with quite a few columns (say 20) and massive numbers of rows. While it’s great work in many ways (and I’m afraid a little too complex to explain here), I wonder how broadly something like this could be used.

Of course, innovation doesn’t need to be game changing; small incremental improvements are generally what move us forward. One bit of work that was particularly nice in this regard was a paper on ‘sketchy rendering’: a new library for the Processing language (see below for more on that) that allows you to render interactive charts and visualisations in a sketchy style – something that could be quite handy for testing early prototypes.

More and better evaluations

It was great to see that there seems to be a much more robust emphasis on evaluating the value of visualisation techniques and applications.  Back when I was doing research in this area (a decade ago) people were busy churning out new ways to visualise things, without generally stopping to check if they were any use.  Nowadays the balance seems to have tipped away from pure invention to more of a focus on application and evaluation.

Automated user testing on the cheap

One particularly interesting development was the very prevalent use of Amazon’s Mechanical Turk capability for doing evaluations. The basic idea is that you set up your evaluation and software experience so that it is totally automated and available online (so a web application with accompanying forms). You then recruit ‘users’ from the hordes of ‘Mechanical Turk Workers’ waiting to complete tasks for relatively small amounts of cash. You can even insist that your workers have certain qualifications (i.e. have passed particular tests), to ensure you get users able to complete your tasks.
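To make the mechanics a little more concrete, here’s roughly what posting such a task could look like. This is a minimal sketch using the current AWS SDK for JavaScript (the tooling available in 2012 was different); the task title, reward, numbers and study URL are all invented for illustration.

```javascript
// Sketch: posting an automated evaluation as a Mechanical Turk HIT.
// Assumes the AWS SDK for JavaScript (v2); all values are illustrative.
var AWS = require('aws-sdk');
var mturk = new AWS.MTurk({ region: 'us-east-1' });

// An ExternalQuestion points workers at your own hosted web experiment,
// which is where the actual automated evaluation runs.
var question =
  '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/' +
  'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">' +
  '<ExternalURL>https://example.com/vis-study</ExternalURL>' +
  '<FrameHeight>600</FrameHeight>' +
  '</ExternalQuestion>';

mturk.createHIT({
  Title: 'Judge values in a chart (about 5 minutes)',
  Description: 'Answer a few questions about a small visualisation.',
  Reward: '0.50',                    // USD, passed as a string
  MaxAssignments: 50,                // how many distinct workers you want
  AssignmentDurationInSeconds: 600,
  LifetimeInSeconds: 86400,
  Question: question,
  // Insist on a qualification: here, a 95%+ approval rate, one way to
  // screen for workers who take tasks seriously.
  QualificationRequirements: [{
    QualificationTypeId: '000000000000000000L0', // built-in: % approved
    Comparator: 'GreaterThanOrEqualTo',
    IntegerValues: [95]
  }]
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log('HIT created:', data.HIT.HITId);
});
```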

Despite the obvious attractions, there are definitely some issues here – in particular around the limitations of the Mechanical Turk Worker population (mostly US- and India-based) and the ‘quality’ of response you get. There was one paper in particular that claimed, in contradiction of previous work, that people performed very poorly on a task (Bayesian probability judgements) despite having visualisation support; I suspect that the Mechanical Turk workers weren’t trying quite as hard as the students typically employed in previous experiments.

Isn’t it obvious?

Some of the evaluation papers had the unfortunate effect of forcing me to crack a wry smile and ask myself:

Why oh why. Why bother at all?

Let’s pick one example to illustrate the point. It was an evaluation of the relative efficiency of radial and Cartesian charts. Paraphrasing the question for the chart below: does wind direction have an effect on efficiency, and if so, which direction leads to greater efficiency? (Btw: fewer minutes between wheel events means higher efficiency.)

So what was the answer?

Yes, you guessed it: the Cartesian charts took the prize.

While this work was partly about evaluation methodology, from the point of view of the visualisation I’d say… isn’t it obvious? Even a basic design sense or understanding of vision would tell you that it’s far easier to scan horizontally to compare values. Do we really need to run empirical evaluations for this sort of thing?

There was quite a bit of work at VisWeek where some design training and capabilities could go a long way to making the research output a whole lot more useful out in the real world.

Better tools

There are a bunch of great tools out there for designers and creators of visualisations; here’s a quick rundown.

Visual analytics packages

There are a bunch of software packages out there competing for the custom of the emerging discipline of data scientists, as well as less well-versed journalists and business folk who are grappling with crippling amounts of data. These can be super handy for any designer who is trying to get a handle on a data set before launching into Illustrator.

Perhaps the most accessible is Tableau (PC only, sadly), which allows you to build up relatively complex interactive visualisations semi-automatically. They have a free version of the software that’s open to all to use, as long as you don’t mind publishing your visualisations out in the open.

Better development tools

On the technical end, a bunch of languages and frameworks have emerged that can be leveraged to rapidly create performant visualisations. The two main contenders are Processing and D3. They are both open-source efforts, sired by MIT and Stanford respectively, with very active communities and lots of shared code for you to build on.

Processing is an open-source, Java-like language aimed at promoting software literacy within the visual arts, and has been designed for quickly creating visuals and visualisations. People have used Processing to create a wide array of experiences, from sound sculptures, data art and infographics to full-on applications.

D3 is a Javascript library that is more narrowly scoped around creating visualisations on the web.  You can use it to manipulate data and create interactive, animated SVG visualisations.
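To give a flavour of the approach, here’s a minimal sketch of a D3 bar chart, written against the D3 3.x API that was current around VisWeek 2012; the data and dimensions are invented for illustration.

```javascript
// Sketch: a minimal D3 bar chart (D3 3.x API, current in 2012).
// The data and sizes are invented for illustration.
var data = [4, 8, 15, 16, 23, 42];

var width = 420,
    barHeight = 20;

// A linear scale mapping data values to pixel widths.
var x = d3.scale.linear()
    .domain([0, d3.max(data)])
    .range([0, width]);

var svg = d3.select('body').append('svg')
    .attr('width', width)
    .attr('height', barHeight * data.length);

// The data join: one <rect> per datum, positioned and sized from the data.
svg.selectAll('rect')
    .data(data)
  .enter().append('rect')
    .attr('y', function (d, i) { return i * barHeight; })
    .attr('width', x)
    .attr('height', barHeight - 1);
```

The selectAll/data/enter pattern – joining data to document elements – is the core idea, and it scales from this toy chart up to fully animated, interactive visualisations.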

So… What’s the verdict?

I have to say I learnt a lot by attending VisWeek; it was particularly valuable for me to get a sense for where the field is at and where it’s going.  The more focused sessions, in particular, helped me get valuable footholds into areas of work relevant to some of the projects here at RMA.

However, I wonder whether there is a place for a more industry- or practitioner-focused visualisation conference, where the papers and presentations (from researchers and practitioners alike) could be focused on innovations in visualisation that are more likely to be adopted outside of a research context.

Another big takeaway is that the field is still sorely lacking integration with the design community, and in particular the visual design community. The researchers are nearly all computer scientists by training, and it really shows.

Imagine the form of the future

If you had a time machine (and didn’t have anything better to do with it) and travelled back to, say, 1997 and brought back a bunch of forms from webpages and applications, and then compared them to the forms we’re using today, it might be difficult to tell the two sets apart.

Branding aside, forms haven’t moved on much in the last decade or so.

Fifteen years down the line and the way we capture data online and in applications hasn’t fundamentally changed all that much. We’ve refined the process bit by bit, and thanks to folks like Luke Wroblewski and Chui Chui Tan we understand more about how to design more effective forms, but in terms of innovation the form remains firmly stuck in a circa 1997 rut.

The form is one of the basic building blocks that underpins the way we use websites and applications. It’s the gateway that allows us to build interactive systems and services on the web rather than limiting us to just consuming content. But the way in which we use the web is in the process of fundamentally changing, and this has to force us into looking at the way in which we engage with the web around data capture.

As mobile devices, and the infrastructure that they rely on, continue to improve and tempt more and more of us away from the desktop and laptop in our use of the web (through browsers and native apps alike), many of the lessons we’ve learned and relied on are having to be re-learned to work on devices with smaller screens and no physical keyboard or mouse.

Part of the problem is that we’re still using design solutions based on systems that were developed a decade and a half ago on devices that are not the same as those that we’ve ‘learned the rules’ for. Navigating and using a traditional form on a mobile device is a clunky experience at best. Putting aside the problems associated with non-physical keyboards, reduced screen real estate and touch-based navigation, many of the lessons we have learned about form design don’t work for mobile.  This hasn’t gone unnoticed and there are some additions to the established learning that take into account mobile device form factors and mobile user behaviour, but essentially these are just tweaks to the same old form paradigm.

Let’s face it, forms aren’t sexy. What they are is frustrating. No one relishes the prospect of filling in forms, in fact they are regularly cited as points of pain in user feedback. So why is it that something that has been so fundamental to our use of the web and something that causes so much frustration, hasn’t seen any real innovation in over a decade and a half?

As someone who works as part of a design team that specialises in complex, day-in, day-out applications I come up against a lot of form-based interaction. I’ve read Luke’s book and Chui’s excellent blog posts and try to incorporate the benefit of their learning and experience into the forms that sit within the systems I design. But a recent project prompted us as a team to think about breaking away from the traditional and thinking about how we might capture data in the future – on the assumption that surely we won’t still be using the same old forms in another 15 years.

We spent time thinking about alternative means of capturing data that weren’t necessarily tied to keyboard entry. We covered our studio walls in sketches and ideas that used voice recognition, gesture, 2D barcodes and QR codes – even ‘magic’ pens. But we also tried to look at how we could improve the data capture experience without relying on these things.

Amongst the more future-looking concepts, we spent time refining several ideas based on how the form itself might be improved, as well as the means of entering data. After all, not everyone has an iPad or a Wacom Inkling, and regardless of how groundbreaking our ideas are, the majority of people will still be using a keyboard (or touchscreen) to fill in forms for a while yet.

We wanted to design a form that wasn’t tied to a linear process and that would allow the user to enter data at any point and in any order. We had seen similar form-based interfaces during field-based observation, but for all the benefits that the structure of the form gave, the fact that every single field was displayed on a single screen, with no visual hierarchy and the most basic of styling, convinced us we could do better.

From many concepts, we are in the process of refining an idea that allows complex forms to be created in the back-end, but that strips out the complexity in the front-end and places a focus on just the field that is being used.

We want to show that it is possible to create a usable and beautiful form that doesn’t fit the traditional mould. Our thinking (sketched in code after this list) is based on:

  • Focus – stripping out any distracting or irrelevant content and only showing what is required at each stage
  • Flexibility – creating a structure that isn’t reliant on the user following a prescribed, linear process
  • Working with the user, not against them – inspired by old-fashioned greenscreen systems that seemed to be well-suited to data entry
  • Shortcutting – predictive data entry, short codes and keyboard shortcuts that support expert users without compromising the novice user experience
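
To make those principles concrete, here’s a rough sketch of the one-field-at-a-time idea in plain JavaScript. Everything in it – the field schema, the element id, the shortcut keys – is hypothetical and purely illustrative; it isn’t our production implementation.

```javascript
// Sketch: a non-linear, one-field-at-a-time form in plain JavaScript.
// Assumes a page containing <div id="form-root"></div>; the schema,
// ids and shortcut keys are all hypothetical.
var fields = [
  { id: 'name',  label: 'Full name' },
  { id: 'email', label: 'Email address' },
  { id: 'phone', label: 'Phone number' }
];
var values = {};   // answers captured so far, keyed by field id
var current = 0;   // index of the field in focus

// Focus: render only the current field, nothing else.
function render() {
  var field = fields[current];
  document.getElementById('form-root').innerHTML =
    '<label for="' + field.id + '">' + field.label + '</label>' +
    '<input id="' + field.id + '" value="' + (values[field.id] || '') + '">';
  document.getElementById(field.id).focus();
}

function save() {
  var field = fields[current];
  values[field.id] = document.getElementById(field.id).value;
}

// Flexibility: jump to any field, in any order, wrapping at the ends.
function goTo(index) {
  save();
  current = (index + fields.length) % fields.length;
  render();
}

// Shortcutting: keyboard-only navigation for expert users.
document.addEventListener('keydown', function (e) {
  if (e.key === 'Enter' || e.key === 'ArrowDown') { goTo(current + 1); }
  else if (e.key === 'ArrowUp') { goTo(current - 1); }
});

render();
```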
[Image: designer and developer caricature]

10 ways to improve your working relationship with your developers

— a UXer’s guide by Amanda Wright —

1. Respect your developers
As user experience professionals we demand respect for our ‘craft’, so you should respect your developers’ code in return. Remember, it’s easier to imagine than it is to build.

2. Don’t throw things over the fence
There’s nothing worse than finishing everything to the nth degree, handing it over to the developers and then vanishing, with scarcely a thought given to the implementation.

If you leave the developers to their own devices, don’t complain when they start to fill in any blanks or make decisions for you. Make sure you are around to answer questions and help with the QA and release process.

Being available post-release to monitor feedback channels can provide you with valuable feedback and help identify bugs that may have slipped through.

3. Work (largely) in parallel
This is a controversial one, particularly in agency environments where signing off deliverables and meeting deadlines is key. Once sketches have been agreed, developers can start thinking about how to structure the application, giving you a chance to move onto higher fidelity wireframes. Once these are agreed, developers can move onto building the interface. This allows for problems to be identified and solutions to be devised as a team (and documented for posterity if needed).

One caveat is that the visual design should be relatively fixed before you hand over to development as a complete rework of CSS and Javascript is painful. This shouldn’t prevent you from tightening up things as part of a normal design review process.

4. Embrace developers’ problem solving powers
We often like to think of ourselves as the champion problem solvers, but guess what? Developers like to solve a challenge or two as well. I’ve found that identifying the areas in which developers can add value, or even just leaving them a little space to make their own mark on the experience, can do wonders.

During implementation, if they identify a problem that needs resolving, allow them to suggest a solution before you jump in with one of your own; they might just surprise you.

5. Involve developers in user testing
Bringing to life the voice of the people who use your website or service is one of the key objectives for someone in a user experience role. Go one step further by sharing videos and feedback from testing with your developers. If possible, allow them to observe sessions if they have time to attend. Watching people struggle in a user study is worth a thousand times more than just banging on about a user-centred design approach.

6. Use Shared Nomenclature
For an industry that intends to make things clearer, we do love a good buzzword and have a tendency to use lots of acronyms. Using plain language should start when you communicate with your developers and the wider team. The user experience community has recently adopted development terminology and made it our own (Agile and Lean UX spring to mind), so it’s important that everyone is clear what we mean when we use these terms, especially as the mental models can often differ.

7. Make time to share learnings from your industry
Just as you have to contend with countless user experience methods, new interaction paradigms and fine tuning your soft skills, developers (I’m talking largely front-end here) need to overcome the latest browser quirks, keep up to date with new languages and the endless parade of new shiny devices. Both tribes face significant challenges in order to keep up to date. Make time to share what you’ve learned so you both are aware of what’s going on in each other’s world.

8. Learn to be lazy
Sounds a little odd, but there is a long-running joke that a good programmer is a lazy programmer. Developers will strive to eliminate redundancy and automate anything that needs to be done more than twice. Rather than reinventing the wheel, we should look to adopt standard patterns where appropriate and build upon existing user experience work. One way of doing something is always better than three – especially on the same website or service!

9. Be pragmatic
This doesn’t mean that you should stick to something ‘safe’ when you are designing an experience, it means that you should be realistic about budgets and technical constraints that you have to deal with. A simple, functional and well-executed solution is more valuable to your client or company than a stripped back, half-finished solution that would have been great if all the bells and whistles were included. When it comes down to the crunch, the bells and whistles are always the first to go.

10. Don’t be the “I” in team
As the cliché goes, “there’s no ‘I’ in team”. It doesn’t matter if you designed the best experience in the world if the implementation is poor, slow or full of bugs. You are only as good as the developers you are working with.