VisWeek 2012 – a UX designer’s view of the year’s biggest Visualisation conference (By Mischa Weiss-Lijn)

At RMA we’ve always tended to focus on projects that involve quite a bit of Data and Information Visualisation work (or ‘VisualiZation’ for those on the other side of the pond).  While we’ve become known for delivering to a very high quality in this area, we’ve drawn on our skills as Information and Interaction Designers to create our solutions.  We’ve all read Tufte, Yau and many others, but we haven’t tended to connect deeply with the Information and Data Visualisation research communities.

So, we decided that one of us should tentatively dip our designerly toes in the rarefied waters of VisWeek 2012, the world’s biggest Visualisation conference, and get a taste for what the larger visualisation community has to offer folks like us: folks that design and build visualisations that people actually use, day in, day out.

And so off I went. Now that I’m back, here is a subjective view of what a largely research-oriented visualisation conference has to offer those designing interactive visualisations for use in real-world settings.

It’s big and getting bigger

Having done my PhD in Info Vis many years ago, what impressed me right off the bat is how much the field has grown.  There are pretty big and well-established groups focused on information visualisation across the world.  Here in Europe, Germany seems to be a particularly bright spot (e.g. the University of Konstanz), while the UK also has quite a bit going on, with hotspots at City University and Oxford, among others.

While the field has been getting increasingly fashionable over the last few years, it seems to be reaching a tipping point where, I believe, interactive visualisations will enter the mainstream of digital experience and thus become ever more relevant for designers of all stripes.

This may be old news to some, but there are a number of forces at work here:

  1. Big data: the tech catchphrase of the moment, which basically boils down to data being available in unprecedented volumes and varieties, and accumulating at ever-increasing velocities.
  2. Better tools: back when I was doing my info vis research, you had to build pretty much every visualisation from scratch. Now there are lots of great tools to get you started (more on those later).
  3. Infographics in the media: the recent surge in the production of infographics has brought the possibilities of visualising data into the public imagination.  Pioneering publications such as the New York Times, the Guardian and Wired, and folk such as David McCandless, have popularised visualisations that convey powerful narratives using data.

All this means that while visualisation work has always been important here at RMA, it’s likely to start becoming something that designers everywhere will be encountering more and more.

More grown up, less core innovation?

While information visualisation has gotten ‘bigger’, the research seems to have changed somewhat in character.  The work seems to be focused more on evaluating and refining existing visualisation techniques and applying them to new and challenging domains.  That’s all good, and important, but the flip side (and this is probably a bit controversial) is that, from what I saw at VisWeek, there seems to be less valuable creative innovation around visualisation techniques.

Let’s dive into each of these topics in turn.

Domain focused visualisation

The research has diversified into looking in detail at how visualisation can support a host of important new application areas: from the somewhat unapproachable visualisations done for cyber security and bioscience, to the somewhat more comprehensible work in the medical and finance domains.

Here are a few choice examples from the conference.

Visualizing Memes on Twitter

A lot of people hope to transform the firehose of Twitter activity into something intelligible.  This could have important applications in lots of areas where people stand to gain from a real-time understanding of consumer and industry sentiment – an area of considerable interest in financial markets.  Another area where this could be important is the ability to detect major events as they happen; think earthquakes, bush fires and the like.

Whisper is a nice piece of work that allows you to search for particular types of event, to see where the discussion, and thus the event, originates, where the flow goes thereafter, and how people feel about it as time progresses (positive = green, negative = red).

Leadline is more focused on allowing people to detect significant new events as they happen.  The ‘signal strength’ of automatically clustered ‘topics’ is visualised.  You can filter on person, time range, event magnitude or location to focus on an event and understand it.

Medical Visualisation

There was a bunch of work around specialist information visualisations for the medical profession.  As medical providers open up more and more of their data, this is an area that is bound to keep on growing.

MasterPlan is a visualisation tool that was custom built to help architectural planners, working on the renewal of a rather old Austrian hospital, understand how patients flow between the different wards and units.  The visual analysis afforded by the tool allowed them to see the key patterns and identify a better way to cluster and place things.

The OutFlow visualisation, from IBM, shows the outcomes (good = green, bad = red) of successive treatments administered to a patient (a very ill one, in this case!).

Less core innovation

I stand to be corrected here, but it felt to me that, while there was lots of innovation, I didn’t see visualisation designs that were applicable outside of their narrow solution area.  There is innovation to be sure, but it’s generally more evolution than revolution.

For example, Facettice was a rather beautiful bit of work for navigating faceted categorical data sets.  While lovely looking, it’s rather impractical in terms of readability, interaction and real-estate consumption.

Another example would be Contingency Wheel++, a rather impressive and powerful tool for exploring datasets with quite a few columns (say 20) and massive numbers of rows.  While it’s great work in many ways (and, I’m afraid, a little too complex to explain here), I wonder how broadly something like this could be used.

Of course, innovation doesn’t need to be game-changing; small incremental improvements are generally what move us forward.  One bit of work that was particularly nice in this regard was a paper on ‘sketchy rendering’: a new library for the Processing language (see below for more on that) that allows you to render interactive charts and visualisations in a sketchy style; something that could be quite handy for testing early prototypes.

More and better evaluations

It was great to see a much more robust emphasis on evaluating the value of visualisation techniques and applications.  Back when I was doing research in this area (a decade ago), people were busy churning out new ways to visualise things without generally stopping to check if they were any use.  Nowadays the balance seems to have tipped away from pure invention to more of a focus on application and evaluation.

Automated user testing on the cheap

One particularly interesting development was the very prevalent use of Amazon’s Mechanical Turk for running evaluations. The basic idea is that you set up your evaluation and software experience so that it is totally automated and available online (so, a web application with accompanying forms).  You then recruit ‘users’ from the hordes of ‘Mechanical Turk Workers’ waiting to complete tasks for relatively small amounts of cash.  You can even insist that your workers have certain qualifications (i.e. have passed particular tests) to ensure you get users able to complete your tasks.

Despite the obvious attractions, there are definitely some issues here – in particular around the limitations of the Mechanical Turk Worker population (mostly US- and India-based) and the ‘quality’ of response you get. One paper in particular claimed, in contradiction of previous work, that people performed very poorly on a task (Bayesian probability judgements) despite having visualisation support; I suspect the Mechanical Turk workers weren’t trying quite as hard as the students typically employed in previous experiments.
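For context, a typical Bayesian probability judgement task asks something like: a condition affects 1% of a population; a test catches 90% of true cases but also flags 9% of healthy people; given a positive result, how likely is the condition?  Most people guess far too high.  Here is a quick sketch of the arithmetic (illustrative numbers of my own, not taken from the paper):

```javascript
// Bayes' theorem for the classic base-rate problem.
// The numbers are illustrative only, not taken from the paper discussed above.

function posterior(prior, sensitivity, falsePositiveRate) {
  // P(condition | positive test)
  const pPositive = sensitivity * prior + falsePositiveRate * (1 - prior);
  return (sensitivity * prior) / pPositive;
}

console.log(posterior(0.01, 0.9, 0.09).toFixed(3)); // ≈ 0.092
```

The research question is whether presenting these quantities graphically (e.g. as icon arrays) helps people land nearer the right answer than the raw percentages do.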

Isn’t it obvious?

Some of the evaluation papers had the unfortunate effect of forcing me to crack a wry smile and ask myself:

Why, oh why, bother at all?

Let’s pick one example to illustrate the point.  It was an evaluation of the relative efficiency of radial and Cartesian charts.  Paraphrasing the question for the chart in question: does wind direction have an effect on efficiency, and if so, which direction leads to greater efficiency?  (Btw: fewer minutes between wheel events means higher efficiency.)

So what was the answer?

Yes, you guessed it: the Cartesian charts took the prize.

While this work was partly about evaluation methodology, from the point of view of the visualisation I’d say… isn’t it obvious?  Even a basic design sense or understanding of vision would tell you that it’s far easier to scan horizontally to compare values.  Do we really need to run empirical evaluations for this sort of thing?

There was quite a bit of work at VisWeek where some design training and capability could go a long way towards making the research output a whole lot more useful out in the real world.

Better tools

There are a bunch of great tools out there for designers and creators of visualisations; here’s a quick rundown.

Visual analytics packages

There are a bunch of software packages out there competing for the custom of the emerging discipline of data scientists, as well as of the less well-versed journalists and business folk who are grappling with crippling amounts of data.  These can be super handy for any designer trying to get a handle on a data set before launching into Illustrator.

Perhaps the most accessible is Tableau (PC only, sadly), which allows you to build up relatively complex interactive visualisations semi-automatically.  They have a free version of the software that’s open to all, as long as you don’t mind publishing your visualisations out in the open.

Better development tools

On the technical end, a bunch of languages and frameworks have emerged that can be leveraged to rapidly create performant visualisations.  The two main contenders are Processing and D3.  Both are open source efforts, born at MIT and Stanford respectively, with very active communities and lots of shared code for you to build on.

Processing is an open source Java-like language aimed at promoting software literacy within the visual arts, and designed for quickly creating visuals and visualisations.  People have used Processing to create a wide array of experiences, from sound sculptures, data art and infographics to full-on applications.

D3 is a JavaScript library that is more narrowly scoped around creating visualisations for the web.  You can use it to manipulate data and create interactive, animated SVG visualisations.
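To give a flavour of the data-driven approach, here is a minimal, hand-rolled sketch in plain JavaScript – not real D3 code, and with no browser or library dependency (the function name and data are my own invention).  The core idea D3 is built on is binding an array of data to visual elements, with each attribute computed from the datum:

```javascript
// A miniature, dependency-free sketch of the idea behind D3:
// map each datum to an SVG element whose attributes are functions of the data.
// Real D3 code would use selections instead, e.g. d3.select(...).data(...).enter().

function barChartSVG(data, width, height) {
  const max = Math.max(...data);
  const barWidth = width / data.length;
  const bars = data
    .map((d, i) => {
      const h = (d / max) * height;  // scale the value to a pixel height
      const x = i * barWidth;
      const y = height - h;          // SVG's y-axis points downwards
      return `<rect x="${x}" y="${y}" width="${barWidth - 2}" height="${h}"/>`;
    })
    .join("");
  return `<svg width="${width}" height="${height}">${bars}</svg>`;
}

console.log(barChartSVG([4, 8, 15, 16], 200, 100));
```

What D3 layers on top of this pattern – scales, axes, transitions, and efficient updates when the data changes – is what makes it worth using over hand-rolled string building like the above.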

So… What’s the verdict?

I have to say I learnt a lot by attending VisWeek; it was particularly valuable to get a sense of where the field is at and where it’s going.  The more focused sessions especially helped me get valuable footholds into areas of work relevant to some of our projects here at RMA.

However, I wonder whether there is a place for a more industry- or practitioner-focused visualisation conference, where papers and presentations (from researchers and practitioners alike) could focus on innovations in visualisation that are more likely to be adopted outside of a research context.

Another big takeaway is that the field is still sorely lacking integration with the design community, and in particular the visual design community.  The researchers are nearly all computer scientists by training; and it really shows.