Monthly Archive for November, 2007

Design & Reach

Following on from my last post, Mike Danziger and I chatted over email and he wrote up some impressions of the InfoVis conference. Stephen Few responded to some of the points, and a couple of subsequent postings (1, 2) and some other comments (3, 4) have shown that people are interested. Sorry for being so late in responding myself – the day job sometimes gets in the way!

For me, the key contribution to the discussion has been Pat Hanrahan’s. I feel the same way he does, and I’m grateful to him for lending some academic respectability to what would otherwise just be my own opinions. From my own pragmatic software-industry perspective, I’d like to say something about how his suggestions could be taken forward.

Delivery mechanisms are key: to appeal to the masses one needs reach. Interactive visualizations must be delivered to people’s eyes & to their fingertips. Static images in papers aren’t enough: people don’t have much time or patience & won’t enjoy having to read lots of text in order to learn how the interaction works.

One approach is to put good visualization capability into commonly used tools such as Excel (1). That way people can manage their data themselves. But because the user can load and edit the data behind the visualizations, the software has to be crafted with considerable skill to provide the necessary flexibility. Each tool also has different extension points & platforms, which in practical terms forces a software company to choose a very small list of supported environments & workflows.

The more obvious route is to exploit the immediacy and universality of web delivery mechanisms. Thanks to Flash, Silverlight & Java there is a huge audience out there with suitable runtimes. It is good to see more and more experimental visualizations using these. (Though problems with data management are still there of course…)

Reach isn’t enough: in order to bring something compelling to people one must embrace designers. Graphic designers, user experience designers, interaction designers, the works! The right kind of designers can keep a visualization clean, useful & informative but also imbue it with style, panache & memorability. There is a design revolution happening now in the software industry & it will sweep up information visualization tools along the way.

The combination of the need for reach and good design is the main reason why I’m so interested in the Adobe platform. Because they already have designers using their tools, they don’t need to woo them to a new platform. Add a massive install base (Flash) and increasingly workable languages (MXML, AS3) and it is hard to dismiss. Nice to see I’m not alone in thinking this.

Sacramento Thoughts

I got back from the IEEE Visualization conference in Sacramento a few days ago – it was highly enjoyable and I met some great people there.

I’ve been struggling to come to terms with the quantity of reading I now have to do. I’ve also found it hard to summarize my thoughts on all that I heard.

I think my personal best paper award would go to Jeff Heer’s “Design Considerations for Collaborative Visual Analytics”.

On a similar topic, Fernanda Viégas said something that caught my attention: instead of focusing on the classic visualization question of scaling the amount of data being visualized, the Many-Eyes project scales the size of the audience.

However, each data set on the Many-Eyes site is isolated. The data has to be processed in advance to bring it down to a manageable size, and data sets have no intersection points with each other. (Although comments are allowed to refer to other data sets, along with other navigation aids.)
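As a concrete (and entirely hypothetical) illustration of that kind of up-front processing, here is a minimal Python sketch that aggregates a large row-per-person file down to summary counts small enough to publish – the file name and column names are my own inventions, not anything Many-Eyes actually requires.

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per person - far too big to upload as-is.
# Aggregate down to one row per (region, occupation) before publishing.
totals = defaultdict(int)
with open("population.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[(row["region"], row["occupation"])] += 1

with open("population_summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["region", "occupation", "count"])
    for (region, occupation), count in sorted(totals.items()):
        writer.writerow([region, occupation, count])
```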

Classic information visualization research seems to follow a pattern something like this:
* Researcher gets hold of a dataset from somewhere.
* They consider various encodings of it.
* While doing that they achieve some level of domain knowledge.
* They develop an isolated visualization system – this is what they spend most of their time on (I can’t blame them – it is the fun bit).
* They achieve some insights of their own, which gives them a warm glow.
* A short evaluation is tacked on to keep the reviewers happy when the paper goes in.

From an outsider’s perspective:
* In many cases the dataset is considered in isolation from other potentially interesting & relevant data sets.
* The quality of the encodings chosen depends on the knowledge of the researcher, and this can vary quite a bit.
* The system developed tends to be isolated from other applications & systems – that makes it easier to develop. Often there are no multi-user aspects, but this at least seems to be changing.
* Insights almost always draw on knowledge from outside the data set. E.g., a downturn in the number of farmers (in census data) could be explained by increasing agricultural mechanisation (innate knowledge), or the popularity of a certain baby name might coincide with the rise of a celebrity (search for ‘Celine’ here). There is often an implied “cause and effect” hypothesis in these kinds of insights.

Going back to Viégas’ comments, I suspect that the real challenge lies in scaling not just the audience – though that of course is important – but both the number and the variety of datasets being visualized.

The ‘perfect visualization tool’ would be able to cope with new data sets being thrown at it. Linkage would be established automatically between elements of the data sets (e.g., Joe Bloggs from one data set would be recognised as the same Joe Bloggs from another data set). The data sets could have a wide variety of schemas and come from wildly different sources. The tool’s various visualizations would be automagically updated with the relevant encoding of the new data, and any new visualizations that had suddenly become appropriate would be displayed. The user would be able to reach many new insights because all the data is cross-referenced – and, generally speaking, most insights come from combining data. Plus the visualization, being perfect, would show those insights clearly.
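To make ‘automatic linkage’ a bit more concrete, here is a toy Python sketch of the simplest possible version of the idea – the two data sets, their field names and the normalisation step are all invented for illustration, and real entity resolution would need to be far cleverer than this.

```python
# Two hypothetical data sets with different schemas; names and fields invented.
census = [
    {"full_name": "Joe Bloggs", "occupation": "farmer"},
    {"full_name": "Jane Doe", "occupation": "engineer"},
]
payroll = [
    {"employee": "joe  bloggs", "salary": 18000},
    {"employee": "John Smith", "salary": 25000},
]

def normalise(name):
    """Crude normalisation so trivial variations in a name still match."""
    return " ".join(name.lower().split())

# Index one data set by normalised name, then probe it with the other.
by_name = {normalise(record["full_name"]): record for record in census}

linked = []
for record in payroll:
    key = normalise(record["employee"])
    if key in by_name:
        linked.append((by_name[key], record))

for census_row, payroll_row in linked:
    print(census_row["full_name"], census_row["occupation"], payroll_row["salary"])
```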

Mike Cammarano’s talk on his work with the DBpedia data was interesting from this angle, in that the data was inherently heterogeneous & extensible. Of course, the Semantic Web research agenda is of interest here too, despite lying outside of information visualization research.

As Matthew Ericson showed, the sheer craft and skill needed to combine data well and communicate it effectively means that it is difficult to see a perfect visualization tool being realised in an automated way. I guess this makes it an interesting research area!

Another aspect of developing web-based social visualizations is that there is much more potential for gathering information about how users actually use the visualizations: server-side logs can be designed to keep track of almost every action. This would lack the rigour of a properly controlled lab experiment, but that would be counterbalanced by the sheer number of possible users, so I’d say there must be huge benefits in this approach. (And of course making sense of the logs could be another data visualization challenge!)
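As a rough sketch of what I mean, the following Python snippet (the log format and field names are made up) tallies which actions each visualization actually receives from a JSON-lines server log – the kind of raw material such an analysis would start from.

```python
import json
from collections import Counter

def action_counts(log_path):
    """Tally interaction events per visualization from a JSON-lines log."""
    counts = {}
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)  # e.g. {"viz": "...", "action": "...", "user": "..."}
            counts.setdefault(event["viz"], Counter())[event["action"]] += 1
    return counts

if __name__ == "__main__":
    for viz, actions in action_counts("interactions.log").items():
        top = ", ".join(f"{action} ({n})" for action, n in actions.most_common(3))
        print(f"{viz}: {top}")
```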

On a separate topic I found Stephen Few’s capstone talk rather unsettling – I understand why he is so passionate about designing clear visuals, but sometimes that passion can err on the abrasive side, and that style won’t endear the visualization community to the world out there. I also think he underestimates the power of playfulness and fun in reaching out to an audience – come on, Swivel’s option to ‘bling your graph’ is just funny! Another worry is that the very Spartan style of visuals he favours actually imposes an aesthetic in its own right, for all of its good intentions and intelligent rationale. We should accept that some people just won’t like that aesthetic.

However, his tutorial was a really excellent Tuftean summary of all that is great and good about the subject, so I guess he can be forgiven! And when you see graphics like Graphwise’s (thanks Nathan), you can see how much work there is to be done :-)