Why I’m fed up with digital projects (and why I’m not): a rant

I’ve felt a little rant bubbling up in me over the last few months: a sense of disquiet about digital, a jaded annoyance about wasted time and resources and opportunities squandered. Today I was reminded about an old project that was the epitome of digital idiocy, one of those thoughtless knee-jerk “we must have an app!” projects that make me want to throw a toddler tantrum, kicking and screaming “but who is it for?” until someone agrees to at least do a bit of audience research or string together a minimally viable set of objectives. And that reminder seems to have brought it all to the surface, so here goes.

I am fed up with seeing people and organisations produce digital rubbish: poor apps, clunky games, badly designed microsites and other half-arsed online, mobile and technological systems and whatnots. I am fed up with people who are smart about digital, who think about users, who ask the right questions at the start, who embrace technology without fear and understand how to apply it, being overruled by people who’ve just bought an iPad for their kids and now assume that everyone has one, and that this is all the justification they need to insist upon spending £50k on a new app for their organisation (an app for who?! For what?!). Moreover, I am fed up with projects that fail being brushed under the carpet, with no evaluation or debrief that would let everyone involved learn from the mistakes and build something better next time.

It makes me sad to see money and time being wasted when it’s usually avoidable, and even more so if nothing is then learned from it. It makes me especially sad because the technology we have available today is AMAZING. My phone can do astonishing things and the computers I use have incredible firepower (I still remember how exciting it was to get a floppy disc drive for my BBC Micro; look at them now!). It feels like there is so much potential, so much more that the technology we have could allow us to do.

It’s not like there aren’t enough examples of brilliant products, apps, games, and other applications of these technologies out there to learn from. Too many to list. But there are many, many more that have been an utter waste of everyone’s time from the beginning, because nobody asked some basic questions about who and what the product was for, or tested it with that in mind along the way. Isn’t that a bit sad?

I know it’s not easy. There is a degree of luck in rising above the huge amounts of competition online or on the App Store, things beyond your control can go wrong during the development of a project, and there are times when you need to go on your gut instinct, which is not infallible. Goodness knows I’ve made every kind of mistake creating digital things over the years, as have most people working in this constantly changing field. And I will continue to make mistakes, as will most people.

So I’m not claiming there is a perfect process, but there are better and worse ones, and there are definitely massive, howling, alarm-bell-ringing clangers that should be raising red flags right from the start. The “we need an app, never mind who for”, the “we’ve got all this content, let’s just stick it online, people will come to us”, the “let’s create a whole new portal for something that already exists”, and so on. And sometimes it feels like we aren’t moving on from this; these whack-a-mole stupidities keep popping up over and over again and just will not die.

Why is that? In a recent conversation with a fellow person-who-works-somewhat-indeterminately-within-“digital”, we discovered we’d both been feeling this slight ennui with the field: a sense that it wasn’t meeting its potential, that a lot of rubbish was being made, and that it was hard to see how to make things better. We wondered whether it might be that digital is constantly being driven by the hardware makers, by Apple and the console and computer companies, and that their vested interest in creating new stuff for us to buy means we’re always just playing catch-up. Why do we let this dictate the pace? Where is the chance to take stock and develop best practice?

My experience as a freelancer over the last few months, applying for funding and responding to ITTs, has also been eye-opening. The systems and funding in place for making digital stuff are really broken. If you were trying to design a process that would result in crappy digital products, you’d make people (who may not be particularly digitally literate) scope everything out and pin it down before they’d ever been able to build anything or ask their audience about it; you’d only then engage an agency, who are the real experts but can’t change the scope; and you’d make the project team go through a month-long change request process if, once they start building and testing things, the evidence suggests taking the project in a different direction. Which is exactly the process in place in a frighteningly high number of places. The old models for procuring new infrastructure just do not work for digital projects, and they hamstring them from the very start.

Plus money, of course. Not enough money, or not tailoring the project to the actual budget available. I’ve had this rant elsewhere. I seem to spend a lot more time these days telling people not to make a digital thing. Would it work just as well as a piece of text? As a printout? As a physical activity? Or a really simple website? Don’t overreach if you just can’t afford it; do something straightforward and proven.

But at the same time I love working on and seeing others working on brilliant digital projects. The reach you can get, the amazing response in terms of both scale and thoughtful quality, the sense of shared global experience, the playfulness and the utility, when done well. So, how do we do more of these?

Evaluating online video

Whilst I was Multimedia Producer at the Wellcome Trust I began a programme of evaluation for our online videos. I left before I was able to see most of the results, but in this post I’ll share the plan of action I had for the evaluation, which used a number of different approaches. I was building upon the work that Danny Birchall and I did in games evaluation, using some of the same methods, but evaluating video presents a somewhat different set of challenges. If you have any thoughts about other techniques one could use, or ways of improving on my methodology, I’d love to hear them.

Wellcome Trust YouTube Channel

Why evaluate videos?

At the Trust we (me and the other Multimedia Producer, Barry J Gibb) were creating short documentaries about the research funded by the Trust, educational films, and so on. The audience varied, but was usually considered to be an interested lay audience or a GCSE/A-level student. Our video dissemination strategy was fairly straightforward: we posted them on our site (where they frankly got a bit lost) and on YouTube (which I always considered far more important, both because of the much greater chance someone would find them there and because of the ability to share).

I set the YouTube channels up in early 2009, but had no formal plan of evaluation (oh! to turn the clock back) and simply kept an ad hoc eye on analytics and feedback. The analytics were sometimes interesting, but commentary on YouTube, you may have noticed, is rarely that worthwhile. When we got lots of it, it was generally polarised and sometimes unpleasant. When we got none, which was usually the case, it was like putting the videos up to the sound of tumbleweeds and crickets. We had no real idea who was watching our videos, whether or not they liked them, or how they were using them, and so it was hard to know how to improve them.

That was the key, for me. How could we develop a strategy for the future when we didn’t really know what had worked or not in the past? But all was not lost. A lot of information was just sitting there, in the form of analytics, and to find out more, we just had to ask.

My approach

I split the project up into a number of different areas:

1. Analytics
2. Staff feedback
3. Audience feedback
4. External organisations

Analytics

My plan was to track several parameters for the lifetime of each video, ideally broken down into years. I had decided to go for calendar years rather than years since posting, to be able to compare videos across the same period, though I was somewhat in two minds about it. YouTube analytics offers information about views, traffic sources, and playback locations, which I had planned to include, and demographics and audience retention, which I hadn’t. Demographics on YouTube are only tracked for those people who are signed in and have added those details to their profile, so they are largely worthless. I guess they may be less so now that sign-in is tied to Google accounts and more people are likely to be signed in, but I still wouldn’t place much stock in them. The audience retention graphs are interesting, but hard to compare.
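
To make that concrete, here’s a rough sketch of the kind of per-video, per-calendar-year record I had in mind (the field names are my own illustration for this post, not a schema we actually used):

```typescript
// Sketch of a per-video, per-calendar-year stats record. The field names
// are illustrative, not a schema we actually used at the Trust.
interface VideoYearStats {
  videoId: string;
  year: number;                               // calendar year, so videos can be compared over the same period
  views: number;
  trafficSources: Record<string, number>;     // e.g. { 'yt-search': 1200, 'related-video': 640 }
  playbackLocations: Record<string, number>;  // e.g. { 'watch-page': 1500, 'embedded': 340 }
}
```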

The analytics presented the biggest challenge for the project, though. The first problem was that the analytics were split between Google Analytics for the videos on our website and YouTube’s own analytics. The latter are much more restrictive than the former, though they give more detail in some areas, attention patterns for example. GA depends very much on how it’s been set up for video, but thankfully when GA was set up for the Wellcome Trust site at least someone had the sense to track the number of actual video plays as events, rather than rely on pageviews. Even so, marrying GA and YouTube together wasn’t going to be easy.
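
If you’re setting this up from scratch, the event-tracking side looks something like this minimal sketch, using the classic ga.js `_gaq` queue that was current at the time (the category, action and label names are illustrative, not the Trust’s actual configuration):

```typescript
// Minimal sketch: log a video play as a Google Analytics event, using the
// classic ga.js asynchronous syntax. 'Videos'/'Play' are illustrative names.
declare const _gaq: { push(command: [string, ...string[]]): void }; // provided by the ga.js snippet

function trackVideoPlay(videoTitle: string): void {
  _gaq.push(['_trackEvent', 'Videos', 'Play', videoTitle]);
}

// Report each video at most once per page view, when it first starts playing:
document.querySelectorAll('video').forEach((video) => {
  video.addEventListener('play', () => trackVideoPlay(video.currentSrc), { once: true });
});
```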

The second issue was that getting data out of YouTube isn’t easy. You can download spreadsheet reports, but only for one parameter over one time period at a time; when you have well over 50 videos and want to track several parameters over several years, well, I didn’t fancy doing that kind of epic legwork. There is a solution, but it isn’t ideal either. Google have an experimental API for YouTube analytics, and as I left, our IT department were on the case of trying to write an application that would use it to build reports.
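
To give a flavour of what such an application might look like, here’s a rough sketch against the current googleapis Node client; bear in mind the API was still experimental (v1) when I left, so endpoint names, metrics and limits may well have differed:

```typescript
import { google } from 'googleapis';
import type { OAuth2Client } from 'google-auth-library';

// Rough sketch: pull per-video views and watch time for one calendar year.
// Assumes an OAuth2 client already authorised against the channel; metric
// and dimension names follow the current v2 API, not the v1 experiment.
async function yearlyVideoReport(auth: OAuth2Client, year: number) {
  const yt = google.youtubeAnalytics({ version: 'v2', auth });
  const res = await yt.reports.query({
    ids: 'channel==MINE',
    startDate: `${year}-01-01`,
    endDate: `${year}-12-31`,
    metrics: 'views,estimatedMinutesWatched',
    dimensions: 'video',
    sort: '-views',
    maxResults: 200, // comfortably covers a channel with ~50-100 videos
  });
  return res.data.rows ?? []; // rows of [videoId, views, minutesWatched]
}
```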

I did have a couple of initial findings from my more ad hoc monitoring of stats over the years, though these aren’t going to be hugely robust and I had hoped to interrogate them further with more data.

  1. On YouTube, most people were arriving after searching for relevant keywords *in YouTube* (i.e. not through Google organic search), or from related-video links. The latter were definitely responsible for the hits on our most popular videos, which were coming up as related links alongside other high-profile videos.
  2. Many more people were arriving directly at individual video pages than at the channel page.

Staff Feedback

The Wellcome Trust has several hundred employees and, as with our external audience, we didn’t know what they thought of our video output. We didn’t even know whether they were all aware of it, whether they knew they could use it, or even that they could commission their own films. Their opinions and needs were also very important to developing our video strategy. I put together a survey in SurveyMonkey, had it checked by our UX expert (always worth doing) and posted it on our intranet. I wasn’t around for the results of this one.

Audience Feedback

This was the bit I was most interested in: qualitative feedback from our audience. For our games evaluation work we had put up a survey at the end of the game, followed up with telephone interviews, and also looked at community commentary. As we’ve established, for YouTube the latter is not particularly useful, and we also didn’t have comments open on our site. But the survey was still an approach we could take, except that instead of one game, we had ninety-three videos. And instead of several hundred thousand plays over the launch weekend, most videos would get a few hundred views over their lifetimes.

This meant that we would need a much longer data-gathering period, as only a small fraction of people will answer a survey. As always, we ran a prize draw with it to help the numbers, but even so, it seemed wise to assume it would be six months (at least) before we had enough responses to come to any conclusions.
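
A quick back-of-envelope calculation shows why (every number here is an illustrative assumption, not one of our actual figures):

```typescript
// Back-of-envelope: how long until the survey has enough responses?
// Every number below is an illustrative assumption, not an actual figure.
const videos = 93;
const viewsPerVideoPerMonth = 30; // most videos only got a few hundred views over their lifetimes
const responseRate = 0.01;        // assume ~1% of viewers complete the survey
const responsesNeeded = 150;      // a bare minimum before drawing any conclusions

const responsesPerMonth = videos * viewsPerVideoPerMonth * responseRate; // ≈ 28
const monthsNeeded = Math.ceil(responsesNeeded / responsesPerMonth);     // ≈ 6 months
```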

I drew up some survey questions with the help of colleagues, and again had them checked over by Nancy Willacy, our UX expert, to make sure they were understandable and that we were asking the right questions. There is a lot out there on survey design, including some points from our own experience in this paper on games evaluation that I wrote with colleagues at Wellcome, the Science Museum and the National Space Centre. Being very clear about what you are trying to find out is very important; it’s easy to creep beyond that and try to find out everything, but nobody wants to answer a huge survey online.

My intention was to add the survey as a link on our site next to all the videos, and also to put it up as linked annotations on the YouTube videos. I did the YouTube links myself before I left, which was a pretty tedious job. You can see an example at the end of this video. I have a feeling that you may have fewer options in the annotations if you aren’t a YouTube partner, and that adding links like this might not be possible. I guess you’d have to add one in the text below the video, but that’s not ideal.

External organisations

Finally, I wanted to know what other people were doing with video at similar organisations (and also what they thought of our output, if they’d seen it). This was to be much less formal, simply identifying other organisations and writing to them with a list of questions, at least in the first instance.

And you?

I hope some of that is interesting or useful, but it would also be great if other people shared their experiences of evaluating video. What do you do in your organisation? What could we have done better here? Even better, does anyone have any results they can share?