Accountability in Design

As a designer, my value is measured by my ability to insightfully create meaningful and effective visual communication for a specific and unique audience. Remember that episode of Mad Men where Don Draper told the research analyst that he found her methods “perverse”? Don knew what was best for the campaign. He wasn’t going to let his creative output be undermined by cold data. In a way, he was right. We work from the heart.

However, Don Draper had the luxury/curse of living in an era of relative unaccountability compared to today. Sure, creative work has always been held accountable to sales figures and other high-level metrics, but historically there has been a divide between strategy and implementation on one side and results on the other. Stabs in the dark were supported by bird’s-eye or anecdotal results. There was little-to-no meaningfully granular connection between the work and the result. That is no longer the case.

Now, I don’t mean to throw our predecessors under the bus. They weren’t at fault; they were simply limited by the technology of the time. And in fairness, I’m not suggesting that we have it all figured out to perfection today. This historical context is worth considering, because today successful creative work is equal parts science and soul, and we have the luxury of being able to measure our work in both capacities. What we are experiencing is a shift in how our tools influence design-thinking.

We must move past the mentality that being ‘wrong’ about a design-instinct somehow degrades our professional value. As designers, we have the keen ability to organize concepts into meaning (and make stuff look good). But we are biased, too, by our own experiences and our own preferences. Just like every person on earth, we are unique.

Luckily, there are all sorts of new tools on the table for meaningfully evaluating design work. Pro: they allow us as designers to back our work up with more than personal expertise. Con: sometimes we have to admit when our design-instinct was wrong.

First, a caveat

There are things you can test quantitatively, and things that, by nature, you cannot. Know the difference. Also, don’t discount a valuable key performance indicator just because you can’t test it. And don’t let the KPIs that are easy to test drive the conversation; their success may come at the expense of something else (or they may not really be that important). Have an honest conversation with your team and your stakeholders about what the important goals are, how you plan to achieve them, and how you’ll measure progress.

For example, there is plenty of research out there demonstrating that pop-ups work for email signup conversion. They just do. They work really well, in fact. If your only goal is to get people to sign up, use a pop-up. Of course, there is another side to that coin, which is much harder to test: what will that pop-up do to your brand perception? Does it cheapen your brand? Maybe. Will it annoy your users? Probably. Can you measure those factors quantitatively? Not really. Based on your strategic goals, try to find a way to test the devil’s-advocate KPI, even if the comparison is apples-to-oranges.

To the laboratory!

I’m a firm believer that design must be presented in its native format to be honestly evaluated and tested. Take, for instance, a large-scale vehicle wrap. Even the most perfectly to-scale diagram won’t accurately represent the design in a real-world setting. Likewise, looking at a website on a printed page or projected on a wall is a totally different experience from the one intended.

On top of letting us evaluate design in a more native context, functional prototypes let us test something very close to the final production design without going too deep into production-quality code. This means more design agility: we can test solutions, identify potential problems, and respond to them earlier in the process.

Often there are physical limitations to creating a truly representational design prototype. When it comes to digital design, however, the working format (the diagram) and native format (the product) are essentially the same. That being the case, it’s not a huge leap to take the static UI design comps a step further into some sort of working prototype.

Enter InVision

At Drake Cooper, we started using InVision about a year ago to present design comps in a more true-to-form setting. Initially, we used it primarily for presentation and review, allowing stakeholders to interact with the user interface on their smartphones or laptops. They could see to-scale elements and typography, and test out UI elements in their natural state.

Taking things a step further, we began using InVision for rapid prototyping and testing of design concepts. We can take static design comps and simulate a moderate level of functionality with a minimal amount of time and development resources. Then, those prototypes can get in front of real users for testing and evaluation through tools like usertesting.com, Optimizely, or even some good ol’ hallway testing.

When entering the testing phase, start with a really simple design question to answer. If you have several distinct questions, break them up into separate prototypes and test them individually. Don’t try to answer everything at once. Build your prototype to handle all of the possible paths down the funnel you are testing. If you know there are going to be points of friction, make sure they are represented in your prototype.

Next, get it in front of some users. Instruct the tester in an unbiased way. If you want to find out whether the signup button is visible enough, instruct the user to “sign up for an account,” as opposed to “click the signup button to sign up for an account.” Make them do the work and watch for problems.

Testing in the wild

The idea is pretty simple. You have a thing, and you want to see what will happen if you change it a little. Will your conversions go up? Will some other metric go down? Maybe the little change is actually a big one, and instead of risking irreparable harm to your brand or thousands of dollars in lost business, you release it to a narrow audience and test it against the control (the original version). We use Optimizely for this; it’s a great tool for everything from small copy changes (which can be made right in the Optimizely dashboard) to big design changes. Google Analytics also has a powerful experiments feature, which requires a bit more setup but lets you really dig into the results.
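To make the mechanics concrete, here’s a minimal sketch of the bucketing step behind a test like this: each visitor is deterministically assigned to the control or the variant based on a stable identifier, so they see the same version on every visit and the two groups stay comparable. This is an illustration of the general technique, not Optimizely’s actual API; the function names, the cookie-based user ID, and the 10% rollout figure are all assumptions for the example.

```ts
// A minimal sketch of deterministic A/B bucketing. Assumes you have a
// stable user identifier (e.g. a first-party cookie value). All names
// here are hypothetical -- this is not Optimizely's API.

type Variant = "control" | "variant";

// FNV-1a string hash: the same ID always produces the same number,
// so a returning visitor always lands in the same bucket.
function hash(id: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned 32-bit
}

// Send a narrow slice of traffic (here 10%) to the new design;
// everyone else sees the control.
function assignVariant(userId: string, rolloutPercent = 10): Variant {
  return hash(userId) % 100 < rolloutPercent ? "variant" : "control";
}

// Usage: branch the UI on the bucket, and log the assignment alongside
// the conversion event so the two versions can be compared later.
const bucket = assignVariant("user-1234");
console.log(`showing ${bucket} design`);
```

Tools like Optimizely or Google Analytics experiments handle this assignment for you, plus the reporting and statistical significance; the point is just that the split is random across users but stable per user, which is what makes the control a fair baseline.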

So what?

Okay, so I’m a designer and I just talked about testing design, and the sky didn’t fall. We can have our cake and eat it too. Here’s the call to action: design with your head and your heart. Design for the real humans who will be using your product, not annotations on a chart. Continue to use your expertise to create delightful experiences and build meaning from chaos.

Then, test.

The results will either validate your assumptions or reveal something you were blind to. Every test result is an opportunity to learn something new and grow as a designer.

And one last thing. There is this ethos in programming that “you are not your code.” This applies to design, too. Your design expertise is derived from years of practice and experience. There will be tests that invalidate your design. That’s okay. You aren’t your design.


Drake Cooper
January 12, 2016