Assisted ads creation

Duration: 1 quarter

Problem to solve: “How might we help marketers self-correct their ad copy so that they run less risk of having their ads rejected?”

Format: responsive web

Status: Not shipped, alas

 

The problem to solve

LinkedIn wanted to let advertisers self-correct their ad text as they entered it into Campaign Manager, LinkedIn’s advertising tool, giving them a better experience while they placed more ads.

The project team had generated a list of the most common reasons ads were rejected. Most were language- and URL-based, and the team hypothesized they could be addressed with a form of text analysis tool, like an advanced spell checker that warned users of restricted words, bad URLs, and the like.

The solution would need to live inside the ad creation UI, where the user entered the text that could cause their ad to be approved or rejected.

It took the team a couple of tries to align on a basic user flow, as we were biased toward existing UI components. Then, at the first design review, the word “Grammarly” was mentioned, and it became clear that the team wanted a comprehensive text correction feature, one focused first on advertisers but designed for any user typing text into any product within LinkedIn.

 

Alignment: competitive comparison

When I came to the project, product leadership was interested in providing a service similar to Grammarly inside our campaign creation experience. Like Grammarly, we would offer to correct spelling, but also warn users of other ad policy violations: all-caps text, special characters, other language policy issues, and bad URLs.

I set about collecting a large set of screenshots and analyzing how the Grammarly Chrome plug-in worked in the context of long text documents, social media sites, and so on.

Other services I investigated included the text and content correction features of Nextdoor, Google, and Microsoft Office.

 

System design

In our design, the “panel” serves up the corrections category by category in a fixed order, while the “cards” are clicked in whatever order the user prefers.

The decision to serve up corrections by category came after a lot of thought: A human thinks of text as meaning, not a collection of individual words. In the UI I designed, the order in which I ask the user to correct problems is designed to save them time, and to avoid serving up the same kind of problem twice:

  • If the correction is content-related (such as mentioning topics that are not allowed in ads, like drugs or alcohol) we want the user to start by correcting that. This is because a content problem may cause the user to want to re-write their text altogether. There is little point in correcting spelling for a text that is not allowed in the first place.

  • Another example is all-caps text: I want the user to correct this problem early, so I don’t have to keep showing them their text written in all-caps. If I did, then I’d have to explain that a single word may have two problems: a misspelling and being written in all-caps.

  • For business users, the correction by category makes sense, as they may be working from a text style guide created for their brand. Some categories of corrections may even cause the brand to update their style guide to avoid rejected ads.

  • Aligning on this order of operations was also needed for backend engineering: they were building the API while I was working on this design, and the two systems had to fit together.
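The order of operations above can be sketched as a simple sort over the corrections the API returns. This is an illustrative sketch, not the shipped implementation: the category names, the `Correction` shape, and the `orderForPanel` function are my assumptions.

```typescript
// Hypothetical correction categories, ordered by how disruptive a fix is:
// content problems may force a full rewrite, so they come first, and
// all-caps fixes come early so later corrections are shown on cleaned-up text.
type Category = "content" | "all_caps" | "url" | "spelling";

const CATEGORY_PRIORITY: Category[] = ["content", "all_caps", "url", "spelling"];

interface Correction {
  category: Category;
  start: number;   // offset of the flagged span in the ad text
  end: number;
  message: string; // guidance shown to the user
}

// Sort corrections so the panel serves whole categories in priority order,
// and within a category, in reading order.
function orderForPanel(corrections: Correction[]): Correction[] {
  return [...corrections].sort((a, b) => {
    const byCategory =
      CATEGORY_PRIORITY.indexOf(a.category) - CATEGORY_PRIORITY.indexOf(b.category);
    return byCategory !== 0 ? byCategory : a.start - b.start;
  });
}
```

With this ordering, a flagged content violation is always surfaced before a spelling fix in the same text, matching the reasoning above.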

 

Interaction model and wireframes

Final interactions

The cards

In what I called the “cards”, the user clicks on individual words and is shown guidance on how to correct the text. The red “counter” counts down the number of corrections still needed and reacts to the user’s changes.

With “the cards”, the user self-selects the words that they want to correct, so the UI does not try to steer the order in which the corrections are made.
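The counter’s countdown behavior can be sketched in a few lines. This is a minimal sketch under assumed names (`Flag`, `remainingCorrections`): the red counter simply reflects how many flagged spans remain unresolved, recomputed whenever the user edits.

```typescript
// A flagged span in the ad text; `resolved` flips when the user fixes it.
interface Flag {
  id: string;
  resolved: boolean;
}

// The number shown in the red counter: unresolved flags only.
function remainingCorrections(flags: Flag[]): number {
  return flags.filter((f) => !f.resolved).length;
}
```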

 

The panel

In what I dubbed “the panel”, the user corrects the text inside a panel that opens to the side of the text input field. This UI is much more opinionated: it presents the suggested corrections in a fixed order.

In research, we found that “the panel” was favored by business users. These users are professional content managers who wanted to use the panel to take screenshots and document problems before making corrections. They also appreciated the panel for its speed: the text in the image below could be fully corrected with four clicks.

 

User research

Method and discussion guide

For the final stage of the project, I was fortunate to have access to a user researcher who helped me conduct a usability test with five participants. I created an interactive Figma prototype for this purpose, and the researcher recruited participants based on my recommendations, covering a range of experience levels and business sizes.

Before the testing started, the researcher and I met to set expectations:

We decided to use the RITE (Rapid Iterative Testing and Evaluation) method, an approach I have found useful in most usability settings: I would change the design as soon as I became convinced a change was needed, so that the next participant could use the improved version. We also listed a set of core assumptions to check against the results, and we agreed on the main narrative of the testing session so that the researcher could write her discussion guide.

Results

Overall, the prototype tested very well: participants had few problems understanding what the design was supposed to do. Few participants needed an explanation of what was happening or of what they should do, so the UI text went from explaining the feature to simply listing the number of problems.

We had two surprises:

  • Participants were very interested in what else the text guidance could offer them and hoped for recommended words, not just corrections. They saw the recommendations as a way to improve ad performance.

  • There was minimal interest in giving feedback to the recommendation AI. Participants saw it as unrewarding make-work that benefitted no one but LinkedIn.

Prototype demo video