The limits of the IPA’s The Long And The Short Of It

4 minutes estimated reading time

The IPA’s The Long And The Short Of It (TLATSOI) has been a north star for agency strategists since it was published in 2013. It has now been out there long enough for the limits of its approach to become apparent. This post grew out of an earlier blog post that discussed TLATSOI’s role in the planning and strategy process of the ad industry.


That post was The Long And The Short Of It Needs The Wrong And The Shit Of It. Feel free to go and have a read and come back.

The Limits

The IPA’s original research has flaws in its methodology that dictate the limits of what it can tell us:

  • Focusing purely on successes introduces bias, because the research is taken out of context. That context would be provided by the ‘complete’ population of good, mediocre and awful campaigns, rather than award winners alone
  • There aren’t any lessons on how not to truly mess up

TLATSOI isn’t a LinkedIn article

It’s easy to throw shots across the table when someone has done a lot of work. TLATSOI isn’t an article on the ‘five morning habits of Warren Buffett’ written to make you successful.

Les Binet and Peter Field analysed 996 campaigns entered in the IPA Effectiveness Awards (1980–2010). That would have taken them a considerable amount of time. They then managed to write it all up and distil it down into the very slim volume on my bookshelf.

The work is an achievement and Binet & Field deserve our gratitude and respect. Secondly, other marketing disciplines don’t have their version of TLATSOI. We couldn’t critique TLATSOI if it didn’t exist.

Let’s say we want to stand on their shoulders and build something more comprehensive than TLATSOI. Just what would it take?

The limits of working with what you have

Binet and Field worked with what they had. If you’ve ever written an award entry you’ll know that pulling one together is a pain in the arse. 996 award entries represent thousands of weeks of non-billable agency time. This material was then strained through their own empirical experience of the business, which adds a ‘welcome’ bias.

Now imagine if that kind of rigour in documentation and analysis was put into mediocre campaigns. The kind of campaign where the client logo barely makes it into the agency credentials deck.

Without a major agency (nudge, nudge, wink, wink BBH) providing all of its warts-and-all data, the initiative won’t start.

It will be hard to get what is needed. Agency functions aren’t geared up to deliver the information. A technological solution would take a good while to put in place, and, like so many IT projects, would face a failure rate of around 70%.

In an industry where careers are made and talent is attracted on ‘hits’, there’s a big chunk of realpolitik to address.

How would you keep a lid on the dirty laundry?

We live in a connected world, to the point that there are now likely to be four certainties: birth, death, taxes and data breaches. Imagine what a data dump, some Excel skills and a bit of snark would do to an agency’s reputation. The stain of an ad agency equivalent of the movie industry’s Golden Raspberries would likely bury careers.

What do we measure?

My friend Rob Blackie started some of the thinking on effectiveness data SLA tiers:

A = Tests the objective directly using a Randomised Controlled Trial (RCT) in a real-world environment (e.g. measured at point of sale).
B = RCT tests of a proximate objective (e.g. brand); direct measurement of impacts without correction for population bias or confounding factors (e.g. a sunny week driving a lot of ice cream consumption); independent case studies; quality survey data on changes in behaviour; or testing in an artificial environment. For instance, a Nielsen Brand Lift study.
C = Case studies (non-independent), or data sources that may contain significant bias compared to the underlying population. For instance: award entries.
D = Indicative data such as PR coverage, social media Likes and similar.
E = Anecdotes. Extra points for quality, and reproducibility across different suppliers / evaluators.
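
To make the tiers concrete, here is a minimal sketch (in Python, with names of my own invention rather than anything from Rob's work) of how the A–E grades could be encoded and compared when auditing a campaign's evidence base:

```python
from enum import IntEnum

class EvidenceTier(IntEnum):
    """Illustrative encoding of the A-E evidence tiers (A strongest, E weakest)."""
    A = 5  # RCT against the objective itself, measured in the real world
    B = 4  # RCT on a proximate objective, uncorrected direct measurement,
           # independent case studies, quality surveys, artificial-environment tests
    C = 3  # non-independent case studies, potentially biased sources (e.g. award entries)
    D = 2  # indicative data: PR coverage, social media Likes and similar
    E = 1  # anecdotes

def strongest_evidence(tiers: list[EvidenceTier]) -> EvidenceTier:
    """Return the highest-quality tier present in a campaign's evidence base."""
    return max(tiers)

if __name__ == "__main__":
    # A typical award entry would sit here: non-independent case study plus indicative data
    campaign_evidence = [EvidenceTier.C, EvidenceTier.D, EvidenceTier.E]
    print(strongest_evidence(campaign_evidence).name)  # -> "C"
```

The point of the sketch is simply that most award entries would grade out at C or below; very little of the industry's evidence base reaches A or B.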

There are challenges capturing long-term branding factors such as advertising ‘ad stock’ or ‘carryover’ (a rough sketch of the adstock idea follows the list below). That then takes you into fundamental questions:

What is the minimum viable campaign duration for a campaign to be considered for assessment?

How long should we measure long-term branding effects for? And how do you measure ‘client-side’ quality issues:

  • Resourcing / budgets
  • Product
  • Ambience in the case of client-owned channels
  • Adequate quality briefs. Are the objectives well written? Are they relevant to the business?
  • Mission creep or changing company agendas
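
On the ‘ad stock’ point above, here is a minimal sketch of the classic geometric adstock (carryover) model. The decay rate of 0.8 is purely an illustrative assumption, not a recommendation; the whole difficulty is that nobody agrees on how long the tail really is.

```python
def geometric_adstock(spend: list[float], decay: float = 0.8) -> list[float]:
    """Classic geometric adstock: each period's effect carries over,
    decaying by `decay` per period (0 = no carryover, 1 = never decays)."""
    adstock = []
    carryover = 0.0
    for x in spend:
        carryover = x + decay * carryover
        adstock.append(carryover)
    return adstock

# A burst of spend in week 1 still contributes in later weeks
print(geometric_adstock([100, 0, 0, 0], decay=0.8))
# [100.0, 80.0, 64.0, 51.2]
```

The choice of decay rate (and therefore how many periods you keep measuring) is exactly the kind of judgement call that makes long-term effects so hard to pin down consistently across campaigns.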

All of this means that getting to the far greater volume of poor campaigns, as well as the best ones, is easier said than done. The best way to kick it off would be to have large agencies work together on putting the data sets together.