AI and Analytics for Business

Updates

Unanswered Questions from AIAB Annual Conference with Prof. Kirthi Kalyanam

Our audience had so many questions for our speakers that we needed more answers! So we’ve followed up with our presenters and asked them to respond to the questions we still had queued up in Pigeonhole.

Professor Kirthi Kalyanam talks more about his presentation, “Cross Channel Effects of Search Engine Advertising on Brick & Mortar Retail Sales: Insights from Multiple Large Scale Field Experiments on Google.com.”

1. How did the study adjust for external factors such as competition, weather, etc?

Test markets were randomly selected. The stores in the test market were matched to stores in control markets based on a very large battery of variables that included competition, weather, etc. The goal in the end was to make sure that the overall index for the group of test stores trended similarly to an index of the control stores. A lot of this is automated in APT’s software.

In addition to this, the APT software uses outlier analysis to detect things like weather events during the test.
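For readers curious about the mechanics, here is a minimal Python sketch of this kind of covariate-based matching. It is an illustration only: APT’s production matching is proprietary, and the covariates and store counts below are invented.

```python
import numpy as np

def match_control_stores(test_X, control_X):
    """For each test store, pick the nearest control store by standardized
    Euclidean distance over the battery of matching covariates."""
    mu = control_X.mean(axis=0)
    sd = control_X.std(axis=0) + 1e-9          # guard against zero variance
    tz, cz = (test_X - mu) / sd, (control_X - mu) / sd
    # Pairwise distances: rows = test stores, columns = candidate controls
    dist = np.linalg.norm(tz[:, None, :] - cz[None, :, :], axis=2)
    return dist.argmin(axis=1)                 # best-matched control per test store

rng = np.random.default_rng(0)
test_X = rng.normal(size=(25, 4))      # 25 test stores, 4 covariates (hypothetical)
control_X = rng.normal(size=(300, 4))  # candidate control pool
matches = match_control_stores(test_X, control_X)
```

The trend check described above then amounts to verifying that a sales index of the matched controls tracks the test-store index over the pre-period.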

2. How do you explain a negative sales lift?

Every test is based on a single draw, so a negative estimate is possible. Substantively, you can get negative lifts in related categories if the marketing program is stealing sales from other categories.

Our retailers are very much aware of this issue, and hence in every test APT measured both the borrowing of sales from other categories and from future time periods.
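A quick simulation makes the “one draw” point concrete. Assuming a true +2% lift and noisy store-level estimates (both numbers invented), a nontrivial share of individual tests still comes out negative:

```python
import numpy as np

rng = np.random.default_rng(1)
true_lift, store_noise_sd, n_stores, n_tests = 0.02, 0.10, 100, 10_000
# Each simulated test estimates the lift as the mean over its stores
estimates = rng.normal(true_lift, store_noise_sd, size=(n_tests, n_stores)).mean(axis=1)
print(f"tests with a negative estimated lift: {(estimates < 0).mean():.1%}")
```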

3. What is your recommendation for doing causality analysis for consumer driven behaviors?

This is a very broad and general topic, and I am not sure I am the best person to opine on it. But a few things are top of mind:

a. Meta-analysis of clinical trials might be the platinum standard. If it is easy to do clinical trials, then I recommend this.

b. If it is not easy to do clinical trials, then I recommend focusing on lowering the cost of doing clinical trials so that it becomes easier.

c. In some situations clinical trials or randomization might not be possible. In this case a quasi-experimental design is a reasonable alternative; for an example using regression discontinuity in search engine advertising, I refer the reader to a recent paper on the topic.
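To illustrate the regression-discontinuity idea itself (a generic toy example, not the referenced paper’s analysis; the running variable, cutoff, and bandwidth are all invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, cutoff, bandwidth, true_jump = 5_000, 0.0, 0.25, 0.5
x = rng.uniform(-1, 1, n)                                  # running variable
y = 1.0 + 0.8 * x + true_jump * (x >= cutoff) + rng.normal(0, 0.5, n)

def value_at_cutoff(mask):
    """Local linear fit on one side of the cutoff, evaluated at the cutoff."""
    return np.polyval(np.polyfit(x[mask], y[mask], 1), cutoff)

left = (x < cutoff) & (x > cutoff - bandwidth)
right = (x >= cutoff) & (x < cutoff + bandwidth)
print("estimated discontinuity:", value_at_cutoff(right) - value_at_cutoff(left))
```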

4. How many sales were cannibalized from the digital sales channel in this study?

We did not have data to measure this in depth, but simple measurements did not show much evidence of cannibalization. The intuition is that our advertising intercepted offline shoppers and put the retailer into their consideration set. It could have done the same for online shoppers.

5. Given low click rates, how big do the studies have to be?

In this case, APT designed each study to measure a 1% change in sales given the historic sales variability at the retailer’s sites. The number of retail sites required depends on this variability. In our experiments the number of stores ranged from as low as 25 to as high as 349. The average test used 167 stores.
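As a back-of-the-envelope version of that power calculation (the 5% store-level variability figure below is an assumption, not the study’s):

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf

def stores_needed(mde=0.01, sd=0.05, alpha=0.05, power=0.80):
    """One-sample approximation: stores needed to detect a lift of `mde`
    at the given significance and power, when the per-store sales change
    has standard deviation `sd`."""
    return ceil(((z(1 - alpha / 2) + z(power)) * sd / mde) ** 2)

print(stores_needed())           # ~197 stores at 5% store-level variability
print(stores_needed(sd=0.03))    # steadier stores need far fewer (~71)
```

The key point, as in the answer above, is that the store count scales with the square of the variability-to-lift ratio.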

6. What is your variable for the brand strength regression?

We used click-through rate as a proxy for brand strength online.

7. If you have to spend $1 to make $2.50 and margins are low, is this type of advertising profitable?

The Return on Ad Spend (ROAS) varied considerably across retailers in our study. The retailers in our tests had the objective of hitting a certain level of ROAS.
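The arithmetic behind the question is worth spelling out (the margins below are hypothetical; only the $2.50 per $1 figure comes from the question):

```python
spend = 1.00
incremental_revenue = 2.50               # the 2.5x ROAS from the question
for margin in (0.20, 0.40, 0.60):        # hypothetical gross margins
    profit = incremental_revenue * margin - spend
    print(f"margin {margin:.0%}: profit per ad dollar = {profit:+.2f}")
# At a 20% margin each ad dollar loses $0.50; 40% is the break-even margin here.
```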

8. How cleanly do the meta-analysis methods from pharmaceuticals carry over to ad testing?

Our experience has been that there is a lot we can learn from the meta-analysis of clinical trials in pharma. Some adjustments have to be made (as we did in our study) to reflect the realities of advertising field experiments.
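As one concrete example of what carries over, here is a sketch of the DerSimonian-Laird random-effects pooling that is standard in clinical-trial meta-analysis, applied to hypothetical per-experiment lift estimates (all numbers invented; this is not the study’s actual estimation):

```python
import numpy as np

lifts = np.array([0.012, 0.031, -0.004, 0.022, 0.015])   # per-experiment lifts
ses = np.array([0.010, 0.015, 0.012, 0.009, 0.020])      # their standard errors

w = 1 / ses**2                                   # fixed-effect (inverse-variance) weights
fixed = (w * lifts).sum() / w.sum()
Q = (w * (lifts - fixed) ** 2).sum()             # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (Q - (len(lifts) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_re = 1 / (ses**2 + tau2)                       # random-effects weights
pooled = (w_re * lifts).sum() / w_re.sum()
print(f"pooled lift: {pooled:.3%}  (between-experiment variance tau^2 = {tau2:.6f})")
```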

9. When presenting these cross-channel methodologies and results to marketers, what components or impacts of the research resonate most with them? What are the biggest translation gaps?

Audiences love the rigorous nature of the studies. They really like that we retained all possible studies, including those that did not show a significant lift, and that we are generating a range of outcomes for this population of retailers, which they can use internally in a “what if” simulation. They also really like our finding on how impression share and brand strength matter. The details of the data and some aspects of the estimation are difficult to translate.
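A “what if” simulation of the kind described can be as simple as drawing from the estimated distribution of outcomes (the mean, spread, and baseline below are placeholders, not the study’s estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
lift_draws = rng.normal(loc=0.015, scale=0.010, size=100_000)  # population of lifts
baseline_sales = 50_000_000                                    # hypothetical annual sales ($)
incremental = baseline_sales * lift_draws
lo, hi = np.percentile(incremental, [5, 95])
print(f"P(positive lift): {(lift_draws > 0).mean():.0%}")
print(f"5th-95th percentile of incremental sales: ${lo:,.0f} to ${hi:,.0f}")
```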

10. Can you summarize the endogeneity problem, and why, beyond simple counting, experiments are valuable?

Shoppers search when they are about to buy, and search advertising is served when shoppers are ready to buy. These effects might be more powerful in search advertising compared to other forms of advertising like TV. Also, offline ad budgets dominated search budgets, so there might be a detection issue that requires an intervention.
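A small simulation shows why counting conversions among exposed shoppers is not enough. Because purchase intent drives both search (hence ad exposure) and buying, the naive exposed-versus-unexposed comparison overstates the true effect, while randomization recovers it (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n, true_effect = 200_000, 0.02
intent = rng.uniform(0, 1, n)                     # latent readiness to buy

# Observational world: high-intent shoppers search, so they see the ad
exposed = intent + rng.normal(0, 0.1, n) > 0.7
buys = rng.random(n) < 0.3 * intent + true_effect * exposed
naive = buys[exposed].mean() - buys[~exposed].mean()

# Experimental world: exposure assigned by coin flip, independent of intent
assigned = rng.random(n) < 0.5
buys_exp = rng.random(n) < 0.3 * intent + true_effect * assigned
randomized = buys_exp[assigned].mean() - buys_exp[~assigned].mean()

print(f"naive observational lift: {naive:.3f}")   # far above the true 0.02
print(f"randomized-experiment lift: {randomized:.3f}")
```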

11. Can you control for type of creative? E.g., sale ads have a different effect than simple promotions…

Yes, this is possible. Search ad creative is being optimized for better conversions using a number of tools.

12. You run a center on retailing. Can you get them to run experiments?

Retailers use field experiments quite a bit.

13. Is experimentation more important for search than for other online channels?

See response to #10 above.

14. Share of voice? Is it the cause or the effect?

Across these independent experiments we found share of voice to be correlated with sales lift.
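Mechanically, this kind of cross-experiment check is just a regression of each experiment’s estimated lift on its share of voice; the data points below are invented placeholders, and, as the question notes, the correlation alone cannot settle cause versus effect:

```python
import numpy as np

sov = np.array([0.15, 0.30, 0.45, 0.55, 0.70, 0.80])          # impression share
lift = np.array([0.004, 0.009, 0.012, 0.018, 0.020, 0.026])   # estimated lifts

slope, intercept = np.polyfit(sov, lift, 1)
r = np.corrcoef(sov, lift)[0, 1]
print(f"correlation: {r:.2f}; +10 pts of share of voice ~ {slope * 0.10:+.3%} lift")
```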

15. How does this translate to omni-channel experience?

The retailer’s web site has to be ready to support this omni-channel shopper.

16. How did you do experiments with share of voice?

See response to #10 above.