9 High-Attention Questions with Bill Forelli, Lumen’s VP of Sales
Recently, Lumen’s Bill Forelli, VP of Sales, North America, sat down with Adrian Tennant of creative agency Bigeye to talk all things attention for the “In Clear Focus” podcast.
Here’s an abridged version of the interview, which can be found in full here.
Lumen is a little bit over 10 years old. It was started by Mike Follett, who’s still our CEO, and Mike really started his career on the planning side in Media. And he’s a brilliant guy. If anyone listening has had the pleasure of meeting Mike, you know he’s as brilliant as he is kind, and he’s a lot of both of those things.
His brilliance really led him to the realization that within the media planning process, there was no real data that made his job easier or the outcomes more accurate. That led him directly to attention.
Mike started Lumen to learn more about human interaction with media itself. What are the biometric responses? What are the cognitive responses to the media, and how do they differ depending on where the media appears?
That all started with eye-tracking technology that allowed us to understand the biometric responses from consumers when they actually see an ad.
I have a favorite law, which is Goodhart’s Law. It states that when a measure becomes a target, it stops being a good measure. One of my favorite examples of this was from colonial India, actually in Delhi. Some of you might know the story; it’s referred to as The Cobra Effect. What happened was the local Delhi government had a problem with venomous cobra snakes – an explosion in the snake population, biting people and livestock. So they wanted to come up with a way to lessen the snake population, and they put a bounty on cobras.
You would, you know, bring in the severed head of a snake and get a monetary reward. So what some industrious people started to do was breed cobras. There were these big snake-breeding grounds where people would breed snakes, cut the heads off, and bring them in to the government. The government caught wind of this and shut the program down, and those people had all these cobra snakes, and they just let ’em roam free. So the idea was to lessen the population of snakes, but the target became the measure.
That’s what advertising did with viewability – we bred cobras by saying the measure is viewability, and we started to breed impressions. So you get this explosion of websites and publishers trying to deliver as many impressions as possible without considering what that’s doing to the target, which is selling stuff.
The explosion of ads out there leads to all those statistics – you know, “People are exposed to 10,000 ads per day and they only see this many of them” – that have really done a disservice to the original target, which was to figure out whether someone was seeing the ad or not.
There are a couple of different methodologies for eye tracking that we utilize.
The first one is what gives us our biometric dataset. We have an opt-in passive participant group that is being eye-tracked on a daily basis to help feed our AI models. Through a desktop browser extension and the forward-facing phone camera, we can directly measure eye movement across devices.
The second method is where we work with clients to recruit a panel to specifically measure creative within an in-context environment. We have the ability to recreate all the major social platforms like Instagram, Facebook, and TikTok, as well as platforms like YouTube and major news websites. We will then manually insert the ad units within those lookalike environments and measure the consumer’s visual journey through the eye-tracking tech to produce heat maps, gaze plots, and feature analysis based on those one-to-one eye-tracking results. We’ll add questionnaires at the end to understand things like ad recall and purchase intent through direct response surveys.
We have three major product lines: Attention Review, live measurement, and SPOTLIGHT.
Attention Review is a product that looks at a historical analysis of a campaign. We can go back six to 12 months, take in all of the campaign data for that time period for a brand, and analyze attention over that period of time. It gives a bunch of data in a relatively short amount of time to benchmark where the brand is at with attention and help make planning and buying decisions moving forward.
The live measurement piece, which is powered by LAMP – the Lumen Attention Measurement and Planning Suite – is designed to measure live. We tag and ingest live impression data on an ongoing campaign, and we can look at optimizations and actually measure the effects of those optimizations live in a campaign. The other side of that, on top of the measurement, is the opportunity to actually activate for attention. So we have high-attention PMPs and custom pre-bid algorithms where we’re actually buying for attention ahead of time. That comes with the live measurement as well.
And then, finally, SPOTLIGHT is that kind of second eye-tracking piece where we’re looking at custom creative eye-tracking studies that are conducted in a number of different realms. So we can do cinema, out-of-home, digital out-of-home, print, social media, rich media, you name it. We will do an eye-tracking study on that. So that’s a little bit more of an open-ended component of attention and more of the cognitive side of things to go along with the replicable, digital stuff.
Bill Forelli: We definitely see trends, you know, one of them being the larger the ad size, the more attention it gets – that’s an obvious one. But what’s really interesting that we find on a campaign-per-campaign basis with clients is that we want to understand how attention works for that particular client, and in a lot of cases, for that particular campaign too.
Coke is a good example of a brand that will have vastly different attention criteria than a pharmaceutical company would have. Coke, for a branded campaign, might only need a couple hundred milliseconds of attention for it to reach their outcomes, whereas a pharmaceutical company might need much longer attention on an ad to get across their message. Attention is different for every campaign, for every brand. Some brands have branded campaigns and performance campaigns running at the same time. We’re going to optimize those completely differently, because one’s going to need more time per user, and the other one’s going to need more users in order to perform at its maximum. So being able to communicate directly with brand teams and with clients to help walk them through how to use attention and how to look at it specific to them is a really important thing that we try to instill in our clients right off the bat.
Bill Forelli: If you want to see a bunch of ad tech people sweat, tell them that you’re having a third-party audit done on your data! When Mike told us all what he was doing, everyone in Lumen was just like, “Oh my gosh, Mike, what are you doing?” And he was so confident – rightly so, in hindsight. But yeah, we all were like, “What is going on? Hope, hope this goes well.” And it did go really well.
One of the things that is really interesting about AI right now, and is important to remember with how we deliver data, is that there’s no instant verification of its accuracy. If you go into ChatGPT and type in whatever, “Write me a poem about Lumen in the style of Edgar Allan Poe,” or something, it’s going to deliver something that you can read and go, “Yeah, that’s actually pretty close. That’s pretty good.” Stable Diffusion, you know, “Make me a picture of a blue car.” It’ll do it, and you can look at it and say, “Yeah, that’s pretty accurate.” Or, you know, “That’s not very accurate. It’s got five wheels!” Or whatever. But you have that instant recognition of its accuracy. And that’s something that this PwC audit really did for us: it showed that the amount of data that we’ve collected, and the AI models that we’ve built off of it, are incredibly accurate. What they were able to do is say, “Okay, predict attention here,” and then they actually measured the attention there with actual people. So it was really kind of replicating that “type something into ChatGPT” and using the eyeball test to verify it – but it was comparing the actual attention predictions we were making to what was actually happening.
Bill Forelli: I actually do: there was a team that we were meeting with, and a portion of their campaign started to lag below benchmarks for attention. They were trying to figure out what they could do to help improve their attention on this campaign. So we walked them through it and got all the way down to domain-level attention.
We look at actual domains across metrics, from average time to percent viewed to our APM – attentive seconds per thousand impressions – which is an aggregate of both percent viewed (“Did they look at it or not?”) and average time (“For how long did they look at it?”).
So we went through this domain list, and we pointed out a few domains: “See this domain? It’s serving a lot of impressions, and it has a very low attention score. See this domain? This one has very good attention but is not serving as many impressions. So what you could do is shift some of that budget from the lower-performing domain to the higher-performing domain and shift those impressions over.”
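The reallocation described above can be sketched as a simple rule: move a fraction of impressions from the lowest-attention domain to the highest-attention one. This is purely an illustrative toy (the function, data shape, and 30% fraction are my assumptions, not Lumen's tooling):

```python
def shift_budget(domains: dict, fraction: float = 0.3) -> dict:
    """Illustrative sketch: move `fraction` of the lowest-APM domain's
    impressions to the highest-APM domain.
    domains: name -> {"impressions": int, "apm": float}"""
    best = max(domains, key=lambda d: domains[d]["apm"])
    worst = min(domains, key=lambda d: domains[d]["apm"])
    plan = {name: dict(vals) for name, vals in domains.items()}  # copy the plan
    moved = int(plan[worst]["impressions"] * fraction)
    plan[worst]["impressions"] -= moved
    plan[best]["impressions"] += moved
    return plan

plan = shift_budget({
    "lowattention.example":  {"impressions": 1000, "apm": 200.0},
    "highattention.example": {"impressions": 500,  "apm": 900.0},
})
# 300 impressions move from the low-APM domain to the high-APM domain
print(plan)
```

In practice such a shift would be one input to a media plan rather than an automatic rule, but it captures the logic of the optimization described in the anecdote.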
So they went back and made some adjustments to their media plan, and it only took, I think, two days for them to come back. We jumped on another call with them – not an emergency call – and they could not believe what was happening. They could see in the dashboard their overall attention score going up as a result of those optimizations. That was really fun to see, and it happened in such a short amount of time that it really highlighted how cool it can be when you use attention data to make optimizations and actually see those effects play out.
Bill Forelli: The coolest thing that I’ve seen at Lumen is really the AI algorithms succeeding in the PwC audit. That really opened my eyes to how to better explain what we were doing, in a way that drove a lot of the new marketing materials that we’ve come out with over the last couple of months. I’ve used a lot more AI-generated artwork in our collateral as a maybe not-so-subtle nod to those findings from the PwC audit – to highlight that validation is important to trusting AI-generated data and content. That was a really important thing for me to recognize from a messaging standpoint: we really do need to see it in order to believe it. You asked the question – rightly so – how do people really gauge the accuracy of these models? It’s way easier to eyeball the accuracy of a ChatGPT or Stable Diffusion output; it’s not so easy with attention measurement. That audit really helped tell that story.
I think that attention measurement and technology will really become what viewability data is now. Frankly, I think it’s more of the true story of what we’re trying to do in advertising.
I would love to think that Lumen’s methodology and our research-focused take on that will be a key part of how that evolves. I really do want measurement, and attention measurement, in particular, to be a measure. I keep going back to Goodhart’s Law, but I love it so much because it highlights a lot of, I think, the follies that we can get into in ad tech measurement and so forth. I want the target to be outcomes, and I want that to be the legacy that Lumen with attention has in the industry. I want us to be the company that really made attention a measure again and made the outcomes the target.