In our third episode we speak with Chloe Grutchfield, Co-founder of RedBud and Senior VP Product at Sourcepoint. Chloe’s incredible career spans various technical and product-focused roles at leading companies such as Acxiom, Telefonica, and Verve, before she co-founded RedBud alongside business partner Rhys Denny in 2018.
Today we’re going to be talking about the wonderfully complex worlds of data protection, brand safety and GDPR, plus picking Chloe’s brain on some of the biggest trends we’re witnessing today.
Brand safety is going through a seismic shift – it has to be broader to incorporate privacy and data ethics.
Brand safety tools are now under the microscope and are increasingly required to be smarter when analysing the context of a website. Advertisers cannot risk losing precious ad space because brand safety tools fail to understand the context behind the ad or the website it sits on.
Q1: In the EU, there is still a considerable lack of awareness regarding the management of personal information. From your lens at Sourcepoint and during your time spent building RedBud, how have you seen the attitude of publishers and brands change to tackle the regulations set by the GDPR?
A: I’ve seen a lot of change. Rhys and I set up RedBud because we both lost our jobs to GDPR. The company we were working for decided it was too risky for them to continue operating in Europe. We were children of GDPR. We started RedBud at the same time the IAB TCF framework launched in the market. At the time, no one knew what this framework was all about. No one knew how to interpret the regulations. It felt like all of the publishers implemented consent management platforms (CMPs) quickly as a vanity project.
Then they had to catch up and ensure they had the proper resources internally to manage the CMPs and manage the requirements of the new law. We’ve seen big changes in terms of focus from implementing something and checking a box to making sure the implemented CMP works hand in hand with the technology they have on their website.
For example, two and a half years ago, you would have CMPs which would say, “Hey, we’re dropping cookies and this is why we drop cookies”. Accepting or rejecting cookies didn’t have a big impact on what happened on the website. Cookies were still being set, and storage and information still accessed, before the user had consented.
Fast forward two and a half years to now. Most sites have implemented a CMP which works hand in hand with all the technology they have on their website. They withhold every script on the page from loading until the user has given consent.
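The queue-until-consent pattern Chloe describes can be sketched roughly as follows. This is a minimal illustration in Python, not any CMP’s actual code — real CMPs do this in the browser (for instance via the IAB TCF API) — but the core idea is the same: vendor tags are held in a queue and nothing fires before the user decides.

```python
class ConsentGate:
    """Holds back vendor tags until the user grants consent (illustrative sketch)."""

    def __init__(self):
        self._granted = False
        self._pending = []  # tags queued before the consent decision

    def load_tag(self, fire):
        """Fire a vendor tag now if consent exists, otherwise queue it."""
        if self._granted:
            fire()
        else:
            self._pending.append(fire)

    def grant_consent(self):
        """User accepted: flush every queued tag, in order."""
        self._granted = True
        while self._pending:
            self._pending.pop(0)()

    def reject_consent(self):
        """User rejected: drop the queue so nothing ever fires."""
        self._granted = False
        self._pending.clear()


fired = []
gate = ConsentGate()
gate.load_tag(lambda: fired.append("analytics"))  # queued, not fired
gate.load_tag(lambda: fired.append("ad-pixel"))   # queued, not fired
assert fired == []                                # nothing runs before consent
gate.grant_consent()
assert fired == ["analytics", "ad-pixel"]         # everything fires after consent
```

The contrast with the earlier CMPs Chloe mentions is that they called `fire()` immediately and only recorded the choice afterwards.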
Publishers have invested time and resources internally to get teams focused on privacy. We’ve seen publishers like The Guardian put privacy first, and use it almost as a USP to attract agencies that are buying across websites who don’t put privacy first.
Q: Placing an ad in a safe environment is a challenge for all companies. How do you see brand safety – and ensuring the suitability of ad placement – evolving over the next 12 months? How can brands gain the confidence to run ads on websites with verified consent?
A: There are several angles to this question. The first angle: brand safety is going to be broader, and it’s going to encompass privacy and data ethics. For brands, it’s about advertising on websites that use data in a way that respects the consumer’s choice. These websites give the user transparency on how their data is going to be used and respect their decisions.
Secondly, brand safety tools in the past have been coarse. For example, the New York Times homepage recently had a brand safety holding ad. It wasn’t a branded ad. The brand safety tool thought it was COVID-related content, and therefore not brand-safe. Instead of the ad, it showed a brand safety image. This was crazy. It’s the New York Times homepage.
In the next 12 months, those brand safety tools have to continue to be smarter by looking at the context of the website. Some of them are already doing this, but the trend will continue.
Thirdly, you have publishers like Reach who have created their own content verification and brand safety tools. There’s going to be more AI used in brand safety and those tools are going to have to be smarter to not penalise the publisher, or result in agencies losing precious advertising opportunities like the New York Times homepage.
Q: There are different types of inventory in the advertising spectrum that range from very safe models to more high-risk open auction approaches. What steps do brands need to take in order to combat this complex space?
A: Most of the steps brands will need to take will be in open auctions. In the open auction area, there’s a large number of longtail publishers, who are small and part of networks. They have lots of different activities, and they do sponsorships and advertising as a side project. These longtail publishers don’t have a relationship with agencies. They’re not as deep in the weeds of the ad tech ecosystem as some of the larger publishers, agencies and ad tech vendors.
It will be important for agencies to scrutinise the longtail, have the appropriate brand safety tools and verification tools to make sure they know where their ads are going. It is going to be important to know where your money is going and make sure that you have the checks in place to promote brand-safe advertising.
Q: The demise of the third-party cookie has left many in our industry somewhat lost in how to get ready for this brand-new world. As solutions such as contextual targeting come back to centre stage, how do you see the relationship between brands and publishers changing over the next 12-24 months?
A: Advertisers and publishers have to continue working closely together. Agencies and publishers have been talking about the deprecation of cookies for a long time. Multiple publishers that would never have worked together in the past are working together to brainstorm. They’re testing different approaches to solve the third-party cookie deprecation challenge. They’re even embracing standardised audience ID categories. Agencies will have to continue leaning in a little bit more.
On the publisher side, it will be paramount they think about creating something simple for agencies to buy. Agencies are used to buying on Facebook and Google, where it’s easy to target a particular audience with scale. What’s going to be important for publishers is to enable the same thing and continue embracing standardised categories of content or standardised audiences.
When agencies need to buy across a number of premium publishers, they will be able to target mums, or new mums, and have the confidence they can use the same targeting criteria across a number of different publishers. We don’t want one definition on one website and a slightly different category on another. Agencies want something simple to buy.
Q: AI has become a buzzword in digital advertising lately, especially with advancements in contextual targeting to help marketers trace the environment and context of their ad placements. From your point of view, do you believe the advancements in AI and machine learning are good enough to help brands feel safe in our current climate?
A: One of the key things we’ll need to do is solve for the bias that we may have in AI. We have to ensure whatever training data we’re using, we don’t introduce any bias.
I have a great example. In Hungarian, there is no notion of gender. So a Hungarian sentence reads like “neutral gender is doing the dishes”, “neutral gender is running the company”. However, when you put it through Google Translate, the translation adds “she” for words related to kids, family and cleaning, and “he” for words related to running your business and exercising.
This is all due to the training set in the model being used. There’s a bias, it’s stereotypical. For brand safety, it’s going to be key to look at the data being used to ensure we avoid those instances.
In a brand safety context, how would this appear? What I’ve heard multiple publishers mention with football is that articles will use words like “attacker”, “secure” and “attacking”. When you combine football and attack, it’s fine, it makes sense. You’re not going to block the content; it’s not violent content.
But AI models have to be elaborate enough to know an attacker in the context of a football article is not something to block, and you can continue advertising against it. AI is going to have to remove any bias. It is going to have to be cleverer to avoid those silly blockages of content.
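The football example above can be made concrete with a toy sketch. This is not any vendor’s actual model — the word lists and functions are invented for illustration — but it shows the difference between a context-free keyword blocklist and even a crude contextual check:

```python
# Hypothetical word lists for illustration only.
SPORTS_CONTEXT = {"football", "goal", "striker", "match", "league", "defender"}
RISKY_TERMS = {"attack", "attacker", "attacking"}

def naive_block(text: str) -> bool:
    """Old-style tool: block on any risky keyword, ignoring context."""
    words = set(text.lower().split())
    return bool(words & RISKY_TERMS)

def contextual_block(text: str) -> bool:
    """Smarter tool: only block risky terms when no sports context is present."""
    words = set(text.lower().split())
    if not (words & RISKY_TERMS):
        return False
    return not (words & SPORTS_CONTEXT)

article = "the attacker scored a goal in the football match"
assert naive_block(article) is True        # keyword tool wrongly blocks the page
assert contextual_block(article) is False  # context-aware tool serves the ad
```

Production systems use far richer signals than co-occurring keywords, but the failure mode Chloe describes — and the New York Times homepage incident earlier in the interview — is exactly the `naive_block` behaviour.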
Q: We’re going through such a seismic shift right now, and it’s fair to say there’s not a “one size fits all” solution out there. What advice would you give to businesses when trying to protect their brand whilst adhering to data protection regulations?
A: There are several things agencies and brands need to be doing. Much of it is vetting. They’re going to have to vet the vendors they work with. What I mean by vetting is not vetting with a spreadsheet completed in good faith. Sometimes the people who fill in those sheets, those RFIs, might not have the information, and they want to win that contract. They’re going to say what they want the company to be doing, and sell the future.
It’s going to be important for agencies and brands to engage in the proper vetting of vendors. This includes looking at how vendors are behaving on websites, looking at whether they’re potentially doing things that are not compliant with privacy regulations. Agencies have to vet the longtail of publishers they advertise on. We built our privacy lens with exactly those two objectives in mind: to help brands and agencies vet the websites they’re advertising on, the apps they’re advertising on, and the vendors they work with.
I’m going to use a particular example; I remember one of our clients was having problems with their data being made available in the buying platforms without their knowledge. It turns out a small audience monetisation company was piggybacking on another vendor to drop cookies and link those cookies to behavioural information about the user. At no point was that vendor ever mentioned on our client’s site as an approved vendor. Yet that company was selling our client’s data on the biggest buying platforms, including Google’s, and in all of the big DSPs, branded with our client’s name. The data was completely non-compliant because the vendor had never been mentioned in the consent management platform.
They were building those audience profiles from data they had gotten by piggybacking off technology across many websites, and agencies were buying those segments. They were available on the buying platforms. It’s going to be important that agencies pay attention to where they’re buying media and what data they’re using, and that they vet that data. They may think “It’s easy, it’s in my campaign tool. I tick a box and there you go, I’m targeting an audience.” They’re going to have to be doing a bit more than that in their own vetting of the data they’re leveraging.
Q: Let’s pivot to the world of the publisher for a bit. When you started RedBud in 2018, your business model predominantly focused on supporting the publisher in the wake of the GDPR launch. Since starting RedBud, have you seen a change in mindset from publishers? How has progress been made to combat data privacy from a publisher lens?
A: Publishers have made huge progress. They have teams looking after privacy, and a dedicated person in charge of the consent management platform, ensuring it is always up to date with the vendors they’re working with.
We have particular publishers we’ve been on a journey with for a number of years. We track their progression month after month on a number of criteria, like how many unnecessary cookies they drop before consent, with the aim of having only strictly necessary cookies. They’ve made huge progress. They have invested so much time and resources in having privacy-first experiences.
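The month-over-month metric described here — unnecessary cookies dropped before consent — could be sketched as a simple allowlist diff. The cookie names and the `STRICTLY_NECESSARY` set below are made up for illustration; a real audit would crawl the live site before any consent interaction and classify what it observes.

```python
# Hypothetical allowlist of cookies a site may set without consent.
STRICTLY_NECESSARY = {"session_id", "csrf_token", "consent_state"}

def unnecessary_before_consent(observed_cookies):
    """Return, sorted, the cookies set pre-consent that are not strictly necessary."""
    return sorted(set(observed_cookies) - STRICTLY_NECESSARY)

# Cookies observed on a page load before any consent choice (invented names).
observed = ["session_id", "ad_tracker_id", "consent_state", "retarget_uid"]
flagged = unnecessary_before_consent(observed)
assert flagged == ["ad_tracker_id", "retarget_uid"]  # the number tracked month after month
```

The goal Chloe describes is driving that flagged list to empty, so only strictly necessary cookies remain pre-consent.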
Q: Finally, I’d love to find out a little about you and your role. Without undervaluing others in the industry, your role as SVP Product for a data privacy software company sounds very intense, with everything going on. What do you think is key for success in a role such as yours?
A: There are several things that are critical. I like to see someone in a product role as a mini CEO. To be successful in a product role, you need to be able to do a variety of things: build a roadmap, prioritise items in the roadmap, understand finances, understand the legal aspects of products and have a privacy-by-design mindset when building products. It requires being comfortable with a variety of things. When you’re in a product role, you have to work with multiple stakeholders who may be very technical, like engineers, or not technical at all, like sales.
It’s important to be able to explain concepts at different technical levels, be good at building rapport and be a great team player. I’ve said several things: being able to switch between a variety of tasks, working well in a team and adapting your narrative to whichever stakeholder you’re speaking with. At school, I loved every subject: Maths, English, French and Science. There was not one subject I didn’t like. That’s why I’m so comfortable in, and enjoy, the product role. I like my job because I get to do a variety of things, like I did at school.