InScope: Bias in ‘big data’

Conor Marshall is an Account Executive for the Social & Innovation team in OMD Create Melbourne, using data and insights to deliver high impact messaging for brands in the social space.

Since the dawn of advertising, media agencies have longed to deliver the best possible solutions for clients. Developing a deep understanding of a client’s target audience, and of their media consumption, is key to delivering campaigns that best suit client needs. Newer technologies, as well as developments in digital media, have provided new environments in which advertisers can reach consumers, whilst also providing increased access to consumer data.

The volume of data now available through digital media far surpasses what humans can manually process and plan against. Enter the age of the algorithm – AI and, more specifically, machine learning. At its essence, machine learning is pattern recognition by a computer that can learn from data and adapt independently. Drawing on previous computations, the computer is able to produce reliable, repeatable decisions and results. Machine learning is the processing powerhouse behind our everyday applications and technologies. It powers recommendation systems like those on Netflix and Spotify, search engines and social-media feeds, and even voice assistants such as Siri and Alexa. But machine learning is also behind key publishers’ ad delivery algorithms, including those of Facebook, Google and Twitter.
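To make that “pattern recognition” idea concrete, here is a minimal, hypothetical sketch (not any publisher’s real system): a simple model is fitted to past ad impressions and their outcomes, then reused to score new impressions automatically. The feature names and numbers are invented purely for illustration.

```python
# A minimal, hypothetical sketch of machine learning as pattern recognition.
# A model "learns" from previous impressions and then makes repeatable
# predictions on new ones without further human input. All data is invented.
from sklearn.linear_model import LogisticRegression

# Past impressions: [hour_of_day, scroll_depth] and whether the user clicked.
X_past = [[9, 0.2], [21, 0.9], [13, 0.4], [22, 0.8], [8, 0.1], [20, 0.7]]
y_past = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_past, y_past)  # learn the pattern in previous computations

# New impressions are scored with the learned pattern, no manual planning needed.
print(model.predict_proba([[19, 0.85]])[0][1])  # estimated click probability
```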

With these machines being fed such large amounts of data, constantly optimising ad delivery and learning more about each user, is it possible for these artificially intelligent algorithms to develop a human-like bias?

Facebook, and many other leading companies, are already taking steps to prohibit advertisers from targeting based on their own bias or on certain protected demographics (e.g. race, religion, socio-economic status). However, ad delivery algorithms can inadvertently optimise along these protected demographics because of the algorithm’s generalisation of profile information and previous behaviour. An often-used example is an employer targeting a recruitment ad to both women and men; over time, the ad delivery algorithm identifies that one gender is engaging more with the ad and begins to serve it primarily to that gender.
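The toy simulation below illustrates that feedback loop. It is an assumption-laden sketch, not Facebook’s or any platform’s actual delivery algorithm: two audience groups start with an even split of impressions, the optimiser re-weights delivery each round purely on observed clicks, and a small difference in engagement compounds into a heavily skewed audience.

```python
# Toy feedback-loop simulation (not any platform's real algorithm): delivery
# starts even, but each round the optimiser shifts impressions toward the
# group that clicked more, so a small engagement gap compounds over time.
import random

random.seed(0)
click_rate = {"group_a": 0.06, "group_b": 0.05}  # hypothetical engagement rates
share = {"group_a": 0.5, "group_b": 0.5}         # initial even split of delivery

for step in range(20):
    # Serve 1,000 impressions per round, split according to the current shares.
    clicks = {g: sum(random.random() < click_rate[g]
                     for _ in range(int(1000 * share[g])))
              for g in share}
    total_clicks = sum(clicks.values()) or 1
    # Re-weight the next round's delivery purely by observed clicks.
    share = {g: clicks[g] / total_clicks for g in share}

print(share)  # one group typically ends up with the vast majority of impressions
```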

This was seen in a recent study led by Muhammad Ali and Piotr Sapiezynski at Northeastern University (Boston, USA). The study found that otherwise identical Facebook ads, with slight adaptations to creative, can have a significant impact on the audience reached. This included women being delivered a greater volume of ads for pre-school teachers and secretaries, while minorities were served more of the ads for janitors and taxi drivers.

Leaders across the digital space are looking to root out and fix these issues within their ad-serving algorithms. Many have adopted a human-first solution, with engineers and data scientists ‘teaching’ the machine what bias looks like, how to recognise it, and when to block delivery. However, attempts to combat bias can themselves be biased. In the same way that the ‘big data’ feeding into the machine can be contaminated by the inequalities and biases of society, the humans developing the underlying systems and review processes can also inadvertently influence the data with their own bias.
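As a hedged illustration of what such a human-designed guardrail might look like, the sketch below flags an ad whose delivery has skewed past a chosen threshold across groups. The group labels and the 20-percentage-point threshold are assumptions for illustration; choices like that threshold are exactly where human judgement, and human bias, re-enters the system.

```python
# A hypothetical guardrail: flag an ad when delivery skews past a threshold.
# The groups and the 0.2 threshold are illustrative assumptions; picking them
# is itself a human judgement about what counts as "too skewed".
def delivery_skew(impressions_by_group: dict[str, int]) -> float:
    total = sum(impressions_by_group.values())
    shares = [count / total for count in impressions_by_group.values()]
    return max(shares) - min(shares)

def should_flag(impressions_by_group: dict[str, int], threshold: float = 0.2) -> bool:
    return delivery_skew(impressions_by_group) > threshold

print(should_flag({"women": 8200, "men": 1800}))  # True: 82% vs 18% delivery
print(should_flag({"women": 5300, "men": 4700}))  # False: roughly even
```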

As digital and social media spend continues to rapidly increase and evolve, it is important for all media professionals, publishers and advertisers to be aware of the implications of ad delivery algorithms and machine-learned optimisation. A greater understanding of how these systems work will better inform strategies that prevent ‘big data’, and human-learned bias, from perpetuating the inequalities and injustices of society, without compromising the effectiveness of tailored media campaigns.
