The Facebook Feed (previously known as News Feed) is the most powerful editor the world has ever seen. This AI editor curates a front page that consumes on average 33 minutes of daily attention, significantly shaping the news diet of 2.89 billion people.
With such power at its fingertips, I'm interested in what really drives the decisions of the Facebook Feed AI: what is it actually trying to accomplish? How does it decide what makes the headlines?
AIs are designed with certain goals when they are first created, and those goals cast a long shadow over how they work.
So I've pieced together the original thinking behind Feed and traced how it has been tweaked over time into what it is today.
At the dawn of Feed was the idea that "it was like a newspaper that pulled together the best bits of what had changed in the social network around you," recalls Chris Cox, part of the original product team. Mark Zuckerberg articulated this in a 2006 blog post as "being able to know what's going on in your friends' lives."
The success of the resulting news feed was then judged against community feedback. Facebook asked users: "of the 2,000 stories you could have seen, were these top stories the most important that you cared about that day?" The AI's job was to pick and prioritise the stories of the day as accurately as possible, a pretty close match to the job description of a human newspaper editor.
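To make that job concrete, here's a minimal sketch of the ranking problem as described: score every candidate story and keep the best few. The Story fields, the scoring signal, and the cutoff are my own illustrative assumptions, not Facebook's actual system.

```python
from dataclasses import dataclass

@dataclass
class Story:
    story_id: str
    predicted_relevance: float  # hypothetical model score: how much this user cares (0..1)

def rank_feed(candidates: list[Story], top_k: int = 25) -> list[Story]:
    """Pick the top_k stories out of the ~2,000 a user could have seen.

    A real ranker combines many predicted signals per story; a single
    'predicted_relevance' number stands in for all of them here.
    """
    return sorted(candidates, key=lambda s: s.predicted_relevance, reverse=True)[:top_k]
```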
Over the years, the wider Facebook product changed, notably with the inclusion of media articles and marketing content. These crowded friends' news out of the feed, so much so that Zuckerberg announced a shift in a 2018 blog post: since the balance had leaned too far toward public content, Feed's goal would be switched from "helping you find relevant content" to "helping you have more meaningful social interactions."
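One way to picture that switch is as a reweighting of the ranking score. A minimal sketch, assuming hypothetical per-story signals and weights (none of these names or numbers come from Facebook):

```python
# Hypothetical predicted signals for one story; the names and weights
# are illustrative assumptions, not Facebook's published model.
def score_relevant_content(story: dict) -> float:
    # Old goal: "helping you find relevant content" -- reward predicted consumption.
    return 1.0 * story["p_click"] + 1.0 * story["p_watch"]

def score_meaningful_interactions(story: dict) -> float:
    # New goal: "helping you have more meaningful social interactions" --
    # reward predicted back-and-forth between people, especially friends.
    friend_boost = 2.0 if story["from_friend"] else 1.0
    interactions = 3.0 * story["p_comment"] + 2.0 * story["p_share"]
    return friend_boost * interactions + 0.5 * story["p_click"]
```

Same candidate stories, different editor: what floats to the top changes because the objective changed, not the content.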
It's a lot like a newspaper publisher firing their longstanding editor after 12 years and hiring a new one with a different angle. It was, and still is, a risk: nudging people to interact more with each other runs against our learned habit of simply consuming what we watch and read.
This bias can also have unwanted side effects. For example, Zuckerberg highlighted that "people engage disproportionately with more sensationalist and provocative content", which was giving the AI a bad (if not criminal) name, not least in countries like Myanmar. The AI, left unchecked, was prioritising the borderline content that generated the most social interactions. Prohibited content like hate speech, for example against Rohingya Muslims, rises to the top even as it triggers a return volley of social condemnation.
The Feed AI has now been adjusted to deprioritise content the closer it gets to the borderline (see graphic below), but inevitably the threat remains: selecting responsible news headlines based on the opinion of the crowd is a difficult problem to solve.
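A minimal sketch of that adjustment, assuming a hypothetical classifier that estimates how close a story is to violating policy (the shape of the penalty curve is my illustration; Facebook hasn't published the exact formula):

```python
def borderline_penalty(p_violation: float) -> float:
    """Multiplier applied to a story's engagement score.

    p_violation is a hypothetical classifier's estimate (0..1) that the
    story breaks policy. Content judged over the line is removed outright;
    this penalty handles the region just below it, where engagement
    would otherwise peak.
    """
    # Discount super-linearly as content nears the borderline, so the
    # engagement uplift of provocative posts is cancelled out.
    return max(0.0, 1.0 - 1.5 * p_violation ** 2)

def adjusted_score(engagement_score: float, p_violation: float) -> float:
    return engagement_score * borderline_penalty(p_violation)
```

Under this sketch, a story the classifier puts at p_violation = 0.8 keeps only 4% of its raw engagement score, inverting the curve on which engagement rises as content approaches the borderline.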
Perhaps this difficulty explains the launch in January last year of Facebook News, a "hybrid media" news product curated mainly by human editors but offering limited automated personalisation based on topics. A Facebook spokesperson explained that it "offers users a news experience that delivers a mix of both a personalised and curated experience." Coupled with Facebook's removal of the word "News" from Feed's name last month, Facebook's direction of travel is clear: take national and international issues off everyone's AI-curated home page and siphon them off into a more human-controlled environment.
It seems the world’s most powerful news editor has had its wings clipped, for now.