Previous posts in this series covered content production and sharing, connections and inventory. This post highlights considerations and key metrics for Activity Feed Ranking. Ranking is important as it links the inventory to consumption and feedback.
UNDERSTANDING ACTIVITY FEED RANKINGS
The goal of an Activity Feed is to highlight the posts users find the most relevant. This is accomplished primarily via rankings that determine the order in which posts appear — and those rankings are driven, at least in part, by your company’s strategy and mission. To properly execute on an Activity Feed ranking system, you must understand the total posts (inventory) available to each user, gather information (signals) about your users and the content they post, and then use those signals to anticipate user behavior (prediction) and determine each post’s importance (relevancy) to each user. A post’s relevancy score will inform where it appears in a given user’s feed.
An effective ranking system must, therefore, include a prediction algorithm that can assign numerical relevancy scores to each post-user pair — for example, whether a user is more likely to enjoy and find relevant a post from their childhood versus one from a celebrity they follow.
A user’s inventory comprises the posts they are eligible to see, from all the friends and publishers they follow. For users who are “inventory-constrained,” or have very little inventory, ranking is unnecessary, because they have the opportunity to consume all of their available content (whether they take that opportunity or not). A user who follows numerous friends, celebrities and other entities, however, will have a much larger inventory — perhaps a few thousand posts per day — and likely can’t consume it all. For these users, ranking via relevancy scores is critical.
Key metrics for inventory
- Amount of inventory available
- Number of connections
- Consumption of available inventory
- Number of posts consumed
- Percent of users who are inventory-constrained
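The metrics above can be computed directly from consumption logs. Below is a minimal sketch of the last metric, using made-up per-user numbers and an assumed 90 percent cutoff for what counts as "inventory-constrained"; both are illustrative placeholders, not real thresholds or data.

```python
# Hypothetical daily inventory vs. consumption per user (illustrative numbers).
users = {
    "alice": {"inventory": 40,   "consumed": 38},   # sees almost everything
    "bob":   {"inventory": 2500, "consumed": 150},  # far more than they can read
    "carol": {"inventory": 60,   "consumed": 55},
}

CONSTRAINED_THRESHOLD = 0.9  # assumed cutoff: consumes >= 90% of inventory

def is_inventory_constrained(u):
    """A user is inventory-constrained if they can plausibly consume
    nearly all of the posts available to them, making ranking unnecessary."""
    return u["consumed"] / u["inventory"] >= CONSTRAINED_THRESHOLD

constrained = [name for name, u in users.items() if is_inventory_constrained(u)]
pct_constrained = 100 * len(constrained) / len(users)
print(constrained)        # ['alice', 'carol']
print(pct_constrained)    # ~66.7
```

In this toy data, ranking matters most for bob, whose inventory dwarfs what he actually consumes.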
Signals comprise all available information about a user and their content preferences, and can help you predict whether the user will engage with a given post. The questions below are examples of such signals. Note, though, that this list is far from comprehensive, and the categories below include hundreds of signals. Product teams should look into all the signals that could drive engagement for their product.
Who posted the content?
One group of signals includes information about the producer of the content — on Facebook, for example, is it from a friend of the user, or a page or group? The more the user has previously interacted with the post’s author (through actions such as likes, comments, tags, clicks, and profile or page visits), the more likely they will be to engage with the post.
- Friend: How close is the friend? How recently did they become friends with the user? Is the friend a “needy” user (that is, do they have a low number of connections)?
- Page: How much interest has the user shown in the page? How recently did they follow and/or like it? Have they changed the settings for the page to “see first” or “get notifications”? If the page is run by a news organization, is that organization local to the user?
- Group: How engaged is the user with the group? When did they last interact with it, and what actions did they take?
What type of content is it?
Facebook’s ranking algorithm shows users more of the content types they typically engage with — if you tend to like or comment on photos rather than status updates, for example, you will be shown more of the former.
- Original vs. non-original: Is the content a personal post from the user’s friend or family member, or a link or re-share? On Facebook’s Activity Feed, original content is more likely to appear at or near the top.
- Format: Is the post a video, text, image or a combination? How long is the text? What is the quality of the image? The length of the video?
- Taxonomy: Is the content social? Informational? Entertaining? Communicative or collaborative? What’s considered interesting and valuable will vary from user to user.
- Other categorizations: Is the content clickbait or false news? Is it “spammy”? On Facebook, such posts may be assigned low rankings, making them less likely to be seen.
When was the content posted?
The more recent the post, the more likely a user is to see it — particularly if they engage with your product frequently. For users who visit less frequently, an Activity Feed may instead prioritize “highlights” such as major life events and big news stories, rather than the most recent posts.
- How recent is the post?
- Is it a duplicate (or authentic) post?
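Recency is often modeled as a decay on a post's score. A minimal sketch, assuming an exponential decay with a hypothetical six-hour half-life (a real system would tune this per user and per content type):

```python
HALF_LIFE_HOURS = 6.0  # assumed: a post loses half its recency value every 6 hours

def recency_score(age_hours):
    """Exponential decay: a brand-new post scores 1.0, and the score
    halves every HALF_LIFE_HOURS."""
    return 0.5 ** (age_hours / HALF_LIFE_HOURS)

print(recency_score(0))    # 1.0
print(recency_score(6))    # 0.5
print(recency_score(24))   # 0.0625
```

For infrequent visitors, the half-life could simply be lengthened so that "highlights" from several days ago still rank well.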
What kind of engagement is the content getting?
The more that a user engages with a post, whether implicitly (time spent) or explicitly (actions such as likes and comments), the more likely they are to have found it valuable — and the more likely that other users may find it valuable, as well. Therefore, Activity Feed rankings will often prioritize posts that are “viral” or have high engagement.
- What kind of engagement is the post getting? Is it specific feedback (likes, comments, reactions, kudos, etc.), or time spent on or hovering over a post? If it is a comment, how long is the comment — does the engagement constitute a true conversation?
- How quickly has the post’s engagement grown?
- Who is engaging with the post, and is it motivating them to produce their own content (for example, via re-share)?
- What type of engagement, and from which users, will motivate the producer to post again?
- Is the engagement higher or lower than other posts (accounting for all other variables)?
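The "how quickly has engagement grown" question above is often captured as an engagement velocity. A minimal sketch, assuming cumulative engagement counts sampled at fixed intervals (an illustrative structure, not a real API):

```python
def engagement_velocity(counts, window_hours=1.0):
    """Growth in engagement per hour over the most recent window.
    `counts` is a time-ordered list of cumulative engagement totals,
    one entry per `window_hours`."""
    if len(counts) < 2:
        return 0.0
    return (counts[-1] - counts[-2]) / window_hours

# A post whose likes jumped from 40 to 340 in the last hour is "going viral"
# relative to one that crept from 300 to 310, despite the similar totals.
print(engagement_velocity([5, 12, 40, 340]))      # 300.0
print(engagement_velocity([250, 280, 300, 310]))  # 10.0
```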
What do we know about the user?
Each user engages differently, based on factors that include gender, age, type of device used, connectivity, etc. To offer the right posts to a given user, it is, therefore, useful to consider this demographic information. For example, a user who views their Activity Feed on an old phone with a weak internet connection is unlikely to have a great experience if served high-bandwidth video.
- What is the user’s demographic information?
- What is their connectivity?
- What device or devices does the user have? What are the characteristics (memory, storage, speed) of those devices?
Once you have captured data on your signals, you can better anticipate what your users are likely to do. Because users’ past behavior is predictive of their future behavior, a machine learning model can determine to a certain degree of confidence not only whether a given user will like a post, but whether they’ll click, comment, share, hide it, or even mark it as spam. Evaluated together, the likelihood of these outcomes can produce a single relevancy score specific to each post-user pair, representing how interested the user is likely to be in the post. When each post in your platform’s inventory has such scores, your sorting algorithm can place them in the order they will appear to each user.
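As a toy illustration of "past behavior predicts future behavior", the sketch below estimates per-action probabilities from raw frequencies in a hypothetical interaction log. The log entries and the `action_probabilities` helper are invented for this example; a production system would train an ML model over many more signals rather than count events.

```python
from collections import defaultdict

# Hypothetical interaction log: (user, content_type, action).
log = [
    ("alice", "photo", "like"), ("alice", "photo", "like"),
    ("alice", "photo", "comment"), ("alice", "video", "hide"),
    ("alice", "photo", "skip"), ("alice", "video", "skip"),
]

def action_probabilities(log, user, content_type):
    """Naive frequency model: P(action | user, content_type) from past behavior."""
    counts = defaultdict(int)
    total = 0
    for u, c, action in log:
        if u == user and c == content_type:
            counts[action] += 1
            total += 1
    return {a: n / total for a, n in counts.items()} if total else {}

print(action_probabilities(log, "alice", "photo"))
# {'like': 0.5, 'comment': 0.25, 'skip': 0.25}
```

These per-action probabilities are exactly the P(like), P(comment), etc. that feed into the relevancy score discussed next.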
These predictions are challenging for multiple reasons. Engagement actions such as likes and comments are only a rough proxy for a user’s true feelings — for example, they may like posts that they don’t truly “like” (such as the news of someone’s death), click on posts that they then find unsatisfying, or hide posts purely to “manage their inbox.” Similarly, following certain signals can lead you to optimize for virality, rather than quality — feeding users a steady diet of “candy” that may eventually turn them off your product.
Therefore, it is important to take care in determining which predictions will inform your relevancy scores, and to what extent. Choosing the right combination is as much art as science.
The relevancy score for each post-user pair should reflect not only the predictions derived from your signals but your Activity Feed’s optimization function. You may decide to optimize for any number of metrics — such as time spent, number of sessions or click-through rate — based on your company’s goals and mission. Facebook’s rankings, for example, are informed by its Activity Feed values, which prioritize friends and family over celebrities and pages. Optimization can also be leveraged to support specific strategies; for example, you may choose to highlight new products over old products, to encourage their growth.
Your optimization function should assign weights to each of your predictions. In the example below, P(like) is the likelihood that a given user will like a given post, and a, b, c, d, and e are the weights assigned to each prediction:
aP(like) + bP(share) + cP(comments) + dP(kudos) - eP(dislike)
Each weight can be determined using tests, heuristics, qualitative methods, etc., based on the metric you want to move (such as time spent). You may also choose to use different weights for specific types of users.
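The weighted sum above can be sketched directly in code. The weights and predicted probabilities below are illustrative placeholders, not tuned values:

```python
# Assumed weights: a share is worth more than a like, a dislike is heavily penalized.
WEIGHTS = {"like": 1.0, "share": 4.0, "comment": 2.0, "kudos": 0.5, "dislike": -8.0}

def relevancy_score(predictions, weights=WEIGHTS):
    """Combine per-action probabilities into one score for a post-user pair.
    Negative weights (e.g. dislike) push a post down the feed."""
    return sum(weights.get(action, 0.0) * p for action, p in predictions.items())

post_a = {"like": 0.30, "share": 0.05, "comment": 0.10, "dislike": 0.01}
post_b = {"like": 0.50, "share": 0.01, "comment": 0.02, "dislike": 0.10}
print(relevancy_score(post_a))  # 0.30 + 0.20 + 0.20 - 0.08 = 0.62
print(relevancy_score(post_b))  # 0.50 + 0.04 + 0.04 - 0.80 = -0.22
```

Note how post_b wins on raw like probability but loses overall because its dislike probability is heavily penalized; sorting posts by this score yields the feed order.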
The Activity Feed game is one of tradeoffs. Should you show more videos than text? Value comments over likes, or content production over content consumption? It’s helpful to understand and develop exchange rates for these tradeoffs (for example, users who watch Y number of videos are X percent less likely to produce their own content).
Explore versus exploit
Should you optimize for (exploit) what you already know about your users’ behavior, or try to learn (explore) what you don’t know? That is, to what extent should you highlight the kinds of posts they’re likely to value, and to what extent should you highlight the kinds of posts they haven’t tried? This is a fundamental question for all ranking algorithms, and there is no simple answer. The explore-exploit tradeoff is particularly challenging when inventory is large: there is too much inventory, but not enough signal to rank all posts with equal predictive power. In these instances, exploiting may cause long-term issues. Having a principled approach is important; otherwise, you may optimize for factors that do not truly maximize engagement. For example, conducting user experience surveys to understand whether users are growing tired of the content currently shown is one good method of determining the balance between explore and exploit at the user level.
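One of the simplest principled approaches to this tradeoff is an epsilon-greedy policy: exploit the top-ranked post most of the time, but occasionally show something unranked to gather new signal. A minimal sketch, with an assumed 10 percent exploration rate:

```python
import random

EPSILON = 0.1  # assumed: explore 10% of the time

def pick_post(ranked_posts, epsilon=EPSILON, rng=random):
    """Epsilon-greedy sketch: usually exploit the top-ranked post,
    occasionally explore a random one to learn what we don't know."""
    if rng.random() < epsilon:
        return rng.choice(ranked_posts)   # explore
    return ranked_posts[0]                # exploit

rng = random.Random(0)  # seeded for reproducibility
posts = ["top_ranked", "mid_ranked", "unranked_new"]
choices = [pick_post(posts, rng=rng) for _ in range(1000)]
print(choices.count("top_ranked") / 1000)  # roughly 0.93: exploit plus some explores
```

Real systems use richer bandit formulations (e.g. adjusting epsilon per user based on survey signals), but the structure is the same: a deliberate, tunable budget for exploration.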
Not enough data
No matter how meticulously you construct your algorithm, there will always be data you don’t have. How does a user’s choice of breakfast influence what they want to read? Did they hide a post because they didn’t like it or simply because they’d finished reading it? Is an active comment thread an indication that people liked a post or that it made them angry? The goal is not simply to model data, but to model people’s behaviors as manifested in data — and people are too complex for any algorithm to comprehensively model. Product teams should try to get additional relevant data to infer people’s interests.
No optimization function is perfect
Similarly, prediction algorithms are designed to optimize toward a given metric or metrics. But such metrics can never fully capture the spirit of a company’s goals and mission — and predictions and relevancy scores will thus never be entirely sufficient. One can run a prediction algorithm on the US Open and assign each player a probability of winning, but in an Activity Feed environment, “winning” is not a discrete, measurable outcome. Ranking algorithms can help predict whether and how a user will interact with a post, but not whether that interaction truly serves your mission.
Virality and clickbait
An Activity Feed environment often favors interaction of any kind, and high-velocity interaction in particular. As a result, “clickbait” posts generally get more distribution than others. Your product team should look for creative ways to dampen this effect. For example, one can identify phrases that are commonly used in clickbait headlines but not in others. One could also look for frequent abuse from the same creator and then take corrective measures.
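The "phrases common in clickbait but not elsewhere" idea can be sketched as a log-odds score per word. The two tiny corpora below are invented for illustration; a real system would learn from large labeled headline datasets:

```python
import math
from collections import Counter

# Tiny illustrative corpora (not real training data).
clickbait = ["you won't believe what happened next",
             "this one trick will shock you"]
normal = ["quarterly earnings report released today",
          "city council approves new budget"]

def phrase_scores(pos_docs, neg_docs, smoothing=1.0):
    """Smoothed log-odds of each word appearing in clickbait vs. normal
    headlines. High scores flag clickbait-associated words to dampen."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    pos_total, neg_total = sum(pos.values()), sum(neg.values())
    return {w: math.log((pos[w] + smoothing) / (pos_total + smoothing * len(vocab)))
               - math.log((neg[w] + smoothing) / (neg_total + smoothing * len(vocab)))
            for w in vocab}

scores = phrase_scores(clickbait, normal)
print(scores["you"] > 0 > scores["budget"])  # True: "you" skews clickbait
```

A post whose headline accumulates a high total score could then have its relevancy dampened before ranking.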
Long-term versus short-term
Ideally, your product should be optimized for the long term, but most algorithms optimize for the short term. For example, notifications may at first bring users back to their Activity Feeds more frequently, but eventually frustrate those users and make them less likely to return to your product. While long-term metrics are often more difficult to measure and optimize for, they can be very useful in understanding the ultimate impact of product decisions. Use such insights to amplify posts whose engagement offers long-term benefits (for example, the wedding photos of a user’s close friend).
User experience metrics
Satisfaction surveys, net promoter scores, and qualitative feedback are useful for driving strategy, but difficult to optimize for. The primary reason is that this data is typically sparse, not available in real time for product optimization, and not representative of the entire population (requiring further bias corrections). Therefore, look for measurable proxy metrics within your product that correlate with these survey-type metrics.
Ranking types of content
Optimizing for certain metrics will favor certain types of content — for example, optimizing for time spent will lead to a bias toward video posts, which generally take longer to consume than text posts. Conversely, optimizing for number of posts viewed will emphasize text. To address this, find effective ways of normalizing the data to correct for the bias. Also, look at your product strategically — do you expect the future of your product to be videos or texts?
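One common normalization is to z-score each post's engagement within its own content type, so a standout text post can compete with an average video post. A minimal sketch with invented engagement-time samples:

```python
from statistics import mean, pstdev
from collections import defaultdict

# Illustrative time-spent samples (seconds), grouped by content type.
# Videos naturally accumulate more raw time, biasing unnormalized ranking.
samples = [("video", 120), ("video", 90), ("video", 150),
           ("text", 20), ("text", 35), ("text", 25)]

def normalize_by_type(samples):
    """Z-score each post within its content type, removing the
    systematic advantage of longer-to-consume formats."""
    by_type = defaultdict(list)
    for ctype, value in samples:
        by_type[ctype].append(value)
    stats = {t: (mean(v), pstdev(v)) for t, v in by_type.items()}
    return [(t, (v - stats[t][0]) / stats[t][1]) for t, v in samples]

for ctype, z in normalize_by_type(samples):
    print(ctype, round(z, 2))
```

After normalization, the 35-second text post (well above its type's average) outranks the 120-second video post (exactly average for videos), even though its raw time spent is far lower.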
- The goal is to model not data, but people’s behavior as manifested in data; people are too complex for any algorithm to fully model.
- Your platform’s ability to recommend the right posts to the right users in the right order will improve over time. The greater your inventory, signals, and predictive ability, the more relevant your posts will be. As a product team, think about ways in which you can improve each of these components.
This work is a product of Sequoia Capital’s Data Science team. Chandra Narayanan and Hem Wadhar wrote this post. Please email firstname.lastname@example.org with questions, comments and other feedback.