LinkedIn has rolled out a new content moderation framework that reorders its moderation queues and cuts the average time to catch policy violations by roughly 60%. Previously, queued content was reviewed on a first in, first out (FIFO) basis, so harmful content could wait a long time before being reviewed and removed, prolonging users' exposure to it. To address this, LinkedIn built an automated framework that uses a machine learning model to score queued content for likely policy violations and move the highest-risk items to the front of the queue, significantly speeding up review.
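The core idea, reordering a review queue by predicted risk rather than arrival order, can be sketched in a few lines. This is a minimal illustration, not LinkedIn's implementation; the item names and scores are hypothetical stand-ins for a real model's output.

```python
import heapq

# Hypothetical queue items: (content_id, model-predicted violation score in [0, 1]).
# The scores are illustrative; in practice they come from a trained classifier.
items = [("post-1", 0.12), ("post-2", 0.91), ("post-3", 0.05), ("post-4", 0.77)]

# FIFO review order: items are handled strictly in arrival order.
fifo_order = [cid for cid, _ in items]

# Prioritized review order: a max-heap on the violation score, so likely
# violations reach reviewers first (heapq is a min-heap, hence the negation).
heap = [(-score, cid) for cid, score in items]
heapq.heapify(heap)
prioritized_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(fifo_order)         # arrival order
print(prioritized_order)  # highest-risk items first
```

Under FIFO, the high-risk "post-2" waits behind an earlier benign post; under prioritization it is reviewed first.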
The new framework uses an XGBoost machine learning model to predict which queued items are likely to violate policies. XGBoost is an open-source gradient-boosting library widely used for classification and ranking, learning patterns from a labeled training dataset. LinkedIn trained the model on past human-labeled decisions from the content review queue and evaluated it on an out-of-time sample, where it outperformed alternative algorithms in both accuracy and processing time.
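An out-of-time sample means the model is evaluated on data from a later period than it was trained on, mimicking deployment on future content. A minimal sketch of such a chronological split, with illustrative dates and labels rather than LinkedIn's actual schema, might look like this:

```python
from datetime import date

# Hypothetical human-labeled review decisions; in the real system each row
# would also carry the content features the classifier is trained on.
labeled = [
    {"date": date(2023, 1, 5),  "label": 1},
    {"date": date(2023, 2, 10), "label": 0},
    {"date": date(2023, 3, 2),  "label": 1},
    {"date": date(2023, 4, 20), "label": 0},
]

# Out-of-time evaluation: train on everything before a cutoff date, test on
# what came after, so metrics reflect performance on genuinely unseen,
# future data rather than a random shuffle of the past.
cutoff = date(2023, 3, 1)
train = [row for row in labeled if row["date"] < cutoff]
test = [row for row in labeled if row["date"] >= cutoff]

print(len(train), len(test))
```

A random train/test split would leak future information into training; the chronological cutoff avoids that, which matters because content and abuse patterns drift over time.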
With the new framework, LinkedIn can make automatic decisions on about 10% of the content queued for review, with an "extremely high" level of precision that exceeds the performance of human reviewers, and the average time to catch policy-violating content has been reduced by about 60%. The prioritization system currently covers feed posts and comments, with plans to expand it across LinkedIn. By reducing users' exposure to harmful content and helping moderation teams handle large volumes more efficiently, the approach may see wider adoption in content moderation. Readers can find further details in LinkedIn's announcement, "Augmenting our content moderation efforts through machine learning and dynamic content prioritization."