In recent years, social media and online social networking sites have become a major disseminator of false facts, urban legends, fake news, or, more generally, misinformation. To overcome this problem, online platforms are, on the one hand, empowering their users—the crowd—with the ability to evaluate the content they are exposed to and, on the other hand, resorting to trusted third parties to fact-check stories. However, given the noise in the evaluations provided by the crowd and the high cost of fact-checking, these measures require careful reasoning and smart algorithms. In this talk, I will first describe a modeling framework based on marked temporal point processes that links noisy evaluations provided by the crowd to robust, unbiased, and interpretable notions of information reliability and source trustworthiness. Then, I will introduce a scalable online algorithm, CURB, that selects which stories to send for fact-checking, and when to do so, in order to efficiently reduce the spread of fake news and misinformation with provable guarantees. Finally, I will show the effectiveness of our modeling framework and our algorithm using real-world data gathered from Wikipedia, Stack Overflow, Twitter, and Weibo. This talk includes joint work with Behzad Tabibian, Jooyeon Kim, Isabel Valera, Mehrdad Farajtabar, Le Song, Alice Oh and Bernhard Schoelkopf.
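To give a flavor of the modeling tool the abstract mentions, the sketch below simulates a marked temporal point process with a self-exciting (Hawkes) intensity via Ogata's thinning method. This is a generic illustration, not the authors' actual model: the exponential kernel, the parameter values, and the binary mark (standing in for a crowd evaluation that flags content) are all assumptions made here for concreteness.

```python
import math
import random


def simulate_marked_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a marked temporal point process on [0, horizon].

    Intensity: lambda(t) = mu + sum over past events s of alpha*exp(-beta*(t-s)),
    i.e. a Hawkes process with exponential kernel. Each accepted event carries
    a binary mark (illustrative stand-in for a noisy crowd evaluation).
    Sampling uses Ogata's thinning: the intensity decays between events, so
    its value at the current time upper-bounds it until the next event.
    """
    rng = random.Random(seed)
    events = []  # list of (time, mark) pairs
    t = 0.0
    while t < horizon:
        # Upper bound on the intensity from now until the next event.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s, _ in events)
        t += rng.expovariate(lam_bar)  # candidate event time
        if t >= horizon:
            break
        # True intensity at the candidate time (<= lam_bar, since it decayed).
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s, _ in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            mark = 1 if rng.random() < 0.3 else 0  # illustrative mark law
            events.append((t, mark))
    return events
```

With `alpha / beta < 1` the process is stable (each event spawns fewer than one expected offspring), which is why the sketch uses `alpha = 0.8`, `beta = 1.0` in the test below.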
Manuel Gomez Rodriguez
Max Planck Institute for Software Systems