20 November 2019
Federal Ministry for Digital and Economic Affairs, Vienna, Austria
Feedback from stakeholders should help to further develop this tool, as well as two other tools currently under development within the project: a browser plug-in for citizens and a dashboard for fact-checking journalists and policymakers. The browser plug-in will aim to provide users with misinformation ratings of social media posts, along with corrective information collected from several fact-checking sources. The tool crawls a user's Twitter timeline, labels each tweet as blurry or not, explains the reason when a tweet is labeled blurry, and shows the tweet's credibility score based on the threshold approach of the aforementioned MisinfoMe tool. The dashboard for fact-checking journalists and policymakers will aim to detect, track, and predict the spread and evolution of misinformation on the web.
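The plug-in's labelling step described above can be sketched as a simple threshold rule. This is an illustrative assumption only: the function, score range, and threshold value below are hypothetical and do not reproduce the actual Co-Inform or MisinfoMe implementation.

```python
# Hypothetical sketch of threshold-based tweet labelling.
# Names, the [-1, 1] score range, and the 0.5 threshold are assumptions.
from dataclasses import dataclass


@dataclass
class TweetLabel:
    credibility: float  # assumed score in [-1, 1]
    label: str          # "credible", "blurry", or "not credible"
    reason: str         # short explanation shown to the user


def label_tweet(credibility: float, threshold: float = 0.5) -> TweetLabel:
    """Label a tweet from its credibility score using a symmetric threshold."""
    if credibility >= threshold:
        return TweetLabel(credibility, "credible", "score above threshold")
    if credibility <= -threshold:
        return TweetLabel(credibility, "not credible", "score below negative threshold")
    return TweetLabel(credibility, "blurry", "score in the uncertain band around zero")
```

A tweet with a score near zero would thus be flagged as blurry and shown with the reason string, while clearly scored tweets pass through with a definite label.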
During the workshop, the participants discussed why such artificial intelligence tools should be developed, whether to build trust, to make people think twice while reading news items, or for reasons of transparency. The ranking was conducted using the DecideIT multi-criteria decision analysis tool. The participants also discussed the functionalities of the tools and took part in decision-making experiments. The goal was to collect the perceptions of three stakeholder groups, i.e., journalists/fact-checkers, citizens, and policymakers, on various functionalities and features of artificial intelligence tools for misinformation detection and prevention.
Photos from the workshop
How does the MisinfoMe tool work?
Assessments of a profile (on Twitter, Facebook, or a website) are derived from published reports, for example, when a fact-checker has reviewed a tweet from the selected profile or when the profile appears on public lists of misinforming accounts. The functionality of MisinfoMe is based on the URLs a profile shares and on fact-checkers' assessments of those URLs.
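The aggregation idea, scoring a profile from fact-checkers' verdicts on the URLs it has shared, can be sketched as follows. All names, the verdict vocabulary, and the averaging scheme are assumptions for illustration; the real MisinfoMe service defines its own data model and API.

```python
# Illustrative sketch only: aggregate fact-checkers' verdicts on the URLs
# a profile has shared into a single credibility score.
# The verdict labels and [-1, 1] scoring are assumptions.
from statistics import mean

# Map each fact-checker verdict to a numeric score.
VERDICT_SCORES = {"accurate": 1.0, "uncertain": 0.0, "inaccurate": -1.0}


def profile_credibility(url_assessments: dict[str, list[str]]) -> float:
    """Average verdict scores over all assessed URLs shared by the profile.

    url_assessments maps each URL to the list of fact-checker verdicts
    published for it. URLs with no assessments are ignored; a profile
    with no assessed URLs gets a neutral score of 0.0.
    """
    per_url = [
        mean(VERDICT_SCORES[v] for v in verdicts)
        for verdicts in url_assessments.values()
        if verdicts
    ]
    return mean(per_url) if per_url else 0.0
```

For example, a profile that shared one URL rated "accurate" and another rated "inaccurate" twice would average to 0.0, i.e., neutral under this sketch.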
Last edited: 25 November 2019
International Institute for Applied Systems Analysis (IIASA)
Schlossplatz 1, A-2361 Laxenburg, Austria
Phone: (+43 2236) 807 0 Fax: (+43 2236) 71 313