CLEF 2018
Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality, and Visualization. 10 - 14 September 2018, Avignon, FRANCE
Call for Papers
Important Dates
Overview

The CLEF Conference addresses all aspects of Information Access in any modality and language. The CLEF conference includes the presentation of research papers and a series of workshops presenting the results of lab-based comparative evaluation benchmarks. CLEF 2018 Avignon is the 9th year of the CLEF Conference series and the 19th year of the CLEF initiative as a forum for information retrieval (IR) evaluation. The CLEF conference has a clear focus on experimental IR as carried out within evaluation forums (CLEF Labs, TREC, NTCIR, FIRE, MediaEval, RomIP, SemEval, TAC, ...), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users, such as children, students, and impaired users, in different tasks (academic, professional, …).
We invite paper submissions on significant new insights demonstrated on IR test collections, on the analysis of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield/TREC/CLEF evaluation paradigm.

Format

Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format: http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines

Two types of papers are solicited:
Topics

Relevant topics for the CLEF 2018 Conference include but are not limited to:
Committee

Conference Chairs:
Patrice Bellot (Aix-Marseille Univ., France)
Chiraz Trabelsi (Univ. of Tunis, Tunis)

Program Chairs:
Josiane Mothe (Univ. de Toulouse, France)
Fionn Murtagh (Univ. of Huddersfield, UK)

Evaluation Lab Chairs:
Jian Yun Nie (Univ. de Montréal, Canada)
Laure Soulier (LIP6, UPMC, France)

Proceedings Chairs:
Linda Cappellato (Univ. of Padua, Italy)
Nicola Ferro (Univ. of Padua, Italy)