Call for Papers

Important Dates

  • 7 May 2018: Title, authors and abstract upload
  • 14 May 2018: Submission of Long and Short Papers
  • 11-15 June 2018: Notification of Acceptance
  • 22 June 2018: Camera Ready Copy due
  • 10-14 September 2018: Conference

Overview

CLEF 2018 Avignon is the 9th year of the CLEF Conference series and the 19th year of the CLEF initiative as a forum for information retrieval (IR) evaluation. The CLEF conference has a clear focus on experimental IR as carried out within evaluation forums (CLEF Labs, TREC, NTCIR, FIRE, MediaEval, ROMIP, SemEval, TAC, ...), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users (such as children, students, and impaired users) in different tasks (academic, professional, …). We invite paper submissions on significant new insights demonstrated on IR test collections, on the analysis of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield/TREC/CLEF evaluation paradigm.

Format

Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format:
http://www.springer.com/it/computer-science/lncs/conference-proceedings-guidelines
Two types of papers are solicited:
  • Long papers: 12 pages max., reporting complete research work.
  • Short papers: 6 pages max., for position papers, new evaluation proposals, developments and applications, etc.
Papers will be peer-reviewed by at least three members of the program committee. Selection will be based on originality, clarity, and technical quality. Papers must be submitted in PDF format via EasyChair: https://www.easychair.org/conferences/?conf=clef2018

Topics

Relevant topics for the CLEF 2018 Conference include but are not limited to:
  • Information Access in any Language or Modality: information retrieval, image retrieval, question answering, search interfaces and design, infrastructures, etc.
  • Analytics for Information Retrieval: theoretical and practical results in analytics specifically targeted at information access, data analysis, data enrichment, etc.
  • User Studies: studies based either on lab experiments or on crowdsourcing.
  • Deep analysis of past results/runs, both statistical and fine-grained.
  • Evaluation Initiatives: conclusions, lessons learned, impact and projection of any evaluation initiative after completing its cycle.
  • Evaluation: methodologies, metrics, statistical and analytical tools, component-based evaluation, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc.
  • Technology Transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
  • Interactive Information Retrieval Evaluation: the interactive evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive evaluation methods, simulation of interaction, etc.
  • Specific Application Domains: Information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, expert search, health information, legal documents, patents, news, books, plants, etc.
  • New Data Collections: presentation of new data collections with potentially high impact on future research, specific collections from companies or labs, multilingual collections.
  • Work on data from rare languages, and on collaborative and social data.

Committee

Conference Chairs:

Patrice Bellot (Aix-Marseille Univ., France)
Chiraz Trabelsi (Univ. of Tunis, Tunisia)

Program Chairs:

Josiane Mothe (Univ. de Toulouse, France)
Fionn Murtagh (Univ. of Huddersfield, UK)

Evaluation Lab Chairs:

Jian-Yun Nie (Univ. de Montréal, Canada)
Laure Soulier (LIP6, UPMC, France)

Proceedings Chairs:

Linda Cappellato (Univ. of Padua, Italy)
Nicola Ferro (Univ. of Padua, Italy)