Keynote Talks
Evaluation of Personalization of Information Interaction in an Era of Information Ubiquity
Nicholas Belkin
Abstract:
This paper is derived from a presentation by Rob Capra and me entitled “IR System Users: New Research Directions”, at SWIRL 3, Lorne, Victoria, Australia, February 2018. A report on SWIRL 3 is in press in the SIGIR Forum, v. 52, no. 1 (June 2018).
In the emerging technological and socio-technical environment, people will be constantly and ubiquitously immersed in a sea of information. The field of information retrieval (IR), and especially that concerned with personalization of interactive IR (IIR), needs to construe people as such, and not merely, or even primarily, as “users” who will stop what they are doing in order to engage with an IR system. In this paper, I identify some characteristics of this rapidly developing environment that are especially salient for how we should conceive of supporting people in their interactions with information, and the issues that arise from this context with respect to the concept of personalization of such support. The resulting understanding of personalization of interaction with information has strong implications for how the effectiveness and usefulness of such support should and could be evaluated; some approaches to such evaluation are proposed, with the aim of initiating discussion in the research community of the problems attendant to them.
Available on Canal U Avignon channel
Evaluation of (personalized) Search Engines and Recommender Systems: two sides of the same coin?
Gabriella Pasi
Abstract:
Since the appearance in 1992 of the article by Nick Belkin and Bruce Croft, “IR and IF: two sides of the same coin”, IF has evolved into a rich and coherent research area, giving rise to one of today’s most widespread technologies, i.e. Recommender Systems. On the IR side, the development of methods for Personalized Search has exploited the key role of users and user-system interactions in the search process, thus bringing the IR and (content-based) IF tasks closer, to some extent. While some techniques originally defined in one of the two fields have influenced the other, little effort has been spent on investigating the purposes, measures and techniques used to evaluate the effectiveness of the two categories of systems. The aim of this talk is to present a comparative analysis of the tasks of evaluating (personalized) Information Retrieval Systems (IRSs) and Recommender Systems (RSs), outlining their similarities and differences. An overview of the evaluation “dimensions” and related measures defined and adopted in the two contexts will be presented, with the aim of offering new perspectives on the evaluation of both (personalized) IRSs and RSs.
Available on Canal U Avignon channel
Abstract:
"Bias" is a trending topic in the context of Artificial Intelligence and Data Science, and for a good reason: more and more decision making processes in our lives (such as getting a loan or being interviewed by a job) are mediated by Machine Learning systems; and both the research community and the society at large are increasingly aware that Machine Learning happens to be as prone to bias as human cognition.
Most research on system bias currently focuses on biases introduced by the algorithms and/or the data the algorithms learn from. But state-of-the-art systems are usually the result of a "natural selection" process in which iterative evaluation, both inside and outside the lab, plays a key role. Consequently, biases in our evaluation methodologies may have a substantial impact on systems. In this talk I will discuss the many sources of bias in current evaluation practices, how they may impact research in the fields of Information Retrieval, Natural Language Processing and Recommender Systems, and what the challenges are in eliminating them.
Available on Canal U Avignon channel