Living Labs for Academic Search
- Update 2020-09-24: All slides from the talks of the workshop are compiled in a Google Drive folder. Thanks for participating!
- Update 2020-09-04: Second invited talk: Joeran Beel - Lessons Learned from Operating Mr. DLib: A Research-Paper Recommender-System as-a-service with Living Lab
- Update 2020-08-26: Please, register for CLEF 2020.
- Update 2020-05-01: First invited talk: Frank Hopfgartner - Lessons learned from the NewsREEL Living Lab
- Update 2020-04-11: CLEF 2020 will be an online-only conference, and therefore LiLAS will be, too. The submission deadlines have been changed to allow more time to prepare everything; see the Dates section below.
Schedule for 23 September 2020
| Time | Topic |
| --- | --- |
| 15:00 | Introduction and Welcome (Philipp Schaer, THK) |
| 15:15 | Invited talk I: Lessons learned from the NewsREEL Living Lab (Frank Hopfgartner, University of Sheffield) |
| 15:45 | STELLA - Evaluation Infrastructure for LiLAS (Timo Breuer, THK) |
| 16:00 | Task 1: Academic Search in the Life Sciences (Leyla Jael Garcia Castro, ZB MED) |
| 16:15 | Task 2: Research Datasets in the Social Sciences (Johann Schaible, GESIS) |
| 16:30 | BREAK |
| 17:00 | Invited talk II: Lessons Learned from Operating Mr. DLib: A Research-Paper Recommender-System as-a-service with Living Lab (Jöran Beel, Trinity College Dublin) |
| 17:30 | Open Discussion/Breakout on LiLAS 2021 |
| 18:00 | Sum up, Outlook and Farewell |
Vision
In this workshop lab, we would like to bring together IR researchers interested in the online evaluation of academic search systems. The goal is to foster knowledge on improving the search for academic resources like literature (ranging from short bibliographic records to full-text papers), research data, and the interlinking between these resources. The online evaluation approach employed in this workshop allows a direct connection to existing academic search systems from the Life Sciences and the Social Sciences.
The motivation behind this lab is to
- bring together interested researchers
- advertise the online evaluation campaign idea, and
- develop ideas, best practices, and guidelines for a full online evaluation campaign at CLEF 2021.
We see academic search as a broad term for scientific and especially domain-specific retrieval tasks, comprising both document and dataset retrieval. As large platforms like Google Scholar (or Google Dataset Search) are not open to public research and do not offer any domain-specific features, we focus on mid-size scientific search systems that offer domain-specific resources and use cases. This focus allows the use of many specific information types, such as bibliographic metadata, usage data, download rates, and citations, to develop and evaluate innovative search applications. Further details on current evaluation infrastructures in academic search can be found in Schaible et al. (2020).
We would like to move beyond the traditional offline evaluation setup and bring evaluation techniques from industry and practice into the academic realm. Utilizing online evaluations, which take the actual user into account, is a step forward towards improving the evaluation situation. The details of the online experiments and the metrics are to be discussed at the workshop, but we currently favor a setup based on a Docker container infrastructure, briefly described in Breuer et al. (2019), that incorporates usage feedback such as click-through rates.
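To illustrate how such usage feedback could be aggregated, here is a minimal Python sketch that computes click-through rates per experimental system from a usage-feedback log. The log format and field names are our own illustrative assumptions, not the actual STELLA specification.

```python
# Minimal sketch: click-through rate (CTR) per experimental system from
# usage-feedback logs. The log schema below is an illustrative assumption.
from collections import defaultdict

def click_through_rates(feedback_log):
    """feedback_log: iterable of dicts like
    {"system": "exp-ranker-1", "impressions": 120, "clicks": 17}."""
    impressions = defaultdict(int)
    clicks = defaultdict(int)
    for entry in feedback_log:
        impressions[entry["system"]] += entry["impressions"]
        clicks[entry["system"]] += entry["clicks"]
    return {system: clicks[system] / impressions[system]
            for system in impressions if impressions[system] > 0}

log = [
    {"system": "baseline", "impressions": 200, "clicks": 18},
    {"system": "exp-ranker-1", "impressions": 200, "clicks": 25},
]
print(click_through_rates(log))  # {'baseline': 0.09, 'exp-ranker-1': 0.125}
```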
Call for Contribution
For our workshop lab, we encourage participants from both academia and industry to submit their work on theory, experimentation, and practice regarding academic search systems and/or living lab evaluations. We strongly support novel ideas on either improving academic search or advancing the use of living labs to evaluate retrieval approaches. Furthermore, we support submissions on lessons learned in both areas, to spark discussion on how their use can be improved. Our call for submissions comprises, but is not limited to, the following:
- design and evaluation of intelligent search and recommendation approaches
- design and evaluation of novel user interfaces aiding users to find scholarly resources
- discussion of offline and online evaluation metrics and methods for measuring a system’s ability to aid users in finding scholarly resources (a common online method is sketched after this list)
- discussion on alleviating the reproducibility issue of evaluation results
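One online evaluation method commonly used in living-lab campaigns such as LL4IR is team-draft interleaving: the results of two rankers are merged into a single list shown to real users, and clicks are credited to the system that contributed the clicked result. The following Python sketch illustrates the idea; the function names and the simple click-credit rule are our own illustrative choices, not a prescribed LiLAS method.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k=10):
    """Team-draft interleaving: in each round, a random draft order decides
    which system picks first; each system then contributes its highest-ranked
    document not already in the interleaved list. The 'team' that contributed
    each document is remembered for later click credit."""
    interleaved, teams, used = [], {}, set()
    while len(interleaved) < k and (set(ranking_a) | set(ranking_b)) - used:
        for ranking, team in random.sample([(ranking_a, "A"), (ranking_b, "B")], 2):
            doc = next((d for d in ranking if d not in used), None)
            if doc is not None:
                interleaved.append(doc)
                teams[doc] = team
                used.add(doc)
            if len(interleaved) >= k:
                break
    return interleaved, teams

def credit_clicks(clicked_docs, teams):
    """Simple credit rule: each click counts for the team that contributed
    the clicked document; the team with more clicks wins the impression."""
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in teams:
            wins[teams[doc]] += 1
    return wins

# Hypothetical usage: ranker A (production) vs. ranker B (experimental)
ranked, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d5"], k=4)
print(ranked, credit_clicks(["d3"], teams))
```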
We solicit two types of contributions:
Research and Lessons Learned papers presenting novel contributions in the area of academic search and/or living lab systems, as well as lessons learned discussing current challenges and possible solutions mitigating these problems. Papers should be up to 8 pages including references.
Open ideas, discussion, and demo papers describing ideas that are not yet in the scope of a research contribution but illustrate an aspect of academic search and living lab systems that should be taken into account in the future. Papers should be up to 4 pages including references.
Submission Guidelines
Submissions must be in PDF format, following the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer’s Author Instructions. Authors should use Springer’s proceedings templates, either for LaTeX or for Word, and are encouraged to include their ORCIDs in their papers.
All submissions must be written in English and should be submitted electronically through the conference submission system.
Dates
- 26 April 2020: Registration for the lab (no official CLEF registration is needed to join LiLAS in 2020)
- 17 July 2020: Submission of participant papers
- 14 August 2020: Notification of acceptance of participant papers
- 28 August 2020: Camera-ready copies of participant papers
- Until 19 September 2020: Please register for CLEF 2020.
For further details, please refer to the CLEF 2020 schedule.
Organization
We will have a half-day workshop that is split into two parts.
The first part will consist of an overview presentation by the organizers, invited talks (e.g., from the organizers of the former CLEF labs LL4IR and NewsREEL, presenting lessons learned from previous campaigns), and lightning talks from early adopters who implemented first prototypes based on the data and APIs made available, or who describe an interesting use case or idea that could lead to a task in 2021. We invite workshop participants to submit short technical or position papers as the basis for these lightning talks.
The second half will consist of break-out sessions inspired by the talks of the first half, which might focus on topics like datasets, technical setups, evaluation metrics, interfaces, or task design. The break-out groups will organize themselves using collaborative text editors such as Google Docs, so that the discussions and ideas can be documented afterwards.
The workshop will end with some short presentations of the results of the break-out sessions.
LiLAS 2020 Chairs
- Philipp Schaer, TH Köln, Germany
- Johann Schaible, GESIS, Germany
- Leyla Jael Garcia-Castro, ZB MED, Germany
Program Committee
- Krisztian Balog, University of Stavanger, Norway
- Joeran Beel, Trinity College Dublin, Ireland
- Birger Larsen, Aalborg University, Denmark
- Vivien Petras, Humboldt University, Germany
- Ansgar Scherp, Ulm University, Germany
- Philipp Mayr, GESIS, Germany
- Tommaso di Noia, Politecnico di Bari, Italy
Contact: lilas@stella-project.org
LiLAS is part of CLEF 2020.