REALInfo-2024: 1st Workshop on Reliable Evaluation of Large Language Models for Factual Information

Website: https://sites.google.com/view/real-info-2024?usp=sharing
Submission link: https://easychair.org/conferences/?conf=realinfo1
Abstract registration deadline: March 31, 2024
Submission deadline: March 31, 2024
Submission Guidelines
We invite the following formats:
- Research papers (4-8 pages)
- Position papers (4 pages)
References and appendices (if applicable) are excluded from this page count, but the total length of the paper, including references, must not exceed eleven pages for research papers and eight pages for position papers.

Submissions must be original and must not have been published previously or be under consideration for publication elsewhere while under review for this workshop. Submissions will be evaluated by the program committee based on the quality of the work and its fit to the workshop themes.

All submissions must be anonymized for double-blind review, and a high-resolution PDF of the paper should be uploaded to the EasyChair submission site before the paper submission deadline. Accepted papers will be published in the Proceedings of the ICWSM Workshops. Please use the AAAI two-column, camera-ready style. All deadlines are 11:59 pm AoE (Anywhere on Earth).
List of Topics
- New evaluation methods and metrics for assessing LLMs' factuality across diverse social contexts, e.g., source and domain of data, language, temporal generalization of information, or hallucination in generated/summarized content.
- Human-centered design approaches to aid LLMs in detecting and mitigating false information, e.g., human experts in the loop and variation in prompting.
- New LLM-powered tools, methods, and applications for improving factuality assessment in social computing and computational social science.
- Biases and blindspots of LLMs in factuality assessment, including approaches for error analysis and model diagnostics.
- Limitations of existing benchmarks for tasks relevant to factuality assessment, e.g., claim verification, fact-checking, stance detection, and misinformation detection.
- Improving dataset and evaluation quality, e.g., avoiding selection bias and addressing subjective judgments and biases in crowd-sourced annotation.
- Comparative evaluation and implications of open-source and commercial LLMs for tasks relevant to factuality assessment.
- How do the reliability and factuality of LLMs impact users (e.g., journalists, software engineers, artists) and communities?
Committees
Program Committee
- Motahhare Eslami, CMU
- Kaustuv Saha, UIUC
- Ian Arawjo, Harvard University
- Gias Uddin, York University
- Farnaz Jahanbakhsh, MIT/UMich
- Christos Christodoulopoulos, Amazon
- Sara Tonelli, Fondazione Bruno Kessler
- Abeer AlDayel, King Saud University
- Sudipta Kar, Amazon Alexa
- Shebuti Rayana, SUNY Old Westbury
- Sunandan Chakraborty, IUPUI
- Sanjay Kairam, Reddit
- Shurui Zhou, University of Toronto
- Weiwei Cheng, Amazon
- Tom Hartvigsen, University of Virginia
- Paolo Rosso, Universitat Politècnica de València
- Chiyu Zhang, University of British Columbia
Organizing Committee
- Sarah Preum
- Björn Ross
- Syed Ishtiaque Ahmed
- Daphne Ippolito
Invited Speakers
- Munmun De Choudhury, Georgia Tech
Publication
REALInfo-2024 proceedings will be published in the Proceedings of the ICWSM Workshops, 2024.
Venue
Buffalo, New York, USA
Contact
All questions about submissions should be emailed to spreum@dartmouth.edu.