Claudia Hauff: Panel Abstract

Information providers are under scrutiny with respect to the
truthfulness, believability and credibility of the information they
disseminate. Eager to address the credibility issue, the field of
computer science has been prolific in developing automated solutions
for a series of highly relevant tasks, such as fact-checking and bias
estimation. The validity of such (typically machine learning-based)
solutions relies heavily on their training and evaluation datasets;
unfortunately, systematic and methodological errors frequently creep
in during dataset compilation. In addition,
the widespread popularity of benchmarks and the (seemingly) ever
increasing speed at which state-of-the-art baselines are beaten
provide a false sense of achievement: issues of privacy, bias,
accountability, etc. as they appear in the wild remain largely
unsolved.