Abstract: In 2014, NeurIPS received 1,678 paper submissions; by 2022 this number had skyrocketed to 10,411, putting enormous strain on the peer review process. In this talk, we address this challenge by considering the following scenario: Alice submits a large number of papers to a machine learning conference and knows the ground-truth quality of her papers. Given noisy ratings provided by independent reviewers, can Bob obtain accurate estimates of the ground-truth quality of the papers by asking Alice a question about the ground truth? First, assuming Alice answers truthfully whenever doing so maximizes her payoff, defined as an additive convex utility over all her papers, we show that the question must be formulated as pairwise comparisons between her papers. Moreover, if Alice is asked to provide a ranking of her papers, the most fine-grained question composed of pairwise comparisons, we prove that she will be truth-telling. By incorporating the ground-truth ranking, we show that Bob can obtain an estimator achieving the optimal squared error in certain regimes among all possible truthful elicitation schemes. Moreover, the estimated ratings are substantially more accurate than the raw ratings when the number of papers is large and the raw ratings are highly noisy. Finally, we conclude the talk with an experiment deploying this scoring mechanism at ICML 2023. This talk is based on arXiv:2110.14802, arXiv:2206.08149, and arXiv:2304.11160.
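To make the estimation step concrete, the following is a minimal sketch (not the speaker's exact method) of one natural way Bob could combine noisy scores with Alice's reported ranking: project the raw scores, listed in Alice's ranked order, onto the set of sequences consistent with that order via isotonic regression. The pool-adjacent-violators routine below and the example scores are illustrative assumptions for this sketch.

```python
def isotonic_project(scores):
    # Pool Adjacent Violators: least-squares projection of `scores`
    # onto nondecreasing sequences.
    sums, counts = [], []
    for v in scores:
        s, c = float(v), 1
        # Merge adjacent blocks while their averages violate monotonicity.
        while sums and sums[-1] / counts[-1] > s / c:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    out = []
    for s, c in zip(sums, counts):
        out.extend([s / c] * c)  # each block is replaced by its average
    return out

# Papers listed in Alice's reported order, worst to best; the noisy
# raw review scores violate that order in two places.
raw = [6.1, 5.2, 7.8, 7.0]
adjusted = isotonic_project(raw)  # [5.65, 5.65, 7.4, 7.4]
```

The adjusted scores respect the reported ranking while staying as close as possible (in squared error) to the raw reviewer scores, which is the intuition for why a truthfully elicited ranking can denoise the ratings.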
About the Speaker:
Weijie Su is an Associate Professor in the Department of Statistics and Data Science at the Wharton School and the Department of Computer and Information Science at the University of Pennsylvania. Before joining Penn, he received his Ph.D. from Stanford University in 2016 and his bachelor's degree from Peking University in 2011. His research interests span privacy-preserving data analysis, optimization, high-dimensional statistics, and deep learning theory. He received the Stanford Theodore Anderson Dissertation Award in 2016, an NSF CAREER Award in 2019, an Alfred Sloan Research Fellowship in 2020, and, in 2022, the SIAM Early Career Prize in Data Science and the IMS Peter Gavin Hall Prize.
Meeting Link: https://meeting.tencent.com/dm/u1VR1i3Qzqt5
Tencent Meeting: 674-868-332