Comparing BERT with an intent-based question answering setup for open-ended questions in the museum domain

Abstract:

BERT-based models achieve state-of-the-art performance on factoid question answering tasks. In this work, we investigate whether a pre-trained BERT model can also perform well on open-ended questions. We set up an online experiment, from which we collected 111 user-generated open-ended questions. These questions were passed to a pre-trained BERT QA model and to a dedicated intent-recognition-based module. We found that the simple intent-based module answered correctly around 25% more often than the pre-trained BERT model, indicating that open-ended questions still require solutions different from those used for factoid questions.
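
As a point of reference for the setup described in the abstract, the following is a minimal sketch, assuming the Hugging Face transformers library, of how an open-ended question could be passed to a pre-trained extractive BERT QA model. The model name, context passage, and example question are illustrative assumptions; the paper does not specify its tooling, and the intent-recognition module is not shown.

    # Minimal sketch (assumption: Hugging Face transformers; the paper does not
    # name its tooling). An extractive BERT QA model selects an answer span from
    # a given context passage, which is one reason open-ended questions are hard
    # for it: there is often no single extractable span.
    from transformers import pipeline

    # Hypothetical choice of a BERT model fine-tuned on SQuAD.
    qa = pipeline(
        "question-answering",
        model="bert-large-uncased-whole-word-masking-finetuned-squad",
    )

    # Illustrative museum-domain context and an open-ended user question.
    context = (
        "The museum's Egyptian collection was assembled in the nineteenth "
        "century and includes sarcophagi, papyri, and everyday objects."
    )
    question = "What is worth seeing here if I only have one hour?"

    result = qa(question=question, context=context)
    print(result["answer"], result["score"])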


Year: 2021
In session: Sprachdialog
Pages: 247–253