Personal Notes on SIGIR 2019

Posted by cyq on November 29, 2020

(Done) SIGIR 2019 (excluding openQA-related papers)

  • (Duplicate question detection in CQA) Adaptive Multi-Attention Network Incorporating Answer Information for Duplicate Question Detection

    Motivation: paired answers can provide effective information for duplicate question detection while they may simultaneously introduce noise to the detection.

    Method: We propose an adaptive multi-attention network (AMAN), an effective method integrating external knowledge from answers for duplicate identification and filtering out the noise introduced by paired answers adaptively.

  • Controlling Risk of Web Question Answering

  • (Human behavior) Human Behavior Inspired Machine Reading Comprehension

  • (Human behavior) Teach Machine How to Read: Reading Behavior Inspired Relevance Estimation

    Designs a retrieval model based on six heuristic rules derived from human reading behavior.

  • (Human behavior) Investigating Passage-level Relevance and Its Role in Document-level Relevance Judgment

    How to derive document-level relevance from passage-level relevance.

  • CEDR: Contextualized Embeddings for Document Ranking

    Incorporates the contextual embeddings produced by BERT into existing neural ranking models (KNRM, DRMM, PACRR); see the sketch below.
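
    A minimal sketch (not the paper's code) of the core step as I understand it: build a query-document similarity matrix from contextual token embeddings and feed it to a KNRM-style kernel-pooling layer. The tensor shapes, kernel centers, and the final linear layer are illustrative assumptions.

    ```python
    import torch

    def cosine_sim_matrix(q_emb, d_emb):
        """Cosine similarity between every query token and every document token.

        q_emb: [q_len, dim] contextual embeddings of the query tokens
        d_emb: [d_len, dim] contextual embeddings of the document tokens
        """
        q = torch.nn.functional.normalize(q_emb, dim=-1)
        d = torch.nn.functional.normalize(d_emb, dim=-1)
        return q @ d.t()                                     # [q_len, d_len]

    def knrm_kernel_pooling(sim, mus, sigma=0.1):
        """KNRM-style soft-TF features computed from a similarity matrix."""
        feats = []
        for mu in mus:
            k = torch.exp(-0.5 * ((sim - mu) / sigma) ** 2)  # RBF kernel around mu
            feats.append(torch.log1p(k.sum(dim=1)).sum())    # pool over doc, then query
        return torch.stack(feats)                            # one feature per kernel

    # Toy usage: random tensors stand in for BERT's contextual embeddings.
    q_emb, d_emb = torch.randn(5, 768), torch.randn(80, 768)
    features = knrm_kernel_pooling(cosine_sim_matrix(q_emb, d_emb),
                                   mus=[-0.5, 0.0, 0.5, 0.9, 1.0])
    score = features @ torch.randn(len(features))            # a learned linear layer in practice
    ```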

  • Content-Based Weak Supervision for Ad-Hoc Re-Ranking

    We presented an approach for employing content-based sources of pseudo relevance for training neural IR models.

    We also showed that performance can be boosted using two filtering techniques: one heuristic-based and one that re-purposes a neural ranker.

  • Deeper Text Understanding for IR with Contextual Neural Language Modeling

    Fine-tunes BERT on Bing search log data, then runs reranking experiments on Robust and ClueWeb over the top-100 BoW retrieval results.

    When applying BERT, the document is split into passages, whose scores are then combined with one of three aggregation strategies: BERT-FirstP, BERT-MaxP, or BERT-SumP (see the sketch below).
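
    A minimal sketch of the passage splitting and the three aggregation strategies; the passage length, stride, and the source of the per-passage scores (a fine-tuned BERT relevance classifier) are assumptions for illustration.

    ```python
    def split_into_passages(doc_tokens, passage_len=150, stride=75):
        """Split a long document into overlapping passages (lengths are assumptions)."""
        passages = []
        for start in range(0, max(len(doc_tokens) - stride, 1), stride):
            passages.append(doc_tokens[start:start + passage_len])
        return passages

    def aggregate_passage_scores(passage_scores, mode):
        """Combine per-passage BERT relevance scores into one document score."""
        if mode == "FirstP":   # score of the first passage only
            return passage_scores[0]
        if mode == "MaxP":     # score of the best-matching passage
            return max(passage_scores)
        if mode == "SumP":     # sum of all passage scores
            return sum(passage_scores)
        raise ValueError(f"unknown mode: {mode}")

    # Toy usage: the scores would come from BERT applied to each (query, passage) pair.
    scores = [0.2, 0.7, 0.4]
    print({m: aggregate_passage_scores(scores, m) for m in ("FirstP", "MaxP", "SumP")})
    ```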

  • FAQ Retrieval Using Attentive Matching

    This paper proposes several methods for FAQ retrieval. We compared various possible aggregation methods to effectively represent query, question and answer information, and observed that answers in FAQs can provide valuable and beneficial information for retrieval models, if properly aggregated. This shows that answer information can also be leveraged in FAQ retrieval.

  • FAQ Retrieval using Query-Question Similarity and BERT-Based Query-Answer Relevance

    This paper presented a method for using TSUBAKI-based query-question similarity and BERT-based query-answer relevance in a FAQ retrieval task.

  • History Modeling for Conversational Question Answering

    Proposes a model for conversational QA.

  • Learning More From Less: Towards Strengthening Weak Supervision for Ad-Hoc Retrieval

    Presents two approaches (a noise-aware model and an influence-aware model) to reduce the amount of weak data needed to surpass the performance of the unsupervised method that generates the training data.

  • 【MMN】Multi-level Matching Networks for Text Matching

    Motivation: A major limitation of existing works is that only high level contextualized word representations are utilized to obtain word level matching results without considering other levels of word representations, thus resulting in incorrect matching decisions for cases where two words with different meanings are very close in high level contextualized word representation space.

    Method: instead of making decisions utilizing single level word representations, a multi-level matching network (MMN) is proposed in this paper for text matching, which utilizes multiple levels of word representations to obtain multiple word level matching results for final text level matching decision. (See the sketch below.)
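
    A small sketch of the multi-level idea, not the MMN architecture itself: compute a word-level matching signal at each representation level (e.g., static embeddings and a contextualized encoder output) and combine the signals for the final decision; the matching function and weights are illustrative assumptions.

    ```python
    import torch

    def level_match(q_reps, d_reps):
        """Max-over-document cosine match for one representation level."""
        q = torch.nn.functional.normalize(q_reps, dim=-1)
        d = torch.nn.functional.normalize(d_reps, dim=-1)
        sim = q @ d.t()                       # [q_len, d_len]
        return sim.max(dim=1).values.mean()   # average of each query word's best match

    def multi_level_match(q_levels, d_levels, weights):
        """Combine word-level matching signals from several representation levels."""
        signals = torch.stack([level_match(q, d) for q, d in zip(q_levels, d_levels)])
        return torch.sigmoid((signals * weights).sum())

    # Toy usage: level 0 = static word embeddings, level 1 = contextualized representations.
    q_levels = [torch.randn(4, 300), torch.randn(4, 768)]
    d_levels = [torch.randn(9, 300), torch.randn(9, 768)]
    print(multi_level_match(q_levels, d_levels, weights=torch.tensor([0.4, 0.6])))
    ```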

  • Document Gated Reader for Open Domain Question Answering (when computing each document's similarity to the query, information from the other documents is taken into account; the document similarity is multiplied by the answer span probability and normalized globally; a bootstrapping data generation mechanism is used to alleviate the false-positive problem caused by distant supervision; see the sketch below)
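
    A small sketch of the global normalization step described above: each candidate span's score combines its document's relevance with the span probability, and the normalization runs over the spans of all documents rather than within a single document; the inputs and their scales are illustrative assumptions.

    ```python
    import math

    def global_span_scores(doc_scores, span_logprobs):
        """Combine document relevance with span probabilities, normalized globally.

        doc_scores:    per-document relevance scores (treated as log-scores here)
        span_logprobs: span_logprobs[i][j] is the log-probability of the j-th
                       candidate span inside document i
        """
        joint = [[ds + lp for lp in spans]                        # log P(doc) + log P(span | doc)
                 for ds, spans in zip(doc_scores, span_logprobs)]
        z = sum(math.exp(s) for spans in joint for s in spans)    # global partition function
        return [[math.exp(s) / z for s in spans] for spans in joint]

    # Toy usage: two documents with two candidate spans each.
    print(global_span_scores(doc_scores=[1.2, -0.3],
                             span_logprobs=[[-0.1, -2.0], [-0.5, -1.5]]))
    ```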

  • An axiomatic approach to regularizing neural ranking models

    Perturbs documents according to a set of axioms so that the perturbed text is either more or less relevant than the original, thereby producing additional supervision signals that help train the model. (A sketch of this regularization idea follows.)
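
    A hedged sketch of that regularization idea: a document is perturbed according to an axiom (e.g., one stating the perturbed version should be at least as relevant), and the ranker is penalized whenever its scores violate the axiom-implied ordering; the axiom, margin, and weighting are illustrative assumptions.

    ```python
    import torch

    def axiom_hinge_loss(score_orig, score_perturbed, margin=0.1):
        """Hinge penalty when the perturbed document is not scored as more relevant.

        Assumes the chosen axiom says the perturbation should make the document
        at least as relevant as the original.
        """
        return torch.relu(margin - (score_perturbed - score_orig))

    # Toy usage with stand-in scores; in practice these come from the neural ranker.
    score_orig = torch.tensor(0.42, requires_grad=True)
    score_perturbed = torch.tensor(0.40, requires_grad=True)
    reg = axiom_hinge_loss(score_orig, score_perturbed)
    ranking_loss = torch.tensor(0.0)            # placeholder for the main ranking loss
    (ranking_loss + 0.5 * reg).backward()       # axiom penalty added with a weight
    ```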
