In this chapter we have introduced and investigated conceptual language models, and we have shown that knowledge captured using concepts from a concept language can be used effectively to improve full-text, ad hoc retrieval. In our method, the original textual query is first translated into a conceptual query model and, by means of generative concept models, this conceptual query model is then used to update the textual query model. The motivation behind this dual translation is that an explicit conceptual representation of the information need can be used to derive related terms that are less dependent on the original query text. In both translation steps we have applied an EM algorithm to improve model estimation.
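In general form, and using notation of our own for illustration (not necessarily the chapter's), the second translation step amounts to

    P(t | \hat{\theta}_Q) = \sum_{c \in C} P(t | c) \, P(c | \theta_Q),

where P(c | \theta_Q) is the conceptual query model, P(t | c) is the generative concept model of concept c, and the resulting expansion model is combined with the original query model before retrieval; the interpolation and the symbol names are assumptions made here for the sake of exposition.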
In this chapter we have addressed RQ 2 and its subquestions through an extensive set of experiments on five test collections from two domains.
We have used the EM algorithm to re-estimate textual and conceptual document models. These models are used both when determining a conceptual query model based on pseudo relevant documents and when determining the translation probabilities from concepts to text. This re-estimation step is essential for good performance, since it ensures that the language models only generate content-bearing terms. Moreover, since the resulting terms and concepts are more specific than without EM-based re-estimation, we believe they would also be useful as suggestions to present to a user. We find that, although no single step of our method yields results that differ significantly from the others, the full model significantly outperforms both a standard language modeling and a relevance modeling approach.
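The kind of EM-based re-estimation described here can be illustrated with a minimal sketch of a parsimonious two-component mixture, in which a document model competes with a fixed background model; the function name, the mixing weight lam, and the toy data below are our own assumptions for illustration, not the chapter's exact estimator.

from collections import Counter

def parsimonious_model(doc_terms, background, lam=0.5, iters=20):
    # EM re-estimation of a document language model that pushes probability
    # mass towards content-bearing terms and away from general (background) terms.
    counts = Counter(doc_terms)
    total = sum(counts.values())
    p_doc = {t: n / total for t, n in counts.items()}  # maximum-likelihood initialisation
    for _ in range(iters):
        # E-step: expected number of occurrences of t generated by the
        # document component rather than the background component.
        expected = {
            t: n * (lam * p_doc[t])
               / (lam * p_doc[t] + (1 - lam) * background.get(t, 1e-9))
            for t, n in counts.items()
        }
        # M-step: renormalise the expected counts into a probability distribution.
        norm = sum(expected.values())
        p_doc = {t: e / norm for t, e in expected.items()}
    return p_doc

# Toy usage (assumed data): the common term "the" is largely explained by the
# background model, so the re-estimated model concentrates on the content terms.
background = {"the": 0.6, "retrieval": 0.2, "concept": 0.2}
print(parsimonious_model(["the", "the", "the", "concept", "concept", "retrieval"], background))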
To estimate a conceptual query model we propose a method that looks at the top-ranked documents in an initially retrieved set. In order to assess the effectiveness of this step, we compare the results of using these concepts with a standard language modeling approach. Moreover, since this method relies on pseudo relevant documents from an initial retrieval run, we also compare the results of our conceptual query models to another, established pseudo relevance feedback algorithm based on relevance models. We asked whether conceptual query models improve retrieval effectiveness over both of these baselines.
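One way to estimate such a conceptual query model from the set R of top-ranked documents is, in our own illustrative notation,

    P(c | \theta_Q^C) \propto \sum_{D \in R} P(c | D) \, P(Q | D),

where P(c | D) is the (re-estimated) conceptual document model and P(Q | D) is the query likelihood of feedback document D; this relevance-model-style weighting is an assumption on our part rather than the chapter's exact estimator.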
We have found that the conceptual language models yield significant improvements over a query likelihood baseline on all the evaluated measures. When compared to relevance models estimated on the same pseudo relevant documents, conceptual language models show a significant improvement in terms of MAP on two test collections, as well as a significant increase in recall on two other test collections. On the remaining measures, they yield improvements similar to those of relevance models.
As to the portability of our models, the usefulness of the proposed approach has been evaluated in two domains, social sciences and genomics, each with different types of documents and its own concept vocabulary. Despite these large differences, the concept-based feedback shows consistent improvements. It is interesting to note that, while a thesaurus might be limited in representing specific information needs, it can still be used to improve retrieval effectiveness. The MeSH thesaurus can be used to improve genomics information retrieval despite its general biomedical coverage. The annotations of the CLEF collections seem to fit the information needs better, resulting in even better retrieval performance in the social sciences domain.
We have observed a significant improvement in terms of recall on all collections, which is in line with results obtained from relevance feedback methods in general. On the TREC collections, however, we have also observed a significant increase in early precision. As such, our method is both a recall-enhancing and a precision-enhancing device.
In sum, we have shown that conceptual language models (using the document annotations as a pivot language) can improve text-based retrieval, both with and without conventional pseudo relevance feedback. We have also observed that using the document annotations alone for expansion does not significantly improve retrieval results. These two findings confirm conclusions from earlier work: Srinivasan [302], for example, also concludes that using only MeSH terms for expansion is not effective. Hersh et al. [127] likewise find that mapping queries to a knowledge structure (the UMLS Metathesaurus in their case, of which MeSH is a part) during indexing does not aid retrieval effectiveness. Yang and Chute [349], on the other hand, do find improvements when using the same knowledge structure. In more recent work, Liu [190] performed a user study comparing users' interaction with a query reformulation interface for biomedical abstracts, with and without the associated MeSH terms. He finds that MeSH terms are more useful to domain experts than to search experts for obtaining early precision. As to the reason for this, he speculates that non-experts lack sufficient knowledge of the domain to understand and, therefore, make use of the MeSH terms. Using conceptual query models, we are able to move the burden of locating appropriate conceptual annotations from the user to the system, without compromising retrieval performance. In the next chapter we use machine learning to obtain a different way of automatically identifying relevant concepts for a given query.
Besides using conceptual query models to improve retrieval as we did in this chapter, the generated concepts may also be used as conceptual suggestions or feedback to the user. Here we have obtained these models using pseudo relevance feedback techniques; in the next chapter we consider the task of mapping queries to concepts in a different context and without annotated documents. Furthermore, the queries we use there are general domain queries and, hence, we map them to a more general knowledge structure, i.e., DBpedia.
In Chapter 7 we take the mapping method presented in the next chapter and use the linked concepts for each query to update the query model. To this end, we apply several of the intuitions behind the conceptual language models presented and evaluated in this chapter.