In Section 2.3.2 we have introduced core relevance feedback models in the language modeling approach to information retrieval (IR). In Eq. 2.14 we have indicated a means by which to emphasize the importance of each individual feedback document, P(D|R). In this section, we turn to different ways of estimating this relative importance. When we know (or assume) that a given set of documents, R, is relevant to a query, we posit that the documents therein that are more similar to R as a whole are more topically relevant and should thus receive a higher probability of being picked. We thus propose two models that base the estimate of P(D|R) on the divergence between each document D and the set of relevant documents R. They are introduced in this section.
The first model rewards documents that contain terms that are frequent in the set of feedback documents. Using this model, we determine P(D|R) by computing the generative probability of D given θ_R, i.e., the probability that the set of relevant documents generated the terms in the current document, similar to the query likelihood approach (cf. Eq. 2.3). More formally:

    P(D|R) ∝ ∏_{t ∈ D} P(t|θ_R)^{n(t,D)},

where n(t,D) denotes the number of times term t occurs in D.
Here, P(t|θ_R) is determined using Eq. 2.13; below, we refer to this model as MLgen.
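To illustrate how MLgen could be computed in practice, the following is a minimal sketch rather than the implementation used here: the function and argument names, the smoothing weight lam, the small floor value, and the use of pooled counts over R to stand in for the Eq. 2.13 estimate of θ_R are all assumptions made for the example.

    import math
    from collections import Counter

    def mlgen_weights(rel_doc_counts, collection_prob, lam=0.5):
        """Sketch of MLgen: P(D|R) is proportional to the probability that the
        relevance model theta_R generated the terms of each document D in R.

        rel_doc_counts: list of Counters, one per relevant document, term -> n(t, D).
        collection_prob: dict mapping term -> P(t|C), the collection model.
        lam: interpolation weight for the assumed Eq. 2.13-style smoothing.
        """
        # Surrogate for theta_R: pooled counts over R, interpolated with P(t|C).
        pooled = Counter()
        for counts in rel_doc_counts:
            pooled.update(counts)
        total = sum(pooled.values())

        def p_t_theta_r(term):
            ml = pooled.get(term, 0) / total
            return lam * ml + (1.0 - lam) * collection_prob.get(term, 1e-12)

        # log P(D | theta_R) = sum_t n(t, D) * log P(t | theta_R)
        log_scores = [
            sum(n * math.log(p_t_theta_r(t)) for t, n in counts.items())
            for counts in rel_doc_counts
        ]

        # Normalize across the feedback documents to obtain P(D|R).
        m = max(log_scores)
        weights = [math.exp(s - m) for s in log_scores]
        z = sum(weights)
        return [w / z for w in weights]

Normalizing via the shifted exponentials (log-sum-exp) merely avoids numerical underflow for long documents; it does not change the model.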
The second method measures the divergence between θ_R and each document D ∈ R by determining the log-likelihood ratio, normalized by the collection C. Interpreted loosely, this measure indicates the average surprise of observing document D when we have R in mind, normalized using a background collection, C. That is, terms that are “well-explained” by the collection (i.e., that have a high frequency in the collection) do not contribute as much to the comparison as terms that are not. It quantifies how much better one language model is than another at modeling an observed text, in comparison with modeling by a collection model. More formally:

    NLLR(D, R) = ∑_{t ∈ V} P(t|θ_R) log ( P(t|θ_D) / P(t|C) ),     (4.2)

where V denotes the vocabulary.
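Note that, by splitting the logarithm in Eq. 4.2, the measure can equivalently be written as a difference of two KL divergences:

    NLLR(D, R) = KL(θ_R ‖ C) − KL(θ_R ‖ θ_D).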
The measure has the attractive property that it is high for documents for which P(D|θ_R) is high and P(D|C) is low. So, in order to receive a high score, documents should contain specific terminology, i.e., they should be dissimilar from the collection model but similar to the topical model of relevance. Since we do not know the actual parameters of θ_R by which we could calculate this, we use the set R as a surrogate and linearly interpolate its estimate with the collection model (cf. Eq. 2.13). This is similar to the intuitions behind MBF (cf. Eq. 2.16):

    P(t|θ_R) = λ · P(t|R) + (1 − λ) · P(t|C),

where P(t|R) is estimated from the terms in R.
This interpolation also ensures that zero-frequency issues are avoided and that the sum in Eq. 4.2 is over the same event space for all language models involved. Then, in order to use this discriminative measure as a probability, we define a normalization factor Z, such that the resulting weights P(D|R) = Z · NLLR(D, R) sum to one over R.
Finally, by putting Eq. 2.15 and Eq. 4.2 together, we obtain an estimate of our expanded query model:

    P(t|θ_Q) = ∑_{D ∈ R} P(D|R) · P(t|θ_D) = Z ∑_{D ∈ R} NLLR(D, R) · P(t|θ_D).
This model, to which we refer as NLLR, effectively determines the query model based on information from each individual relevant document and the most representative sample we have of θ_R, namely R.
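For concreteness, a minimal sketch of this estimation procedure follows; the function and argument names are hypothetical, the inputs are assumed to be smoothed term-to-probability dictionaries over a shared vocabulary, and the small floor value only guards against missing entries.

    import math

    def nllr_query_model(doc_models, rel_model, collection_prob):
        """Sketch of the NLLR query model: weight each feedback document by the
        normalized log-likelihood ratio NLLR(D, R) and mix the document models
        into an expanded query model P(t | theta_Q).

        doc_models: list of dicts, each mapping term -> P(t|theta_D) (smoothed).
        rel_model: dict mapping term -> P(t|theta_R) (smoothed, cf. Eq. 2.13).
        collection_prob: dict mapping term -> P(t|C).
        """
        # NLLR(D, R) = sum_t P(t|theta_R) * log( P(t|theta_D) / P(t|C) )
        def nllr(doc_model):
            return sum(
                p_r * math.log(doc_model.get(t, 1e-12) / collection_prob.get(t, 1e-12))
                for t, p_r in rel_model.items()
            )

        scores = [nllr(dm) for dm in doc_models]

        # Turn the scores into P(D|R) using the normalization factor Z; this
        # assumes all scores are positive, i.e., each D in R explains theta_R
        # better than the collection does.
        z = sum(scores)
        weights = [s / z for s in scores]

        # Expanded query model: P(t|theta_Q) = sum_D P(D|R) * P(t|theta_D).
        query_model = {}
        for w, dm in zip(weights, doc_models):
            for t, p in dm.items():
                query_model[t] = query_model.get(t, 0.0) + w * p
        return weights, query_model

Because the document and relevance models are assumed to be interpolated with the collection model (cf. Eq. 2.13), the log ratio is defined for every term in the vocabulary.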
As an aside, other ways of estimating P(D|R) have been proposed. Examples include simply assuming a uniform distribution, using the retrieval score of a document (or the inverse thereof), or using information from clustered documents [24, 170]. One could also apply machine learning to select the documents to use for relevance feedback, and use the machine learner’s confidence level as a substitute for P(D|R) [124].
The surface form of NLLR seems reminiscent of a model introduced in [60]. Carpineto et al. [60] propose to use the KL-divergence between the set of feedback documents and the collection C to select and weight expansion terms for Rocchio feedback [267]. Their model is also highly similar to the query clarity score, which uses this measure to predict the difficulty of a query [84]. Besides the fact that we do not use a VSM, Carpineto et al. also ignore the individual document models by assuming independence between the relevant documents, similar to MLE.
Ponte’s [247] log ratio method is also related to NLLR. He uses the log of the ratio between a term’s probability given each relevant document and its probability given the collection, summed over all the relevant documents. However, Ponte [247] views the query as a set—as opposed to a generative model—and, moreover, he uses the log ratio only for thresholding the terms to be added to the initial query.
MBF is related to NLLR in that it also uses information from both the set of relevant documents and the collection in its estimations, although the estimation method is different. Moreover, NLLR leverages information from each individual relevant document. When we apply the intuition underlying NLLR to MBF, we should let go of the full document independence assumption in MBF and change the M-step (cf. Eq. 2.18) so that the re-estimated term probabilities are normalized per relevant document before they are combined, rather than pooled over the whole set R.
Under the assumption that we exclude the collection estimate, we set the weight of the collection model to zero (cf. Eq. 2.16) and obtain:

    P(t|θ_R) = (1/|R|) ∑_{D ∈ R} P(t|D),
which is a simplified version of NLLR, using a uniform probability of selecting a document. Moreover, this is in fact the same as the relevance model in situation 1 (when the full set of relevant documents is known, cf. Section 2.3.2): RM-0.
The relevance modeling approach to relevance feedback can be viewed as a simplification of MLgen and NLLR, since it assumes that each document has an equal probability of being selected (RM-0) or that this probability depends on the query (RM-1 and RM-2). The latter models explicitly consider the initial query by first gathering evidence from each document for a query term and, next, combining the evidence for all query terms (RM-2), or vice versa (RM-1), as detailed in Section 2.3.2. Using the probability that a document generated the query (as is the case with RM-1 and RM-2) is a much simpler way of leveraging the notion that documents should be weighted according to their “relative” level of relevance, essentially replacing θ_R in the MLgen and NLLR models with only the query Q. And, since the query is quite sparse compared to R, our models avoid overfitting and should thus obtain an improved estimate.
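To make the contrast concrete, under the standard relevance modeling assumptions (Bayes’ rule with a uniform document prior, which is an assumption of this illustration rather than a statement about the models above), the query-dependent document weight amounts to:

    P(D|Q) ∝ ∏_{q ∈ Q} P(q|θ_D)^{n(q,Q)},

i.e., the same generative recipe as MLgen, but with the few observed query terms taking the place of the much richer sample R.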