Many operations in automatic theorem provers require search in huge collections of terms and clauses. Such operations typically fall into the following scheme. Given a collection S of terms (clauses) and a query term (clause) t, find in S some/all terms l related to t according to a certain retrieval condition. Most interesting retrieval conditions are formulated as the existence of a substitution that relates the query and the retrieved objects in a special way. Here is a list of retrieval conditions frequently used in provers:
term l is unifiable with term t, i.e., there exists a substitution θ such that lθ = tθ
term l is an instance of t, i.e., there exists a substitution θ such that tθ = l
term l is a generalisation of t, i.e., there exists a substitution θ such that lθ = t
clause C θ-subsumes clause D, i.e., there exists a substitution θ such that Cθ is a subset/submultiset of D
clause C is θ-subsumed by D, i.e., there exists a substitution θ such that Dθ is a subset/submultiset of C
More often than not, we are actually interested in finding the appropriate
substitutions explicitly, together with the retrieved terms l,
rather than just in establishing existence of such substitutions.
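As an illustration, the following minimal sketch (in Python) shows the generalisation condition from the list above as one-sided matching that returns the witnessing substitution explicitly. The tuple representation of terms and the names apply_subst, match and retrieve_generalisations are assumptions made for this sketch, not part of any particular prover.

    # Terms are assumed to be variables (capitalised strings), constants
    # (lowercase strings), or compound terms written as (functor, arg1, ..., argn).
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def apply_subst(subst, t):
        # Apply a substitution (a dict from variables to terms) to a term.
        if is_var(t):
            return apply_subst(subst, subst[t]) if t in subst else t
        if isinstance(t, tuple):
            return (t[0],) + tuple(apply_subst(subst, a) for a in t[1:])
        return t

    def match(pattern, target, subst=None):
        # Return a substitution s with apply_subst(s, pattern) == target, or None.
        # This is the test for "pattern is a generalisation of target".
        subst = dict(subst or {})
        pattern = apply_subst(subst, pattern)
        if is_var(pattern):
            subst[pattern] = target
            return subst
        if (isinstance(pattern, tuple) and isinstance(target, tuple)
                and pattern[0] == target[0] and len(pattern) == len(target)):
            for p, q in zip(pattern[1:], target[1:]):
                subst = match(p, q, subst)
                if subst is None:
                    return None
            return subst
        return subst if pattern == target else None

    def retrieve_generalisations(collection, query):
        # Naive linear search: test the retrieval condition on every stored term
        # and yield each generalisation together with its substitution.
        for l in collection:
            s = match(l, query)
            if s is not None:
                yield l, s

For example, with the stored term ("f", "X", "a") and the query ("f", "b", "a"), the sketch yields the substitution {"X": "b"}.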
Very often the sizes of term sets to be searched are large,
the retrieval calls are frequent and the retrieval condition test
is rather complex. In such situations linear search in S, when the retrieval
condition is tested on every term from S, becomes prohibitively costly.
To overcome this problem, special data structures, called indexes, are
designed in order to support fast retrieval. Such data structures,
together with the accompanying algorithms for index maintenance
and retrieval, are called term indexing techniques.
Substitution trees have been reported to outperform path indexing, discrimination tree indexing, and abstraction trees.[2]
A discrimination tree term index stores its information in a trie data structure.[3]
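To make the trie idea concrete, here is a minimal sketch of a discrimination tree, again assuming the tuple representation of terms used in the sketch above; the names DiscriminationTree, preorder, skip_subterm and candidates are illustrative only. Stored terms are flattened into their preorder symbol sequence, with every variable collapsed to a wildcard, so retrieval only yields candidates that must still be confirmed by an actual matching test.

    class DiscriminationTree:
        # A trie over preorder symbol strings; every variable is collapsed to the
        # wildcard ("*", 0), so the index is an imperfect filter.
        def __init__(self):
            self.children = {}   # (symbol, arity) -> child node
            self.terms = []      # stored terms whose symbol string ends here

        def insert(self, term):
            node = self
            for sym in preorder(term):
                node = node.children.setdefault(sym, DiscriminationTree())
            node.terms.append(term)

        def candidates(self, query):
            # Stored terms that may generalise the query: follow either the
            # query's own symbol or a wildcard edge that swallows a whole subterm
            # (a query variable can reach the same child twice, so duplicates
            # are possible).
            results = []
            def walk(node, syms):
                if not syms:
                    results.extend(node.terms)
                    return
                exact = node.children.get(syms[0])
                if exact is not None:
                    walk(exact, syms[1:])
                star = node.children.get(("*", 0))
                if star is not None:
                    walk(star, skip_subterm(syms))
            walk(self, preorder(query))
            return results

    def preorder(term):
        # Flatten a term into (symbol, arity) pairs in preorder; variables
        # (capitalised strings) become the wildcard ("*", 0).
        if isinstance(term, str) and term[:1].isupper():
            return [("*", 0)]
        if isinstance(term, tuple):
            syms = [(term[0], len(term) - 1)]
            for arg in term[1:]:
                syms += preorder(arg)
            return syms
        return [(term, 0)]

    def skip_subterm(syms):
        # Drop exactly one complete subterm from the front of a symbol list:
        # each symbol closes one open argument slot and opens `arity` new ones.
        open_slots, i = 1, 0
        while open_slots > 0:
            open_slots += syms[i][1] - 1
            i += 1
        return syms[i:]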
Indexing techniques used in logic programming
First-argument indexing is the most common strategy: the first argument of a call is used as the index, distinguishing atomic values and the principal functor of compound terms (a minimal sketch follows this list).
Non-first-argument indexing is a variation of first-argument indexing that applies the same or similar techniques to one or more alternative arguments. For instance, if a predicate call uses a variable for the first argument, the system may choose to use the second argument as the index instead.
Multiargument indexing creates a combined index over multiple instantiated arguments if there is not a sufficiently selective single argument index.
Deep indexing is used when multiple clauses use the same principal functor for some argument. It recursively uses the same or similar indexing techniques on the arguments of the compound terms.
Trie indexing uses a prefix tree to find applicable clauses.[4]
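The following sketch illustrates first-argument indexing only, under the same assumed tuple representation of terms; the class name ClauseIndex and its methods are hypothetical and do not correspond to any particular Prolog system's internals.

    class ClauseIndex:
        # Clauses are assumed to be (head, body) pairs whose heads have at least
        # one argument; the head's first argument supplies the index key.
        def __init__(self):
            self.by_key = {}     # first-argument key -> clauses
            self.var_first = []  # clauses whose first head argument is a variable
            self.all_clauses = []

        @staticmethod
        def key_of(arg):
            # Compound terms index on their principal functor and arity,
            # atomic values on themselves; variables yield no key.
            if isinstance(arg, tuple):
                return (arg[0], len(arg) - 1)
            if isinstance(arg, str) and arg[:1].isupper():
                return None
            return arg

        def add_clause(self, head, body):
            clause = (head, body)
            self.all_clauses.append(clause)
            key = self.key_of(head[1])
            if key is None:
                self.var_first.append(clause)
            else:
                self.by_key.setdefault(key, []).append(clause)

        def candidate_clauses(self, goal):
            # If the goal's first argument is unbound, every clause remains a
            # candidate; otherwise only the matching bucket plus clauses with a
            # variable first argument (a real system also preserves source order).
            key = self.key_of(goal[1])
            if key is None:
                return list(self.all_clauses)
            return self.by_key.get(key, []) + self.var_first

With clauses for p(f(a)) and p(b), for instance, a call p(f(X)) only considers the first clause, while a call p(Z) falls back to scanning all clauses.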
References
Colomb, Robert M. (1991). "Enhancing unification in PROLOG through clause indexing". The Journal of Logic Programming. 10: 23–44. doi:10.1016/0743-1066(91)90004-9.
Körner, Philipp; Leuschel, Michael; Barbosa, João; Costa, Vítor Santos; Dahl, Verónica; Hermenegildo, Manuel V.; Morales, Jose F.; Wielemaker, Jan; Diaz, Daniel; Abreu, Salvador; Ciatto, Giovanni (2022). "Fifty Years of Prolog and Beyond". Theory and Practice of Logic Programming. 22 (6): 776–858. doi:10.1017/S1471068422000102. hdl:10174/33387. ISSN 1471-0684. This article incorporates text from this source, which is available under the CC BY 4.0 license.
Further reading
P. Graf, Term Indexing, Lecture Notes in Computer Science 1053, 1996 (slightly outdated overview)
R. Sekar and I.V. Ramakrishnan and A. Voronkov, Term Indexing, in A. Robinson and A. Voronkov, editors, Handbook of Automated Reasoning, volume 2, 2001 (recent overview)
W. W. McCune, Experiments with Discrimination-Tree Indexing and Path Indexing for Term Retrieval, Journal of Automated Reasoning, 9(2), 1992
P. Graf, Substitution Tree Indexing, Proc. of RTA, Lecture Notes in Computer Science 914, 1995