Performance differences with the same ontology
With the same ontology, but with the axioms defined in a different order, the processing time can differ significantly.
I am attaching two files for the Zebra puzzle: the FaCT++ dumps in LISP format. Using the FaCT++ utility, I get the following output for each file.
For t-fast.txt
FaCT++.Kernel: Reasoner for the SROIQ(D) Description Logic, 64-bit
Copyright (C) Dmitry Tsarkov, 2002-2017. Version 1.7.0-SNAPSHOT (01 January 2017)
Loading KB... done in 0 seconds
Preprocessing... done in 0.02 seconds
Consistency checking... done in 0.07 seconds
Processing query... 100% done in 16.44 seconds
Working time = 16.55 seconds
For t-slow.txt
FaCT++.Kernel: Reasoner for the SROIQ(D) Description Logic, 64-bit
Copyright (C) Dmitry Tsarkov, 2002-2017. Version 1.7.0-SNAPSHOT (01 January 2017)
Loading KB... done in 0 seconds
Preprocessing... done in 0.01 seconds
Consistency checking... done in 1.26 seconds
Processing query... 100% done in 442.71 seconds
Working time = 444 seconds
If I sort the files, the time for both of them is the same (about twice as slow as t-fast.txt, but much faster than t-slow.txt):
FaCT++.Kernel: Reasoner for the SROIQ(D) Description Logic, 64-bit
Copyright (C) Dmitry Tsarkov, 2002-2017. Version 1.7.0-SNAPSHOT (01 January 2017)
Loading KB... done in 0 seconds
Preprocessing... done in 0.02 seconds
Consistency checking... done in 0.1 seconds
Processing query... 100% done in 33.91 seconds
Working time = 34.05 seconds
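For reference, "sorting" here means a plain lexicographic sort of the dump's lines. A minimal sketch, assuming one axiom per line in the LISP dump (`sort_dump` is a hypothetical helper, not part of FaCT++):

```python
def sort_dump(text: str) -> str:
    """Return a FaCT++ LISP dump with its lines in lexicographic order,
    so two dumps of the same ontology can be compared line by line."""
    return "\n".join(sorted(text.splitlines()))

# Tiny made-up fragment (not from the attached files):
dump = "(related b p c)\n(related a p b)"
print(sort_dump(dump))  # "(related a p b)" now comes first
```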
Also, after sorting the files, the only differences between them are the following (which shows that the files represent the same ontology):
@@ -90,14 +90,14 @@
(related ub1bL156C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn http://folk.uio.no/martige/what/2012/04/22/zebra#House3)
(related ub1bL163C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn ub1bL164C13)
(related ub1bL163C1 http://folk.uio.no/martige/what/2012/04/22/zebra#smokes http://folk.uio.no/martige/what/2012/04/22/zebra#Chesterfield)
-(related ub1bL164C13 http://folk.uio.no/martige/what/2012/04/22/zebra#isNextTo fbcfb40de1c1e4fc9b6e7db04d3f8da10b66)
+(related ub1bL164C13 http://folk.uio.no/martige/what/2012/04/22/zebra#isNextTo f6e9e00cc65594283ad381eeefefe78feb66)
(related ub1bL165C1 http://folk.uio.no/martige/what/2012/04/22/zebra#hasPet http://folk.uio.no/martige/what/2012/04/22/zebra#Fox)
-(related ub1bL165C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn fbcfb40de1c1e4fc9b6e7db04d3f8da10b66)
+(related ub1bL165C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn f6e9e00cc65594283ad381eeefefe78feb66)
(related ub1bL169C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn ub1bL170C13)
(related ub1bL169C1 http://folk.uio.no/martige/what/2012/04/22/zebra#smokes http://folk.uio.no/martige/what/2012/04/22/zebra#Kools)
-(related ub1bL170C13 http://folk.uio.no/martige/what/2012/04/22/zebra#isNextTo fbcfb40de1c1e4fc9b6e7db04d3f8da10b67)
+(related ub1bL170C13 http://folk.uio.no/martige/what/2012/04/22/zebra#isNextTo f6e9e00cc65594283ad381eeefefe78feb67)
(related ub1bL171C1 http://folk.uio.no/martige/what/2012/04/22/zebra#hasPet http://folk.uio.no/martige/what/2012/04/22/zebra#Horse)
-(related ub1bL171C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn fbcfb40de1c1e4fc9b6e7db04d3f8da10b67)
+(related ub1bL171C1 http://folk.uio.no/martige/what/2012/04/22/zebra#livesIn f6e9e00cc65594283ad381eeefefe78feb67)
(related ub1bL175C1 http://folk.uio.no/martige/what/2012/04/22/zebra#drinks http://folk.uio.no/martige/what/2012/04/22/zebra#OrangeJuice)
(related ub1bL175C1 http://folk.uio.no/martige/what/2012/04/22/zebra#smokes http://folk.uio.no/martige/what/2012/04/22/zebra#LuckyStrike)
(related ub1bL182C21 http://folk.uio.no/martige/what/2012/04/22/zebra#isNextTo ub1bL182C33)
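The remaining differences are only in the auto-generated blank-node identifiers (the long hex names). A sketch of checking this mechanically, under the assumption that these identifiers always look like the ones in the diff above (an `f` followed by a long run of hex digits; the regex and `normalize` helper are illustrative, not part of FaCT++):

```python
import re

# Assumption: generated blank-node ids match this shape, as in the diff above.
HEX_ID = re.compile(r"\bf[0-9a-f]{30,}\b")

def normalize(text: str) -> str:
    """Replace each generated hex identifier with a placeholder numbered
    by first appearance, so dumps differing only in these names compare equal."""
    mapping = {}
    def repl(m):
        return mapping.setdefault(m.group(0), f"_bnode{len(mapping)}")
    return HEX_ID.sub(repl, text)

a = "(related x isNextTo fbcfb40de1c1e4fc9b6e7db04d3f8da10b66)"
b = "(related x isNextTo f6e9e00cc65594283ad381eeefefe78feb66)"
print(normalize(a) == normalize(b))  # True
```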
Comments (5)
-
I confirm this issue with independent tests done with owl-cpp on both Zebra ontologies (by Martin Skjæveland and Denis Ponomaryov).
Dear @dtsarkov, in your IJCAR 2006 paper with Ian Horrocks you mention some ordering heuristics implemented in FaCT++. Do they have something to do with this issue? If so, can we control them using the settings orSortSat and orSortSub?
-
So yes, the final answer is: this is by design (a feature, not a bug).
I had some time to read Tsarkov and Horrocks, "Ordering heuristics for description logic reasoning", IJCAI 2005, pp. 609-614:
Heuristics can be used to try to find a “good” order in which to apply inference rules (we will call these rule-ordering heuristics) and, for non-deterministic rules, the order in which to explore the different expansion choices offered by rule applications (we will call these expansion-ordering heuristics). The aim is to choose an order that leads rapidly to the discovery of a model (in case the input is satisfiable) or to a proof that no model exists (in case the input is unsatisfiable).
and
Choosing a good heuristic, or at least not choosing a bad one, is very important: an inappropriate heuristic may not simply fail to improve performance, it may seriously degrade it. Even more problematical is, given a range of possible heuristics, choosing the best one to use for a given (type of) problem.
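As a toy illustration of the expansion-ordering effect described in the quotes (illustrative Python, not FaCT++ code): a backtracking search can visit vastly different numbers of nodes depending only on which branch it tries first.

```python
def search(n, order, i=0, assignment=(), counter=None):
    """Backtracking search for an assignment of n binary variables
    where every variable must be 1. Returns (found, nodes_visited)."""
    if counter is None:
        counter = [0]
    counter[0] += 1
    if i == n:
        return all(assignment), counter[0]
    for v in order:  # expansion order: which value to try first
        ok, _ = search(n, order, i + 1, assignment + (v,), counter)
        if ok:
            return True, counter[0]
    return False, counter[0]

ok1, visited_good = search(10, order=(1, 0))  # good ordering: tries 1 first
ok0, visited_bad = search(10, order=(0, 1))   # bad ordering: tries 0 first
print(visited_good, visited_bad)  # far fewer nodes with the good ordering
```

Both orderings find the same model, but the bad one explores the entire search tree first, which mirrors how a reordered (but logically identical) ontology can steer the reasoner into a much larger search.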
I think the issue may be closed.
-