Improve approximate inference algorithm
For problems intractable with exact inference, Lea provides a method that performs approximate inference, namely `estimate_mc`. This works fine but, because it is a rejection sampling algorithm, it has the drawback of being inefficient for calculating conditional probabilities `x.given(e)` when `P(e)` is small: most of the calculation time is spent generating, testing, and dropping random samples that do not verify the condition `e`.
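The inefficiency described above can be illustrated with a generic rejection-sampling sketch (plain Python, not Lea's actual implementation; the dice model and function names below are hypothetical, chosen only to make the waste visible):

```python
import random

def estimate_conditional_rejection(sample_x_e, condition, n_samples=10000):
    """Estimate P(x | e) by rejection sampling: draw joint samples (x, e),
    keep only those where the condition on e holds, and tally x among them.
    Generic sketch of the technique, not Lea's code."""
    counts = {}
    accepted = 0
    for _ in range(n_samples):
        x, e = sample_x_e()
        if not condition(e):
            continue  # sample rejected: the work spent generating it is wasted
        accepted += 1
        counts[x] = counts.get(x, 0) + 1
    dist = {x: c / accepted for x, c in counts.items()}
    return dist, accepted / n_samples

# Hypothetical example: two fair dice; the condition "sum == 12" has
# P(e) = 1/36, so roughly 35 of every 36 samples are generated, tested,
# and dropped -- exactly the pathology described above.
random.seed(0)

def sample_dice():
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    return d1, d1 + d2  # x = first die, e = sum of both dice

dist, acceptance_rate = estimate_conditional_rejection(
    sample_dice, lambda s: s == 12, n_samples=36000)
# dist is {6: 1.0} (given a sum of 12, the first die must be 6);
# acceptance_rate is close to 1/36, i.e. almost all work is wasted.
```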
Lea should provide other approximate algorithms that are more efficient for this kind of problem.
MCMC and weighted algorithms are not directly envisioned here because, AFAIK, these are limited to "observations" (conjunctions of equalities), while Lea is more general, covering any boolean function as the condition `e`. A "hybrid" algorithm mixing exact inference on `e` and approximate inference on `x` should be feasible. Also, extra options beyond the number of samples should allow trade-offs to be made in those algorithms.
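For contrast, the reason weighted algorithms fit equality observations so naturally can be seen in a minimal likelihood-weighting sketch (plain Python; the rain/wet-grass model is a hypothetical textbook example, not part of Lea): instead of rejecting samples, the observed variable is clamped to its observed value and each sample is weighted by the probability of that observation. This trick only works when the evidence pins variables to values, which is the limitation noted above.

```python
import random

def estimate_likelihood_weighting(n_samples=10000):
    """Likelihood weighting on a tiny two-variable model (hypothetical):
    Rain ~ Bernoulli(0.2); P(WetGrass=True | Rain) = 0.9 if Rain else 0.1.
    We observe WetGrass=True.  WetGrass is never sampled: it is clamped
    to True, and each sample of Rain is weighted by
    P(WetGrass=True | Rain) -- no sample is ever rejected."""
    weights = {True: 0.0, False: 0.0}
    for _ in range(n_samples):
        rain = random.random() < 0.2
        w = 0.9 if rain else 0.1  # weight = P(WetGrass=True | rain)
        weights[rain] += w
    total = weights[True] + weights[False]
    return weights[True] / total  # estimate of P(Rain=True | WetGrass=True)

random.seed(0)
p = estimate_likelihood_weighting()
# exact value: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26, about 0.692
```

Because the evidence here is an equality (`WetGrass == True`), the weight is just a lookup of a conditional probability; for an arbitrary boolean condition on `e`, no such per-sample weight is available, which is why a hybrid with exact inference on `e` is the more general route for Lea.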
Comments (8)
-
reporter Change Lea.calc signature, implement MC algorithms and make related changes on gen_one_random_mc methods and others (refs #55) → <<cset 649f3b43a407>>
-
reporter Make corrections, restrict use of MCEV algorithm, rename MCEC as MCLW (refs #55) → <<cset d453ac7425a5>>
-
reporter Make corrections and simplifications (refs #55) → <<cset 1a916754c3e5>>
-
reporter Add test_algorithms.py for testing new MC algorithms (refs #55) → <<cset 1f5b02bf06a8>>
-
reporter Correct methods' docstrings (refs #55) → <<cset d1468991a8b9>>
-
reporter - changed status to resolved
-
reporter - changed status to closed