ENH improve HDPModel L-BFGS objective parameterization to avoid overflow
Issue #2
resolved
Currently, HDPModel must run gradient optimization of its variational factor
q(v_k | u_k0, u_k1) = Beta(v_k | u_k0, u_k1)
directly in terms of the Beta parameters u_k0, u_k1, which can overflow when these grow large. We could instead parameterize the Beta in terms of a "scale/sum" s_k = u_k0 + u_k1 and a "mean" m_k = u_k0 / (u_k0 + u_k1), and optimize unconstrained transforms of these. This would likely make gradient descent more stable.
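As a rough illustration of the idea (this is a hypothetical sketch, not the actual code in OptimizerForHDP2.py): fit a Beta(u0, u1) density by maximum likelihood with L-BFGS, but optimize over the unconstrained pair (logit of mean, log of scale) rather than over u0, u1 directly, so the optimizer never leaves the valid parameter region.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def beta_logpdf(v, u0, u1):
    """Log density of Beta(u0, u1) at each value in v."""
    return ((u0 - 1) * np.log(v) + (u1 - 1) * np.log1p(-v)
            + gammaln(u0 + u1) - gammaln(u0) - gammaln(u1))

def neg_loglik(eta, v):
    """Objective in unconstrained coords eta = (logit mean, log scale).

    mean  m = u0 / (u0 + u1)   in (0, 1)
    scale s = u0 + u1          > 0
    so u0 = m * s and u1 = (1 - m) * s are always valid Beta parameters.
    """
    m = expit(eta[0])
    s = np.exp(eta[1])
    u0, u1 = m * s, (1.0 - m) * s
    return -np.sum(beta_logpdf(v, u0, u1))

rng = np.random.default_rng(0)
v = rng.beta(3.0, 7.0, size=2000)  # true mean 0.3, true scale 10
res = minimize(neg_loglik, x0=np.zeros(2), args=(v,), method="L-BFGS-B")
m_hat, s_hat = expit(res.x[0]), np.exp(res.x[1])
print(m_hat, s_hat)  # should land near 0.3 and 10
```

The same trick applies to the HDP variational objective: because the optimizer moves in logit/log space, large steps translate into bounded multiplicative changes in (m, s), avoiding the overflow that raw (u_k0, u_k1) steps can cause.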
Comments (1)
reporter
Completed with commit of HDPModel2.py, which uses OptimizerForHDP2.py to solve a re-parameterized objective. Generally more stable in practice.