Issue #31
resolved
Hello
I was hoping for a few insights into how best to compare the results from our primary Capture Hi-C experiment and a reciprocal validation capture. The reciprocal experiment contained ~4000 baits.
As it is not advisable to compare the sets of stringent interactions directly, I have tried using the sdef package with published data. I've tried to reproduce the validation results presented in Javierre et al., but the values I obtain do not agree exactly with those reported.
So my questions are: to prepare the matrix for input into sdef, should I include only the interactions that occur in both experiments and create an ID such as 'bait1_otherEnd1', etc.? And should I then compare interaction p-values, weighted p-values, or CHiCAGO scores?
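For reference, here is roughly what I had in mind for building the input. This is just a sketch: the column names (baitID, otherEndID, score) and file names are placeholders based on a typical CHiCAGO interaction export, and the two fragment IDs are sorted before pasting so the key still matches when the bait and other-end roles are swapped in the reciprocal design.

```r
library(data.table)

# Placeholder file names; each table is assumed to be a CHiCAGO-style
# interaction export with (at least) baitID, otherEndID and score columns.
primary    <- fread("primary_interactions.txt")
reciprocal <- fread("reciprocal_interactions.txt")

# In a reciprocal capture the bait/other-end roles are swapped, so order the
# two fragment IDs before pasting to get a key shared by both experiments.
addID <- function(dt) dt[, intID := paste(pmin(baitID, otherEndID),
                                          pmax(baitID, otherEndID), sep = "_")]
addID(primary)
addID(reciprocal)

# all = TRUE keeps the union of interactions seen in either experiment;
# set all = FALSE to keep only the intersection.
merged <- merge(primary[,    .(intID, score.primary    = score)],
                reciprocal[, .(intID, score.reciprocal = score)],
                by = "intID", all = TRUE)
```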
thanks! JB
Comments (2)
- changed status to resolved
We used all interactions that exceeded the score of zero (sic!) in at least one of the experiments (i.e., either primary or reciprocal) and exp(-Chicago_score) as input for baymod, setting p-value = TRUE. Note that we actually used a developmental version of this package, known as BGcom and available from here: http://www.bgx.org.uk/software.html. I'm realising only now that it's not exactly identical to the one in sdef, although the differences are minor - so thanks for bringing this up.
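In code, that recipe might look roughly like the sketch below, continuing from a merged table like the one in the question (one row per interaction, with the CHiCAGO score from each experiment). The filtering and the exp(-score) transform are as described above; the sdef argument names are my best recollection of the package interface, so please check ?ratio and ?baymod (or the BGcom equivalents) before relying on them.

```r
library(sdef)
library(data.table)

# Interactions missing from one experiment get a score of 0 there, so that
# "score above zero in at least one experiment" can be applied directly.
merged[is.na(score.primary),    score.primary    := 0]
merged[is.na(score.reciprocal), score.reciprocal := 0]
keep <- merged[score.primary > 0 | score.reciprocal > 0]

# exp(-score) turns large CHiCAGO scores into small p-value-like quantities.
pmat <- cbind(primary    = exp(-keep$score.primary),
              reciprocal = exp(-keep$score.reciprocal))

# Argument names below are assumptions about the sdef interface; verify them
# against the package documentation before use.
Th <- ratio(data = pmat, pvalue = TRUE)      # agreement between the two lists
Rh <- baymod(iter = 1000, output.ratio = Th) # Bayesian assessment of the ratio
```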