StatSimulationBased.txt
* A: In the pre-computing days, people used to look up a table that told you, for n − 1 degrees of freedom, how many SEs you need to go on either side of the sample mean to get a 95% CI.
* One important point to notice is that the range defined by the confidence interval will vary with each sample even if the sample size is kept constant. The reason is that the sample mean will vary each time, and the standard deviation will vary too.
* The sample mean and standard deviation are likely to be close to the population mean and standard deviation, but they are ultimately just estimates of the true parameters.
* This means that as evidence for rejection of H0 we will count extreme values on both sides of μ. For this reason, the above test is called a **two-sided significance test**.
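The points above can be sketched in Python. This is a minimal illustration with made-up data (mean 100, SD 15, n = 20 are arbitrary choices); it computes the 95% CI the way the old tables did, via the t critical value for n − 1 degrees of freedom, and then runs a two-sided one-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=15, size=20)  # made-up data

n = len(sample)
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(n)  # standard error of the mean

# What the pre-computing tables gave you: the t critical value
# for a 95% CI with n - 1 degrees of freedom
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)

# Two-sided one-sample t-test of H0: mu = 100
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(ci, t_stat, p_value)
```

Rerunning with a fresh sample (a new seed) shifts both endpoints of the CI, which is exactly the sample-to-sample variability described above.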
* We can state our null hypothesis as follows: H0: μ1 = μ2, i.e. the null hypothesis is that the difference between the two means is zero.
* The difference between the means of two samples follows a normal distribution and is **centered around the true difference between the two populations**.
* Translating this t-statistic to a p-value is problematic: the degrees of freedom needed for the correct t-distribution are not obvious. The t-distribution assumes that a single s replaces a single σ, but here we have two of each. If σ1 = σ2, we can take a weighted average of the two sample SDs s1 and s2 (the pooled SD).
* In this case the correct t-distribution has (n1 − 1) + (n2 − 1) degrees of freedom [proof: Rice, 1992, 422].
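The pooled two-sample test above can be sketched by hand and checked against scipy's `ttest_ind` (which performs exactly this pooled test when `equal_var=True`); the sample sizes and parameters here are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x1 = rng.normal(10.0, 2.0, size=12)  # made-up sample 1
x2 = rng.normal(11.0, 2.0, size=15)  # made-up sample 2

n1, n2 = len(x1), len(x2)
df = (n1 - 1) + (n2 - 1)  # pooled degrees of freedom

# Pooled SD: weighted average of the two sample variances
s_pooled = np.sqrt(((n1 - 1) * x1.var(ddof=1)
                    + (n2 - 1) * x2.var(ddof=1)) / df)
se = s_pooled * np.sqrt(1 / n1 + 1 / n2)

# Two-sided p-value from the t-distribution with df degrees of freedom
t_stat = (x1.mean() - x2.mean()) / se
p_value = 2 * stats.t.sf(abs(t_stat), df=df)

# scipy's pooled test gives the same answer
t_sp, p_sp = stats.ttest_ind(x1, x2, equal_var=True)
```

Note that unequal variances would call for Welch's correction (`equal_var=False`), which adjusts the degrees of freedom instead of pooling.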
1. Attempt to minimize error → how to measure it? Let P(R|H0) = the probability of rejecting the null hypothesis given that the null hypothesis is in fact true.
* Thus, if we want to decrease the chance of a Type II error, we need to increase the power of the statistical test.
* The best situation is when we have relatively high power (low Type II error) and low Type I error. By convention, we keep α at 0.05, and we should not change it → lowering α reduces power.
* If we have a relatively narrow CI and a non-significant result (p > 0.05), then we have relatively high power and a relatively low probability of making a Type II error, i.e. of accepting the null hypothesis as true when it is in fact false.
* Observed power provides no new information once the p-value is known: if the p-value is high, we already know the observed power is low, so nothing is gained by computing it.
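Both error rates above can be estimated by simulation, in the spirit of these notes. This is a minimal sketch with made-up population parameters (unit-variance normals, n = 30 per group, 2000 simulated experiments): the rejection rate when H0 is true approximates α, and the rejection rate when there is a real difference approximates power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, alpha = 2000, 30, 0.05

def rejection_rate(true_diff):
    """Fraction of simulated two-sample t-tests that reject H0: mu1 == mu2."""
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.normal(0.0, 1.0, size=n)
        x2 = rng.normal(true_diff, 1.0, size=n)
        _, p = stats.ttest_ind(x1, x2, equal_var=True)
        if p < alpha:
            rejections += 1
    return rejections / n_sims

type_i = rejection_rate(0.0)  # H0 true: rate should hover near alpha
power = rejection_rate(0.8)   # H0 false (effect size d = 0.8): power
```

Raising `n` or `true_diff` pushes `power` toward 1, while `type_i` stays pinned near α; this is the trade-off the notes describe, since α is fixed by convention and only power is left to improve.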