1. APA Publications and Communications Board Working Group on Journal Article Reporting Standards (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63(9), 839-851. doi: 10.1037/0003-066x.63.9.839
2. Acock, A. C. (2005). Working with missing values. Journal of Marriage and Family, 67(4), 1012-1028. doi: 10.1111/j.1741-3737.2005.00191.x
3. Aronson, E., Wilson, T. D., & Brewer, M. B. (1998). Experimentation in social psychology. In D. T. Gilbert, S. T. Fiske & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vol. 1, pp. 99-142). Boston, MA: McGraw-Hill.
4. Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis. Cambridge, England: Cambridge University Press.
5. Boomsma, D., Busjahn, A., & Peltonen, L. (2002). Classical twin studies and beyond. Nature Reviews Genetics, 3(11), 872-882. doi: 10.1038/nrg932
6. Borckardt, J. J., Murphy, M. D., Nash, M. R., & Shaw, D. (2004). An empirical examination of visual analysis procedures for clinical practice evaluation. Journal of Social Service Research, 30(3), 55-73. doi: 10.1300/J079v30n03_04
7. Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 3-16). Cambridge, England: Cambridge University Press.
8. Cheng, S., & Powell, B. (2005). Small samples, big challenges: Studying atypical family forms. Journal of Marriage and Family, 67(4), 926-935. doi: 10.1111/j.1741-3737.2005.00184.x
9. Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49(12), 997-1003. doi: 10.1037/0003-066x.49.12.997
10. Collins, L. M., & Sayer, A. G. (2000). Modeling growth and change processes: Design, measurement, and analysis for research in social psychology. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 478-495). Cambridge, England: Cambridge University Press.
11. Cook, T. D. (1999, March). Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education. Paper presented at the Annual Meeting of the Harvard Faculty Seminar on Experiments in Education, Cambridge, MA.
12. Cumming, G. (2008). Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspectives on Psychological Science, 3(4), 286-300. doi: 10.1111/j.1745-6924.2008.00079.x
13. Dearing, E., & Hamilton, L. C. (2006). Best practices in quantitative methods for developmentalists: V. Contemporary advances and classic advice for analyzing mediating and moderating variables. Monographs of the Society for Research in Child Development, 71(3), 88-104.
14. Dishion, T. J., McCord, J., & Poulin, F. (1999). When interventions harm: Peer groups and problem behavior. American Psychologist, 54(9), 755-764. doi: 10.1037/0003-066x.54.9.755
15. Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63(7), 591-601. doi: 10.1037/0003-066x.63.7.591
16. Gigerenzer, G., Krauss, S., & Vitouch, O. (2004). The null ritual: What you always wanted to know about significance testing but were afraid to ask. In D. Kaplan (Ed.), The Sage handbook of quantitative methodology for the social sciences (pp. 391-408). Thousand Oaks, CA: Sage.
17. Gilgun, J. F. (2005). Qualitative research and family psychology. Journal of Family Psychology, 19(1), 40-50. doi: 10.1037/0893-3200.19.1.40
18. Henry, G. T. (1990). Practical sampling. Newbury Park, CA: Sage.
19. Jensen, A. R. (1980). Uses of sibling data in educational and psychological research. American Educational Research Journal, 17(2), 153-170. doi: 10.2307/1162480
20. John, O. P., & Benet-Martinez, V. (2000). Measurement: Reliability, construct validation, and scale construction. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 339-369). Cambridge, England: Cambridge University Press.
21. Keith, T. Z., & Reynolds, C. R. (2003). Measurement and design issues in child assessment research. In C. R. Reynolds & R. W. Kamphaus (Eds.), Handbook of psychological and educational assessment of children: Intelligence, aptitude, and achievement (2nd ed., pp. 79-111). New York, NY: Guilford Press.
22. Kraemer, H. C., Mintz, J., Noda, A., Tinklenberg, J., & Yesavage, J. A. (2006). Caution regarding the use of pilot studies to guide power calculations for study proposals. Archives of General Psychiatry, 63(5), 484-489. doi: 10.1001/archpsyc.63.5.484
23. LaRossa, R. (2005). Grounded theory methods and qualitative family research. Journal of Marriage and Family, 67(4), 837-857. doi: 10.1111/j.1741-3737.2005.00179.x
24. Lesik, S. A. (2006). Applying the regression-discontinuity design to infer causality with non-random assignment. Review of Higher Education, 30(1), 1-19. doi: 10.1353/rhe.2006.0055
25. Lindahl, K. M. (2001). Methodological issues in family observational research. In P. K. Kerig & K. M. Lindahl (Eds.), Family observational coding systems: Resources for systematic research (pp. 23-32). Mahwah, NJ: Erlbaum.
26. MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of Psychology, 58, 593-614. doi: 10.1146/annurev.psych.58.110405.085542
27. Margolin, G., Chien, D., Duman, S. E., Fauchier, A., Gordis, E. B., Oliver, P. H., . . . Vickerman, K. A. (2005). Ethical issues in couple and family research. Journal of Family Psychology, 19(1), 157-167. doi: 10.1037/0893-3200.19.1.157
28. Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods, 9(2), 147-163. doi: 10.1037/1082-989x.9.2.147
29. McClelland, G. H. (2000). Nasty data: Unruly, ill-mannered observations can ruin your analysis. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 393-411). Cambridge, England: Cambridge University Press.
30. Morgan, S. L. (2001). Counterfactuals, causal effect heterogeneity, and the Catholic school effect on learning. Sociology of Education, 74(4), 341-374. doi: 10.2307/2673139
31. Paxton, P., Curran, P. J., Bollen, K. A., Kirby, J., & Chen, F. (2001). Monte Carlo experiments: Design and implementation. Structural Equation Modeling, 8(2), 287-312. doi: 10.1207/S15328007SEM0802_7
32. Peterson, C. (2009). Minimally sufficient research. Perspectives on Psychological Science, 4(1), 7-9. doi: 10.1111/j.1745-6924.2009.01089.x
33. Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. doi: 10.1037/0021-9010.88.5.879
34. Rosenthal, R., & Rosnow, R. L. (2008). Essentials of behavioral research: Methods and data analysis (3rd ed.). Boston, MA: McGraw-Hill.
35. Rutter, M. (2007). Proceeding from observed correlation to causal inference: The use of natural experiments. Perspectives on Psychological Science, 2(4), 377-395. doi: 10.1111/j.1745-6916.2007.00050.x
36. Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.
37. Singer, J. D., & Willett, J. B. (1996). Methodological issues in the design of longitudinal research: Principles and recommendations for a quantitative study of teachers' careers. Educational Evaluation and Policy Analysis, 18(4), 265-283. doi: 10.3102/01623737018004265
38. Smith, E. R. (2000). Research design. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 17-39). Cambridge, England: Cambridge University Press.
39. Suarez-Balcazar, Y., Balcazar, F. E., & Taylor-Ritzler, T. (2009). Using the Internet to conduct research with culturally diverse populations: Challenges and opportunities. Cultural Diversity and Ethnic Minority Psychology, 15(1), 96-104. doi: 10.1037/a0013179
40. West, S. G., & Aiken, L. S. (1997). Toward understanding individual effects in multicomponent prevention programs: Design and analysis strategies. In K. J. Bryant, M. Windle & S. G. West (Eds.), The science of prevention: Methodological advances from alcohol and substance abuse research (pp. 167-209). Washington, DC: American Psychological Association.
41. West, S. G., Biesanz, J. C., & Pitts, S. C. (2000). Causal inference and generalization in field settings: Experimental and quasi-experimental designs. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 40-84). Cambridge, England: Cambridge University Press.