The very first meeting of the EDDA Working Group was held on June 26, 2017.
Topic: The prior is the most important and the most controversial element of Bayesian statistics. Throughout her PhD project, Mariëlle investigated its formalization and evaluation. Specifically, she (1) systematically obtained priors from the literature, supported by experts, (2) elicited priors directly from experts, (3) evaluated replication by using the first study as a prior for a future study, and (4) evaluated the resemblance between animal and human studies by using the animal study as a prior.
Hand-out 1: Duco Veen
Expert Knowledge as a Source of Data. Expert knowledge is a valuable source of information and can be collected through expert elicitation. As with any other type of data, it is important to control the quality of the data collection. To ensure proper representation of experts’ beliefs in the form of a probability distribution, software can be used to make the translation explicit. High quality elicited expert knowledge can solve modeling issues, enrich traditional data or provide direct answers to research questions.
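As a small illustration of the kind of translation such software makes explicit, an expert's stated median and 95th percentile can be matched to a normal distribution. This is a hypothetical sketch: the quantity, the numbers, and the choice of a normal shape are assumptions for illustration, not taken from the hand-out.

```python
from statistics import NormalDist

def normal_from_quantiles(median, q95):
    """Fit a normal distribution whose 50th and 95th percentiles
    match an expert's stated values (hypothetical elicitation step)."""
    z95 = NormalDist().inv_cdf(0.95)  # ~1.645
    return NormalDist(mu=median, sigma=(q95 - median) / z95)

# e.g. the expert believes the effect is most likely 10,
# and is 95% sure it is below 20
prior = normal_from_quantiles(10.0, 20.0)
```

Feeding the fitted distribution's quantiles back to the expert ("are you really 95% sure the effect is below 20?") is one way to make the representation of their beliefs explicit and checkable.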
Using the Data Agreement Criterion to Rank Experts’ Beliefs. When several experts as well as data are available, there is a vast amount of information that offers extensive opportunities. Using the Data Agreement Criterion (DAC), we can evaluate elicited experts’ beliefs in the light of new data. By doing so, experts can learn from the data and from each other; alternatively, when all experts are in agreement yet deviate from the data, one could learn that the data are actually polluted.
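A minimal sketch of a DAC-style ranking, assuming a conjugate normal model with known variance so every Kullback-Leibler divergence is available in closed form. The idea is to compare the divergence from the posterior to each expert's prior against the divergence from the posterior to a vague benchmark prior; all data and expert priors below are invented for illustration.

```python
import numpy as np

def kl_normal(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ), closed form for normals."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# invented data with known observation sd
sigma = 2.0
y = np.array([4.8, 5.5, 6.1, 5.0, 5.6, 4.9, 5.3, 5.8])
n, ybar = len(y), y.mean()

# posterior for the mean under a vague benchmark prior N(0, 100^2)
m0, s0 = 0.0, 100.0
post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
post_mean = post_var * (m0 / s0**2 + n * ybar / sigma**2)
post_sd = np.sqrt(post_var)

# hypothetical expert priors: N(mean, sd^2)
experts = {"expert_1": (5.0, 1.0), "expert_2": (2.0, 1.0), "expert_3": (5.0, 5.0)}

kl_bench = kl_normal(post_mean, post_sd, m0, s0)
dac = {name: kl_normal(post_mean, post_sd, m, s) / kl_bench
       for name, (m, s) in experts.items()}
# DAC < 1: the expert's prior agrees with the data better than the benchmark;
# DAC > 1: the expert is more misleading than specifying no knowledge at all
```

Here expert_2, whose prior sits far from the data, ends up with a DAC above 1, while the experts centred near the sample mean score below 1 and can be ranked against each other.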
Topic: Bayesian estimation is frequently suggested as the appropriate estimation method for studies with a small sample size. Sanne carried out a systematic literature review and concluded that Bayesian estimation indeed has advantages, but requires the inclusion of prior information to perform well with small samples. In fact, using only default priors can cause more bias than Maximum Likelihood estimation! Researchers with small data sets therefore have to be careful: the smaller the sample size, the more important it is to carefully choose prior distributions!
In her next project, she will use a simulation study to investigate the questions that remain unanswered: Which parameters require prior information? How certain do we need to be about this prior information? And what is the impact of biased parameters on the accuracy of the other parameters estimated in the model?
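The small-sample point above can be illustrated with a toy simulation (a sketch, not the planned study): for a normal model with unknown mean, a flat prior on the mean and a common "default" Inverse-Gamma(0.01, 0.01) prior on the variance, the posterior-mean variance estimate at n = 5 is compared with the Maximum Likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps, eps = 5, 1.0, 20_000, 0.01

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

mle = ss / n  # ML estimate of the variance, biased downward at small n
# marginal posterior of sigma^2 under the flat-mean / IG(eps, eps) setup:
# sigma^2 | data ~ IG(eps + (n-1)/2, eps + ss/2), posterior mean = b / (a - 1)
bayes = (eps + ss / 2) / (eps + (n - 1) / 2 - 1)

rel_bias_mle = mle.mean() / sigma2 - 1      # about -20% at n = 5
rel_bias_bayes = bayes.mean() / sigma2 - 1  # much larger in magnitude here
```

In this particular toy setup the default-prior posterior mean is far more biased than ML, which is exactly the kind of behaviour that makes careful prior choice crucial at small sample sizes; which parameters are affected and how strongly is the open question the simulation study targets.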
Conflict, Clash and Convergence of Expert and Data. By letting experts make predictions in the probabilistic form of a prior distribution, we are able to take valuable knowledge into account as well as the uncertainty about this prediction. With the Data Agreement Criterion (DAC), experts’ prior beliefs can be evaluated in the light of new data, and by doing so we can identify whether these beliefs clash or converge with the data. By reflecting on these results with the experts, we can figure out where discrepancies come from and how the interaction between experts’ knowledge and the data actually works. This evaluation results in a validated learning process that could lead to convergence of experts’ beliefs and data. However, the question remains how this validated learning process works longitudinally, and how the DAC scores could be used over time for this purpose.
Eliciting expert knowledge about the amount of scientific misbehaviour among PhD candidates. The foundation of good science is the scientific integrity of scientists. In this study, experts on scientific behaviour (deans, vice-deans, professors, et cetera) were asked to estimate the scientific behaviour of PhD students. Since these experts are often involved in decisions and policies about scientific integrity, it is important that their judgements are valid. Meanwhile, PhD students were asked to fill in a survey containing three vignettes about falsification and questionable research practices. First, a method was developed to elicit the priors from the experts. Second, prior-data conflict was examined using the survey results as data. In this way, the validity of the experts’ judgements on scientific behaviour could be tested.
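One simple way to examine prior-data conflict of the kind described above is a prior predictive check. The sketch below assumes a Beta prior on the proportion of PhD candidates reporting a questionable practice and invented survey counts; none of these numbers come from the actual study.

```python
import numpy as np

rng = np.random.default_rng(7)

a, b = 2.0, 18.0        # hypothetical expert prior: roughly 10% prevalence
n_obs, k_obs = 120, 40  # hypothetical survey: 40 of 120 candidates report it

# draw plausible proportions from the expert prior, then simulate surveys
theta = rng.beta(a, b, size=50_000)
k_sim = rng.binomial(n_obs, theta)

# prior predictive tail probability: how surprising is the observed count
# under the expert's prior beliefs?
p_conflict = (k_sim >= k_obs).mean()
```

A very small tail probability, as in this invented example, signals that the observed survey data would be highly surprising under the expert's prior, i.e. prior-data conflict.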
Topic: Complex statistical models generally require large sample sizes. In practice, these numbers cannot always be easily obtained. In research on the psychological impact of pediatric burns on the family, problems with small sample sizes may arise. In order to obtain a sufficient sample size, prolonged multicenter studies are needed, as burn centres in the Netherlands comprise small units. Using informative priors to increase the power of the statistical analyses can be a solution to sample size problems. In the current project, we use informative priors to estimate a growth curve model with prospective data from mothers of young children with burns. The ultimate goal of the project is to compare results obtained with default priors, informative priors obtained from the literature, and priors resulting from expert knowledge. This project is a good example of the way substantive researchers and researchers in methodology and statistics can collaborate and support each other.
Topic: When making decisions about students, we rely either on the results of educational or psychological tests or on the expert opinion of the teacher(s). Instead of choosing between tests and teacher expertise, we could obtain a more ‘complete’ or ‘fair’ judgment by combining the two. Currently, however, there is no formal and transparent way of achieving this. In her PhD project, Kimberley investigates how Bayesian statistics can be used to combine teacher expertise (the prior) with test data. Recently, she developed an elaborate online elicitation tool that takes into account primary school teachers’ typically limited knowledge of statistics and their limited time. She tested this elicitation tool twice with a group of primary school teachers and discussed with them how the instrument can be of help within the educational system.
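The combination of a teacher's judgement (the prior) with test data can be sketched with a conjugate Beta-Binomial update; the numbers below are purely illustrative and not taken from the project.

```python
# hypothetical teacher judgement about a pupil's probability of answering
# an item correctly, expressed as a Beta(a, b) prior with mean a / (a + b)
a_prior, b_prior = 7.0, 3.0   # the teacher expects about 70% correct

# hypothetical test result: 12 of 20 items answered correctly (60%)
n_items, n_correct = 20, 12

# conjugate update: Beta prior + Binomial data -> Beta posterior
a_post = a_prior + n_correct
b_post = b_prior + (n_items - n_correct)

post_mean = a_post / (a_post + b_post)  # lands between prior mean and score
```

The posterior mean sits between the teacher's expectation and the raw test score, which is the formal, transparent compromise between the two sources of information that the project aims for.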
Topic: Although Bayesian statistical approaches are increasingly well investigated, empirical applications of Bayesian models still lag behind in a variety of research fields. Familiarizing early career researchers with these methods is thus crucial for the development of Bayesian statistics. A Shiny app has been developed to facilitate scholars’ first contact with Bayesian modeling. Interactively, users can choose different priors, upload pre-defined datasets and construct their first posterior distributions.
Hand-out available upon request
Topic: Bayesian estimation of PTSD trajectories in a small sample of humanitarian aid workers
Traditional (maximum likelihood) latent growth mixture modeling (LGMM) is limited in its ability to detect low-frequency classes or small subgroups within (small) samples. Bayesian estimation using priors can offer a solution to this small-sample problem.
After exposure to a traumatic event, different trajectories of posttraumatic stress disorder (PTSD) symptom development may occur. Longitudinal studies with assessments starting early after the trauma most consistently suggest a four-trajectory solution (low-stable/resilient, high-stable/chronic, high-decreasing/recovery, and low-increasing/delayed onset; Bonanno et al., 2004, 2010). Currently, we are working on a prospective dataset to investigate PTSD trajectory classification among 213 humanitarian aid workers who were assessed prior to deployment, shortly after return, and 3–6 months later. We made a novel effort to rescale and integrate previous knowledge from the PTSD literature with a pre-trauma assessment to inform priors for our current dataset, resulting in a proposed three-trajectory model.
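The general idea that a prior can stabilize a small latent class can be shown in a heavily simplified toy: a one-dimensional two-class mixture with known unit variances, estimated by MAP-EM with a Dirichlet-style prior on the class proportions. This is not the LGMM used in the study, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# invented small sample (n = 60) with a 15% minority class centred at 4
x = np.concatenate([rng.normal(0.0, 1.0, 51), rng.normal(4.0, 1.0, 9)])
n, K, alpha = len(x), 2, 2.0  # alpha > 1 keeps class weights away from zero

mu = np.array([0.0, 3.0])     # crude starting values for the class means
w = np.full(K, 1.0 / K)       # starting class proportions

for _ in range(200):
    # E-step: responsibilities under N(mu_k, 1), weighted by class proportions
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: means as responsibility-weighted averages, and a
    # Dirichlet(alpha) MAP update that shrinks proportions toward 1/K
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    w = (nk + alpha - 1.0) / (n + K * (alpha - 1.0))
```

The prior term in the weight update prevents the small class's proportion from collapsing toward zero, which is the role informative priors play, in a far richer form, when fitting trajectory models to small samples.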
Inge van der Borg
Impact of the method used for a small dataset. When dealing with small sample sizes, methodological decisions can have a major impact on the results. To investigate this impact, generated data were analysed with different software programs: SPSS, JASP, BIEMS and the R package restriktor. The data consisted of four groups and were analysed with ANOVA, a contrast test, an F-bar test, a nonparametric test (Kruskal–Wallis), bootstrapping and Bayesian statistics. The results showed that some methods detected significant differences between groups in small datasets where other methods did not; these methods all used prior information or tested informative hypotheses. Based on these results, you should try to incorporate prior information in the method used to analyse a small dataset.
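As a small illustration of how such method comparisons can be run in code, the sketch below analyses four invented groups with a classical ANOVA and the nonparametric Kruskal-Wallis test (using Python's scipy rather than the packages listed above; the data are made up).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# four invented groups of n = 8 with a modest linear trend in the means
groups = [rng.normal(mu, 1.0, 8) for mu in (0.0, 0.3, 0.6, 0.9)]

f_stat, p_anova = stats.f_oneway(*groups)  # classical one-way ANOVA
h_stat, p_kw = stats.kruskal(*groups)      # nonparametric Kruskal-Wallis
# with so few observations per group, the two tests can easily disagree
# on whether the group differences reach significance
```

Running several tests side by side on the same small dataset, as the study did across SPSS, JASP, BIEMS and restriktor, makes the sensitivity of the conclusion to the method choice explicit.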