# Chapter 9 Bayesian Inference for 1-way ANOVAs

There are many ways to approach Bayesian inference, even for a simple design such as a 1-way ANOVA (see, e.g., https://rpubs.com/dgolicher/jags_one_way_anova).

If you intend to do a lot of Bayesian statistics, you will find it helpful to learn Stan or the BUGS/JAGS language, which can be accessed in R via packages such as R2OpenBUGS or R2WinBUGS.

But a different Bayesian approach has gained considerable currency in recent years, especially in the behavioral and life sciences: the Bayes factor approach, which is the one illustrated in this chapter.

## 9.1 A BayesFactor approach to One-way ANOVA

This section is not meant to be a comprehensive treatment of Bayesian alternatives to the traditional ANOVA. It provides simple code for extracting a Bayes factor for the omnibus model and then demonstrates one way to evaluate contrasts within that model.

Initially, using the **BayesFactor** package of Morey et al., we can use their default `anovaBF` approach. To set up the analysis for the contrasts examined below, the contrast for "factora" is re-specified here as the standard orthogonal set used above. This BayesFactor approach is the alternative to the "fit.1" model used above.

The `anovaBF` default approach is to compare the full model to the intercept-only model. In this default approach, `anovaBF` places a Jeffreys prior on the hypothetical population means and variances. Some control over the characteristics of the prior distribution is possible in `anovaBF`, but that is beyond the scope of this document. Readers are urged to begin with the help pages for the **BayesFactor** package and the `anovaBF` function.
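Before relying on this contrast set, it can be reassuring to verify its orthogonality with a quick base-R check (the matrix values are the same ones defined in the code chunk in this section; nothing here requires **BayesFactor**):

```r
# The contrast set for factora: column 1 compares the control group
# to the average of the two experimental groups; column 2 compares
# the two experimental groups to each other
contrasts.factora <- matrix(c(2, -1, -1,
                              0,  1, -1), ncol = 2)

# Each contrast sums to zero across groups...
colSums(contrasts.factora)                            # 0 0
# ...and the columns have a zero cross-product, so they are orthogonal
sum(contrasts.factora[, 1] * contrasts.factora[, 2])  # 0
```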

```
#library(BayesFactor)
#set contrasts for factora to the orthogonal set
contrasts.factora <- matrix(c(2,-1,-1, 0,1,-1), ncol=2)
contrasts(hays$factora) <- contrasts.factora
#contrasts(factora)
anovaBF(dv~factora, data=hays)
```

```
## Bayes factor analysis
## --------------
## [1] factora : 6.926757 ±0.01%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS
```

This indicates some evidence against the null: the data are roughly 7 times more probable under the full model than under the intercept-only (null) model. This is not a strikingly strong degree of evidence.
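If one is willing to assume equal prior odds for the two models (an assumption, not a package default), this Bayes factor converts directly to a posterior model probability via BF/(1 + BF). A small sketch using the value reported above:

```r
# Convert the omnibus Bayes factor to a posterior probability for the
# full model, assuming 1:1 prior odds on the two models
bf <- 6.926757
posterior.full <- bf / (1 + bf)
round(posterior.full, 3)   # about 0.874
```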

## 9.2 Analytical Contrasts and **BayesFactor**

A direct BF approach to individual contrasts doesn't appear to be available in the **BayesFactor** package. An alternative is to use the `lmBF` function on the analogous "lm" model. However, `lmBF` won't work with IVs that are factors, so we will try to trick it: if we manually code the contrasts as new variables in the data frame, we can name those as IVs. I've created a modified .csv file that has these new variables (ac1 and ac2). Read the new file and look at the first few and last few lines:

```
hays3 <- read.csv("data/hays3.csv", stringsAsFactors=T)
kable(headTail(hays3))
```

|    | factora | dv | ac1 | ac2 |
|----|---------|----|-----|-----|
| 1  | control | 27 | 2   | 0   |
| 2  | control | 28 | 2   | 0   |
| 3  | control | 33 | 2   | 0   |
| 4  | control | 19 | 2   | 0   |
| …  | NA      | …  | …   | …   |
| 27 | slow    | 22 | -1  | -1  |
| 28 | slow    | 17 | -1  | -1  |
| 29 | slow    | 20 | -1  | -1  |
| 30 | slow    | 23 | -1  | -1  |
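If you would rather build ac1 and ac2 in code than edit the .csv by hand, a sketch along these lines works. Note that the level name "fast" for the second experimental group and the dv values here are hypothetical placeholders for illustration, not the actual hays data:

```r
# Hand-code the two orthogonal contrasts as numeric variables.
# The group name "fast" and the dv values are hypothetical placeholders.
toy <- data.frame(
  factora = factor(rep(c("control", "fast", "slow"), each = 2)),
  dv      = c(27, 28, 31, 29, 22, 17)
)
# ac1: control (2) vs. the two experimental groups (-1 each)
toy$ac1 <- ifelse(toy$factora == "control", 2, -1)
# ac2: first experimental group (1) vs. second (-1); control scored 0
toy$ac2 <- ifelse(toy$factora == "control", 0,
           ifelse(toy$factora == "fast", 1, -1))
toy
```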

Now let's run the BF regression with those IVs to find the best model. Because of what we already know about this data set, I fit the full model with both contrasts, and a model with only ac1 (the contrast that compares the control group to the average of the two experimental groups).

The `lmBF` function allows us to evaluate those models and to compare them. The substantially larger Bayes factor for the ac1-only model indicates that it is the better fit. This is because, as we know, the means of the 2nd and 3rd groups are so close that we don't really need that second vector in the model.

```
lmbf.full <- lmBF(dv~ac1+ac2, data=hays3)
lmbf.ac1 <- lmBF(dv~ac1, data=hays3)
#lmbf.ac2 <- lmBF(dv~ac2, data=hays3)
#lmbf.full
#lmbf.ac1
ful_vs_ac1 <- c(lmbf.full, lmbf.ac1)
ful_vs_ac1
```

```
## Bayes factor analysis
## --------------
## [1] ac1 + ac2 : 6.959137 ±0%
## [2] ac1 : 20.4168 ±0%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS
```
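Because both models are evaluated against the same intercept-only denominator, the ratio of the two Bayes factors gives the relative evidence for the ac1-only model over the full model (the **BayesFactor** package also supports dividing the BF objects directly, e.g. `lmbf.ac1 / lmbf.full`). A quick arithmetic check using the values printed above:

```r
# Relative evidence: ac1-only model vs. full (ac1 + ac2) model,
# from the BFs each reported against the intercept-only model
bf.full <- 6.959137
bf.ac1  <- 20.4168
bf.ac1 / bf.full   # roughly 2.9 in favor of the ac1-only model
```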

A different function from the same package permits direct comparison of all regression models for a given set of IVs. It gives results that mirror what we saw above.

```
generalTestBF(dv~ac1+ac2, data=hays3, whichModels="all")
```

```
## Bayes factor analysis
## --------------
## [1] ac1 : 20.4168 ±0%
## [2] ac2 : 0.3530796 ±0%
## [3] ac1 + ac2 : 6.959137 ±0%
##
## Against denominator:
## Intercept only
## ---
## Bayes factor type: BFlinearModel, JZS
```

Before using this approach, students are encouraged to explore a good bit more background on the use of Bayes factor methods. Detailed treatment of this method is beyond the bounds of the 511 class.

One can also consult considerable online help from Morey and others on the **BayesFactor** package:

<http://bayesfactor.blogspot.com/2015/01/multiple-comparisons-with-bayesfactor-1.html>