Introduction to Bayesian Inference:

The Beta-Binomial Model

Dr. Mine Dogucu

library(bayesrules)
library(tidyverse)

Examples in this lecture are mainly taken from the Bayes Rules! book, and the new functions are from the bayesrules package.

Statistical Inference

Making meaning of data

  • In the first half of the quarter, we retrieved data (by downloading or scraping), opened data, joined data, wrangled data, and described data.

  • In the second half of the quarter we will make meaning of data using statistical inference and modeling.

Research question

Every research project aims to answer a research question (or multiple questions).

Example

Do UCI students who exercise regularly have higher GPAs?

Population

Each research question aims to examine a population.

Example

The population for this research question is UCI students.

Sampling

A population is the collection of elements that the research question aims to study. However, it is often costly and sometimes impossible to study the whole population, so a subset of the population is usually selected instead. The sample is the subset of the population that is studied.

Example

Since it would be almost impossible to study ALL UCI students, we can study a sample of students.

Note

The goal is to have a sample that is representative of the population so that the findings of the study can generalize to the population.

Descriptive Statistics vs. Inferential Statistics

  • In descriptive statistics, we use sample statistics such as the sample mean or proportion to understand the observed data.

  • In inferential statistics, we use the observed data to make an inference about the population parameters using probabilistic models.

Bayesian vs. Frequentist Statistics

  • These are the two major paradigms for defining probability, and thus the two major approaches to making statistical inference.
  • We will make statistical inference using Bayesian methods; next week we will use frequentist methods. Both of these methods are valid and used in research.
  • Which of these methods you will choose to utilize in your final projects will depend on your understanding of your research topic, your philosophical approach to science, and a few statistical considerations.

Bayesian Inference

Bechdel Test

Alison Bechdel’s 1985 comic Dykes to Watch Out For has a strip called The Rule where a person states that they only go to a movie if it satisfies the following three rules:

  • the movie has to have at least two women in it;

  • these two women talk to each other; and

  • they talk about something besides a man.

This test is used for assessing movies in terms of representation of women. Even though there are three criteria, a movie either fails or passes the Bechdel test.

Let \(\pi\) be the proportion of movies that pass the Bechdel test.

The Beta distribution is a good fit for modeling our prior understanding about \(\pi\).
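As a quick reminder (this formula is not on the original slide), a Beta(\(\alpha, \beta\)) prior places the density

\[f(\pi) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \, \pi^{\alpha - 1}(1 - \pi)^{\beta - 1}, \quad \pi \in [0, 1],\]

on \(\pi\), so the pair \((\alpha, \beta)\) encodes a person's prior understanding.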

We will utilize functions from library(bayesrules) to examine different people’s prior understanding of \(\pi\) and build our own.

The Optimist

summarize_beta(14, 1)
       mean mode         var         sd
1 0.9333333    1 0.003888889 0.06236096
plot_beta(14, 1) 

The Clueless

summarize_beta(1, 1)
  mean mode        var        sd
1  0.5  NaN 0.08333333 0.2886751
plot_beta(1, 1) 

The Feminist

summarize_beta(5, 11)
    mean      mode        var        sd
1 0.3125 0.2857143 0.01263787 0.1124183
plot_beta(5, 11) 
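A minimal sketch (not from the original slides) of where the mean, mode, and var columns above come from; beta_summary() is a hypothetical helper that simply applies the standard Beta(\(\alpha, \beta\)) formulas:

# mean = alpha / (alpha + beta)
# mode = (alpha - 1) / (alpha + beta - 2), defined when alpha, beta > 1
# var  = alpha * beta / ((alpha + beta)^2 * (alpha + beta + 1))
beta_summary <- function(alpha, beta) {
  c(mean = alpha / (alpha + beta),
    mode = (alpha - 1) / (alpha + beta - 2),
    var  = alpha * beta / ((alpha + beta)^2 * (alpha + beta + 1)))
}
beta_summary(5, 11)  # compare with summarize_beta(5, 11) above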

Vocabulary

Informative prior: An informative prior reflects specific information about the unknown variable with high certainty (i.e., low variability).

Vague (diffuse) prior: A vague or diffuse prior reflects little specific information about the unknown variable. A flat prior, which assigns equal prior plausibility to all possible values of the variable (e.g., the Beta(1,1) prior above), is a special case.

Quiz question

Which of these people is the most certain (i.e., has the most informative prior)?

  • The optimist
  • The clueless
  • The feminist

Plotting Beta Prior

Your prior

What is your prior model of \(\pi\)?

Utilize the summarize_beta() and plot_beta() functions to describe your own prior model of \(\pi\). Make sure to note it down; we will refer back to it throughout the lecture.
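For instance, someone who expects about half of movies to pass, with moderate certainty, might use a Beta(10, 10) prior (this choice is only an illustration, not anyone's actual prior):

summarize_beta(10, 10)
plot_beta(10, 10)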

Data

set.seed(84735)
bechdel_sample <- sample_n(bechdel, 20)

We are taking a random sample of size 20 from the bechdel data frame using the sample_n() function.

The set.seed() call makes sure that we end up with the same set of 20 movies each time we run the code. This holds true for everyone in the class, so we can all reproduce each other's analyses if we want to. The number 84735 has no significance other than that it closely resembles BAYES.

Data

glimpse(bechdel_sample)
Rows: 20
Columns: 3
$ year   <dbl> 2005, 1983, 2013, 2001, 2010, 1997, 2010, 2009, 1998, 2007, 201…
$ title  <chr> "King Kong", "Flashdance", "The Purge", "American Outlaws", "Se…
$ binary <chr> "FAIL", "PASS", "FAIL", "FAIL", "PASS", "FAIL", "FAIL", "PASS",…
count(bechdel_sample, binary)
# A tibble: 2 × 2
  binary     n
  <chr>  <int>
1 FAIL      11
2 PASS       9

The Optimist

summarize_beta_binomial(14, 1, y = 9, n = 20)
      model alpha beta      mean      mode         var         sd
1     prior    14    1 0.9333333 1.0000000 0.003888889 0.06236096
2 posterior    23   12 0.6571429 0.6666667 0.006258503 0.07911070
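The posterior row above follows from Beta-Binomial conjugacy: with a Beta(\(\alpha, \beta\)) prior and \(y\) passes in \(n\) movies, the posterior is Beta(\(\alpha + y, \beta + n - y\)). A quick check for the optimist (a sketch, not part of the original slide):

a <- 14; b <- 1; y <- 9; n <- 20
c(alpha_post = a + y, beta_post = b + n - y)
# alpha_post  beta_post
#         23         12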

The Optimist

plot_beta_binomial(14, 1, y = 9, n = 20)

The Clueless

summarize_beta_binomial(1, 1, y = 9, n = 20)
      model alpha beta      mean mode        var        sd
1     prior     1    1 0.5000000  NaN 0.08333333 0.2886751
2 posterior    10   12 0.4545455 0.45 0.01077973 0.1038255

The Clueless

plot_beta_binomial(1, 1, y = 9, n = 20)

The Feminist

summarize_beta_binomial(5, 11, y = 9, n = 20)
      model alpha beta      mean      mode        var         sd
1     prior     5   11 0.3125000 0.2857143 0.01263787 0.11241827
2 posterior    14   22 0.3888889 0.3823529 0.00642309 0.08014418

The Feminist

plot_beta_binomial(5, 11, y = 9, n = 20)

Comparison
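A sketch of one way to compare the three analyses side by side, updating each prior with the same data, \(y = 9\) passes out of \(n = 20\) movies:

plot_beta_binomial(14, 1, y = 9, n = 20)   # the optimist
plot_beta_binomial(1, 1, y = 9, n = 20)    # the clueless
plot_beta_binomial(5, 11, y = 9, n = 20)   # the feminist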

Your Posterior

Utilize summarize_beta_binomial() and plot_beta_binomial() functions to examine your own posterior model.

Balancing Act of Bayesian Analysis

In Bayesian methodology, the prior model and the data both contribute to our posterior model.
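Concretely, for the Beta-Binomial model the posterior mean is a weighted average of the prior mean and the sample proportion:

\[E(\pi \mid y) = \frac{\alpha + y}{\alpha + \beta + n} = \frac{\alpha + \beta}{\alpha + \beta + n} \cdot \frac{\alpha}{\alpha + \beta} + \frac{n}{\alpha + \beta + n} \cdot \frac{y}{n}.\]

For the optimist, this is \((15/35)(0.933) + (20/35)(0.45) \approx 0.657\), matching the posterior mean reported earlier.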

Different Data, Different Posteriors

Morteza, Nadide, and Ursula all share the optimistic Beta(14,1) prior for \(\pi\), but each has access to different data: Morteza reviews movies from 1991, Nadide reviews movies from 2000, and Ursula reviews movies from 2013. How will their posterior models differ?

Morteza’s analysis

bechdel_1991 <- filter(bechdel, year == 1991)
count(bechdel_1991, binary)
# A tibble: 2 × 2
  binary     n
  <chr>  <int>
1 FAIL       7
2 PASS       6
6/13
[1] 0.4615385
summarize_beta_binomial(14, 1, y = 6, n = 13)
      model alpha beta      mean      mode         var         sd
1     prior    14    1 0.9333333 1.0000000 0.003888889 0.06236096
2 posterior    20    8 0.7142857 0.7307692 0.007037298 0.08388860

Morteza’s analysis

plot_beta_binomial(14, 1, y = 6, n = 13)

Nadide’s analysis

bechdel_2000 <- filter(bechdel, year == 2000)
count(bechdel_2000, binary)
# A tibble: 2 × 2
  binary     n
  <chr>  <int>
1 FAIL      34
2 PASS      29
29/(34+29)
[1] 0.4603175
summarize_beta_binomial(14, 1, y = 29, n = 63)
      model alpha beta      mean      mode         var         sd
1     prior    14    1 0.9333333 1.0000000 0.003888889 0.06236096
2 posterior    43   35 0.5512821 0.5526316 0.003131268 0.05595773

Nadide’s analysis

plot_beta_binomial(14, 1, y = 29, n = 63)

Ursula’s analysis

bechdel_2013 <- filter(bechdel, year == 2013)
count(bechdel_2013, binary)
# A tibble: 2 × 2
  binary     n
  <chr>  <int>
1 FAIL      53
2 PASS      46
46/(53+46)
[1] 0.4646465
summarize_beta_binomial(14, 1, y = 46, n = 99)
      model alpha beta      mean      mode         var         sd
1     prior    14    1 0.9333333 1.0000000 0.003888889 0.06236096
2 posterior    60   54 0.5263158 0.5267857 0.002167891 0.04656062

Ursula’s analysis

plot_beta_binomial(14, 1, y = 46, n = 99)
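A short sketch (not from the original slides) that reproduces the three posterior summaries from the conjugacy formulas; post_summary() is a hypothetical helper, and the output makes it easier to see the posterior mean moving toward the observed pass rate and the posterior sd shrinking as the sample size grows:

post_summary <- function(a, b, y, n) {
  a2 <- a + y; b2 <- b + n - y
  c(mean = a2 / (a2 + b2),
    sd   = sqrt(a2 * b2 / ((a2 + b2)^2 * (a2 + b2 + 1))))
}
rbind(Morteza = post_summary(14, 1, 6, 13),
      Nadide  = post_summary(14, 1, 29, 63),
      Ursula  = post_summary(14, 1, 46, 99))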

Summary

Priors: Beta(14,1) (the optimist), Beta(5,11) (the feminist), Beta(1,1) (the clueless)