

I am trying to better understand the results of logistic regression models, and I wanted to apply a logistic regression model to a trivial "fair" coin flip simulation example. I want to simulate a number of coin tosses and fit a logistic regression model with whether we had at least one heads as the outcome and the number of coin tosses as the only predictor. In this scenario, I know that a logistic regression model is not very useful, but I am just trying to understand how to interpret things and apply the model to real-life examples better.

My simulation uses the following code:

    import numpy as np
    import pandas as pd

    # Set random seed to always get the same results
    np.random.seed(42)  # any fixed value works

    examples = 10000

    # Assign a random (uniform distribution) number of tosses to each example, from 0 to 30
    nb_toss = np.random.randint(0, 31, size=examples)

    # Each coin toss has a 0.5 chance of success (binomial random variable)
    successes = np.random.binomial(nb_toss, 0.5)

    # If at least one coin results in a success, assign 1 to result, otherwise assign 0
    result = (successes > 0).astype(int)

    # Build a dataframe with the number of tosses and the result (1 or 0) for each row
    toss_df = pd.DataFrame({"nb_toss": nb_toss, "result": result})

We end up with a 10000 by 2 dataframe with two columns: nb_toss and result. For each example, result = 1 if at least one toss resulted in a heads, result = 0 otherwise. So as you can see, the coin is completely fair (50% probability of heads).
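For reference, with a fair coin the true probability of getting at least one heads in n tosses is 1 - 0.5**n, which climbs toward 1 very quickly, so a logistic curve can only approximate this relationship. A quick sanity check of the simulated data against that formula might look something like the sketch below (just an illustrative comparison, not part of the simulation itself):

    # Compare the empirical success rate for each number of tosses
    # with the theoretical probability 1 - 0.5**n
    empirical = toss_df.groupby("nb_toss")["result"].mean()
    theoretical = 1 - 0.5 ** empirical.index.to_numpy()
    comparison = pd.DataFrame(
        {"empirical": empirical.to_numpy(), "theoretical": theoretical},
        index=empirical.index,
    )
    print(comparison)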

Now I fit my logistic regression model:

    from patsy import dmatrices

    # Build my X (10000 rows, cols: intercept = 1, nb_toss)
    Y, X = dmatrices("result ~ nb_toss", data=toss_df)
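One way to fit the model from these design matrices, assuming statsmodels is used for the logistic regression (any logit implementation would work the same way), is sketched below:

    import statsmodels.api as sm

    # Fit a logistic regression of result on nb_toss using the patsy design matrices
    logit_results = sm.Logit(Y, X).fit()

    # The summary reports the intercept and the nb_toss coefficient on the log-odds scale
    print(logit_results.summary())

The nb_toss coefficient is then read as the change in the log-odds of seeing at least one heads for each additional toss.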
