In the Atkinson quote there's a space missing in "fromyear to year"
The RHS of the Claims for full credibility formula would look neater if the brackets were \left( and \right)
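For reference, with \left( and \right) (and assuming the post uses the standard p = 0.90 / r = 0.05 version of the full-credibility standard, which is what the 1,082 below suggests - my guess at the notation) the LaTeX source would be something like:

```latex
n_{\text{full}} = \left( \frac{z_{(1+p)/2}}{r} \right)^2
```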
The derivation of the 1,082 sample size could be clearer if LC_p and LC_r were defined in the cell (I assume they're defined in a hidden cell as they're not referenced elsewhere?)
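Something like the following would make the derivation explicit in the cell - just a sketch; LC_p and LC_r are the post's names, the 0.90 / 0.05 values and the rest are my guess at what the hidden cell does:

```julia
using Distributions

LC_p = 0.90                               # probability of being within the tolerance
LC_r = 0.05                               # tolerance (ratio) around the mean
z = quantile(Normal(), (1 + LC_p) / 2)    # ≈ 1.645
n_full = round(Int, (z / LC_r)^2)         # ≈ 1_082, the full credibility standard
```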
The formula for Z looks odd as the square root isn't rendered properly
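Assuming it's the standard square-root rule for partial credibility, the intended rendering is presumably:

```latex
Z = \min\left(1, \sqrt{\frac{n}{n_{\text{full}}}}\right)
```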
Last sentence in the "Bayesian Approach" intro para: "Note that this is describing a different from the Buhlman Bayesian approach." seems to be missing a word?
I think the introduction of the Bayesian model could do with a bit more explanation - that's a sort of overarching comment I guess, and it might be addressed by the caveat in the previous section (where you introduce Bayesian stats) that this isn't a full overview. Still, I think the unsuspecting LFC user might benefit from a bit more exposition of why the Bayesian model is specified in the way it is, how the Turing @model definition relates to what is happening theoretically, and maybe what's happening intuitively compared to LFC. This isn't really a criticism of your approach, as I recognise this is hard. It's also something I've never really seen done well in any of these Bayesian modelling intros, which always seem to go: (1) here's Bayes' formula; (2) here's a simple coin flip example, look how the Beta distribution is updated with every new data point!; and then (3) here's a more complicated model which doesn't really follow from (1) and (2) in any obvious way, plus some discussion about why for this model you should really be using a spike-and-slab prior to ensure that NUTS explores the full parameter space.
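To illustrate what I mean, something along these lines (a deliberately simplified, hypothetical claim/no-claim model with made-up numbers, not the post's actual specification) could bridge the gap between Bayes' formula and the @model block:

```julia
using Turing, Distributions

# Hypothetical sketch: how a Turing @model maps onto Bayes' theorem.
#   * `~` on an unobserved variable declares the prior p(θ)
#   * `~` on a variable passed in as data declares the likelihood p(data | θ)
#   * `sample` then draws from the posterior p(θ | data) ∝ p(data | θ) p(θ)
@model function claims_model(claims, exposures)
    q ~ Beta(1, 1)                    # prior on the group's claim probability
    claims ~ Binomial(exposures, q)   # likelihood for the observed claim count
end

# Illustrative numbers only: 12 claims out of 250 exposures.
chain = sample(claims_model(12, 250), NUTS(), 1_000)
```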
After the model is defined, I'm not sure I understand "The result of the sample function is a set of data containing individual outcomes for the parameters that appear in proportion to the posterior probability for those parameters." Imho the use of "data" here is a bit confusing (I would always think of data as the observations, not something that comes out of the model), and I'm not sure what is meant by "individual outcomes for the parameters" (I would think of an outcome here as 0/1, claim or no claim)
In the following sentence "In a sense, we can empirically derive the posterior distribution for each parameters." I think the final word should be singular
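On the "individual outcomes for the parameters" point, maybe it would help to say explicitly that sample returns a Chains object of posterior draws, so the empirical distribution of those draws approximates the posterior - e.g., continuing the hypothetical sketch above:

```julia
using Statistics

post_q = vec(chain[:q])     # one posterior draw of q per MCMC iteration
mean(post_q), std(post_q)   # empirical posterior mean and standard deviation
```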
In the visualizing posterior density section, in "In the even of facing", "event" is missing a t.
Predictions and results section, numbered list item 2: typo in "Limitied".
I was surprised not to find anything about confidence intervals - is this because of the "only approximate measure of uncertainty" for LFC? It would have been interesting to see how increasing n in some subgroups changes the posterior variance (maybe taking a group with a lot fewer than 1,082 observations, one with around 1,082, and one with much more).
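For example (continuing the hypothetical model above, with made-up claim/exposure counts chosen to sit well below, near, and well above the 1,082 threshold), something like this would show the credible interval tightening as n grows:

```julia
using Statistics

for (claims, exposures) in [(4, 100), (45, 1_082), (400, 10_000)]
    ch = sample(claims_model(claims, exposures), NUTS(), 1_000)
    lo, hi = quantile(vec(ch[:q]), [0.05, 0.95])
    println("n = $exposures: 90% credible interval ≈ ($(round(lo, digits=3)), $(round(hi, digits=3)))")
end
```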
It would also be nice to have a discussion of the predictive accuracy - all the models are super simple and basically combine group-level averages from 2 years to predict year 3 in some form, if I'm not mistaken, so why is Bayes actually better? Normally I see this motivated by some sort of overfitting/bias-variance tradeoff argument, where it's said that the prior prevents the model from overfitting where there is little information; is that what's going on? If so, how sensitive is this to the choice of prior (which isn't really discussed when the model is set up)?
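A quick prior-sensitivity check could be as simple as refitting under a few different priors and comparing the posteriors - again only a sketch built on the hypothetical model above, not the post's actual setup:

```julia
using Turing, Distributions, Statistics

# Same hypothetical claim/no-claim model, but with the Beta prior's parameters exposed.
@model function claims_model_prior(claims, exposures, a, b)
    q ~ Beta(a, b)
    claims ~ Binomial(exposures, q)
end

for (a, b) in [(1, 1), (2, 2), (0.5, 0.5)]
    ch = sample(claims_model_prior(12, 250, a, b), NUTS(), 1_000)
    println("Beta($a, $b) prior: posterior mean of q ≈ ", round(mean(vec(ch[:q])), digits=4))
end
```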
The last table stretches beyond the page width - not sure if that's a Franklin issue?
Per feedback on Slack: