
Manuscript Error: Chapter 4 #131

@JeanLuc001

| Location | Original | Possible correction / Problem |
| --- | --- | --- |
| Section 4.1 | We can note express mathematically by extending Equation […] | We can express this mathematically by extending Equation […] |
| Section 4.1 | model_linear faithfully gives us a linear growth rate as shown Fig. 4.2 […] | `model_linear` is named `model_baby_linear` in Listing 4.2 |
| 4E7 | […] baby_model_linear and baby_model_sqrt […] | Both models are named differently in Listings 4.2 and 4.3: `model_baby_linear` and `model_baby_sqrt` |
| Section 4.1 | The model tends to overestimate the length of babies close to 0 months of age, and over estimate length at 10 months of age, and then once again underestimate at 25 months of age. | Figure 4.2 does not match this sentence: it shows overestimation at 0 months, underestimation at 10 months, and overestimation at 25 months. |
| Section 4.3 | But if this town in a cold climate with an average daily temperature of -5 degrees Celsius […] | But if this is a town in a cold climate with an average daily temperature of -5 degrees Celsius […] |
| Section 4.3, description of Figure 4.5 | On the right we show the non-interaction estimate […] On the left we show our model from Code Block `tips_no_interaction` […] | On the left we show the non-interaction estimate […] On the right we show our model from Code Block `tips_interaction` […] |
| Section 4.4 | Outliers, as the name suggests, are observations that lie outside of the range “reasonable expectation”. | Outliers, as the name suggests, are observations that lie outside the range of “reasonable expectation”. |
| Section 4.4, Table 4.1 | […] noting in particular σ which at a mean value of 574 seems high […] | In Table 4.1 the mean is 2951.1, not 574. |
| Section 4.5 | Often we have dataset that […] | Often we have datasets that […] |
| Section 4.5 | The GraphViz representation is also shown in Fig. 4.12. | The GraphViz representation is also shown in Fig. 4.16. |
| Section 4.5, description of Figures 4.20/4.21 | | In the online version there is leftover "code" text below the figures, e.g. `[fig:Salad_Sales_Basic_Regression_Scatter_Sigma_Pooled_Slope_Unpooled]{#fig:Salad_Sales_Basic_Regression_Scatter_Sigma_Pooled_Slope_Unpooled label="fig:Salad_Sales_Basic_Regression_Scatter_Sigma_Pooled_Slope_Unpooled"}` |
| Section 4.5 | Note how the estimated of σ in the multilevel model is within the bounds of the estimates from the pooled model. | Note how the estimate of σ in the multilevel model is within the bounds of the estimates from the pooled model. |
| Section 4.6 | In our data treatment thus far we have had two options for groups, pooled where there is no distinction between groups, and unpooled where there a complete distinction between groups. | In our data treatment thus far we have had two options for groups, pooled where there is no distinction between groups, and unpooled where there is a complete distinction between groups. |
| Section 4.6 | The partial refers to the idea that groups that do not share one fixed parameter, but share a which describes the distribution of for the parameters of the prior itself. | A word seems to be missing after "share a" (hyperprior? hyperparameter?), and the sentence has leftover words. Perhaps: "The partial refers to the idea that groups do not share one fixed parameter, but share a hyperprior which describes the distribution for the parameters of the prior itself." |
| Section 4.6 | But in this case we assume that only the variance is related, which justifying the use of partial pooling and that the slopes are completely independent. | But in this case we assume that only the variance is related, which justifies the use of partial pooling and that the slopes are completely independent. |
| Section 4.6, description of Figure 4.24 | Note how the hyperprior tends to represent fall within the range of the three group priors. | "tends to represent fall within" is garbled; probably should read "Note how the hyperprior tends to fall within the range of the three group priors." |
| Section 4.6 | We can also see the effect of a hierarchical model if we compare the summary tables of the unpooled model and hierarchical models in Table 4.3. | We can also see the effect of a hierarchical model if we compare the summaries of the unpooled model and hierarchical model in Tables 4.3 and 4.4. |
| Section 4.6 | Moreover, the estimates of the pizza and salad categories in the hierarchical category, while regressed towards the mean slightly, remain largely the same as the unpooled estimates. | Moreover, the estimates of the pizza and sandwich categories in the hierarchical model, while regressed towards the mean slightly, remain largely the same as the unpooled estimates. |
| Section 4.6 | | It would be helpful to explain what "regressed towards the mean" means (a toy sketch illustrating this shrinkage is included below this table). |
| Section 4.6 | Given that our observed data and the model which does not share information between groups this consistent with our expectations. | Given that our observed data and the model which does not share information between groups**,** this **is** consistent with our expectations. |
| Section 4.6, info box | Note that since beta_mj has a Gaussian distributed prior, we can actually choose two hyperprior […] | Note that since beta_mj has a Gaussian distributed prior, we can actually choose two hyperpriors […] |
| Section 4.6, info box | A natural question you might ask is can we go even further and adding hyperhyperprior to the parameters that are parameterized the hyperprior? | A natural question you might ask is can we go even further and add a hyperhyperprior to the parameters that parameterize the hyperprior? |
| Section 4.6, info box | Intuitively, they are a way for the model to “borrow” information from sub-group or sub-cluster of data to inform the estimation of other sub-group/cluster with less observation. The group with more observations will inform the posterior of the hyperparameter, which then in turn regulates the parameters for the group with less observations. | Intuitively, they are a way for the model to “borrow” information from sub-groups or sub-clusters of data to inform the estimation of other sub-groups/clusters with fewer observations. The group with more observations will inform the posterior of the hyperparameters, which then in turn regulate [or "regulates", if the information sharing itself is meant] the parameters for the group with fewer observations. |
| Section 4.6, info box | In this lens, putting hyperprior on parameters that are not group specific is quite meaningless. | In this sense, putting a hyperprior on parameters that are not group specific is quite meaningless. |
| Section 4.6.1 | At sampling at the top of the funnel where Y is around a value 6 to 8, a sampler can take wide steps of lets say 1 unit, and likely remain within a dense r.egion of the posterior | When sampling at the top of the funnel where Y is around a value of 6 to 8, a sampler can take wide steps of, let's say, 1 unit, and likely remain within a dense region of the posterior (the stray period in "r.egion" also needs fixing). |
| Section 4.6.1 | This drastic difference in the posterior geometry shape is one reason poor posterior estimation, can occur for sampling based estimates. | This drastic difference in the posterior geometry shape is one reason poor posterior estimation can occur for sampling based estimates. [no comma after "estimation"] |
| Section 4.6.1 | In hierarchical models the geometry is largely defined by the correlation of hyperpriors to other parameters, which can result in funnel geometry that are difficult to sample. | In hierarchical models the geometry is largely defined by the correlation of hyperpriors to other parameters, which can result in funnel geometries that are difficult to sample. |
| Section 4.6.1 | In other words as the value beta_sh approaches zero, there the region in which to sample parameter collapses and the sampler is not able to effectively characterize this space of the posterior. | In other words, as the value of beta_sh approaches zero, the region in which to sample the parameter collapses and the sampler is not able to effectively characterize this space of the posterior. |
| Section 4.6.1, description of Figure 4.27 | As the hyperprior approaches zero the posterior space for slope collapses results in the divergences seen in blue. | As the hyperprior approaches zero the posterior space for the slope collapses and results in the divergences seen in blue. |
| Section 4.6.1 | | Is there something wrong with the subscript of beta_m? |
| Section 4.6.2 | This using the fitted parameter estimates to make an out of sample prediction for the distribution of customers for 50 customers, at two locations and for the company as a whole simultaneously. | This uses the fitted parameter estimates to make an out of sample prediction for the distribution of customers for 50 customers, at two locations and for the company as a whole simultaneously. |
| Section 4.6.2 | In this case, imagine we are opening another salad restaurant in a new location we can already make some predictions of how the salad sales might looks like […] | In this case, imagine we are opening another salad restaurant in a new location; we can already make some predictions of what the salad sales might look like […] (a toy sketch of this kind of new-location prediction is included below this table) |
| Section 4.6.2 | […] the distributions with and without the point/group/etc being close to each other. | […] the distributions with and without the point/group/etc. being close to each other. |
| Section 4.6.3 | Moreover, since the effect of partial pooling is the combination of how informative the hyperprior is, the number of groups you have, and the number of observations in each group. | This is a sentence fragment: the "since" clause never gets a main clause (either drop "since" or complete the sentence). |
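
Not part of the errata themselves, but to make the Section 4.6 and 4.6.1 wording above concrete: below is a minimal sketch of the pattern those sentences describe. Everything in it is invented for illustration (the toy data, the names `mu_beta`, `sigma_beta`, `beta_offset`, and the use of PyMC); it is not the book's salad-sales code. It shows group slopes partially pooled under a hyperprior, the funnel that appears as the hyperprior scale collapses, and the usual non-centered reparameterization.

```python
import numpy as np
import pymc as pm
import arviz as az

# Toy data invented for this sketch: three groups with unequal sample sizes.
y = np.array([2.1, 1.9, 2.4, 0.3, 0.1, 5.0, 4.2, 4.8, 5.1])
group = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])

with pm.Model() as centered:
    # Hyperpriors shared by all groups: this is how information gets
    # "borrowed" across groups under partial pooling.
    mu_beta = pm.Normal("mu_beta", 0.0, 5.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 5.0)
    # Centered parameterization: group effects drawn directly from the
    # hyperprior. As sigma_beta approaches zero the space available to beta
    # collapses, giving the funnel geometry where divergences tend to appear.
    beta = pm.Normal("beta", mu_beta, sigma_beta, shape=3)
    sigma = pm.HalfNormal("sigma", 2.0)
    pm.Normal("y_obs", beta[group], sigma, observed=y)
    idata_centered = pm.sample(target_accept=0.9, random_seed=0)

with pm.Model() as non_centered:
    mu_beta = pm.Normal("mu_beta", 0.0, 5.0)
    sigma_beta = pm.HalfNormal("sigma_beta", 5.0)
    # Non-centered parameterization: sample standardized offsets and rescale.
    # This removes the direct dependence between sigma_beta and the group
    # effects and usually eliminates the divergences.
    beta_offset = pm.Normal("beta_offset", 0.0, 1.0, shape=3)
    beta = pm.Deterministic("beta", mu_beta + beta_offset * sigma_beta)
    sigma = pm.HalfNormal("sigma", 2.0)
    pm.Normal("y_obs", beta[group], sigma, observed=y)
    idata_non_centered = pm.sample(target_accept=0.9, random_seed=0)

# Partial pooling pulls each group estimate toward mu_beta ("regressed towards
# the mean"); the pull is usually strongest for the group with the fewest
# observations (group 1 in this toy data).
print(az.summary(idata_non_centered, var_names=["mu_beta", "sigma_beta", "beta"]))
```

If the centered run reports divergences, they will concentrate where `sigma_beta` is small, which is the pattern the Figure 4.27 row above describes. For the Section 4.6.2 rows about predicting sales at a new location, the same toy trace can be reused: a hierarchical model can predict for a group it has never seen by drawing that group's parameter from the posterior of the hyperpriors (again a hedged sketch, not the book's code).

```python
import numpy as np

# Continuation of the sketch above (assumes idata_non_centered exists).
post = idata_non_centered.posterior
rng = np.random.default_rng(0)

# A brand-new group has no group-specific slope yet, so draw one per posterior
# sample from the hyperpriors, then draw predictive observations around it.
new_group_beta = rng.normal(post["mu_beta"].values, post["sigma_beta"].values)
new_group_obs = rng.normal(new_group_beta, post["sigma"].values)

print(new_group_obs.mean(), new_group_obs.std())
```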
