Comparing the Fit of Distributions to Experimental Flake Size Data I

To determine whether the power law distribution provides an appropriate model for flake-size distributions, I obtained experimentally generated lithic assemblages and fit a number of common distributions to the data. The initial experiments comprised the reduction of cores to produce flakes that could be further shaped into a tool. I used the maximum likelihood method to find the optimal parameter values for each distribution. The following table shows some initial results of this work, focusing on the flake-size data generated during a single core-reduction episode.

Results of Fitting Various Distributions to a Single Experimentally Generated Flake Assemblage
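As a minimal sketch of the fitting step (the candidate set and all names here are illustrative; `flake_sizes` stands in for the measured flake-size data, which is not reproduced in this post, and scipy's pareto stands in for the power law), the maximum likelihood fits can be obtained like this:

```python
import numpy as np
from scipy import stats

# Placeholder for the measured flake-size data (e.g., flake mass), one value per line
flake_sizes = np.loadtxt("flake_sizes.txt")  # hypothetical file name

# Candidate heavy-tailed distributions; scipy's pareto plays the role of the power law
candidates = {
    "power law (Pareto)": stats.pareto,
    "lognormal": stats.lognorm,
    "exponential": stats.expon,
    "Weibull": stats.weibull_min,
}

fits = {}
for name, dist in candidates.items():
    params = dist.fit(flake_sizes)                      # maximum likelihood estimates
    loglik = np.sum(dist.logpdf(flake_sizes, *params))  # log-likelihood at the MLE
    fits[name] = {"params": params, "loglik": loglik, "k": len(params)}
    print(f"{name}: log-likelihood = {loglik:.2f}")
```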

The models selected for comparison comprise a list of commonly used distributions for modeling heavy-tailed data. Other model-fitting efforts, for example, have found that the Weibull distribution fits flake-size distributions. While the maximum likelihood method provides a means to fit each of these models to the same data, this approach must be supplemented with some method for comparing the results.

One approach to model comparison, the Akaike Information Criterion (AIC), comes from information theory. The information-theoretic approach quantifies the expected distance between a particular model and the “true” model. The distance reflects the loss of information resulting from use of the focal model, which naturally fails to capture all of the variability in the data. The best model among a series of candidate models is the model that minimizes the distance or information loss. Obviously, the true model is unknown, but the distance can be estimated using some clever math. The derivation of AIC is beyond my powers to explain in any additional detail. After much hairy math, involving various approximations and simplifications and matrix algebra, a very simple formula emerges. AIC quantifies the distance using each model’s likelihood and a penalty term. AIC for a particular model can be defined as follows:

AIC = -2L + 2k, where L is the log-likelihood and k refers to the number of parameters in the model.

For small samples, a corrected version of AIC – termed AICc – is sometimes used:

AICc = AIC + 2k(k+1)/(n - k - 1), where n is the sample size.
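Both criteria follow directly from the fitted log-likelihoods. A sketch, continuing the hypothetical `fits` dictionary from above:

```python
n = len(flake_sizes)  # sample size

for name, fit in fits.items():
    k = fit["k"]                                  # number of estimated parameters
    aic = -2.0 * fit["loglik"] + 2.0 * k          # AIC = -2L + 2k
    aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)  # small-sample correction
    fit["AIC"], fit["AICc"] = aic, aicc

# The best-supported candidate is the one that minimizes AIC (or AICc)
best = min(fits, key=lambda name: fits[name]["AICc"])
print(f"Lowest AICc: {best}")
```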

The best-fitting model among a series of candidates will have the lowest value of AIC or AICc. Unlike other approaches, the information-theoretic approach can simultaneously compare any set of models. Likelihood ratio tests, another common model-comparison technique, are limited to pairwise comparisons of nested models; models are nested when the more complex model can be reduced to the simpler model by setting some parameters to a particular value, such as zero. The models compared in this example are not all nested, since the lognormal, for example, cannot be reduced to any of the other models by setting a parameter to a particular value.
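For the pairs in the candidate set that are nested, a likelihood ratio test remains possible. A minimal sketch, reusing the hypothetical `fits` dictionary and noting that the exponential is a Weibull with its shape parameter fixed at one:

```python
from scipy import stats

# The exponential is nested within the Weibull: fixing the shape at 1 recovers it
ll_full = fits["Weibull"]["loglik"]         # log-likelihood of the complex model
ll_reduced = fits["exponential"]["loglik"]  # log-likelihood of the nested model

lr_stat = 2.0 * (ll_full - ll_reduced)                # likelihood ratio statistic
df = fits["Weibull"]["k"] - fits["exponential"]["k"]  # extra free parameters
p_value = stats.chi2.sf(lr_stat, df)                  # asymptotic chi-squared test
print(f"LR statistic = {lr_stat:.2f}, p = {p_value:.3f}")
```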

The AIC and AICc for the power law distribution are both lower than the values for any of the other modeled distributions. The power law distribution thus fits these experimentally produced flake-size data better than the other common distributions. These preliminary results support the work of Brown (2001) and others.

Note that the best-fitting model among all candidates may still provide a poor fit to the data; the power law distribution, in other words, could simply be the best of a bad lot. A couple of options exist for evaluating model fit. A simple approach is to plot the data against the theoretical distribution. There is also a way to measure the fit of the model to the data quantitatively, which I will detail in a subsequent post.
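As a rough illustration of the plotting option (again assuming the earlier hypothetical fits), comparing the empirical and fitted survival functions on log-log axes makes departures from power-law behavior easy to spot:

```python
import matplotlib.pyplot as plt

# Empirical complementary CDF (survival function)
x = np.sort(flake_sizes)
ccdf = 1.0 - np.arange(len(x)) / len(x)

# Survival function of the fitted power law over the same range
params = fits["power law (Pareto)"]["params"]
fitted_ccdf = stats.pareto.sf(x, *params)

plt.loglog(x, ccdf, "o", label="flake-size data")
plt.loglog(x, fitted_ccdf, "-", label="fitted power law")
plt.xlabel("flake size")
plt.ylabel("P(X > x)")
plt.legend()
plt.show()
```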

© Scott Pletka and Mathematical Tools, Archaeological Problems, 2014


5 Responses to “Comparing the Fit of Distributions to Experimental Flake Size Data I”

  1. mstundrawalker Says:

    Have you considered the Levy distribution and Levy stable distributions? These play an important role in nonlinear dynamics and chaos theory.

  2. Dwight Read Says:

    Scott, a nice article. I’m glad to see that you included the comment “Note that the best-fitting model among all candidates may still provide a poor fit to the data,” since the AIC (or AICc) does not test for goodness-of-fit. Some (whom I will not name) have incorrectly tried to use it as a test for goodness-of-fit.

  3. uncertainarchaeologist Says:

    I like your discussion of AIC (and AICc), and I’m glad you’re making it widely accessible.

    However, I’m worried that you don’t go into enough discussion of how to present information-theoretic support. You show all the models you tested and their AIC values, which is ideal; still, by highlighting that only one of them received the most support, I could easily see researchers, when pressed for space in journals, declaring a model the best because it has the lowest value without giving all the information.

    A demonstration of AIC weights would also be ideal. While it doesn’t look like it’d be relevant for your dataset, there are many times when different models may have relatively similar support from the data, or a few models stand out as much more strongly supported than others. AIC weights also make it clearer that an information-theoretic analysis is based only on one’s candidate set of models.
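    For what it’s worth, the weights are simple to compute. A minimal sketch, using made-up AIC values for a hypothetical candidate set:

    ```python
    import numpy as np

    # Hypothetical AIC values for a candidate set; real values come from the fits
    aic_values = np.array([410.2, 415.7, 432.1, 451.9])

    delta = aic_values - aic_values.min()  # differences from the best model
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()               # Akaike weights sum to one
    print(weights)                         # relative support for each candidate
    ```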

  4. Dwight Read Says:

    Let me add to Uncertainarchaeologist’s comment and elaborate on my earlier comment. Suppose we have several models, none of which provides a good fit to the data. If we used the AIC procedure blindly and then said that the model with the lowest AIC value is the model we should use, we would mistakenly be accepting a model that does not fit the data as if it were a good model for the data at hand. The AIC values simply inform us as to which, among the candidate models, is the best choice from an information-theoretic viewpoint, not whether any of the models actually fits the data. Thus the AIC (or AICc) procedure is properly used only for choosing between models that have already satisfied goodness-of-fit criteria and/or have substantial theoretical support as plausible models. Scott is well aware of this restriction on the use of the AIC procedure, as I indicate in my quote from his blog, but there are articles in high-impact journals that make the mistake of assuming the AIC values provide the equivalent of a goodness-of-fit test. Without the latter, one does not know whether any of the models actually fits the data.

  5. archaeomath Says:

    Thanks to both Uncertainarchaeologist and Dr. Read for their thoughts on model selection. This topic deserves a post of its own. In a future post, I’ll provide some of the additional details suggested by Uncertainarchaeologist. To evaluate the goodness of fit of the power law distribution, I will be following the suggestion of Clauset et al. (2009). They recommend calculating the Kolmogorov-Smirnov (KS) statistic for the data, and then comparing that statistic to the distribution of the KS statistic from simulated data sets. The KS statistic for the simulations is calculated using the simulated data and the power law fit to that simulated data (not the “true” distribution used to create the simulated data).
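    A rough sketch of that procedure (with scipy’s pareto standing in for the power law, and all names illustrative) might look like this:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Fit the power law to the observed data and record its KS statistic
    params = stats.pareto.fit(flake_sizes)
    ks_obs = stats.kstest(flake_sizes, "pareto", args=params).statistic

    # Simulate from the fitted power law, refit each simulated data set,
    # and compare the resulting KS statistics to the observed one
    n_sims = 1000
    exceed = 0
    for _ in range(n_sims):
        sim = stats.pareto.rvs(*params, size=len(flake_sizes), random_state=rng)
        sim_params = stats.pareto.fit(sim)  # fit to the simulated data, not the "truth"
        exceed += stats.kstest(sim, "pareto", args=sim_params).statistic >= ks_obs

    p_value = exceed / n_sims  # a large p-value means the power law remains plausible
    print(f"bootstrap p-value = {p_value:.3f}")
    ```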

    I am also working on fitting the Levy stable distribution to my data per the helpful suggestion of mstundrawalker.
