I am also working on fitting the Levy stable distribution to my data per the helpful suggestion of mstundrawalker.
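For what it’s worth, a quick first pass at that fit can start from the stable law’s tail behavior: for α < 2, P(|X| > x) decays like x^(−α), so a Hill estimator on the upper order statistics gives a rough check on the stability index before committing to a full (and in scipy, quite slow) `levy_stable.fit` MLE. The data here are simulated; `flake_masses` is a hypothetical stand-in for real measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated stand-in for an empirical size variable: |stable(alpha=1.5)|.
flake_masses = np.abs(stats.levy_stable.rvs(alpha=1.5, beta=0.0,
                                            size=20000, random_state=rng))

def hill_estimator(x, k=200):
    """Hill estimate of the tail index from the k largest observations."""
    xs = np.sort(x)
    # Mean log-excess of the top k order statistics over the threshold.
    return k / np.sum(np.log(xs[-k:] / xs[-(k + 1)]))

alpha_hat = hill_estimator(flake_masses)  # should be in the vicinity of 1.5
```

The Hill estimator is known to be biased for stable laws away from the extreme tail, so this is a sanity check on α, not a substitute for fitting all four stable parameters.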

However, I’m worried that you don’t discuss in enough depth how to present information-theoretic support. You show all the models you tested and their AIC values, which is ideal. By highlighting that only one of them received the most support, though, I could easily see researchers, when pressed for space in journals, declaring that model the best simply because it has the lowest value, without presenting all the information.

A demonstration of AIC weights would also be ideal. While it doesn’t look like it would be relevant for your dataset, there are many cases in which several models receive relatively similar support from the data, or a few models stand out as much more strongly supported than the others. AIC weights also make it clearer that an information-theoretic analysis is conditional on one’s candidate set of models.
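As a minimal sketch of that demonstration: Akaike weights are computed from the AIC differences Δ_i = AIC_i − AIC_min, with w_i = exp(−Δ_i/2) / Σ exp(−Δ_j/2). The three AIC values below are invented purely for illustration, not taken from any actual model set.

```python
import numpy as np

def akaike_weights(aics):
    """Akaike weights for a candidate set, from their AIC values."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()      # Delta_i = AIC_i - AIC_min
    rel = np.exp(-0.5 * delta)     # relative likelihood of each model
    return rel / rel.sum()         # normalized so the weights sum to 1

# Hypothetical candidate set: three models with invented AIC values.
w = akaike_weights([100.0, 101.2, 110.0])
```

Because the weights are normalized over the candidate set, they make explicit that the "best" model is only best relative to the models actually considered; a Δ of 1.2 still leaves the runner-up with substantial weight, while a Δ of 10 effectively rules a model out.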

Lots to chew on here. I agree that trying to model individual knapping episodes would be difficult. My goal is more modest. From Brown’s work and my initial attempts, I think core reduction and bifacial reduction, for example, produce distinct distributions. The goal is thus to estimate the contribution of each such strategy to an assemblage. Minimum analytical nodule analysis and individual flake analyses could also be used to create more homogeneous subassemblages and to generate more specific expectations to evaluate through the modeling. Whether this approach will be successful remains to be seen.

I am practically overjoyed that someone understands what you propose here. I have addressed these issues in studying the diffusion of innovations, where too few data points on cumulative adoption make it impossible to determine which of several mechanisms is responsible for the process in any given context. I used maximum likelihood estimation in a nonlinear regression context. While studying 162 cases of diffusion of innovations, I compared the fits of six mathematical models of diffusion, using maximum likelihood to assess the power of each model relative to each of the other five. The model I proposed provided significantly better fits in 160 of the 162 cases, with no difference in the remaining two. This is important: the idea of the null hypothesis needs to be discarded and multiple models compared instead.
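A toy version of that workflow — maximum likelihood fits of competing growth curves in a nonlinear regression setting, compared by AIC — can be sketched as follows. The logistic and Gompertz forms here are my own illustrative stand-ins (the comment does not name the six models actually tested), and the data are simulated from the logistic model with Gaussian noise.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Symmetric S-curve of cumulative adopters."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def gompertz(t, K, b, c):
    """Asymmetric S-curve (inflection below K/2)."""
    return K * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 25)
# Simulated adoption counts: logistic truth plus Gaussian noise.
y = logistic(t, 100.0, 0.8, 10.0) + rng.normal(0.0, 2.0, t.size)

def gaussian_aic(model, t, y, p0):
    """AIC under i.i.d. Gaussian errors for a nonlinear least-squares fit."""
    popt, _ = curve_fit(model, t, y, p0=p0, bounds=(0.0, np.inf),
                        maxfev=10000)
    resid = y - model(t, *popt)
    n, k = y.size, len(popt) + 1           # +1 parameter for noise variance
    sigma2 = np.mean(resid ** 2)           # MLE of the error variance
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return 2.0 * k - 2.0 * loglik

aic_logistic = gaussian_aic(logistic, t, y, p0=[90.0, 0.5, 9.0])
aic_gompertz = gaussian_aic(gompertz, t, y, p0=[90.0, 5.0, 0.3])
```

Since the data were generated from the logistic model, its AIC should come out lower; with real adoption series the comparison would be run over the full candidate set rather than a pair.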

With regard to lithic reduction: back in 1992, I did the same kind of research as Brown. At the time I was working at GAI Consultants in Monroeville, PA. I used many of the same data sets Brown used, plus a number of experimental data sets available to me at the time. I obtained similarly high power-law fits, and a phase-space model partitioned the reduction strategies into thinning (reduction for tools) and thickening (reduction of core material). Of course, a piece such as an arrow point may be thinned and then thickened again by resharpening, which is a problem. The thinning and thickening trajectories were deduced from the mathematics involved.

With regard to the Weibull distribution offered by Stahle and Dunn, Brown (p. 619) states that the Weibull distribution’s fundamental problem is its lack of a theoretical basis, because it is not informed by any physical theory. I beg to differ. See the following:

Brown, Wilbur K., and Kenneth H. Wohletz (1995). Derivation of the Weibull distribution based on physical principles and its connection to the Rosin-Rammler and lognormal distributions. Journal of Applied Physics 78(4): 2758–2763.

Now, consider the order statistics resulting from multiple episodes of lithic reduction. This is what one obtains when placing a batch of debris in sieves of various sizes and recording the frequency of pieces in each sieve. In a number of complex networks, the degree distribution is a combination of two distributions, one a power law and the other a Weibull. If one studies the Weibull distribution as an iterated function in two dimensions, through repeated iteration of the derivative, the Weibull is clearly chaotic. (Review nonlinear dynamics and chaos theory here.) This lack of predictability makes disentangling assemblage mixtures, and even mixtures of specific artifact types in a given cultural context, highly problematic, if not impossible at present. I have gone on to a deeper study of finite differences for estimating partial differential equations, for the purpose of studying and predicting complex systems. (I am just fortunate enough to have enough statistical and mathematical background to plow through this material.)

Mixed assemblages are a formidable problem along with the chaotic nature of the resulting distributions.

1. How does one assume stochastic processes to obtain mixture solutions when the data are chaotic?

2. When each reduction episode has its own power law (a unique exponent) and an assemblage mixes two or more episodes, how do we extract the separate episodes from the exponent of the mixture, which is min(a, b, …) over the individual exponents?

3. If the Weibull has a physical interpretation, as suggested in the paper cited above, and it fits the fractal nature of reduction processes, then how will stochastic methods work?
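Question 2 can be illustrated numerically: the tail of a mixture of power laws is governed by the smallest exponent, so the individual episodes are not recoverable from the mixture’s tail slope alone. Here two simulated Pareto "episodes" with exponents 1.5 and 3.0 are pooled, and a log-log regression on the upper tail of the empirical survival function recovers an exponent near min(1.5, 3.0); the exponents, sample sizes, and estimator are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.5, 3.0  # tail exponents of the two simulated reduction episodes

# Pareto samples via inverse-CDF: X = U^(-1/alpha), xmin = 1.
episode1 = rng.random(10000) ** (-1.0 / a)
episode2 = rng.random(10000) ** (-1.0 / b)
mixture = np.concatenate([episode1, episode2])

xs = np.sort(mixture)
n = xs.size
surv = 1.0 - np.arange(1, n + 1) / (n + 1.0)  # empirical survival function

# Regress log-survival on log-size over the far tail (top 1%).
upper = xs > np.quantile(xs, 0.99)
slope, _ = np.polyfit(np.log(xs[upper]), np.log(surv[upper]), 1)
tail_exponent = -slope
```

The estimate lands near 1.5, not 3.0: the shallower-tailed episode dominates the far tail, and the steeper episode’s contribution is effectively invisible there — which is exactly why unmixing episodes from a pooled assemblage exponent is so hard.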

I have done hundreds of simulations of diffusion processes, which are best described by the Weibull distribution, but sensitive dependence on initial conditions yields significantly different rate parameters, with no discernible patterning across the many variables related to degree distributions. Finding the key here is very complicated. I would suggest the same holds for lithic reduction.