Chaos Theory and Market Efficiency: Pricing Options with Realistic and Non-Random Assumptions


I'll keep this short and sweet. For the sake of concision and an easily digestible piece, I'll make the, perhaps unfair, assumption that every concept I touch on here is already familiar to you, the reader.

Economic expansions and contractions cause trends in financial markets; that much is clear. Does that mean the market's trend over a single day is predictable? Probably not. But as you increase the number of days, order starts to appear, or rather, chaos emerges.

Let's see if we can use it to predict something over long periods, maybe something like UVXY...

Long-Memory and the Hurst Exponent

Long-term memory in a time series can be tested for and indicated by the Hurst exponent. I have estimated the Hurst exponent on a rolling 1024-day window over the entirety of the data available for UVXY (3,367 windows of length 1,024, or 3,447,808 "data points" in our process), using some rather involved matrix operations and a VBA macro to loop through the entire period.

Hurst exponent H:

0 < H < 0.5 indicates a mean-reverting process,

H = 0.5 indicates randomness,

0.5 < H ≤ 1 indicates a long-memory trend.
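As a rough illustration of the estimation itself, the Hurst exponent can be approximated with a rescaled-range (R/S) regression. This is a generic sketch of that estimator in Python, not the VBA matrix implementation used for the chart; the window handling and chunk sizes here are my own choices.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a series via rescaled-range (R/S) analysis."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_chunk
    while size <= n // 2:
        rs_vals = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation from the chunk mean
            r = dev.max() - dev.min()               # range of the cumulative deviation
            s = chunk.std(ddof=1)                   # sample standard deviation
            if s > 0:
                rs_vals.append(r / s)
        sizes.append(size)
        rs_means.append(np.mean(rs_vals))
        size *= 2
    # E[R/S] grows like c * size^H, so the slope of the log-log fit estimates H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return float(slope)
```

Applied on a rolling 1,024-observation window of log returns, this produces a series of H estimates like the one charted below. Note that R/S estimates carry a known small-sample bias, so values near 0.5 should be tested statistically rather than read off directly.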

Note the reading on February 13th, 2020, which was statistically indistinguishable from a random process, jumping to an extremely significant non-random value of 0.6244 by March 18th, 2020. Chart shown below. I think you can guess this means we can utilize the Hurst exponent in other ways as well... (UVXY declined 97.67% between March 18th, 2020 and September 3rd, 2021.)

So, an H > 0.5 indicates a trend will continue. The natural next questions become: for how long will this trend continue, and how will it evolve?

We turn to spectral analysis and fractal mathematics.

I will leave the proofs for another day; the point is that for a Hurst exponent above 0.5, Geometric Brownian Motion is an unrealistic assumption.

We instead apply a modified Fractional Brownian Motion. We could develop this in a variety of ways: Fourier transformations, spectral analysis, or some flavor of Markov or Hidden Markov chain.

If there is an empirical data set available, I almost always prefer to rely on it over an infinite extension of the assumptions behind a mathematical process. That choice does drive our bias-variance trade-off for future pricing, however. Given this, I will include, in the model of our future time series, the Hidden Markov chain present in our empirical data, with the distribution modeled by brute-force optimization of a Johnson SU distribution. I opted not to over-optimize the probability distribution with Kernel Density Estimation here, again for the sake of the bias-variance trade-off when applying this to future periods whose probability distribution will be different, yet likely similar.

The Hidden Markov Chain

Let us define 2 states such that X is a positive return for the underlying at time n and Y is a negative return for the underlying at the same time n. Let us then set m quantiles of this return, bounded by {0, 1}, and relate them to the cumulative probability of the return of the underlying.

Arbitrarily, or through evolutionary/non-linear optimization, set m to any integer from 2 upward, understanding that the larger m is, the more biased our model will be, and that when m equals the sample size of our return distribution the quantiles become similar to a Kernel Density Estimate of occurrences.

I have arbitrarily set m = 10 to simplify this explanation and sidestep the complicated issue of properly applying differential optimization.

We now have 2 states, a positive return and a negative return; 10 quantiles of these returns; and implicitly a 10 x 10 matrix of transition probabilities. The matrix is symmetric about its diagonal (i.e. moving from w1 to w2 has the same probability as moving from w2 to w1), so of the 90 ordered off-diagonal transitions (10 permute 2), only 45 probabilities are distinct.

We then determine the Hidden Markov chain empirically.

The transition matrix is shown below:
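A minimal sketch of how such a transition matrix can be estimated empirically. The quantile-state construction and m = 10 follow the text; the function itself, its name, and the use of decile edges are my own illustrative choices, not the original implementation.

```python
import numpy as np

def empirical_transition_matrix(returns, m=10):
    """Assign each return to one of m quantile states, then count state-to-state moves."""
    returns = np.asarray(returns, dtype=float)
    # m+1 quantile edges define m equally probable buckets (states 0..m-1)
    edges = np.quantile(returns, np.linspace(0.0, 1.0, m + 1))
    states = np.clip(np.searchsorted(edges, returns, side="right") - 1, 0, m - 1)
    counts = np.zeros((m, m))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1.0
    # row-normalize counts into transition probabilities
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
```

Each row of the result is the conditional distribution over tomorrow's return state given today's, which is exactly what the simulation later walks through.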

The Distribution

I applied brute force solving to maximize the log-likelihood for the distribution assuming a Johnson SU Distribution applied. See below:

Count              4,391
Average            -0.25%
St. dev.           6.53%
Gamma (shape)      -0.3232
Xi (location)      -0.0216
Delta (shape)      1.1626
Lambda (scale)     0.0460
Log-likelihood     6,553.36
Supremum           1.0302%

The probability density function for Johnson's SU is

f(x) = δ / (λ√(2π)) · 1/√(1 + z²) · exp(−½ (γ + δ sinh⁻¹(z))²), where z = (x − ξ)/λ.

Therefore the CDF is determined through "simple" calculus as

F(x) = Φ(γ + δ sinh⁻¹((x − ξ)/λ)),

where Φ is the standard normal CDF.
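For readers who prefer a library route over brute-force solving, scipy ships this distribution. A sketch under the assumption that scipy's (a, b, loc, scale) parameters map to (γ, δ, ξ, λ); the return series below is a simulated stand-in, not the actual UVXY data.

```python
import numpy as np
from scipy import stats

# hypothetical stand-in for the 4,391 empirical daily returns
rng = np.random.default_rng(42)
returns = rng.normal(-0.0025, 0.0653, 4391)

# scipy's johnsonsu uses a = gamma, b = delta, loc = xi, scale = lambda
gamma, delta, xi, lam = stats.johnsonsu.fit(returns)
log_likelihood = stats.johnsonsu.logpdf(returns, gamma, delta, xi, lam).sum()

# the CDF is Phi(gamma + delta * asinh((x - xi) / lam))
p = stats.johnsonsu.cdf(0.0, gamma, delta, xi, lam)
```

`johnsonsu.fit` maximizes the log-likelihood numerically, which is the same objective the brute-force search targets; on the real data the two should land on very similar parameters.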

Pricing the Future Value of the Underlying with a Markov Chain

Your input variables are the current state of the Markov chain and the number of periods. You can think of this as a binomial model "bodybuilder on steroids that bribed the judges and is competing against the stars of 'My 600-Pound Life'."

The Hidden Markov Model allows us to determine "emission probabilities" which we can apply to future periods. Each future period has 10 associated probabilistic returns, so a simulation of, say, 252 trading days (one year into the future) would follow 10^252 possible paths. This is unimaginably large to simulate exhaustively.


For scale: an octogintillion is 10^243, the unofficial 80th -illion (from the Latin "octoginta," meaning 80). Forty-seven octogintillion is roughly the volume of the universe in Planck volumes 1.5 nonillion years after the Big Bang; some time after that point, the universe may start decomposing, if the theory of proton decay is true.

Therefore we should find a closed form way to simulate this.

However, I want to enjoy my Thanksgiving and get this out. So, below is a simulation of 252 days that uses random number generation to choose the transition taken from each current state. It should be very close to the true distribution, which I will set out next time.
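A sketch of such a simulation, assuming we already have a fitted transition matrix and one representative (e.g. median) return per quantile state. The function name and the uniform starting-state choice are my own simplifications, and the inputs in the usage example are illustrative, not the fitted UVXY values.

```python
import numpy as np

def simulate_terminal_prices(price0, trans, state_returns, horizon=252, n_runs=100, seed=0):
    """Walk the Markov chain of return states, compounding one return per day."""
    rng = np.random.default_rng(seed)
    m = trans.shape[0]
    finals = np.empty(n_runs)
    for i in range(n_runs):
        state = rng.integers(m)                      # arbitrary starting state
        price = price0
        for _ in range(horizon):
            state = rng.choice(m, p=trans[state])    # draw next state from the current row
            price *= 1.0 + state_returns[state]      # apply that state's representative return
        finals[i] = price
    return finals
```

Running this 100 times at a 252-day horizon reproduces the "range of outcomes" described below; the median of `finals` is the median estimate quoted for UVXY.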

I have re-run the simulation 100 times to give us a range of outcomes; 252 × 100 = 25,200 seems like a more statistically significant number of data points.

Based on our model, 252 trading days from Friday, September 3rd, 2021, when UVXY closed at $212.40, UVXY should have been priced at $121.20 (median estimate). It had an actual price of $106.00: an actual decline of 50.09% vs. an estimated 43.50%, an error of 7.1563% of the starting price ((121.20 − 106.00) / 212.40).

Below is the OHLC data for 1 month prior and after our target date.


Valuation of a Financial Derivative

Surprisingly enough, this is actually the simplest part given the work done up to this point. For a $150 strike, the average simulated payoffs are:

Put average $19.73

Call average $30.75
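These averages come from evaluating the European payoff against each simulated terminal price. A minimal sketch (the function name is mine, and the two-price usage example is purely illustrative):

```python
import numpy as np

def average_payoffs(terminal_prices, strike=150.0):
    """Average undiscounted European put and call payoffs over simulated terminal prices."""
    s = np.asarray(terminal_prices, dtype=float)
    put = np.maximum(strike - s, 0.0).mean()
    call = np.maximum(s - strike, 0.0).mean()
    return put, call

# e.g. two illustrative terminal prices straddling the strike
put_avg, call_avg = average_payoffs([100.0, 200.0], strike=150.0)  # → (25.0, 25.0)
```

Fed the 25,200 simulated terminal prices instead, this yields the $19.73 put and $30.75 call averages above.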

An appropriate discount rate would need to be applied based on your own expectations of return, as this was not done in a risk-neutral framework.

Additionally, observe that put-call parity does not hold unless a negative discount rate is used.

Put average             $19.73
Call average            $30.75
Risk-free rate          -0.2419
Discounted put payoff   $26.02
Discounted call payoff  $40.56
Bond (PV of strike)     $197.86
Underlying              $212.40
Protective put (PP)     $238.424
Fiduciary call (FC)     $238.424
Difference              0.000147

From Wikipedia:

In financial mathematics, the put–call parity defines a relationship between the price of a European call option and European put option, both with the identical strike price and expiry, namely that a portfolio of a long call option and a short put option is equivalent to (and hence has the same value as) a single forward contract at this strike price and expiry. This is because if the price at expiry is above the strike price, the call will be exercised, while if it is below, the put will be exercised, and thus in either case one unit of the asset will be purchased for the strike price, exactly as in a forward contract.

The implication of this is interesting. What then is the appropriate discount rate for put call parity to hold?

Interestingly enough, it is a discount rate of -24.19%.
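The parity figures can be checked directly. A quick sketch using one-period discrete discounting at the stated -24.19% rate; the discounting convention (dividing by 1 + r) is my interpretation of how the bond and payoff values were produced.

```python
r = -0.2419                       # the discount rate at which parity holds
put_avg, call_avg = 19.73, 30.75  # average simulated payoffs
strike, spot = 150.0, 212.40

disc = 1.0 / (1.0 + r)            # one-period discrete discount factor
put_pv = put_avg * disc           # ≈ 26.03
call_pv = call_avg * disc         # ≈ 40.56
bond = strike * disc              # present value of the strike, ≈ 197.86

protective_put = put_pv + spot    # P + S
fiduciary_call = call_pv + bond   # C + PV(K)
difference = protective_put - fiduciary_call  # near zero: parity holds at this rate
```

The two portfolio values agree to within a fraction of a cent, confirming that -24.19% is the implied parity rate for these payoffs.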

So where do we obtain this discount rate to apply to future estimations? From the empirical data of course.

Taking the natural logarithm of the level of contango at each term and respecting the term until expiration gives us a very close estimate of the discount rate to use in this non-risk-neutral framework.


For series where random processes and normal distributions don't apply, this implementation of chaos theory to option pricing may be a more accurate way to value a financial derivative over longer time horizons.

I will publish a closed formula for this next.

Infinitely more to come than I have the time to write about.

Thank you for reading.

Author: Morgan Price

Originally Published October 8th 2023 to Quantitative Financial Advisories Investment Research Page. All rights reserved. Fair use copyright applies. For any reference with attribution to this work simply cite the original webpage as the source and the author. The institution/publication to cite is QFA Investment Research.


Hurst Exponent Inferences

A Hurst exponent of 0.5 would indicate no advantage over using the Black-Scholes model, if and only if the underlying return distribution were normal and the future process followed Geometric Brownian Motion (with drift).

A Hurst exponent statistically greater than 0.5 means a random walk cannot be assumed and the data follows a long-memory trend. In this case the direction and fractal of the trend should be estimated.

A Hurst exponent statistically smaller than 0.5 means long-memory mean reversion is likely; the mean-reverting value, ratio, or fractal should be estimated.

