
Why not value options with realistic assumptions? The case for forgetting the Black-Scholes formula.

Summary: Forget making Black-Scholes' assumptions and pretending normal distributions represent reality. I propose a new way to perform options valuation using the kernel method, an unrestricted probability density function estimator, plus a regime-identification variable for non-stationary data, to improve the accuracy of the valuation.


I'll try to keep this brief, but I have a habit of overexplaining, both to get my own thoughts organized and to avoid misunderstandings. Hopefully you find some benefit in it.

Finance is funny.

First they tell you that markets are efficient, then hand you a formula that assumes normality: the mean-variance optimization of modern portfolio theory, some versions of VaR (Value at Risk) models, or the famous Black-Scholes model.

Then the next day you're sitting in your finance class and the professor asks if you think stock returns are normally distributed. Now, obviously nothing in the real world is exactly normally distributed, but my statistics class taught me that if your sample size is large enough you can often assume normality (the central limit theorem states that if you have a population with mean μ and standard deviation σ and take sufficiently large random samples from the population with replacement, then the distribution of the sample means will be approximately normal). So, in your infinite wisdom as a university student with the opportunity to apply some newly acquired knowledge, you answer:

"Yes. Stock returns are approximately normal."

Of course your professor disagrees and brings up the fat left tail: extreme negative returns happen too regularly to assume normality with any reasonable degree of confidence.

The next class we are, of course, back to assuming normality with a VaR calculation.

Is the Normality Assumption Realistic?

So what's the deal? Is it really that bad to assume a normal, lognormal, or even a t-distribution?

Not really; most of the time you can get away with it. Black swan events do happen, with COVID-19 being a recent example, but most of the time things are pretty average.

As an example, I've prepared some data on the S&P 500 (proxied by the SPY ETF). You can download the sheet below if you're curious or want to do anything different with it. I'm unable to upload macro-enabled workbooks, so you won't be able to re-run the simulations I did unless you save the book as a macro-enabled workbook. If you so desire, make sure you're on the SPY Data tab and run this code:

VBA Code for simulations on S&P 500 (SPY) assuming a normal distribution

Sub Macro1()
    ' Macro1: repeat the simulation 791 times, pasting each run's result as values
    Dim i As Integer
    i = 0
    Do While i <= 790
        ' Wait 1 second so the sheet can recalculate
        Application.Wait (Now + TimeValue("0:00:01"))
        ' Copy the simulation output and paste it as values one row down
        ActiveCell.Offset(0, 3).Range("A1").Select
        Selection.Copy
        ActiveCell.Offset(1, 0).Range("A1").Select
        Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks:=False, Transpose:=False
        ' Wait 1 second so the sheet can recalculate
        Application.Wait (Now + TimeValue("0:00:01"))
        i = i + 1
    Loop
End Sub

SPY ETF Daily Return Data Distribution Statistics & Simulations (Jan 29 1993 - Sept 29 202
Download • 1.01MB

So here's a histogram of the daily returns on the SPY (S&P 500) ETF over the last 30 or so years:

Here's what it would have looked like if a normal distribution was appropriate:

Here's an overlay:

Hopefully it's clear that the empirical distribution of SPY returns is not normal. At best it is approximately normal; the only place you'll find a truly normal distribution is in a textbook.

The real returns on the SPY ETF are more extreme at the tails, mainly on the negative side, and the peak of the distribution is taller. This high peak means more average days and more extreme days, with fewer in-between (kind of good or kind of bad) days than a normal distribution would suggest. You can summarize this with a measure known as kurtosis, which describes how the data clusters around the center and also gives insights into the distribution's tails.

Is that a big deal though?

Depends on how you view your investments. I could argue yes and no.

As shown in the plot above, average days in the actual SPY ETF data occur more often than a normal distribution would suggest... however, the extreme bad days at the far left also occur slightly more often.

But if you're going to buy and hold over long periods of time it doesn't really matter.



[Table: summary statistics of SPY daily returns, including the average, median, minimum, maximum, and mode of daily returns; the daily standard deviation; the sample size; the actual and simulated compound annual growth rates; the years and days in the sample; and the start and end dates.]


The compound annual growth rate over the last 30.66 years has been 9.71% (with dividends reinvested). Now, if you run the simulations yourself they will differ from mine; even though I simulated every day for over 30 years, 1,000 times, there is, by nature, randomness in the simulated values. The simulated compound annual growth rate, the average of 1,000 simulations of the past 30.66 years assuming a normal distribution, was 9.83%.

So, if the returns were perfectly normally distributed you would have earned 9.83% vs 9.71%.

Like I said, if you're going to buy and hold over long periods of time it doesn't really matter. Don't get me wrong, every fraction of a percent matters; the 9.71% return turns $1 into $16.12 over 30 years and the 9.83% turns every dollar into $16.66 over the same 30 years, about a 3.35% difference in the end.
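A quick sketch in plain Python to verify that arithmetic (no data needed; the two growth rates are the CAGRs quoted above):

```python
def terminal_wealth(cagr: float, years: int, start: float = 1.0) -> float:
    """Grow a starting amount at a constant compound annual rate."""
    return start * (1.0 + cagr) ** years

actual = terminal_wealth(0.0971, 30)     # historical CAGR
simulated = terminal_wealth(0.0983, 30)  # CAGR from the normal-distribution simulations
print(f"${actual:.2f}")     # $16.12
print(f"${simulated:.2f}")  # $16.66
```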

Financial Theory and Models are Outdated:

Where my beef with the normal distribution, and finance in general, comes from is the fact that we use methods developed many decades ago. They're outdated, and there are better ways to describe the real world than assuming it's normal and calling it a day.

Another one that really gets me is the basis of modern financial theory: Modern Portfolio Theory. Basically, it says investors are risk averse and choose investments based on a mean-variance trade-off. In essence, maximize the ratio of expected return (mean) to risk (variance of returns).

Don't get me started on how stupid it is to define risk as the standard deviation of returns; I could ramble all day on that. But the Cliff's Notes version of my beef is:

You own a stock that averages a 10% return with a 20% standard deviation of returns. Suddenly, everyone else comes to the same realization you did and starts buying this wonderful company's shares, seeing, as you did, that the company is undervalued and should be worth more.

The stock returns 20% that year. The standard deviation of returns has now increased. According to Modern Portfolio Theory, the same stock you owned at the beginning, whose fundamentals haven't changed, is now riskier simply because its standard deviation of returns has increased more than its return.

Why not define risk as a measure of downside moves? If my stock goes from $100 to $110 I'm happy, if it goes from $100 to $90 I'm sad. That 10% move up should not be considered risk.

I would rather define risk as drawdowns.

If I bought at $100 and it's now $110, the addition to whatever measure represents my risk should be 0. If it drops 10%, I want that represented as risk.
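As a minimal sketch of drawdown-as-risk (plain Python; the price path is made up for illustration):

```python
def drawdowns(prices):
    """Percent decline of each price from the running peak (0 at a new high)."""
    peak = prices[0]
    out = []
    for p in prices:
        peak = max(peak, p)
        out.append((p - peak) / peak)
    return out

prices = [100, 110, 90, 95]   # hypothetical price path
dd = drawdowns(prices)
print(dd)                     # [0.0, 0.0, -0.1818..., -0.1363...]
print(min(dd))                # max drawdown, about -18.2%
```

Note that the 10% move up from $100 to $110 adds nothing to this risk measure; only the subsequent fall from the peak counts.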

Anyway, I digress.

Modern Portfolio Theory assumes a normal distribution. It's a good theory; it won Harry Markowitz the 1989 John von Neumann Theory Prize and the 1990 Nobel Memorial Prize in Economic Sciences. But assuming normality is not realistic. I've attached the original paper below if you're unfamiliar with it:

Portfolio Selection

Harry Markowitz

The Journal of Finance, Vol. 7, No. 1. (Mar., 1952), pp. 77-91.

Download PDF • 514KB

As True Tamplin, BSc, CEPF, says about Modern Portfolio Theory:

The Modern Portfolio Theory is based on the assumption that there is a normal distribution with two important parameters: the mean and the standard deviation. These parameters are significant in calculating asset returns and implementing risk management strategies.

The article goes on to say:

MPT has several criticisms, including the assumption that asset returns are normally distributed, reliance on historical data to predict future returns, and lack of consideration for market trends and investor behavior. MPT heavily relies on mathematical models and assumes investors are rational.


Now, I can finally get to my point: Why not value options with realistic assumptions? The case for forgetting the Black-Scholes formula.

Attached below is the original Black-Scholes paper for anyone interested:

Download PDF • 327KB

For anyone else to understand my point, here are the assumptions that Black-Scholes operates under: the underlying's price follows a geometric Brownian motion (a random walk with constant volatility), the risk-free interest rate is constant and known, there are no transaction costs or taxes, short selling is permitted and fractional shares can be traded, the underlying pays no dividends (or a known, constant yield), trading and hedging occur in continuous time, and the option can only be exercised at expiration (European style).

Now, if you are a market maker and are perfectly hedging your position with the underlying asset, let's say a share of stock against the option, that is perfectly fine. That's exactly what the model was built around. Black-Scholes is not wrong. Its risk-neutral pricing allows for known payoffs at contract expiration, which allows a hedged portfolio to earn the risk-free rate (if you are perfectly hedged in continuous time as the model assumes, to say nothing of the other ridiculous assumptions I've mentioned).

But you're not a market maker.

Chances are you have some expectation about how a stock or ETF will move over a given time frame, and given the non-risk-averse investor you are (looking at you, WallStreetBets), you decide to leverage up your bet by buying a call option if you expect a move up or a put option if you expect a move down. In this case you're likely not hedging your position at all, let alone delta hedging perfectly in continuous time.

All you care about is whether, at expiration or any time before then, this option will be worth more than you paid.

If I'm trying to convince you to throw out the most famous model in finance you better believe I'm not going to leave you high and dry with no alternative. But before we get there, I need to explain a few things:


Enter the Kernel Method.

According to Wikipedia: In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine. These methods involve using linear classifiers to solve nonlinear problems. The general task of pattern analysis is to find and study general types of relations in datasets.

Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2] Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.

Concepts of graph kernels have been around since 1999, when D. Haussler[3] introduced convolutional kernels on discrete structures. The term graph kernels was more officially coined in 2002 by R. I. Kondor and J. Lafferty[4] as kernels on graphs, i.e. similarity functions between the nodes of a single graph, with the World Wide Web hyperlink graph as a suggested application. In 2003, Gaertner et al.[5] and Kashima et al.[6] defined kernels between graphs. In 2010, Vishwanathan et al. gave their unified framework.[1] In 2018, Ghosh et al.[7] described the history of graph kernels and their evolution over two decades.

If you don't know higher dimensional math, that's okay. Anyone can input numbers into a formula. If you'd like to understand it better NEDL has an amazing video on it I've embedded below:

A Specific Example - UVXY:

The point is that there are many examples of assets that are not just non-normal but heavily non-normally distributed. Some, or most, of them likely don't follow any defined distribution at all. The more exotic the financial product, the less normal its returns will be. Anyway, a picture is worth a thousand words:

Products like UVXY are based on futures contracts. Normally, the VIX futures this ETF holds are priced higher than the spot price of the VIX. This results in UVXY continuously buying a future at price X and selling it at a price below X. Do that every day and you end up with a product that suffers from negative "roll yield," as UVXY does.

Investopedia says: Roll yield is the amount of return generated in the futures market after an investor rolls a short-term contract into a longer-term contract and profits from the convergence of the futures price toward a higher spot or cash price.

That definition is unfortunately worded, since "profits from the convergence" describes the opposite of the usual case: the normal market state for VIX futures is contango (Investopedia: "Contango is a situation where the futures price of a commodity is higher than the spot price. Contango usually occurs when an asset's price is expected to rise over time. That results in an upward-sloping forward curve").

Now, back to UVXY. Let's short it. Clearly it just goes down. I'd love to tell you that's the truth, that UVXY is free money, that you can bet on it going down and sit back while the money rolls in.

Alas, there is no free lunch.

So clearly something happened in early 2020 that caused volatility, and the related futures, to spike. UVXY goes up when it can sell the futures it bought for X at a price >X. When that happens it happens hard and fast.

Well, let's buy put options on it then and limit our loss. Better yet, let's create a put spread by selling a lower-strike put to help fund our higher-strike put. For example:

Imagine on Friday, September 29th, 2023, we, for whatever reason, decided that by October 20th, 2023, UVXY would be trading at $14.50 or lower, down from $16.21. Is it reasonable to assume that in just 3 weeks UVXY would drop 10.55%? Whoever's selling us these options certainly doesn't want that to happen, although whoever buys the lower-strike option might. So who's right?

Well, it's tough to say where UVXY will be tomorrow, let alone 3 weeks from now.

But I'd like you to look in the top left corner of that options strategy module. Notice the Black-Scholes model? If we take these prices to be fair, $1.43 for a $15.50 strike put and $0.88 for a $14.50 strike put, then we can use Black-Scholes to back out the expected move, the implied volatility: the daily standard deviation extrapolated to an annual figure of 124.02% for the $15.50 strike and 115.48% for the $14.50 strike.

That figure is essentially the daily standard deviation multiplied by the square root of 252 (the number of trading days in a year), plus an appropriate volatility risk premium. So we should be able to get an idea of the implied daily standard deviation for the $15.50 strike by dividing the implied volatility by the square root of 252.

That yields a daily standard deviation of returns of 7.8125%.
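That conversion, using the quoted 124.02% implied volatility, is a one-liner:

```python
import math

iv_annual = 1.2402                     # implied volatility for the $15.50 strike
iv_daily = iv_annual / math.sqrt(252)  # de-annualize over 252 trading days
print(f"{iv_daily:.4%}")               # 7.8125%
```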

Let's assume (from the data) that the average daily return on UVXY is -0.2457%, and recall that we need a 10.55% drop over 3 weeks (approximately 15 trading days) to reach our max profit on the put spread (long the $15.50 strike, short the $14.50 strike). So is that likely?

Well, let's pretend the returns are normal. The Z score is z = (x - μ)/σ, where x is the raw score, μ is the mean, and σ is the standard deviation. Our Z score is [ (-10.55%) - ((1 - 0.2457%)^15 - 1) ] / [ 7.8125% ] = -0.8866, where (1 - 0.2457%)^15 - 1 ≈ -3.62% is the expected 15-day compound return. A Z score of -0.89 equates to an area under the normal curve, to its left, of 0.1867, or 18.67%.
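Here is that Z score sketched in plain Python (note it follows the calculation in the text in using the one-day standard deviation for a 15-day move, rather than scaling it by the square root of 15, which would be the more standard treatment):

```python
import math

def norm_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu_daily = -0.002457     # average daily UVXY return (from the data)
sigma = 0.078125         # implied daily standard deviation
target = -0.1055         # drop needed to reach max profit
days = 15

drift = (1.0 + mu_daily) ** days - 1.0   # expected 15-day compound return, about -3.62%
z = (target - drift) / sigma
print(round(z, 4))             # -0.8867
print(round(norm_cdf(z), 4))   # 0.1876, roughly an 18.7% chance under normality
```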

So do we have an 18.67% chance of making money?

No, it's higher than that. Based on the historical data, over a 21-day period (15 trading days) UVXY fell at least 10.55% in 45.40% of cases.

Let's look at the option deltas. In addition to measuring an option's price sensitivity, delta is often used as a rough probability measure: it approximates the probability that an option will end in-the-money at expiration.

Our empirical probability is different than the Black-Scholes deltas... there's our edge, and that's my point: Why not value options with realistic assumptions?

We know that UVXY's returns are not normally distributed and we have easy access to the historical returns. The past is not indicative of the future. Past performance does not guarantee future results. However, unless you have a crystal ball, it's the best thing we've got.

Back to Kernels:
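In symbols, the kernel density estimator is typically written as:

```latex
\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right)
```

where the x_i are the n observed returns, K is the kernel function (often the standard normal density), and h is the bandwidth controlling how much each observation is smoothed.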

There's our bread and butter: the formula for probability density using a kernel. What's so important about that, you might ask? Well, here are UVXY's 3-week returns and a normal estimate:

The lowest return UVXY ever had was -57%. Yet a normal distribution estimates that 2.9% of values still lie below that return. It's just not a good fit.

Here's a Kernel:

The above is obviously a much better fit of the data. The flexibility of kernels allows us to work off of any distribution. For the above I have still used the normal distribution as the base, and it correlates to the data at over 97%, but we could fit the data to some other distribution if we had reason to do so. For example, the Beta distribution's shape parameters can be chosen to match the data's skewness and kurtosis, so we could fit the data almost perfectly to a Beta distribution, as the empirical distribution clearly has non-normal skewness and kurtosis. Now the problem is: does it fit the data too well? Is it going to suffer from overfitting, where we match an algorithm so well to the historical dataset that it has no predictive power for future out-of-sample "untrained" data?
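A Gaussian kernel density estimate needs no special library. The sketch below builds one by hand on a synthetic, left-skewed sample (a stand-in for the UVXY returns, since the real series isn't included here) and compares tail mass against a plain normal fit; treat it as an illustration of the mechanics, not the article's actual fit:

```python
import math
import random

def norm_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Synthetic, left-skewed "3-week returns": mostly mild, occasionally awful.
random.seed(42)
sample = [random.gauss(-0.03, 0.15) if random.random() < 0.9
          else random.gauss(-0.35, 0.10) for _ in range(2000)]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
h = 1.06 * sd * n ** (-1 / 5)   # Silverman's rule-of-thumb bandwidth

def kde_cdf(x: float) -> float:
    """P(X <= x) under a Gaussian kernel density estimate of the sample."""
    return sum(norm_cdf((x - xi) / h) for xi in sample) / n

worst = min(sample)
print(f"normal fit mass below the sample minimum: {norm_cdf((worst - mean) / sd):.4f}")
print(f"kernel estimate mass below the sample minimum: {kde_cdf(worst):.4f}")
```

The kernel estimate only places meaningful density where data actually lives, which is the point of the -57% example above: the normal fit smears probability far past anything ever observed.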

There are many ways we can deal with overfitting which I will, for the sake of being concise here, leave for another time. But with this product, supply and demand have no influence on the return. UVXY, and other volatility products, derive value from the underlying futures contracts and arbitrage ensures no large divergences.

The point is not to be 100% accurate but to, at the very least, be more accurate than Black-Scholes. Options markets are largely efficient; unless you have an edge, you will not outperform the market maker. The option price explicitly contains a volatility risk premium to ensure that, over the long run, the market maker selling them wins and you lose.

Given this, we need an edge.


Non-stationarity is a condition where the mean, variance, or autocorrelation of a time series changes over time.

Below is a rolling 1-year average of the daily returns on UVXY (using futures data to simulate UVXY, as the product was not launched until 2011):

It's safe to say the data is not stationary.
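The rolling average itself takes only a few lines. The sketch below uses a synthetic two-regime return series (invented for illustration, with a deliberately large shift so the effect is obvious) to show the rolling mean jumping when the regime does:

```python
import random

def rolling_mean(xs, window):
    """Trailing moving average; the first window-1 points produce no output."""
    out, s = [], 0.0
    for i, x in enumerate(xs):
        s += x
        if i >= window:
            s -= xs[i - window]
        if i >= window - 1:
            out.append(s / window)
    return out

random.seed(0)
# Regime 1: slow decay (contango grinding the product down).
# Regime 2: a volatility spike with an exaggerated positive drift.
returns = ([random.gauss(-0.003, 0.04) for _ in range(500)] +
           [random.gauss(0.10, 0.10) for _ in range(100)])

rm = rolling_mean(returns, 252)   # roughly one trading year
print(len(rm))          # 349
print(rm[-1] > rm[0])   # True: the rolling mean shifts once regime 2 enters the window
```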

Closing Thoughts:

There lies our edge. With the use of kernels for probability density function estimation, plus another variable to indicate regime changes (basically, when the mean, variance, or autocorrelation shifts from one level to another), we can value options more accurately than the Black-Scholes formula, and with realistic, real-world assumptions.

No more inputting a ridiculous constant volatility, interest rate, and dividend yield into the Black-Scholes model, to say nothing of the ridiculous assumption of a normal distribution. Given the current price of the underlying, the strike price of the option, the time left until expiry, and the use of kernels and regime shifts (I'll touch on this later), we can develop a realistic, empirically based estimate of an option's fair value, 100% from observable or easily calculated values.

No assumptions about transaction costs, a random walk (geometric Brownian motion), short selling, fractional shares, interest rates, implied volatility, or continuous time.

This is a versatile model with no unrealistic assumptions. The only assumptions needed are that a distribution exists (we don't even need to know which one, thanks to kernels) and that you have an idea of what drives regime changes.

For UVXY, its regimes depend on one thing and one thing only: the futures market, a directly observable market with a wealth of data.

This isn't the place for me to explain proper contango calculations for UVXY, so take my word for it. I arbitrarily split the historical contango data into 4 "regimes" simply by taking equal ranges between the minimum and the maximum.

The third "regime" has the most data points and a negative average return (remember, we're betting UVXY goes down, so we want it to have a negative return), so I have selected futures contango as our regime indicator and that third section as our ideal "bet against UVXY" regime.

Additionally, that third regime has a nice, simple distribution that gives us a high level of confidence in this system's ability to accurately rely on the probability density the kernel gives us.
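The equal-width split can be sketched like this (the contango readings below are hypothetical, just to show the mechanics):

```python
def regime_of(value: float, lo: float, hi: float, n_bins: int = 4) -> int:
    """Assign value to one of n_bins equal-width bins spanning [lo, hi], 1-based."""
    width = (hi - lo) / n_bins
    b = int((value - lo) // width) + 1
    return min(max(b, 1), n_bins)   # clamp so the maximum lands in the top bin

contango = [-0.10, 0.02, 0.05, 0.08, 0.15, 0.30]   # hypothetical daily readings
lo, hi = min(contango), max(contango)
print([regime_of(c, lo, hi) for c in contango])    # [1, 2, 2, 2, 3, 4]
```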

So there it is. I hope I was able to clearly articulate my case for dropping Black-Scholes in favour of a method with realistic assumptions and completely observable inputs to the pricing model. I won't go into the derivation of the exact formula here, as I'm sure the majority of people don't want that level of detail. To be clear, the generalized model requires the current price, the strike price, the regime you are currently in, and the kernel probability density function of returns for that specific market regime. Of course, we could integrate Markov chains to calculate transitional probabilities between regimes and then weight each kernel by its transitional probability. But that's not the point. I'm not trying to introduce another closed-form solution like Black-Scholes. My point is that one distribution doesn't describe returns. The returns in a bear market are not comparable to a bull market. If you expect a regime change within your relevant timeframe, don't trade using the current regime's kernel.
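To make the generalized idea concrete, here is a hedged sketch of what pricing off a regime's empirical density could look like: integrate the option payoff against the regime-conditional kernel density on a grid. Everything here (the synthetic return sample, Silverman's bandwidth, the midpoint rule, and the absence of discounting and of Markov regime weighting) is an illustrative assumption, not the author's exact formula:

```python
import math
import random

def gauss_pdf(z: float) -> float:
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def kde_pdf(x: float, sample, h: float) -> float:
    """Gaussian kernel density estimate at x."""
    return sum(gauss_pdf((x - xi) / h) for xi in sample) / (len(sample) * h)

def empirical_put_value(spot, strike, sample, h, n_grid=1000):
    """Expected put payoff under the KDE of period returns (no discounting)."""
    lo, hi = min(sample) - 3 * h, max(sample) + 3 * h
    dx = (hi - lo) / n_grid
    value = 0.0
    for i in range(n_grid):
        r = lo + (i + 0.5) * dx                    # midpoint rule
        payoff = max(strike - spot * (1.0 + r), 0.0)
        value += payoff * kde_pdf(r, sample, h) * dx
    return value

random.seed(1)
# Synthetic 3-week returns for the chosen regime (illustration only).
sample = [random.gauss(-0.08, 0.20) for _ in range(500)]
m = sum(sample) / len(sample)
sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (len(sample) - 1))
h = 1.06 * sd * len(sample) ** (-1 / 5)

put = empirical_put_value(spot=16.21, strike=15.50, sample=sample, h=h)
print(round(put, 2))   # the $15.50 put's fair value under this density
```

A Markov-chain extension would price the option under each regime's kernel and weight the results by the transition probabilities out of the current regime.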

In the future I will upload a spreadsheet with the formulas required to do the pricing as well as explore other assets to apply this method to and go over in better detail what makes a good variable for market regimes.

Thank you for the time you've taken to read my recommendation that we get with the times and price options realistically. I hope you found value in it.

