
Asset Allocation in a Post-Modern Portfolio Theory World Part 1: The Single Layer TAARP ML Model


I imagine this approach has been explored before; however, finding no existing research on it, I have named it the TAARP Model. TAARP is an acronym for Tactical Asset Allocation Rotational Portfolio; the "ML" in the title reflects its use of Machine Learning/Deep Learning.

This model requires you to specify a mix of underlying assets you want exposure to, and it outputs a periodic recommendation of the weight to assign each asset in your portfolio using Machine Learning methods. Any periodicity can be used; I have chosen to rely on the last 21 trading days to drive the allocation and to allow daily position rebalancing.

More or less frequent trading can be applied depending on your inclinations towards buy and hold or active management/tactical investing.


Perhaps you're not an active investor, but you recognize that not all assets share the same characteristics: average return, risk, correlation, and so on. You fit somewhere between buy-and-hold and tactical investing with active management. This model seeks to bridge that gap by using a Machine Learning model (essentially a slightly more complicated regression) to drive your portfolio weights, with the understanding that you want some exposure to every asset, likely for diversification or risk-tolerance reasons.

The Main Idea

We will develop a Machine Learning model, specifically a deep learning model (more hidden layers to come), to periodically and tactically rebalance the weights of our portfolio. The inputs are observable market data and empirically determined statistics, combined with features engineered from the past 21 trading days; for the VIX, we consider its characteristics since inception.

The output will be 3 weights, each representing the degree to which we bet long, bet short, or hold cash, whose sum is less than or equal to one and greater than or equal to negative one. In essence we allow shorting of securities and do not require the portfolio to be fully invested. Cash is an active position; sometimes the best investment is staying on the sidelines.

I will present two models: the first is shown today, and the second will be uploaded next week.

The model allows one input layer, one or two hidden layers (to show that more might not always be better, especially given the 200-variable maximum Excel's Solver imposes on us), and an output layer with 3 nodes, each outputting a value between -1 and +1, where -1 represents a fully allocated short position in the security and +1 a fully allocated long position.

This could be modified to allow leveraged positions; for example, by allowing the output to take on the range -3 to +3, we can utilize ETPs like UPRO for +3x leveraged equity exposure, or TMF and TMV for +3x/-3x leverage on bonds, etc.

Be extremely careful adding leverage to any product.

The output activation function is the hyperbolic tangent, TANH in Excel, which provides a range of -1 to +1 to drive our long/short allocation.
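As a minimal sketch of what that output node does (in Python rather than Excel, with made-up weights and inputs, not the spreadsheet's fitted values):

```python
import math

def output_node(inputs, weights, bias):
    """Single output node: a weighted sum of hidden-layer activations
    squashed by tanh into the range (-1, +1)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(z)

# A positive pre-activation maps toward +1 (long), a negative one
# toward -1 (short), and zero maps to 0 (cash/no position).
allocation = output_node([0.4, -0.2, 0.7], [1.5, 0.8, 2.0], 0.1)
```

Because tanh saturates, strongly positive or negative evidence pushes the allocation toward a full long or short, while weak evidence keeps the position small.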

We will run the model 3 times: once for the SPY portion, once for TLT, and once for GLD. We will then optimize the weights utilizing a combination of the Kelly Criterion and a modified mean-"variance" optimization borrowed and adapted from Modern Portfolio Theory.

We will drop the assumption that the standard deviation of returns is a fair proxy for risk. As I've said before, I don't care if my portfolio deviates higher from the mean return; I care if it deviates lower. I care about portfolio drawdown, not about a dispersion measure that gives equal weight to positive and negative returns. I define risk as losses, and in my opinion you should consider doing the same.
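To make that definition of risk concrete, here is a small Python sketch (illustrative equity values, not from the spreadsheet) of a drawdown series and its worst value:

```python
def drawdown_series(equity):
    """Percent drawdown from the running peak at each point in time."""
    peak = equity[0]
    dd = []
    for v in equity:
        peak = max(peak, v)
        dd.append(v / peak - 1.0)  # 0 at new highs, negative below the peak
    return dd

equity = [100, 110, 99, 105, 120, 90]
dds = drawdown_series(equity)
max_dd = min(dds)  # worst (most negative) drawdown: -25% here
```

Note that a portfolio that only ever deviates upward has zero drawdown, and therefore zero "risk" under this definition, which is exactly the asymmetry the standard deviation misses.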

The Spreadsheet

I downloaded data for SPY (S&P 500), TLT (20+ year US Government Treasuries), and GLD (gold spot ETP), as well as VIX data.

Click below to download the spreadsheet with the machine learning model completed:

Note on spreadsheet download

Note: To download this spreadsheet, and all others, you must be a subscriber to the group "Unlimited Spreadsheet Downloads," which currently costs $1 a month with a $5 setup fee and is cancelable at any point. This pricing plan will increase as more content is added; if you sign up now, you can lock in this price permanently. If pricing is an issue, contact me and we will figure out an alternative that works within your budget, subject to the intended use.

Excel's Solver has a hard limit of 200 variables it can change during an optimization; these are the model's parameters (the weights and biases Solver tunes). Given that we will push right up against this limit, the model is broken into multiple sheets, one per optimization, to make maximum use of our computing power.

Use with attribution is permitted. Please don't blindly apply this model without full understanding of the risks and implications. See my disclaimer. Always consult a professional and never use something you don't fully understand.

Steps for Model Reproduction/Fine-Tuning

Step 1: Define your portfolio holdings

  • Download your securities' time-series data from a data provider like Yahoo Finance.

  • Decide whether you will reinvest dividends. This will affect whether you use the close price or the adjusted close price. I recommend using the adjusted close and running with the assumption that you have a dividend reinvestment plan set up.
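For illustration, here is a Python sketch (the function and the tiny price/dividend series are hypothetical, not part of the spreadsheet) of how reinvesting a dividend on its pay date folds into a total-return series, which is what the adjusted close approximates:

```python
def total_return(prices, dividends):
    """Daily total returns assuming dividends are reinvested on the pay date.
    prices[i] is the close on day i; dividends[i] is the cash dividend
    paid per share on day i (0 on most days)."""
    rets = []
    for i in range(1, len(prices)):
        rets.append((prices[i] + dividends[i]) / prices[i - 1] - 1.0)
    return rets

prices = [100.0, 101.0, 100.0]
divs = [0.0, 0.0, 1.0]  # a $1 dividend paid on the last day
tr = total_return(prices, divs)
# The $1 payout exactly offsets the $1 price drop on the last day,
# so the total return that day is 0, not -1%.
```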

Step 2: Perform Feature selection and engineering

  • See this long form explanation of deep learning if you are unsure of steps here:

  • Select and/or create the point in time data features that will be used to drive asset allocation decisions.
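As one possible sketch of this step (the feature names and window choice are illustrative assumptions, in Python rather than Excel), rolling 21-day momentum and volatility features could be computed like this:

```python
import statistics

def rolling_features(closes, window=21):
    """Point-in-time features from the trailing `window` closes:
    momentum (total return over the window) and realized volatility
    (sample stdev of daily returns). Illustrative choices only."""
    feats = []
    for i in range(window, len(closes)):
        win = closes[i - window:i + 1]
        rets = [win[j] / win[j - 1] - 1.0 for j in range(1, len(win))]
        feats.append({
            "momentum": win[-1] / win[0] - 1.0,
            "volatility": statistics.stdev(rets),
        })
    return feats

closes = [100 + 0.5 * i for i in range(30)]  # a smooth uptrend
feats = rolling_features(closes)
```

Each feature row uses only data available at that point in time, which is essential to avoid look-ahead bias in the backtest.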

Step 3: Perform the optimization with Excel's Solver. I prefer the evolutionary engine. "Backpropagation" is simulated/substituted by the Solver's evolutionary engine; true backpropagation is an application of the chain rule run backwards through the network.

  • If Solver isn't available in your version of Excel, i.e. you've never enabled the add-in, go to this Microsoft help link and follow the steps:

  • Decide what you're optimizing. I prefer a variation of the ulcer performance index, where your return is divided by your drawdown expressed as a positive number, or by any max/average/median measure of the distribution of drawdowns you deem appropriate.

  • Add constraints and allow your model parameters to be varied to achieve optimization.

  • Be careful not to overfit the data. Split your dataset into training, validation, and testing sets. For higher confidence, perform multiple rounds of cross-validation. See my article on deep learning if you need more context.
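If it helps to see the objective written out, here is a Python sketch under stated assumptions: the standard ulcer index as the root-mean-square of percent drawdowns, and a performance index of return divided by that value (the spreadsheet's exact variation may differ):

```python
import math

def ulcer_index(equity):
    """Root-mean-square of percent drawdowns from the running peak."""
    peak, sq = equity[0], []
    for v in equity:
        peak = max(peak, v)
        sq.append((100.0 * (v / peak - 1.0)) ** 2)
    return math.sqrt(sum(sq) / len(sq))

def ulcer_performance_index(equity, annual_return, risk_free=0.0):
    """Return over pain: excess return divided by the ulcer index."""
    return (annual_return - risk_free) / ulcer_index(equity)

equity = [100, 110, 99, 105, 120, 118]
upi = ulcer_performance_index(equity, annual_return=12.0)
```

Unlike a Sharpe ratio, this objective only penalizes time spent below a prior peak, so Solver is free to accept upside volatility.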

Step 4: Re-perform the optimization on a separate sheet for each portfolio security to be included.

  • See step 3

Step 5: Create an output layer to drive the percent of portfolio allocated to each underlying security.

  • I have brute-force solved a Kelly Criterion, ignoring the traditional Kelly inputs and simply allowing the model to arrive at the fraction as a black box.

  • If you feel it's important to understand the inputs, here is the logic; see the article for variable definitions:

There are some flaws in that article that I will touch on at some point; the assumption of a single win/loss outcome is the first that comes to mind. Either way, it does explain the logic, albeit in a slightly flawed way.
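For reference, here is the textbook single-outcome Kelly fraction in Python (a sketch of the standard formula, not the spreadsheet's brute-forced version; the probabilities and payoffs are made up):

```python
def kelly_fraction(win_prob, win_return, loss_return):
    """Classic single-outcome Kelly: f* = p/|l| - q/w, where p is the win
    probability, w the fractional gain on a win, and l the fractional
    loss on a loss. This carries the single win/loss assumption
    criticized above."""
    q = 1.0 - win_prob
    return win_prob / abs(loss_return) - q / win_return

# Even-money bet (+100%/-100%) with a 55% win rate reduces to p - q = 0.10,
# i.e. stake 10% of the bankroll.
f = kelly_fraction(0.55, 1.0, -1.0)
```

Real assets have a whole distribution of outcomes rather than one win and one loss, which is precisely why letting the optimizer find the fraction empirically can be attractive.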

Step 6: Evaluate and fine-tune

  • Examine the results and fine-tune the model until you are satisfied with the risk/return and bias/variance tradeoffs. Again, see my article on deep learning, or Google the specifics of the bias/variance tradeoff if unsure.

Step 7: Outperform.

  • Buy-and-hold investing is the bare minimum. Do better than average. It's not easy, but with a benchmark like VBINX, a 60/40 stocks/bonds fund like the one most people hold, which has had drawdowns of nearly 40%, it's not that difficult to outperform either.

VBINX Data and Drawdown file:

Download XLSX • 1.13MB

VBINX Fund Summary

The fund employs an indexing investment approach designed to track the performance of two benchmark indexes. With approximately 60% of its assets, the fund seeks to track the investment performance of the CRSP US Total Market Index. With approximately 40% of its assets, the fund seeks to track the investment performance of the Bloomberg U.S. Aggregate Float Adjusted Index.

The Results

These are the commission-free results:

Years in sample: 13.5
CAGR: 13.26%
Max drawdown: -13%
Avg drawdown: -2%
Max ulcer: 1.0147984
Avg ulcer: 6.5608992

Out-of-sample results (i.e., the testing data) had a CAGR of over 12%.

The commission-considered results, assuming $9.95 per trade, will be posted with next week's model. I set a target for when I want these posts uploaded and was unable to include everything I set out to this week.

Note: The parameters will not necessarily be the same once commissions are considered, as frequent trading is penalized. This depends not only on trading frequency but on portfolio size as well.
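For anyone checking the figures above, CAGR is just the annualized growth rate of the ending value over the starting value; a minimal Python sketch (the starting value of 100 is arbitrary):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: the constant yearly return that
    turns start_value into end_value over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# A portfolio compounding at 13.26% for 13.5 years roughly 5.4x's itself;
# recovering the rate from start/end values inverts that compounding.
growth = 1.1326 ** 13.5
rate = cagr(100.0, 100.0 * growth, 13.5)
```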

Comparison to Modern Portfolio Theory & Mean-Variance Optimization

I would have loved to have more time to write about this: it's better than Modern Portfolio Theory.


It appears that this sort of model, with refinement, could add significant value to an investor's portfolio.

Dropping the over-half-century-old constraints of simple Modern Portfolio Theory, while still respecting its value given the time and computational power available when it was developed, allows us to define risk more appropriately as portfolio or security drawdown, not the standard deviation of returns.

If you are comfortable with the process of Machine Learning/Deep Learning, implementing a "black box" to drive security/portfolio allocation weights can seemingly add a significant amount of value. If not, I recommend directly calculating a measure like the Kelly Criterion mentioned above.

Additionally, if you want to test for yourself the economic reality/logic behind the weights and biases in the hidden layers, feel free to start by setting the weights manually to gain confidence in the model's output. I recommend separating the features (input variables) into classes like momentum, correlation, cointegration, and volatility/regime, and applying appropriate positive/negative coefficients to observe for yourself how logical the optimization process can be.

  • E.g., group the momentum features and, where a positive value indicates positive momentum, apply positive coefficients; group the volatility metrics and, where higher volatility indicates negative returns, apply negative coefficients, as there is likely a negative relationship between security return and volatility.
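A tiny Python sketch of that sanity check (the feature classes, values, and coefficient signs are illustrative assumptions, not the model's fitted weights):

```python
def score(features, signs):
    """Hand-set linear layer: each feature class gets a coefficient whose
    sign encodes the expected economic relationship."""
    return sum(signs[name] * value for name, value in features.items())

# Assumed signs: momentum helps returns, high volatility hurts them.
signs = {"momentum": +1.0, "volatility": -1.0}
bullish = score({"momentum": 0.05, "volatility": 0.01}, signs)   # > 0
bearish = score({"momentum": -0.05, "volatility": 0.04}, signs)  # < 0
```

If the optimizer later converges on coefficient signs that match your hand-set economic priors, that is evidence the fitted model is capturing real structure rather than noise.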

The features used in this model include terms that would be used to trade a cointegration strategy, commonly referred to as pairs trading, which means adding more cointegrated assets could potentially improve the results significantly. If you wish to test that on your own, consider gold/silver, long-dated/short-dated Treasuries, front-month VIX products (SVXY or SVIX) against mid-term futures products (VIXM/VXZ), and related markets like the Canadian and Australian market indices.
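As a sketch of the pairs-trading logic referenced here (the price series and hedge ratio are made up; in practice the hedge ratio would come from a cointegrating regression), a z-score of the spread between two cointegrated assets looks like this:

```python
import statistics

def spread_zscore(series_a, series_b, hedge_ratio):
    """Z-score of the spread a - h*b. Pairs traders fade large deviations
    on the assumption the spread mean-reverts."""
    spread = [a - hedge_ratio * b for a, b in zip(series_a, series_b)]
    mu = statistics.mean(spread)
    sd = statistics.stdev(spread)
    return (spread[-1] - mu) / sd

a = [10.0, 10.2, 9.9, 10.1, 11.0]
b = [5.0, 5.1, 4.95, 5.05, 5.0]
z = spread_zscore(a, b, hedge_ratio=2.0)
# A large positive z-score suggests shorting a and buying b,
# betting the spread reverts to its mean.
```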

Further reading on this:

On Pairs Trading: A Comparison between Cointegration and Correlation as Selection-criteria

Erik Hognesius & Jakob Höllerbauer

Download full paper:

Download PDF • 1.44MB

An excerpt from the above paper:

The history of pairs trading begin in 1985 when a group of quantitative researchers at the investment bank Morgan Stanley, under the lead of Nunzio Tartaglia, created what has come to be called Morgan Stanley’s Black Box. This Black Box was a program which had preprogrammed algorithms for buying and selling different combinations of pairs. (Pole, 2007, p. 1) Portfolio managers bet on the price spread between the pair to decrease based on the statistical anomaly of mean reversion, which practically means that the managers use the assumption of the law of one price; anomalies among securities valuation will occur in the short run but in the long run will correct themselves by the efficiency in the market. By taking their long/short positions the manager will add value to their portfolio when the price spread decreases.

More to come...

I hope you found value in this and thank you for supporting my work.

Written by: Morgan Price.


Do not consider this as investment advice. I make no recommendation to trade in, buy, sell, or speculate on any security or related derivative product.

This example is for illustrative and educational purposes only. Past performance is not indicative of future results. Machine learning models, especially deep learning models, are extremely prone to overfitting, particularly when dozens of input variables are used, no matter the economic significance those variables have.

These models are highly complex and sensitive to assumptions. If you are to implement any of these ideas, consult with an expert.

