Some of the most interesting posts published at QUSMA compare and contrast different methods of measuring a given condition. The most recent post in this style set out to compare different methods of measuring the straightness of an equity curve.
Here are some of the metrics the article looked at:
There are of course some “standard” straightness metrics. R-squared is the most popular, and it works pretty well. I like to raise it to the 4th power or so in order to magnify small differences and make it a bit more “readable”. Another popular metric is the K-Ratio, of which there are at least 3 different versions floating around. The K-Ratio also takes returns into account, so it’s not purely a straightness measure. I prefer the Zephyr version, which is calculated as the slope of the equity curve divided by its standard error.
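The two standard metrics quoted above are straightforward to compute from a linear fit of the equity curve against time. The sketch below, using SciPy's `linregress`, shows R-squared raised to the 4th power and a K-Ratio in the Zephyr style described in the quote (slope divided by its standard error); note that other K-Ratio variants add further scaling terms, so this is only one of the versions "floating around":

```python
import numpy as np
from scipy import stats

def straightness_metrics(equity):
    """Fit a line to the equity curve over time and derive straightness measures."""
    t = np.arange(len(equity))
    res = stats.linregress(t, equity)
    r_squared = res.rvalue ** 2
    return {
        "r_squared": r_squared,
        # Raising R-squared to the 4th power magnifies small differences,
        # as suggested in the quoted passage.
        "r_squared_4th": r_squared ** 4,
        # Zephyr-style K-Ratio: regression slope / standard error of the slope.
        "k_ratio": res.slope / res.stderr,
    }

# Example on a simulated noisy upward-drifting equity curve
rng = np.random.default_rng(0)
curve = np.cumsum(rng.normal(0.05, 1.0, 252)) + 100
print(straightness_metrics(curve))
```

Since 0 ≤ R² ≤ 1, the 4th power always shrinks the value, spreading out curves that would otherwise cluster near 1.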
The author continues by suggesting some other variables that would be interesting to consider:
Some other numbers I think may be interesting: the ratio between the area of difference above and below the ideal, the volatility of the difference, the volatility of the difference below the ideal, average absolute deviation, and average absolute deviation below the ideal (both standardized to the magnitude of the curve).
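The deviation-based numbers in that list can be sketched against an "ideal" curve. The article does not pin down what the ideal is, so the snippet below assumes it is the straight line joining the first and last points of the curve; all figures are standardized to the magnitude of the curve, as the quote suggests:

```python
import numpy as np

def deviation_metrics(equity):
    """Deviations of an equity curve from an assumed 'ideal' straight line
    joining its first and last points (one possible reading of the quote)."""
    equity = np.asarray(equity, dtype=float)
    t = np.linspace(0.0, 1.0, len(equity))
    ideal = equity[0] + t * (equity[-1] - equity[0])
    diff = equity - ideal
    below = diff[diff < 0]
    scale = np.mean(np.abs(equity))  # standardize to the curve's magnitude
    area_above = np.sum(diff[diff > 0])
    area_below = -np.sum(below)
    return {
        # Ratio of the area above the ideal to the area below it
        "area_ratio": area_above / area_below if area_below > 0 else np.inf,
        # Volatility of the difference, and of the below-ideal part only
        "diff_vol": np.std(diff) / scale,
        "downside_vol": (np.std(below) / scale) if below.size else 0.0,
        # Average absolute deviation, overall and below the ideal
        "mad": np.mean(np.abs(diff)) / scale,
        "mad_below": (np.mean(-below) / scale) if below.size else 0.0,
    }
```

A perfectly straight curve scores (near) zero on all the deviation measures, while the downside-only variants penalize drawdowns below the ideal path more than overshoots above it.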
At this point, the author throws a curveball and introduces a brand new metric called the Qusma Equity Curve Straightness, Downward Deviation, and Stability Measure (QECSDDSM). This new metric is designed exclusively to measure the straightness of the equity curve.
The rest of the article applies the new metric and compares its results to those of the other candidate metrics across four different data sets. The results showed that there wasn’t much difference between the methods:
In general most of the numbers roughly agree with each other in terms of ordering the curves from best to worst, so the actual formulation of QECSDDSM doesn’t really matter all that much.
The article goes on to test the metrics on a different collection of data sets. Once again the new metric failed to significantly outperform the existing metrics. However, the author believes that the new metric could potentially be useful after further development.