Better Visualizations

  

(chart by Elizabeth Fosslien)

 

I ran across an interesting article on the Freakonomics blog about the life of a quantitative analyst. Many of the charts in it are amusing, but the meta-chart above caught my eye in particular.

My personal experience meshes well with the idea she presents in the chart: quantitative researchers spend far too much time on formatting. Part of it may be that quants tend to be OCD perfectionists, but more importantly, Excel really stinks as a data visualization tool.

This is what the default Excel table looks like:

This is what we want it to look like:

This is what a default line chart looks like:

This is what we want it to look like:

Configuring the table is generally easy, but configuring the chart is often quite painful. Formatting the table for this example took me about a minute to get the basics and another four minutes to get the column widths right and align the column headers to the right while keeping an indent for spacing. Formatting the chart took something like ten minutes in total as I went through many iterations of axis scales and label orientations. If you're very proficient with Excel you can probably cut down on that time, but then again, how many quants are Excel experts? And that was just one chart and one table! When you are working with quantitative data, every presentation, no matter how small, yields at least several tables and charts. These configuration times really add up.

Why does Excel suck so badly? Because its defaults do the exact opposite of the basic data visualization principles outlined in Edward Tufte's seminal work Beautiful Evidence, a particularly important one being maximizing the ratio of relevant information to the amount of "ink". Default charts and tables in Excel are a jumbled mess that is difficult to read. It's not all Microsoft's fault: when you paste data into Excel, it is hard for the program to infer how you want to visualize it, and these kinds of inferences often get messy and may do more harm than good. However, a good quant research tool should cut down the formatting time by automating the process with sensible default configurations and a flexible interface for customizing the formats (sketches of what such automation might look like follow the checklists below).

For tables, this means:

- NO GRIDLINES!
- Bold/solid border around the outside
- Lighter border separating the column/row headers
- Bold the headers
- Fill the header cells with color
- Number formatting:
    - Generic floating point can be 2-3 decimal places
    - Share quantities should be integers
    - Portfolio weights should be at most 2 decimal places
    - Prices should be 2 decimal places
    - Currency rates up to 5 decimal places
    - Large amounts should be scaled down by 1e3 or 1e6
- Cell alignment so digits line up down each column
- Column widths and row heights set properly
- If headers have multiple levels, merge cells at the higher levels
- Correlation matrices:
    - Diagonal should be 1 (or 100%) or omitted
    - Rest of the matrix should show 1 or 2 decimal places
- Table title centered over the table with merged cells
- Conditional formatting for max/min/errors
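
As a concrete sketch of what automating these table defaults could look like, here is a small Python/openpyxl example. The column names, colors, and number-format map are purely illustrative assumptions, not a real house style.

```python
from openpyxl import Workbook
from openpyxl.styles import Alignment, Border, Font, PatternFill, Side
from openpyxl.utils import get_column_letter

HEADER_FILL = PatternFill(fill_type="solid", start_color="DDEBF7", end_color="DDEBF7")
HEADER_BORDER = Border(bottom=Side(style="thin", color="808080"))
NUMBER_FORMATS = {            # illustrative column-name -> number-format mapping
    "shares": "#,##0",        # share quantities as integers
    "weight": "0.00%",        # portfolio weights, at most 2 places
    "price": "0.00",          # prices, 2 places
    "fx_rate": "0.00000",     # currency rates, up to 5 places
}

def write_table(rows, headers, path="table.xlsx"):
    """Write a list-of-dicts table to Excel with quant-friendly defaults."""
    wb = Workbook()
    ws = wb.active
    ws.sheet_view.showGridLines = False                 # no gridlines
    ws.append(headers)
    for cell in ws[1]:                                  # header row: bold, filled, bottom rule
        cell.font = Font(bold=True)
        cell.fill = HEADER_FILL
        cell.border = HEADER_BORDER
        cell.alignment = Alignment(horizontal="right", indent=1)
    for row in rows:
        ws.append([row.get(h) for h in headers])
    for j, name in enumerate(headers, start=1):
        ws.column_dimensions[get_column_letter(j)].width = max(12, len(name) + 4)
        fmt = NUMBER_FORMATS.get(name)
        for (cell,) in ws.iter_rows(min_row=2, min_col=j, max_col=j):
            cell.alignment = Alignment(horizontal="right")  # digits line up down the column
            if fmt:
                cell.number_format = fmt
    wb.save(path)

write_table(
    rows=[{"ticker": "AAA", "shares": 1200, "price": 43.18, "weight": 0.0123},
          {"ticker": "BBB", "shares": 850, "price": 17.50, "weight": 0.0089}],
    headers=["ticker", "shares", "price", "weight"],
)
```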

For generic charts, this means:

- No outside border
- No inside border
- White fill for the chart area
- No fill for the plot area (the inside area, which is grey by default)
- Axes:
    - Dashed, light gridlines for each axis
    - No or light axis lines
    - No or light axis tick marks
    - Axis number format with the fewest decimal places possible
    - Axis font size small but easily readable
    - Label orientation that doesn't interfere with the data
    - May need to move the axis intercept
- Title:
    - Always have a title
    - Title font should be bolded or larger in size
- Legend:
    - Legend optional; if there is only one data series, no legend
    - Legend should have a white fill with a light or no border
    - Legend goes where there is the least amount of data

For line charts:

- Line colors should be easily readable
- Line colors shouldn't be too bright
- Line weight should be heavier so lighter colors show up better
- No tick marks unless absolutely necessary

Other chart types:

- Scatter plots should pay particular attention to scaling and to "dot" color and size
- Bar charts should have light or no borders around the bars and light or no borders between x-axis values
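
Excel itself doesn't make these defaults easy to script, but the same idea carries over directly to a Python research stack: set the chart defaults once via matplotlib's rcParams and stop fighting them per chart. The keys below are standard rcParams; the values are just one plausible reading of the checklists above.

```python
import matplotlib.pyplot as plt
import numpy as np

CHART_DEFAULTS = {
    "figure.facecolor": "white",
    "axes.facecolor": "white",       # no grey plot-area fill
    "axes.grid": True,
    "grid.linestyle": "--",          # dashed, light gridlines
    "grid.color": "#dddddd",
    "axes.spines.top": False,        # no/light axis lines
    "axes.spines.right": False,
    "axes.edgecolor": "#bbbbbb",
    "xtick.major.size": 0,           # no tick marks
    "ytick.major.size": 0,
    "font.size": 9,                  # small but readable axis text
    "axes.titleweight": "bold",      # always a (bold) title
    "lines.linewidth": 1.8,          # heavier lines so light colors read
    "legend.frameon": True,
    "legend.facecolor": "white",
    "legend.edgecolor": "#dddddd",
}
plt.rcParams.update(CHART_DEFAULTS)

# Quick demo with random data standing in for two return series
rng = np.random.default_rng(0)
fig, ax = plt.subplots()
ax.plot(np.cumsum(rng.normal(0, 1, 250)), label="Strategy")
ax.plot(np.cumsum(rng.normal(0, 1, 250)), label="Benchmark")
ax.set_title("Cumulative P&L (illustrative)")
ax.legend(loc="best")
fig.savefig("chart.png", dpi=150)
```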

There are hundreds of additional configuration properties you can set to make your data look beautiful (while staying informative!). The ones listed above are simply some of the most common ones that take a lot of time to change from the defaults. In newer versions of Excel it is possible to change the defaults, but in a corporate setting with multiple workstations and/or a need for large-scale collaboration and teamwork, a local-only set of settings quickly becomes very inconvenient.

There are plenty of visualization tools out there, Tableau and the like. However, a good visualization tool cannot stand alone: it is nothing without the ability to easily change and manipulate data, and any data piping driven from the visualization side is both cumbersome and too dependent on the visualizer alone. Instead, the automation should be driven from the data analytics tools, which can then output to Excel so that a particularly particular quant has the option to put on a few finishing touches.


Multiperiod Attribution

As we saw in my previous post on performance attribution, the returns of a quantitative portfolio built from a factor model can be decomposed via a straightforward regression of the asset contributions on the alpha factors and/or risk factors. Looking at the left-hand side of that regression a little more carefully, the portfolio contribution of each asset is simply the product of the asset's return and its weight in the portfolio at the beginning of the return period. By regressing these contributions on the factors, we are assuming a set relationship between the weight of each asset in the portfolio and the combination of risk and alpha factors; in other words, this methodology implicitly assumes a fixed relationship between the portfolio weights and the factors.

But what if we have to explain the returns of a portfolio over multiple rebalancing periods? Because the portfolio weights are adjusted during every rebalance, the relationship between the portfolio weights and the factors will change over time. Hence, we cannot assume that the total asset returns are due to a single set of portfolio weights. We must turn to something different.

Summing Up Single Period Attributions

The most obvious way to analyze multiperiod returns is to run a single-period attribution between each portfolio rebalance and aggregate the contributions. However, this method has many moving pieces. Depending on whether the trading level is reset every period, compounding may or may not be valid. Some practitioners like to use geometric returns for easier aggregation, while others prefer to stick with simple returns for ease of interpretation and add various smoothing factors to combine them. Moreover, because the single-period BHB analysis doesn't account for transaction-related items, the aggregated return generally does not equal the whole-period return.

The final nail in the coffin is that with this method we don't get a sense of our overall exposure to the various factors over the period. For portfolios whose factor exposures are relatively stable over time, this is not a big deal. But for portfolios that turn over quickly or are constructed from high-turnover factors, it is a non-trivial problem. We could approximate the average exposures with a portfolio-value-weighted mean of the factor exposures, but in many instances a fund manager doesn't rebalance on a fixed, uniform schedule, so we would also have to take the number of days in each period into account. This gets very messy.

Return Time-Series Attribution

An alternative to summing up the results of the cross-sectional regressions is to use a time-series regression. Instead of the fundamental factor model paradigm, where we have known factor exposures in every period, we can use the macroeconomic factor model paradigm, where we have known factor returns but unknown factor exposures. Under this paradigm, we take the time series of overall portfolio returns and regress it on the time series of factor returns. Factor returns are usually based on the return of the factor portfolio; we can use the straight factor portfolio return, or we can estimate the factor return from the top and bottom quintiles of the factor portfolio.

Once we have the exposures, the factor contributions are straightforward to compute: take the cumulative return of each factor over the attribution window and multiply it by the estimated realized factor exposure. The noise component is then the difference between the cumulative portfolio return and the sum of all the factor contributions.
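
A rough sketch of this procedure, assuming daily portfolio and factor returns are available as pandas objects (the names are illustrative):

```python
import pandas as pd
import statsmodels.api as sm

def time_series_attribution(port_ret: pd.Series, factor_ret: pd.DataFrame) -> pd.Series:
    """Regress portfolio returns on factor returns; return factor contributions."""
    fit = sm.OLS(port_ret, sm.add_constant(factor_ret)).fit()
    exposures = fit.params.drop("const")       # estimated realized factor exposures
    cum_factor = (1 + factor_ret).prod() - 1   # cumulative factor returns over the window
    contrib = exposures * cum_factor           # factor contributions
    total = (1 + port_ret).prod() - 1
    contrib["noise"] = total - contrib.sum()   # cumulative return not explained by the factors
    return contrib
```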

Pitfalls

First and foremost is the sample-size issue. The Barra risk model has something like 68 factors, and if we incorporate our own alpha model the number of factors can easily reach 80 or 90. So even with daily returns, a monthly attribution window (roughly 20 observations) isn't possible. The problem is even worse if our factors generally act over horizons longer than a day. For daily returns it may be possible to regress the style factors out of the alpha factors and then decompose a month of returns on the residualized alpha factors, but even that requires fewer than 20 factors. If you have multiple strategies built on the same factors, it may be possible to set up a panel regression that gets around the sample-size problem, but it's unclear whether it's sensible to add that much complexity when the cross-sectional method is available as a simple, straightforward alternative.

Another problem with the time-series method is autocorrelation. If we use daily returns, it is fairly reasonable to assume that we may be violating the no-autocorrelation assumption of OLS. This means we also have to perform additional testing, such as the Durbin-Watson or Breusch-Godfrey tests. The additional testing is especially crucial here because of the sample-size issue, since the presence of non-spherical disturbances can have a non-trivial impact on the efficiency of the OLS estimator.
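
Both tests are available in statsmodels; a minimal sketch, assuming `fit` is the fitted OLS result from the time-series regression above:

```python
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def check_autocorrelation(fit, nlags=5):
    """Run Durbin-Watson and Breusch-Godfrey on a fitted OLS results object."""
    dw = durbin_watson(fit.resid)  # values near 2 suggest little first-order autocorrelation
    lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(fit, nlags=nlags)
    print(f"Durbin-Watson: {dw:.2f}, Breusch-Godfrey p-value: {lm_pvalue:.3f}")
    if lm_pvalue < 0.05:
        # one possible remedy: HAC (Newey-West) standard errors
        return fit.get_robustcov_results(cov_type="HAC", maxlags=nlags)
    return fit
```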

The last problem with the time-series method is that we cannot guarantee it will reconcile with sub-period or single-period attributions. This is a deal-breaker in many situations where we want to look at the overall return decomposition and then drill down into the sub-periods that had significant contributions.

Conclusion

The cross-sectional return decomposition methods are more appropriate for equity portfolios; in most macro strategies the cross section tends to be very small. Decomposing a time series of returns to obtain loadings on various factors or assets gives a more comprehensive view of the characteristics of your portfolio over a time window.

 


Performance Attribution Revisited

Analyzing the performance of an investment strategy is an integral part of the investment process. Systematic decomposition of investment performance began with a seminal study by Brinson, Hood, and Beebower (BHB), in which they break a portfolio's return into several parts that disentangle timing, stock selection, and benchmark performance. Many variations soon followed, allowing similar decompositions that disentangle country selection, industry selection, and other groupings. The goal of this type of decomposition is to help investors gauge manager skill and to help managers analyze where their competencies lie.

BHB Method

The original BHB analysis broke performance down into timing and security selection for an active portfolio measured against a benchmark. Under BHB, the active weight times the benchmark group return is the asset allocation component, the benchmark weight times the active return is the stock selection component, and the remaining piece of the active return, the active weight times the active return, is the interaction component.
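
For concreteness, here is a tiny numerical sketch of that decomposition with two made-up groups (the numbers are arbitrary); the three components sum exactly to the active return.

```python
import numpy as np

w_p = np.array([0.60, 0.40])   # portfolio weights by group
w_b = np.array([0.50, 0.50])   # benchmark weights by group
r_p = np.array([0.04, 0.01])   # portfolio group returns
r_b = np.array([0.03, 0.02])   # benchmark group returns

allocation = (w_p - w_b) * r_b           # active weight x benchmark return
selection = w_b * (r_p - r_b)            # benchmark weight x active return
interaction = (w_p - w_b) * (r_p - r_b)  # active weight x active return

active_return = w_p @ r_p - w_b @ r_b
assert np.isclose(active_return, (allocation + selection + interaction).sum())
```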

Factor Decomposition

The fundamental process that drove the BHB analysis, and indeed performance attribution in general, is twofold: 1) separate out the parts of a portfolio's return that do not require skill (the benchmark), and 2) analyze the performance along various dimensions (allocation vs. selection). Both pieces have evolved dramatically since their original inception.

With the advent of quantitative investing and factor models, performance attribution along factor dimensions also became a common occurrence. This added a great deal of complexity to the problem. We wish to measure the contribution from each factor, but simply taking the dot product of the returns and the weights won't suffice, because each factor portfolio's return includes the effects of the other factors as well. Instead, we make use of regression analysis to disentangle the factors. Suppose we have a vector r of asset contributions to portfolio return and a matrix X of factor exposures. Then we can calculate the orthogonalized factor returns β according to the linear model

r = Xβ + ε,

where we solve for β via OLS:

β = (XᵀX)⁻¹Xᵀr.

Once we have the factor returns, it's a simple step to take the dot product of the factor exposures with the factor returns to obtain the factor contributions to portfolio return.

Finally, the residuals of this regression are considered to be return contributions from market noise, which on average should equal zero if the assumptions behind OLS are met or if the weights correctly account for the heteroskedasticity.
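
As a minimal sketch of this cross-sectional decomposition (assuming `contrib` holds the asset contributions, i.e. weight times return, and `X` holds the factor exposures as pandas objects; the names are illustrative):

```python
import pandas as pd
import statsmodels.api as sm

def factor_attribution(contrib: pd.Series, X: pd.DataFrame) -> pd.Series:
    """Cross-sectional decomposition of asset contributions onto factors."""
    fit = sm.OLS(contrib, X).fit()              # beta = (X'X)^-1 X'r, the factor returns
    factor_contrib = fit.params * X.sum()       # exposures dotted with factor returns
    factor_contrib["noise"] = fit.resid.sum()   # unexplained residual return
    return factor_contrib                       # sums to the total portfolio return
```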

Factor Decomposition and Risk

Because a factor risk model is also an indispensable part of the quantitative investing process, we must take the risk factor exposures into account alongside the alpha factor exposures. This helps the manager clarify and control risk along dimensions based on investing style, sector membership, and country membership. It also helps investors see whether the returns are truly coming from the alpha factors or from accidental bets on risk factors that are presumed to carry no return over the long run.

Thus we must extend our factor decomposition to include the risk factors as well:

r = Xβ + X^Ω β^Ω + ε,

where the superscript Ω signifies exposures and factor returns taken from the risk model.

This approach has one drawback: if the alpha factors are correlated with the risk factors, we run into multicollinearity issues. In a paper from Barra, Menchero and Poduri suggest regressing the risk factors on the alpha (or custom) factors and using the residuals in the regression. However, that method is not useful for disentangling the parts of a manager's strategy that simply coincide with risk factors that do not carry sustainable returns.

If we wanted to be really strict and look at strategy alpha not just above the market but also in excess of the risk factors, we could reverse the roles of the two types of factors and regress the alpha factors on the risk factors:

X = X^Ω Γ + E.

The residuals from this regression, Ẽ, will be used in our new factor decomposition model:

r = Ẽβ + X^Ω β^Ω + ε,

where

Ẽ = X − X^Ω Γ

with Γ the OLS estimate from the first regression.

The factor returns on these residualized alpha factors now contain only the return contributions that cannot be explained by the risk factors.
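
A sketch of this stricter variant, residualizing each alpha factor against the risk factors before running the decomposition; it reuses the same pattern as the factor_attribution() sketch above, and all names are illustrative:

```python
import pandas as pd
import statsmodels.api as sm

def strict_attribution(contrib: pd.Series, X_alpha: pd.DataFrame,
                       X_risk: pd.DataFrame) -> pd.Series:
    """Decompose contributions on [residualized alpha factors, risk factors]."""
    X_alpha_resid = X_alpha.copy()
    for col in X_alpha.columns:
        # strip out the part of each alpha factor explained by the risk factors
        X_alpha_resid[col] = sm.OLS(X_alpha[col], X_risk).fit().resid
    X_full = pd.concat([X_alpha_resid, X_risk], axis=1)
    fit = sm.OLS(contrib, X_full).fit()
    out = fit.params * X_full.sum()
    out["noise"] = fit.resid.sum()
    return out
```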

Optimization Constraints and Factor Decomposition

One could argue that, because of the constraints in the optimization process, bets along the risk factors are inevitable: we simply cannot achieve 100% of our ideal view portfolio. The short response is that, from the investor's point of view, it doesn't matter what the constraints are; what matters is the final value added. A more nuanced response should recognize that many constraints are client-mandated and often customizable, so it is important to be cognizant of the magnitude of their impact.

A proper treatment of constraint decomposition and its relationship with performance attribution requires a dedicated post, but I'll make two suggestions. One way to get a rough sense of how much the constraints change the attribution is to take the returns of a view portfolio (or of a theoretical unconstrained portfolio) and look at the differences in factor contributions between that portfolio's theoretical returns and the actual portfolio returns (paying attention to t-costs). Another way may be to include the "delta" portfolios associated with each constraint as additional factors in the decomposition regression. A sketch of the first approach follows.
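
This reuses the hypothetical factor_attribution() helper from the earlier sketch; contrib_view and contrib_actual stand for the contribution vectors of the theoretical and actual portfolios, the latter net of t-costs.

```python
import pandas as pd

def constraint_impact(contrib_view: pd.Series, contrib_actual: pd.Series,
                      X: pd.DataFrame) -> pd.Series:
    """Per-factor difference in contributions attributable to the constraints."""
    return factor_attribution(contrib_actual, X) - factor_attribution(contrib_view, X)
```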

Conclusions

We have progressed a great deal since the days when all we could distinguish was market performance from portfolio performance. With BHB and its variants, we could break performance down into timing, stock selection, industry selection, country selection, and passive benchmark positioning. As quantitative investing and factor models came to the fore, we moved on to regression analysis to decompose returns among the various factors. And finally, using a factor risk model and some simple modifications, we can also strip out the return contributions from bets along risk factor dimensions that don't carry returns.
