BISAM was pleased to present at last week’s Portfolio Risk & Performance Measurement Forum hosted by Financial Research Management. Our presentation focused on integrated performance and risk, and the role technology has played (and continues to play) in shaping the topic. We presented a number of options firms should consider as they deploy performance and risk technology, along with the pros and cons of each.
That same morning, the headline on page A1 of the Wall Street Journal declared “BlackRock Shake-Up Favors Computers Over Humans” (wsj.com). The article, whose details I’m sure all of you are familiar with, goes on to say that BlackRock “has taken the view that it is difficult for human beings to beat the market with traditional bets on large U.S. stocks”. Reacting to the trend of moving assets from active management to passive management, BlackRock is betting that quantitative fund management using big data and big computers (and their corresponding fee reductions) will stem the outflows. RIP humans.
One need not rely solely on financial publications like the Wall Street Journal for a perspective on the future of technology in investing. A January article published on wired.com gave a different perspective: that of the data that powers the robots and the engineers at the helm. The article describes the robots’ approach to investing: analyzing “everything from market prices and volumes to macroeconomic data and corporate accounting documents” to “vote on the best course of action”. And it’s not just a small number of disruptive start-up investment managers pursuing this approach; established names like Bridgewater Associates are “moving in the same direction”. Meanwhile, the engineers being trusted to lead this wave of disruption are Ph.D. scientists in fields as diverse as astrophysics and linguistics. We are witnessing a revolution in real time.
The purpose of this post is not to debate whether the rise of the robots will happen; the market will dictate that. The question is rather: how should we react? What should we expect? What should we demand?
Given the reliance on a large number of data sources to make quantitative decisions, from historical prices to tweets to weather reports, should a firm be required to disclose the type and source of that data? Will data sources be graded, with investors judging the health of a fund by the quality of its data? Will we see a quality rating for data sources?
The market has long settled on an attribution methodology to gauge the skill of a portfolio manager, and with it a set of common effects. Given the rise of computer-driven investing, is a new attribution model coming too? Should we be working to understand the “Robo Effect” of the portfolio, designed to uncover portfolio-construction skill over and above the speed of the trades?
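For readers less familiar with those common effects, a minimal sketch of the classic Brinson-Fachler decomposition illustrates the kind of model the market has settled on: active return is split into allocation, selection, and interaction effects per sector. The weights and returns below are made-up, illustrative numbers, not data from any fund.

```python
def brinson_attribution(sectors):
    """Decompose active return into Brinson-Fachler effects.

    sectors: list of dicts with portfolio weight (wp), benchmark
    weight (wb), portfolio return (rp), and benchmark return (rb).
    """
    # Total benchmark return is the weight-averaged sector benchmark return.
    total_bench = sum(s["wb"] * s["rb"] for s in sectors)
    effects = []
    for s in sectors:
        effects.append({
            "sector": s["name"],
            # Allocation: reward for overweighting sectors that beat the benchmark.
            "allocation": (s["wp"] - s["wb"]) * (s["rb"] - total_bench),
            # Selection: reward for picking securities that beat the sector benchmark.
            "selection": s["wb"] * (s["rp"] - s["rb"]),
            # Interaction: combined over/underweight and out/underperformance.
            "interaction": (s["wp"] - s["wb"]) * (s["rp"] - s["rb"]),
        })
    return effects

# Illustrative two-sector portfolio versus its benchmark.
sectors = [
    {"name": "Tech",   "wp": 0.60, "wb": 0.50, "rp": 0.08, "rb": 0.06},
    {"name": "Energy", "wp": 0.40, "wb": 0.50, "rp": 0.02, "rb": 0.03},
]
for e in brinson_attribution(sectors):
    print(e)
```

The three effects sum exactly to the portfolio’s active return (here 5.6% minus the benchmark’s 4.5%, or 1.1%). A hypothetical “Robo Effect” would presumably be a new term in a decomposition of this shape, isolating what the machine’s portfolio construction contributed beyond execution speed.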
Portfolio management has historically relied on humans to come up with investment hypotheses and on powerful computers to test them. But that is changing: computers are beginning to drive the process from end to end. Complementing the three predictions we made at the Portfolio Risk & Performance Measurement Forum, I can only ask myself: are we prepared for the next wave of technology-led disruption?
If you have any comments or have any questions about the subjects raised in this post, please fill in your details and message below. We'd like to hear from you.