Daniel Mahr – Glass Box Quant at MDT Advisers (EP.472)
Summary
Quant Investing: The guest pitches a disciplined, diversified quantitative stock-picking approach aimed at all-weather returns using transparent, glass-box decision trees.
Machine Learning: Extensive discussion of a 20+ year use of machine learning in equity selection, emphasizing a forest of shallow trees, transparency, and avoiding overfitting/underfitting.
Model Construction: Signals are generated via decision trees blending financing, momentum, volatility, and context factors like company age to create precise alpha forecasts.
Portfolio Optimization: Portfolios are built daily with an in-house optimizer that balances alpha, risk constraints, and trading costs, with careful attention to liquidity and market impact.
Factor Evolution: Traditional signals like book-to-price were phased out as intangibles rose, while nuanced effects such as momentum consistency and deep drawdown reversals are incorporated.
Data Philosophy: Focus on long-history, high-quality financials, prices, and analyst estimates over alternative data arms races; models trained on roughly 50 years of market data.
AI Tools: LLMs are not used for stock selection due to in-sample contamination risk, but AI co-pilots are explored to enhance software development productivity.
Market Outlook: The guest observes increased inefficiencies in recent years potentially tied to passive flows, retail trading, or pod shops, creating opportunities for active quant strategies.
Transcript
Our head of trading would come to me every day we bought a stock that was down 70 or 80% and say, "Dan, we need to override this trade. The company's CEO just resigned in disgrace." Or, "Dan, this is a one-product biotech company and they've just missed their target. It looks like the whole company's got nothing. We have to override these trades. We can't do them." That's when the light bulb goes off: this is precisely why this strategy works, because even quantitative investors who are intentionally trying to buy these stocks find it hard to overcome the human emotions involved with buying a bad story. I'm Ted Seides, and this is Capital Allocators. My guest on today's show is Daniel Mahr, head of MDT, the $26 billion quantitative equity investing group at Federated Hermes that oversees a suite of actively managed mutual funds, ETFs, collective investment trusts, and separately managed accounts. Dan joined the firm in 2002 as a junior analyst and took over leadership of the team six years later, guiding its evolution through vast changes in data, computing power, and investment methodology. Our conversation traces Dan's path from flipping IPOs as a college student to running machine learning models across global equity markets. We discussed the development of MDT's decision tree framework, a glass-box approach to stock selection that blends transparency with sophistication, and how the team balances analytical rigor with human judgment. Dan explains lessons from two decades of modeling markets, including the challenges of overfitting and underfitting data and MDT's steadfast focus on analytical edge rather than informational edge. Before we get going, it's that time of year when we turn to traditions, like the tradition of Thanksgiving, gathering family and friends to share what we're thankful for. One of which is the ability to eat enough turkey and tryptophan to fall into one of the best slumbers all year: that nap on the couch while watching football.
While I'll spend the turkey days with my family, I'm particularly grateful this year for the incredible team of professionals that brings together what you hear and experience with Capital Allocators. That's Hank, our CEO, Morgan, our head of ops, Tamar, our head of business development, and Liz, our head of content. I'd put our starting five up against any NBA All-Star team. Off the court, our values of quality, entrepreneurial spirit, intellectual curiosity, respect, generosity, and fun win championships. Although I can't say the same on the court for my getting dunked on, because at 6'1", I'm the least vertically challenged of our starting five. Outside the office, I'm grateful for you for listening, engaging with our guests, and sharing kind words all year long. This podcast is the gift that keeps on giving. So, before you start the mad year-end dash to calculate performance, conduct 360 reviews, and shop, take this time to be grateful for the many gifts in your life. As my friend Dasha Burns recently shared on the occasion of my 55th birthday, may the best of your yesterdays be the worst of your tomorrows. While you're feeling all warm and fuzzy, don't forget to spread the word about Capital Allocators to those closest to you to give them the gift that keeps on giving. Please enjoy my conversation with Daniel Mahr. >> Dan, great to be here with you. >> My pleasure. >> I'd love you to take me back to your original path leading into your involvement in investing. >> I was a kid who loved numbers. I'm old enough that my parents subscribed to a physical newspaper every day, and I would pore through every section that had numbers, whether it was sports or business. When I got to college, I had my first experience with investing. I was a freshman at Harvard in 1998 going into 1999.
I came across a realization that there was an interesting opportunity to be had, which was that there were a lot of IPOs in that market environment, the dotcom bubble. They would go up 100%, 200%, or more on the day that they priced, and there were a small number of investment firms that would get allocations to these IPOs and would offer them first come, first served. As a college student, I had a faster internet connection than just about anyone else in the world, and I had a lot of flexibility in terms of my time. I managed to get a number of allocations to those hot IPOs. You'd get 100 shares, but as a college student, that seemed like great money. I studied computer science and was really interested in being a software developer. As I pursued the internships, I realized what was really important to me was that I found the industry and the work and the product to be exciting. I had a summer internship where I was building mobile internet apps, which you might think would be exciting until you remember what a cell phone was like in the year 2000. As I progressed, I realized that quant investing was a field where I'd be able to blend the investment markets with my software background. And I haven't looked back. >> So, in that period of time, you're flipping hot IPOs. There's a lot happening around the internet, a lot of excitement. How did you think about leaning into that compared to leaning on the quantitative background and going into finance that way? >> The experience of being invested in those IPOs made me convinced that I was a great technology investor. For as much as those hundred-share allocations led me to some nice wins in my portfolio, I would then translate it into some giant losses by straying from the original thesis and believing that I was an expert in something that I was certainly not. That experience was also very formative in my appreciation of a more disciplined, systematic, rigorous approach to investing, which I found in the quant space.
>> So how'd you get started? >> I joined a firm called MDT Advisers in 2002 as a junior analyst. MDT was a pioneer in the quant investing space. They had been building a strategy since 1991, very early practitioners of quant investing. I worked closely with the founder of the strategy for a number of years. We were acquired by a firm, Federated Investors, now Federated Hermes, in 2006. I took over running the team when the founder retired in '08. >> If you look at the long history of quant, and you can go back to MDT's founding in the early 90s, how do you think about what's the same and what's changed in the application of quantitative investing? >> One thing that people sometimes miss about quant is that there's a lot of diversification in terms of quant strategies. There are certainly folks out there who are still applying strategies that are very similar to what was applied in the early days of quant. But over the decades, there's been such an explosion in processing power, in data, and in algorithms that marry those two things, that the types of strategies and the sophistication of strategies that can be run have really exploded over the years, aided by those tailwinds of processing and data. >> I'd love you to walk me through your journey of what the investment strategies looked like using the quantitative tools available when you joined in 2002 and how that's evolved to today. >> In 2002, we had made a big transition at MDT. For the first decade, the strategies were traditional factor tilting strategies. There was a formula that used a small number of characteristics, and the portfolios would tilt toward them. Those strategies generated a pretty good outcome, but it was lumpy. As with many quants, the strategies had a difficult time in 1998 and 1999, as value and quality were not well rewarded by the market.
The firm started looking for a differentiated approach, something that wasn't relying on the factors always working because the portfolios were always tilted in the same way. That led us to the decision tree approach that we still use today, albeit in a much evolved way from where we started in 2001. >> Why don't you walk me through what that means, a decision tree approach applied to stocks? >> So, a decision tree, these are things that people have probably seen. It's just a series of yes and no questions about characteristics that lead to a forecast or an outcome. A common place that they're used is in an insurance setting. In the life insurance industry, you may want to build a model to predict longevity. Decision trees are often used in that space. The first question might be on age. Depending on how you answer that question, whether you're above or below a certain age, there will be differentiated questions that are asked to help provide decision making that is as precise as possible. For folks who are above the age of 65, the risk factors tend to be different than for people who are under the age of 65. If you're a smoker, there will be questions about how much you smoke and how long you've been smoking that won't be relevant for people who don't have that characteristic. Translating that back to the stock world, instead of asking about risk factors for longevity, we're asking about the characteristics of companies. And depending on how those questions are answered, the lines of questioning will evolve based on what's relevant to those types of companies. >> Before we dive into that approach, I'd love to take a step back and ask how you think about an investment philosophy tied to quantitative investing. In particular, what do you believe that leads its way into your investment approach?
>> Very succinctly, what we believe is that a disciplined quantitative approach to stock-picking can lead to an analytical advantage that will help us generate superior portfolio outcomes, in the sense of generating more all-weather type portfolio returns. >> And what is it about the analytical approach that leads you to believe that? >> When you think about how you construct a portfolio that is going to be able to perform well in lots of different market environments, there are two approaches to doing that. One is to be able to predict what the market environment is going to be with a fair degree of accuracy and then tilt your portfolio ahead of time to be in the right stocks, the right sectors at all times, a global macro, crystal-ball approach. There are investors who do that. It's not in our wheelhouse as quants. The other approach is very much on the other end of the spectrum, of leaning on diversification, of not having a reliance on any one company, any one sector, any one type of stock to be able to drive your portfolio outcome, and of diversifying across companies with differentiated alpha drivers. That gives you the opportunity to have a portfolio where you'll have a fighting chance at performing well no matter what market environment comes. >> In a world with much more computing power than we've had in the past and more people able to look at these types of strategies, how do you think about what differentiates your approach from other participants in the market? >> What differentiates us at MDT is the use of machine learning. AI and machine learning are very hot topics right now. I read academic journals. I see what competitors in the space are publishing. And there's a lot more enthusiasm than there was 5 years ago for competitors in the space to be adopting or at least researching some of these technologies. But that said, at MDT, we've been using these machine learning tools since 2001.
So, we have a 24-year head start on someone who is new to the game. We have learned a tremendous amount over the last 24 years about what the advantages are, but more importantly what the potential pitfalls are, of using these powerful but also finicky and sometimes misled algorithms in a noisy data space like forecasting stock returns. >> What are some of those important pitfalls that you've learned along the way? >> There are a pair of very related issues in the data science space, which are overfitting and underfitting. Obviously, they're two sides of the same coin. It's easy to not build a model that's overfit just by having a very simple model, but that very simple model is going to leave a lot of explanatory power on the table. It's going to be underfit. That's a problem that gets less press than overfitting, but is a significant one nonetheless. Figuring out what techniques can allow us to strike the right balance between having a model that's too complex versus having a model that's not complex enough is something that we have put a lot of thought into and evolved significantly over the decades. Our view in the machine learning space is that transparency is exceedingly important, to understand precisely how these models are working. A common epithet that gets thrown at us in the quant investment management space is that we're using black boxes. At MDT, that is not the case. We like to position our investment strategies as being a glass box. There's a lot of machinery on the inside, but we can see into it. We can see how it's working and understand what's driving all of the decision making on a day-to-day basis. >> I'm curious how you go about doing that. When a computer program like AlphaGo beats a human, it makes a move that nobody really understands. So, there's this power of the machine learning figuring something out that you couldn't as a human, which also means it would be hard to know what it is.
So how do you strike this balance between the black box and the glass box? >> The algorithms behind AlphaGo are not a decision tree. Machine learning, one of the advantages of that field is that it is able to discover insights on its own. And generally speaking, when we have a new research idea that makes it into the model, we have ex ante figured out why we expect this idea to add value, or not. Occasionally, we're modestly surprised by what comes out of the research process. A number of years ago, we started adding price-based factors to our model. The price-based factors found momentum effects, as was published in the academic literature and as we fully expected to see. But they also found some very powerful reversal effects: companies whose share prices were down 70 or 80% over the last year, in combination with other characteristics such as value and quality, could see strong outcomes. We didn't quite have a feeling for it, but we beat up the data and we convinced ourselves that it was worth implementing. As soon as we started trading stocks that fit that profile, it immediately became obvious, because our head of trading would come to me every day we bought a stock that was down 70 or 80% and say, "Dan, we need to override this trade. The company's CEO just resigned in disgrace." Or, "Dan, this is a one-product biotech company and they just missed their target. It looks like the whole company's got nothing. We have to override these trades. We can't do them." That's when the light bulb goes off: this is precisely why this strategy works, because even quantitative investors who are intentionally trying to buy these stocks find it hard to overcome the human emotions involved with buying a bad story. >> I'd love to tease through how you go about the investment process, and we'll just go top to bottom as you're building a model to try to understand what stocks are likely to outperform.
How do you come up with the ideas that you want to test quantitatively to see if they make it into your model? >> There are two big sources of research ideas for our process. Certainly, we read all of the academic and practitioner literature in the investment finance space, and occasionally we get some good ideas out of seeing what's published. More often than not, we test an idea and either it's not replicable when we look at it with our data set, or something else in our model essentially captures the same underlying effect. Where we find more value typically is when we generate ideas that are driven by our own observations on the behavior of our strategies. That's one of the advantages of having the long history. We've been running our strategies for over 30 years now. The observations that we've made across multiple different market cycles over those decades have informed meaningful enhancements to the process. >> When you build out your model, do you start with a couple of core factors that you believe to be true all the time and then build from there? As you build the model, how do you construct what those inputs are? >> It's driven by machine learning, which leads us to a differentiated view on factors compared to a lot of other investors. One of the most unusual factors that we use we call company age. We measure that simply as how long the company has been publicly traded and/or filing financial statements. It's an unusual factor. To the best of my knowledge, few quant investors use that in their models. Also, very few traditional portfolio managers, to my knowledge, explicitly take the company's age into account when they're formulating their views. And there's a reason for why it's unusual, which is that on its own, company age tells you nothing about whether a company is going to outperform or underperform. Companies don't stop performing well because they've hit some magic number of years since their IPO. Why do we use this factor in our model?
The power of the decision tree is that it allows you to make use of factors that don't explain returns on their own but can give you context on how to explain returns. What we find is that the important questions to ask about young companies, companies that are within 10, 15, 20 years of their IPO, are a little different than the important questions to ask of companies that have been around for 50, 80, 100 years. The decision tree gives you that framework to say, if you're a young company, let's ask these questions, but if you've been around for 80 or 100 years, let's ask a different set of questions. Valuation is an important differentiator. Valuation is a lot more important for companies that have been around for a long time than for a brand new entrant to the public markets. >> How much of those model inputs come from your own qualitative insights of what should matter? >> The selection of factors, the potential questions that can be asked, is driven by the investment team. That's a major area of focus for us on the research side. Once we present that list of factors to the algorithm, it's completely mechanically determined. A lot of times we'll have an idea that a new factor will help the model improve its forecasting, and the decision trees will simply say, "Nice try, guys, but I don't find a lot of profitable questions to ask about this factor. I'm going to ignore it." In terms of how it decides to use the factors in relation to all the other characteristics, that's 100% driven by the algorithm. >> How do you go through the process of retesting a factor that's working to see if the market catches up or it no longer works? >> We do occasionally remove factors from our modeling. The reasons you'd do it are, first, that the factor no longer works for one reason or another, whether you were mistaken or whether markets have evolved. Occasionally we'll remove a factor if we add something new that captures a correlated underlying effect.
An example of that first reason: we used book-to-price in our models going back to version 1.0 in 1991. But as markets evolved, and more importantly as the economy has changed, we saw less and less explanatory power from incorporating that in our model. And we've had an intuitive sense of why that factor seemed to explain returns in data through the 1970s or '80s but maybe doesn't work now, given how the intangible economy that's arisen in the decades since changes how companies trade on their book values. >> In the process of going from book-to-price being an important factor to not being in the model, what's the process to toggle it on and off compared to decreasing its importance in the construction of the model? >> It's very data-driven. The process of removing a factor from the model is just the inverse of the process of adding a factor to the model. When we have a sense that a factor is working less well, generally that sense comes from the fact that we don't observe decision making being driven by that factor on a day-to-day basis. When we review our trades every morning, year after year, we see fewer and fewer trades that are being driven by this one factor. That's the value of the glass box, being able to understand what's driving the decision making. When we have that intuition that a factor has decreased in efficacy, we'll run that research project and say, well, the model seems to be making less use of this over time. What if we made zero use of it? What if we removed it from what we present to the algorithm? How does that impact our research results? How does it impact the returns and the risk that we generate from our backtest? If we see that we can remove a factor from the model and have very little or no impact on portfolio outcomes over the course of decades, that gives us confidence that it's a factor that no longer needs to be there.
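[Editor's note: the factor-removal test Dan describes, retrain without the candidate factor and compare backtest outcomes, can be sketched roughly as follows. All names and numbers here are hypothetical illustrations, not MDT's actual pipeline.]

```python
# Hedged sketch of a factor ablation test: compare a backtest with and
# without one factor; a negligible difference supports removing it.
# The contributions below are made-up illustrative numbers.

def backtest_outcome(factors):
    """Stand-in for a full backtest: returns a hypothetical annualized
    alpha given which factors the model is allowed to use."""
    contribution = {"financing": 0.9, "momentum": 1.1, "book_to_price": 0.02}
    return sum(contribution.get(f, 0.0) for f in factors)

full = ["financing", "momentum", "book_to_price"]
ablated = [f for f in full if f != "book_to_price"]

# If removing the factor barely moves the outcome, it no longer needs
# to be presented to the algorithm.
impact = backtest_outcome(full) - backtest_outcome(ablated)
print(f"alpha impact of removing book_to_price: {impact:.2f}")
```

In a real research setting the comparison would of course run over decades of returns and risk statistics, not a single number, but the decision rule is the same shape: zero out the factor and ask whether portfolio outcomes change.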
As you work through and test all these potential signals for your model, I'm envisioning peering into the glass box and seeing a huge piece of paper on the wall with all these decision trees and different questions and nodes that gets down to a signal somewhere. What does that ultimately look like, in the sense of how many theses are at the top and how they work their way down into different nodes, if you could actually visualize what's inside this glass box? >> We started our decision tree journey with one tree. Over the years, with faster processing power and more advanced algorithms, we've been able to improve the forecasting by relying on algorithms that employ a forest of trees. Back in the one-tree days, we would print out the tree and tape it on the wall of our trading room. Every time we were reviewing a trade, we would simply walk through the sequence of questions on that paper tree on the wall to help inform what specifically was motivating every trade that happened in our portfolio. As you move to a forest of trees, we can't put a thousand paper trees on the wall anymore. We've built some tools, some analytical helpers, to synthesize and summarize what's happening across the thousand trees. At the end of the day, you could go through that exercise, though it would be tedious, walking through tree by tree to see whether anything has changed and specifically what, and dig in on the data updates that are driving every decision that happens in the portfolio. >> And in one individual tree, to get from the top to the bottom, how many different decision points and nodes are there? >> Typically we ask between two and five questions in each tree. The reason we don't ask more questions is we found that as you ask questions deeper and deeper in the tree, you're working on smaller and smaller pools of data, because the trees are customized to the branch of the tree that you're working down.
If you think about trees breaking up 50/50, at the second layer of the tree, each question is motivated on half of your original data. Down another layer, it's a quarter. Down 10 layers, each question is going to be motivated on about 1/1,000th of the data. Down 20 layers, you would be operating on about one millionth of the data. You can quickly see that there's a sharp limit to how deep you want to make these trees. Fortunately, we have another approach to asking more questions about companies, which is rather than relying on a very deep tree, relying on a forest of relatively shallow trees. >> So, if you either took me back to that original tree or took one of the trees in the forest, I'd love you to walk through an example of what those two to five questions end up being, as you tracked one company from the top of the tree to the bottom. >> The questions, they're asked in sequence and in context. At the top of the tree, you're going to ask a question of all companies. You're going to want a question that is relevant to explaining returns for big companies, small companies, growth companies, value companies. A common question we'll ask at the top of the tree will be about a company's use of financing, whether they're issuing debt and/or shares or buying those things back. That's a good question to ask of any company. We find, as the academics have, that companies that are engaged in significant amounts of financing tend to underperform and those that aren't have better outcomes. Down both of those branches, the algorithm proceeds in the same way. For the companies with a high level of financing, it tries to figure out what are the right questions to ask to separate good companies from companies that are going to underperform. Same on the non-financing side. That's very important: we don't give up on the high-financing companies just because the odds are stacked against them.
When the algorithm continues down that branch, what it finds is that a lot of companies with significant financing do underperform, but there is a class of stocks that outperform despite the financing. And generally speaking, it's the strongest momentum companies that can generate good outcomes regardless of the financing. It makes intuitive sense. When investors are looking at the strongest, highest-growth companies, they don't punish them for a little bit of share issuance, which often takes the form of stock-based compensation for their employees. Down the other branch of the tree, though, it's not all going to be about momentum. We're going to find some strong, differentiated groups of companies down the other branch of the tree to pair with that particular set of high-alpha stocks on the financing side. >> If you go down that next level, if you're in the high-financing, good-momentum branch, what might a next question be to determine, in that subset, which companies are likely to outperform? >> Typical questions would be about volatility. We tend to find momentum works better when it is consistent. When the stock price is rising in a consistent manner, it leads to better outcomes than for companies that have one giant price move driving the momentum measurement. Company age also comes into play there. We find that momentum typically is more meaningful when you're looking at newer companies than companies that have been around for a long time. They're generally higher-growth businesses. They are more often in industries that are evolving. Knowing that the sentiment is strong around those companies is an even more positive indicator of future returns than knowing that a company that's been around for 100 years had a good quarter. >> How do you decide at what point it's worth going down another level compared to ending the tree and having your signal at the third level or the fourth level? >> The stopping rules on question asking are mechanical.
The whole process is mechanical. In our models, we stop for two reasons. One is we have a hard limit at five questions. After you've asked five questions, you're done. There's also a limit that if you've asked a question and created a branch that has too small a pool of data, that will also be a reason to stop. The questions don't have to break companies up 50/50. Occasionally, we'll ask a question about extreme price returns, whether on the positive or negative side. Generally, there aren't that many companies that have extreme returns, but they're very interesting, so we will see some questions that pull out relatively small groups. >> You can imagine this toggling back and forth of questions, between fundamental data like financing and pricing data like momentum, and back and forth, so that you could have these thousands and thousands of trees and different questions you could ask. How do you then create a portfolio from all those signals, aggregating thousands of different trees? >> We're going to use technology. We have a portfolio optimizer that we've built that takes into account a couple of key things as it is constructing portfolios every day. It takes into account the alpha forecasts. It's a precise numerical forecast that comes out of the decision tree model. We're taking into account risk management. We use a set of hard risk constraints that are consistent across all of our portfolios, as well as a statistical risk model predicting the volatility and the tracking error of a portfolio, so that, all else equal, we will prefer portfolios that have more consistent outcomes than portfolios that have volatile expected outcomes. We also take into account trading costs. We want to make sure that we're not repositioning the portfolio unless we think that the improvement that we're getting from an alpha and risk perspective will compensate us for not only the visible costs of trading, spreads and commissions, but also the less visible cost of market impact.
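[Editor's note: the three-way trade-off Dan describes, alpha versus predicted risk versus trading costs, can be sketched as a tiny one-stock toy. The function, parameter values, and grid search below are hypothetical illustrations, not the in-house optimizer, which solves this jointly across all holdings under hard risk constraints.]

```python
# Hedged sketch of a portfolio optimizer's objective for one position:
# expected alpha, minus a risk penalty, minus the cost of trading.
# All parameter values are made up for illustration.

def objective(new_weight, old_weight, alpha, risk_aversion, variance, cost_per_unit):
    """Alpha earned at the new weight, minus a quadratic risk penalty,
    minus the cost of trading from old_weight to new_weight
    (spreads, commissions, market impact)."""
    trade = abs(new_weight - old_weight)
    return (alpha * new_weight
            - risk_aversion * variance * new_weight ** 2
            - cost_per_unit * trade)

# Coarse grid search for the best target weight for one stock,
# starting from a 1% position.
old_w = 0.01
candidates = [i / 1000 for i in range(0, 51)]  # 0% to 5% in 0.1% steps
best = max(candidates,
           key=lambda w: objective(w, old_w, alpha=0.05,
                                   risk_aversion=10.0, variance=0.1,
                                   cost_per_unit=0.002))
# The cost term keeps the optimizer from repositioning unless the
# alpha/risk improvement pays for the trade.
print(f"target weight: {best:.3f}")
```

Raising `cost_per_unit` (a less liquid stock) pulls the target back toward the current position, which mirrors the point made above: the improvement has to compensate for both visible and invisible trading costs.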
How do you assess the relative importance of the signal from the model versus market impact in trading? >> We have a trade-off that's embedded in the optimization that captures that dynamic. Market impact, specifically where it intersects with portfolio construction, shows up first in ultimate position sizing. Companies that are less liquid will tend to have smaller overall positions. But it also impacts the speed of trading to get to those positions. The biggest, most liquid names in the world trade in some of our portfolios. It's easy to trade tens of billions of dollars of multi-trillion-dollar stocks. Whereas some of our strategies are involved with small companies. We run small-cap strategies and a micro-cap strategy. In those spaces, we deal in some very illiquid stocks. And it's important there not only to think about the ultimate size of the position, but how quickly you trade to those positions. >> In several of the things you've mentioned along the way, there is human judgment that's coming into play, whether that is the risk constraints in the model, or news coming out about a company where you say, "Well, that's not what the model is trying to signal." How do you think about the degree to which your human judgment should override anything that comes out of the model? >> We take a data-oriented view on that. We try to put all of the potential overrides that we might make to the decision making of the model as much as possible through the lens of data. When we're thinking about trades, we're thinking about specifically what data inputs lead into what factors that are driving the decision making. When a company that we're trading has reported great earnings, we want to dig into, okay, well, how are those great earnings going to impact all of the factors in our model at the next level? How will those factors changing impact the decision making that comes out of the trees?
It's often the case that we're trading something and they've just reported great earnings, but we are buying them for reasons that have nothing to do with analyst forecasts. Whether the analysts raise their forecasts a ton or whether they make modest updates can be irrelevant for certain of the trading that we're doing in our portfolios. That's the value of the glass box: being able to see how the decision making is being made allows us to be precise in terms of how we think about potentially stepping in and overriding the model. >> How do you think about the reflexivity of other quantitative participants and different models as it impacts what you're doing? >> When we think about that reflexivity in the quant space, there's a tendency to conflate natural fluctuations, good performance, bad performance with quant strategies, which can be true of any investment strategy, with the impacts from running a strategy with leverage. When people talk about the most famous quant blowups of all time, Long-Term Capital Management, the quant quake in August of 2007, what they're highlighting are events that were caused by a period of underperformance for a quant strategy, but were magnified by the use of leverage in those strategies. If Long-Term Capital Management hadn't been running a 50x leveraged strategy, they wouldn't have ended up in the trouble that they ended up with. Similarly, in the quant quake, the paper that was published on that was written by Andrew Lo, who's famous, from MIT, and a gentleman by the name of Amir Khandani, who, in a the-world-is-small anecdote, happened to be my roommate at the mobile internet startup that I worked at in college. But the run on quant strategies that happened was precipitated by the fact that statistical arbitrage strategies had gotten more crowded over the years leading up to 2007. In response to that, certain managers began running those strategies with additional leverage. And leverage doesn't just blow up quant strategies.
It's equal opportunity. No one panics out of traditional portfolio strategies? When Archegos blew up last year, there's a tendency for folks to think that quants are all one and the same. There's a lot of variety under the covers. >> How much do you think about what your competitors are doing in an attempt to be the best you can be? >> We don't spend a lot of time worrying about what our competitors are up to. Where we do tend to pay attention would be when competitors are publishing strategies that specifically sound like they're encroaching on our space, which is a machine learning approach to traditional, fundamentally based investment portfolios. >> The impact of leverage on trading strategies, to tie to what we were talking about before, feels like it may exist in the hedge fund pod shops today. And I'm curious if you've seen any changes in market structure that are impacting how you invest as those strategies have grown in size over the last bunch of years. >> I wouldn't say that we have felt a huge impact from pod shops per se. It does feel like the markets are different in the last couple of years than they were a decade ago. If you asked me five years ago, are markets on a never-ending trend towards efficiency and is your job as a systematic investor going to get harder and harder every single year? I would have said absolutely, because that's the way it had gone for decades. Traditional factor tilting became commonplace, and in order to have an edge, it got harder and harder every year. Something feels like it snapped in the last couple of years. I wish I knew what it was. It could be the rise of pod shops for all I know. It could be that we've hit a tipping point in terms of passive management in the equity space. It could be the rise of retail trading and meme stocks and Robinhood. It could be all of those things wrapped up in one. The good news is we don't need to know what's driving it.
The important thing is having strategies that are active and that are able to take advantage of inefficiencies when they present themselves. >> I'd love you to walk me through what your day looks like, because there are aspects of what you're talking about that are seeing what the model does, and then there are other aspects of observing the outputs based on a trade blotter. So as you go through a typical day in your life of managing the portfolios, what does that path look like? >> Every night we download updated data from all of our vendors. We recalculate all of our characteristics. We run all the companies in the domestic equity market through our forest and have updated forecasts. Every portfolio that we run is reoptimized and generates a trade list. The first thing every day is the trade review process. We're not doing trade review from the perspective of interjecting our own subjective behavior on what trades we think should happen and which shouldn't. What we're after in that process is making sure, number one, that the data is correct; number two, to be able to understand the dynamics of the model and what's driving our trading; and also to make sure that there's not news out there in the marketplace that our data inputs do not see, but that will impact a company and eventually probably will impact the data itself once it's updated. A company announces an acquisition; it can take upwards of a year for that deal to close. Once it closes, there's a lag until financial statements that reflect the deal are filed. We can get a jump on the data by using our own eyes occasionally. The rest of the day, for most of the team, the focus is on research. It's on idea generation and execution on those ideas, thinking about how we improve various aspects of our model. Having been in the business since 1991, we've pretty much used proprietary tooling for all the components of our process.
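The nightly cycle described above (download data, recompute characteristics, run every company through the forest, reoptimize, emit a trade list for morning review) can be sketched as a small pipeline. Every function and field name below is invented for illustration, and the forecasting and optimization steps are toy stand-ins, not MDT's logic.

```python
# Hypothetical sketch of the nightly cycle described above. All names and
# the toy forecasting/optimizing steps are invented for illustration.

def recompute_characteristics(raw):
    # 1. turn refreshed vendor data into model inputs (e.g. momentum)
    return {ticker: {"momentum": row["ret_12m"]} for ticker, row in raw.items()}

def forest_forecast(x):
    # 2. stand-in for running a company through the forest of decision trees
    return 0.1 if x["momentum"] > 0 else -0.1

def optimize(forecasts, n=2):
    # 3. toy optimizer: equal-weight the n highest-alpha names
    top = sorted(forecasts, key=forecasts.get, reverse=True)[:n]
    return {t: 1.0 / n for t in top}

def nightly_cycle(raw, holdings):
    chars = recompute_characteristics(raw)
    forecasts = {t: forest_forecast(x) for t, x in chars.items()}
    target = optimize(forecasts)
    # 4. the trade list is the difference between target and current
    #    holdings, which the team reviews each morning before execution
    tickers = set(target) | set(holdings)
    return {t: target.get(t, 0.0) - holdings.get(t, 0.0) for t in tickers}
```

The trade review step sits between this output and execution: it is a data-correctness and news check, not a discretionary veto, as Dan emphasizes.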
Back in the '90s, there weren't third-party software vendors trying to sell you backtesting engines, risk models, trading cost forecast models. Everything that we use is built in-house. That gives us a lot of flexibility and breadth in terms of the idea generation and what we can consider doing in terms of making enhancements to the process. It's not all about the factors that go into the stock picking, even though that's the most exciting part of research. >> In that research piece, there's a wide swath of data on fundamentals and technicals, the stock price. Then you've had this whole explosion of alternative data sets. I'd love to hear how you've thought about the value and integration of alternative data into your research. >> In the markets, there are informational edges and there are analytical edges. It's not black and white between the two of those things, but generally speaking, when people get excited about big data, it's because they're excited about an informational edge. They're excited about finding some new source of information about companies that is going to drive returns, that maybe a lot of people don't know about yet. There are a lot of investors who are successful pursuing informational edges, but it can be a little bit of an arms race. These new data sources can be expensive, and when too many people find out about them, depending on the size of the mispricing related to that data, the ability to generate returns can get diminished over time. We have intentionally focused not on that arms race but on the analytical piece, through the use of decision trees and machine learning. The data that feeds our models is oriented towards the longest-history and highest-quality data sets that are out there in the quant space: financials, prices, analyst estimates. It's not to say that the big data explosion doesn't have any value. It certainly does, and lots of people are a testament to that. But it's important to know what your edge is.
And for us, it's using these sophisticated machine learning tools to tease out differentiated insights about companies to have differentiated alpha sources. These algorithms are extremely data hungry. It's really important when you're building these machine learning models on noisy data, like forecasting equity market returns, to give them as much data as possible. We train our models on roughly 50 years' worth of data. I say that to some potential investors and they're surprised. We think that market data from the 1970s and '80s is still useful for forecasting mispricing. It is, in the sense that these machine learning tools become more robust the more data, the more different market cycles, and the more contexts of investing they're able to be trained on. >> When you've been working with machine learning models for a long time, what does the introduction of ChatGPT change, if anything, in the way you've approached what you've done? >> Large language models, and ChatGPT specifically, are not anything that we're presently making use of in our modeling. One of the big challenges for folks who are trying to use those types of models in a stock-picking context is the problem of in-sample versus out-of-sample. Especially if you're using a commercial model, you don't have any control over what data that model was trained on. When you're running a backtest through the better part of the last decade, ChatGPT knows that Nvidia became a multi-trillion dollar company. ChatGPT knows what the megatrends were in the economy and the market over those time frames. It's not realistic to trust a backtest that ChatGPT generated. That said, there are exciting things going on in the AI space. And we use a lot of proprietary software and tools in our investment process. One area in AI that is really appealing to us is the idea of software development co-pilots, the idea that AI can enhance software development at an organizational level.
We're a small team with a lot of software, and any ways in which we can improve efficiencies there are valuable to us. >> What new research are you excited about? >> We are doing a lot of research in the factor space. We used to use stock ownership to drive some of our factors, and that stopped working at a certain point. But we're coming back around to the idea that knowing who owns the stocks that you're contemplating investing in might be able to tell you something about how to evaluate opportunities there. We are also looking in the AI space at ways that it can enhance productivity but also idea generation. It's probably still a ways off before we're asking large language models to suggest stocks for the portfolio. It's important to be open-minded about the possibilities. Computers are going to keep getting faster. Data is going to keep getting more prevalent and accessible. And the algorithms intersecting those two things are going to keep getting more powerful. Even if something seems far-fetched today, that doesn't mean it won't be possible 10 or 20 years from now. >> What do you find more challenging today than when you started your career? >> One of the most challenging things is on the team-building side. Twenty years ago, talented data science-oriented programmers were not in demand by every single other firm in the entire world. We had a much easier time finding junior analysts to join our team. Recruiting has gotten more difficult, given that the skill sets we're looking for are in a lot more demand. In response to that, we've also tried to adapt and be more flexible in the types of people that we're looking for. In the same way that folks with data science backgrounds and AI knowledge are super in demand, the software programming space has hit a little bit of a soft patch. There's a lot of opportunity to hire great engineers these days.
We're strategically trying to lean into where we see the market for talent presenting opportunities to us. >> What continues to excite you and keep you motivated to keep getting better? >> The markets are exciting every single day. I've been at this for 23 years. We have learned a lot and improved our models significantly over those decades. But you're never going to solve the financial markets. There's always new information out there. There are always curveballs coming from a macro perspective, risks that you had never seen before that all of a sudden manifest themselves. From my perspective, it's a great place to be and a really exciting place to be applying my technical background. >> Dan, I want to make sure I get a chance to ask you a couple of closing questions. What is your favorite hobby or activity outside of work and family? >> So, I am a home brewer. I took that up shortly after we bought our first house and I had enough space to store all of the equipment. We brew probably a dozen batches of beer a year, mostly trying to focus on what you don't find at the store all the time. My wife planted a sour cherry tree in our yard at the time that we moved in, and it's turned out to be wildly prolific. We pull upwards of 80 pounds of cherries off that tree every year. I do her a favor by using some of them to brew interesting sour cherry beers. >> Which two people, other than your wife, have had the biggest impact on your professional life? >> I've worked at MDT my entire career, and I was really fortunate to have two mentors from day one: David Goldmith and Sarah Stall. David was the founder of the quant group and the CIO. Sarah was one of David's first hires, who led the analytical and portfolio attribution effort here for many years. What was great about the two of them was that they were incredibly different from one another as mentors. David was the mad scientist of our group. He would be thinking about algorithms 24/7.
He'd come in and tell us about the idea he had while he was in the shower. Sarah was also very brilliant, in a less wild and unconstrained way. She was very meticulous, very focused on craftsmanship and understanding precisely what was driving the returns of our models. They were both great mentors and helped me appreciate that success in investment management is not all about being the brightest and having the most genius ideas. There are a lot of geniuses who failed. It's not just about meticulousness and craftsmanship either, but both of those things are very important to success in this business. I'm really indebted to David and Sarah. >> How has your life turned out differently from how you expected it to? >> I would have never expected that, 23 years on from graduating college, I would still be a quant working at MDT Advisers. We go through an exercise every few years with my graduating class where we publish a book on what everyone's been up to the past five years. I think it's just me and a fellow who's worked at Microsoft for 23 years who have gone down the career route and stayed in one place. It's been a phenomenal ride over the decades, a career that has managed to grow with me at every step where I needed it to. I'm really fortunate that things turned out this way, even though I would have never guessed it. >> Dan, last one. What life lesson have you learned that you wish you knew a lot earlier in life? >> I've always been an incredibly competitive person. And when I was young, I would take setbacks very hard. And frankly, I see that in my kids, too. They come by the competitiveness honestly. And it's hard for them every time a little thing goes wrong. I wish I knew earlier on that life is a journey and that no one wins everything. Often doors that seemed closed open in time. Sometimes the path that you end up on as an alternative ends up being the right path. I try to stress that with my kids as much as I can when I see them having the same struggle as I did.
>> Well, Dan, thanks so much for sharing your insights on this quantitative approach to investing. >> My pleasure. It was a great conversation. Thanks, Ted. >> Thanks for listening to this sponsored insight. Sponsored episodes are paid opportunities for another 12 to 18 managers a year to appear on the podcast. If you're interested in telling your story in front of the largest audience of investors in the industry, please email us at team@capallocators.com to apply for one of the slots. An important disclaimer: views are as of November 2025 and are subject to change based on market conditions and other factors. These views should not be construed as a recommendation for any specific security or sector. Investments involve risk and may lose value. Past performance is no guarantee of future results. There can be no assurance that quantitative investing will be a successful investing approach. The quantitative models and analysis used by MDT may perform differently than expected and negatively affect performance. Investing in equities is speculative and involves substantial risk. Diversification does not assure a profit nor protect against loss. Forward-looking statements or projections are subject to certain risks and uncertainties. Actual results may differ from those expressed or implied. Alpha measures excess returns of an investment relative to the return of a benchmark index. MDT Advisers is a federated advisory
While I'll spend the turkey days with my family, I'm particularly grateful this year for the incredible team of professionals that brings together what you hear and experience with Capital Allocators. That's Hank, our CEO; Morgan, our head of ops; Tamar, our head of business development; and Liz, our head of content. I'd put our starting five up against any NBA All-Star team. Off the court, our values of quality, entrepreneurial spirit, intellectual curiosity, respect, generosity, and fun win championships. Although I can't say the same on the court for my getting dunked on, because at 6'1", I'm the least vertically challenged of our starting five. Outside the office, I'm grateful for you, for listening, engaging with our guests, and sharing kind words all year long. This podcast is the gift that keeps on giving. So, before you start the mad year-end dash to calculate performance, conduct 360 reviews, and shop, take this time to be grateful for the many gifts in your life. As my friend Dasha Burns recently shared on the occasion of my 55th birthday, may the best of your yesterdays be the worst of your tomorrows. While you're feeling all warm and fuzzy, don't forget to spread the word about Capital Allocators to those closest to you to give them the gift that keeps on giving. Please enjoy my conversation with Daniel Mahr. >> Dan, great to be here with you. >> My pleasure. >> I'd love you to take me back to your original path leading into your involvement in investing. >> I was a kid who loved numbers. I'm old enough that my parents subscribed to a physical newspaper every day, and I would pore over every section that had numbers, whether it was sports or business. When I got to college, I had my first experience with investing. I was a freshman at Harvard in 1998 going into 1999.
I came across a realization that there was an interesting opportunity to be had, which was that there were a lot of IPOs in that market environment, the dotcom bubble. They would go up 100%, 200%, or more on the day that they priced, and there were a small number of investment firms that would get allocations to these IPOs and offer them first come, first served. As a college student, I had a faster internet connection than just about anyone else in the world, and I had a lot of flexibility in terms of my time. I managed to get a number of allocations to those hot IPOs. You'd get 100 shares, but as a college student, that seemed like great money. I studied computer science and was really interested in being a software developer. As I pursued the internships, I realized what was really important to me was that I found the industry and the work and the product to be exciting. I had a summer internship where I was building mobile internet apps, which you might think would be exciting until you remember what a cell phone was like in the year 2000. As I progressed, I realized that quant investing was a field where I'd be able to blend the investment markets with my software background. And I haven't looked back. >> So, in that period of time, you're flipping hot IPOs. There's a lot happening around the internet, a lot of excitement. How did you think about leaning into that compared to leaning on the quantitative background and going into finance that way? >> The experience of being invested in those IPOs made me convinced that I was a great technology investor. For as much as those hundred-share allocations led me to some nice wins in my portfolio, I would then translate it into some giant losses by straying from the original thesis and believing that I was an expert in something that I was certainly not. That experience was also very formative in my appreciation of a more disciplined, systematic, rigorous approach to investing, which I found in the quant space.
>> So how'd you get started? >> I joined a firm called MDT Advisers in 2002 as a junior analyst. MDT was a pioneer in the quant investing space. They had been building a strategy since 1991, very early practitioners of quant investing. I worked closely with the founder of the strategy for a number of years. We were acquired by a firm called Federated Investors, now Federated Hermes, in 2006. I took over running the team when the founder retired in '08. >> If you look at the long history of quant, and you can go back to MDT's founding in the early '90s, how do you think about what's the same and what's changed in the application of quantitative investing? >> One thing that people sometimes miss about quant is that there's a lot of diversity in terms of quant strategies. There are certainly folks out there who are still applying strategies that are very similar to what was applied in the early days, the early types of quant strategies. But over the decades, there's been such an explosion in processing power, in data, and in algorithms that marry those two things that the types of strategies and the sophistication of strategies that can be run have really exploded over the years, aided by those tailwinds of processing and data. >> I'd love you to walk me through your journey of what the investment strategies looked like using the quantitative tools available when you joined in 2002 and how that's evolved to today. >> In 2002, we had made a big transition at MDT. For the first decade, the strategies were traditional factor tilting strategies. There was a formula that used a small number of characteristics, and the portfolios would tilt toward them. Those strategies generated a pretty good outcome, but it was lumpy. As with many quants, the strategies had a difficult time in 1998 and 1999, as value and quality were not well rewarded by the market.
The firm started looking for a differentiated approach, something that wasn't relying on the factors always working, because the portfolios were always tilted in the same way. That led us to the decision tree approach that we still use today, albeit in a much evolved form from where we started in 2001. >> Why don't you walk me through what that means, a decision tree approach applied to stocks? >> So, a decision tree, these are things that people have probably seen. It's just a series of yes-and-no questions about characteristics that lead to a forecast or an outcome. A common place that they're used is in an insurance setting. In the life insurance industry, you may want to build a model to predict longevity. Decision trees are often used in that space. The first question might be on age. Depending on how you answer that question, whether you're above or below a certain age, there will be differentiated questions that are asked to help provide the most precise decision making possible. For folks who are above the age of 65, the risk factors tend to be different than for people who are under the age of 65. If you're a smoker, there will be questions about how much you smoke and how long you've been smoking that won't be relevant for people who don't have that characteristic. Translating that back to the stock world, instead of asking about risk factors for longevity, we're asking about the characteristics of companies. And depending on how those questions are answered, the lines of questioning will evolve based on what's relevant for those types of companies. >> Before we dive into that approach, I'd love to take a step back and ask how you think about an investment philosophy tied to quantitative investing. In particular, what do you believe that leads its way into your investment approach?
>> Very succinctly, what we believe is that a disciplined quantitative approach to stock-picking can lead to an analytical advantage that will help us generate superior portfolio outcomes, in the sense of generating more all-weather type portfolio returns. >> And what is it about the analytical approach that leads you to believe that? >> When you think about how to construct a portfolio that is going to be able to perform well in lots of different market environments, there are two approaches to doing that. One is to be able to predict what the market environment is going to be with a fair degree of accuracy and then tilt your portfolio ahead of time to be in the right stocks and the right sectors at all times, a global macro, crystal-ball approach. There are investors who do that. It's not in our wheelhouse as quants. The other approach is very much on the other end of the spectrum: leaning on diversification, not having a reliance on any one company, any one sector, any one type of stock to drive your portfolio outcome, and diversifying across companies with differentiated alpha drivers. That gives you the opportunity to have a portfolio where you'll have a fighting chance at performing well no matter what market environment comes. >> In a world with much more computing power than we've had in the past and more people able to look at these types of strategies, how do you think about what differentiates your approach from other participants in the market? >> What differentiates us at MDT is the use of machine learning. AI and machine learning are very hot topics right now. I read academic journals. I see what competitors in the space are publishing. And there's a lot more enthusiasm than there was 5 years ago for competitors in the space to be adopting, or at least researching, some of these technologies. But that said, at MDT, we've been using these machine learning tools since 2001.
So, we have a 24-year head start on someone who is new to the game. We have learned a tremendous amount over the last 24 years about what the advantages are, but more importantly what the potential pitfalls are, of using these powerful but also finicky and sometimes misled algorithms in a noisy data space like forecasting stock returns. >> What are some of those important pitfalls that you've learned along the way? >> There are a pair of very related issues in the data science space, which are overfitting and underfitting. Obviously, they're two sides of the same coin. It's easy to not build a model that's overfit just by having a very simple model, but that very simple model is going to leave a lot of explanatory power on the table. It's going to be underfit. That's a problem that gets less press than overfitting, but is a significant one nonetheless. Figuring out what techniques can allow us to strike the right balance between having a model that's too complex and having a model that's not complex enough is something that we have put a lot of thought into and evolved significantly over the decades. Our view in the machine learning space is that transparency is exceedingly important, to understand precisely how these models are working. A common epithet that gets thrown at us in the quant investment management space is that we're using black boxes. At MDT, that is not the case. We like to position our investment strategies as being a glass box. There's a lot of machinery on the inside, but we can see into it. We can see how it's working and understand what's driving all of the decision-making on a day-to-day basis. >> I'm curious how you go about doing that. When a computer program like AlphaGo goes to beat a human, it makes a move that nobody really understands. So, there's this power of the machine learning figuring something out that you couldn't as a human, which also means it would be hard to know what it is.
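The overfitting/underfitting trade-off described above can be seen in a toy example: on noisy data with one real effect, a model that memorizes every training point looks perfect in-sample and adds nothing out-of-sample, a model that ignores the feature underfits, and a single well-chosen split generalizes best. The data and the three models below are invented for illustration, not MDT's methodology.

```python
# Toy illustration (invented data) of overfitting vs underfitting.
import random
random.seed(0)

def make_data(n=400):
    """One real effect (x above/below 0.5) buried in much larger noise."""
    out = []
    for _ in range(n):
        x = random.random()
        signal = 0.05 if x > 0.5 else -0.05
        out.append((x, signal + random.gauss(0, 0.10)))
    return out

train, test = make_data(), make_data()

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

mean_y = sum(y for _, y in train) / len(train)

def underfit(x):          # too simple: ignores x entirely
    return mean_y

lookup = dict(train)

def overfit(x):           # too complex: memorizes every training point
    return lookup.get(x, mean_y)

hi = [y for x, y in train if x > 0.5]
lo = [y for x, y in train if x <= 0.5]

def balanced(x):          # one split: captures the signal, not the noise
    return sum(hi) / len(hi) if x > 0.5 else sum(lo) / len(lo)
```

On the training set the memorizing model has zero error; on unseen data it collapses to the naive mean, while the single-split model keeps its edge. Depth limits and minimum pool sizes in shallow trees are one way of enforcing this balance.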
So this balance between the black box and the glass box? >> The algorithms behind AlphaGo are not a decision tree. One of the advantages of the machine learning field is that it is able to discover insights on its own. And generally speaking, when we have a new research idea that makes it into the model, we have ex ante figured out why we expect this idea to add value, or not. Occasionally, we're modestly surprised by what comes out of the research process. A number of years ago, we started adding price-based factors to our model. And the price-based factors found momentum effects, as was published in the academic literature and as we fully expected to see. But it also found some very powerful reversal effects for companies whose share prices were down 70 or 80% over the last year, which in combination with other characteristics such as value and quality could lead to strong outcomes. We didn't quite have a feeling for it, but we beat up the data and we convinced ourselves that it was worth implementing. As soon as we started trading stocks that fit that profile, it immediately became obvious, because our head of trading would come to me every day we bought a stock that was down 70 or 80% and say, "Dan, we need to override this trade. The company's CEO just resigned in disgrace," or, "Dan, this is a one-product biotech company and they just missed their target. It looks like the whole company's got nothing. We have to override these trades. We can't do them." That's when the light bulb goes off. This is precisely why this strategy works: even quantitative investors who are intentionally trying to buy these stocks find it hard to overcome the human emotions involved with buying a bad story. >> I'd love to tease through how you go about the investment process, and we'll just go top to bottom as you're building a model to try to understand what stocks are likely to outperform.
How do you come up with the ideas that you want to test quantitatively to see if they make it into your model? >> There are two big sources of research ideas for our process. Certainly, we read all of the academic and practitioner literature in the investment finance space, and occasionally we get some good ideas out of seeing what's published. More often than not, we test an idea and either it's not replicable when we look at it with our data set, or something else in our model essentially captures the same underlying effect. Where we find more value typically is when we generate ideas that are driven by our own observations on the behavior of our strategies. That's one of the advantages of having the long history. We've been investing our strategies for over 30 years now. The observations that we've made across multiple different market cycles over those decades have informed meaningful enhancements to the process. >> When you build out your model, do you start with a couple of core factors that you believe are true all the time and then build from there? As you build the model, how do you construct what those inputs are? >> It's driven by machine learning, which leads us to a differentiated view on factors relative to a lot of other investors. One of the most unusual factors that we use we call company age. We measure that simply as how long the company has been publicly traded and/or filing financial statements. It's an unusual factor; to the best of my knowledge, few quant investors use that in their models. Also, very few, to my knowledge, traditional portfolio managers explicitly take the company's age into account when they're formulating their views. And there's a reason for why it's unusual, which is that on its own, company age tells you nothing about whether a company is going to outperform or underperform. Companies don't stop performing well because they've hit some magic number of years since their IPO. Why do we use this factor in our model?
The power of the decision tree is that it allows you to make use of factors that don't explain returns on their own but can give you context on how to explain returns. What we find is that the important questions to ask about young companies, companies that are within 10, 15, 20 years of their IPO, are a little different than the important questions to ask of companies that have been around for 50, 80, 100 years. The decision tree gives you that framework to say, if you're a young company, let's ask these questions, but if you've been around for 80 or 100 years, let's ask a different set of questions. Valuation is an important differentiator. Valuation is a lot more important for companies that have been around for a long time than for a brand new entrant to the public markets. >> How much of those model inputs come from your own qualitative insights of what should matter? >> The selection of factors, the potential questions that can be asked, is driven by the investment team. That's a major area of focus for us on the research side. Once we present that list of factors to the algorithm, it's completely mechanically determined. A lot of times we'll have an idea that a new factor will help the model improve its forecasting, and the decision trees will simply say, "Nice try, guys, but I don't find a lot of profitable questions to ask about this factor. I'm going to ignore it." In terms of how it decides to use the factors in relation to all the other characteristics, that's 100% driven by the algorithm. >> How do you go through the process of retesting a factor that's working to see if the market catches up or it no longer works? >> We do occasionally remove factors from our modeling. The first reason is that the factor no longer works for one reason or another, whether you were mistaken or whether markets have evolved. Occasionally we'll remove a factor if we add something new that captures a correlated underlying effect.
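[Editor's note: the "context factor" idea described above can be made concrete with a minimal sketch. This is not MDT's actual tree; the factor names, thresholds, and alpha scores below are invented for illustration. Company age contributes no score of its own; it only routes each stock to a different follow-up question.]

```python
# Hypothetical sketch of a context split on company age. Age alone carries
# no alpha; it decides which question gets asked next. All thresholds and
# scores are invented.

def tree_alpha(stock):
    if stock["age_years"] >= 50:
        # Mature companies: valuation is the more informative question.
        return 0.5 if stock["earnings_yield"] > 0.06 else -0.3
    else:
        # Young companies: momentum/sentiment is the more informative question.
        return 0.5 if stock["momentum_12m"] > 0.20 else -0.3

old_cheap  = {"age_years": 80, "earnings_yield": 0.08, "momentum_12m": 0.05}
young_hot  = {"age_years": 5,  "earnings_yield": 0.01, "momentum_12m": 0.35}
old_pricey = {"age_years": 80, "earnings_yield": 0.02, "momentum_12m": 0.35}

for name, s in [("old_cheap", old_cheap), ("young_hot", young_hot),
                ("old_pricey", old_pricey)]:
    print(name, tree_alpha(s))
```

Note that `old_pricey` has strong momentum but still scores negatively: as an old company it is judged on valuation, which is exactly the "different questions for different contexts" behavior described above.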
An example of that first reason: we used book-to-price in our models going back to version 1.0 in 1991. But as markets evolved, and more importantly as the economy has changed, we saw less and less explanatory power from incorporating that in our model. And we've had an intuitive sense of why that factor seemed to explain returns in data through the 1970s or '80s, but maybe doesn't work given how the intangible economy that's arisen in the decades since changes how companies trade on their book values. >> In the process of going from book-to-price being an important factor to not being in the model, what's the process to toggle it on and off compared to decreasing its importance in the construction of the model? >> It's very data-driven. The process of removing a factor from the model is just the inverse of the process of adding a factor to the model. When we have a sense that a factor is working less well, generally that sense comes from the fact that we don't observe decision making being driven by that factor on a day-to-day basis. When we review our trades every morning, year after year, we see fewer and fewer trades that are being driven by this one factor. That's the value of the glass box, of being able to understand what's driving the decision making. When we have that intuition that a factor has decreased in efficacy, we'll run that research project and say, well, the model seems to be making less use of this over time. What if we made zero use of it? What if we removed it from what we present to the algorithm? How does that impact our research results? How does it impact the returns and the risk that we generate from our back test? If we see that we can remove a factor from the model and have very little or no impact on portfolio outcomes over the course of decades, that gives us confidence that that's a factor that no longer needs to be there.
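[Editor's note: the removal test described above can be caricatured in a few lines. In this hypothetical sketch, a linear score stands in for the model, and all weights and factor exposures are invented; the point is the shape of the ablation check, not the numbers.]

```python
# Hypothetical ablation check: score stocks with and without one factor and
# measure how much the forecasts move. All weights and exposures are invented.

universe = {
    "A": {"value": 0.9, "momentum": 0.2, "book_to_price": 0.01},
    "B": {"value": 0.1, "momentum": 0.8, "book_to_price": 0.03},
    "C": {"value": 0.4, "momentum": 0.5, "book_to_price": 0.02},
}

def forecast(exposures, weights):
    # Linear stand-in for the model's alpha forecast.
    return sum(weights.get(f, 0.0) * x for f, x in exposures.items())

full_weights    = {"value": 0.5, "momentum": 0.5, "book_to_price": 0.02}
ablated_weights = {"value": 0.5, "momentum": 0.5}  # candidate factor removed

max_drift = max(abs(forecast(e, full_weights) - forecast(e, ablated_weights))
                for e in universe.values())
print(f"largest forecast change from removing the factor: {max_drift:.4f}")
```

If the drift in forecasts (and, in the real exercise, in back-tested portfolio outcomes over decades) is negligible, the factor can be retired with confidence.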
As you work through and test all these potential signals for your model, I'm envisioning peering into the glass box and seeing a huge piece of paper on the wall with all these decision trees and different questions and nodes that gets down to a signal somewhere. What does that ultimately look like, in the sense of how many trees are at the top and work their way down into different nodes, if you could actually visualize what's inside this glass box? >> We started our decision tree journey with one tree. Over the years, with faster processing power and more advanced algorithms, we've been able to improve the forecasting by relying on algorithms that employ a forest of trees. Back in the one-tree days, we would print out the tree and tape it on the wall of our trading room. Every time we were reviewing a trade, we would simply walk through the sequence of questions on that paper tree on the wall to help inform what specifically was motivating every trade that happened in our portfolio. As you move to a forest of trees, we can't put a thousand paper trees on the wall anymore. We've built some tools, some analytical helpers, to synthesize and summarize what's happening across the thousand trees. At the end of the day, you could still go through that exercise, though it would be tedious: walking through tree by tree to see whether anything has changed, and specifically what, and digging in on the data updates that are driving every decision that happens in the portfolio. >> And in one individual tree, to get from the top to the bottom, how many different decision points and nodes are there? >> Typically, we ask between two and five questions in each tree. The reason we don't ask more questions is we found that as you ask questions deeper and deeper in the tree, you're working on smaller and smaller pools of data, because the trees are customized to the branch of the tree that you're working down.
If you think about trees breaking up 50/50, at the second layer of the tree, each question is motivated on half of your original data. Down another layer, it's a quarter. Down 10 layers, each question is going to be motivated on 1/1,000th of the data. Down 20 layers, you would be operating on one-millionth of the data. You can quickly see that there's a sharp limit to how deep you want to make these trees. Fortunately, we have another approach to asking more questions about companies, which is rather than relying on a very deep tree, relying on a forest of relatively shallow trees. >> So, if you either took me back to that original tree or took one of the trees in the forest, I'd love you to walk through an example of what those two to five questions end up being as you track one company from the top of the tree to the bottom. >> The questions, they're asked in sequence and in context. At the top of the tree, you're going to ask a question of all companies. You're going to want a question that is relevant to explaining returns for big companies, small companies, growth companies, value companies. A common question we'll ask at the top of the tree will be about a company's use of financing, whether they're issuing debt and/or shares or buying those things back. That's a good question to ask of any company. We find, as the academics have, that companies that are engaged in significant amounts of financing tend to underperform, and those that don't have better outcomes. Down both of those branches, the algorithm proceeds in the same way. For the companies with a high level of financing, it tries to figure out what are the right questions to ask to find good companies and companies that are going to underperform. Same on the non-financing side. That's very important: we don't give up on the high-financing companies just because the odds are stacked against them.
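[Editor's note: the 50/50 split arithmetic above is worth seeing directly. In this sketch the universe size and the individual tree forecasts are invented; it only illustrates why depth is capped and why forecasts are averaged across many shallow trees instead.]

```python
# With balanced 50/50 splits, the data behind a question at depth d is n / 2^d,
# which is the argument for a forest of shallow trees over one deep tree.
n = 1_000_000  # hypothetical pool of stock-month training observations
for depth in (1, 2, 5, 10, 20):
    print(f"depth {depth:2d}: ~{n / 2 ** depth:,.1f} observations per question")

# A forest then averages many shallow-tree forecasts into a single alpha
# (the individual tree outputs here are invented).
tree_forecasts = [0.8, -0.2, 0.5, 0.1, 0.4]
alpha = sum(tree_forecasts) / len(tree_forecasts)
print(f"forest alpha forecast: {alpha:.2f}")
```

At depth 20, even a million observations leaves less than one per question, matching the "one-millionth of the data" point above.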
When the algorithm continues down that branch, what it finds is that a lot of companies with significant financing do underperform, but there is a class of stocks where they outperform despite the financing. And generally speaking, it's the strongest momentum companies that can generate good outcomes regardless of the financing. It makes intuitive sense. When investors are looking at the strongest, highest-growth companies, they don't punish them for a little bit of share issuance, which often takes the form of stock-based compensation for their employees. Down the other branch of the tree, though, it's not all going to be about momentum. We're going to find some strong, differentiated groups of companies down the other branch of the tree to pair with that particular set of high-alpha stocks on the financing side. >> If you go down to that next level, if you're on the financing side with good momentum, what might the next question be to determine, in that subset, which companies are likely to outperform? >> Typical questions would be about volatility. We tend to find momentum works better when it is consistent. When the stock price is rising in a consistent manner, it leads to better outcomes than companies that have one giant price move driving the momentum measurement. Company age also comes into play there. We find that momentum typically is more meaningful when you're looking at newer companies than companies that have been around for a long time. They're generally higher-growth businesses. They are more often in industries that are evolving. Knowing that the sentiment is strong around those companies is an even more positive indicator of future returns than knowing that a company that's been around for 100 years had a good quarter. >> How do you decide at what point it's worth going down another level compared to ending the tree and having your signal at the third level or the fourth level? >> The stopping rules on question asking are mechanical.
The whole process is mechanical. In our models, we stop for two reasons. One is we have a hard limit at five questions. After you've asked five questions, you're done. There's also a limit that if you've asked a question and created a branch that has too small of a pool of data, that will also be a reason to stop. The questions don't have to break companies up 50/50. Occasionally, we'll ask a question about extreme price returns, whether on the positive or negative side. Generally, there aren't that many companies that have extreme returns, but they're very interesting. We will see some questions that pull out relatively small pools. >> So, you can imagine this toggling back and forth of questions, between fundamental data like financing and pricing data like momentum, and that you could have these thousands and thousands of trees and different questions you could ask. How do you then create a portfolio from all those signals, aggregating thousands of different trees? >> We're going to use technology. We have a portfolio optimizer that we've built that takes into account a couple of key things as it is constructing portfolios every day. It takes into account the alpha forecasts, each a precise numerical forecast that comes out of the decision tree model. We're taking into account risk management. We use a set of hard risk constraints that are consistent across all of our portfolios, as well as a statistical risk model predicting the volatility and the tracking error of a portfolio, so that, all else equal, we will prefer portfolios that have more consistent outcomes over portfolios that have volatile expected outcomes. We also take into account trading costs. We want to make sure that we're not repositioning the portfolio unless we think that the improvement that we're getting from an alpha and risk perspective will compensate us for not only the visible costs of trading, spreads and commissions, but the less visible costs of market impact.
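[Editor's note: the three-way trade-off described above, alpha versus risk versus trading cost, can be sketched for a single position. Everything here, the utility form, the parameter values, and the grid search, is a hypothetical stand-in for the in-house optimizer, not its actual machinery.]

```python
# Minimal sketch of the per-position trade-off an optimizer resolves:
# expected alpha gained, minus a risk penalty, minus trading costs.
# All parameters are invented for illustration.

def utility(w, w_old, alpha, risk_aversion, vol, cost_per_unit):
    gain = alpha * w                        # expected excess return
    risk = risk_aversion * (vol * w) ** 2   # penalty on position risk
    cost = cost_per_unit * abs(w - w_old)   # spread/commission + impact proxy
    return gain - risk - cost

def best_weight(w_old, alpha, risk_aversion=5.0, vol=0.25, cost_per_unit=0.002):
    grid = [i / 1000 for i in range(0, 101)]  # candidate weights 0%..10%
    return max(grid, key=lambda w: utility(w, w_old, alpha, risk_aversion,
                                           vol, cost_per_unit))

# A stronger alpha forecast justifies a larger position and more trading.
print(best_weight(w_old=0.0, alpha=0.01), best_weight(w_old=0.0, alpha=0.05))
```

Raising `cost_per_unit` (a crude proxy for illiquidity) shrinks the chosen weight, which is the mechanism behind smaller positions and slower trading in less liquid names discussed next.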
How do you assess the relative importance of the signal from the model versus market impact in trading? >> We have a trade-off that's embedded in the optimization that captures that dynamic. Market impact, specifically where it intersects with the portfolio construction, shows up in terms of, number one, our ultimate position sizing. Companies that are less liquid will tend to have smaller overall positions. But it also impacts the speed of trading to get to those positions. Some of our portfolios trade the biggest, most liquid names in the world. It's easy to trade tens of billions of dollars of multi-trillion-dollar stocks. Whereas some of our strategies are involved with small companies. We run small-cap strategies and a micro-cap strategy. In those spaces, we deal in some very illiquid stocks. And it's important there not only to think about the ultimate size of the position, but how quickly you trade to those positions. >> In several of the things you've mentioned along the way, there is human judgment that's coming into play, whether that is the risk constraints in the model, or news coming out about a company where you say, "Well, that's not what the model is trying to signal." How do you think about the degree to which your human judgment should override anything that comes out of the model? >> We take a data-oriented view on that. We try to put all of the potential overrides that we might make to the decision-making of the model as much as possible through the lens of data. When we're thinking about trades, we're thinking about specifically what data inputs lead into what factors that are driving the decision-making. When a company that we're trading has reported great earnings, we want to dig into, okay, well, how are those great earnings going to impact all of the factors in our model at the next level? How will those factors changing impact the decision making that comes out of the trees?
It's often the case that we're trading something and they've just reported great earnings, but we are buying them for reasons that have nothing to do with analyst forecasts. Whether the analysts raise their forecasts a ton or whether they make modest updates can be irrelevant for certain of the trading that we're doing in our portfolios. That's the value of the glass box: being able to see how the decision making is being made allows us to be precise in terms of how we think about potentially stepping in and overriding the model. >> How do you think about the reflexivity of other quantitative participants and different models as it impacts what you're doing? >> When we think about that reflexivity in the quant space, there's a tendency to conflate the natural fluctuations of quant strategies, good performance and bad performance, which can be true of any investment strategy, with the impacts of running a strategy with leverage. When people talk about the most famous quant blowups of all time, Long-Term Capital Management, the quant quake in August of 2007, what they're highlighting are events that were caused by a period of underperformance for a quant strategy but were magnified by the use of leverage in those strategies. If Long-Term Capital Management hadn't been running a 50x leveraged strategy, they wouldn't have ended up in the trouble that they ended up with. Similarly, in the quant quake, the paper that was published on that was written by Andrew Lo, who's famous from MIT, and a gentleman by the name of Amir Khandani, who, in a small-world anecdote, happened to be my roommate at the mobile internet startup that I worked at in college. But the run on quant strategies that happened was precipitated by the fact that statistical arbitrage strategies had gotten more crowded over the years leading up to 2007. In response to that, certain managers began running those strategies with additional leverage. And leverage doesn't just blow up quant strategies.
It's equal opportunity. No one panicked out of traditional portfolio strategies when Archegos blew up. There's a tendency for folks to think that quants are all one and the same. There's a lot of variety under the covers. >> How much do you think about what your competitors are doing in an attempt to be the best you can be? >> We don't spend a lot of time worrying about what our competitors are up to. Where we do tend to pay attention would be when competitors are publishing strategies that specifically sound like they're encroaching on our space, which is a machine learning approach to traditional-looking, fundamentally based investment portfolios. >> The impact of leverage on trading strategies, going back to what we were talking about before, feels like it may exist in the hedge fund pod shops today. And I'm curious if you've seen any changes in market structure that are impacting how you invest as those strategies have grown in size over the last bunch of years. >> I wouldn't say that we have felt a huge impact from pod shops per se. It does feel like the markets are different in the last couple of years than they were a decade ago. If you asked me five years ago, are markets on a never-ending trend towards efficiency, and is your job as a systematic investor going to get harder and harder every single year, I would have said absolutely, because that's the way it had gone for decades. Traditional factor tilting became commonplace, and in order to have an edge, it got harder and harder every year. Something feels like it snapped in the last couple of years. I wish I knew what it was. It could be the rise of pod shops for all I know. It could be that we've hit a tipping point in terms of passive management in the equity space. It could be the rise of retail trading and meme stocks and Robinhood. It could be all of those things wrapped up in one. The good news is we don't need to know what's driving it.
The important thing is having strategies that are active and that are able to take advantage of inefficiencies when they present themselves. >> I'd love you to walk me through what your day looks like, because there are aspects of what you're talking about that are seeing what the model does, and then there are other aspects of observing the outputs based on a trade blotter. So as you go through a typical day in your life of managing the portfolios, what does that path look like? >> Every night we download updated data from all of our vendors. We recalculate all of our characteristics. We run all the companies in the domestic equity market through our forest and have updated forecasts. Every portfolio that we run is reoptimized and generates a trade list. The first thing every day is the trade review process. We're not doing trade review from the perspective of interjecting our own subjective views on which trades we think should happen and which shouldn't. What we're after in that process is making sure, number one, that the data is correct; number two, being able to understand the dynamics of the model and what's driving our trading; and also making sure that there's not news out there in the marketplace that our data inputs do not see, but that will impact a company and eventually probably will impact the data itself once it's updated. If a company announces an acquisition, it can take upwards of a year for that deal to close. Once it closes, there's a lag until financial statements that reflect the deal are filed. We can get a jump on the data by using our own eyes occasionally. The rest of the day, for most of the team, the focus is on research. It's on idea generation and execution on those ideas, thinking about how we improve various aspects of our model. Having been in the business since 1991, we pretty much use proprietary tooling for all the components of our process.
Back in the '90s, there weren't third-party software vendors trying to sell you back-testing engines, risk models, trading cost forecast models. Everything that we use is built in house. That gives us a lot of flexibility and breadth in terms of the idea generation and what we can consider doing in terms of making enhancements to the process. It's not all about the factors that go into the stock picking, even though that's the most exciting part of research. >> In that research piece, there's a wide swath of data on fundamentals and technicals, the stock price. Then you've had this whole explosion of alternative data sets. I'd love to hear how you've thought about the value and integration of alternative data into your research. >> In the markets, there are informational edges and there are analytical edges. It's not black and white between the two of those things, but generally speaking, when people get excited about big data, it's because they're excited about an informational edge. They're excited about finding some new source of information about companies that is going to drive returns that maybe a lot of people don't know about yet. There are a lot of investors who are successful pursuing informational edges, but it can be a little bit of an arms race. These new data sources can be expensive, and when too many people find out about them, depending on the size of the mispricing related to that data, the ability to generate returns can get diminished over time. We have intentionally focused not on that arms race but on the analytical piece, through the use of decision trees and machine learning. The data that feeds our models is oriented towards the longest-history and highest-quality data sets that are out there in the quant space: financials, prices, analyst estimates. It's not to say that the big data explosion doesn't have any value. It certainly does, and lots of people are a testament to that. But it's important to know what your edge is.
And for us, it's using these sophisticated machine learning tools to tease out differentiated insights about companies to have differentiated alpha sources. These algorithms are extremely data hungry. It's really important when you're building these machine learning models on noisy data, like forecasting equity market returns, to give them as much data as possible. We train our models on roughly 50 years' worth of data. When I say that to some potential investors, they're surprised. Do we think that market data from the 1970s and '80s is still useful for forecasting mispricing? It is, in the sense that these machine learning tools become more robust the more data, and the more different market cycles and contexts of investing, that they're able to be trained on. >> When you've been working with machine learning models for a long time, what does the introduction of ChatGPT change, if anything, in the way you've approached what you've done? >> Large language models, and ChatGPT specifically, are not anything that we're presently making use of in our modeling. One of the big challenges for folks who are trying to use those types of models in a stock-picking context is the problem of in-sample versus out-of-sample. Especially if you're using a commercial model, you don't have any control over what data that model was trained on. When you're running a back test through the better part of the last decade, ChatGPT knows that Nvidia became a multi-trillion-dollar company. ChatGPT knows what the mega trends were in the economy and the market over those time frames. It's not realistic to trust a back test that ChatGPT generated. That said, there are exciting things going on in the AI space. And we use a lot of proprietary software and tools in our investment process. One area in AI that is really appealing to us is the idea of software development co-pilots, the idea that AI can enhance software development at an organizational level.
We're a small team with a lot of software, and any ways in which we can improve efficiencies there are valuable to us. >> What new research are you excited about? >> We are doing a lot of research in the factor space. We used to use stock ownership to drive some of our factors, and that stopped working at a certain point. But we're coming back around to the idea that knowing who owns the stocks that you're contemplating investing in might be able to tell you something about how to evaluate opportunities there. We are also looking in the AI space at ways that it can enhance productivity but also idea generation. It's probably still a ways off before we're asking large language models to suggest stocks for the portfolio. It's important to be open-minded about the possibilities. Computers are going to keep getting faster. Data is going to keep getting more prevalent and accessible. The algorithms are going to keep getting more powerful, intersecting those two things. Even if something seems far-fetched today, that doesn't mean it won't be possible 10 or 20 years from now. >> What do you find more challenging today than when you started your career? >> One of the most challenging things is on the team-building side. 20 years ago, talented data-science-oriented programmers were not in demand by every single other firm in the entire world. We had a much easier time finding junior analysts to join our team. Recruiting has gotten more difficult, given that the skill sets we're looking for are in a lot more demand. We've also tried to adapt and be more flexible in the types of people that we're looking for in response to that. In the same way that folks with data science backgrounds and AI knowledge are super in demand, the software programming space has hit a little bit of a soft patch. There are a lot of opportunities to hire great engineers these days.
We're strategically trying to lean into where we see the market for talent presenting opportunities to us. >> What continues to excite you and keep you motivated to keep getting better? >> The markets are exciting every single day. I've been at this for 23 years. We have learned a lot and improved our models significantly over those decades. But you're never going to solve the financial markets. There's always new information out there. There are always curveballs coming from a macro perspective, risks that you had never seen before that all of a sudden manifest themselves. From my perspective, it's a great place to be and a really exciting place to be applying my technical background to. >> Dan, I want to make sure I get a chance to ask you a couple of closing questions. What is your favorite hobby or activity outside of work and family? >> So, I am a home brewer. I took that up shortly after we bought our first house and I had enough space to store all of the equipment. We brew probably a dozen batches of beer a year, mostly trying to focus on what you don't find at the store all the time. My wife planted a sour cherry tree in our yard around the time that we moved in, and it's turned out to be wildly prolific. So, we pull upwards of 80 pounds of cherries off that tree every year. I do her a favor by using some of them to brew interesting sour cherry beers. >> Which two people, other than your wife, have had the biggest impact on your professional life? >> I've worked at MDT my entire career, and I was really fortunate to have two mentors from day one: David Goldsmith and Sarah Stall. David was the founder of the quant group and the CIO. Sarah was one of David's first hires, who led the analytical and portfolio attribution effort here for many years. What was great about the two of them was that they were incredibly different from one another as mentors. David was the mad scientist of our group. He would be thinking about algorithms 24/7.
He'd come in and tell us about the idea he had while he was in the shower. Sarah was also very brilliant, in a less wild and unconstrained way. She was very meticulous, very focused on craftsmanship and understanding precisely what was driving the returns of our models. They were both great mentors and helped me appreciate that success in investment management is not all about being the brightest and having the most genius ideas. There are a lot of geniuses who failed. It's not just about meticulousness and craftsmanship either, but both of those things are very important to success in this business. I'm really indebted to David and Sarah. >> How has your life turned out differently from how you expected it to? >> I would have never expected that 23 years on from graduating college, I would still be a quant working at MDT Advisers. We go through an exercise every couple of years with my graduating class, where we publish a book on what everyone's been up to the past five years. I think it's just me and a fellow who's worked at Microsoft for 23 years who have gone down the career route and stayed in one place. It's been a phenomenal ride over the decades, a career that has managed to grow with me at every step where I needed it. I'm really fortunate that things turned out this way, even though I would have never guessed it. >> Dan, last one. What life lesson have you learned that you wish you knew a lot earlier in life? >> I've always been an incredibly competitive person. And when I was young, I would take setbacks very hard. And frankly, I see that in my kids, too. They come by the competitiveness honestly. And it's hard for them every time a little thing goes wrong. I wish I knew earlier on that life is a journey and that no one wins everything. Often, doors that seemed closed open in time. Sometimes the path that you end up on as an alternative ends up being the right path. I try to stress that with my kids as much as I can when I see them having the same struggle as I did.
>> Well, Dan, thanks so much for sharing your insights on this quantitative approach to investing. >> My pleasure. It was a great conversation. Thanks, Ted. >> Thanks for listening to this sponsored insight. Sponsored episodes are paid opportunities for another 12 to 18 managers a year to appear on the podcast. If you're interested in telling your story in front of the largest audience of investors in the industry, please email us at team@capallocators.com to apply for one of the slots. An important disclaimer: views are as of November 2025 and are subject to change based on market conditions and other factors. These views should not be construed as a recommendation for any specific security or sector. Investments involve risk and may lose value. Past performance is no guarantee of future results. There can be no assurance that quantitative investing will be a successful investing approach. The quantitative models and analysis used by MDT may perform differently than expected and negatively affect performance. Investing in equities is speculative and involves substantial risk. Diversification does not assure a profit nor protect against loss. Forward-looking statements or projections are subject to certain risks and uncertainties. Actual results may differ from those expressed or implied. Alpha measures excess returns of an investment relative to the return of a benchmark index. MDT Advisers is a Federated Advisory