Top Traders Unplugged
Sep 14, 2025

Algorithms Don’t Just Predict You… | Ideas Lab | Ep.41

Summary

  • Investment Insight: The podcast emphasizes the importance of learning from top hedge fund managers to enhance investment strategies, while also acknowledging that past performance does not guarantee future results.
  • Data and Psychology: Dr. Sandra Matz discusses how digital footprints can be used for psychological targeting, presenting both risks and opportunities for improving health and well-being.
  • Algorithmic Influence: The conversation explores how algorithms can predict and potentially influence human behavior, raising ethical concerns about privacy and the complexity of human interactions.
  • Technology and Mental Health: There is potential for technology to aid in mental health diagnostics and treatment by using data to identify deviations from personal baselines, offering early intervention opportunities.
  • Data Privacy Solutions: The discussion suggests using federated learning to maintain personalization without compromising data privacy, allowing companies to offer tailored services without accessing personal data.
  • Data Co-ops: The concept of data co-ops is introduced as a way for individuals to collectively manage and benefit from their data, with examples from the healthcare sector demonstrating its potential.
  • Regulatory Considerations: The podcast highlights the need for regulatory frameworks that protect consumer data while allowing for beneficial uses, such as opting in for data sharing to enhance service personalization.
  • Future Outlook: Emphasis is placed on the need for ongoing discussions and innovations in data management to balance privacy concerns with the benefits of personalized technology.
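The federated learning approach mentioned in the summary can be illustrated with a toy sketch. This is not any company's actual system: it is a minimal, invented example of the federated averaging idea, in which each user's device fits a small model on data that never leaves the device, and only model parameters are sent to a server for averaging. The one-variable linear model, the client data, and all hyperparameters below are made up for illustration.

```python
# Toy sketch of federated averaging (FedAvg): clients train locally on
# private data; the server only ever sees and averages model weights.

def local_update(weights, local_data, lr=0.1, epochs=5):
    """One client: SGD steps for a 1-D linear model y = w*x + b,
    using data that stays on the device."""
    w, b = weights
    for _ in range(epochs):
        for x, y in local_data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fed_avg(global_weights, clients, rounds=20):
    """Server: broadcast weights, collect client updates, average them.
    Raw data is never collected centrally."""
    for _ in range(rounds):
        updates = [local_update(global_weights, data) for data in clients]
        global_weights = tuple(sum(ws) / len(ws) for ws in zip(*updates))
    return global_weights

# Three "devices", each holding private samples of the same trend y = 2x + 1.
clients = [
    [(0.0, 1.0), (1.0, 3.0)],
    [(2.0, 5.0), (3.0, 7.0)],
    [(1.5, 4.0), (2.5, 6.0)],
]
w, b = fed_avg((0.0, 0.0), clients)
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

The design point is the one the summary makes: the server learns a shared, personalizable model, yet no client's raw samples ever cross the network.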

Transcript

[Music] If that's the only thing that kids interact with, they're going to lose the ability to now deal with a kid in the playground that's pushing them over and is not going to have the argument in a very nice and kind way. So, and we've gone through this, right? We've experienced the messy world and like the arguments, the conflict, the tension, the emotions. The next generation that interacts with these chat bots a lot more than potentially with other human beings, they're just going to lose this ability to argue with someone, get into a fight with someone, and still come out somewhat okay on the other side. >> Imagine spending an hour with the world's greatest traders. Imagine learning from their experiences, their successes, and their failures. Imagine no more. Welcome to Top Traders Unplugged, the place where you can learn from the best hedge fund managers in the world, so you can take your manager due diligence or investment career to the next level. Before we begin today's conversation, remember to keep two things in mind. All the discussion we will have about investment performance is about the past, and past performance does not guarantee or even imply anything about future performance. Also understand that there's a significant risk of financial loss with all investment strategies and you need to request and understand the specific risks from the investment manager about their product before you make investment decisions. Here's your host, veteran hedge fund manager Niels Kaastrup-Larsen. [Music] For me, the best part of my podcasting journey has been the opportunity to speak to a huge range of extraordinary people from all around the world. In this series, I have invited one of them, namely Kevin Coldiron, to host a series of in-depth conversations to help uncover and explain new ideas to make you a better investor.
In the series, Kevin will be speaking to authors of new books and research papers to better understand the global economy and the dynamics that shape it so that we can all successfully navigate the challenges within it. And with that, please welcome Kevin Coldiron. Okay, thanks Niels and welcome everyone to the Ideas Lab podcast. Our guest today is Dr. Sandra Matz. She is a professor at Columbia Business School and an expert in the hidden relationships between our digital footprints and our inner mental lives. She's here to explain how these footprints allow algorithms to do shockingly accurate psychological targeting on all of us. Now, obviously that's dangerous in the wrong hands, but it's also potentially a huge resource for improvement in health and well-being. She's written a fascinating new book called Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior that explains both the risks and the opportunities. And it's a topic that's of deep importance to all of us. And I'm very excited to have this conversation. Dr. Matz, thanks for joining us and welcome to the show. >> Thanks so much for having me, Kevin. I'm excited. >> Um, okay. So, you grew up in a rural village of 500 people in southern Germany, and it sounds like the experience of growing up in an environment where basically everyone knows your business uh has been pretty formative for, you know, actually framing the research you've done in your career. I was curious, could you just start by telling us a little bit about what it was like growing up in that small village and then how indeed that experience kind of shaped what eventually became your research focus. >> Yeah, absolutely. And it's funny because I didn't make the connection between my upbringing and the research that I had been doing for a couple of years already um until recently, when I started thinking about it in the context of the book. So, I'll get there in a second.
But yes, so I grew up in this really tiny village, 500 people somewhere in the southwest corner of Germany. And as you already mentioned, the experience there was very much shaped by the 499 uh other inhabitants of that small town, because they were in your business every day, right? So, they knew exactly what you were doing on the weekend, which music you were into, who you were dating. Some of it was because they were interacting with you directly. Some of that was just them observing you going through your experiences, putting your bumper sticker on the car, you constantly running to the bus in the morning. And in a way, what that allowed my neighbors to do was not just observe who I was as a person, but also in a way interfere with my life's choices. So, you can imagine that my neighbors were not just in the business of trying to figure out who I was dating, but they were also trying to influence who I was dating. And the more that they knew about what I wanted to do in life, what my fears were, dreams were, hopes were, aspirations, the easier it became for them to do that. And so the way that I think about this experience of growing up in the village um has an analogy to the work that I now do, in terms of how algorithms and computers can turn our digital footprints, if you want, into these pretty accurate descriptions of who we are when it comes to our psychology, and then ultimately prescriptions of what we should buy, which music we should listen to, potentially which jobs we should pick. Um it's very similar in both the upside and the downside. So the experience of growing up in the village was in a way shaped by the feeling that there was someone there at all times who truly understood me. So that was the upside, in that there was someone who understood what I wanted out of life and could support me when it came to making these big decisions of what do you do after school, like where do you go? Do you travel the world?
Do you not? So having someone there who really understood uh what I wanted was extremely valuable. But also on the downside, that meant that someone just constantly was poking around in my private life in ways that I didn't appreciate um and was trying to meddle with it in ways that I didn't have any control over and really didn't um didn't appreciate in the moment. And the same in my opinion is true with data. And I'm sure that we're going to dive uh much deeper, but the moment that someone gets a sense of who you are and your preferences, needs, motivations, and can use it to change your behavior, that creates a lot of upside, but also creates a lot of obvious downside. >> And so you said you didn't really have your kind of aha moment until a few years ago. What was it? Was it one of those, you know, just you're walking along in the park and all of a sudden, bing, it came to you, or like what >> how did that happen? Um, so it just happened in a conversation with a friend. Um, I went back uh home for Christmas and we were just talking, and he asked me about my research, and I was trying to explain to him what I'm doing with data and how it's like an amazing opportunity but also this um pretty severe ethical challenge when you think about the impact on individuals, on society. And I was just kind of telling him how hard it is for me sometimes to live with this tension, because I obviously truly believe in the upside. And I talk to companies um about how could they use data to make their products better, to really serve um their customers in a way that creates value for them, not just profits. But then also kind of at the same time this tension of like, no, but I also do understand and see how the research that I'm contributing to could actually be abused to really exploit individuals and undermine some of these core values that we have as a society.
And then we just kind of started talking, like, you know what, this is actually the same thing that happened to us growing up in the village. There was like something beautiful about being seen, but also something horrendously annoying about being seen by our neighbors. So it was in this conversation with my friend um that I had this aha moment of like, no, this is actually... it's not exactly the same, and a lot of my thinking after that has been spent on what are the differences um and how do we mitigate against the risks that maybe weren't there in the village but we do see now with data. But that was the moment. >> That's really cool. Um yeah, and we're going to talk about some of those differences as we get into it, but maybe we can, you know, just talk about... your book is split into three sections. In the first section, you talk about how data is a window into our psychology, and I wanted to maybe talk about some of those um examples. >> You know, one of them is um you talk about Facebook and you say >> with just a few hundred likes, Facebook knows you better than your spouse. So my question is two questions. I mean, I'm kind of curious to, you know, if you can explain how they do that, but also what does know you mean in that context? >> It's such a good question, and it's funny, because the question that pops up in my mind immediately is like, how bad are our spouses at understanding who we are, right? If that's the comparison, is it just that the computer and the algorithm is really good, or do our spouses really suck at making those predictions? And you know, it's a little bit of both. Um, but the one thing that kind of constantly pops up in my mind when I think about how could it ever be that an algorithm is just as good as our spouse at knowing who we are, and I'm going to say more about what knowing means, is I always think of Google, right?
Google, like, you type questions into that seemingly anonymous search bar that you don't feel comfortable asking even your closest friends and spouse. So the idea that there could be an entity that just by observing our data can actually make predictions that are more accurate than what the people around us, who also know us pretty well, um can understand about us, I actually think makes a little bit more sense. So Google for me is always the part that I think people can relate to more easily. Now, with this study that you mentioned, the way that we typically capture accuracy of these models, when we think of like how much do they know about you, well, how good are those models at capturing who you are on these psychological dimensions, we typically have a comparison to how you describe yourself. So, we have you complete a questionnaire, a personality questionnaire, and we ask you questions like, "I'm the life of the party. To what extent do you agree with this?" Um, "I make a mess of things. To what extent do you agree with that?" So we kind of get your self-perception and the way that you think of yourself when it comes to personality traits, and then we have an algorithm sift through your data. And the way that we train them is essentially we give them access to data from thousands of people. So they can see over time, well, okay, if you follow the Facebook page of Lady Gaga, maybe on average people are more extroverted if you do that. Or if you follow the page of CNN, maybe that makes you a little bit more conscientious and open-minded than the average person. So they kind of do this Sherlock Holmes game for many, many of these traces and for many, many people. And now because they have an understanding of, on average people who like CNN are more conscientious, on average people who like Lady Gaga are more extroverted.
Now if they see your profile, they can actually put the puzzle pieces together and say, "Everything that I know about you and everything I know about everybody else in the space, it seems to be the case that you might be more extroverted, more conscientious, more neurotic, and so on." And then it's really this comparison of here's what the computer predicted in terms of your big five personality traits or some of these other dimensions, and here is how you describe yourself. How much overlap is there? Right? Do you agree that you're more extroverted than the average person? Do you agree that you're more neurotic, or are there some discrepancies? So that's kind of how we think about how well does the computer know you? Can a computer or an algorithm replicate what you would tell us in a questionnaire? >> That's interesting because you sort of get into more deep philosophical questions here, but like which is the right benchmark? You know, is it what you say you are or is it what the computer says you are? >> Yeah. And that's a fascinating question. Or is it what other people say >> you are? Right. So in this case, we're pitting the computer against others in terms of trying to replicate what you would say on a questionnaire, but maybe it's the other people in your life who actually have a much better read. What you can do to try and disentangle some of that, or at least see who is right, right? Who has the right um answer to who is Kevin? Um, you can see how both your self-reports or the other-reports of the people in your environment or the computer-based predictions are good at predicting other life outcomes, other stuff that we know about you, such as your life satisfaction, which profession do you choose? And then you can see, okay, if the computer says you're extroverted, are you more likely to be a salesperson? Or if you say you're extroverted, are you more likely to be a salesperson?
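The "Sherlock Holmes game" described above, averaging a trait score over everyone who liked a page, then scoring a new user as the mean over the pages they like, can be sketched in a few lines. This is purely illustrative and not the actual study's code: the pages, users, and scores below are invented, and real models use many more likes and a regularized regression rather than a plain average.

```python
# Toy sketch of like-based trait prediction: learn an average trait score
# per liked page from a training pool, then predict a new user as the
# mean over the pages they like. All data here is invented.
from statistics import mean

# Training pool: each user's extraversion self-report (standardized-ish)
# and the set of pages they like.
training = [
    (1.2, {"Lady Gaga", "CNN"}),
    (0.8, {"Lady Gaga"}),
    (-0.5, {"CNN", "Chess Club"}),
    (-1.1, {"Chess Club"}),
]

# Step 1: average trait score of everyone who liked each page.
page_scores = {}
for score, likes in training:
    for page in likes:
        page_scores.setdefault(page, []).append(score)
page_scores = {p: mean(s) for p, s in page_scores.items()}

# Step 2: "put the puzzle pieces together" for a new user.
def predict(likes):
    known = [page_scores[p] for p in likes if p in page_scores]
    return mean(known) if known else 0.0  # fall back to population average

print(round(predict({"Lady Gaga"}), 2))   # 1.0: Gaga likers averaged 1.2 and 0.8
print(round(predict({"Chess Club"}), 2))  # -0.8
```

Accuracy in the studies is then measured as described in the conversation: correlate these predictions with the self-report questionnaire scores across many users.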
What we know is that even though the computer doesn't fully replicate your self-reports, it can still be just as good at predicting these life outcomes, which kind of goes back to your question of like, yeah, it might know something about you that you either don't know yourself or you're not willing to disclose in a questionnaire. So oftentimes, especially if you combine the two, you actually get an even more accurate reading. So if I take your self-report, which has something like the more subjective experience, like this quality of, well, maybe there's some information in there that really just has to come from you because we can't observe it in data, but then also adding this prediction piece from the computer. So your extraversion level plus your computer-based prediction um of your extraversion, those together are even better at predicting which job you're going to choose, just because we're tapping into these different parts of who you are. >> I gotcha. I got you. Okay. Um well, you know, so I took your test, and um you know, you have a section later in the book where you say, "Okay, you know, we have personality uh types or traits. You know, that's kind of our baseline, but we don't always behave that way." Like you're an introvert. You say in the book that you like to go dancing. I'm apparently someone who um you know likes social situations and has a lot of energy, but I quite happily spent last Sunday alone in a dark room watching the French Open for 6 hours by myself. So the point is, you know, we sometimes behave in ways that are different than our kind of baseline personality trait. But you say actually computers can tell when you're behaving differently. They can tell what state you're in. How is that possible? >> Yeah.
So for me, I mean, on some level it's actually one of the most interesting parts when it comes to recent developments in personality psychology, and also really one of the most interesting parts when it comes to a computer's ability to really understand who you are at any given point in time. Because what personality psychologists I think have realized over time, in conversation with social psychologists who were always like, well, your behavior is determined by the situation and nothing else, right? Forget about this idea that you come with certain personality traits and like a certain genetic makeup that is also determined by your upbringing. You're just a blank slate and your behavior is just dependent on the situation. And they always were in conversation, or almost like in a friendly fight, with personality psychologists who insist that, no, there's something about Kevin that makes him behave consistently across different situations. And the two have in a way agreed, I would say, that well, there is something core to Kevin's personality, and that's a general tendency to behave maybe somewhat more extrovertedly. So, that's a preference for behavior that you're partially born with and that you partially um grow into as you're being raised as a kid. But we also know that we're not always the same. And not in the sense that we're hypocritical and just flip-flop around completely randomly, but in the sense that who we are at the core interacts with our environment. Right? So if both of us spend time um in a social setting, with friends at a bar, at a club, for me it's in the classroom. So even though I think of myself as more introverted as a tendency, I can totally step it up in the classroom, right? Especially in an MBA classroom you have to be somewhat entertaining. That's the main thing that you have to do there, be entertaining.
So depending on the situation, both of us can be somewhat more extroverted but also more introverted. You just mentioned, like, well, when you're sitting at home by yourself, yeah, you probably feel a little bit more quiet as opposed to more outgoing and social, cuz that's what the situation dictates. And the interesting part, and this is where psychologists kind of agreed, the social psychologists and personality psychologists, is those deviations are not completely random, right? There's a certain system to whether we feel more extroverted compared to our baseline or more introverted. If there's other people around and the situation is social, yeah, maybe we feel more extroverted. If the situation is somewhat more kind of quiet and reserved, there's no one around, we probably both feel more introverted. So, we can make educated guesses of whether you might be moving up from your average or you might be moving down. And that's what computers can do as well. So they can get a sense of, well, generally speaking, Kevin seems to be rather extroverted, but actually, based on let's say your um data that gets captured by your smartphone, I see that currently, sitting at home, he's not left the house, there's not really any ambient sound going on other than the TV playing, and there's no other people, cuz we can see that there's no phones showing up in the same location at the same time. So it seems like he's in a somewhat more quiet spot. So now let me adjust my prediction and say, "Yeah, he's generally extroverted, but right now, given everything about what I know about the context, he's probably feeling a little bit more introverted than usual." And that is a really interesting insight just into who you are in the moment, but could also be incredibly helpful and valuable when it comes to figuring out, like, what advertisements we show you. Right?
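The baseline-plus-context idea just described, start from a person's trait-level extraversion and nudge the momentary "state" estimate using signals a phone might observe, can be sketched roughly as follows. Every signal and weight here is invented for illustration; real systems would learn these weights from data rather than hand-code them.

```python
# Toy sketch of context-adjusted state prediction: blend a stable trait
# baseline with a situational pull inferred from (hypothetical) phone
# signals. Signals and weights are invented, purely illustrative.

def state_extraversion(baseline, at_home, others_nearby, ambient_db):
    """Adjust a trait-level extraversion score toward the situation's pull."""
    situation = 0.0
    situation -= 0.4 if at_home else 0.0           # quiet, private setting
    situation += 0.5 if others_nearby else -0.3    # company vs. nobody around
    situation += 0.2 if ambient_db > 60 else -0.1  # lively vs. quiet soundscape
    # Blend: mostly who you are, partly where you are.
    return 0.7 * baseline + 0.3 * situation

# "Kevin": extroverted baseline (1.0), but home alone with the TV on quietly,
# so the momentary estimate dips below his baseline.
print(round(state_extraversion(1.0, at_home=True, others_nearby=False,
                               ambient_db=45), 2))
```

The same structure runs the other way for an introvert in a loud, social setting: the situational term turns positive and the state estimate rises above the baseline.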
If I'm a marketer, I'm like, "Oh, Kevin is really usually extroverted, but currently he's in a somewhat introverted situation. Do I now maybe want to show him stuff that brings him back to his baseline um level of extraversion? Or maybe now is not a good time, because he's not thinking of himself as an extrovert. So, there's all of these fun dynamics that are not just related to understanding, but also potentially to this second step of influence." >> Yeah. And that's an interesting question, because I've heard people describe, you know, not that these algorithms, at least yet, have free will, but um in some sense they operate, they quote want to operate in a world that doesn't change much, because then that enhances predictability. So they want me to always be an extrovert. So if I'm displaying, I don't know, introverted characteristics online or through my phone or whatever, do they then say, okay, let's send them the introvert ads, or do they force me back to the extraversion? I mean, again, I'm kind of anthropomorphizing these algorithms, but um do they care what state you're in? Um, or do they just want to be able to identify the state, feed you the ads in that state, or are they kind of just trying to nudge you to kind of always stay the same? I don't know if that's >> such a great question. I've been thinking about this question for the last couple of weeks a lot, because the way that I'm currently thinking about it is that in a way algorithms are not optimized to take risks. What they do is, as you said, like, if they figure out that you're extroverted, the least risky option for the next ad is to show you something extroverted. Um, and unless you optimize them for some kind of serendipitous exploration, or some kind of optimized exploration that's based on their understanding of what you might be going through in the moment, they're not going to do that. Right?
And the concern that I probably share with you, based on how I interpret your question, is that they would actually over time just shrink your complexity by constantly optimizing for your average. It's like, here's who I think you are, because I don't want to take risk. What I'm optimized for is showing you something that I can reasonably believe, based on everything that I've observed before, you're going to like, or at least you're not going to hate. Right? It's something that on average you actually respond to pretty well, and that's what they're optimizing for, unless we actually tell them to optimize for something else, and that is, how do we keep you complex? And how we keep you complex in a smart way would actually be tapping into this context, right? It would be saying, like, okay, instead of just throwing darts and serendipitously showing content that comes from all over the place, and most likely you're not actually going to enjoy because it doesn't fit your general profile of things that you like, there seems to be like a window of opportunity to show you something that's more introverted, because you actually currently seem to be in a slightly more introverted situation. So now is a good time to keep pushing those boundaries and keeping you complex in the experiences that you have. So I 100% agree in that it's like such an interesting question, because these agents are not trained for that. They're not optimized to take risks, and they're certainly not optimized currently to kind of keep you complex given the situation that you're in.
So then I guess, you know, kind of going back to sort of the implications of this, there is a risk that the more we're influenced by these algorithms, the less we, kind of like I said, change our state, the less complex we become as people. >> Um >> I think of it as a... my current working title for the paper is the basic effect, because it essentially... it all makes us more similar and it all makes us so shallow, right? We're always the same person. It's not just that we shrink as individuals, but we also look more similar over time, because it's pulling us in the direction of the average um of the population. So, who knows if that's going to fly as a paper title. But um at least it gets people's attention. >> Well, it's interesting. Not to get too kind of grumpy-old-man-ish about it, but I mean I do find that, you know, if you spend a lot of time online um I guess in that kind of non-complex state, and then you go out in the physical world, all of a sudden you're buffeted by things that aren't being controlled by an algorithm and, you know, your mood can change very quickly. And I'm wondering, maybe part of that is because, you know, you've gone from this almost forced simple state to having to deal with more complex stuff. >> Yeah. And I think sometimes that can be incredibly liberating, right? It's like, okay, now I actually did find a movie just walking past a movie theater that I otherwise would have never seen, or maybe I found this coffee shop and restaurant because I wasn't using Google Maps. My concern for actually mostly the next generation is that it's going to be harder for them to deal with the complexity and messiness of the real world. Right? If you think about it in the context of like the conversations that we have with algorithms, they're first of all customized to you. So they're much more likely to speak your language.
They're also much more likely to do it in a very nice and constructive way. Right? If you have um an argument with an algorithm about something, yeah, they might argue for the other side. They could do that, but they're still going to do it in a very nice and constructive and polite way, because that's what they were trained for. That's the guardrails that companies put in place, which makes sense, but also means that if that's the only thing that kids interact with, they're going to lose the ability to now deal with a kid in the playground that's pushing them over and is not going to have the argument in a very nice and kind way. So, and we've gone through this, right? We've experienced the messy world and like the arguments, the conflict, the tension, the emotions. But my concern is that like the next generation that interacts with these chat bots a lot more than potentially with other human beings, that they're just going to lose this ability to argue with someone, get into a fight with someone, and still come out somewhat okay on the other side. [Music] We talked a little bit about how your digital footprint can be captured. Um, and I think a lot of that's intuitive to a lot of us, certainly if we talk about like Facebook or Google searches, but there's other stuff in the book that was even kind of more shocking, particularly thinking about images um versus language. And uh there's, you know, you quote some research that says, you know, computers can accurately predict your personality, your sexual orientation, even political ideology just from your face.
And um which was pretty shocking. And I think you were actually, I think, skeptical of that work when it first came out um but you're no longer, I think, as skeptical. So can you kind of explain that to us? >> Yeah, still skeptical, I just wouldn't rule it out. So I think the interesting part of images, right, could be pictures, could be videos, is that they come with very specific challenges when it comes to ethics, because you can leave your phone at home, you don't have to post on social media, but the moment that we could make those predictions from something like your face or just an image of you, that just means that, like, I can do it anytime that I get a picture of you, whether that's a picture that you posted or maybe you're just walking in the background of someone else's picture and they upload it, and with facial recognition I can immediately um tag you in that picture. So the reason for why I'm very um interested and intrigued by this research is just the implications. But the main idea behind it is that when you post pictures, some of that signal comes from grooming, right? So some of that signal comes from extroverts, for example, um being more likely to put in contact lenses, like, of blue eyes. So when you look at the average image of an extroverted woman in this case, you see that their hair looks blonder, which probably means that they're dyeing their hair more, because there's no reason genetically that they should be blonder on average. And they also seem to have bluer eyes. They also seem to be much better at taking pictures, because you can't see the nostrils usually. Um, so they probably do the duck face from above, because they figured out that that makes their face look slimmer. Introverts on the other hand just don't seem to care that much. So you do see the nostrils. Usually there's an outline of glasses.
So again, if you kind of combine this with the fact that we know introverts typically are more inclined to read and so on, that actually probably makes sense on some level. So some of it is just um the way that you groom yourself, the way that you go through life, and the activities that you engage in. Now the part that really is somewhat concerning, and where a lot of people are skeptical about what does the research actually show, is looking at actual facial features. So strip away all of the grooming, from beards, from hair, from makeup, and just look at the features of your face. Um, could that be predictive of some kind of personality traits or other dispositions? And I remember the first papers coming out of a good friend of mine, Michal Kosinski, who was a pioneer in the space and who I respect um very much, cuz he knows what he's doing. And I remember him publishing this stuff, and it's like, there's just no way that this can be true. That sounds like pseudoscience. You know, we've gone through this already in different centuries, where people tried to say how different facial features relate to um certain character traits, and it's always been debunked. So I was very, very skeptical going in. And I remember him giving a talk and just saying, like, look, I understand that you're skeptical, this is what we observe in the data, and you know, I'm happy for people to replicate this, but let me give you at least a few reasons for why theoretically this could be the case. And the reasons for me were actually so compelling that I thought, you know, let's think about it in a little bit more open-minded way. And some of the reasons were essentially... you can imagine that, um, take extraversion. If you are a really beautiful kid, right, like super symmetric face, rosy cheeks and so on, chances are that your environment is going to respond to you in a much more positive way. You're constantly getting smiles.
You're constantly kind of being told, "Oh, how beautiful are you," da da da. So, you get a lot of very positive social feedback. Now, the likelihood that you might also turn out a little bit more social and extroverted and trusting in other people could actually go up. And there's research showing that this is the case. If you have a somewhat more attractive physical appearance, people seem to become more extroverted just because you get a lot of positive social feedback. Another pathway could be hormones, right? So, we know that there are certain types of hormones, like testosterone, that um very much influence our behavior and the way that we show up. It's essentially related to kind of being somewhat more aggressive and assertive. But we also know that hormones shape our facial features. So there might just be kind of certain parts of biology that determine both behavior and the way that our faces look. So if you take some of these pointers of, like, no, there are certain pathways by which this might play out, then it's also conceivable that computers, just because they can take in so much information, might pick up on these subtle cues that we as humans just miss. >> That's a good explanation. I mean, how does that then feed into... I mean, I could see, you know, how that might feed into kind of personality characteristics. Uh, although I would have to be an exception in that instance of symmetric features leading to extraversion, but um >> on average, >> on average, right, there's always outliers. Um, but what about like political views or sexual orientation?
Political views I could see being more environmentally shaped, but sexual orientation not so much. >> Well, with sexual orientation, there were again some arguments about to what extent it is grooming and to what extent it is just facial features. And here, and I'm not making that argument myself, I'm channeling what Michal might say, there are still arguments that there are different hormones at play, right? Some of the things that make you somewhat more feminine as a man might also influence your facial features. So I think there, some of the biological pathways are probably the more likely ones. But in this case, the verdict on what might be driving some of these predictions, and to what extent they hold when we fully control for all of the facial features, is really tricky, right? You can control for some of them, but then, well, what do your eyebrows look like? There are a few things that are very, very hard to control for. So I think it's a trickier question than with personality. >> Okay. Well, thanks for that. So let's talk about some of the implications of this ability of these algorithms to know who we are to a shocking extent. I think the first things that spring to mind for all of us are the downsides, but there's also lots of potential upside. As you were talking, you were describing how you could see, say for instance, an extroverted person all of a sudden displaying a lot of introverted characteristics.
I mean, you could also imagine a doctor having that data and saying, "Hey, you're starting to display... if I look at your footprint, it looks to me like maybe you're depressed." Or even, as you were talking: if you're talking to a therapist and giving them your story, they could be looking at your footprint, like, not so sure you're telling the whole story. >> Yeah. >> So how do you imagine some of this being used in a positive way? >> Yeah. And I 100% agree that the downsides are very obvious, right? Both in terms of privacy and in terms of our loss of agency and self-determination. But for me, it really comes back to this idea of the village. In the village, the fact that other people knew me was the only way that they could provide the best advice ever. It was the only way that they could provide support that was exactly what I needed at a certain point in time. So when it comes to psychological targeting at scale, there are many different contexts, ranging anywhere from how you help people accomplish goals that they've set for themselves but are having a hard time implementing. Savings is a classic example. If I can understand what it is that really motivates you, what some of your needs are... let's say you're somewhat agreeable; you're the type of person who really cares about their loved ones. Well, maybe convincing you to save is not going to work if I just tell you to put some money in the bank so that it sits there, or to get ahead in life and gain a competitive advantage over the people around you. What you really want to hear is that saving actually allows you to protect your loved ones in the here and now and in the future. So speaking their language, tapping into their motivations, is oftentimes a way in which we can make these difficult behaviors easier, right?
Giving up something in the here and now, not getting this extra gadget that you wanted, a PlayStation, the watch that you've been eyeing for a while, because you put some money in the bank and maybe you're going to need it for a rainy day. It just makes it easier for people to accomplish these goals. That's one study that we've run. For me, the context that you mentioned, mental health, is probably, besides education, the most promising one, just because the baselines currently, when it comes to diagnosing or treating something like depression, are just so bad. If you think about diagnostics in the context of depression, you have to be doing pretty poorly to go out and find a therapist who then diagnoses you with depression. And you have to actively reach out, which in the context of depression is extremely hard, because one of the hallmarks of depression is that you turn inwards a lot more and don't interact with your social environment as much. So one of the ways you could imagine reinventing diagnostics, or at least building an early warning system, is to ask: is there any way that we can passively see that your behavior starts to deviate from your typical baseline? Phones, or any type of wearable, are in a way the perfect gadget for that. For example, and this is real research that we've done in my lab: can we, based on your smartphone sensing data, see that maybe you're not leaving your house as much as you typically do, tapping into your GPS records? Or that there's much less physical activity? Maybe you're not taking as many calls as you typically do. Again, it might be nothing. Maybe you're just on vacation and having a great, relaxed time.
But why don't we use it almost like a smoke alarm that says, "Hey, something seems to be off," and try to catch you early? If we can catch you early and see deviations from your baseline, why don't you try to reach out to someone right now and get some support? Or, if you're someone who knows they have a history of suffering from depression, you could actually nominate someone that you trust and love and say: why don't you also get these early alerts that tell you, hey, it might be nothing, maybe I'm having a blast on vacation, but reach out to me to check in and see if you can support me. So we don't have to wait until you enter this valley of depression that is really hard to get out of; we catch you early and try to supply you with the support that you need. Which is the second part, where AI and an understanding of who you are and how you operate come in extremely handy: treatment. Think of how Amazon makes recommendations about what you should buy next. You could imagine the same principle being applied to therapy. We know that not everybody responds to all treatments in exactly the same way. Maybe there's a treatment that works better for you. For Kevin, all I need to do is send him out into nature, and that's probably going to help him recover more quickly. For Sandra, that doesn't work at all; she doesn't care about nature. She really needs to be surrounded by people that she loves. So that is a treatment that is much better suited and much more effective.
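The "smoke alarm" idea described above, flagging deviations from a person's own baseline, can be sketched very simply. This is a minimal illustration with made-up daily sensor summaries (the metric names and thresholds are hypothetical, not from the research discussed): compare today's values against that person's own history and flag anything more than a couple of standard deviations out.

```python
from statistics import mean, stdev

def baseline_alert(history, today, threshold=2.0):
    """Flag metrics whose value today deviates more than `threshold`
    standard deviations from that person's own historical baseline."""
    alerts = []
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no variation in the baseline, nothing to compare against
        z = (today[metric] - mu) / sigma
        if abs(z) > threshold:
            alerts.append(metric)
    return alerts

# Four weeks of (hypothetical) daily summaries from phone sensors.
history = {
    "km_travelled":   [5.2, 4.8, 6.1, 5.5, 4.9, 5.8, 5.3] * 4,
    "calls_made":     [4, 5, 3, 6, 4, 5, 4] * 4,
    "active_minutes": [60, 55, 70, 65, 58, 62, 59] * 4,
}
# Today: barely leaving the house, no calls, little movement.
today = {"km_travelled": 0.4, "calls_made": 0, "active_minutes": 12}
print(baseline_alert(today=today, history=history))
```

As in the conversation, a triggered alert is only a prompt to check in, not a diagnosis; the person might simply be on vacation.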
So the same way that Amazon recommends the best product for you, we could say: based on everything we know about you and everything we know about other people, this is the type of treatment that you might respond to best. Or, now building on AI and these large language models: can we offer some kind of conversation and therapy to people who otherwise can't afford it or don't have access? I'm not saying we should be replacing therapists with AI that can take your data and then customize its therapy to you, but there are so many people who currently can't afford it or don't have access. So much so that, I think, for every 100,000 people worldwide looking for a therapist or for help, there are 13 human professionals. So there's this huge gap in terms of supply and demand. And it's obviously not even evenly distributed, right? If you live on the Upper East Side in Manhattan, you're not going to have a problem finding a therapist; there's plenty. You can find a therapist for your dog. But that's certainly not true for other parts of the world. So in those cases, imagine having an AI that not only can read up on the latest science, right, the AI can probably read a paper that got published six months ago that tells us about a more effective way of treating PTSD or depression, but can also learn over time what is best for you and how to communicate with you in the most effective way. >> Yeah. I hadn't thought about that application; that makes a lot of sense. As you were talking, I was thinking of something like an Oura Ring with a lot more functionality. The Oura Ring, for those of you who don't know, is just a little device you wear on your finger, and it tells you, hey, you didn't move much today, or you didn't sleep that well last night. Stuff like that. And it's helpful; you can adjust your behavior. But really, you're thinking of something much, much richer. >> Yeah.
But I think it's a great example, and they are currently implementing AI coaches. The idea is that tracking is to some extent valuable, because we want to see how well we sleep and what our physical activity levels are. But what most of us want in the end, and again, this is what the neighbors in the village were good at, is advice. Not just mirroring back to you, here's what your life looks like, but here's what you could do to make it better. [Music] >> Okay. Well, that's a good way, I think, to pivot to the last third of your book, which talks about how to make data work for us. You say you focus mainly on principles, not specific recommendations, but there are some general ideas you've got that I think are worth exploring. And the first thing you talk about, and I think this is getting into how you avoid some of the negative consequences of psychological targeting, is an opt-in versus opt-out framework for collecting personal data. So maybe you can explain how that works now and how it might work in a better setup. >> Yeah, absolutely. And as you said, this is mostly about protecting people from the most egregious abuses. The way it's currently set up, in most cases, your data is just being tracked continuously once you've consented to the original terms and conditions, which obviously nobody reads; nobody has time to read them. So by signing up for a product, most of the time these products can grab as much data as they want and do with it whatever they want. They can use it to make their products better, that's true, but they can also sell it on to third parties, and you have no control over that.
You could opt out, in some cases at least; you could say, here's the data that I don't want you to track, and I still want to use the product without the tracking, which is sometimes possible, not always. But the burden of opting out is on you, and we all know that we're lazy, right? As a species, the last thing we want to do is go through all of our products and opt out of every single one of them after reading the terms and conditions really carefully, because now we understand what might happen with our data. There's just no way we're going to do that, because we only have 24 hours in a day and hopefully have better things to do than opting out of the data tracking policies of every product and service that we use. The switch to opt-in essentially makes use of your laziness as a superpower. That's how I think about it. If it's true that we need a good reason to give someone our data, and that otherwise laziness takes over and we're just not going to opt in, that means a company now has to convince me that by using my data they're making the product so much better that I say, okay, actually, you know what, I should change these settings so that they can use my data. And in many cases that might be true, right? If you think of YouTube, YouTube has an option to get rid of your behavioral history entirely and start with a blank slate every single time you open it, so you don't get the recommendations for which videos to watch. And on some level, it's extremely annoying. I ran this experiment where on my desktop I kept the recommendations, so it knows which videos I watched before, and on my phone I switched them off, just to get a comparison. So annoying.
My son is crying and I want to pull up a quick video for him to watch, and I can't find it because there's no history. So in this case, YouTube would probably convince me: no, actually, the value you're adding is so good that I might actively opt into some of the data tracking. But it would really put the onus on companies to offer a product that's so much superior with data tracking that you're willing to accept it. And in the absence of you taking action, your data is protected. >> I've got two questions here. Well, the main question is: wouldn't that just put us into a world like the one we have now with cookies, where you go to a website, it says accept all cookies, and you just end up accepting, with no real idea what that's doing, mainly because you want to use the product? I can imagine opening up my Gmail and being asked, do you want to opt in to allow data collection? And, look, I really need to use my email, so yes. I'm simplifying, but how would you avoid that situation, where they just withhold the service completely if you don't opt in? >> Perfect question, because I 100% agree with you. If we do it the same way we do with cookies, and you can't use the service unless you say yes, then most people are going to say yes. We can talk in a second about what I actually think is a smarter way of dealing with the fact that you do want the service, the convenience and the personalization, but you don't want to give up your data in the first place. Now, the one thing that the companies and websites that take cookies seriously, and do want to support you in making the right choice, actually do is make the most obvious selection the one that accepts only the necessary cookies. Right?
In most cases, the websites that try to grab as much data as possible under the regulation that requires you to accept cookies make the option of accepting all of them the most salient one. You can go in and untick the boxes, but nobody does that, and the big highlighted button is the one that says accept all, so people end up sharing their data anyway. There are a few websites, and I always appreciate them when I see them, where the highlighted, most obvious button is the one that says "I only accept the necessary cookies." In that case, it's not more work for you to protect your privacy. It's about making it easy for people to accept only the necessary cookies, and the same could be true for data, right? The mandate could be that the option made most salient is the one that protects your data. Then you get around some of that issue. Again, it could be that we're still not able to use the services without giving our data. That becomes another regulatory question, and also a bit of a competitive question: can the companies that require you to give away your data still operate if they have competitors who don't? And here is where I think the ideal solution lies, coming back to my YouTube example. What I said is that YouTube might be able to convince me to give my data just because I really want the recommendations, because it's annoying to start from scratch every single time. In an ideal world, I would actually get those recommendations without having to give my data to YouTube. And that sounds impossible, because you seemingly need the data to make the recommendations.
But that's no longer true with the technology that we have, because what YouTube can do is make use of the fact that your phone is essentially a supercomputer, right? Your phone is like a million times more powerful than the computers that were used to launch rockets into space. Instead of me sending my data to YouTube and YouTube making recommendations, they can send their intelligence, their recommendation model, directly to my phone. They essentially send the model to my phone, where it updates locally based on my viewing history. All of my data just sits on my phone; it never leaves the phone. YouTube just sends me the algorithm to process locally and make recommendations: okay, based on what you've been viewing before, this is the video that you probably want to show your son when he's not falling asleep. So it makes the same recommendations, gives me the same convenience and the same personalization, but YouTube doesn't have to see the data. Now, we still want to make sure that everybody benefits, right? YouTube's algorithm should be getting better over time, and for that, my data is actually helpful. So instead of sending my data to YouTube, I can still send back an updated version of the intelligence. What YouTube gets is: here's how I want you to tweak your algorithm to make it a bit better. But that's just intelligence that I'm sending; I'm not sending my data. And for me, that's a total game changer, because now I can say, hey, I get exactly the same benefits, but without the downside of you having my data. And that solves so many problems for consumers, because in a way I can now have it all, in a way that I could never do in the village.
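The round trip Sandra describes here is essentially federated learning: the model travels to the device, trains on data that never leaves it, and only the model tweaks travel back. As a minimal sketch, assuming a plain linear model standing in for a real recommender and entirely made-up numbers (the function names `local_update` and `server_aggregate` are illustrative, not any real API):

```python
import random

def local_update(global_w, X, y, lr=0.05, epochs=200):
    """Runs on the phone: fine-tune the server's model on data that never
    leaves the device, and return only the weight delta (the 'tweaks')."""
    w = list(global_w)
    n = len(y)
    for _ in range(epochs):
        # gradient of mean squared error for a linear model y ~ w . x
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j, xj in enumerate(xi):
                grad[j] += 2 * err * xj / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return [wj - gj for wj, gj in zip(w, global_w)]

def server_aggregate(global_w, deltas):
    """Runs at the provider: average the deltas from many devices and apply
    them; the raw per-user histories are never seen by the server."""
    k = len(deltas)
    return [g + sum(d[j] for d in deltas) / k for j, g in enumerate(global_w)]

random.seed(0)
true_w = [2.0, -1.0]        # the pattern hidden in users' on-device data
global_w = [0.0, 0.0]       # the server's initial model
deltas = []
for _ in range(5):          # five hypothetical phones, one training round
    X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(40)]
    y = [2.0 * a - 1.0 * b + random.gauss(0, 0.1) for a, b in X]
    deltas.append(local_update(global_w, X, y))
global_w = server_aggregate(global_w, deltas)
print(global_w)  # close to [2.0, -1.0], learned without sharing raw data
```

The design point is exactly the one in the conversation: the server's model improves from everyone's data, yet the only thing transmitted is intelligence, never the data itself.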
I could never get the support from my villagers if they didn't get to see my data, if they didn't observe who I was and what I wanted. Now, in this world of technology, we can actually do it. And I would argue that it's also good practice for companies, because the question I always get is: why would companies agree to that? Don't they all just want to collect data? I don't think so. Unless you're in the business of selling data, in which case you probably wouldn't go with that strategy. But if you're not, you're much better off providing the same service, convenience and personalization without sitting on this pile of gold. Collecting personal data means sitting on a pile of gold that you now have to protect. And if you look at the number of data breaches, the costs associated with them for companies have gone up rapidly over the last couple of years. So it's a huge financial risk for companies, and it's also a reputational risk, right? If it gets out and people hear about data breaches, that's a reputational hit you're taking. On the other side, if you can be the company that says, "Hey, we offer exactly the same kind of product as our competitors, but you know what? We don't actually need to see your data. Your data is protected," people might actually switch to you, because they get everything they want without the risk of their data being abused. >> Yeah. That seems to make a lot of sense. Can you just explain, I want to make sure I understand, what gets sent back from the phone to the company in that situation? It's not all of your data. Is it an anonymized version, or is it reduced to a set of, I don't know, factors, as opposed to all the specifics? How does that work? >> It's not the data itself. It's essentially tweaks to the model.
Take a regression analysis. A regression analysis essentially tells you how certain inputs, certain variables, are associated with an output. What we get are coefficients. A coefficient tells us: if you go up in X by one, here's what happens to the outcome variable. So what I'm doing here is essentially sending you updated coefficients. I'm telling you, here's how I want you to update the model, but you don't know anything about the underlying data. >> Gotcha. Okay, that's a great explanation. Thank you for that. Another idea that you talk about in the book is this notion of a data co-op, or a data union, which is people banding together to control and manage their data. We had Rana Foroohar on the show a couple of years ago, and she talked about data unions in her book, and I got very excited about it and tried to find one and join one, but I didn't have any luck. So could you maybe explain the concept again, and then: are there practical steps we can take now to join one of these data co-ops? >> Yeah. It's a great question. The idea behind data co-ops is essentially: how do you not just alleviate some of the risks associated with your data being out there, but actually maximize the utility and the value that each and every one of us can get out of the data that we generate? They are member-owned entities of people who have a shared interest in using their data in a certain way. You could imagine expecting moms, which is my go-to example these days: they want to pool their data to understand, what should I be doing, based on my genetics, my medical history, my lifestyle, my environment, to make sure that I'm healthy and the baby is healthy? Now, I don't want this data to go to a pharma company, because I don't trust them.
But I would happily pool my data with the data from other expecting moms in what is essentially an entity that has fiduciary responsibilities to its members. It's legally obligated, the same way financial institutions are legally obligated to act in the best interest of their customers, to help me make the most of my data. So in this case, we could figure out, based on these different trajectories, medical histories and genetics, what a specific woman should be doing at a specific stage of her pregnancy. Now, the hard part, which I think gets at why we don't see more of these data co-ops even though on a conceptual level they make a lot of sense, is that they're not easy to set up. It needs coordination, in this case of hundreds if not thousands of women who say, okay, let's get together and start one of these entities. Or it needs a visionary who says, I'm the person who's going to put in all of this effort, and who gets other women behind it. And most of the time they're run as nonprofits, right? It's not that you establish one and then make a lot of money running it, because, again, coming back to this idea: they're member-owned, and we want to create value for the members, not necessarily the entity. But there are a couple of examples of existing data co-ops that I think are very compelling. My favorite is one in Switzerland that operates in the healthcare space, called MIDATA. They tackle different problems, but one of them is essentially understanding and better treating multiple sclerosis, which is one of these diseases that is so poorly understood because it's determined, again, by anything from your genetics to your lifestyle to your medical history.
What they do is they have patients who are part of the co-op, but also non-patients, because you need a comparison group to see how symptoms track in healthy individuals versus individuals suffering from MS. And the benefit that members of this co-op get is that by sharing their data with the co-op, they not only contribute to a better understanding of the disease itself, but they also benefit directly. The data co-op now has access to the symptoms and treatments of thousands of people. And the same way that Amazon can say, hey, here are the products that you might respond to most positively, the co-op can say, well, we've seen other patients with symptom trajectories similar to yours. They then communicate directly with that patient's doctor: hey, based on everything we know about your patient and everybody else in our data set, why don't you try treatment X, because other patients with similar trajectories have responded really positively to it, as opposed to something else. And then report back to us on whether it was actually helpful in the end. That's a completely different model, right? Usually, if you're one of these people who has a disease that's poorly understood or still pretty rare, your best hope is to give your data to a pharma company. The best-case scenario is that they develop a drug that you can then pay millions of dollars for, so that you benefit. It's absolutely crazy. In most cases, you don't benefit at all, because either you don't have access to the drug or it doesn't come in time for you to reap the benefits. Data co-ops completely flip that on its head, because you can benefit immediately. They're just harder to set up, and I think that's one of the reasons why we don't see them as much yet. But there's a world in which existing entities could take on that role.
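The "similar symptom trajectories" recommendation described above is, at its core, a nearest-neighbor lookup over the co-op's pooled records. As a toy sketch with invented data (the record format, scores and treatment names are all hypothetical): find the members whose trajectories are closest to the new patient's, and suggest the treatment that worked best for them.

```python
def suggest_treatment(records, new_trajectory, k=3):
    """Recommend the treatment with the best average outcome among the k
    co-op members whose symptom trajectories are closest to the new patient."""
    def distance(a, b):
        # squared Euclidean distance between two symptom trajectories
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(records,
                     key=lambda r: distance(r["trajectory"], new_trajectory))[:k]
    # Among the nearest neighbours, average the reported outcomes per treatment
    # and pick the best (1.0 = full recovery, 0.0 = no improvement).
    outcomes = {}
    for r in nearest:
        outcomes.setdefault(r["treatment"], []).append(r["outcome"])
    return max(outcomes, key=lambda t: sum(outcomes[t]) / len(outcomes[t]))

# Toy records: monthly symptom-severity scores, treatment tried, and outcome.
records = [
    {"trajectory": [8, 7, 7, 6], "treatment": "A", "outcome": 0.9},
    {"trajectory": [8, 8, 7, 6], "treatment": "A", "outcome": 0.8},
    {"trajectory": [8, 7, 6, 6], "treatment": "B", "outcome": 0.3},
    {"trajectory": [2, 3, 2, 2], "treatment": "B", "outcome": 0.9},
]
print(suggest_treatment(records, [8, 7, 7, 5]))  # prints "A"
```

A real system would of course use far richer features and proper clinical validation; the point is only that pooled member data makes this kind of peer-based suggestion possible at all.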
So, the most promising suggestion I've heard, which Sandy Pentland at MIT argues for, is: why don't credit unions play some of this role? They're already trusted entities that manage sensitive data on their members' behalf. They could be the entities that help facilitate the processing of our data. And again, because they're legally obligated to act in your best interest, they might also be the ones pushing for technologies like federated learning, where they don't want to hold the data themselves but want to facilitate the exchange. So they help you connect your data to the providers you want to share it with, and then it's a trading of intelligence rather than a trading of data. >> So it sounds like you would characterize where we are now as the very, very early stages of that. >> Yeah. >> I'm curious, because we're bumping up against our time limit now: what online tools do you use? How do you protect your privacy and get the most out of your personal data? >> It's such a good question, and I think looking at myself is one of the reasons why I've become a lot less optimistic about just putting people in charge, right? Like with the cookies. I think about this all the time, and I'm very much concerned about the privacy risks and my loss of self-determination, and still I can't keep up with it. Even though I understand the space fairly well, I don't have the time and I don't have the energy to manage it properly. The one area where I might be a little more mindful than other people is the phone, just because the phone is like a person looking over your shoulder 24/7; it knows exactly where you go, who you meet and so on. And we mindlessly accept all of these requests from the apps that we download.
You have a weather app that wants to tap into your microphone, your GPS and your photo gallery, and you think: you clearly don't need access to my photo gallery to tell me what the weather is going to be like in New York tomorrow. So with that I'm a little more mindful. But generally speaking, observing my own failure is why I advocate for technologies that just make it easy for people to do the right thing. >> Could we have AI bots that obfuscate who we are? Say I have a personal bot, and when my data gets sent out, it takes it and throws a whole bunch of random stuff in there, so that all of a sudden I just look like a random data generator. Is that possible? >> We could. I just don't think it's the ideal solution, because then you don't get the upside, right? I don't want you to live in a world where you have to choose: I can trick the algorithm so that it's not able to figure out who I am and what I want, but then I also don't get what I want. That's the terrible YouTube example, where you're just entirely lost. There's so much information and so many products out there that you need some kind of filtering. I'd ideally figure out a way where you get the personalization without the risk of having your data out there. >> Gotcha. Okay. Well, I think that's a good place to leave it for today. Sandra, thanks so much for writing the book and taking the time to share your ideas with us. Without question, this is a topic that impacts everyone listening today. So thanks for joining us. >> Thanks so much. >> Okay. Well, the book is called Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior.
So please make sure to go get a copy and follow Sandra's work, because, as I think you can tell, not only are these very important ideas, but they're not being discussed enough in mainstream media. So, for all of us here at Top Traders Unplugged, thanks for listening, and we'll see you next time. >> Thanks for listening to Top Traders Unplugged. If you feel you learned something of value from today's episode, the best way to stay updated is to head over to iTunes and subscribe to the show, so that you'll be sure to get all the new episodes as they're released. We have some amazing guests lined up for you. And to ensure our show continues to grow, please leave us an honest rating and review on iTunes. It only takes a minute, and it's the best way to show us you love the podcast. We'll see you next time on Top Traders Unplugged. [Music]