Top Traders Unplugged
Dec 31, 2025

The Future of AGI: Not What You Think | Ideas Lab | Ep.44

Summary

  • AI Transition: The guest outlines blockers to AGI—energy, foundational industry data gaps, and agent coordination—arguing we may be a capital cycle or two away from full impact.
  • Energy Infrastructure: Massive new power is needed for AI, with opportunities in nuclear, geothermal, grid upgrades, demand response, and behind-the-meter solutions.
  • Foundational Industries: Manufacturing, supply chain, and construction are data-poor; investments in data normalization, visibility, and skilled trade enablement are near-term opportunities.
  • AI Agents: Agent-human and agent-agent interoperability is clunky, creating opportunities in trust, memory, and marketplaces that benchmark and coordinate agents.
  • Big Tech Dynamics: Walled gardens (e.g., OpenAI, Anthropic, Google/Gemini, Apple, Meta, LinkedIn) hinder interoperability, but economic pressure may force more open standards.
  • Data Centers: Surging power demand (e.g., long wait times in Northern Virginia) highlights capacity constraints and investable themes in compute optimization and energy provisioning.
  • Market Outlook: Expect a boom-retrench-reinvest cycle; near-term alpha likely in infrastructure over pure application-layer plays vulnerable to platform shifts.
  • Societal Shifts: The “Aquarius economy” emphasizes human agency; opportunities include hybrid AI-human services, new third spaces, and platforms enabling authentic creators.

Transcript

>> That all needs to get worked out for these systems to be really smooth, because right now it's very, very clunky to create trust and memory between systems. Then you put that on crack when it's agent to agent, right? Because if there's not a human in the loop, it becomes that much more incumbent to basically have the USB-C for AI, and that just doesn't really exist yet.

>> Imagine spending an hour with the world's greatest traders. Imagine learning from their experiences, their successes, and their failures. Imagine no more. Welcome to Top Traders Unplugged, the place where you can learn from the best hedge fund managers in the world, so you can take your manager due diligence or investment career to the next level. Before we begin today's conversation, remember to keep two things in mind. All the discussion we will have about investment performance is about the past, and past performance does not guarantee or even imply anything about future performance. Also understand that there is a significant risk of financial loss with all investment strategies, and you need to request and understand the specific risks from the investment manager about their product before you make investment decisions. Here's your host, veteran hedge fund manager Niels Kaastrup-Larsen.

>> For me, the best part of my podcasting journey has been the opportunity to speak to a huge range of extraordinary people from all around the world. In this series, I have invited one of them, namely Kevin Coldiron, to host a series of in-depth conversations to help uncover and explain new ideas to make you a better investor. In the series, Kevin will be speaking to authors of new books and research papers to better understand the global economy and the dynamics that shape it, so that we can all successfully navigate the challenges within it.
And with that, please welcome Kevin Coldiron.

>> Okay, hey, thanks Niels, and welcome everyone to the Ideas Lab podcast series. Our guest today is Aubrie Pagano. Aubrie is a general partner at Alpaca VC, which is an early-stage venture capital firm. And before joining Alpaca, she was an entrepreneur. She built and then later sold the online apparel company Bow and Drape. Aubrie joins us today to talk about a white paper she just published about the transition to artificial general intelligence, how it will and how it won't transform society, and also, given her day job, what investment opportunities lie in the near and distant future. So I think it's a topic we're all wrestling with right now. Very timely. Aubrie, really excited to have you on the show today. Thanks for lending us your time, and welcome.

>> Yeah, I'm so excited to be here and talking about this, Kevin. It's honestly what I've been talking about and thinking about for the last six months. So it's fun to have more forums to do it.

>> All right. Well, you know, I've been wanting to do a show focused on AI for a while, and I've been struggling to find the right guest. So when I read your paper, I thought, ah, this is it. And I think the reason is that you're coming at this from a perspective similar, I think, to most people listening to the show, right? You're not an AI insider. You don't build AI models. You don't study them as a researcher. But you do need to understand their impact in order to thrive personally and also professionally. And, you know, I think that's kind of what we're all trying to do one way or another. So perhaps could you start off by just telling us about your professional background, your experience as an entrepreneur, and then how that led to where you are now?

>> Yeah, of course. I will give you all my context. So, first, I think, you know, I've lived through both sides of technology and culture for a long time.
As you mentioned, I built and exited a digitally native brand called Bow and Drape, which was the first mass customization company for apparel. So think of it like a Build-A-Bear for women's clothing. And we scaled that into over 800 department stores. We had shop-in-shops, and so I just navigated a lot of the real world in terms of supply chain, manufacturing, real estate, retail, and consumer. Combine that with my experience: I have a really cross-disciplinary background and education as well, which is, I think, why I like writing so much. I had studied history and literature and had training in primary research. I also spent years consulting before I ran a business, doing primary and secondary research at Fidelity Investments during the great financial crisis. So I come at this from a very entrepreneurial and kind of consulting lens. And then, if you layer on that, you know, the last five years I've been investing, after I exited my business, really in the foundational industries and consumer areas that I ended up building in. So I have been investing behind, you know, supply chain, manufacturing, energy, and have built a track record and whisper networks, which we'll talk about later, where we started to have these conversations. And so, while I'm not an AI expert, I'm obviously now in venture, seeing the front lines of all of this, and I really started to think about, okay, how do we think about this for culture? You know, that's really where we're investing. We're investing in the real world and how it runs, and then investing behind culture. So I come at it from that background, from outside AI looking in.

>> So your paper is titled Our Transition to AGI and the Aquarius Economy, and that reflects two parts, obviously. One is an analysis of the current and future impact of AI, and then the second is more speculative.
You're imagining the contours of the future economic system, which you call the Aquarius economy, and then you sort of work backwards to think about, okay, what might that mean for our lives and also for investment. So I thought, let's start at the beginning, the first part. You say the transition to AGI is the last economic cycle built upon labor scarcity, and the transition is organized by blockers, things that stand in the way of AGI reaching its full economic potential. And that's what caught my interest, because it's the blockers that create the investment opportunities, right? These are the problems that, if someone can solve them, carry huge economic reward. So maybe let's use that framework. Can you start by telling us what you mean by the last cycle built on labor scarcity, and how that's directing AGI investment right now?

>> Yeah. So when we set out to think about the impacts of AI and AGI, artificial general intelligence, on culture and humanity, really, we wanted to first zoom out and say, okay, we don't actually know when this is going to happen. Like, we know that there's a lot of change right now, and we know that the facts on the ground are astonishing and contradictory and kind of incoherent and really, like, existential. So what we think AGI has the promise of doing is really automating away some jobs, but making some jobs so cheap to do that, as you come down the cost curve of inference and compute, labor itself becomes infinitely accessible, and labor becomes no longer scarce.
Over the prior industrial revolutions, we were sort of changing the nature of work, but it had a finite cost in the sense that it was very human-centric, and technology was used as a tool, not as a replacement. And so that's really the precipice that we're on. And as we started to think about that, you know, everyone sees the headlines, right? It's like, OpenAI hit $10 billion in annualized revenue; hyperscalers have invested $200 billion in 2024 alone. But at the same time, the energy demands are so extreme that, you know, by 2030 we need over 150 gigawatts of new power, which is like double California's entire grid. And then we see rural families, like, barely using ChatGPT. And so we have this fact set on the ground that, for most people, I think, is confusing. It's like, we're saying it's going to change the world, but we're also hearing in headlines that there's going to be a bubble, and ChatGPT is cool, but are we really adopting AGI? And so we said, okay, we don't actually know when this economic cycle for AGI will come to fruition. It may happen tomorrow, which I think a lot of tech accelerationists are saying.
But as we dug into the data set, and given my background, which is much more in the real world and the movement of goods globally, I was like, you know, I actually don't think this is around the corner. I think we might be a capital cycle, at least, away from this. And so when we started to think about that framework of what the blockers to AGI are, it's sort of like, what are the blockers to the cost of labor and the cost of inference being so cheap and abundant that it becomes almost like a utility, that it becomes like the internet, which I think is the premise of the promise of AI. And so that was the framework we investigated, and where we saw that there are pretty substantial coordination problems that exist right now before we achieve that.

>> I gotcha. So let me see if I can reflect that back to you. As you were talking, I was thinking about when we had Philipp Carlsson-Szlezak on the show, the chief economist at BCG. He was talking about the potential impact of AI on productivity and economic growth, and you have some of these statistics in your paper too, but they're all over the place: AI is going to increase productivity by a little bit, or a lot. But, you know, he said, look, hey, for AI to impact productivity, it has to replace labor at scale, right? And what that does is it raises real wages, and then people with higher real wages spend on other stuff, and it's the other stuff that creates the new jobs. That's kind of been the historical cycle of how technology gets embedded into the economy. And that reminded me of what you were talking about. So you're really saying, in some sense, what are the blockers for AI to get so cheap and ubiquitous that it replaces labor at scale? Is that right? Have I reframed it in a reasonable way?

>> Yeah.
No, that's right. Like, what do we have to unlock for it to get so cheap at scale that it replaces labor? And I think where we take, I guess, a more pessimistic view is that we actually think, through that cheapness of labor, our comparative advantage in terms of opportunity cost for humans erodes a little bit. So I think in what you just quoted, if I heard it right, it was, oh well, you know, humans are going to go work elsewhere and they're going to get higher real wages. We think that's true in parts, but then we also think that creates a lot of peril in the short and medium term, because it's not actually clear where, you know, the tens of millions of knowledge workers go in the short to medium term. But overall, yes, it's like, how do we even get to the point where that's a problem? We think there are some real investment opportunities in the short term.

>> So let's talk about some of those blockers. The first one, you've already mentioned it, and it's something that's popping up again and again in the headlines: energy abundance. So just summarize what the issue is. Why is that a blocker? And then, putting your VC hat on, what are some of the opportunities for the companies that can unblock it?

>> Yeah, of course. So, again, the way I've tried to write the paper is to be very approachable to a wider swath of people, and so the shorthand of it is: it takes a lot of energy for all of this compute to happen, for AI to run, for these servers to run. And so even if we bet that compute gets more efficient, that we have more efficient chips, which we think in some senses will happen (there's good evidence already that compute costs have come down, even on the sort of linear curve it's been on), it cannot scale without massive amounts of stable power. We just don't have enough power in the US to do it.
You know, there are a bunch of stats. Goldman Sachs has said that data centers will require 50% more global power within the next three years. Anthropic alone said that it's going to need like 50 gigawatts of new power over the next two years. Just to give scale, that's like 4x the peak demand of New York City. And in Northern Virginia, there are data centers with a seven-year wait time. There are just so many examples of this. So obviously this is a massive problem and blocker to us achieving what we think AGI can achieve. And so we see a bunch of opportunities there to invest.

>> What would be an example of an opportunity? Are you talking about ways to bring power online faster, or to optimize the existing power that we have? How do you...

>> Yeah, it's all of the above. It's a wide open net, it feels like. You know, we need more firm power. So I think nuclear and geothermal are the ones that are most interesting to us, because they're energy dense by square foot, and we think of them as sustainable. Not everybody does, but we do. So we just need more power online. We need better grid resiliency; the grid can't handle the existing loads. There is, you know, better demand response that needs to happen. So, the idea of managing how we use the grid during peak times, potentially even peak shaving, and selling energy back to the grid when people aren't using it during peak times: there's a lot of efficiency within the system that hasn't been solved yet. And then there's even some compute optimization tech that needs to happen. And so, you know, we need a lot more energy, either through the grid and the utilities, or outside of them, kind of behind the meter.
And so it's one of the theses we've already started to invest behind, but we actually don't think it's anywhere near getting solved. And so we think over the next two years there's a lot more that can be done there.

>> What about putting power stations on the moon? I've seen Elon Musk tweeting about that. Is that the ultimate sign we're in a bubble, or is that just me not being able to think creatively enough?

>> I mean, part of me thinks that that's almost like billionaires realizing that we're in a potentially post-growth environment, where it's like, yeah, we literally can't put enough power online fast enough. And so it's like, what are the wackadoo ways we can do it? You know, I don't know if that's the most cost-efficient way to get power online, but it is a potential literal moonshot. And so, yeah, I think it more points to the fact that people are going to throw spaghetti at a wall to figure out the fastest, quickest way to get there, because I think without that, you're just not going to see AI advance quickly enough, because right now the models are not efficient enough to do as much as they promise without more power.

>> Gotcha. Okay. So that's blocker number one. And then the second one was fascinating to me. You call it foundational industry resilience. So can you explain exactly what that means and why that's a potential blocker?

>> Yeah, of course. Probably the example that I've used the most is Moravec's paradox, which is this idea that, you know, we talk a lot about all the things in the real world, the real economy, that we think AI and robots can automate. We see the fancy demos of all these robots cleaning dishes, and it's very Judy Jetson.

>> Actually, sorry to interrupt.
I was at a restaurant at this airport hotel last week, and a robot brought out a birthday cake and sang happy birthday to the person next to me.

>> Yeah, exactly.

>> It's like the embarrassing jobs the waiters don't want to do, we'll give them to the robot.

>> Those are going to be automated first. It's probably good utility for them. But yeah, that's a perfect example, right? It's something that is an easy, repetitive task that doesn't have a lot of edge cases. That is something that can be achievable in the short term, like making coffee or bringing out a birthday cake and singing a song. What ends up being true is that the actual human edge cases that happen are incredibly hard for robots to handle. Like the example of picking up a blueberry and dealing with the fact that it drops: that's something that, you know, my 14-month-old can handle. That's way harder for a machine. And so what we actually think is that that's one layer of the real world that needs to catch up. There are edge cases that are very, very challenging to overcome before you can actually automate away the vast majority of these jobs, which take up a huge percentage of the economy. The other piece is that you can't really have good AI and automation without good data, right? They all run on good data. And what's really interesting to us, and what we've seen again as operators and investors, is that these, we call them foundational industries, but we're talking, just to be clear, about manufacturing, supply chain, agriculture, real estate, construction, these big physical movers of the economy: they have terrible data. Like, in manufacturing, 65% of manufacturers don't have usable data. And so, you know, I think it's very cart-before-the-horse to say, oh yeah, we're going to automate all these factories.
It's like, well, in nurseries for plants, for example, in ag, they still take inventory on paper at most nurseries. You can't really have robots doing anything when that's the case, because they don't have any data to train on. And so there's this kind of resilience and coordination problem on the ground for the real economy that needs to catch up before we get to these really sophisticated general-purpose robotics. And on top of that, when you actually look at the numbers, there are real blue-collar labor shortages that need to be solved right now before people are willing to invest, you know, the capex behind some of these robotics. It's something like 450,000 workers a month in the US.

>> Yeah, that was extraordinary. And I keep coming up against this. I think you say that many manual and skilled roles, plumbers, electricians, welders, etc., are indispensable. Maybe not ultimately AI-proof, but certainly in the imaginable future. And then I had a question. This is a little unfair, maybe, but I'm wondering if the kind of quote-unquote investment opportunities there are more of a personal kind, in terms of career choice, rather than a structural kind. Like, think about my kids. They're older now, they're in their 20s, but when they went to high school, the high school was like, "Hey, 95% of our graduates go on to get a college degree." You know, that was the selling point of the high school. College prep high school is literally what they call themselves. And that's fine, but I'm wondering, is that actually what you want to be doing now as a high school? I mean, you're a parent of a young child. Do you start thinking, hmm, maybe the track that we've all been told is the right track isn't necessarily the case? You know, trades are a more viable option.
I don't know, it's a bit of a rambling question, but...

>> No, it's a good one, and I think it's top of mind, right? I think, you know, college enrollment for young men has dropped; people are facing debt there. As knowledge work is the first place where some of these jobs are getting automated away, entry-level knowledge work, you know, a lot of people are saying, well, that's what I went to college for. Like, I went for a marketing degree. And so I do think it raises the question of what these degrees are good for and where people are going to earn a living, which, you know, is still a question for most people. And so one of the areas we are excited to look into is this skilled trade enablement. There are lots of layers to that: the education layer, the training and retention layer, the upskilling or reskilling layer. There's just a lot there. Alongside all of that, you know, there are huge opportunities in data normalization, and supply chain visibility has been a problem since [laughter] probably before I was born, but it just hasn't been solved, before you get to these really sophisticated visions everyone has of general-purpose robots that are, you know, fixing your plumbing.

>> Right. Okay. Well, that's good. So, you know, you're saying, hey, we need a lot more skilled workers; there's a skilled worker shortage. The investment opportunity there is anything that can help with that, and then also creating the data that is the first step in this much more advanced automation. So the third blocker you talked about, you call it agent and human coordination.
And I guess there's coordinating between chatbots and people, or agents and people, and there's also coordinating between agents and agents, chatbots and chatbots, to use a simple example. So give me an example of what you see in terms of the agent-to-human coordination problem, and, from an investor point of view, why that's a blocker to AI.

>> Yeah. And this is probably our most amorphous category. It's kind of a catch-all. We have kind of V1 of all these co-pilots and pilots. If you think of, you know, co-pilots being the tools that humans use, and the pilots being the agents that are doing the actual work, there's just still a bunch of clunkiness to it. You know, if anybody listening has played around with this stuff, probably in the agent-to-human space, your question, agents still lack a lot. There are a lot of trust issues. There are a lot of emotional resonance issues; people don't feel connected to them, which sounds silly, but it's part of the UX that makes them really hard for people to use. And then there's also a complete lack of interoperability, right? So if you want to go between your ChatGPT and Claude and have them interact, they're totally walled gardens. The memory layer between the two gets lost. So if I want to transfer something over, I can't. My memory window closes, for example. And so as you're trying to build more sophisticated workflows with these tools and actually trying to automate away real tasks, you run into this clunkiness that we call coordination.
>> That's a really good point, because, you know, I think about a lot of tech, and this is the tech business model, this may be oversimplifying, but, you know, the idea that, hey, you rush to get scale, you get a ton of users in your network or whatever, but it's not really in your interest for them to go outside that network. You know, if you're on Facebook, we don't want you doing another social network. If you're on a particular search engine, we don't want you leaving. Like, that's part of the business model.

>> Totally.

>> So it sounds like you're saying, which totally makes sense, that's also part of these kind of agent models, ChatGPT, etc. Let's make our agent as good as possible, but we don't really have any interest in making it interoperable.

>> 100%. And that's between models, and that's also between platforms, to your point, right? So, internally, we use a lot of these tools, both to be efficient and to create alpha, but also just to learn. So, you know, we use Anthropic's MCP, we use Claude, things like that. We would love to be able to scrape LinkedIn, right? We can't do that; it's a closed system. So we actually have to do all these interesting workarounds and data dump all of our connections, which they make very hard for you to do, intentionally, because that data is those companies' IP and their monetization path. And it's made so much worse by AI, because AI is necessarily about opening up those context windows that you would normally search to do work. It just is not solved at all, and is intentionally, in some ways, obtuse. And so it's a big problem that we've talked about a lot internally as sort of being on the back of a wave. It's changing so much, and daily you see these hundreds of V1 application layers just becoming obsolete as...

>> Of version one application layers, that's what you mean.
>> Oh yes, sorry, the V1 of these different applications. So, like, you know, OpenAI just launched a shopping app in ChatGPT. I've maybe been pitched 50, you know, shopping agents, and all of a sudden those are all made obsolete, because OpenAI said, oh, we're not going to coordinate with anyone else, we're just going to build it ourselves. And so I just think a lot of these dynamics, of what the agents own, what these different platforms own, which data sources and protocols are going to be open to MCPs versus not, that all needs to get worked out for these systems to be really smooth. Because right now it's very, very clunky to create trust and memory between systems. Then you put that on crack when it's agent to agent, right? Because if there's not a human in the loop, it becomes that much more incumbent to basically have the USB-C for AI, and that just doesn't really exist yet.

>> Do you see any examples? I mean, you gave a couple of examples, I think, in the agent-to-agent coordination section, where you talk about highly personalized agents.

>> Yeah.

>> Can you maybe give an example, or imagine what that might look like?

>> Yeah, there are some interesting ones. Like, we started to look at some companies that are around pitting agents against each other to see which one is right. There's one called EURP. There's one we were looking at super early called Dialectica, where basically, as agents become very good at fulfilling tasks and also have knowledge, if you imagine a world where these agents, which already exist, have identified, you know, certain minerals and things that humans haven't before. If these agents become so smart that their knowledge actually supersedes humans', how do we trust them? How do we know that it's true? How do we know which one is right and which one to use?
There are interesting marketplaces, for example, that have popped up to pit agents against each other. Or agent marketplaces where you can vet your agent, and people can test it and use it and pit it against their agent. That's kind of a new frontier that is still emerging, to try to create better coordination and to separate the signal from the noise in terms of the feedback we're getting from some of these agents. So that's one example of what we've seen, but I still think it's early days. And I do think a lot of this may get solved by some of the bigger players too, as they sort of align to what we call in the paper this big AI superstructure, where, you know, OpenAI talks to Facebook and talks to Google. I don't know if they're all going to be closed gardens forever.

>> So you think at some point they realize, hey, it's in our economic interest to find some way, I don't know if it's an adapter, a plug, that we can use to coordinate, to merge...

>> That's the right word.

>> Yeah.

>> I mean, I think it's, you know, like the free markets. Of course Gemini wants to own all of the memory system, because they already have your Google, they have your Gmail, they have an Android user base. But, you know, Apple doesn't necessarily agree, and maybe Facebook and Meta don't agree. So at some point someone's going to try to make a move. And so I actually think, as the free market moves toward these large businesses that need to show growth on top of these mega, mega, mega investments and valuations that have been garnered, it's going to necessitate people playing together, because otherwise the whole system becomes inoperable.

>> In reality.

>> Yeah.

>> Okay. So maybe we can pivot and talk a little more.
I know speculative is probably not the right word, but, you know, you do say that it's not a typical VC white paper, right? It's part investment thesis, and we've talked about that, but you also say, hey, it's part science fiction, with a little bit of philosophy in there. So let's talk about the science fiction and the philosophy bit. In the second part of the paper, what I think you were trying to do is say, hey, let's think much, much longer term about what the impact on society and the economy is going to be. You call it the Aquarius economy, and you say, well, we don't really know what the specifics are, but we can start to see the broad contours, and then we can walk backwards and say, well, in that world, what are the specific roles of people in society, and, again, what investment opportunities are out there. So can you tell us a little bit about, first of all, just how you decided to do that? Was that always part of the plan, or was it like, hey, there's no other way we can really try to understand the very long-term impact?

>> Yeah, that's a great question, and I can try to walk it back a little. So as we were doing all this research, right, first we were like, okay, everyone's talking about AI and AGI. As we unpacked this, we were like, we actually don't know that it's here. We think we might be at least a capital cycle, maybe two, away from this really happening.

>> So you've said that before, and, I keep interrupting, but that seems quite important to me. When you say a capital cycle or two away, do you mean that we go through this investment cycle, it doesn't get us to full AGI, and we need, I don't know, another five years with another boom, and then another one? Is that what you're talking about?

>> That's exactly what I mean. Yeah. Exactly.
In the same way that if you look at the clean tech energy capital cycles — we had V1, then V2, and I think we're on V3 of people investing in clean energy — the promise of full decarbonization hasn't happened. It's taken a lot of infrastructure, government intervention, deep tech work and research to move the needle. And we see this similarly: we don't believe that all the investment that's gone in to date is enough. We believe we're riding toward a peak, it will retrench a little bit in terms of the markets pricing all this in, and then we still think it has real potential — maybe more than any other technological advancement — to create real value. So we don't think people are going to walk away from investing in it, but we think there may be a bit of a boom, retrenchment, reinvestment, reinvigoration that happens before we actually achieve all that we think it can achieve, given all the blockers we just discussed. So that's exactly what we think. And we're not here to say — especially in our position; I'm just an investor who likes writing — whether that's 5, 10, 15, or 20 years into the future. It's actually not my job to pin that down. And that's how we backdoored into this: how do we describe this to people? Because we don't actually know how long it will take society to fully evolve. If you assume AGI is in some unknown future, and you assume that has real implications for the cost of labor, for the shape of people's work, for how people spend their time — if that's all unknown, how do you start to talk about it in a framework that's usable? Because otherwise it just sounds like a big mystery.
And so to us — or to me, the way my brain works, which is how I wrote it — I was like, okay, like I said, I was a history and lit major, so >> let's write about it in a narrative format. >> Let's write about it like a sci-fi book: just imagine sometime in the future. It's almost like the Star Wars opening, with the big text scrolling across the starry screen — imagine some distant future where AGI has finally arrived. And I think if you frame it that way, one, it's a little less terrifying, because it seems sci-fi. And two, if it's that narrative, it allows, I think, for better extrapolation, because you're distancing yourself from any current biases you have about how the real world operates now. So that's why we did it. We said: let's imagine a sci-fi future, think of a framework for where we think the world might go, and then that becomes our language — the language we use internally to think about how these shifts in culture and the real world will happen. That's how we backed into it. >> That's cool. So tell us about the sketch of that world. I'm not sure if this is the right place to start, so if it's not, let me know — you talk about two hierarchies: there's the technocore and there's the hegemony. If that's the right place to start, maybe explain what those two things are. >> Yeah. So the way we think about it is, like I said: sometime in the future, AGI is fully achieved. We're calling this the Aquarius economy, which was a nod to the fact that astrologically — and astronomically — we are shifting out of the Piscean age into the age of Aquarius.
So it symbolizes that there's a new era of how the world works, and we think about it in terms of these big superstructures. Assume AI fully comes online. Assume we've automated away a lot of work. Then we have these two superstructures. The first we called the technocore — again, a little bit of a nod to sci-fi, to Hyperion, if you've read it. The technocore is the digital superstructure running AGI. It's the powerful AI overlords running the way AGI moves through our work. It's the Jetsons' robot at home. It's what's controlling all of our devices and modalities. And the other is the hegemony, which is basically institutionalized humans — think of that as the corporate and political dominant elite: families and resource owners. And those two, the technocore and the hegemony, have a serious interplay; they rely on each other to continue, right? It's like the Elon Musks and the Jeff Bezoses — elite families on crack — owning and centralizing the power of AGI and running society. So we see a massive consolidation of power in a controlling techno-state, is how I would describe it. That's how we set the stage. >> Okay. And then within that techno-state, you say there are some key, what you call, outlier groups. And you actually say you could think of them as much as a state of mind as physical groups. Can you identify who those groups are and what their role is in society? >> Yeah.
So the way we thought about this is: okay, if you assume that society is the hegemony, and the technocore is this superstructure that wraps it, then there are folks who exist within this where the core differentiator — and this is the thesis at the end for us, and why we started talking about Aquarius — is this idea of what we call human agency, or human emotion. This thing that is uniquely human is what these outlier groups all have in common: their expression of human agency. In some way they're breaking out of the norm of the hegemony because they're expressing their agency, which is really where we see humans' most survivable future — in this extreme expression of it. And where people don't express it, we actually think people will flail a lot, and that will create some potential opportunities for investment but also some problems in society. So the groups we've identified: the first we call the nomads, folks who reject this centralization. They're very much about human connection off the grid. They're nomadic, obviously — folks who aren't tied to one place. We almost think of them as the new-age hippies: they're thinking about the earth, connectedness to the earth, and rejection of the technocore — of being fully sucked into the matrix, essentially. >> Why would the technocore allow those people to exist? [laughter] You know what I mean? Is it just that, ah, they're too annoying to get rid of? Or do the uber-powerful families and corporations — I suppose they need people to continue to exist and reproduce, otherwise they don't... >> Totally. >> What is their power? Maybe they have some interest in allowing dissent as long as it doesn't get too serious. >> That's kind of the way we see it, right?
It's dissent, but it's not too serious, and it's, quote-unquote, off the grid — so it's no harm, no foul. If I'm nomadic and I'm building a totally off-the-grid living community in Palm Desert, and we espouse the idea that we're self-sufficient, self-organizing, and intentionally not connecting into AI, then maybe it's just overlooked, >> right? >> So that's, I guess, a real-world example of it. And again, I don't think these are meant to be taken literally, but if that were true, we think there are enough people in the world — even if you look at folks on, say, Bluesky, and some of these outcroppings of digital communities where people say, we want to preserve the right to independence, we want people to preserve their data and privacy — there's enough of that ethos that we think it will exist in some form in the future. But you're right: if there's a nomad uprising, maybe the technocore will squash them. The second group we talk about are the gurus — and maybe "gurus" isn't the best word; it can have a negative connotation, like the Tony Robbinses of the world, which I don't know is the best comparison. But it's the people who we think are the individuals with the most authentic human spirit and relational influence in society. Think of these as the artists, healers, athletes — superhumans in the sense of truly living their most embodied, spiritual selves. We see those people actually being elevated, with people looking to them for inspiration. They're proof of humanity, quote-unquote, and they're producing things that are authentic — not AI-generated, not AI slop.
There's going to be a premium for what those people produce, because it comes from this very human-centric place, and it's the high art of that. So we think those folks will be a special outcropping and have a unique place. And then the third group we'd call out, which we think may be as big, if not bigger — we call them the incels, which is somewhat tongue-in-cheek toward what people talk about now as incels. But I think it's also potentially the right word, because it's really people who are spiritually, socially, and physically disconnected from the rest of society. These are folks who are kind of victims of the hegemony and the technocore, and of the techno-nihilism that sets in, in some future where they're just really isolated. And again, for a lot of this stuff, we already see tendrils in modern life. But as whole generations of people come up AI-native — as AI and AGI and this elite techno ruling oligarchy take over — there are real implications for society, for young people especially, and we think that will cause a lot of peril, and some implications that will hopefully create some opportunities to help, too. >> How do you go from that kind of sketch of a world to thinking about investment opportunities? It almost sounds like you're going from a very philosophical perspective to something very concrete. Maybe give one or two examples of how that kind of thinking could lead to an investment idea. And are these investment ideas one should be thinking about acting on now, or is it more like, hey, keep it in the back of your mind and wait to see how things develop? >> Yeah, it's a good question.
And if I haven't lost anyone yet — if you're following me through — maybe I'll answer your first question first. We think about this as a shared language internally. As we see opportunities, we're like: oh, that's Aquarius-coded. Oh, that sounds like a guru platform. Oh, that sounds like nomadic financing. We've started to use it as: okay, if we believe this is our future end state, these opportunities are reflective of that future. So that's really how we use these. We don't actually think we're there tomorrow, but even today we see opportunities that look and feel and rhyme with that potential future, and that monetize some of the opportunities with these outlier groups. So, a couple of examples. One: like we said, we think this hyper-digital connection actually leads to hyper-isolation for some of these incels. Some of the things we've already seen are interesting therapy models — hybrid human-plus-AI caregiving models. We've seen new third-space cooperatives to help people connect in person, in a way that's been lost. We've seen things like sensory gyms, for example, where people come in and learn to touch and feel and be and have contact. These are things being pitched to us today in some form or fashion, and we see that as early innings. Some of it may be a little early, but some of it is spot-on. Another example: we think a lot about what we call whisper networks, right? It's like — >> Yeah. >> Oh, go ahead. >> Sorry, no — that was the next question I was going to ask about; I just blurted it out. But yeah, because you mentioned whisper networks early on in the conversation.
So I'm curious to hear what those are and why you think they're so important. >> Yeah. We think that as we increasingly live these digital lives, there's what's available to us — and therefore to AI — online, and then there are things that our digital presence cannot capture, and we call those whisper networks. They're networks of people, relationships, connections that happen through serendipity, through recommendation, through human-to-human contact. That's very hard for AI to infiltrate. Maybe over time you're wearing Meta's smart glasses and they have a parser that integrates all those real-world encounters, but we actually think those whisper networks — and the way the real world operates, as far as human-to-human connection — are going to carry a premium for a while. So we think about what that looks like in a future state: how do you take advantage of the fact that, as more and more of the digital gets commoditized, these whisper networks become increasingly important?
We've already seen platforms like this get pitched to us — for example, a marketplace for introductions. There's a B2B company that got pitched to us where you can upload your connections; B2B companies looking for those LinkedIn connections can talk to you, and you get a bounty for connecting those companies to the right champion internally. So that's one way to monetize whisper networks, but we see other ways too. There may be these kinds of — >> Isn't that, in some sense, antithetical to what they're trying to do? >> Yes. >> I mean, the serendipity of human-to-human contact, and then okay, let's create a digital platform for it? >> Yeah, in some senses. If you play that out — if you become known as the person who's monetizing and shilling their network, are you that valuable a connector over time? I'm not sure. But again, I think it's early innings. That type of platform would not have been as interesting, I think, five years ago, whereas now I actually think: okay, it is kind of a saturated market, and if we see all this playing out, the ability to leverage those networks is interesting. So the way we think about it is a little different. Something that could be interesting could be, say, an encrypted reputation market — maybe it's encrypted and anonymous, but your reputation is known, and there can be whispers about you, as a kind of social proof. Maybe there's what we call a guru-as-a-service platform, where you, as someone expressing your own authentic human knowledge, can rent that out and itemize it, because that's something that's really hard to replicate. So, things like that. Again, within this framework you can start to build a language around it and think about it.
It doesn't solve the underwriting, but for us it helps us think about the interesting ways that relationships in society and work will really change. >> I want to wrap up in the last few minutes here, and this is going to be maybe a bit of a downer given what we've just been talking about. You had a quote in your paper from Ethan Mollick, who's an expert on AI at the University of Pennsylvania — he's written a book about AI — and he says: assume this is the worst AI you will ever use. And I have to say that got me a little riled up. I kept wondering, why is this bugging me so much? And I think what I eventually realized is that technically, I'm sure he's right — I'm sure this is the least technically sophisticated AI model you'll ever use. But is it really going to be the worst end-user experience? Because we've seen this over and over again with technology that's great at first, attracts a large user base, and then just becomes extractive. And I don't mean that in a political way so much as: yeah, we're going to make money out of this thing now, and the quality of the actual product goes down. Google search is terrible now compared to what it used to be. A lot of these social networks aren't very good anymore — they certainly don't connect to each other. And then there's automated customer service, which just pushes the work down onto the customer instead of taking it away from them. I used to think this was me being an old man ranting, but the more guests I've had on the show — including very well-known economists; we had Diane Coyle talking about this — the time tax is real. It's a real thing. The work is being shifted from companies to their users. So anyway, that's my little mini-rant.
But my question is: can that happen with AI applications? Could this be the best ChatGPT you're ever going to use? I don't know — just playing devil's advocate a little. Is that something we have to concern ourselves with? >> Yeah, I think your intuition is probably right. My sense of that quote from Ethan Mollick is that he was really thinking about agents on the scale of AGI and what they can do, versus the AI slop you see online — they're going to get better. I feel like that was the intent of the quote, and I somewhat believe that's true. We've already experienced that: even if you look at Nano Banana and Gemini's photo rendering, the model updates have been immense in terms of how photorealistic they are. So part of me feels like, okay, Ethan, you have a point. But to your point, the second thing I would say is: yes — how could it get worse? [laughter] Because the amount of AI slop we see online is already so, so bad. >> And when you say AI slop? >> Oh yeah — AI-generated content being posted all over the place, because people have access to these tools and they're just generating, generating, generating. There are a couple of stats I've seen saying that AI content has recently surpassed human content on social media networks. So whether it's your ChatGPT LinkedIn post, or your video of random anthropomorphized vegetables eating things as if they were human, or Moo Deng videos of hippos — I've seen so much random stuff. Any cat video online now, I'm like, oh, that's AI. That's what I mean by slop: content for content's sake being put out that's recursive, non-original, and just volume.
It has infested these social networks to the point where you now hear some of them putting controls on it — TikTok is putting labels on things and actually tamping down on AI content, because it's hurting engagement. So the enshittification of these platforms is almost happening via the users themselves. And that was another point I was going to make: the slop wave, I actually think, is creating a real moment. But to your point, that's a different issue from: okay, assume we have all these tools for free now, and we get to use them and generate AI slop — what happens when the literal gajillions of dollars OpenAI has invested need to start turning profitable after their trillion-dollar IPO? There's going to be a point — which is when enshittification happens — like when Google needed to monetize, when Meta needed to monetize to prove value to its shareholders: they start to turn on their users and make the platform work for advertisers. So there's that looming in the future too. And part of me thinks we're still a long way off, because these platforms are so subsidized — they've raised so much money that, like Uber, they can give us discounted rides for a long time. >> Well, hey, that's a good place to wrap up. This was really fascinating and thought-provoking, and I appreciate you sharing your work and taking the time to talk to us. So, thanks so much. >> No, thank you, Kevin. I'm honored to be on here, and thanks for giving me a place to share it. >> Okay. So, you can get a copy of Aubre's white paper on her Substack, which is called I'd Buy It, and also on the Alpaca VC website. So go out and get a copy.
It's a fun read — challenging — and I think you can tell from the conversation that a lot of these topics are not yet being discussed enough in mainstream media. So, from all of us here at Top Traders Unplugged, thanks for listening, and we'll see you next time. >> Thanks for listening to Top Traders Unplugged. If you feel you learned something of value from today's episode, the best way to stay updated is to head over to iTunes and subscribe to the show, so you'll be sure to get all the new episodes as they're released. We have some amazing guests lined up for you. And to ensure our show continues to grow, please leave us an honest rating and review on iTunes. It only takes a minute, and it's the best way to show us you love the podcast. We'll see you next time on Top Traders Unplugged.