'I'm Expecting A Bubble Burst': Markets Could Face Reality Check In 2026 Warns Harvard Futurist
Summary
Market Outlook: The guest expects an AI-led correction or bubble burst due to overhype, limited productivity gains, and weak evidence of broad-based ROI.
Data Centers: US growth is heavily concentrated in data center and information-processing investment, with concerns over circular financing among big tech firms.
AI Theme: Broad AI and Large Language Models are transforming workflows but remain unreliable, with failures in enterprise pilot projects and limited net job replacement.
Robotics/Physical AI: Humanoid robotics and physical AI face high costs, battery limits, and reliance on human teleoperation, suggesting slower timelines for real-world autonomy.
Search Disruption: Chatbots could replace traditional search; Alphabet’s GOOGL should focus on safer, incremental improvements across YouTube, Workspace, and Drive.
Education Use Cases: AI tutors and tools can enhance learning, translation, and career placement, but human teachers and in-person interaction remain essential.
Regulatory Landscape: Liability, minors’ safety, hallucinations, and IP issues drive tighter guardrails; Europe’s caution on autonomous systems underscores unresolved responsibility questions.
Investment Implications: Favor specialized, domain-specific AI over generic LLMs, while monitoring risks from AI slop, overinvestment, and potential demand air pockets if expectations reset.
Transcript
I am expecting a bubble burst, or at least some form of correction. We're not seeing value. We don't see artificial intelligence delivering the change in organizations and in the way we work, and we also don't see such spikes in productivity. We are heavily relying on this technology. You cannot use ChatGPT to invest in the stock market, right? You cannot consider it to be a reliable source of information or an oracle. We have placed all our bets on artificial intelligence, and if we underdeliver, I think next year it might be an issue.
AI has been touted by some as probably more important to changing society than the internet. Is it the next industrial revolution? How are our lives changed forever? We're talking about how AI will impact the labor market and society at large into the future with our next guest, Alexandra Shagalinska. She is a senior research associate at the Harvard Law School Center for Labor and a Just Economy. She's a futurist, and she'll be talking to us about AI and our lives in the future. Welcome to the show, Alexandra. Good to host you.
>> Thank you so much for the invitation.
>> I want to start by talking about this warning, or I guess not a warning but a prognostication, by Bill Gates, who recently said that AI will take over most jobs and leave humans working just two days a week. That sounds great, by the way. Wouldn't we all like to work two days a week? Billionaire Bill Gates forecast that technological advances, especially in artificial intelligence, will probably result in a shortened two-day work week. He revealed his prediction in March. He explained that while current AI lacks specialized knowledge, with human experts like physicians and educators still essential, the coming decade could bring dramatic shifts. It's kind of profound, because it solves all of these specific problems, like not having enough doctors or mental health professionals. But it brings with it so much change. What will jobs be like?
Should we just work two or three days per week? I love the way it'll drive innovation forward, but I think it's a little bit unknown. How would you answer these questions? What will jobs be like, and should we just work two or three days per week, Alexandra?
>> Well, I would say that the promise of working for two or three days is actually really great, but I don't think that's going to happen anytime soon. The data that we are collecting at Harvard actually suggests that AI's impact on the world of work is going to be much more complex, uneven, and probably not so positive. What we see already is definitely an impact on the so-called entry-level jobs. And here there is an issue: people would rather hire ChatGPT than an intern, which I think is a problem, because if you want a senior at a certain job, you should also have a junior who is going to learn to become a senior. So that is actually quite an issue. But in other areas, this impact I think is definitely overhyped. There was a study recently by Yale that stated quite clearly that the impact of artificial intelligence on jobs at large is quite low, close to zero. So maybe in some professions related to IT you do see a change in how people work, but the replacement of a job is actually not happening. And to that I would also add a recent study by MIT that stated that pilot projects at organizations focused on implementing artificial intelligence are mostly failures; 95% of them fail. So maybe if we speculate about the long-term future, we might see artificial intelligence stepping into some professions, but current-level LLMs are definitely not there to replace jobs, maybe some tasks, and I don't think that's going to change anytime soon. So we cannot hope for a two-day work week in the next five years or so.
Okay, I want to talk in detail about what kinds of jobs will be impacted first, but let's talk about one of the biggest themes within the AI sphere: artificial general intelligence. MIT Technology Review just last week published an article called "How AGI became the most consequential conspiracy theory of our time." I hear it's close: two years, five years, maybe next year. And I hear it's going to change everything. It will cure disease, save the planet, usher in an age of abundance. It will solve our biggest problems in ways we cannot imagine. The article basically goes on to debunk some of these claims. What is your analysis of this trend of AGI? Will we actually get it in the next 2, 5, 10 years? What would it look like when we finally achieve it? Some argue it's even here. So maybe you can help us discuss what's actually happening on the ground.
>> Well, I think the problem with this very term is that it became very blurry recently. AGI used to mean a slightly different thing. If you look at the textbooks from 10 years ago, AGI meant a technology that can mimic the whole spectrum of human intelligence and successfully emulate it, including bodily intelligence, perceptive intelligence, and many other things: affective intelligence, emotional intelligence. Clearly that's not happening with LLMs. But I think big tech is trying to propose a different vision of what AGI is, and what they're trying to propose is a technology that can be successful at solving some tasks that we generally pay for, that have an economic value. In that way, you could certainly argue that AGI is already here in some form, but this is definitely not comparable with the vision of AGI that was proposed prior. And I think when somebody is referring to AGI, and the next wave being superintelligence, quite often we think of a technology that is acting on a voluntary basis, that is intentional, that is even conscious, and that is definitely not happening.
We have LLMs that can be good at certain tasks, but they're not going to be good at other tasks. And they work in a surprisingly weird way, in the sense that when you interact with them, sometimes they're very good at what they're doing, and sometimes a banal question pops up and they cannot answer it. I think this is something we experience on a daily basis, and that's certainly not AGI as we used to formulate it. So I think that's very important to say, and in that way I also agree with this article. There was a promise of AGI; that word was very speculative, and it was used to kind of overhype artificial intelligence. But what we have right now is many companies actually backpedaling on that. I don't hear OpenAI talking about AGI as much as they used to, for instance, last year.
>> You're right that, forget AGI, even large language models right now are not perfect simulations of human conversation. Sometimes you say one thing and it doesn't understand you. My counterargument is that I talk to some humans in my personal life and I have the same problem, so I don't think machines are that far behind real life. But to your point, what needs to develop for a system to be universally smarter than maybe the smartest humans? It's not inconceivable to think that maybe one day we'll get there; at least that's what science fiction makes us believe. But how do we bridge the gap between science fiction and reality from here?
>> When I went to Davos, to the economic forum, this year, we had plenty of conversations about physical AI, and it's actually a very big topic in artificial intelligence: that you need a body to collect experiences and to learn from those experiences, and then further on operate like a human. Obviously, we do not know what would happen if we had that perfect robot that would have both an LLM/agentic mind and, on top of that, a body that can experience reality, as noisy as it is.
But many people were saying that, well, we were trying to approach this problem of superintelligence, or even AGI, from the perspective of language and language acquisition, and this is something that LLMs are very good at. But in fact our intelligence is a very adaptive system that developed first because of our interactions with the physical world. So maybe we should invert that and think about physical artificial intelligence. And you see that, right? Currently there are some companies, for sure in China, also in the US, and to some extent in Europe, that are trying to build better physical systems that could integrate what LLMs can do and what they're good at with physical capabilities in the real world. That is the most challenging thing for artificial intelligence, because our world is constantly changing, and adapting to it is actually quite difficult. So if AI can overcome that and become good at it, who knows what could happen next.
>> This leads back to our initial discussion about the labor force, and I want to know how exactly our labor force will be impacted, or which areas of the labor force will be impacted first. Futurists have been speculating about machines and our labor force for decades. This was a Time article that my team helped me pull up, from 1965, and I want to bring this particular paragraph to your attention. It cited how men such as IBM economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week (I guess that's what Bill Gates was citing), perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the workforce will be employed, and warn that the whole concept of people as producers of goods and services will become obsolete as automation advances.
And if we ever get there, this is not like the Industrial Revolution, where we had farmers move into the cities because there was factory work and farms could be automated. We're talking about every single job being automated, which then brings us to existential questions, like: what is the purpose of society if machines can just do everything for us? Can you help us foresee this future?
>> Well, I hate to be, maybe not a pessimist, but a realist here, but I'm just afraid that more technology does not add up to more leisure for us. We've experienced various waves of technology already, and it seems to me that we are working much more than we used to. Our workday is so widely stretched right now because of the fact that we can constantly be present online and work using various systems. There's this expectation that our productivity will grow, and this is also something that AI people talk about so much: how our work will be enhanced and we'll become even more productive. And that is a strong contradiction to this vision that we will become obsolete, that we will not be needed anymore. So for me, the next decade is definitely not about that, and talking about technological unemployment is sort of a replacement topic to me, because there are many other issues I have with artificial intelligence's impact on society, on minors: AI slop, deepfakes. These are very current topics that we should discuss, taking into consideration what this technology offers and what's possible with AI today.
So in terms of understanding the job market, like I mentioned before, there's no evidence at this point, in the data we're trying to collect here at Harvard and at many other institutions, that this impact is so large. You do see areas, particularly IT, where that impact is clear, and then you also have sectors that are benefiting from this technology, like marketing, where you have a lot of content development and content production, where artificial intelligence can really speed things up. If we take the avenue of developing robots that could serve as systems replacing us in professions that require certain manual expertise, then that's probably going to be a very different trajectory. But I do want to say: building robots is essentially very, very cumbersome. It's also very costly. We don't have a solution for the battery life of these robots. Something that is never mentioned in those advertisements showing us humanoid robots is that they usually work for half an hour or so and not longer. So we have plenty of issues, related to data, related to batteries like I mentioned, that are preventing those robots from entering our lives very quickly. Robotics was always the first big important topic in AI, right? When you think about different disciplines of AI, everybody talked about robots, and somehow we never had a breakthrough in robotics. So I'm sort of more skeptical when it comes to that, and I think, well, maybe we can expect some breakthroughs, but if we don't see them, the march of robotics is not going to happen anytime soon. And when it comes to LLMs, we're going to work differently, I absolutely agree with that, but whether we will work less, that is a very big question for me.
>> Let's talk about robots now.
There's a big race amongst tech companies to be the first to put a humanoid robot in people's homes in some capacity. So let me just bring up one example that has been making the rounds on the internet: the NEO robot from 1X. I'll just show you this one clip. It's cooking. So, take a look.
We want to push and spread this out into a nice big ring for pizza. You want to give it a try?
>> Yeah, let's do it. I'm just going to slowly kind of get my thumb in there real quick.
>> You don't want to pinch too hard in one certain area. You want to be gentle. See, like this? Yeah. You can even go like that. Make your own method. I like that. Stretch it out that way. It's a little sticky, though. Can I get more flour or something?
>> I don't know if flour is the problem right now. See this spot right here?
>> You stretched that out too much. We're going to take it, we're going to pinch it back together, and we'll hit it with a little bit of flour. You see how much better this already looks?
>> Let's not pick it up again. I think that's where I went wrong. I'm going to start stretching it in my head. I'm pretending I'm a famous pizza chef from Italy right now.
>> There you go.
>> Okay. It's very good at conversation. It's not good at making a pizza. It also costs $20,000, and I can't comment on the battery life; I don't know. But this is happening. Can you just comment on what you just saw?
>> Yeah. Well, I think we have to add one more thing, although I do admire the campaign here, the involvement of so many content creators and influencers, because it's a very effective campaign. We've all seen that robot, probably somewhere on Instagram or elsewhere. So that is definitely a very successful campaign.
But I think the part that is often omitted is the fact that this robot is actually steered by two or three people who are wearing VR glasses to make sure that it does the things it's supposed to do and not some other weird things. I will say, however, I saw a couple of very funny videos of, well, maybe not this robot, but some other robots similar to it, that are actually failing miserably at the tasks they're supposed to perform. So on the one hand you have here a system that is actually quite good at communication, like you said, and we know that LLMs and various agentic systems can do that. On the other hand, when it comes to this manual work, you can see that there's still so much more to do, and then there is also a human controlling this system from behind. So yeah, I think I will be persuaded when I see a fully autonomous system, and then I'm ready to have a meaningful discussion about what the degree of replacement could be. But currently, since it's still actually human labor behind the robot labor, I'm sort of, again, worried that what we're seeing here is not what we're getting.
>> So now the robot needs a human input to learn. How much learning does it need to do before it can be fully autonomous? And by the way, I don't think anyone's going to want to buy a robot that's controlled by somebody else. I wouldn't personally; I can't speak for you or anybody else, but you're basically inviting somebody to virtually be in your house, and you don't know who that person is. But the question is: if that step is not done, then the machine can't learn, or is there another way that it can learn?
Well, I think this learning process is actually something that we don't know many things about, in the sense that when you compare it to, for instance, a child, the amount of visual recording that a small child is doing in its early phase of life is pretty enormous, and here we really don't know what the learning curve could be. Let's assume that this robot collects materials, video materials and audio materials and conversations, for five or six months. Will that translate into a learning curve where it's actually going to master certain skills that are important in those daily errands? I honestly don't know. This is something we will have to see. But I will say that I understand to some extent why this turn to humanoid robotics is happening, and that is because many people were actually quite upset with developments in AI recently. I refer here to the AI slop, to the deepfakes that are penetrating the internet, but also to certain concerns about, well, maybe detaching meaning from knowledge work and analytical work, because of the fact that AI can generate lots of reports and lots of text, etc. So some people said: this is not what we expect from artificial intelligence; what we expect is real help at daily tasks. And I think it is a good time for those who are betting on robotics, because there is a general concern over LLMs, and people have sort of expressed that they would want something different from artificial intelligence. So I'm expecting a big wave of interest in robotics, but also investments as well.
>> A sector using AI that we need to discuss is education. I'll just play this in the background as we talk about it. This is just one example; you see a lot of these on the internet. This is a Chinese school, and it's showing the use of robotics and AI agents. Actually, I don't know if that's a real person or an AI agent; I can't tell. But certainly they're using a lot of tech.
You can't really tell these days. There's a lot of technology in this film, and I know it's a showcase, but tell us about how AI is being used in the field of education at all levels right now, and ultimately, should we have AI teachers in our children's school classrooms in the future, Alexandra?
>> Well, I would say that a very interesting idea is AI tutors that can help out in the educational process. And I can tell you that I'm also involved in a project called EU on air, which embraces 11 different universities in Europe, but we are also collaborating with Harvard here, to develop an AI-based system that could enhance the process of education at the university level. What I mean by that is, for instance, that you obviously go to school or to your university, you attend a lecture, and perhaps you did not understand everything from that lecture, and you want a tutor to help you out. I think artificial intelligence can be really good at such a task. We can also think about systems that could enhance the process of career placement for students, and here artificial intelligence could be a really meaningful tool. So I think there is plenty of space where you can add artificial intelligence to the educational process: for instance, to simplify content, to translate it, to make it more entertaining, or to introduce services that would otherwise not be available for students. Because, when it comes to the workforce, there can obviously be a consultant who's going to help with career placement, but this person usually works for eight hours a day and quite often does not have access to all the databases and cannot fully analyze the proper placement for you as a student in the future. So I think we can find areas where artificial intelligence can really make a meaningful change.
We've heard about the alpha schools in California, but there is still a human teacher there. So there can be AI-based tutoring, or self-paced learning with artificial intelligence as asynchronous learning: you are, let's say, exposed to a lecture, then you do some exercises with artificial intelligence, you're being graded by AI, but afterwards there is the part where you are socializing, and there is a teacher, and there are real humans next to you. And this is what works; those alpha schools have been showing that, and what we are trying to pursue with EU on air is a similar concept. AI should be an enhancement, an additional layer to the educational process, but not a replacement of the educational process. And I will add this: we had an experience of self-paced learning and online learning during Covid, and I think for the majority of students that was not a great time. Students want to interact with others. They want to hear their perspectives. They want to be engaged in a classroom experience, an exchange of ideas, and I think this is not something that you can easily replace with any technology, even the most savvy ones. So I would not disregard that, and I would say the future of education is with AI, but not in a direction where AI is replacing education as we knew it.
>> Japan is testing an artificial city where people are supposed to live in harmony with automation. This is from a few years ago: a mass human experiment inside Japan's futuristic $15 billion smart city, funded by Toyota. It's called Woven City, and the purpose here is, to quote, "there will be pilot experiments and innovations that will propel society forward." The specific technology that will be tested will not be disclosed, but it will be something along the lines of personal mobility, automated driving, robotics, and AI. They just announced in September this year that they launched the city.
And we're going to keep tabs on the development, I guess. What would a city like this look like, where we have integration of robotics and AI with humans, built from the ground up? How would that be different from, let's say, a normal city today?
>> Honestly, it's very hard for me to imagine, because there is so much possibility for friction, I think, when it comes to interaction of humans and AI. I do remember a recent study published in Nature where doctors were supposed to work together with artificial intelligence on diagnosing patients, and it turned out that they were either rebelling against it or over-relying on artificial intelligence, and there was kind of no midway there. So I feel like we are not really prepared for experiments like that. But I will say one thing: I think bringing together such projects that could simulate a good future with artificial intelligence, or AI-based, some sort of multi-agent spaces where we could test whether certain policies work or not, is worth pursuing. We hear so much about UBI. What if we did an experiment that would involve an artificial society where UBI is actually rolled out for the whole of that society? I think it's a fantastic research area, and we could find out so much if we were able to emulate different personalities, different humans, different walks of life, and how UBI or another policy could impact their livelihoods. I would not stop there; I think AI is a great area to experiment with to figure out what could work in a society. Obviously, you have to treat it with certain caution, because that's not going to be fully indicative of what we would do. But nonetheless, doing these sorts of experiments can be a very interesting thing.
I'm just not sure about combining real humans with that type of AI simulation, because I think the possibility of friction is generally quite large, and maybe that's not something we would like to pursue at this nascent phase of the development of artificial intelligence.
>> So in this hypothetical scenario, could you see the possibility of, let's say, machines producing most of the output in the economy, UBI being rolled out, everybody just being given income, and then we consume what the machines make? Is that what that looks like?
>> We can definitely simulate something like that. We have possibilities with AI, with deep learning, to simulate a scenario of that kind, but also to simulate other scenarios, more utopian, more dystopian, different kinds of visions for the future, and then try to find out what happens. I think it's actually great. I am hearing so much about AI-based surveys, where you actually don't survey real people but you survey an artificial-intelligence multi-agent society of AIs that are responding to questions, and it can be a meaningful addendum to what we do as researchers to find out about reality, but also to project some vision of reality. So I'm not sure about a real synthetic city, but an artificial city that is a simulation, some sort of platform where we find out what happens when we, for instance, deploy UBI, is definitely a very interesting usage of artificial intelligence.
>> We don't even have to look that far into the future. How much is AI and the development of AI contributing to economic growth in the US right now?
>> Well, I think artificial intelligence is probably currently the sole reason for enthusiasm in the American economy, as bad as it sounds. We are hearing that around 92% of US GDP growth in the first quarter of 2025 came from investment in data centers and information-processing equipment or software.
So that shows you that we are heavily relying on this technology. I think this is one of the reasons why so many people talk about the AI bubble right now and are worried about its burst: because we have placed all our bets on artificial intelligence, and if we underdeliver, I think next year it might be an issue.
>> Well, this has been circulating on the internet, to echo your point. The author of this particular post calls it the scariest chart in the world (maybe he's quoting somebody else, I'm not sure), but it shows something that hasn't happened in a few decades, which is the divergence between the S&P 500, the stock market, and job openings in the economy, and he marked a vertical line when ChatGPT released. I don't think he's insinuating that ChatGPT took all our jobs; we know it hasn't. But it does strike a very interesting correlation. Is there any sort of relationship here at all, Alexandra?
>> Well, I would say that the relationship is the following: the job market is not moving as fast as the promise of artificial intelligence, and in other areas we have not seen that much development, and we have not seen so many innovations introduced. There's just AI, and then there is the rest of the world. Obviously the reasons for the current state of the job market in the United States are quite polycausal: you can mention tariffs, you can mention many other things, the geopolitical situation. But then on the other hand you have this big pillar of enthusiasm, and that is artificial intelligence. And I think that's why you see this divergence. It's not that AI is impacting the economy; it's that we talk about artificial intelligence so much, and it's so overhyped, that it's just moving rapidly. The investments are flooding into the AI space, and the rest of the job market is just moving quite slowly and is detached from it to some extent.
>> I talked to a fund manager earlier today, and he showed me this chart. This is the circular AI financing that people are talking about, that's happening right now: big tech companies are investing in each other, either through infrastructure or capital expenditure, or outright buying equity. Can you explain from a technologist's point of view, not purely a financial one, why this may be happening and why this is necessary? Have you seen anything like this before?
>> Well, I think some people do make a reference to the dot-com bubble, but it's yet a different phenomenon. What we see is a certain batch of money just circulating among those big tech companies, and they are usually doing investments together. They rely on one another, and this money is just kind of flowing inside that little bubble. So I would think that this is concerning. We have seen big promises at the beginning of this year: Oracle, together with OpenAI and, I believe, SoftBank, said that they would place $500 billion on the market to build data centers, something we have just mentioned, and then that money goes to other tech providers. So this is sort of a closed bubble, as you mentioned, with this circular spending inside, and I think this should be a point of concern for some of us. So, if you ask me, I am expecting a bubble burst, or at least some form of correction, because this overhype is definitely already visible to some people. We're not seeing value; we don't see artificial intelligence delivering the change in organizations and in the way we work, and we also don't see such spikes in productivity. If your productivity is rising by 10%, that is not so meaningful; it is definitely not comparable with this ginormous promise of artificial intelligence.
So I am expecting a correction in the coming months or a year.
>> And these companies, when you think about it, are actually competitors of each other.
>> Yep.
>> They're collaborating, they're sharing ideas, they're investing in each other. People are talking about how the search engine is going to change. I'll let you comment on ChatGPT's latest Atlas launch, and just overall the introduction of chatbots that some people say will spell the end of the traditional Google search engine. Is that true? Will we use the internet completely differently in the next year or two?
>> I fully agree with that, and I think that is very likely to happen. ChatGPT and other systems similar to it, say Gemini or others, are in fact developing the capabilities of a search engine. Maybe "search engine" will become an obsolete word sometime from now; kids won't know what a search engine even is, because they will just interact with AI to find out about the world and to look up information. I think it's very, very likely. So the internet as we know it, with applications and with the search engine, is something that is going to be a relic quite soon, and ChatGPT and others are going to take over, if you can already shop at Walmart via ChatGPT or listen to music on Spotify, but again via ChatGPT. That means, obviously, that ChatGPT is taking over the functions of other services, of other applications, and kind of swallowing the internet as a whole. For me, actually, the main point of concern is that if you're building this new information architecture, the slop is a concerning problem here.
Because if artificial intelligence eats up the internet, and now a large proportion of text on the internet, but also content online, is just deepfakes, then that means the quality of what we will get from systems like ChatGPT is actually going to be much lower soon, because they're going to be simply eating up this deepfake slop and then giving it back to us and multiplying it. So for me that is actually quite concerning, and I'm hoping that some of these companies will reconsider: if they're adding more slop to what we already see, what they will give back to us is not going to be really meaningful and of good quality; instead it's going to be of poorer quality than the internet, and the internet was definitely far from perfect. >> So let's say you were working in the product development division at Google. How are you responding to ChatGPT and other similar large language models being released in the commercial space? What is your priority then? >> I would say that there are different avenues. You can try and build meaningful change on top of services that were deployed prior and that have been working well. I can absolutely see how our interaction with something like YouTube or Workspace or Drive could be different and better thanks to artificial intelligence, and this is something that I think users would totally appreciate. So I would focus on more incremental change, but change that can lead to a really better service for all of us as users. And I would also think hard about how to attack the main problems that we experience with artificial intelligence in that agentic, assistive form that we have available today. AI is constantly agreeing with us. It's hallucinating. It's being this people pleaser. And I think this is not an avenue forward, right?
If you want a really good system, it should be able to say "I don't know this" or "I'm uncertain" or "let's double-check that," and this is not something that's happening. Artificial intelligence is quite often simulating expertise. So I would really try and attack those biggest problems that we see with artificial intelligence, and try and deliver real value, so that companies that have to be compliant with various regulations, that have to be responsible for what they're doing, feel that they can safely use this technology and not be worried that it's going to be a major problem for them because it hallucinated some sort of results or flipped data. So I would really try and focus on that, instead of advancing more platforms that can generate infinite slop. >> Well, how are artificial intelligence legal frameworks going to change the future? This just came in two days ago: ChatGPT is no longer giving financial, legal, or medical advice. It's now being classified as an education tool rather than a consultant. So let's say I go on ChatGPT and I ask it for some legal advice. I do it. I get in trouble. Who do I sue? Is it Sam Altman's fault? And in the future it's going to extend beyond ChatGPT. Maybe at some point we'll have robots thinking autonomously. Then whose fault is it if I do something that the robot advised me to do? What does my lawyer do in that case? Have you thought about this issue? >> It's unresolved. Listen, I'm from Europe, and in Europe we don't have autonomous vehicles. We haven't even attempted to introduce autonomous vehicles to our cities. The reason for that is that it's an unresolved matter: exactly this issue of responsibility. If there is an accident caused by such an autonomous vehicle, what happens next? Right.
So Europe decided to just say, okay, we are not even trying to roll that out, because we don't know the answers to these crucial questions. And I think currently, if you are the one prompting, it's your responsibility to some extent, right? You are taking over that responsibility as a user, and if your content, your report, is infused with hallucinations, then that's sort of your fault today. But obviously I don't think we can dismiss much larger issues. The reason why ChatGPT, or its founders, are more cautious now is because they've seen what's been happening. There were many issues, for instance, with minors using ChatGPT. One case ended up with addiction to ChatGPT, and there were many other cases, for instance Character.AI, where a kid interacted heavily with an AI that would not abandon its role and would kind of fuel certain anxieties in that child, and it ended up with a suicide. So I think some of these companies do see that as a major concern: that you cannot overrely on artificial intelligence, that you cannot use ChatGPT to invest in the stock market, that you cannot consider it to be a reliable source of information or an oracle. And I don't know about the US, because the US obviously doesn't like to be overburdened with regulations, and maybe guardrails are a better approach there, but you see how these companies are indeed trying to come up with a solution that is safer for them, where they say: this is not a technology that you can trust. This is a technology that can prototype certain things for you, that can be helpful, but it's not going to be something that you can fully rely on. And maybe that's going to be envisaged in the upcoming regulations. Intellectual property is yet another thing, right? We have many unresolved cases here: AI systems that can make sounds, music compositions, videos. This is all still an open space.
We have a regulatory vacuum in these areas, but that is likely to change over the next year or two. >> Let's talk about this work slop that you mentioned earlier. It's causing real problems for the profitability of companies, and some economists on my show have been saying, well, AI isn't really generating high ROI. This particular article by HBR highlighted the issue: despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is the work slop that you talked about. To summarize, it's basically that people have to redo the work that AI did. I know this firsthand. I get ChatGPT to do something, it's terrible, I have to do it again; I may as well have done it myself in the first place. Not always, right? It adds some value. But what can we do to make AI profitable? That, I think, is the biggest question investors are asking themselves right now. >> Well, I think particularly this article, and also the other one that I mentioned by MIT, are highlighting this major problem: artificial intelligence is very good at writing redundant emails that sound really nice, and they're super polite, and maybe that is something that advances the organizational culture, that people are sending each other polite emails, but what else is there? Is that really what we asked for? Is this where all these investments are going? I think this is a major issue. I have to tell you that when I was in Davos earlier this year, something I mentioned before, I remember various consultancy boutiques advertising themselves as those who know what the ROI on AI will be. But I honestly think that nobody knows that quite yet, because we have major limitations to LLMs, and they are good at some tasks. You can definitely use them in communications. You can use them in marketing, in text analysis.
They can serve as a form of internet for the organization, again cautiously, but they can be quite helpful at finding information in different hidden pockets of the organization. But there are also many areas where they are not so usable anymore. And that's why I said, when you asked me what would be a meaningful thing to do if I were, say, at Google and competing against OpenAI: try and solve these things, and target maybe smaller areas. Not such a generic technology that attacks all the problems, but a technology that is more specialized in certain areas: in taxation, in the legal system, in healthcare. We need more specialized tools; generic ChatGPT will not be a good solution for these areas that are very important for us, vital for us. So I would say we should expect a bit more in terms of really focusing on the real problems that people experience at their daily jobs, right? And solving those, instead of giving them an Encyclopaedia Britannica of the 21st century, which is what ChatGPT is for me. It's a responsive encyclopedia, sort of, that knows everything about some things, or actually a little about all things, but it doesn't have that in-depth knowledge and expertise that you usually use when you're doing something meaningful at work. >> So to sum up then, what new developments or new applications of AI do you see actually lasting and actually contributing real substance, not just writing nice emails? Things that will actually add to productivity, things investors can bank on for real ROI. What's coming up that excites you? >> What I'm excited about is essentially this turn towards more specialized tools for different professions. And I think that if there's enough knowledge sharing, and if there's enough of a catalog of best practices that we can use, we can actually build meaningful tools in the future.
For me, a bit of a problem is that companies quite often don't share how they've been working with AI and what has worked and what hasn't, because they're maybe a bit afraid to overshare failed experiences with artificial intelligence. Everybody wants to showcase themselves as those who are meaningfully working with AI. But I think we totally deserve an honest conversation about what has not worked, because that can bring us to a point where we have really useful tools that a nurse can use, a doctor can use, a lawyer, a journalist, a researcher, in a more meaningful way. For now, these tools are very good at coding. They're very good at researching some things for us, plus a bit of hallucination. But they're definitely not good for the majority of professions as we know them. >> Who's working on these specialized tools? Is it the big tech companies? Is it startups? Is it researchers? >> I'm not sure if it counts as big tech, but Anthropic is definitely an interesting company. They have a slightly different trajectory, and the way they want to build AI is certainly a different approach from, say, OpenAI. OpenAI is really advancing to replace the internet as we know it, I think that's their main ambition, and then to build tools that are very engaging for humans, also on that emotional level. But I think Anthropic is trying to really tackle the problem of workplace issues and challenges, and how to solve them together with artificial intelligence in a meaningful way. I do know of other ideas, obviously. We mentioned physical AI and robotics, and who knows what comes out of that. We might see some very interesting breakthroughs in how AI, for instance, tackles manual work, or how it avoids obstacles that show up in our real environments, in a very, very good way.
So I think there's plenty to look ahead to, but certainly LLMs do have their limitations, and I think the hype on LLMs might be a big deal now, but who knows, maybe two years from now we'll just consider them one of many technologies that can be good for some things but definitely not for everything. >> Final question for you, and then I'll let you go, Alexandra. Tech investors are often cautious about the rise and fall of mega tech companies. We've seen what happened with BlackBerry: they were the dominant smartphone player, and then Apple wiped them out. Cisco was the most valuable company at the time, and then they lost all their market edge; I think it lost 85% of its value or something like that. If you look around right now, are there any companies that you think are slow adopters or laggards in innovation that could risk becoming either a BlackBerry or a Cisco? >> Well, I am thinking more about companies that are deciding not to embark on a journey with artificial intelligence altogether. When you look back, you can think about Kodak and how they missed out on the digital photography revolution. And I think there are companies that are dismissive of artificial intelligence, also because it's not a technology they would trust. And I kind of understand that position. But I think that if you don't invest in understanding this technology and in roadmapping how you could use it in a meaningful way, its current version or its future versions, and if you're not trying to figure out the best pathway forward with AI for you, then you're missing out on something very, very important. And I do hear, when I talk to my students about their workplace experiences, that there are companies that are definitely dismissive of artificial intelligence.
And if you're dismissive of artificial intelligence today, it's like being dismissive of the internet 20 years ago. That does not end well. >> Very good. Thank you very much, Alexandra. Great talk. Where can we follow you and learn more about you? >> I'm inviting you all to my LinkedIn or to my Twitter. Find me there, and I'm obviously happy to interact. >> Okay, good. We'll put the links down below for LinkedIn and X, and we'll make sure to follow Alexandra down below. Yeah, Elon won't be happy if we call it Twitter. I'm kidding. Who cares? >> It's still, you know, when you type in Twitter, it's still there. >> Yeah. Well, there's also a verb: you can tweet things, but I can't X things. That doesn't make sense. Anyway, it became part of the lexicon. But I appreciate your time. We'll follow you, and we'll speak again soon about any new AI developments. So thank you for joining the program. Thank you. Thank you for watching. Don't forget to like and subscribe.
Should we just work two or three days per week? I love the way it'll drive innovation forward, but I think it's a little bit unknown. How would you answer these questions? What will jobs be like, and should we just work two or three days per week, Alexandra? >> Well, I would say that the promise of working for two or three days is actually really great, but I don't think that's going to happen anytime soon. And the data that we are collecting at Harvard actually suggests that AI's impact on the world of work is going to be much more complex, uneven, and also probably not so positive. What we see already is definitely an impact on the so-called entry-level jobs. And here there is an issue: people would rather hire ChatGPT than an intern, which I think is a problem, because if you want a senior at a certain job, you should also have a junior who's going to learn to become a senior. So that is actually quite an issue. But in other areas, this impact, I think, is definitely overhyped. There was a study recently by Yale that stated quite clearly that the impact on jobs at large, when it comes to artificial intelligence, is quite low, close to zero. So maybe in some professions related to IT you do see a change in how people work, but the replacement of a job is actually not happening. And to that I would also add a recent study by MIT that stated that pilot projects at organizations focused on implementing artificial intelligence are mostly failures: 95% of them fail. So maybe, if we speculate about the long-term future, we might see artificial intelligence stepping in to some professions, but current-level LLMs are definitely not there to replace jobs. Maybe some tasks. And I don't think that's going to change anytime soon. So we cannot hope for a two-day work week in the next five years or so.
Okay, I want to talk in detail about what kinds of jobs will be impacted first, but let's talk about one of the biggest themes within the AI sphere: artificial general intelligence. MIT Technology Review just last week published this article called "How AGI became the most consequential conspiracy theory of our time." I hear it's close: two years, five years, maybe next year. And I hear it's going to change everything. It will cure disease, save the planet, usher in an age of abundance. It will solve our biggest problems in ways we cannot imagine. The article basically goes on to debunk some of these claims. What is your analysis of this trend of AGI? Will we actually get it in the next 2, 5, 10 years? What would it look like when we finally achieve it? Some argue it's even here. So maybe you can help us discuss what's actually happening on the ground. >> Well, I think the problem with this very term is that it became very blurry recently. AGI used to mean a slightly different thing. If you look at the textbooks from 10 years ago, AGI meant a technology that can mimic the whole spectrum of human intelligence and successfully emulate it, including bodily intelligence, perceptive intelligence, and many other things: affective intelligence, emotional intelligence. Clearly that's not happening with LLMs. But I think big tech is trying to propose a different vision of what AGI is, and what they're trying to propose is a technology that can be successful at solving some tasks that we generally pay for, that have an economic value, right? And in that way you could certainly argue that AGI is already here in some form, but this is definitely not comparable with the vision of AGI that had been proposed prior. And I think when somebody's referring to AGI, and the next wave being superintelligence, quite often we think of a technology that is acting on a voluntary basis, that is intentional, that is even conscious, and that is definitely not happening.
We have LLMs that can be good at certain tasks, but they're not going to be good at other tasks. And they work in a surprisingly weird way, in the sense that when you interact with them, sometimes they're very good at what they're doing, and sometimes a banal question pops up and they cannot answer it. So I think this is something that we experience on a daily basis, and that's certainly not AGI as we used to formulate it. I think that's very important to say, and in that way I also agree with this article. There was a promise of AGI that was very speculative, and it was used to overhype artificial intelligence, but what we have right now is many companies actually backpedaling on that. I don't hear OpenAI talking about AGI all that much, as they used to, for instance, last year. >> You're right that, forget AGI, even large language models right now are not perfect simulations of human conversation. Sometimes you say one thing and it doesn't understand you. My counterargument is that I talk to some people, humans, in my personal life, and I have the same problem. So I don't think machines are that far behind real life. But to your point, what needs to develop for a system to be universally smarter than maybe the smartest humans? It's not inconceivable to think that maybe one day we'll get there; at least that's what science fiction makes us believe. But how do we bridge the gap between science fiction and reality from here? >> When I went to Davos, to the economic forum, this year, we had plenty of conversations about physical AI, and it's actually a very big topic in artificial intelligence: that you need a body to collect experiences and to learn from those experiences, and then further on operate like a human. Obviously, we do not know what would happen if we had that perfect robot that would have both an LLM/agentic mind and, on top of that, a body that can experience reality, as noisy as it is.
But many people were saying that, well, we were trying to approach this problem of superintelligence, or even AGI, from the perspective of language and language acquisition, and this is something that LLMs are very good at. But in fact our intelligence is a very adaptive system that developed first because of our interactions with the physical world. So maybe we should reverse that and think about physical artificial intelligence. And you see that, right? Currently there are some companies, for sure in China, also in the US, and to some extent in Europe, that are trying to build better physical systems that could integrate what LLMs can do and what they're good at with physical capabilities in the real world. That is the most challenging thing for artificial intelligence, because our world is constantly changing, and adapting to it is actually quite difficult. So if AI can overcome that and become good at it, who knows what could happen next. >> This leads back to our initial discussion about the labor force, and I want to know how exactly our labor force will be impacted, or which areas of the labor force will be impacted first. Futurists have been speculating about machines and our labor force for decades. This was a Time article that my team helped me pull up, from 1965, and I want to bring this particular paragraph to your attention, where it's cited: men such as IBM economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week, I guess that's what Bill Gates was citing, perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the workforce will be employed, and warn that the whole concept of people as producers of goods and services will become obsolete as automation advances.
And this is not like the Industrial Revolution, where we had farmers move into the cities because there was factory work and farms could be automated. We're talking about every single job being automated, which then brings us to existential questions like: what is the purpose of society if machines can just do everything for us? Can you help us foresee this future? >> Well, I hate to be maybe not a pessimist but a realist here, but I'm just afraid that more technology does not add up to more leisure for us. We've experienced various waves of technology already, and it seems to me that we are working much more than we used to. Our workday is so widely stretched right now because of the fact that we can constantly be present online and work using various systems. There's this expectation that our productivity will grow, and this is also something that AI people talk about so much: how our work will be enhanced and we'll become even more productive. And that is a strong contradiction to this vision that we will become obsolete, that we will not be needed anymore. So for me, the next decade is definitely not about that, and I think talking about technological unemployment is sort of a replacement topic, because there are many other issues I have with artificial intelligence's impact on society: on minors, AI slop, deepfakes. These are very current topics that we should discuss, taking into consideration what this technology offers and what's possible with AI today.
So in terms of understanding the job market, like I mentioned before, there's no evidence at this point, from what we're trying to collect here at Harvard and at many other institutions, that this impact is so large. You do see areas, I mentioned particularly IT, where that impact is clear, and then you also have sectors that are benefiting from this technology, like for instance marketing, where you have a lot of content development and content production, where artificial intelligence can really speed things up. If we take the avenue of developing robots that could serve as systems replacing us in professions that require certain manual expertise, then that's going to be probably a very different trajectory. But I do want to say: building robots is essentially very, very cumbersome. It's also something that is very costly. We don't have a solution for the battery life of these robots. Something that is never mentioned in those advertisements showing us humanoid robots is that they usually work for half an hour or so and not longer. So we have plenty of issues related to data, related to batteries, like I mentioned, that are preventing those robots from entering our lives very quickly. Robotics was always the first big important topic in AI, right? When you think about different disciplines of AI, everybody talked about robots, and somehow we never had a breakthrough in robotics. So I'm more skeptical when it comes to that, and I think, well, maybe we can expect some breakthroughs, but if we don't see them, the march of robotics is not going to happen anytime soon. And when it comes to LLMs, we're going to work differently, I absolutely agree with that, but whether we will work less, that is a very big question for me. >> Let's talk about robots now.
There's a big race amongst tech companies to be the first to put a humanoid robot in people's homes in some capacity. So let me just bring up one example that has been making the rounds on the internet: the 1X robot, Neo. I'm just going to show you this one clip. It's cooking. So, take a look. "We want to push and spread this out into a nice big ring for pizza. You want to give it a try?" "Yeah, let's do it. I'm just going to slowly kind of get my thumb in there real quick." "You don't want to pinch too hard in one certain area. You want to be gentle. See, like this? Yeah. You can even go like that. Make your own method. I like that. Stretch it out that way." "It's a little sticky though. Can I get more flour or something?" "I don't know if flour is the problem right now. See this spot right here? You stretched that out too much. We're going to take it, we're going to pinch it back together, and we'll hit it with a little bit of flour. You see how much better this already looks?" "Let's not pick it up again. I think that's where I went wrong. I'm going to start stretching it in my head. I'm pretending I'm a famous pizza chef from Italy right now." "There you go." >> Okay. It's very good at conversation. It's not good at making a pizza. It also costs $20,000, and I can't comment on the battery life; I don't know. But this is happening. Can you just comment on what you just saw? >> Yeah. Well, I think we have to add one more thing. Although I do admire the campaign here, the involvement of so many content creators and influencers, because it's a very effective campaign. We've all seen that robot, probably somewhere on Instagram or elsewhere. So that is definitely a very successful campaign.
But I think the part that is often omitted is the fact that this robot is actually steered by two or three people who are wearing VR glasses to make sure that it does the things it's supposed to do and not some other weird things. I will say, however, that I saw a couple of very funny videos of, well, maybe not this robot but some other robots similar to it, that are actually failing miserably at the tasks they're supposed to perform. So on the one hand you have here a system that is actually quite good at communication, like you said, and we know that LLMs and various agentic systems can do that. On the other hand, when it comes to this manual work, you can see that there's still so much more to do, and then there is also a human controlling this system from behind. So yeah, I think I will be persuaded when I see a fully autonomous system, and then I'm ready to have a meaningful discussion about what the degree of replacement could be. But currently, since it's still actually human labor behind the robot labor, I'm again worried that what we're seeing here is not what we're getting. >> So now the robot needs human input to learn. And how much learning does it need to do before it can be fully autonomous? And by the way, I don't think anyone's going to want to buy a robot that's controlled by somebody else. I wouldn't personally; I can't speak for you or anybody else, but you're basically inviting somebody to virtually be in your house, and you don't know who that person is. But the question is, if that step is not done, then can the machine not learn, or is there another way that it can learn?
Well, I think this learning process is actually something we don't know many things about, in the sense that when you compare it to, for instance, a child, the amount of "video recording" that a small child is doing at its early phase of life is pretty enormous, and here we really don't know what the learning curve could be. Let's assume that this robot collects materials, video materials and audio materials and conversations, for five or six months. Will that translate into a learning curve where it's actually going to master certain skills that are important in those daily errands? I honestly don't know. This is something we will have to see. But I will say that I understand to some extent why this turn to humanoid robotics is happening, and that is because many people were actually quite upset with recent developments in AI. I refer here to this AI slop, right, to the deepfakes that are penetrating the internet, but also certain concerns about, well, maybe detaching meaning from knowledge work, analytical work, because of the fact that AI can generate lots of reports and lots of text, etc. So some people said: this is not what we expect from artificial intelligence; what we expect is real help with daily tasks. And I think it is a good time for those who are betting on robotics, because there is a general concern over LLMs, and people have sort of expressed that they would want something different from artificial intelligence. So I'm expecting a big wave of interest in robotics, but also investments as well. >> A sector using AI that we need to discuss is education. And I'll just play this in the background as we talk about it. This is just one example; you see a lot of these on the internet. This is a Chinese school, and it's showing the use of robotics and AI agents. Actually, I don't know if that's a real person or an AI agent. I can't tell. But certainly they're using a lot of tech.
You can't really tell these days. There's a lot of technology in this film, and I know it's a showcase, but tell us how AI is being used in the field of education at all levels right now, and ultimately, should we have AI teachers in our children's classrooms in the future, Alexandra? >> Well, I would say that a very interesting idea is AI tutors that can help out in the educational process. And I can tell you that I'm also involved in a project called EU on air, which embraces 11 different universities in Europe, but we are also collaborating with Harvard here, to develop an AI-based system that could enhance the process of education at the university level. What I mean by that is, for instance: you go to school or to your university, you attend a lecture, and perhaps you did not understand everything from that lecture, and you want a tutor to help you out. I think artificial intelligence can be really good at such a task. We can also think about systems that could enhance the process of career placement for students, and here artificial intelligence could be a very meaningful tool. So I think there is plenty of space where you can add artificial intelligence to the educational process: for instance, to simplify content, to translate it, to make it more entertaining, or to introduce services that would otherwise not be available to students. Because when it comes to the workforce, there can obviously be a consultant who helps with career placement, but this person usually works eight hours a day, quite often does not have access to all the databases, and cannot fully analyze the proper placement for you as a student in the future. So I think we can find areas where artificial intelligence can really make a meaningful change.
We've heard about the Alpha schools in California, but there is still a human teacher there. So there can be AI-based tutoring, or self-paced learning with artificial intelligence as asynchronous learning: you are, let's say, exposed to a lecture, then you do some exercises with artificial intelligence, you're being graded by AI, but afterwards there is the part where you are socializing, and there is a teacher and there are real humans next to you. This is what works, and those Alpha schools have been showing that. What we are trying to pursue with EU on air is a similar concept: AI should be an enhancement, an additional layer to the educational process, but not a replacement for it. And I will add this: we had an experience of self-paced learning and online learning during COVID, and I think for the majority of students that was not a great time. Students want to interact with others. They want to hear their perspectives. They want to be engaged in a classroom experience, an exchange of ideas, and I think this is not something you can easily replace with any technology, even the most savvy ones. So I would not disregard that, and I would say the future of education is with AI, but not in a direction where AI replaces education as we knew it. >> Japan is testing an artificial city where people are supposed to live in harmony with automation. This is from a few years ago: a mass human experiment inside Japan's futuristic $15 billion smart city, funded by Toyota. It's called Woven City, and the purpose, to quote, is that "there will be pilot experiments and innovations that will propel society forward. The specific technology that will be tested will not be disclosed." But it will be something along the lines of personal mobility, automated driving, robotics, and AI. They just announced in September this year that they launched the city.
And we're going to keep tabs on, I guess, the development. What would a city like this look like, where we have integration of robotics and AI with humans, built from the ground up? How would that be different from, let's say, a normal city today? >> Honestly, it's very hard for me to imagine, because there is so much possibility for friction when it comes to the interaction of humans and AI. I do remember a recent study published in Nature, where doctors were supposed to work together with artificial intelligence on diagnosing patients, and it turned out that they were either rebelling against it or over-relying on it, and there was kind of no midway there. So I feel like we are not really prepared for experiments like that. But I will say one thing: I think bringing together projects that could simulate a good future with artificial intelligence, or AI-based multi-agent spaces where we could test whether certain policies work or not, is valuable. We hear so much about UBI. What if we did an experiment that involved an artificial society where UBI is actually rolled out for the whole society? I think it's a fantastic research area, and we could find out so much if we were able to emulate different personalities, different humans, different walks of life, and how UBI or another policy could impact their livelihoods. I would not stop there; I think AI is a great area to experiment with, to figure out what could work in a society. Obviously, you have to treat it with certain caution, because it's not going to be fully indicative of what we would do. But nonetheless, doing these sorts of experiments can be a very, very interesting thing.
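To make the idea concrete, the policy simulation she describes can be sketched at toy scale. Below is a minimal, hedged illustration in Python: the income distribution, poverty line, tax rate, and UBI amount are all invented for demonstration, and this simple arithmetic model is nothing like the LLM-driven agent societies she refers to; it only shows the basic shape of "simulate a population, apply a policy, compare outcomes."

```python
import random

def simulate_ubi(n_agents=1000, ubi=1000.0, tax_rate=0.25, seed=42):
    """Toy agent-based sketch: compare poverty rates with and without a UBI
    funded by a flat income tax. All numbers are illustrative, not calibrated."""
    rng = random.Random(seed)
    # Heterogeneous monthly incomes: a lognormal spread stands in for
    # "different walks of life" in the simulated society.
    incomes = [rng.lognormvariate(7.5, 0.8) for _ in range(n_agents)]
    poverty_line = 1200.0  # hypothetical threshold for this toy model

    def poverty_rate(disposable):
        return sum(1 for x in disposable if x < poverty_line) / len(disposable)

    baseline = poverty_rate(incomes)
    # Policy applied: each agent pays a flat tax, then receives the same UBI.
    with_ubi = poverty_rate([x * (1 - tax_rate) + ubi for x in incomes])
    return baseline, with_ubi

base, after = simulate_ubi()
print(f"poverty rate without UBI: {base:.1%}, with UBI: {after:.1%}")
```

A richer version, closer to what she suggests, would replace the random draws with simulated agents that have personalities and behaviors, and would track second-order effects (labor supply, prices) that this sketch deliberately ignores.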
I'm just not sure about combining real humans with that type of AI simulation, because I think the possibility of friction is generally quite large, and maybe that's not something we would like to pursue at this nascent phase of the development of artificial intelligence. >> So in this hypothetical scenario, could you see the possibility of, let's say, machines producing most of the output in the economy, UBI being rolled out, everybody just being given income, and then we consume what the machines make? Is that what that looks like? >> We can definitely simulate something like that. We have possibilities with AI, with deep learning, to simulate a scenario of that kind, but also to simulate other scenarios, more utopian, more dystopian, different kinds of visions for the future, and then try to find out what happens. I think it's actually great. I am hearing so much about AI-based surveys, where you don't survey real people but survey artificial intelligence, a multi-agent society of AIs responding to questions, and it can be a meaningful addendum to what we do as researchers to find out about reality, but also to project some vision of reality. So I'm not sure about a real synthetic city, but an artificial city that is a simulation, some sort of platform where we find out what happens when we, for instance, deploy UBI, is definitely a very interesting usage of artificial intelligence. >> We don't even have to look that far into the future. How much is AI and the development of AI contributing to economic growth in the US right now? >> Well, I think artificial intelligence is probably currently the sole source of enthusiasm in the American economy, as bad as that sounds. We are hearing that around 92% of US GDP growth in the first quarter of 2025 came from investment in data centers and information-processing equipment and software.
So that shows you that we are heavily relying on this technology. I think this is one of the reasons why so many people talk about the AI bubble right now and are worried about its burst: we have placed all our bets on artificial intelligence, and if we underdeliver, I think next year it might be an issue. >> Well, this has been circulating on the internet, to echo your point. The author of this particular post calls it the scariest chart in the world, or maybe he's quoting somebody else; I'm not sure. But it shows something that hasn't happened in a few decades, which is the divergence between the S&P 500, the stock market, and job openings in the economy, and he marked a vertical line at the release of ChatGPT. I don't think he's insinuating that ChatGPT took all our jobs; we know it hasn't. But it does strike a very interesting correlation. Is there any sort of relationship here at all, Alexandra? >> Well, I would say the relationship is the following: the job market is not moving as fast as the promise of artificial intelligence, and in other areas we have not seen that much development, we have not seen so many innovations introduced. There's just AI, and then there is the rest of the world. Obviously, the reasons for the current state of the job market in the United States are quite polycausal: you can mention tariffs, you can mention many other things, the geopolitical situation. But on the other hand you have this big pillar of enthusiasm, and that is artificial intelligence. And I think that's why you see this divergence. It's not that AI is impacting the economy; it's that we talk about artificial intelligence so much, and it's so overhyped, that it's just moving rapidly. The investments are flooding into the AI space, and the rest of the job market is just moving quite slowly and is detached from it to some extent.
>> I talked to a fund manager earlier today, and he showed me this chart. This is the circular AI financing that people are talking about, that's happening right now: big tech companies are investing in each other, either through infrastructure or capital expenditure, or outright just buying equity. Can you explain, from a technologist's point of view, not purely a financial one, why this may be happening and why it is necessary? Have you seen anything like this before? >> Well, I think some people do make a reference to the dot-com bubble, but this is yet a different phenomenon. What we see is a certain batch of money just circulating among those big tech companies, and they are usually doing investments together. They rely on one another, and this money is just flowing inside that little bubble. So I would think this is concerning. We saw big promises at the beginning of this year: Oracle, together with OpenAI and, I believe, SoftBank, said that they would place $500 billion on the market to build data centers, something we have just mentioned, and then that money goes to other tech providers. So this is sort of a closed bubble, as you mentioned, with this circular spending inside, and I think this should be a point of concern for some of us. So if you ask me, I am expecting a bubble burst, or at least some form of correction, because this overhype is definitely already visible to some people, and since we're not seeing value: we don't see artificial intelligence delivering the change in organizations and in the way we work, and we also don't see such spikes in productivity. If your productivity is rising by 10%, this is not so meaningful; it is definitely not comparable with this ginormous promise of artificial intelligence.
So I am expecting a correction in the coming months or a year. >> And these companies, when you think about it, are actually competitors with each other. >> Yep. >> They're collaborating, they're sharing ideas, they're investing in each other. People are talking about how the search engine is going to change. I'll let you comment on ChatGPT's latest Atlas launch, and overall the introduction of chatbots that some people say will spell the end of the traditional Google search engine. Is that true? Will we use the internet completely differently in the next year or two? >> I fully agree with that, and I think it is very likely to happen. ChatGPT and other systems similar to it, say Gemini or others, are in fact developing the capabilities of a search engine. Maybe "search engine" will become an obsolete word some time from now; kids won't know what a search engine even is, because they will just interact with AI to find out about the world and to look up information. I think it's very, very likely. So the internet as we know it, with applications and with a search engine, is something that is going to be a relic quite soon, and ChatGPT and others are going to take over. You can already shop at Walmart via ChatGPT, or listen to music on Spotify, but again via ChatGPT; that means, obviously, that ChatGPT is taking over the functions of other services and applications and kind of swallowing the internet as a whole. For me, actually, the main point of concern is that if you're building this new information architecture, the slop is a concerning problem here.
Because if artificial intelligence eats up the internet, and a large proportion of text and content online is just deepfakes, then the quality of what we will get from systems like ChatGPT is going to be much lower soon, because they will simply be eating up this deepfake slop and then giving it back to us and multiplying it. So for me that is quite concerning, and I'm hoping that some of these companies will reconsider: if they're adding more slop to what we already see, what they give back to us is not going to be meaningful and of high quality; instead it's going to be of poorer quality than the internet, and the internet was definitely far from perfect. >> So let's say you were working in the product development division at Google. How are you responding to ChatGPT and other similar large language models being released in the commercial space? What is your priority then? >> I would say there are different avenues. You can try to build meaningful change on top of services that were deployed prior and have been working well. I can absolutely see how our interaction with something like YouTube or Workspace or Drive could be different and better thanks to artificial intelligence, and this is something that I think users would totally appreciate. So I would focus on more incremental change, but change that can lead to a really better service for all of us as users. And I would also think hard about how to attack the main problems we experience with artificial intelligence in the agentic, assistive form available today. AI is constantly agreeing with us. It's hallucinating. It's being this people pleaser. And I think this is not an avenue forward, right?
If you want a really good system, it should be able to say "I don't know this," or "I'm uncertain," or "let's double-check that," and this is not something that's happening. Artificial intelligence is quite often simulating expertise. So I would really try to attack those biggest problems we see with artificial intelligence and try to deliver real value, so that companies that have to be compliant with various regulations, that have to be responsible for what they're doing, feel that they can safely use this technology and not be worried that it's going to be a major problem for them because it hallucinated some results or flipped data. So I would really try to focus on that, instead of advancing more platforms that can generate infinite slop. >> Well, how are artificial intelligence legal frameworks going to change the future? This just came in two days ago: ChatGPT is no longer giving financial, legal, or medical advice. It's now being classified as an education tool rather than a consultant. So let's say I go on ChatGPT and ask it for some legal advice. I follow it. I get in trouble. Who do I sue? Is it Sam Altman's fault? And in the future it's going to extend beyond ChatGPT. Maybe at some point we'll have robots thinking autonomously. Then whose fault is it if I do something the robot advised me to do? What does my lawyer do in that case? Have you thought about this issue? >> It's unresolved. Listen, I'm from Europe, and in Europe we don't have autonomous vehicles. We don't even have attempts to introduce autonomous vehicles to our cities. The reason is that it's an unresolved matter: exactly the issue of responsibility. If there is an accident caused by such an autonomous vehicle, what happens next? Right.
So Europe decided to say, okay, we are not even trying to roll that out, because we don't know the answers to these crucial questions. And I think currently, if you are the one prompting, it's your responsibility to some extent, right? You are taking over that responsibility as a user, and if your content, your report, is infused with hallucinations, then that's sort of your fault today. But obviously I don't think we can dismiss much larger issues. The reason ChatGPT, or its founders, are more cautious now is that they've seen what's been happening. There were many issues, for instance, with minors using ChatGPT. One case ended in addiction to ChatGPT, and there were many other cases, for instance with Character.AI, where a kid interacted heavily with an AI that would not abandon its role and would fuel certain anxieties in that child, and it ended in a suicide. So I think some of these companies do see that as a major concern: that you cannot over-rely on artificial intelligence, that you cannot use ChatGPT to invest in the stock market, that you cannot consider it a reliable source of information or an oracle. I don't know about the US, because the US obviously doesn't like to be overburdened with regulations, and maybe guardrails is a better approach there, but you see how these companies are indeed trying to come up with a solution that is safer for them, where they say: this is not a technology you can trust. This is a technology that can prototype certain things for you, that can be helpful, but it's not going to be something you can fully rely on. And maybe that will be envisaged in the upcoming regulations. Intellectual property is yet another thing, right? We have many unresolved cases here: AI systems that can make sounds, music compositions, videos. This is all still an open space.
We have a regulatory vacuum in these areas, but that is likely to change in the next year or two. >> Let's talk about this workslop that you mentioned earlier. It's causing real problems for the profitability of companies, and some economists on my show have been saying, well, AI isn't really generating high ROI. This particular article by HBR highlighted the issue: despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is the workslop you talked about. To summarize, it's basically that people have to redo the work that AI did. I know this firsthand: I get ChatGPT to do something, it's terrible, I have to do it again; I may as well have done it myself in the first place. Not always, right? It adds some value. But what can we do to make AI profitable? I think that's the biggest question investors are wondering about right now. >> Well, I think this article in particular, and also the other one I mentioned by MIT, are highlighting a major problem: artificial intelligence is very good at writing redundant emails that sound really nice and are super polite, and maybe that advances the organizational culture, people sending each other polite emails, but what else is there? Is that really what we asked for? Is this where all these investments are going? I think this is a major issue. I have to tell you that when I was in Davos earlier this year, something I mentioned before, I remember various consultancy boutiques advertising themselves as those who know what the ROI on AI will be. But I honestly think nobody knows that quite yet, because we have major limitations to LLMs, and they are good at some tasks. You can definitely use them in communications; you can use them in marketing, in text analysis.
They can serve as a form of intranet for the organization, again used cautiously, but they can be quite helpful at finding information in different hidden pockets of the organization. But there are also many areas where they are not so usable anymore. And that's why I said, when you asked me what would be a meaningful thing to do if I were, say, at Google and competing against OpenAI: try to solve these things, and maybe target smaller areas. Not such a generic technology that attacks all the problems, but a technology that is more specialized in certain areas: in taxation, in the legal system, in healthcare. We need more specialized tools; generic ChatGPT will not be a good solution for these areas that are very important for us, vital for us. So I would say we should expect a bit more in terms of really focusing on the real problems that people experience in their daily jobs, and solving those, instead of giving them an Encyclopaedia Britannica of the 21st century, which is what ChatGPT is for me: a responsive encyclopedia that knows everything about some things, or actually a little about all things, but doesn't have the in-depth knowledge and expertise you usually use when you're doing something meaningful at work. >> So to sum up then, what new developments or applications of AI do you see actually lasting and contributing real substance, not just writing nice emails? Things that will actually add to productivity, things investors can bank on for real ROI. What's coming up that excites you? >> What I'm excited about is essentially this turn toward more specialized tools for different professions. And I think that if there's enough knowledge sharing, and if there's enough of a catalog of best practices we can use, we can actually build meaningful tools in the future.
For me, a bit of a problem is that companies quite often don't share how they've been working with AI and what has worked or not, maybe because they're a bit afraid to overshare failed experiences with artificial intelligence. Everybody wants to showcase themselves as those who are meaningfully working with AI. But I think we totally deserve an honest conversation about what has not worked, because that can bring us to a point where we have really useful tools, and a nurse can use them, a doctor can use them, a lawyer, a journalist, a researcher, in a more meaningful way. For now, these tools are very good at coding. They're very good at researching some things for us, plus a bit of hallucination. But they're essentially not good for the majority of professions as we know them. >> Who's working on these specialized tools? Is it the big tech companies? Is it startups? Is it researchers? >> I'm not sure if it counts as big tech, but Anthropic is definitely an interesting company. They have a slightly different trajectory, and the way they want to build AI is certainly a different approach from, say, OpenAI. OpenAI is really advancing to replace the internet as we know it; I think that's their main ambition, and then to build tools that are very engaging for humans, also on an emotional level. But I think Anthropic is trying to tackle the problem of workplace issues and challenges, and how to solve them together with artificial intelligence in a meaningful way. I do know of other ideas, obviously. We mentioned physical AI and robotics, and who knows what comes out of that. We might see some very interesting breakthroughs in how AI tackles manual work, or how it avoids obstacles that show up in our real environments, in a very, very good way.
So I think there's plenty to look ahead to, but certainly LLMs do have their limitations, and I think the hype around LLMs might be a big deal now, but who knows, maybe two years from now we'll just consider them one of many technologies that can be good for some things but definitely not for everything. >> Final question for you, and then I'll let you go, Alexandra. Tech investors are often cautious about the rise and fall of mega tech companies. We've seen what happened with BlackBerry: they were the dominant smartphone player, and then Apple wiped them out. Cisco was the most valuable company at one time, and then they lost their market edge; I think it lost 85% of its value or something like that. If you look around right now, are there any companies that you think are slow adopters or laggards in innovation that could risk becoming either a BlackBerry or a Cisco? >> Well, I am thinking more about companies that are deciding not to embark on a journey with artificial intelligence altogether. When you look back, you can think about Kodak and how they missed the digital photography revolution. And I think there are companies that are dismissive of artificial intelligence, also because it's not a technology they would trust. And I kind of understand that position. But if you don't invest in understanding this technology and in roadmapping how you could use it in a meaningful way, its current version or its future versions, and if you're not trying to figure out the best pathway forward with AI for you, then you're missing out on something very, very important. And I do hear, when I talk to my students about their workplace experiences, that there are companies that are definitely dismissive of artificial intelligence.
And if you're dismissive of artificial intelligence today, it's like being dismissive of the internet 20 years ago. That does not end well. >> Very good. Thank you very much, Alexandra. Great talk. Where can we follow you and learn more about you? >> I'm inviting you all to my LinkedIn or to my Twitter. Find me there, and I'm obviously happy to interact. >> Okay, good. We'll put the links down below for LinkedIn and X. We'll make sure to follow Alexandra down below. Yeah, Elon won't be happy if we call it Twitter. I'm kidding. Who cares? >> It's still, you know, when you type in Twitter, it's still there. >> Yeah. Well, there's also a verb: you can tweet things, but I can't "X" things. That doesn't make sense. Anyway, it became part of the lexicon. But I appreciate your time. We'll follow you, and we'll speak again soon about any new AI developments. So thank you for joining the program. >> Thank you. >> Thank you for watching. Don't forget to like and subscribe.