Jack Kokko – Building the Google of Finance at AlphaSense (EP.461)
Summary
Efficiency in Research: AlphaSense's AI technology significantly reduces the time and effort required to produce deep research reports, enhancing decision-making speed and confidence for investment professionals.
Market Reach: AlphaSense serves a wide range of clients, including 90% of top asset management firms, leading investment banks, and over half of Fortune 500 companies, positioning itself as a critical tool in financial analysis.
Evolution of Technology: The platform has evolved from a semantic search tool to an AI-powered research platform, leveraging large language models (LLMs) to enhance its capabilities and provide more precise insights.
Proprietary Content: Through strategic acquisitions like Stream and Tegus, AlphaSense has expanded its library of expert transcripts, offering unique insights that are not available elsewhere, particularly in private company research.
AI Integration: The introduction of an AI interviewer showcases AlphaSense's innovative use of AI to conduct expert interviews, providing scalable, high-quality content generation that enhances market intelligence.
Corporate Expansion: Initially focused on hedge funds, AlphaSense has broadened its customer base to include corporate clients across various departments, demonstrating its versatility and value in strategic decision-making.
Future Vision: AlphaSense aims to create an "always on" intelligence machine that continuously processes and analyzes information, offering proactive insights and transforming how financial and business decisions are made.
Leadership and Growth: CEO Jack Kokko emphasizes the importance of staying close to product development and maintaining flexibility to adapt to technological advancements, driving the company's continued growth and innovation.
Transcript
A single deep research report that our AI produces now gives them the same 10 pages that they spent three weeks producing with a team of people. So it just gives you this incredible efficiency and breadth in what you can do. It allows you to do much more diligence and ultimately be more confident in your decision-making, and still be a lot faster. So it addresses the quality and the speed and the confidence all at once. [Music] I'm Ted Seides and this is Capital Allocators. My guest on today's show is Jack Kokko, co-founder and CEO of AlphaSense, the market intelligence platform often described as Google for finance. The company's 6,000 customers span 90% of the top asset management firms, all the world's leading investment banks, and over half of the Fortune 500 companies. Our conversation covers Jack's early frustration as an investment banking analyst that sparked the idea for AlphaSense, the evolution of the business from a simple semantic search tool to an AI-powered research platform, the promise and perils of LLMs in high-stakes decision-making, and Jack's vision of an always-on intelligence machine that will transform how business gets done. Jack offers a fascinating glimpse at the intersection of technology, data, and investment decision-making. Before we get to Ted's interview, it's football season, which in my house also means it's indoctrination season. Because, let's face it, young minds are malleable. And when you've got kids, you've got a once-in-a-lifetime chance to wire them the right way with your favorite football teams. Just ask my four-year-old. >> Go Dawgs. Sic 'em. Woof woof woof. >> Now, that's an easy one. The Georgia Bulldogs are a college football powerhouse. Three national championships in recent years, tons of glory. Who wouldn't want to be a Dawgs fan? But on Sundays? >> Here we go, Brownies. Here we go. >> That one's just mean.
The Cleveland Browns are famous not for winning, but for testing your character year after year, heartbreak after heartbreak. And yes, I made her a Browns fan anyway. Some might call that cruel. I call it parenting. That's the thing about young minds. They believe what you repeat. So, just like forcing your kids to cheer for your favorite football teams, now's the time to plant another seed. Share the Capital Allocators podcast with friends, family, and colleagues in their formative years. Because if you get to them early enough, they'll be lifelong fans, too. Thanks so much for spreading the word. Please enjoy my conversation with Jack Kokko. >> Jack, thanks so much for joining me. >> Thanks for having me. >> I'd love you to take me back to your background that led to this entrepreneurial journey. >> If I go all the way back to where I grew up, that was in Finland. I studied electrical engineering, thought I was going to be building mobile phones, and during that time I really got enamored with finance. I studied both in parallel, and then one summer late in those studies I spent a few months working at a startup in Brussels called Easdaq, where we were building what was going to be the European version of NASDAQ. It never really took off, but I was talking to US investment banks, trying to raise money for that business, and also made some connections to firms in London, where I got an interview at Morgan Stanley and then ended up working as an analyst as my first job out of college in London, first during the telecom boom. I then convinced them to move me to where I wanted to be. My dreamland was Silicon Valley, and I ended up in San Francisco working on tech deals during the dot-com boom. I spent the first weekend working on a billion-dollar deal, and things were moving so much faster than in Europe.
It was very exciting at the time, but certainly also a stressful job for someone who wanted to do a good job, and the tools just weren't up for it, and that ultimately planted the seeds in my mind for what I'd one day want to go and build with AlphaSense. >> What were your frustrations at the time, doing the work you were trying to do without the tools needed? >> I had a pretty strong work ethic. I'd work the usual all-nighters trying to do a really good job on the analysis, but the pace was really fast and you'd end up not having enough time to do a really good job. I remember some client boardrooms where I'm sweating and barely awake, but also afraid of what did I miss, and what am I going to be called on by the CFO or CEO of that company because I just missed something in my analysis. And that feeling has stuck with me. Frankly, still every day when I walk into a boardroom, I have flashes from those situations. That really was because of the lack of technology to help an analyst who needed to consume so much information, trying to catch up on a new industry you didn't know, new companies you didn't know, and the sort of cross-sectional information across different industries that you really should have known to be smarter with your analysis and your viewpoints, and to be able to talk to some really experienced business professionals who were about to bet billions on a deal. Certainly we had the data terminals that every financial professional still has today. That was a big part of the frustration: you could go and manually look for data, but it was very hard to consume it at the scale and speed that was needed. And back in those days you already started to have technology for consumers, where I had Google to search the internet and so forth, and we could get our hands on information very efficiently, but this wasn't available for professionals who desperately needed that every day and every minute on the job.
That was a big part of the frustration and has stuck with me. I know a lot of people still today are doing their jobs with fairly manual tools. >> What was the vision for what you first wanted to build? >> In the early days, we even called it, I forget if it was us or clients first calling it, Google for finance or Google for analysts. We really built a semantic search engine that would understand what an analyst really is looking for when they are reading a financial filing or an earnings call or a research report from Wall Street. We built a system that understood millions of terms and linked them to core concepts. It understood that revenue is the same as top line, across the whole vocabulary of finance. So as we built a system that was able to do all that, and look at an earnings call happening in Japan or an SEC filing in the US, all the information around the world, all companies using different vocabulary, and reliably find every single data point about every single topic or theme that an analyst was researching, that was somewhat revolutionary at the time. That just wasn't available. People were still Ctrl+F searching for individual terms one at a time in PDF reports. Shocking, but that was how work was done, and we had magnified the speed and efficiency and reliability of finding that information. That was the vision that we started to execute on, and the product we built got pretty quick traction in the hedge fund world, where there was this strong thirst for information and efficiency and speed to insight that I had experienced as an analyst and that we were now able to bring to that market. >> Without going too deep into the technology, with what you're describing now, someone would say, "Oh yeah, I'll just go on to ChatGPT and figure that out." What was it like bringing these pieces together into a technology platform before a couple of years ago? >> Today, language models are able to be trained and inherently understand a lot of that. Not all of it, but a lot of it.
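The term-normalization idea described here, mapping many surface terms like "revenue" and "top line" to one canonical concept and then matching documents at the concept level, can be sketched in a few lines. This is a toy illustration under stated assumptions, not AlphaSense's actual vocabulary or engine; the synonym table and function names are made up.

```python
# Toy sketch: normalize synonymous finance terms to canonical concepts,
# then search documents by concept rather than by literal keyword.
# The tiny CONCEPTS table below is illustrative only.

CONCEPTS = {
    "revenue": "REVENUE", "top line": "REVENUE", "turnover": "REVENUE",
    "net income": "NET_INCOME", "bottom line": "NET_INCOME",
}

def to_concepts(text: str) -> set:
    """Return the set of canonical concepts mentioned in the text."""
    lowered = text.lower()
    return {concept for term, concept in CONCEPTS.items() if term in lowered}

def search(query: str, documents: list) -> list:
    """Return documents sharing at least one concept with the query."""
    wanted = to_concepts(query)
    return [d for d in documents if to_concepts(d) & wanted]

docs = [
    "Top line grew 12% year over year.",
    "The bottom line missed consensus estimates.",
]
print(search("revenue growth", docs))  # matches "Top line grew 12% ..."
```

A plain Ctrl+F for "revenue" would miss the first document entirely; matching at the concept level is what closes that gap.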
There's still a lot of value in real specialization and refinement, in that industry-level deep-dive understanding. But back then the tools were more basic. We did have an AI-first vision from day one. The AI of that day was just a lot simpler. We used AI and trained models to classify information. They would be able to do things like understand the sentiment of a given statement in a paragraph of text in a document and say whether it is good or bad. That was something we were really proud of. It took years to build that. We had a team of dozens of people in India tagging very large volumes of those statements to be able to train those models, and train them to do that one classification task. But it does it so well that even today's LLMs still struggle to do that kind of thing at that deep industry-level understanding. And there were many other classification tasks: just taking in feeds of documents and understanding, is this a broker research report, is it an initiation, is it a change in price target, recognizing companies in text, being able to do all these things that would help users slice and dice information efficiently and ask questions that would cut through millions of documents and get to just the right insights, the right documents, and the right paragraphs and commentary that you'd be able to very efficiently go through. So it was much more work to do these individual things that now, for a lot of it, come with large language models, as they are trained as really universally capable models, where we had very targeted narrow models back then. >> When you dive into how you go about implementing this, at that first layer of data sources, what's the core set of information that you've wanted to train on, and then where are there alternative data sources you've accumulated over the years?
We started from the information that was hiding in plain sight, hiding because there was so much of it that it was hard to get to the insights, even though they were available to every professional in the market. So all the SEC filings, global filings from every country with a stock exchange, earnings call transcripts, conference presentations at every investment bank, and then of course press releases, news, and then broker research, getting Wall Street research on a platform where you could now compare what is the company saying, what is an analyst saying about any topic, any company. That was still information that people could get access to on other platforms. One big step for us was acquiring a company called Stream, where they had built an expert transcript library that allowed us to start scaling and generating high-value proprietary content that you couldn't get anywhere else. And we could really point that system to generate information on specific companies: what are their customers saying, what are their suppliers, partners, former employees, executives saying about things that really matter. Before this, you had to rely on what the company is saying, what they are putting out there in their press releases or saying in public forums and filings, but you really had to go talk to management to question that or get alternative points of view. And the expert interviews started to add this invaluable additional perspective. We started to double down on that. That of course led to our much bigger acquisition of Tegus, where they were the market leader in that. They'd built by far the largest expert transcript library and a really incredible operation working with a thousand buy-side firms out there who were doing calls on Tegus, across the public equity buy side as well as private equity and venture capital, and you had public company content and private company content, and that started to expand the diversity of what we were able to offer to the market.
There wasn't much qualitative research on private companies out there, but these expert interviews started to really pull that in. And today we feel like we've got the richest source of insights on private companies. We do keep on adding content all the time, but that is where the core of the focus really is, because we see so much unique proprietary incremental value that we can add. >> So what you described, there are some building blocks of quantitative publicly available information, and then you have company reporting information, and then this huge library of, call it, expert opinions. If you're in that use case, a hedge fund analyst, how would you describe to someone who hasn't used the system what it is they're seeing, so that they can pull out whatever information they want to pull out? >> Today the easiest comparison point is something like ChatGPT, where you're asking a language model a human-language question. The system is now able to precisely understand what you're looking for from your prompt. And now it goes across all the half a billion documents in our system and is able to find the most relevant ones and then dig deep into them and ask those same questions of every single document and see, is the answer here? Is it here? And do that hundreds of times, thousands of times, for the most relevant documents for the user to look at, and provide a narrative-format answer that is granularly cited to those documents. A crucial difference from what people are used to with these chatbots we all use as consumers is that we focus on taking users to those underlying documents. Our users are serious professionals who care about reading and getting deep into the context that is stated in an SEC filing or research report or expert interview.
And we have made our user interface such that it's very easy on the same screen to see all those citations in the narrative-format answer, but also dig deep into the underlying document, really understand the context, and go deeper and lodge additional queries from there. So you can get very strong confidence in what you're reading, because you know where it's coming from. It's coming from high-quality sources. You see exactly which company it came from, which analyst wrote it, or which expert provided an opinion, and then judge for yourself. Not just trust that, but get the whole 360-degree view of looking at all those different sources, and multiple instances of each one on that same topic, to gather more of the mosaic of information. And we made that really efficient, so you can very quickly step through all these different breadcrumbs that lead you to that mosaic, to then be able to draw conclusions. Of course, the machine is now able to give us those conclusions in that LLM-provided narrative answer. You don't have to draw those connections yourself. You can question what the machine is giving you and ask different questions from different angles, but at least you get a narrative answer that is very intelligent, through chain-of-thought reasoning where the machine has already done so much work, probably a lot more than you would have been able to do as an analyst, since very few of us have time to spend weeks researching a project. And we often hear from clients that a single deep research report that our AI produces now gives them the same 10 pages that they spent three weeks producing with a team of people. So it just gives you this incredible efficiency and breadth in what you can do. It allows you to do much more diligence and ultimately be more confident in your decision-making, and still be a lot faster. So it addresses the quality and the speed and the confidence all at once.
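The retrieve-then-answer-with-citations flow described above, rank documents for a question, interrogate the top ones, and return a narrative whose claims carry citation indices back to sources, can be sketched crudely. The scoring and "answering" below are stand-ins (simple word overlap), not a real LLM pipeline; all names are illustrative.

```python
# Toy sketch: rank a corpus for a question, keep the top documents that
# actually match, and build an answer string with per-source citations.

def score(question: str, doc: str) -> int:
    """Crude relevance: count question words that appear in the doc."""
    return sum(1 for w in question.lower().split() if w in doc.lower())

def answer_with_citations(question: str, corpus: list, top_k: int = 2):
    ranked = sorted(range(len(corpus)),
                    key=lambda i: score(question, corpus[i]), reverse=True)
    cited = []
    for i in ranked[:top_k]:
        if score(question, corpus[i]) > 0:
            cited.append((i, corpus[i]))  # keep the source index for display
    # Each claim in the narrative carries its citation index, so a reader
    # can click through to the underlying document.
    narrative = " ".join(f"{text} [{i}]" for i, text in cited)
    return narrative, cited

corpus = [
    "Gross margin expanded to 44% on pricing actions.",
    "The company opened a new plant in Texas.",
    "Management guided gross margin to 45% next quarter.",
]
text, sources = answer_with_citations("gross margin outlook", corpus)
print(text)
```

The design point mirrored here is that the citations are not decoration: keeping the `(index, document)` pairs alongside the narrative is what lets the interface jump a reader into the source context.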
>> As you talk to your hedge fund clients, and you're able to get that example of a 10-page report faster and deeper than what would have taken a lot longer, how do they think about the alpha component? Meaning, in the past, you would do all that hard work and other people weren't going to do it. Now, all they have to do is become a client of yours and they can get that work done. What have you heard from your clients about where they can derive an edge on the market, and where does the information that you've created become some type of table stakes? >> We are raising the bar, for sure, as does any technology that is introduced into the investment process. Now everybody's able to do things much more quickly and efficiently and move in a more agile way, because the research can be applied to so much more that in the past you just had to ignore. It now becomes a question of who's asking the right questions, and how are you asking them, and from what angles, and how do you look at cross-industry impacts and read-throughs from this company to that company or this industry to that industry. It is also about early and late technology adopters: how are you able to adapt to these new solutions and how well can you deploy them? People spend a lot of time trying to be great prompters: how do I write a prompt that the system really understands well? We see customers asking us how we can really help them, how do we raise everybody to the same level so they're very good at asking the machine. And this is even hard for us humans. You ask your colleague a question and you have to think, did I give a good enough prompt that I can trust the answer? The same applies to these machine models. But this machine automation really just does the work that nobody wanted to do. The work becomes more interesting, and it's easier ultimately to do the value-adding work when the machine does the heavy lifting and collects the information for you.
>> What are some of the initial responses you give to someone who's asking you how to create a better prompt into AlphaSense? >> I'd go back to: how do you ask an analyst that's working for you? How do you make sure that you convey all the information that the analyst needs to know, so that you can be sure that your request has been understood? It's the same thing with the machine. If you keep it too vague, it might misunderstand the question, or it may make assumptions that you don't like. So, if you want to be very clear about what you're looking for, then it pays to be detailed in what you're asking about. But you can also be iterative. We've built our system so that you have different modes. You can ask questions in fast mode, generative search, where the average answer comes back in six seconds. There's a mode where you let the machine think longer. It takes maybe one or two minutes of chain-of-thought reasoning. It runs dozens of searches, synthesizes an answer, and brings it back. And then there is deep research, where you let it work for 10 to 15 minutes and it comes back with a 10-page report, and it's going to have gone much deeper. When you're doing that longer-cycle work, you're going to be more careful with your prompts. You don't want to wait and then realize that you weren't precise enough. But when you're iterating quickly on a six-second cycle, it's cheap to ask lots of questions. So if you don't think the first question got there, then you can ask again. We feel it's our job to make sure we understand the user and understand what they're looking for. And we're constantly refining this and trying to make sure that the system has the quote-unquote intuition to understand what you didn't say.
This really is the value of specialization, where we're trying to really understand our financial professional users, our knowledge professional users, and their different roles and what they're looking for, and allowing the system to take that into account to understand their prompt the way a colleague would in that same seat or same industry, same company. >> How have the last couple of years, starting with ChatGPT, changed the trajectory of what you'd been doing for a long time before they were around? >> It's been an incredible breakthrough from a vision perspective. I remember telling our team five, six years ago about how our system is going to be this oracle that you can ask any question in human language, and it'll understand your question and go do the research and come back with a great answer. Frankly, I had no idea how and when we were going to get there. When large language models actually started to be able to do that, that was incredible. It just felt like sort of a Cambrian explosion of opportunity of what we could build with it, now that the system can just understand what's on our users' minds. So much better than when you just had to express it in keywords. It's just really hard to deduce what someone is looking for when they put in a couple of keywords. But when they put in a full sentence, a full prompt, you have a really strong chance of getting to exactly what they were looking for. So it was an incredible opportunity, and of course it has allowed us to do a lot since then, and it'll be hard to go and cover all of it. Everything really that we've done in the past, we've almost redone with language models. Although, as I mentioned on the sentiment piece, the incredible sentiment model we built with the prior generation of technology is still running and doing a job that's better than what we see LLMs doing. But mostly LLMs now can just raise the game on everything, every part of what we're doing, opening up new opportunities.
One thing that we built and released just weeks ago was a new AI interviewer. Our Tegus expert transcript library is created by buy-side firms doing interviews with experts, and an analyst at a hedge fund or VC firm or private equity firm has to get on the phone and talk to the expert for half an hour, 45 minutes. And there's a lot of friction in that process. But what we now have is an AI interviewer that does a pretty good job of that. We're now able to create this new system that is much more scalable, where we can point it in any direction and the AI interviewer will hop on the phone anytime the expert is available. And it's also able to help us generate new content sets where maybe a buy-side analyst wouldn't be available to do this repeated work. For example, we're doing channel checks covering dozens of industries: talking to the same expert every month to understand what is going on with pricing and demand-supply dynamics, mix, market share shifts, and so forth in a granular industry, getting a pulse on that industry every single month from the same person, doing that across dozens of industries, and getting a pulse on the whole economy that way. It would be hard to get a buy-side analyst to commit to doing that every single month as a repeated process, but AI will do whatever we ask it to do. This is a very exciting new product that we were able to launch on the back of AI being able to do something that just wasn't at all possible even months earlier and now suddenly is possible. There were signs of this being feasible in the past, but now it's able to have a human-language conversation with a true expert with high technical and market expertise, someone who wouldn't talk to an AI unless it was really able to have a conversation, respond well, and ask smart questions.
I was quite shocked by what I was seeing in the first interviews, where it's talking to an expert in the high-bandwidth memory market and memory chips, able to talk about the various technologies and pricing dynamics and have a very fluid conversation. That is an incredible advancement that we think is game-changing. As soon as we announced it, we started hearing from clients saying, hey, can we offload the calls that we like to do ourselves to this AI? There are many people that don't want to be recorded, perhaps, and would love to hand over those calls and say, "Hey, here's what I'm trying to find out. Can you recruit the right experts and have your AI do the work and just send me compliant transcripts back?" That's another big improvement to the industry's workflows, where nobody wants to be spending hours and hours preparing for expert calls. It's a dreaded job, and suddenly AI can do a lot of it. I think the industry will be very happy about that, and we can still go and get the same insights, and people can now focus on what they do with those insights. Now, of course, the vast majority of this is still happening by people getting on the phone. They do want to do it, to the point that the ones that really care about the insights they need to get will do the calls themselves. But there's a portion of those calls that get offloaded to AI now, and that's pretty exciting. >> As you've worked with the LLMs, with the example of building the AI interviewer, what have you seen as some of the challenges that you had to overcome to make these work the way you wanted them to? >> There are lots, if we're thinking about technical challenges.
It starts from what is the leading-edge LLM for this particular task that we're trying to solve, and can we actually get it to do that work, and what is the right combination. Can it absorb enough context? In this case, the AI interviewer needs to read a whole deep research report of maybe dozens of pages in some cases to really get expertise on any topic. Can it hold that in its memory, and also take in the speech of the expert it's interviewing, and then, when it hears something unexpected, can it do quick research, adjust what's in its context, pivot, and ask a new, better question? That took a lot of iterating. You've got to test different models. You've got to test different configurations. We built our system as a kind of LLM-agnostic orchestrator that works with just about every one of the leading-edge models in the market. And we're deploying them in different ways. Depending on the task at hand, you'll end up using this model now, and maybe another model in a few weeks when a new breakthrough happens. So the team has had to build these capabilities of tracking and testing and very effectively staying up to speed on every new breakthrough to understand what the right combination of these things is. There isn't one perfect model out there, as some models have been optimized for given tasks better than others. >> What's the process for figuring that out, which model is the right one for the right task? >> It's a very systematic, engineering-led process where you're really just testing all of them all the time. We have teams of people evaluating outputs. We have LLMs evaluating outputs. So there's a system that keeps running standard checks, and when you have a new model, you can compare it to the baseline and see, okay, what does this new black box deliver? You can't evaluate it until you've tested it. You can look at the external test results, and there are some of these metrics, some real metrics, some vanity metrics.
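The "LLM-agnostic orchestrator" Jack describes a few sentences back, a layer that can re-point a task to a different model when a new breakthrough wins the benchmark, can be sketched roughly as a registry plus a routing table. This is a minimal sketch under stated assumptions: the model names, tasks, and class shape are all made up for illustration, not AlphaSense's architecture.

```python
# Toy sketch of an LLM-agnostic orchestrator: interchangeable model
# backends behind a per-task routing table, so swapping models needs no
# changes at the call sites.
from typing import Callable, Dict

class Orchestrator:
    def __init__(self):
        self.backends: Dict[str, Callable[[str], str]] = {}
        self.routes: Dict[str, str] = {}  # task name -> backend name

    def register(self, name: str, fn: Callable[[str], str]):
        self.backends[name] = fn

    def route(self, task: str, backend: str):
        # Re-pointing a task is the "pivot when a breakthrough happens" step.
        self.routes[task] = backend

    def run(self, task: str, prompt: str) -> str:
        return self.backends[self.routes[task]](prompt)

orch = Orchestrator()
orch.register("model-a", lambda p: f"[model-a] {p}")  # stand-in backends
orch.register("model-b", lambda p: f"[model-b] {p}")
orch.route("summarize", "model-a")
print(orch.run("summarize", "Q2 earnings call"))
orch.route("summarize", "model-b")  # swap backends; callers are unchanged
print(orch.run("summarize", "Q2 earnings call"))
```

The point of the indirection is exactly the flexibility described in the interview: the task keeps its name and callers, while the model behind it can change week to week.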
It's hard to really draw conclusions from what you read. You have to just deploy them and test them and see what the effective performance is. We try to test numerical metrics and see how often, or what percentage of the time, human analysts agree with the output. But ultimately, there's also a style test. Do I like the way that this LLM speaks? Is it too verbose? Is it speaking specifically in a finance-type language? We've ended up even giving financial services, the buy side, a different model and the corporate world a different model. We've learned that there are different stylistic preferences as to how you want the LLM to be speaking to you. So, there are lots of aspects that you have to test, and some of them are qualitative in that way. >> What's been most surprising to you in this process of testing the LLMs? >> What's been hard is knowing where the cutting edge is, knowing what each model is capable of in practice. Even the evaluation system itself, you can't just design it, build it, and call it ready. No, it's a very iterative process. You have to go and keep evolving it, seeing how much human evaluation you can do and how much LLMs can actually effectively evaluate each other, and that changes as their capabilities evolve. It's a process that keeps you on your toes. You can't claim to master it at any time, or if you feel like you've mastered it, some new breakthrough happens and now you have to rechallenge your assumptions again. We've learned that we just have to have teams that keep on doing this, and we have to be ready to pivot when something changes. That readiness to pivot, and the flexibility, is perhaps one of the bigger learnings from this: you have to be on top of this all the time and invest the time. Including myself as the CEO, I've got to understand what's going on. These are critical choices for our product, and the product is what adds value to users.
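The evaluation loop described above, run a candidate model and the baseline over the same test cases and measure how often judges agree with each output, can be sketched as follows. The "judge" here is a stand-in exact-match check and the models are trivial lambdas; in the setup the interview describes, judging is itself done by humans or by other LLMs, so treat every name below as illustrative.

```python
# Toy sketch of a model-evaluation harness: compare a candidate model's
# agreement rate against a baseline on a fixed set of labeled cases.

def agreement_rate(model, cases) -> float:
    """Fraction of cases where the model output matches the expected label."""
    hits = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return hits / len(cases)

# Stand-in labeled test cases (the "standard checks" that keep running).
cases = [
    ("margins expanded", "positive"),
    ("guidance was cut", "negative"),
    ("revenue was flat", "neutral"),
]

# Stand-in models: the baseline lacks a neutral class, the candidate has one.
baseline = lambda p: "positive" if "expanded" in p else "negative"
candidate = lambda p: ("positive" if "expanded" in p
                       else "neutral" if "flat" in p else "negative")

base_score = agreement_rate(baseline, cases)
cand_score = agreement_rate(candidate, cases)
print(f"baseline={base_score:.2f} candidate={cand_score:.2f}")
if cand_score > base_score:
    print("candidate beats baseline on this check suite")
```

Keeping the check suite fixed is what makes a new "black box" comparable to the baseline; the suite itself then has to evolve as model capabilities do, which is the iterative part Jack emphasizes.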
I feel like I need to be reading a lot and to have tentacles through our team, making sure I'm up to speed, and that applies to everybody that's part of that chain. It feels like an around-the-clock, very intense process of staying up to speed with all the development. >> So the business started mostly with hedge funds, and you did mention there's a little different use case for corporations. How has it evolved from that initial user base to who your customer base is today? >> We always had the idea that one day the corporate world should like this too. They need information. They're making big decisions. They're deploying capital. They're acquiring companies. They're making investments and launching new products, entering new markets. They should need the same information. And that was the thesis. We learned that this thesis played out well first in investor relations. At first, they were hearing from hedge funds that they were using this new great tool called AlphaSense. We started to really spread like wildfire through word of mouth in the investor relations community across public companies, and then started to map out all the other big pockets of knowledge workers in those companies, from corporate strategy to competitive intelligence to corporate development to strategic marketing, product management, even engineering these days, and then going back to the CFO's office and across the C-suite. It's really gotten across dozens of personas within corporations, where they are just trying to be as much on top of the information as the investment world is, but more narrowly focused on their industry or the different forces affecting where that industry is going.
That's today a pretty broad, diverse landscape of corporate users, and of course then everybody else in the knowledge worker universe, from consultancies to bankers. They're similarly now users, dozens of different knowledge worker personas, but everybody's doing the same kind of work. Take an M&A deal as an example. When that deal breaks, there's probably usage in all of these areas: from private equity bidders to corporate bidders analyzing it, to hedge funds maybe trading on rumors, to the investment bankers that worked on the deal, to consultants that worked on the deal, and so forth. So dozens or hundreds of people have been using AlphaSense from all the different angles around that same deal, because it impacts all of them. It's become kind of an ecosystem that uses information about industries, markets, and companies, and we're able to serve it from all these directions. >> I'm really curious about the unit costs of the business, in the sense that some of the data sources you talk about are probably data sources you have to buy, and then on the other side you're curating and providing this information that's essential for huge decisions. How do you figure out what someone's willing to pay for that? >> The biggest variable today in that is how much intelligence you apply and what that costs. The more steady pieces of that apply to any data business. But the raw LLM intelligence and chain-of-thought reasoning, if you take that to the maximum of where it's heading, you're going to have a system running 24/7, running processes both initiated by users and initiated by APIs that clients are running. Clients are building agents for their own internal workflows, triggering these generative search and deep research reports through those APIs, and then the system is initiating work to generate more information.
Where that is heading is recognizing what information gaps exist, going through our expert network, finding experts, launching calls, and bringing back new information into the system. So there's this sort of intelligence factory that processes millions of tokens for every decision that happens in these financial and business workflows. We are having to look at what benefits the overall system. What can we amortize across all customers? Where is the token usage so high for an individual client that we need a pricing model where they're leveraging the API in massive volume and getting a lot of value? That is a very evolving world right now, where you have to do whole new kinds of math and estimation on what that intelligence is going to cost and how that cost is evolving over time. So it's hard to give a very precise answer, but I'd say that's another very quickly evolving game in the intelligence factory business that we are. >> So as you look at that today, is there a minimum fixed cost and then a variable usage cost above that to the customer? >> The vast majority of our business is still fixed in terms of how we price. We try to make it predictable and easy. But when clients do want to start to experiment themselves, that's when you have to give them more flexible models where they can leverage the platform in different ways. It has traditionally been a user-based model. We've started to go much more into enterprise pricing, where clients are adopting the system wall to wall. They're incorporating their internal content. Senior teams get involved.
They suddenly say, okay, how do we make this something that our whole organization can use? Let's forget about this per-user model. Let's think of us as one big client. It has meant that our pricing models and packaging models have evolved in the same way. >> You've done a series of acquisitions along the way. How does that fit into the business strategy for AlphaSense? >> There is no specific acquisition focus. We don't feel like we need to do acquisitions. Everything is somewhat opportunistic as to what is out there. Now, if there was another Tegus-like asset, I'd be very excited. These are few and far between. This one was quite perfect, it made a lot of sense, and it was a huge bet for us. But we had a lot of confidence that this was the right thing to do. We made a much smaller investment at first and then were able to make this very big investment, almost a billion dollars, that we deployed in that Tegus deal. But we felt that as this sort of intelligence factory, the more content and proprietary insight that you feed it, the more value you're able to deliver to customers. So it felt that if we have a great user interface, and it's the better interface to deliver this qualitative content, then acquiring and plugging that content and data into the system is going to make it so much smarter that the combined value proposition is one plus one equals five, and that's what we have seen. So that is the thesis. I'd say we're forced to be opportunistic, because these kinds of fantastic content assets aren't available all around. They are pretty rare. >> As you look at the business today and where you're headed, what are the most important metrics that you're reviewing to gauge your progress? >> There are the standard financial SaaS company metrics, where recurring revenue is probably the primary one that you stare at and have big targets for.
And beyond that, we're fortunate that we've been able to build a business with great SaaS metrics that we feel both private and ultimately public market investors will really appreciate. So it allows us to use that spectrum of metrics and make sure that we're gradually tuning and turning the right knobs to improve performance. But there isn't anything where we feel, oh, we've got to really refocus on this. We have good gross margins, unlike some companies leveraging language models that have big challenges there. We are able to deliver enough value that we're able to continue investing a lot in the intelligence and the token capacity, and still absorb that cost and not worry about our overall metrics, beyond revenue and growth, which is primary. To me, it's all about growth. We're accelerating our growth every quarter, and that's pretty rare. It's great to see in this environment, where language models have both created a lot more client demand and created a lot more value in our product. So there's this great tailwind and momentum. If I was going to crystallize it down to one thing, it's about growth, because we're trying to build something really big and getting there faster is the primary objective. >> How do you go about leading a team to achieve the kind of growth that you're hoping to? >> One big change that I felt the need to make is to get even closer to the product. As a founder, having built the original product and having had the vision for what you wanted to build for the market, you're naturally going to be close to the product. As a result of language models, and also acquisitions and doing a lot of things at once across the acquired companies, the companies they had acquired, content assets, data assets, technology, and so many different pockets,
I've put myself much closer to all of that and effectively taken the role of the ultimate head of product, taking on many more direct reports to be closer to what's happening and closer to every decision, to make sure that I can unblock obstacles for people. I can give them aggressive, ambitious goals and then figure out, okay, what's preventing you from achieving it? Well, let's go and take that out of the way. If you look at our size of organization, over 2,000 people, you get to the point where people assume that things that are blocking them are not changeable. I think I have a unique ability to go and figure that out when I see that there are priority projects that need to move fast and I hear that they're not moving fast for some reason. I feel like everything can be moved, everything can be changed, but it's hard to do that unless you're really close to all those critical product decisions. I put myself right in the mix and the flow of where it's happening, and I feel that that's the one critical thing. It certainly keeps my Slack channels busy. I feel like it's still the right thing to do to drive that growth. >> You started this business hoping to solve this frustration that you had with the efficiency of gathering all the financial information for the decisions you need to make. So much has evolved over the last 17 years that you've been on the front foot of increasing that efficiency. What excites you about what will happen for the next 17 years? >> It's certainly hard to see that far, but it's a very exciting time given what this technology revolution with AI and language models has enabled. It's an incredible time for creative building. As a startup founder and CEO, and I'm sure many in the same position would share this, the most fun thing to do is just to wake up every morning and think about what can I create next?
And there's just this incredible sandbox of what we can do to create the next-generation intelligent machine that the whole financial and business world can use to be smarter, make smarter decisions when you're making big bets, and be more agile, with better confidence and better data. So it's a very fun place to be, building all that. As I see it, not that far forward but into the next months and years, it's about building this always-on machine that is working for all of the investment firms, public and private markets, and banks and consultancies and corporations across every industry. Think about every user having kind of a thousand analysts in their pocket; that's one way to think about AlphaSense. Well, what if all our clients had that available through our system? That means our system is churning through this machine intelligence day and night, every minute, running things on their behalf when they ask, and even when they don't ask. If you're a buy-side firm, you've got a portfolio. Well, our system can be monitoring that portfolio and analyzing what's happening with every one of those companies, every new piece of information that comes out. It can proactively go figure out what's the meaning of that, what other impacts does it have in other industries, what are the read-throughs. All of this can be automated. The system can be sort of thinking 24/7 and then informing our clients more proactively: what should you know right now? That's a really exciting thing to be building. To me, that's the next vision. We were able to build the oracle using LLMs; now it's making the system really run around the clock and do this work proactively. And when it doesn't find the information, it can recognize that autonomously and go and generate expert calls, have the AI do more expert calls, bring back the information, and now tell you what this new thing means when the next DeepSeek-like event happens and nobody knows what it means.
Our system can figure out that this is a problem. Let's find the right experts in our network. Let's go and bring back the information, and hours later it can provide really high-confidence information on this thing, so people don't have to be scrambling around anymore. >> Fantastic. Well, Jack, I want to make sure I get a chance to ask you a couple fun closing questions. What was your first paid job, and what did you learn from it? >> My very first paid jobs didn't pay much, but I was delivering newspapers on a bicycle on cold, icy Finnish roads as a teenager. I was selling magazines door to door. I remember being one summer at a steel plant, cleaning furnaces in a rubber suit with a gas mask and a rock drill. From all of these things I learned the value of trying to do your best in whatever you're doing and working hard, and something good will come out of it. I still remember, or maybe now can remember fondly, those experiences. I still feel like that's a valuable thing to have, to keep trying to do your best job, and maybe today I can do more fun work when it's more creative. That is one thing that I still think about. >> What was the best advice you ever received? >> One thing that I was told, and I dismissed it at first because it sounded fluffy: follow your passion. If you're going to be an entrepreneur and launch a company, do something you're passionate about. I've actually learned that that was incredibly good advice. Having spent a decade and a half on this company, if that wasn't true, if I was building some, I don't know, some accounting software, I can't imagine being equally passionate about it. But I'm doing something that I wake up excited about every day, and I have this personal passion to go and solve this problem that I still feel in my bones thinking back to my analyst days, and I feel I can go and help the whole industry do these things so much better and more efficiently that it keeps me going.
So following the passion actually has turned out to be really good advice, not at all fluffy as I thought at first. >> What brings you the greatest joy? >> This probably applies to a lot of entrepreneurs: just being able to think about what doesn't exist yet, what would help a lot of people, what would be able to scale and be big, and how can I have impact doing it. So I'd say it's that creativity and building. >> What life lesson have you learned that you wish you knew a lot earlier in life? >> One thing that I grew up with was learning the value of self-reliance, from not just my parents but even the culture I grew up in. You had to do your best and work really hard and believe that you can crush through any obstacle, and with perseverance you can get there. I think I've overindexed on that and pushed too far into the idea of self-reliance. As an entrepreneur, you tend to believe that, hey, I can just go and do this and it's all possible. What I have learned is that there's so much you can do by working with others that surpasses what you could do on your own, however hard you work and however persistent you are. Humans are the primary species here because we are able to collaborate. Tapping into that and asking for help doesn't come easily to me. That's been one learning I wish I knew earlier, but I think I've been able to deploy it a little bit more in later times, and it's been a big thing for me. >> All right, Jack, last one. If the next five years are a chapter in your life, what's that chapter about? >> I think it's about playing in this sandbox. And this might sound a little geeky, like it's all about technology and what it can enable, but it's such a Cambrian explosion of opportunity that these language models and generative AI have given us. It feels like we can reinvent everything. I come up with business ideas every day. If I wasn't doing this, I'd be pursuing one of those things that come to mind every day.
So, it feels like that's a great personal mission that I can be excited about. And of course, I'm building what feels like the exact right sandbox for me, which is AlphaSense. So, just deploying that creativity and enjoying what I really find that techy, geeky enjoyment in. That's what my next chapter will be about, for sure. >> Well, Jack, thanks so much for sharing this incredible application of what you've done with AlphaSense. >> Thank you. >> Thanks for listening to the show. To learn more, hop on our website at capitalallocators.com, where you can join our mailing list, access past shows, learn about our gatherings, and sign up for premium content, including podcast transcripts, my investment portfolio, and a lot more. Have a good one and see you next time. All opinions expressed by Ted and podcast guests are solely their own opinions and do not reflect the opinion of Capital Allocators or their firms. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Capital Allocators or podcast guests may maintain positions in securities discussed on this podcast.
The Cleveland Browns are famous not for winning, but for testing your character year after year, heartbreak after heartbreak. And yes, I made her a Browns fan anyway. Some might call that cruel. I call it parenting. That's the thing about young minds. They believe what you repeat. So, just like forcing your kids to cheer for your favorite football teams, now's the time to plant another seed. Share the Capital Allocators podcast with friends, family, and colleagues in their formative years. Because if you get to them early enough, they'll be lifelong fans, too. Thanks so much for spreading the word. Please enjoy my conversation with Jack Kokko. >> Jack, thanks so much for joining me. >> Thanks for having me. >> I'd love you to take me back to your background that led to this entrepreneurial journey. >> If I go all the way back to where I grew up, that was in Finland. I studied electrical engineering and thought I was going to be building mobile phones, and during that time I really got enamored by finance. I studied both in parallel, and then one summer, late in those studies, I spent a few months working at a startup in Brussels called East, where the company we were building was going to be the European version of NASDAQ. It never really took off, but I was talking to US investment banks and trying to raise money for that business, and I also made some connections to firms in London, where I got an interview at Morgan Stanley and then ended up working as an analyst, as my first job out of college, in London, first during the telecom boom. Then I convinced them to send me where I wanted to be. My dreamland was Silicon Valley, and I ended up in San Francisco working on tech deals during the dot-com boom. I spent the first weekend working on a billion-dollar deal, and things were moving so much faster than in Europe.
It was very exciting at the time, but certainly also a stressful job for someone who wanted to do a good job, and the tools just weren't up for it. That ultimately planted the seeds in my mind for what I'd one day want to go and build with AlphaSense. >> What were your frustrations at the time, doing the work you were trying to do without the tools needed? >> I had a pretty strong work ethic. I'd work the usual all-nighters trying to do a really good job on the analysis, but the pace was really fast and you'd end up not having enough time to do a really good job. I remember some client boardrooms where I'm sweating and barely awake, but also afraid of what did I miss and what am I going to be called on by the CFO or CEO of that company, because I just missed something in my analysis. And that feeling has stuck with me, frankly, still every day when I walk into a boardroom. I have flashes from those situations. That really was because of the lack of technology to help an analyst who needed to consume so much information, trying to catch up on a new industry you didn't know, new companies you didn't know, and the sort of cross-sectional information across different industries that you really should have known to be smarter with your analysis and your viewpoints, and to be able to talk to some really experienced business professionals who were about to bet billions on a deal. Certainly we had the data terminals that every financial professional still has today. A big part of the frustration was that you could go and manually look for data, but it was very hard to consume it at the scale and speed that was needed. And back in those days you already started to have technology for consumers, where I had Google to search the internet and so forth, and we could get our hands on information very efficiently, but this wasn't available for professionals that desperately needed it every day and every minute on the job.
That was a big part of the frustration, and it has stuck with me. I know a lot of people still today are doing their jobs with fairly manual tools. >> What was the vision for what you first wanted to build? >> In the early days, we even called it, I forget if it was us or clients first calling it this, Google for finance, or Google for analysts. We really built a semantic search engine that would understand what an analyst is really looking for when they are reading a financial filing or an earnings call or a research report from Wall Street. We built that system so it understood millions of terms and linked them to core concepts. It understood that revenue is the same as topline, across the whole vocabulary of finance. So we built a system that was able to do all that and look at an earnings call happening in Japan or an SEC filing in the US, all the information around the world, all companies using different vocabulary, and reliably find every single data point about every single topic or theme that an analyst was researching. That was somewhat revolutionary at the time. It just wasn't available. People were still Ctrl+F searching for individual terms, one at a time, in PDF reports. Shocking, but that was how work was done, and we had magnified the speed and efficiency and reliability of finding that information. That was the vision we started to execute on, and the product we built got pretty quick traction in the hedge fund world, where there was this strong thirst for information and efficiency and speed to insight that I had experienced as an analyst and that we were now able to bring to that market. >> Without going too deep into the technology, what you're describing now, someone would say, "Oh yeah, I'll just go on to ChatGPT and figure that out." What was it like bringing these pieces together into a technology platform before a couple of years ago? >> Today, language models are able to be trained and inherently understand a lot of that. Not all of it, but a lot of it.
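The term-to-concept mapping described above, where "revenue" and "topline" resolve to the same idea, can be sketched in miniature. This is a toy illustration under stated assumptions, not AlphaSense's implementation; the dictionary and function names are invented for the example.

```python
# Hypothetical sketch: map surface financial vocabulary to canonical
# concepts so that a query for one synonym matches documents using another.

CONCEPT_MAP = {
    "revenue": "revenue",
    "topline": "revenue",
    "top-line": "revenue",
    "sales": "revenue",
    "capex": "capital_expenditure",
    "capital expenditures": "capital_expenditure",
}

def normalize(term: str) -> str:
    """Return the canonical concept for a surface term (identity if unknown)."""
    key = term.lower().strip()
    return CONCEPT_MAP.get(key, key)

def matches(query_term: str, document_term: str) -> bool:
    """Two terms match if they normalize to the same core concept."""
    return normalize(query_term) == normalize(document_term)

print(matches("Revenue", "topline"))  # both normalize to "revenue"
```

A real system would learn these links from millions of terms rather than a hand-built table, but the retrieval principle is the same: match on concepts, not on exact strings.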
There's still a lot of value in real specialization and refinement, in that industry-level deep-dive understanding. But back then the tools were more basic. We did have an AI-first vision from day one; the AI of that day was just a lot simpler. We used AI and trained models to classify information. They would be able to do things like understand the sentiment of a given statement in a paragraph of text in a document and say, is this good or bad? That was something we were really proud of. It took years to build. We had a team of dozens of people in India tagging very large volumes of those statements to be able to train those models, to train them to do that one classification task. But it does it so well that even today's LLMs still struggle to do that kind of thing at that deep industry-level understanding. And there were many other classification tasks: taking in feeds of documents and understanding is this a broker research report, is it an initiation, is it a change in price target; recognizing companies in text; being able to do all these things that would help users slice and dice information efficiently and ask questions that would cut through millions of documents and get to just the right insights, the right documents, and the right paragraphs and commentary that you'd be able to very efficiently go through. So it was much more work to do these individual things that now, for a lot of it, come with large language models, which are trained as really universally capable models where back then they were very targeted, narrow models. >> When you dive into how you go about implementing this, at that first layer of data sources, what's the core set of information that you've wanted to train on, and then where are there alternative data sources you've accumulated over the years?
We started from the information that was hiding in plain sight, hiding because there was so much of it that it was hard to get to the insights, even though they were available to every professional in the market. So all the SEC filings, global filings from every country with a stock exchange, earnings call transcripts, conference presentations at every investment bank, and then of course press releases and news, and then broker research, getting Wall Street research on a platform where you could now compare what the company is saying with what an analyst is saying about any topic, any company. But that was still information that people could get access to on other platforms. One big step for us was acquiring a company called Stream, where they had built an expert transcript library that allowed us to start scaling and generating high-value proprietary content that you couldn't get anywhere else. And we could really point that system to generate information on specific companies: what are their customers saying, what are their suppliers, partners, former employees, and executives saying about things that really matter. Before this, you had to rely on what the company was saying, what they were putting out there in their press releases or saying in public forums and filings, but you really had to go talk to management to question that or get alternative points of view. And the expert interviews started to add this invaluable additional perspective. We started to double down on that. That of course led to our much bigger acquisition of Tegus, which was the market leader in that space. They'd built by far the largest expert transcript library and a really incredible operation, working with a thousand buy-side firms out there who were doing calls on Tegus, across the public equity buy side as well as private equity and venture capital. And you had public company content and private company content, and that started to expand the diversity of what we were able to offer to the market.
There wasn't much qualitative research on private companies out there, but these expert interviews started to really pull that in. And today we feel like we've got the richest source of insights on private companies. We do keep on adding content all the time, but that is where the core of the focus really is, because we see so much unique, proprietary, incremental value that we can add. >> So what you described has some building blocks: quantitative, publicly available information, then company reporting information, and then this huge library of what you called expert opinions. If you're in that use case of a hedge fund analyst, how would you describe to someone who hasn't used the system what it is they're seeing, so that they can pull out whatever information they want? >> Today the easiest comparison point is something like a ChatGPT, where you're asking a language model a human-language question. The system is now able to precisely understand what you're looking for from your prompt. And now it goes across all the half a billion documents in our system, is able to find the most relevant ones, and then digs deep into them and asks those same questions of every single document and sees: is the answer here? Is it here? It does that hundreds of times, thousands of times, for the most relevant documents, and provides the user a narrative-format answer that is granularly cited to those documents. A crucial difference from what people are used to with the chatbots we all use as consumers is that we focus on taking users to those underlying documents. Our users are serious professionals that care about reading and getting deep into the context that is stated in an SEC filing or research report or expert interview.
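The retrieve-then-answer flow just described, rank documents for relevance, question each relevant one, and emit an answer with per-source citations, can be sketched as a toy. Everything here (the `Document` type, the word-overlap scoring, the citation format) is an invented stand-in for illustration, not AlphaSense's actual pipeline or API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    source: str   # e.g. "SEC filing", "expert call"
    text: str

def relevance(query: str, doc: Document) -> int:
    """Crude relevance: count of query words present in the document."""
    words = set(query.lower().split())
    return sum(1 for w in doc.text.lower().split() if w.strip(".,!?") in words)

def answer_with_citations(query: str, corpus: list[Document], top_k: int = 2) -> str:
    """Rank the corpus, keep the top hits, and cite each snippet to its source."""
    ranked = sorted(corpus, key=lambda d: relevance(query, d), reverse=True)
    cited = [(d.text, f"[{d.doc_id}: {d.source}]")
             for d in ranked[:top_k] if relevance(query, d) > 0]
    return " ".join(f"{snippet} {cite}" for snippet, cite in cited)

corpus = [
    Document("D1", "SEC filing", "Revenue grew 12% year over year."),
    Document("D2", "expert call", "Customers report pricing pressure on revenue."),
    Document("D3", "news", "The CEO attended a conference."),
]
print(answer_with_citations("what happened to revenue", corpus))
```

In the real product the scoring is done by language models over half a billion documents, but the design choice survives the simplification: the answer is assembled only from retrieved sources, and every claim carries a pointer back to the document it came from.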
And we have made our user interface such that it's very easy, on the same screen, to see all those citations in the narrative-format answer but also dig deep into the underlying document, really understand the context, go deeper, and lodge additional queries from there. So you can get very strong confidence in what you're reading, because you know where it's coming from. It's coming from high-quality sources. You see exactly what company it came from, what analyst wrote it, or what expert provided an opinion, and then you judge for yourself. Not just trust it, but get the whole 360-degree view of looking at all those different sources, and multiple instances of each one on that same topic, to gather more of the mosaic of information. And we made that really efficient, so you can very quickly step through all these different breadcrumbs that lead you to that mosaic, to then be able to draw conclusions. Of course, the machine is now able to give us those conclusions in that LLM-provided narrative answer. You don't have to draw those connections yourself. You can question what the machine is giving you and ask different questions from different angles, but at least you get a narrative answer that is very intelligent, through chain-of-thought reasoning where the machine has already done so much work, probably a lot more than you would have been able to do as an analyst, and very few of us have time to spend weeks researching a project. And we often hear from clients that a single deep research report that our AI produces now gives them the same 10 pages that they spent three weeks producing with a team of people. So it just gives you now this incredible efficiency and breadth in what you can do. It allows you to do much more diligence and ultimately be more confident in your decision-making, and still be a lot faster. So it addresses the quality and the speed and the confidence all at once.
>> As you talk to your hedge fund clients and give that example of a 10-page report, faster and deeper than what would have taken a lot longer: how do they think about the alpha component? Meaning, in the past, you would do all that hard work and other people weren't going to do it. Now, all they have to do is become a client of yours and they can get that work done. What have you heard from your clients about where they can derive an edge on the market, and where does the information that you've created become some type of table stakes? >> We are raising the bar for sure, as does any technology that is introduced into the investment process. Now everybody's able to do things much more quickly and efficiently and move in a more agile way, because the research can be applied to so much more that in the past you just had to ignore. It becomes now a question of who's asking the right questions, how are you asking them, from what angles, and how do you look at cross-industry impacts and read-throughs from this company to that company or this industry to that industry. It's also about technology adopters, early and late: how are you able to adapt to these new solutions, and how well can you deploy them? People spend a lot of time trying to be great prompters: how do I write a prompt that the system really understands well? We see customers asking us how we can really help them, how we can raise everybody to the same level so they're very good at asking the machine. And this is even hard for us humans: you ask your colleague a question and you have to think, did I give a good enough prompt that I can trust the answer? The same applies to these machine models. But this machine automation really just does the work that nobody wanted to do. The work becomes more interesting, and it's easier ultimately to do the value-adding work when the machine does the heavy lifting and collects the information for you.
>> What are some of the initial responses you give to someone who's asking you how to create a better prompt in AlphaSense? >> I'd go back to: how do you ask an analyst that's working for you? How do you make sure that you convey all the information that the analyst needs to know, so that you can be sure your request has been understood? It's the same thing with the machine. If you keep it too vague, it might misunderstand the question or make assumptions that you don't like. So if you want to be very clear about what you're looking for, it pays to be detailed in what you're asking about. But you can also be iterative. We've built our system so that you have different modes. You can ask questions in fast mode, generative search, where the average answer comes back in six seconds. There's a mode where you let the machine think longer; it takes maybe one or two minutes of chain-of-thought reasoning, runs dozens of searches, synthesizes an answer, and brings it back. And then there is deep research, where you let it work for 10 to 15 minutes and it comes back with a 10-page report, having gone much deeper. When you're doing that longer-cycle work, you're going to be more careful with your prompts. You don't want to wait and then realize that you weren't precise enough. But when you're iterating quickly on a six-second cycle, it's cheap to ask lots of questions. So if you don't think the first question got there, you can ask again. We feel it's our job to make sure we understand the user and what they're looking for. We're constantly refining this, trying to make sure that the system has the quote-unquote intuition to understand what you didn't say.
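The three tiers described above, a roughly six-second generative search, a minutes-long reasoning mode, and a 10-to-15-minute deep research run, can be modeled as a small configuration. The mode names, numeric budgets, and selection rule below paraphrase the interview and are illustrative assumptions, not a real API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryMode:
    name: str
    time_budget_seconds: int   # rough latency target from the interview
    output: str

# Tiered modes: cheap fast iteration vs. slower, deeper synthesis.
MODES = {
    "generative_search": QueryMode("generative_search", 6, "short cited answer"),
    "think_longer": QueryMode("think_longer", 120, "synthesized answer"),
    "deep_research": QueryMode("deep_research", 900, "multi-page report"),
}

def pick_mode(stakes: str) -> QueryMode:
    """Iterate cheaply on quick questions; reserve deep research for big decisions."""
    return MODES["deep_research" if stakes == "high" else "generative_search"]

print(pick_mode("high").output)
```

The trade-off Kokko describes falls out of the budgets: on a six-second cycle an imprecise prompt costs almost nothing, while a 15-minute run makes up-front prompt precision worth the effort.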
This is really the value of specialization, where we're trying to really understand our financial professional users, our knowledge professional users, in their different roles and what they're looking for, and allowing the system to take that into account to understand their prompt the way a colleague would in that same seat, same industry, same company. >> How have the last couple of years, starting with ChatGPT, changed the trajectory of what you'd been doing for a long time before they were around? >> It's been an incredible breakthrough from a vision perspective. I remember telling our team five, six years ago how our system was going to be this oracle that you can ask any question in human language, and it'll understand your question, go do the research, and come back with a great answer. Frankly, I had no idea how and when we were going to get there. When large language models actually started to be able to do that, it was incredible. It felt like a Cambrian explosion of opportunity in what we could build, now that the system can understand what's on our users' minds so much better than when you had to express it in keywords. It's really hard to deduce what someone is looking for when they put in a couple of keywords. But when they put in a full sentence, a full prompt, you have a really strong chance of getting to exactly what they were looking for. So it was an incredible opportunity, and of course it has allowed us to do a lot since then; it would be hard to cover all of it. Everything we've done in the past, we've almost redone with language models. Although, as I mentioned, the sentiment piece is the exception: the incredible sentiment model we built with the prior generation of technology is still running and doing a job that's better than what we see LLMs doing. But mostly LLMs can now raise the game on everything, every part of what we're doing, opening up new opportunities.
One thing that we built and released just weeks ago was a new AI interviewer. Our Tegus expert transcript library is created by buy-side firms doing interviews with experts: an analyst at a hedge fund, VC firm, or private equity firm has to get on the phone and talk to the expert for half an hour, 45 minutes. And there's a lot of friction in that process. But what we now have is an AI interviewer that does a pretty good job of that. We're now able to create this new system that is much more scalable, where we can point it in any direction and the AI interviewer will hop on the phone anytime the expert is available. It also helps us generate new content sets where a buy-side analyst wouldn't be available to do the repeated work. For example, we're doing channel checks covering dozens of industries, talking to the same expert every month to understand what is going on with pricing, demand-supply dynamics, mix, market share shifts, and so forth in a granular industry, getting a pulse on that industry every single month from the same person, and doing that across dozens of industries to get a pulse on the whole economy. It would be hard to get a buy-side analyst to commit to doing that every single month as a repeated process, but AI will do whatever we ask it to do. This is a very exciting new product that we were able to launch on the back of AI being able to do something that just wasn't possible even months earlier and now suddenly is. There were signs of this being feasible in the past, but now it's able to have a human-language conversation with a true expert with deep technical and market expertise, someone who wouldn't talk to an AI unless it was really able to hold a conversation, respond well, and ask smart questions.
I was quite shocked by what I was seeing in the first interviews, where it's talking to an expert in the high-bandwidth memory market and memory chips, able to discuss the various technologies and pricing dynamics and have a very fluid conversation. That is an incredible advancement that we think is game-changing. As soon as we announced it, we started hearing from clients: hey, can we offload the calls that we like to do ourselves to this AI? There are many people that perhaps don't want to be recorded, and they would love to hand over those calls and say, "Hey, here's what I'm trying to find out. Can you recruit the right experts, have your AI do the work, and just send me compliance transcripts back?" That's another big improvement to the industry's workflows, where nobody wants to be spending hours and hours preparing for expert calls. It's a dreaded job, and suddenly AI can do a lot of it. I think the industry will be very happy about that; we can still go and get the same insights, and people can now focus on what they do with those insights. Now, of course, the vast majority of this is still happening by people getting on the phone. The ones who really care about what insights they need to get will do that themselves. But there's a portion of those calls that get offloaded to AI now, and that's pretty exciting. >> As you've worked with the LLMs, take the example of building the AI interviewer: what have you seen as some of the challenges you had to overcome to make these work the way you wanted them to? >> There are lots, if we're thinking about technical challenges.
It starts from: what is the leading-edge LLM for this particular task we're trying to solve, and can we actually get it to do that work? What is the right combination? Can it absorb enough context? In this case, the AI interviewer needs to read a whole deep research report, maybe dozens of pages in some cases, to really gain expertise on a topic. Can it hold that in its memory and also take in the speech of the expert it's interviewing? And when it hears something unexpected, can it do quick research, adjust what's in its context, pivot, and ask a new, better question? That took a lot of iterating. You've got to test different models. You've got to test different configurations. We built our system as an LLM-agnostic orchestrator that works with just about every one of the leading-edge models in the market. And we're deploying them in different ways. Depending on the task at hand, you'll end up using this model now and maybe another model in a few weeks when a new breakthrough happens. So the team has had to build these capabilities of tracking and testing, very effectively staying up to speed on every new breakthrough, to understand what the right combination of these things is. There isn't one perfect model out there, as some models have been optimized for given tasks better than others. >> What's the process for figuring that out, which model is the right one for the right task? >> It's a very systematic, engineering-led process where you're really just testing all of them, all the time. We have teams of people evaluating outputs. We have LLMs evaluating outputs. So there's a system that keeps running standard checks, and when you have a new model, you can compare it to the baseline and see, okay, what does this new black box deliver? You can't evaluate it until you've tested it. You can look at the external test results, and there are some of these metrics, some real metrics, some vanity metrics.
It's hard to really draw conclusions from what you read. You have to just deploy them, test them, and see what the effective performance is. You try numerical metrics and see how often, or what percentage of the time, human analysts agree with the output. But ultimately, there's also a style test. Do I like the way this LLM speaks? Is it too verbose? Is it speaking in a finance-specific language? We've ended up even giving financial services, the buy side, one model and the corporate world a different model. We've learned that there are different stylistic preferences as to how you want the LLM to speak to you. So there are lots of aspects you have to test, and some of them are qualitative in that way. >> What's been most surprising to you in this process of testing the LLMs? >> What's been hard is knowing where the cutting edge is, knowing what each model is capable of in practice. Even the system itself: you can't just design it, build it, and call it ready. No, it's a very iterative process. You have to keep evolving it, seeing how much human evaluation you can do, how much LLMs can actually effectively evaluate each other, and that changes as their capabilities evolve. It's a process that keeps you on your toes. You can't claim to master it at any time, or if you feel like you've mastered it, some new breakthrough happens and now you have to re-challenge your assumptions again. We've learned that we just have to have teams that keep on doing this, and we have to be ready to pivot when something changes. That readiness to pivot, that flexibility, is perhaps one of the bigger learnings from this: you have to be on top of it all the time and invest the time. That includes me as the CEO. I've got to understand what's going on. These are critical choices for our product, and the product is what adds value to users.
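The testing loop described across these answers (run standard checks continuously, score each model's outputs by how often human or LLM evaluators accept them, compare any new model against the incumbent baseline, and route each task to the current winner) can be sketched roughly as follows. The model names, verdicts, and structure are invented for illustration; this is not AlphaSense's actual implementation:

```python
# Hypothetical sketch of an LLM-agnostic orchestrator with baseline comparison.
# Model names and evaluation verdicts below are made up for the example.

def agreement_rate(verdicts: list[bool]) -> float:
    """Fraction of sampled outputs that human (or LLM) evaluators accepted."""
    return sum(verdicts) / len(verdicts) if verdicts else 0.0

class ModelRouter:
    def __init__(self) -> None:
        # task -> {model name: agreement rate}, refreshed as evaluations rerun
        self.scores: dict[str, dict[str, float]] = {}

    def record(self, task: str, model: str, verdicts: list[bool]) -> None:
        self.scores.setdefault(task, {})[model] = agreement_rate(verdicts)

    def pick(self, task: str) -> str:
        """Route a task to the model with the best current agreement rate."""
        candidates = self.scores[task]
        return max(candidates, key=candidates.get)

router = ModelRouter()
router.record("expert_interview", "model-a", [True] * 8 + [False] * 2)  # 0.80 baseline
router.record("expert_interview", "model-b", [True] * 9 + [False] * 1)  # 0.90 new release
```

With those invented verdicts, `router.pick("expert_interview")` would route to `model-b`. The qualitative style test Jack mentions doesn't reduce to a number like this, which is why the teams of human evaluators stay in the loop.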
I feel like I need to be reading a lot and trying to have tentacles through our team to make sure I'm up to speed, and that applies to everybody that's part of that chain. It feels like an around-the-clock, very intense process of staying current with all the development. >> So the business started mostly with hedge funds, though you did mention there's a somewhat different use case for corporations. How has it evolved from that initial user base to your customer base today? >> We always had the idea that one day the corporate world should like this too. They need information. They're making big decisions. They're deploying capital. They're acquiring companies, making investments, launching new products, entering new markets. They should need the same information. That was the thesis. We learned that this thesis played out well first in investor relations. They were hearing from hedge funds that the funds were using this great new tool called AlphaSense. We started to spread like wildfire through word of mouth in the investor relations community across public companies, and then started to map out all the other big pockets of knowledge workers in those companies: from corporate strategy to competitive intelligence to corporate development to strategic marketing, product management, even engineering these days, and then back to the CFO's office and across the C-suite. It's really reached dozens of personas within corporations, where they are just trying to be as much on top of the information as the investment world is, but more narrowly focused on their industry or the different forces affecting where that industry is going.
Today that's a pretty broad, diverse landscape of corporate users, and of course everybody else in the knowledge worker universe, from consultancies to bankers, are similarly now users: dozens of different knowledge worker personas, but everybody's doing the same kind of work. Take an M&A deal as an example. When that deal breaks, there's probably usage in all of these areas, from private equity bidders to corporate bidders, to hedge funds maybe trading on rumors, to the investment bankers that worked on the deal, to consultants that worked on the deal, and so forth. So dozens or hundreds of people have been using AlphaSense from all the different angles around that same deal, because it impacts all of them. It's become kind of an ecosystem of information about industries, markets, and companies, and we're able to serve it from all these directions. >> I'm really curious about the unit costs of the business, in the sense that some of the data sources you talk about are probably data sources you have to buy, and then on the other side you're curating and providing information that's essential for huge decisions. How do you figure out what someone's willing to pay for that? >> The biggest variable today is how much intelligence you apply and what that costs. The steadier pieces of that apply to any data business. But the raw LLM intelligence and chain-of-thought reasoning: if you take that to the maximum of where it's heading, you're going to have a system running 24/7, running processes initiated both by users and by APIs that clients are running. Clients are building agents for their own internal workflows, triggering these generative search and deep research reports through those APIs, and then the system is initiating work to generate more information.
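The "intelligence factory" framing, millions of tokens processed per decision, can be made concrete with back-of-envelope math on what a single long-running job costs in raw model usage. Everything here is an assumption: the token counts and per-million-token prices are invented for illustration and are not AlphaSense's or any provider's actual rates:

```python
# Back-of-envelope sketch of raw LLM cost per research run.
# All figures are invented assumptions, purely for illustration.

def run_cost(input_tokens: int, output_tokens: int,
             usd_per_m_in: float, usd_per_m_out: float) -> float:
    """Raw model cost of one run, priced per million tokens in and out."""
    return (input_tokens / 1e6 * usd_per_m_in
            + output_tokens / 1e6 * usd_per_m_out)

# A hypothetical deep-research run: dozens of searched documents read in,
# a ~10-page report written out, at made-up per-million-token prices.
cost = run_cost(input_tokens=2_000_000, output_tokens=50_000,
                usd_per_m_in=3.0, usd_per_m_out=15.0)
```

Under these invented numbers a single deep-research run lands in the single-digit dollars, which is why per-user flat pricing can absorb typical usage while heavy API-driven clients push toward usage-based models, the split Jack describes next.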
Where that is heading is recognizing what information gaps exist, going through our expert network, finding experts, launching calls, and bringing new information back into the system. So there's this sort of intelligence factory that processes millions of tokens for every decision in these financial and business workflows. We have to look at what benefits the overall system: what can we amortize across all customers, and where is the token usage so high for an individual client that we need a pricing model where they're leveraging the API at massive volume and getting a lot of value? That is a very fast-evolving world right now, where you have to do whole new kinds of math and estimation on what that intelligence is going to cost and how that cost evolves over time. So it's hard to give a very precise answer, but I'd say that's another quickly evolving game in the intelligence factory business that we are. >> So as you look at that today, is there a minimum fixed cost and then a variable usage cost above that to the customer? >> The vast majority of our business is still fixed in how we price. We try to make it predictable and easy. But when clients do want to start experimenting themselves, that's when you have to give them more flexible models where they can leverage the platform in different ways. It has traditionally been a user-based model. We've started to move much more into enterprise pricing, where clients are adopting the system wall to wall. They're incorporating their internal content. Senior teams get involved.
They suddenly say, okay, how do we make this something our whole organization can use? Let's forget about this per-user model; let's think of ourselves as one big client. It has meant that our pricing and packaging models have evolved in the same way. >> You've done a series of acquisitions along the way. How does that fit into the business strategy for AlphaSense? >> There is no specific acquisition focus. We don't feel like we need to do acquisitions; everything is somewhat opportunistic as to what is out there. Now, if there were another Tegus-like asset, I'd be very excited. Those are few and far between. This one was quite perfect, it made a lot of sense, and it was a huge bet for us, but we had a lot of confidence it was the right thing to do. We made a much smaller investment at first, and then we were able to make this very big investment, almost a billion dollars, that we deployed in the Tegus deal. We felt that for this sort of intelligence factory, the more content and proprietary insight you feed it, the more value you're able to deliver to customers. So it felt like, if we have a great user interface, the better interface to deliver this qualitative content, then acquiring that content and data and plugging it into the system would make it so much smarter that the combined value proposition is one plus one equals five. And that's what we have seen. So that is the thesis. I'd say we're forced to be opportunistic, because these kinds of fantastic content assets aren't available everywhere; they are pretty rare. >> As you look at the business today and where you're headed, what are the most important metrics you're reviewing to gauge your progress? >> There are the standard SaaS company financial metrics, where recurring revenue is probably the primary one you stare at and set big targets against.
And beyond that, we're fortunate that we've been able to build a business with great SaaS metrics that we feel both private and, ultimately, public market investors will really appreciate. So it allows us to use that spectrum of metrics and make sure we're gradually turning the right knobs to improve performance. But there isn't anything where we feel, oh, we've got to really refocus on this. We have good gross margins, unlike some companies leveraging language models that have big challenges there. We are able to deliver enough value that we can continue investing a lot in the intelligence and the token capacity and still absorb that cost without worrying about our overall metrics, beyond revenue and growth, which is primary. To me, it's all about growth. We're accelerating our growth every quarter, and that's pretty rare. It's great to see in this environment, where language models have both created a lot more client demand and created a lot more value in our product. So there's this great tailwind and momentum. If I were to crystallize it down to one thing, it's about growth, because we're trying to build something really big, and getting there faster is the primary objective. >> How do you go about leading a team to achieve the kind of growth that you're hoping for? >> One big change I felt the need to make was to get even closer to the product. As a founder, having built the original product and having had the vision for what you wanted to build for the market, you're naturally going to be close to the product. But as a result of language models, and of the acquisitions and doing many things at once across the acquired companies, the companies they in turn had acquired, content assets, data assets, technology, and so many different pockets,
I've put myself much closer to all of that and effectively taken the role of the ultimate head of product, taking on many more direct reports to be closer to what's happening and to every decision, to make sure I can unblock obstacles for people. I can give them aggressive, ambitious goals and then figure out, okay, what's preventing you from achieving them? Well, let's go and take that out of the way. If you look at an organization of our size, over 2,000 people, you get to the point where people assume that the things blocking them are not changeable. I think I have a unique ability to go and figure it out when I see priority projects that need to move fast and I hear that they're not moving fast for some reason. I feel like everything can be moved, everything can be changed, but it's hard to do that unless you're really close to all those critical product decisions. So I've put myself right in the mix and the flow of where it's happening, and I feel that's the one critical thing. It certainly keeps my Slack channels busy, but I feel it's still the right thing to do to drive that growth. >> You started this business hoping to solve the frustration you had with the efficiency of gathering all the financial information for the decisions you needed to make. So much has evolved over the last 17 years that you've been at the forefront of increasing that efficiency. What excites you about what will happen over the next 17 years? >> It's certainly hard to see that far, but it's a very exciting time, given what this technology revolution with AI and language models has enabled. It's an incredible time for creative building. As a startup founder and CEO, and I'm sure many in the same position would share this, the most fun thing to do is to wake up every morning and think about: what can I create next?
And there's just this incredible sandbox of what we can do to create the next-generation intelligence machine that the whole financial and business world can use to be smarter, make smarter decisions when making big bets, and be more agile, with better confidence and better data. So it's a very fun place to be, building all of that. As I see it, not that far forward but into the next months and years, it's about building this always-on machine that works for all of the investment firms, public and private markets, and banks and consultancies and corporations across every industry. Think of every user as having a thousand analysts in their pocket; think about AlphaSense that way. What if all our clients had that available through our system? That means our system is churning through this machine intelligence day and night, every minute, running things on their behalf when they ask, and even when they don't ask. If you're a buy-side firm, you've got a portfolio. Our system can be monitoring that portfolio and analyzing what's happening with every one of those companies, every new piece of information that comes out. It can proactively figure out what that means, what other impacts it has in other industries, what the read-throughs are. All of this can be automated. The system can be thinking 24/7 and then informing our clients proactively: what should you know right now? That's a really exciting thing to be building. To me, that's the next vision. We were able to build the oracle using LLMs; now it's about making the system run around the clock and do this work proactively. And when it doesn't find the information, it can recognize that autonomously, go generate expert calls, have the AI do more expert calls, bring back the information, and tell you what this new thing means when the next DeepSeek-like event happens and nobody knows what it means.
Our system can figure out that this is a problem: let's find the right experts in our network, let's go and bring back the information. And hours later it can provide really high-confidence information on this thing, so people don't have to be scrambling around anymore. >> Fantastic. Well, Jack, I want to make sure I get a chance to ask you a couple of fun closing questions. What was your first paid job, and what did you learn from it? >> My very first paid jobs didn't pay much, but I was delivering newspapers on a bicycle on cold, icy Finnish roads as a teenager. I was selling magazines door to door. I remember spending one summer at a steel plant, cleaning furnaces in a rubber suit with a gas mask and a rock drill. From all of these things I learned the value of trying to do your best in whatever you're doing and working hard, and something good will come out of it. I still remember, or maybe now I can remember fondly, those experiences. I still feel that's a valuable thing: to keep trying to do your best job. And maybe today I can do more fun work, since it's more creative. That is one thing I still think about. >> What was the best advice you ever received? >> One thing I was told, and I dismissed it at first because it sounded fluffy: follow your passion. If you're going to be an entrepreneur and launch a company, do something you're passionate about. I've actually learned that that was incredibly good advice. Having spent a decade and a half on this company, if that weren't true, if I were building, I don't know, some accounting software, I can't imagine being equally passionate about it. But I'm doing something that I wake up excited about every day, and I have this personal passion to go and solve this problem that I still feel in my bones, thinking back to my analyst days. I feel I can help the whole industry do these things so much better and more efficiently, and that keeps me going.
So following your passion has actually turned out to be really good advice, not at all fluffy as I thought at first. >> What brings you the greatest joy? >> This probably applies to a lot of entrepreneurs: just being able to think about what doesn't exist yet, what would help a lot of people, what could be scaled and made big, and how I can have an impact doing it. So I'd say it's that creativity and building. >> What life lesson have you learned that you wish you knew a lot earlier in life? >> One thing I grew up with was learning the value of self-reliance, not just from my parents but from the culture I grew up in. You had to do your best and work really hard and believe that you can crash through any obstacle, and that with perseverance you can get there. I think I've overindexed on that and pushed too far into the idea of self-reliance. As an entrepreneur, you tend to believe that, hey, I can just go and do this and it's all possible. What I have learned is that there's so much you can do by working with others that surpasses what you could do on your own, however hard you work and however persistent you are. Humans are the primary species here because we are able to collaborate. Tapping into that and asking for help doesn't come easily to me. That's been one learning I wish I'd had earlier, but I think I've been able to deploy it a bit more in recent times, and it's been a big thing for me. >> All right, Jack, last one. If the next five years are a chapter in your life, what's that chapter about? >> I think it's about playing in this sandbox. And this might sound a little geeky, like it's all about technology and what it can enable, but it's such a Cambrian explosion of opportunity that these language models and generative AI have given us. It feels like we can reinvent everything. I come up with business ideas every day. If I weren't doing this, I'd be pursuing one of those things that come to mind every day.
So it feels like that's a great personal mission I can be excited about. And of course, I'm building what feels like the exact right sandbox for me, which is AlphaSense. So, just deploying that creativity and enjoying what I really find that techy, geeky enjoyment in. That's what my next chapter will be about, for sure. >> Well, Jack, thanks so much for sharing this incredible application of what you've done with AlphaSense. >> Thank you. >> Thanks for listening to the show. To learn more, hop on our website at capitalallocators.com, where you can join our mailing list, access past shows, learn about our gatherings, and sign up for premium content, including podcast transcripts, my investment portfolio, and a lot more. Have a good one, and see you next time. All opinions expressed by Ted and podcast guests are solely their own opinions and do not reflect the opinion of Capital Allocators or their firms. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Capital Allocators or podcast guests may maintain positions in securities discussed on this podcast.