David Lin Report
Aug 29, 2025

Future Of Global Finance: MIT’s Game-Changing Blockchain Fix | Muriel Medard

Summary

  • Investment Theme: The podcast discusses the use of Random Linear Network Coding (RLNC) in blockchain technology to address scalability and storage challenges, enhancing the efficiency of decentralized systems.
  • Market Insights: RLNC, already utilized in 5G and satellite communications, is being adapted for blockchain to improve data propagation and reduce latency, potentially revolutionizing the Web 3 infrastructure.
  • Company Focus: Optimum, a blockchain infrastructure project, aims to optimize data transmission and reduce network congestion, offering significant improvements in speed and efficiency for platforms like Ethereum.
  • Technological Advancements: By using RLNC, Optimum seeks to overcome the blockchain trilemma of scalability, security, and decentralization, challenging the notion that these cannot be achieved simultaneously.
  • Opportunities: The introduction of flex nodes allows for greater decentralization and participation in the blockchain network, enabling smaller contributors to engage without the heavy lifting of validation.
  • Future Outlook: Optimum's advancements could reshape global finance by enabling faster trade settlements and collaboration with data-heavy industries like AI, fostering a more decentralized and efficient financial ecosystem.
  • Key Takeaways: The integration of RLNC in blockchain technology promises to enhance the performance of decentralized systems, potentially leading to new applications and reduced costs in the blockchain space.

Transcript

Our next guest is Muriel Medard, the NEC Professor of Software Science and Engineering at MIT. Professor Medard is an inventor of random linear network coding, or RLNC, a patented technology used in 5G and satellite communication systems. Her work has already been used by industry leaders like Cisco and Ericsson. And now she's bringing her expertise to the Web 3 space by using RLNC in Optimum, an infrastructure for blockchain designed to eliminate a lot of the problems surrounding scalability bottlenecks and efficient storage. How will the technology of Web 3 evolve? And ultimately, what does the future of global finance look like with faster, more efficient peer-to-peer systems? Professor Medard, welcome to the show.

>> Great to see you, David. Thank you so much for having me.

>> Professor, you are one of the inventors, or project originators, of something called random linear network coding. It's an interesting technology. Tell us what it is and how it's used in satellite communications. Tell us a little bit about your work leading up to this project before we talk about how it impacts Web 3.

>> Yeah, thanks a lot. So for the last two decades I've been leading the Network Coding and Reliable Communications Group, which is part of the Research Laboratory of Electronics at MIT. What network coding is, at a high level, is representing data not just as straight-up data but as equations which are effectively equivalent to the original data but actually have much more flexibility, and allow for greater robustness and greater efficiency in transporting or storing data. That has really been my life's work. And very early on I became quite interested in doing this in a decentralized way.
That's to say, not having a single entity that's orchestrating and creating all of these equations, but allowing a decentralized approach to creating these equations in a way that still maintains the integrity, robustness, and security of the data.

>> Okay. Maybe describe, for the layman watching, how some of these technologies could be used and applied on an industrial scale today.

>> Absolutely. So suppose that you and I are chatting right now, David, and I'm trying to send you some data. As we all know, data is just numbers; that's what makes it data rather than something else. And suppose that I have two different pieces of data, let's call them X and Y, to transmit to you. I could tell you X. I could tell you Y. But maybe there's a delay and you don't get X, you just get Y. That's not good enough, right? If I'm sending you a bank account number and I only sent you the second half of the digits, that's not going to do it. You need both halves. So you might say, "Hey, look, Muriel, it looks like I'm missing X." You waited around for a while, we have this discussion, I send it to you again, then you reconstitute the X, Y pair. But that takes a while and it's not very efficient. If I go, "You know what, I really want to make sure that David gets both X and Y, and that he gets them in a timely fashion with high reliability," I could try to send X twice. But then what if Y gets lost? Then you have two copies of X and you're still missing Y. That doesn't help you. I could try to send X twice and Y twice, but that uses a lot of resources: you sent four numbers when you really just need to send two, and you're pretty sure only one of them will get lost. So that's kind of wasteful. Instead, what I could do is send X, Y, and then send you just X + Y. So let's see what happens.
If X gets lost, like in our first example, you got Y and X + Y. That's enough to recover X: you just subtract Y from X + Y. If Y gets lost, you get X and X + Y. That's good enough to recover both numbers. And of course, if X + Y gets lost, no harm, no foul: you already had the X and the Y. So in that case, if you will, the extra data that was sent matches the lack of reliability of the channel, those losses as we usually call them, and you're not wasteful in what you've transmitted. But it also means that we don't have to engage in a whole conversation about what you got, what you didn't get, and when. It really speeds up and significantly smooths the communication between the two of us.

>> Well, RLNC has already been deployed in 5G and satellite systems. Can you highlight some of the key improvements the technology has already made in those infrastructures?

>> So it has been deployed in software-defined wide area networks, and it's also deployed in systems that operate in very challenged environments, such as underwater systems. That's the kind of system which can really benefit when you think of these losses. Now, why are we doing this in Web 3? The reason we're doing this in blockchain and Web 3 is the added complication of scalability when you have a decentralized system. The challenges there are perfect for RLNC.

>> Well, one of the other challenges for blockchain tech is the bandwidth and storage bottleneck problem that a lot of existing blockchains and layer 1s have. How does your technology solve those issues?

>> Yeah, that's a great question. Let me take one example which actually connects to our first offering, Optimum P2P, which we've nicknamed mump2p. If you think of systems like, for instance, Ethereum, but there are some other systems out there.
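The X, Y, X + Y example above can be sketched in a few lines of Python. This is an illustrative toy, not Optimum's implementation: any two of the three transmitted packets are enough to recover both original values.

```python
def encode(x, y):
    """Transmit three packets: X, Y, and the 'equation' X + Y."""
    return [("X", x), ("Y", y), ("X+Y", x + y)]

def decode(received):
    """Recover (x, y) from any two of the three transmissions."""
    got = dict(received)
    if "X" in got and "Y" in got:
        return got["X"], got["Y"]
    if "Y" in got and "X+Y" in got:      # X was lost: subtract Y from X + Y
        return got["X+Y"] - got["Y"], got["Y"]
    if "X" in got and "X+Y" in got:      # Y was lost: subtract X from X + Y
        return got["X"], got["X+Y"] - got["X"]
    raise ValueError("need at least two of the three packets")

packets = encode(42, 7)
# Simulate each possible single loss: drop packet i, decode from the rest.
for i in range(3):
    survivors = packets[:i] + packets[i + 1:]
    assert decode(survivors) == (42, 7)
```

Whichever single packet the channel drops, the receiver reconstructs both numbers with no retransmission conversation, which is exactly the point of the example.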
It's a decentralized system. You have what's called propagation: you need to have validators in the system exchange information, and that can take a long time. Think of it as taking multiple seconds, which is very different from the kinds of latencies we're used to, for instance, in Web 2, where if you told me that any sort of service is going to take seconds, I would say I don't think people are going to want to use it; that's way too slow. So what is one of the reasons it takes so long? Well, it's uncoordinated, it's decentralized. You have different nodes in the network trying to exchange information with each other, and the way they do that is what's called gossiping. And gossiping is exactly what it sounds like: I find something out, I tell some of my neighbors, and then there are all kinds of exchanges that take place. That kind of gossip can become really inefficient. Why? Well, you want every node to communicate with enough nodes so that there's actual exchange of information. If you have a network that relies on gossip to disseminate but nobody talks to anybody, nothing's going to disseminate. On the other hand, if you have too much communication, then people are wasting a lot of resources repeating information that was already known. It's like you've heard the same piece of information from four different neighbors; that was a real waste of time, time you could have spent on new information rather than old information. So that's what the current state of the art is struggling with. Now look at using equations, a little like the X + Y we just described, just more complicated equations but a similar concept.
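The redundancy problem described here, hearing the same piece of information from several neighbors, can be illustrated with a toy push-gossip simulation. The node count and fanout below are arbitrary choices for illustration, not parameters from the interview.

```python
import random

def gossip(n_nodes=1000, fanout=4, seed=0):
    """Push gossip: every informed node pushes the message to `fanout`
    random peers each round until all nodes have heard it.
    Returns (rounds, total_transmissions)."""
    rng = random.Random(seed)
    informed = {0}              # node 0 learns the message first
    sends = 0
    rounds = 0
    while len(informed) < n_nodes:
        rounds += 1
        newly = set()
        for node in informed:
            for peer in rng.sample(range(n_nodes), fanout):
                sends += 1      # every push costs bandwidth...
                if peer not in informed:
                    newly.add(peer)  # ...but only the first receipt is useful
        informed |= newly
    return rounds, sends

rounds, sends = gossip()
print(rounds, sends, sends / 1000)  # transmissions per node >> 1: duplicates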
What happens now is that rather than repeating the same thing to you, rather than telling you X, say, two, three, four times, each time I'm actually telling you a new equation. And the way we construct our equations is such that it's highly likely that the equations you get are telling you something you didn't know before. In the example we had, there were two unknowns, X and Y. Two unknowns, two equations: that's all it takes. Suppose I tell you now I have 10 unknowns. I just need 10 equations. And as long as those equations are sufficiently different, I can actually reconstruct the original data. So then I don't have any wasted transmissions. I'm making optimum use, hence our name, of all the transmissions of information, and everything that's been told to me is actually useful for reconstructing the data.

>> Another common problem is that during periods of high network congestion, a lot of crypto users experience high fees and slow confirmation times. How do you plan to change that, or how could this tech change that scenario?

>> That's a really good point. What happens in any network, whether it's a network exchanging data or a network transporting cars, is that there's a very nonlinear relationship between the load and the delay. Let's think, for instance, of being on a road. We all know what happens: either traffic is light and there's no delay, or you're in a traffic jam. There are not a lot of times where you feel, hey, there's a lot of traffic on the road, but this is great, I'm not getting backed up, there are no delays, things are going great, even though there's a lot of traffic. And there are reasons for that. It's the same thing that happens when people are doing transactions: if there's a lot of demand, you get backed up.
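The "10 unknowns, 10 equations" idea generalizes the X + Y trick: each coded packet carries random coefficients plus the corresponding linear combination of the data, and the receiver decodes by Gaussian elimination once it holds enough independent equations. A minimal sketch over the prime field GF(257) follows; real deployments typically work over GF(2^8), so the field choice here is purely for readability, and the packet format is an illustrative assumption.

```python
import random

P = 257  # small prime field for illustration; practical RLNC often uses GF(2^8)

def rlnc_encode(data, rng):
    """One coded packet: random coefficients plus the linear combination."""
    coeffs = [rng.randrange(P) for _ in data]
    value = sum(c * d for c, d in zip(coeffs, data)) % P
    return coeffs, value

def rlnc_decode(packets, k):
    """Gaussian elimination over GF(P): k independent equations -> k unknowns."""
    rows = [coeffs + [value] for coeffs, value in packets]  # augmented matrix
    for col in range(k):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)       # modular inverse (Fermat)
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

rng = random.Random(1)
data = [rng.randrange(P) for _ in range(10)]            # 10 unknowns
packets = [rlnc_encode(data, rng) for _ in range(12)]   # a few spare equations
assert rlnc_decode(packets, 10) == data
```

Because the coefficients are random, any 10 (or so) received packets are almost surely independent, so nothing a node hears is a wasted repeat.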
And once you start getting backed up, your system gets really sluggish and you have to back a long way off from that level of usage of the system. So what does Optimum do? Optimum completely changes those dynamics, so that the point at which you get backed up now happens much, much later. Let's go back to the data-propagation example we were talking about. Remember, if I don't code, I'm going to get, say, four copies of X, which means it's like you gave me a highway with four lanes but I'm only using one lane, because it's the same car traveling, just taking up all four lanes. How is that useful? I've really wasted my lanes. With coding, you get to use your four lanes fully, because it's four different equations. So rather than having a four-lane system that works like a one-lane system, you actually get your four lanes, and therefore the point at which you're going to hit a traffic jam changes massively.

>> Okay. So in the context of crypto, how is Optimum going to fix the problem of more decentralization perhaps being slower? I think that's one of the concerns of a lot of crypto users: that as a blockchain becomes more decentralized, you get slower performance.

>> That's a really good point. That tradeoff between speed and decentralization is a tradeoff that happens when you don't code. That's why people have actually experienced it. If you code, that apparent tradeoff goes away. So it's not an inherent property; it's an experience that comes out of bad algorithms. What happens is, if you're going to have a lot of repeats, you go, "Hey, I need somebody to try to control what's going on, to tell people not to send too many repeats." But that doesn't scale. Why does it not scale?
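The nonlinear load-delay relationship described here has a classic closed form in queueing theory. The M/M/1 formula below is an illustrative stand-in (the interview names no specific model): it shows both the "traffic jam" knee and why using all four lanes, rather than one, pushes that knee out.

```python
def mm1_delay(load, capacity):
    """Average time in an M/M/1 queue: 1 / (capacity - load).
    Delay blows up as load approaches capacity: the 'traffic jam' knee."""
    if load >= capacity:
        return float("inf")  # saturated: the backlog grows without bound
    return 1.0 / (capacity - load)

# Doubling the load from 0.4 to 0.8 far more than doubles the delay.
assert mm1_delay(0.8, 1.0) > 2 * mm1_delay(0.4, 1.0)

# Same offered load, but four usable "lanes" instead of one:
# the knee moves out and the delay collapses.
one_lane = mm1_delay(0.9, 1.0)
four_lanes = mm1_delay(0.9, 4.0)
assert four_lanes < one_lane / 10
```

The exact formula matters less than its shape: delay is flat at light load and near-vertical close to capacity, so any change that raises effective capacity moves the congestion point much further out.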
Because if you're going to have a central controller, you need to get fresh information to that controller so that it can actually take action. And if your network grows, that amount of information is going to grow, the delay to get that information will also grow, and eventually the whole system is just going to collapse under its own weight. So decentralization versus speed is actually a false tradeoff. It is, if you will, a simple side effect of inefficient algorithms.

>> Are you aware of the so-called Bitcoin, or blockchain, trilemma, where it's argued that you can't have all three of scalability, security, and decentralization? Some projects have claimed to achieve that. Where do you stand on this debate?

>> Yeah, thank you for bringing that up. I think it's a really good question; it's a core question. Let me start with the fact that the trilemma is very clear from the get-go that it's not a theorem, it's not a lemma. It is, if you will, a meta-remark, and I think it was definitely a very useful meta-remark. If you think really on the mathematical side, there is something called the CAP theorem, which was established by one of my colleagues and friends, and actually an adviser to our project, who held the NEC chair in Software Science and Engineering before I did, so I sort of inherited it from her: Nancy Lynch. Some of our listeners might know her; she's a giant in the field. And I think the trilemma was maybe a little bit of a reinterpretation of that theorem, but that theorem says very specific things under very specific conditions. In general, the trilemma is, I think, generally misinterpreted, has been misinterpreted, and actually we have a recent piece on that, along with Dr. Konwar, our CTO, and Dr.
Nicolola, one of our Optimum researchers, basically showing that the trilemma as it is commonly understood does not hold, and explaining why RLNC changes that trilemma framework.

>> Okay. Well, with that in mind, let's move on to talk about Ethereum specifically. Let's talk about how Optimum is going to improve Ethereum. But before we do that, let's just review the last decade of Ethereum's performance. How would you evaluate how Ethereum has evolved from its genesis to where it is now, and what improvements could be made?

>> Yeah, I mean, I think by any measure it has been an amazing project: the effect it has had, the penetration it has had, the variety of applications it has allowed. It's really impressive, and I don't think there's any other way to describe it. Now, if you go to its general philosophy, if you think of it as, roughly, the EVM mapping to an operating system, let's think a little about what a computer is. A model for what a computer is, is actually a question people asked right after World War II. There had been all the big strides in computing that came out of, for instance, the Enigma machine and all of those legendary advances in computation. And von Neumann put forth a model for a computer which has quite stood the test of time, and the von Neumann model is the following. The computer has a compute part, which has what they call the controller, which we would probably recognize as kind of the operating system. It has the arithmetic logic unit, which we would probably think of as your CPU. It has a bus, and it has memory. And of course it has to have an input and an output. A lot of the work has gone around that compute part, and it's been amazing work.
The bus and the memory don't look as recognizable at all. If you took, say, a computer architecture class in college, or even if you just decided to build a computer, you would say, wow, using gossip for propagation, for instance, is kind of a strange way to mimic a bus in a computer. So I think Ethereum has done amazing things, but it's starting to hit a bit of a wall on the part I would call propagation and memory, versus the computation part. And really, that's where Optimum kicks in. That's how we're helping out.

>> You've said to me offline, I think, that validator coordination will be one of the big next frontiers for improvement. So maybe explain how Optimum P2P and RLNC can improve coordination on that front and potentially reduce latency.

>> Absolutely. And actually, we are right now in testnet, and moving to the Hoodi testnet, with a lot of validators. You can see an entire list on our website, but I would say we're working with the majority of the very large validator projects. By the way, it's been a fantastic experience, and they've been extremely cooperative and collaborative. So what we do is we start with speeding up what's called the consensus layer, which of course is a core part of any blockchain, and Ethereum is no exception. And what we see is an improvement of about a factor of six in the delay. If you think of any system, improving the latency of any portion of it by about 6x, particularly one where things normally take seconds, that's a massive improvement.
What that means, going back to what you were asking before about gas fees, is that it allows people to reduce gas fees, have faster finality, and it allows validators to improve their APY. It basically removes artificial inefficiencies in the system.

>> So once these efficiencies are realized, Professor, what practical applications do you see being built on top of Ethereum that maybe could not exist today? And stretching beyond Ethereum, once we achieve more optimal performance, pun intended, across blockchains, how do you see the landscape changing in terms of actual applications built on layer 1s?

>> Yeah, that's a great question. And let me start by saying that, as you mentioned, our approach is not Ethereum-specific. We're starting with Ethereum because it's a good place to start; it's a great chain, and our permissionless approach is able to manage it well. Our next step is actually to go to Solana, which uses some coding, but a very old-fashioned, inefficient code called Reed-Solomon, which dates back to the 1960s. So certainly we're looking to improve any chain, and we're looking to scale any chain. One can never fully predict how giving people a more powerful engine is going to enable new applications, so it's always a little difficult to guess. But certainly a system which allows for lower gas fees, and also enables much faster finality, means that certain applications which right now are just not that attractive, because they're too expensive or too slow, could actually become very feasible and very attractive.
I'd like to add one aspect, though, which is not so much about applications. If you look at blockchain currently, blockchain is very much imbued with a culture of community, and there are, very roughly, a couple of different ways in which members of the community can get involved. One is as users of the technology: maybe they trade in crypto, they follow it, and so on, and that's basically where it stops. Or they can contribute in terms of building software, or by being part of the infrastructure, generally as validators. Being a validator is a big commitment. It generally requires a fair amount of upfront investment in terms of the machines, and then a lot of operational investment in terms of uptime, so that, for instance, you don't get slashed because your validator went down. Those are the main approaches, and of course some people do more than one. What we enable is something we call a flex node. Let's go back to our equations. I can run a machine that just propagates equations. It's not doing the heavy lifting of validation; it's just a cog in a decentralized machine, where I'm helping my neighbors, helping my community get equations faster, and I get rewarded for it. So that also means we're looking at a much more decentralized infrastructure of flex nodes, where flex nodes can be small nodes or large nodes, and they can come from small contributors or, of course, larger contributors. That's really core to what we're looking to do.

>> You mentioned that you're moving on to Solana next. Can you tell us about some of the major milestones that Optimum has already achieved so far? I believe you've raised close to 11 million so far.
You've launched a testnet with major validators, as you mentioned previously. What's next, besides moving on to Solana?

>> Absolutely. So, as you mentioned, we have raised our seed round. We actually raised just under 12. We were looking for 10, but we were so heavily oversubscribed that we ended up trying to make sure we could accommodate as many of the fantastic partners as were willing to join, even though, as I said, we were very heavily oversubscribed. So we have our amazing testnet. I mentioned the multiples: typically 6x. I don't think I've seen under 4x right now, but we've seen up to 9x improvements in latency. I mentioned that we have onboarded a very significant number of validators, who together represent a very large percentage of Ethereum traffic. What we're going to be doing next, of course, is working towards mainnet. We're also starting our network hackathons. The first one will be at MIT just next month, in a couple of weeks, for people to basically play and see how you can bring up a flex node, the flex nodes I just described: bringing together a community of people who can operate a flex node and see the gains they can get from it. And then next year we're looking at our Solana permissionless approach. Now, what comes longer term, and we've already started working on this, is moving not just to propagation, where we are now, but to the entire memory layer. Remember, I was talking about the bus and the memory. If you think of your computer, you've got a bus, you have read-only memory, and you have read-write memory, what we call RAM, random access memory. The read-only memory is the ROM.
So, thanks to coding, we are bringing a dROM and a dRAM: decentralized read-only and read-write, random-access memory. That's our long-term roadmap, and that's our vision.

>> Okay. Do you have any advice for builders watching this program right now on how to develop a project or design a system for long-term resilience, and also to optimize efficiency?

>> Yeah, that's a really good question. It's a very complicated question, because there are so many aspects to it.

>> I know you teach a course on this, but maybe summarize it in two minutes or less.

>> Yeah. So I think one of the things that is striking to me, going back to this computer analogy, and one shouldn't overwork analogies, is the following. If you find that things are being done in a way that's different from what you would want to do in a computer architecture, it's worth taking a break and asking why, right? There has to be a good reason. Was it just because people had to pull something together quickly, and that's the best they could do under the circumstances? Or is there a really fundamental reason for doing it that way? I think that's really important: is this going to scale, is this going to break? Are you building on sand, or are you building on something that's very robust? You were talking particularly about developers. When people learn to program, they generally learn to program in a way that's fairly independent of the machine they're on. Sure, your program will run more slowly or less efficiently if you have a slow computer rather than a fast one. But your basic programming, until you get into pretty complicated stuff, is pretty much machine-independent, right? I mean, imagine learning Python and having to go, "Oh, darn."
"I have to check what kind of bus I have, what my CPU is." You'd think that's really not very modular. So if that's not what's happening right now, it's probably a kludge. I think that in terms of robustness, trying to develop for the long term, in a way that's going to stand the test of time, means really going back to principles.

>> I'm curious how you would rate Satoshi's attempt to build a blockchain; I'm talking about Bitcoin. Reviewing the original white paper, Bitcoin was meant to be a peer-to-peer system. How would you rate or assess Bitcoin's peer-to-peer effectiveness, its usefulness as a peer-to-peer system, and ultimately the other characteristics we talked about, such as scalability, security, and decentralization?

>> So again, it's been absolutely remarkable, right? What it has achieved, its longevity, and so on. I think the question to me is a little bit more about what it does. Let's go back to an operating system. When we talk about operating systems, we usually mean one where I can program pretty much anything I want. I was talking about things like the Enigma machine; the name Turing comes to mind. And then you're talking about something that's Turing complete, where I can pretty much program it to do whatever I want. Okay, maybe not at the speed I want, and so on, but I can get it to do whatever I want. Compare that, for instance, with the operating system on my thermostat. I have a thermostat over there. It's not built to do everything, right? I don't know exactly what operating system it has, but basically what it's supposed to do is bring the temperature up and bring the temperature down. That's what it's supposed to do, and it does it really well, right?
>> Would I want to program an arbitrary application on my thermostat? Probably not. So judging it as a thermostat, it's fantastic.

>> We shouldn't be judging Bitcoin based on Ethereum's properties. We should be judging Bitcoin based on what it does well. So what does it do very well?

>> It's an amazing distributed ledger, right? A decentralized ledger. It's an amazing ledger, and it has managed to do that remarkably. But then to go, okay, I want to program this to run some arbitrary application: that's a little bit unfair, because that's not what it was built for.

>> Some people have attempted that. There is this question as to whether people should be building layer 2s to run, in your words, arbitrary applications on the Bitcoin layer 1, versus just building a new project or a new chain altogether. Those are two extremes. What would you advocate for?

>> That's a really good question. My general view is that it really becomes a question of the particular trade-offs people have, right? If they go, "Well, maybe I'm not so interested in speed, and I really have a strong reason, whatever it may be, to try to use this as a ledger, and that's really what I'm trying to get out of it," hey, why not? But I think one has to be realistic about what something was built for. As long as one does that, and is well aware of the pluses and minuses, that's fine. It's just a question of being lucid, I believe.

>> Looking ahead 5 to 10 years, how could Optimum, and RLNC more broadly, reshape global finance networks, settle trades faster, and even collaborate with data-heavy industries like AI to train models in the global finance sphere?

>> That's a fantastic question, David. We're really building the decentralized memory layer.
And going back to programming on a computer, what we want to do is take optimum advantage of the available resources. That means whatever transmission resources there are, whatever memory there is, we want to provide a fully decentralized experience, a fully decentralized approach, so that the experience is the same as if you were on a single computer. People should be able to program without having to worry about where the data is or how they're going to get access to it, and do that in a way which is transparent and as fast and as reliable as physically possible, given whatever resources are available at the time. And do this in a decentralized way, where flex nodes all over the world are helping each other out, allowing people to get the benefit of Optimum, reap their own benefits, and help each other out without the need for centralization.

>> Well, I appreciate your time, and I feel bad that I am not smart enough to have taken your course at MIT.

>> It's not too late, David.

>> I think it's kind of too late for me, but I enjoy speaking with you nonetheless. I'm just curious: is your course in the online MIT course catalog?

>> So, if you're interested in network coding, random network coding, I just published a book with some collaborators. It's appealingly named Network Coding for Engineers. So if there are engineers out there who want to read up on it, I recommend they get the book. We don't assume any knowledge of the math behind the coding, and it has Python examples. So if people are interested in just playing with the tech, by all means: we start from scratch, and we take it very far in that book.

>> And where else can people learn about your work and what you're currently working on in the meantime?
>> Absolutely. Thanks for asking. Our website, getoptimum.xyz, has a lot of papers and a lot of resources, and we keep it very much up to date. Also, if people want to participate in our hackathons: I mentioned a first in-person hackathon next month. The following month we're going to have one in person in Europe, and then we'll be doing online hackathons throughout the world, so that people can interact with the Optimum system in a really hands-on fashion. Again, this is aimed at people being able to spin up their own flex nodes and become active participants in the Optimum network. So if people are interested and say, hey, I want to play with it, please stay tuned. We have a very large community on Discord, and of course we're on X, where we advertise those different activities. So please read our book, read the papers that are on our website, get engaged in the community, and participate in the hackathons. We really want to pull as many people as are interested into our project.

>> Okay. Excellent. Well, I appreciate it. Once again, this was very educational, even though a lot of it went over my head. We'll put the links down below to your work so people can study more and learn more about what you're currently working on. Thank you very much, Professor Medard. We'll speak again soon. Take care.

>> Thank you so much for the opportunity, David.

>> Thank you for watching. Don't forget to like and subscribe.