Mendel Team
December 9, 2022

Leslie Lamport, 2013 Turing Award Winner on Patientless Podcast #011

Karim Galil: Welcome to the Patientless podcast. We discuss the good, the bad, and the ugly about real-world data and AI in clinical research. This is your host, Karim Galil, Co-Founder and CEO of Mendel AI. I invite key thought leaders across the broad spectrum of believers and dissenters of AI to share their experiences with actual AI and real-world data initiatives.

Karim Galil: All right. Hi everyone. This is another episode of our podcast. Today's guest is not a household name, as I was just telling him before we got started, but his work has been the foundation of a lot of your day-to-day computing: from your internet search, to the infrastructure that Google and Amazon run on, to almost every AI engine that is working today.

His work has earned him the Turing Award. Interestingly, there is no Nobel Prize for computing, so the Turing Award is its analog. He won it in 2013 for his work on distributed systems. Our guest today is Leslie Lamport, and we're super fortunate to have him joining our podcast.

Thank you for making the time for this. 

Leslie Lamport: Thank you. 

Karim Galil: So why don't we start by explaining what distributed systems are? I've tried to educate myself on what that means, and the way you once described it is that a distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.

So can we explain more about that: what are distributed systems, how do they affect computing, and is there anything relevant to healthcare?

Leslie Lamport: Well, distributed computing means that you're basically running a program that uses more than one computer. Simple as that.

Karim Galil: The, the, the work that you have done had to do.

time, because I believe one of the things that you've said is like the notion of time for two observers is not the same. And I, I'm, I'm still cannot understand that. And how does that lead to distributed systems? 

Leslie Lamport: Well, I guess I need to explain special relativity to you.

Karim Galil: We have an hour.

Leslie Lamport: Okay.

I can really explain it very simply. You may have read articles or books that say, oh, this is really strange, meter sticks shrink when something is moving, and all that stuff. The thing that is the basis of relativity, that makes relativity different from Newtonian mechanics, is Einstein's realization that what it means for two things to happen at the same time is not some invariant notion that's the same for everybody. Two different observers will have different notions of what it means for two things to happen at the same time if they're moving relative to one another. And that's it. Special relativity simply comes from that observation, plus the observation that no matter how these two people are moving with respect to one another, when they measure the speed with which a light beam is traveling, they both get the same speed, you know, 300,000 kilometers per second.

You take that fact, and basically the rest of special relativity follows from it. Well, somebody designed an algorithm for a distributed database. A distributed database means you have a single database, but two people using different computers may be accessing the same data.

And so you need some way of synchronizing them. And in particular, if two people issue a command at about the same time, the system has to decide which one occurred first. If one command is changing a value and the other is reading that value, it has to decide whether the read comes before the change or after it. And they had an algorithm for doing that.

And what I realized is that their algorithm (it was designed by a pair of people) violated causality. What that means is that even though one command should have happened before the other, the system could order the commands the other way. For example, I might send you a message saying, hey, I've just added a thousand dollars to your bank account, now you can go withdraw it. And you try to withdraw it, and the system says, no, you can't, because it decides that your withdrawal request came before my deposit request, even though it shouldn't have, of course.

And so that caused me to realize that in order to be able to synchronize what's going on in two different computers, you have to have a way for them to agree on whether an event in one computer happened before or after an event in another computer.

And I realized that there's an analogy between special relativity and distributed computing. In special relativity, one event comes before another if, communicating at the speed of light, the first event can influence the second; namely, it's possible for something that exists at the first event to be communicated to the second event.

Well, there's an obvious analogy to that in distributed systems: one event comes before another if it's possible for the first event to influence the second, not by light beams that could be sent, but by messages that actually are sent in the system.

And then, using that notion of causality, I was able to modify the algorithm those two guys had so that it satisfies causality.

Well, the other idea is that I realized that this applies not just to distributed databases: the whole key to building any distributed system, which is getting the different computers to cooperate with one another, can be solved by an algorithm that globally orders the things that happen in the different machines in a consistent way that they all agree on.
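The global ordering Lamport describes is what his logical-clock scheme provides: every event gets a (counter, process id) timestamp, receiving a message pushes the receiver's counter past the sender's, and comparing timestamps yields a total order consistent with causality. A minimal sketch in Python (the class and variable names are illustrative, not any particular system's code):

```python
class Process:
    """A process with a Lamport logical clock."""

    def __init__(self, pid):
        self.pid = pid
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return (self.clock, self.pid)  # timestamp of this event

    def send(self):
        self.clock += 1
        return self.clock              # timestamp travels with the message

    def receive(self, msg_clock):
        # Receiving advances the clock past the send, so a send is always
        # ordered before the matching receive: causality is respected.
        self.clock = max(self.clock, msg_clock) + 1
        return (self.clock, self.pid)


# Two processes; a message from p1 to p2 forces the deposit to be
# ordered before the withdrawal, fixing the bank example above.
p1, p2 = Process(1), Process(2)
deposit = p1.local_event()      # (1, 1)
msg = p1.send()                 # message carries timestamp 2
withdrawal = p2.receive(msg)    # (3, 2)
assert deposit < withdrawal     # tuples compare lexicographically
```

Ties between concurrent events are broken by the process id in the second tuple slot, which is how every machine ends up agreeing on one total order.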

Wael Salloum: So that's the happened-before partial order you worked on. Yeah. And this is fascinating, because it's an instance of how you can bring an abstraction from outside of computer science, from special relativity, into computer science. I also heard that some physicists looked at your work and brought it back to physics.

Leslie Lamport: Yeah, there is one particular physicist, I can't remember his name offhand, who thinks that this is very important. I've never been able to understand what he's doing or what the point of it is, so I can't say whether that is really some important physics or not.

In some sense, the relation between physics and computing here should not be surprising, because computers are basically physical devices and the laws of physics apply to them. And one thing that has, I think, distinguished what I've done from what most other computer scientists working in this field of concurrent computing, or concurrency, have done is that they tend to view it as a mathematical problem, and I tend to view it as a physical problem. You've got two things happening at the same time; that's physics. That has inspired me to look at things in a way that is a little different from the way other people have.

That is really interesting. Yeah. 

Wael Salloum: And that's basically your work on temporal logic and on TLA. Can you talk a little bit more about TLA and how it revolutionized the field, including some recent work on how it's used at Amazon and other places?

And how important it is to define systems in a formal way to make sure that you reduce the number of bugs and know the system is actually doing what it's supposed to do.

Leslie Lamport: Well, I think it should be obvious that we'd like to remove bugs from programs. I don't know if you get as frustrated as I do at all of the stupidities I see in the programs that I use, but I got into it because concurrent algorithms are simply very hard to get right.

Concurrent programs are hard to get right, but I should say algorithms rather than programs; I started working on algorithms, not programs. You can write an algorithm in just a few lines that looks sort of obvious, and it can be wrong: it simply has a bug, it doesn't do what it's supposed to do.

So I realized that I needed a way of reasoning rigorously about concurrent algorithms. Well, I guess the basic thrust of what I did in the course of 10 or 15 years was come around to the realization that reasoning about algorithms means mathematics. And if I'm going to reason about something mathematically, the best way to do it is to describe it mathematically. So I basically developed a way of describing algorithms, in particular concurrent algorithms, mathematically, and I discovered that it really worked well. TLA is the particular way I found that works well both for describing a concurrent algorithm mathematically and for proving things about it.

You know, while that was going on, people also began to realize something. We started out proving properties of particular algorithms, but then people started worrying about, well, what exactly is this algorithm supposed to accomplish? So we also started looking at the problem of how you describe precisely what it means for the algorithm to be correct.

And when you describe the algorithm mathematically, and then you describe what it's supposed to do mathematically, its correctness becomes an, in principle, simple mathematical formula.

Namely, the formula that describes the algorithm implies the formula that describes what it's supposed to do. And so everything gets reduced very beautifully to mathematics.
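That reduction can be made concrete: if the algorithm's formula determines an initial state and a next-state relation, checking the implication amounts to verifying that every reachable state satisfies the property's formula. A toy sketch of the idea in Python (this mimics what a model checker such as TLC automates; it is not TLC's actual code, and the hour-clock example is just an illustration):

```python
# Abstract program: an hour clock that ticks from 1 to 12 and wraps.
# The "algorithm formula" is an initial state plus a next-state
# relation; the "correctness formula" is the invariant hr in 1..12.

def init_states():
    return [1]

def next_states(hr):
    return [hr + 1 if hr < 12 else 1]

def invariant(hr):
    return 1 <= hr <= 12

def check(init, nxt, inv):
    """Explore every state the algorithm allows, checking inv on each."""
    seen, frontier = set(), list(init)
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        if not inv(s):
            return False, s          # counterexample state
        seen.add(s)
        frontier.extend(nxt(s))
    return True, None

ok, bad = check(init_states(), next_states, invariant)
assert ok  # the algorithm's formula implies the property's formula
```

Starting the search from an illegal state such as 13 would make `check` return that state as a counterexample, which is exactly the kind of evidence a model checker hands back.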

But I realized that this way of thinking mathematically about what something is supposed to do is useful in practice. Because when people build distributed systems, the first thing they should do is figure out what that system should do. And what a system should do, especially when it comes to a concurrent system, can be a subtle matter, and getting it right really helps in building the system. I mean, I noticed when engineers were describing standards, you know, some kind of communication standard or something like that, you want to have a precise notion of what it means.

So two people, if it's a communication standard, can go off and separately build their systems, and if they follow the standard correctly, you want to make sure that the two systems will work together properly.

And then also I realized that this applies to people writing programs. There's something Tony Hoare said years ago that I didn't understand at the time. He said: inside every big program there's a little program trying to get out. What he meant, and I confirmed this with him a few years ago, is that what the program is trying to do is implement an algorithm, a simpler, more abstract thing. And if that algorithm is correct, and you correctly implement it, then the program will be correct. In some sense the best way to write the program is to write the algorithm first, make sure that the algorithm works right, and then you know that if you implement it right, the program is going to work right.

Karim Galil: I think that's why you said writing is nature's way of showing you how flawed your thinking is, or something like that.

Leslie Lamport: Yeah. That's actually a quote from somebody else. The quote is: writing is nature's way of showing you how sloppy your thinking is.

And my addition to that is: math is nature's way of showing you how sloppy your writing is. So before you write a program, you do some thinking about what it's going to do and how it should do it. You should write that down. You should write, and you should not just think, because when you're just thinking, it's easy to fool yourself that what you're thinking makes sense. But if you start writing it out carefully, that's when you discover that it's nonsense, that you've made a mistake, or that what you're saying doesn't work, or something.

And writing things in math is, first, more precise. You can actually prove whether what you're doing makes sense. And also, because it's precise, you can build tools to check its correctness. And so that's what TLA is for: it's basically for writing down precisely the thinking that you should do before you write the program, and then checking that thinking, that algorithm. Although I don't call it an algorithm anymore, because when people think about algorithms, they think about things that are in textbooks.

The algorithms that go into programs you seldom find in a textbook, because they're just used for that particular problem. So at the moment I'm calling them abstract programs. But it's like an algorithm: it's something that's higher level, more abstract than the program, and it's something that you implement with the program.

Karim Galil: You have some famous quotes, or thoughts, around that, like that programming is not coding. And it seems that this is within the same theme.

Leslie Lamport: Yeah. Well, I guess what that reveals is that I was trained as a mathematician, and I just sort of slipped into computer science.

Karim Galil: But today, programming and coding are almost synonyms. There isn't really a clear distinction between someone who's writing a program versus someone who's just implementing and coding that program.

Leslie Lamport: Yeah. Well, what I've said, or what I've been recorded as saying, is that coding is to programming what typing is to writing.

Now, it's not a terribly accurate metaphor, because coding involves more thinking than typing, which is just a mechanical action. But there's a kernel of truth, in the sense that coding should be the easy part of programming. The hard part of programming is figuring out what the program should do and then figuring out how it should do it.

And the metaphor that I've just stumbled upon, which I think is better, is that a program is like a book, and the code is like the words on the page. Now, the code is written in a particular language, but if you take the book and translate it into a different language, it's still the same book. And it's the same book because the ideas it's expressing are the same. If we're talking about books, you know, War and Peace or something, we don't have any better way of talking about what the book is really about other than by writing Cliffs Notes or something in some natural language. But because what programs are about should be precise, there's a better way. We don't have to talk about them in programming languages. We can talk about them in math, because that is a language specifically designed to express the ideas behind many aspects of programming, particularly the idea of what it means for a program to be correct.

And so the way I will now try to sell TLA is that it's something independent of the particular implementation, the particular code you write; it's explaining the essence of what your program is doing. Now, there are other ways of doing that that have been proposed, and a lot of them are really good for what I call traditional programs.

Traditional programs are sequential programs: they do one thing at a time, and what they do is take some input, compute, and produce an output. There are some really nice ways of handling those. But those ways don't work for concurrent programs, which don't do something that simple. First of all, they generally run forever, and they're interacting with their environment.

They're not just producing an answer. And secondly, they don't do things one at a time; the different processes interact with one another. So I had to develop something different from the methods that were developed for traditional programs, although the ideas came from things that were done for traditional programs and needed to be extended in some ways.

Karim Galil: As a physician, I can't help but think of the analogy of a human. As I'm hearing you, I'm thinking of a program as the whole physiology of a human, the whole human body, and of the algorithm, or abstract program as you said, as an organ. The organs are acting separately, but also in concurrency, for the body to live and do what it's supposed to do, basically.

Leslie Lamport: Okay. Well, it's close, but the different organs are the different processes that are running concurrently.

And the algorithm is a description of how that cooperation should work, but it's a higher-level description. Instead of talking about individual nerve pulses or individual blood cells or something, you're talking about, say, the throat getting information that something is heading its way; that information goes to the brain, and the information is sent to the stomach, which does something, which will then send some kind of hormonal signal to something else. So the algorithm is like this high-level view of how it works, and the code is the actual nerve fibers and blood vessels and cells and such.

Karim Galil: The signals being passed, right? Mm-hmm. That makes a lot of sense. I think it's a good analogy for our audience.

Wael Salloum: So you're also proposing math as an abstraction of all programming languages. For example, if you're an author writing a novel, you figure out the characters and how they interact with each other at a very high level, regardless of whether you write it in English or French; you will enjoy War and Peace in French or in Arabic or in Hebrew or in English the same way.

So math is a higher-level representation. It's like a form of interlingua, a term we use in machine translation, as if all languages had evolved from one original language. And you're talking about that kind of thinking, that math is that, that you need to learn math.

And you mentioned that if you define the program, or the behavior of the program, mathematically before you even write the code, that can give you ideas about edge cases, or about how to actually write better code.

Leslie Lamport: You said a couple of words that I missed, right at the very end.

Wael Salloum: Yeah. I said, when you write the program mathematically first, like you define it, let's say in TLA+, you define the formulas that say these behaviors of the system are acceptable and those behaviors are not. Defining only the acceptable behaviors could open your eyes to building the program in a much better way.

As opposed to starting by writing the code and then writing the unit tests, which is what most people do today: write the code, write unit tests and integration tests. What you're saying, or what you mentioned with TLA, is first write the formulas in math, make sure that these are the acceptable behaviors, and that could also influence the way you write the code itself.

Leslie Lamport: Well, there's this sort of view people sometimes give that first you start with what the program is supposed to do, and you don't care at all about how it's going to be implemented. Then you write this description, and then you say, okay, now I've done this, now let's see how I can implement it. The real world doesn't work that way.

You have an idea. It's very easy to specify something, to say the program should do something, that's impossible or that's simply going to be too expensive. So in practice, you generally have an idea of roughly what it's supposed to do and roughly how it's going to do it, and the actual process is an interaction between these two.

A cool thing about TLA is that because an algorithm is a mathematical formula, and what it's supposed to do is a mathematical formula, you can describe what it's supposed to do as a higher-level algorithm. And then how it does it, the general idea of how the program is going to work, can be a lower-level algorithm, also described mathematically, and that can then be coded. What happens in actual practice is that this second layer of algorithm becomes a useful design document, and it generally tells you the really important things about how the program should work. More precisely, what you generally use TLA for is to explain those aspects of the program that involve communication between the different parts, between the different things that are acting concurrently.

An important reason for doing that at the high level is that those things are, A, hard to get correct, and, B, hard to detect errors in at the lower level, in the code, because they're hard to test for. Concurrency introduces so many more possibilities, because there are so many more orders in which different things can happen in different pieces, that in testing you're unlikely to be able to catch all of them. In particular, because you're not testing on the real system, the particular orderings of things that you test are not likely to be the same as the ones that will occur in the real system. Therefore you're very likely to get bugs that don't appear in testing but appear in the real system, and those can in fact be really hard to figure out.

I mean, I know of cases where there's been a bug that they haven't been able to find. So what they did was write the high-level TLA description of what the system is supposed to be doing and see if that bug occurs. They then found the bug at the high level, said, oh, and then they knew what the problem was in the program.

Well, of course, if they had started with this high level, they would have found that bug before writing all this code and going to the trouble of debugging. A colleague of mine gives a talk on TLA titled "How to Save Two Hours of TLA With Two Weeks of Debugging."
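The testing problem described above can be seen in miniature: with two processes doing a non-atomic read-increment-write on a shared counter, only some interleavings lose an update, so a test that happens to run a sequential schedule passes while the concurrent system is wrong. Enumerating every interleaving, which is what checking a high-level description buys you, exposes the bad schedules. A small illustrative sketch:

```python
from itertools import permutations

# Each process increments a shared counter non-atomically:
# step "r" reads the counter into a local temp, step "w" writes temp + 1.
# A schedule is an interleaving of the two processes' (read, write) steps.

def run(schedule):
    counter = 0
    temp = {}
    for pid, step in schedule:
        if step == "r":
            temp[pid] = counter
        else:
            counter = temp[pid] + 1
    return counter

steps = [(1, "r"), (1, "w"), (2, "r"), (2, "w")]

def valid(schedule):
    # Each process must read before it writes.
    return all(schedule.index((p, "r")) < schedule.index((p, "w"))
               for p in (1, 2))

results = {run(s) for s in permutations(steps) if valid(s)}
# Sequential schedules give 2; interleaved ones lose an update and give 1.
assert results == {1, 2}
```

A single-schedule test would only ever see one element of that set; exhaustive exploration of the four-step state space immediately reveals the lost update.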

Wael Salloum: You mentioned Tony Hoare, who did Hoare logic, and you worked on logic and merged it into the temporal logic of actions. Have you thought about working on logic and reasoning for AI purposes?

Leslie Lamport: No. You know why? Well, logic means formalism, and I don't start with the formalism. TLA came from my years of experience actually writing algorithms and trying to prove that they're correct, and so the logic came after the practice, after how it should be done. And I knew things were right because I was able, in TLA, to make precisely rigorous the kind of reasoning that I had been doing. I can't think about a logic for AI without having done AI.

It's that simple.

Wael Salloum: So I can give you an example of what I'm working on. Let's say you have a patient who had cancer, and you have the date of diagnosis, and then you have the surgery for that cancer and the date of surgery, and the date of surgery is before the date of the cancer diagnosis.

So that's obviously illogical, right? And your partial order would actually work this way. There is an application, with a lot of massaging of that technology, into, let's say, a knowledge graph. Not a multi-system or distributed system, but maybe a multi-agent system, a multi-cognitive-agent system, where each agent has a certain belief about, let's say, what happened to a patient.

And there would be conflicts, there would be paradoxes, and they can be detected using something like TLA, or temporal logic, or the different species of temporal logic. Right? So there are applications where I find your work very inspiring in that domain. You also said once that you like to work on conflict more than collaboration, less on parallelization or concurrency and more on consensus, basically something like the Paxos system. Can you talk a little bit about your work on Paxos and how it revolutionized distributed systems and banking transactions?
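The kind of paradox Wael describes can be framed as precedence constraints between clinical events, with a contradiction showing up as a cycle among them. A hypothetical sketch (the event names and the `find_contradiction` helper are made up for illustration, not from any Mendel system):

```python
from collections import defaultdict

def find_contradiction(edges):
    """Return a cycle of events if the 'A precedes B' constraints conflict."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)

    def dfs(node, stack, visited):
        visited.add(node)
        stack.append(node)
        for nxt in graph[node]:
            if nxt in stack:                       # back edge: a cycle
                return stack[stack.index(nxt):] + [nxt]
            if nxt not in visited:
                found = dfs(nxt, stack, visited)
                if found:
                    return found
        stack.pop()
        return None

    visited = set()
    for node in list(graph):
        if node not in visited:
            cycle = dfs(node, [], visited)
            if cycle:
                return cycle
    return None

constraints = [
    ("diagnosis", "surgery"),   # domain knowledge: diagnosis precedes surgery
    ("surgery", "diagnosis"),   # recorded dates imply the reverse order
]
assert find_contradiction(constraints) is not None  # the paradox is detected
```

Dropping the conflicting recorded-date edge leaves a consistent partial order and `find_contradiction` returns `None`, which is the well-formed case.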

Leslie Lamport: Well, hold it.

You went between two things, from AI to Paxos. Which do you want me to talk about?

Wael Salloum: If you wanna talk about AI please talk about AI. 

Leslie Lamport: Well, what I want to do is not talk about AI. But many years ago, actually, when I was in grad school, I had a roommate who was studying linguistics.

In fact, he's George Lakoff, who has become a very well-known linguist, of course. He was actually taking a class, I think from Chomsky, who was at MIT, and he would come to me and say, I want to do this; is there any math that can do it for me? And what I had to tell him is that math doesn't solve problems for you like that.

What you have to do is figure out what the solution to your problem is. Then you can say, well, is there math that I can describe this solution in? And then maybe you can use math that's been developed. So the advice I would give to you is: start with some very particular problem, as small as you can make it, such that a solution would teach you something. Then figure out how you're going to solve that problem. You know what the problem is; how are you going to solve it? And when you see the solution to this very simple problem, ask, well, how do we generalize this?

What is the math that's going on underneath? In some sense, don't start with the problem and then look for the math that's going to give you a solution. Look for the solution to a very simple problem, and then ask: what is the math that's behind the solution?

Wael Salloum: Yeah.

Then you formulate it with math. Yeah. That, that makes perfect sense. 

Leslie Lamport: Okay. Now what did you wanna know about Paxos?  

Wael Salloum: So that's the idea of multiple systems working together on one point of consensus. I find a lot of applications of that in knowledge representation and in AI as well.

But if you want to talk about it from a distributed-systems perspective...

Karim Galil: Here's a super interesting thing that I'm noticing, right? Everyone claims that they're doing AI, or they know something about AI, or they want to talk about AI. But actually, people with real knowledge, like you: you sent us an email, we asked you a lot of questions, some of which were about AI.

And one of your comments was, I don't like to speak about things that I don't understand, or something along those lines. And I was like, wow, you're saying that, but when we actually go into the industry, we see the opposite, where everyone's claiming that they know something about AI.

Leslie Lamport: Yeah, well, there's something that I call Lamport's law. You may have heard of the Peter principle, which is that people get promoted to their level of incompetence. They do a good job at something, then they get promoted, and if they do a good job, they get promoted again. And when they do a lousy job, they stay there.

Well, that's actually a corollary of Lamport's law, which is that the qualifications for attaining a position are uncorrelated with the qualifications for actually performing the tasks of that position. The example I use that is relevant here is that because I've gotten a Turing Award for doing some particular work, I get invited to speak about all sorts of things that I know nothing about and am completely incompetent to talk about. And I have the good sense to decline those invitations.

Karim Galil: That makes a lot of sense. I think we're coming to the end of the hour, and I want to be conscious of your time. But one other thing I thought was very interesting when I was looking into your thoughts and writings is that you said scientists will find more things in industry than by staying in academia, in the lab.

It was almost an invitation for scientists to move toward industry rather than stay on the academic side of the equation.

Leslie Lamport: Well, remember that that was a comment I made about my career, which was at a particular point in time in a particular discipline, namely computer science.

For example, I see no sign that a physicist needs to go to industry in order to do elementary particle physics. So don't take that too seriously, especially not in disciplines unrelated to computing.

Karim Galil: It's not an abstractable algorithm, to borrow from what we were talking about. We're coming to the end of the hour. We're super appreciative of you accepting our invitation; we just sent a cold email, and we were actually super surprised when we got a response back. I learned a lot, just from our interactions, about what it means to actually be a good scientist.

And I'm super appreciative of you being generous with your time with us today.

Leslie Lamport: Well, I'm happy to talk to you. Thank you. 

Karim Galil: Thank you. 
