Episode 3

August 21, 2024

00:25:06

The State of AI in Case Management (with Nathan Morris)

Show Notes

In this episode, Gabriel Stiritz interviews Nathan Morris, co-founder and chief culture officer of Filevine, about the state of AI in case management. They discuss the limitations and hype surrounding AI, as well as the current state of AI in law firms. They also explore the importance of using AI as a tool and the need for guardrails to reduce errors. Nathan emphasizes the value of tying AI models to ground truth and the significance of data sets in AI applications. They also touch on the impact of AI on the legal profession and the need for law firms to adapt and embrace AI technology.

Episode Transcript

[00:00:01] Speaker A: Welcome to The Relay, presented by Lexamica. My name is Gabriel Stiritz, founder and CEO of Lexamica, the leading attorney referral network. This is a podcast for leaders who are passionate about leveraging technology and AI to enhance law firm practices. Our listeners, like you, are the owners and C-suite at personal injury, consumer, and mass tort law firms. My guest today is Nathan Morris, co-founder and chief culture officer of Filevine. Today we're talking about AI in case management: the hype and the limitations. Where are we today? Nathan, great to have you on the show. Super excited to have you here. [00:00:36] Speaker B: Gabriel, thank you so much. We are really excited to be on here. You know, we always think of Filevine as two things, and that's tech and community. And we've been loving what Lexamica is doing, really bringing together that community component. All PI law firms are really community centers, and that ability to connect representation with those in need is something I can speak to, having practiced personal injury for many years. It's something we have been waiting a very, very long time for. [00:01:16] Speaker A: Absolutely. Well, I'm excited to talk to you about AI over the next 20 minutes. A couple of things here. One, we're going to have a discount code for you. LEX Summit is coming up, Lexamica is going to be there in force, and we've got a discount code for you as one of the Filevine partners, so stick around to the end for that. I also just found out that Nathan and I may be related. We both have family going way back in north central Arkansas. We're not going to get into that on the show, but that is pretty exciting to me as an Arkansas guy myself, and someone who really loves this part of the country. So, Nathan, this is something that everybody's talking about. I'm at all of these conferences, and basically every other talk is on AI right now. So, not to beat a dead horse, I think we can get into more of the specifics. What's the state of the art right now? We could pull up the chart, but there are all these hype cycles, right? It's self-driving cars; Uber says it's happening in the next five years, and then we still don't have self-driving cars. AI has moved very quickly, though based on what I've seen, the rate of change has slowed down a little bit with the new OpenAI and Anthropic models. So where are we, and what should law firms be expecting of AI at this point in time? What are you seeing? [00:02:36] Speaker B: You know, it's funny, Gabriel, it's a really great question. And I always start off setting the table with a similar answer, at least over the past year and a half, because I think it's worth stepping back and realizing that it's really only been a year and a half since this materialized as an actually viable tool. Right? I mean, think back about ten years ago, when we talked about Dragon being the AI that we would need; in my opinion, it never really materialized into what we thought it would be. So, setting the table, and I'm going to say what I'm sure many others have said: I don't think I'm going out on a limb in saying that this is the first substantial change in the practice of law in a thousand years.
The reason I say that, and the reason I want to set the table this way, is that for so long as practitioners (and not just attorneys, but all those involved in the process), we take sets of data, sets of information; we consume it, we digest it, we identify relevant data points; then we start making these really critical, human-trained, expert decisions, which I call the lawyering; and then we execute on those, right? And the reason I say we have the first appreciable change goes back to this quote from Steve Jobs, that so much of his work was to eliminate the drudgery. Frankly, I think so much of tech simply recreated the old practice of law: the digital file cabinet, the digital case file. What AI really offers, the biggest jump in appreciable change, and why I think it's so important to set the table this way to understand how to use it, is that all of a sudden we have a tool for that consumption, digestion, and identification of data points. So much of what we have always done in the practice of law has been centered on the time it takes to review documents, to review evidence, to review all that data. And truly we can come in now and say, look, the end of that era is essentially here. We no longer have to spend those hours upon hours looking for the needle-in-the-haystack document, the Alicia Silverstone Clueless scene where they're coming in highlighting everything; that's gone. But that's just those three components, right? [00:05:23] Speaker A: No, that's amazing. And that's where I want to drill down deeper, because a good LLM is incredible. And I agree, this is the first time where technology is really purely aligned with the domain of law, which is words. Right? Now we have large language models; we're dealing in a domain that perfectly aligns with legal. [00:05:43] Speaker B: Correct. [00:05:44] Speaker A: And so now you have these computational tools that can really start to work inside of words. We had spreadsheets, and we had computation around math, and now we're in the era of word computing, for lack of a better term. But the margin for error is so low. If you introduce hallucinations, bad information, into legal workflows, that's no better than having a person introduce those errors. And it's arguably worse from a legal perspective, because it's okay if a human makes a judgment error, but not if your computer is introducing flawed information. So one of the things I'm curious about: you've got ChatGPT and the generative models; they're incredibly powerful, and they really need human oversight. And I think what Filevine has the ability to do is put guardrails on. It's not that you need to go make a more powerful engine; you need better brakes, better steering, better handling. What are you seeing as the ways you're able to put guardrails on this so that it reduces the errors in these incredibly powerful models? [00:06:54] Speaker B: Gabriel, these are extremely pressing points, and I appreciate you bringing them up. So let's unpack that a little bit. I think first we start off by really appreciating what the value is here: even who I would characterize as your 90% legal professionals, right,
are only 90% to page 300, and then from 300 to 500, they're this and that. And I'm not just talking about consumption and digestion; the same applies when it gets to drafting. And that's what we should expect. The reason I think that's so important to consider here is that we can then start to understand the value of AI, and understand that it does not replace the lawyering; it does not replace the actual legal minds at work. So when we look at these specific engines, it's very interesting how they are much like the different professionals in a law firm. First off, we have different engines that are good at different things. Some are great with numbers: pulling out numbers, data summations, and clear pictures of numbers. Others are great at sentiment analysis. Others are great at understanding what is expected and what's missing, and so on. So when you talk about guardrails, the first guardrail in place is just knowing what the right tool is: the right tool for the right job. What Filevine has done, for example, is choose the right tool for the right job, but also create the appropriate interactions and relationships with those tools, such that, one, you're not having a Facebook-type exposure of the data that's being used. It's not going out there anywhere. That's something we might not think about when we think of guardrails: confidentiality, and those responsibilities we have as legal professionals. That's one in particular, because we see so many professionals now using the different LLMs that are available, the different tools that are available, and yet they're not really asking, or getting answers to, the questions of: how is it learning? What happens to my data from here? So, guardrail number one: appropriate tool, appropriate task. Now, guardrail number two that Filevine is applying, and we think this is extremely important, is prompt engineering. We thought Google was cool in the two thousands, the aughts. I remember how, for many, it was a skill to know how to use Google, to know the different ways to query. [00:09:41] Speaker A: My wife says that it still is a skill and I'm really good at it, so I just need to get that out there. [00:09:46] Speaker B: Fair enough. And so the prompt engineering is something else. We have to understand that the language we use, which we think is totally clear, tells us nothing unless we understand how the engine is going to interpret it; otherwise we have no idea what we're actually saying. [00:10:08] Speaker A: Yeah. So you have this core operational system, Filevine, for example, and you're saying that you're either hard-coding or baking in prompts to get better results from the LLMs for the users that need to engage in certain ways. Like, I need to draft an email. So you're building the prompt engineering into the system so that the starting point is further down the road, so that someone doesn't have to learn prompt engineering and then bring that into the system. That's an example of how you're enhancing the capabilities there. [00:10:41] Speaker B: Absolutely correct. Because by having those... I mean, it's not uncommon for a prompt to be three to five pages long. [00:10:49] Speaker A: Oh, wow. Three to five pages of prompt. Just to be clear, my chats do not look that long. So you're really dialing in these prompts based on iteration? That's fascinating. Okay, continue.
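A minimal sketch of what this kind of baked-in prompt engineering can look like: the user types a one-line request, and the system wraps it in a pre-engineered template before it ever reaches the model. The `call_llm` helper, the template text, and the field names here are hypothetical illustrations, not Filevine's actual prompts or API.

```python
# Baked-in prompt engineering: the engineered scaffolding lives in the
# system, not in the user's head. All names here are illustrative.

EMAIL_TEMPLATE = """You are drafting correspondence for a personal injury firm.
Rules:
- Use only the facts listed under CASE FACTS; never invent details.
- Keep a professional, plain-English tone suitable for a client.
- If a needed fact is missing, insert the placeholder [NEEDS REVIEW].

CASE FACTS:
{case_facts}

ATTORNEY REQUEST:
{user_request}
"""

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API is actually in use."""
    return "(model output would appear here)"

def draft_email(user_request: str, case_facts: str) -> str:
    # The user supplies one line; the multi-page engineered prompt
    # (rules, tone, case context) is assembled by the system.
    prompt = EMAIL_TEMPLATE.format(case_facts=case_facts, user_request=user_request)
    return call_llm(prompt)

print(draft_email(
    user_request="Draft a status update email to the client.",
    case_facts="Client: J. Doe. Matter: auto collision. Status: demand letter sent.",
))
```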
[00:11:03] Speaker B: So, that's not to say we're not also pursuing the route of natural language, because it is getting better and better. But we've also got to take a step back and recognize our thesis: what's truly most valuable in AI is your data set. And as a law firm and as a practitioner, you've got to ask that question about how the AI works. Do you want it learning off of someone else's work that you don't trust, or do you want it running off of the best work that's available, using your data in particular to do what it needs to do? That's another question that I think isn't asked nearly as much as it needs to be. [00:11:41] Speaker A: I agree. I mean, if you all are able to offer fine-tuned models based on law firms' data sets (and I understand that's a sophisticated ask that customers may not be making yet, but they should be), that's a real value add. Because my thesis, and I'm talking as someone who doesn't know how these things are built internally, is that the models themselves are getting powerful very quickly. Anthropic and OpenAI are doing great work, and you get a new model every week. Immediately connect that new model to your internal operational knowledge base, to the way that you do things yourself. No law firm that I've ever talked to has the actual ability to go and fine-tune a model every week. So if you're even creating that, I think that's a massive way, not just to put guardrails on, but to make the model more operationally useful. [00:12:31] Speaker B: Well, I even consider that part of the answer to your guardrail question, though, right? Because it's keeping the output within the area that I'm assuming, and requiring, that it be created within. And I'm not saying that right now it's an LLM for every customer. Am I certainly seeing things go in that direction? Absolutely. Because, again, we're already starting to see the commodification of AI. And what is not commodified is the actual actionable, historical, and current data on how a law firm functions: how they draft complaints, how they draft responses, contracts.
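As a rough illustration of the "your data set is the value" thesis: one common way a firm's prior work product becomes model input is by converting closed matters into supervised prompt/completion pairs for fine-tuning. The matters, fields, and file name below are invented, and the JSONL layout is just one conventional format; nothing here describes Filevine's pipeline.

```python
# Sketch: turning a firm's own prior work product into fine-tuning
# examples. The matters and field names are invented for illustration.
import json

firm_work_product = [
    {
        "matter_summary": "Rear-end collision, soft-tissue injuries, clear liability.",
        "final_demand_letter": "Dear Claims Adjuster: We represent...",
    },
    {
        "matter_summary": "Slip and fall, disputed notice, surgical outcome.",
        "final_demand_letter": "Dear Claims Adjuster: Our client...",
    },
]

# One prompt/completion pair per matter, written as JSON Lines, a layout
# many fine-tuning workflows accept in some form.
with open("training_examples.jsonl", "w") as f:
    for matter in firm_work_product:
        example = {
            "prompt": "Draft a demand letter for this matter:\n"
                      + matter["matter_summary"],
            "completion": matter["final_demand_letter"],
        }
        f.write(json.dumps(example) + "\n")
```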
[00:13:17] Speaker A: And I think, and I'd love to hear your take on this, that the way to create better, more useful models is to tie those models to ground truth, using RAG or other types of citation-based communication. The best way to put guardrails on is to say: look, you can do this. Create this email, this complaint, draft this thing, go search for it, and then cite every assumption you're making to the information we already have available. That seems like the biggest missing piece right now in the out-of-the-box models from OpenAI and Anthropic: there's nothing pointing to ground truth, because they're trained generically on information. But if I'm running a business, I want it to point to something that's factual. I had an earlier conversation with Mike Lissner at the Free Law Project, the biggest open source database of legal information that exists, and he's partnered with OpenAI to use that data for training. But I think you also need to be able to point to a database like that to cite sources, and you also need the internal information, so that the LLM generates a response to this client and then says: here are the five data points I used to write that response. Is that something that you all have built or are working on? [00:14:40] Speaker B: Yeah, absolutely. One of the key points, for example: let's just talk about some of the tooling we have for the digestion and analysis of medical records. Just a year ago, the question was: okay, how do we get this out of just a paragraph? How do we get this into a form, say an Excel format with columns, where we can compare data, let's say in the mass torts context? Something extremely valuable. That's something where we've really come in and focused, and watched the development over time, where for everything that comes out as an output, we have to understand where it is coming from. This goes back to what I've spoken about more than anything else on AI, which is to pull everybody back and say: look, I know you've talked about this on other podcasts, but you've got to retrain people on how to use it as a tool. So, for example, something as simple as a side-by-side comparison: here's the AI output, and here's a brief summation and a link to exactly where it came from. When we don't have that, it's really not as effective as a tool. It's less information to go through, but it's just not as effective when you can't, as the practitioner, confirm that's where it comes from. Again, going back to the less sexy, boring part of AI: that is really the foundation of how we use it. That's where we can say, okay, there is direct value, because as an attorney you can sign off on something you can confirm: okay, this is where it came from. Because really, it's just getting you to a conclusion more quickly and then showing you how it came to that conclusion. [00:16:39] Speaker A: Absolutely. Especially for internal research, that's the gold standard. I don't see it getting more useful than: here's the summary, and here are the links to ground truth. That's it. That's what we need in the space. And then as that gets more accurate, you're willing to take the hands off the wheel a little bit more. But once you're seeing the ground truth tied to the synthesis, or the summary, or the fields that have been filled in... [00:17:04] Speaker B: Correct. [00:17:04] Speaker A: ...now you can have confidence that the model is doing what it's supposed to be doing. And I think that's fantastic. [00:17:10] Speaker B: And I call that confirmed confidence. It's just absolutely necessary as practitioners that our eyes, our decision-making, are saying: yes, I have confidence in that, because here's where it came from, and I have confirmed it. And I think that's the standard we're held to anyway. We're really going wrong any time we take an output without understanding where it came from, when we've got it right there. Going back to the 101 class on using AI, that's just another part of that standard: we have to ensure there is no question where it came from, and that we've confirmed it. Because that's where the ridiculous, clichéd examples of malpractice are coming from right now.
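A toy sketch of the retrieval-plus-citation ("confirmed confidence") pattern the speakers are describing: the model is only allowed to answer from retrieved case documents, and the output is returned side by side with the passages it drew from, so the reviewing attorney can verify each claim before signing off. The keyword-overlap retrieval and `call_llm` stub are stand-ins for a real embedding search and model API, and the document IDs are invented.

```python
# Grounded answers with citations: answer only from retrieved sources,
# and hand the sources back alongside the answer for human confirmation.

CASE_DOCUMENTS = {
    "medical_records_p12": "Patient reported lumbar pain beginning March 14.",
    "police_report_p2": "Vehicle 2 struck Vehicle 1 from behind at low speed.",
    "deposition_p45": "Q: Were you looking at your phone? A: I glanced at it.",
}

def retrieve(question: str, k: int = 2) -> list:
    """Toy retrieval: rank documents by word overlap with the question.
    A production system would use embeddings instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        CASE_DOCUMENTS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Hypothetical model call."""
    return "(grounded answer, citing [doc_id] for each claim)"

def grounded_answer(question: str) -> dict:
    sources = retrieve(question)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    prompt = (
        "Answer using ONLY the sources below. Cite the [doc_id] for every "
        "factual claim. If the sources do not cover the question, say so.\n\n"
        f"{context}\n\nQUESTION: {question}"
    )
    # The answer travels with its sources: the side-by-side view the
    # speakers describe, so nothing has to be taken on faith.
    return {"answer": call_llm(prompt), "sources": sources}

print(grounded_answer("When did the patient's lumbar pain begin?"))
```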
[00:18:00] Speaker A: Yeah, 100%. And what I think is so cool is that now we're talking about not just being at the standard of the best humans, but being better than the best humans can be, because humans aren't the ultimate standard for the practice of law. We know that we make errors, that we make bad assumptions. But when you start off with synthesis tied to ground truth, for one, you're not starting at zero anymore. And we haven't even talked about how amazing it is to not have to draft things from scratch anymore. I don't start at zero ever. That's a game changer. So now you have this mesh, this sieve, where you have AI making a pass at it, tied to truth, and then a human overlaid on that. Instead of one layer of error checking, you have two, with fundamentally different understandings: an AI understanding, and then a human layered on top of that, and maybe a couple of humans. You have a better work product than if you were doing it by human power alone. [00:18:58] Speaker B: I assert that we're making ourselves into hybridized humans, and I think it's important to look at it that way, because there's also this whole other area of discussion around what this is doing to the legal professional. My firm answer is that this is leveling up every legal professional, but it will also be invalidating and leaving behind every legal professional that doesn't engage with it. Because the bottom line is, it is still going to take you 15 hours to go through a complete depo, where for everyone else who has adopted the technology, it's going to take an hour, hour and a half. [00:19:38] Speaker A: Right. And your hour and a half is better than the 15 hours, where people get lazy and stop looking. We've seen this in other industries, like airlines: pilots get bored of doing the same thing. Human beings can't do the same thing over and over and over. [00:19:51] Speaker B: That's right. [00:19:51] Speaker A: And so you need these humans augmented by technology. And we're just getting used to the idea that that now applies to the domain of words, which is where we are with the law. So I'm all for it. I think there are ways you can sell it poorly, which is to say, yeah, it's going to replace everyone's jobs. But look at the way self-driving cars have not replaced humans. Maybe they will someday, but what's happened is that Tesla has actually made driving safer; Cadillac has made driving safer. That's where we are with LLMs. And if you want to take some comfort in not having your job replaced tomorrow, we're seeing a leveling off of the intelligence curve of these models. But I think the utility, when combined with people, is just increasing as we're able to use these as fundamental parts of our tooling, connected to better ground truth, with a human in the loop. [00:20:42] Speaker B: Look, I completely agree. And again, that's where I come back to this thesis, that it is so absolutely critical that the AI be intertwined with that ground truth. What is the repository? Where is all of that being kept? And is it able to communicate back and forth as you go, for example, through the process of drafting? We've all done this: you're drafting, and you encounter the errors and the mistakes. And in the past, what have we had to do? We've had to go back to that data source and adjust it. Well, in tools like Filevine, you're able to have a sync that goes two ways, which is how it should be, because AI is creating that consciousness that takes us out of the era of the file drawer and the file cabinet where data is kept, and allows us to access this greater consciousness, which, again, goes back to the core data, that ground truth we have to be focused on understanding. That's where the value is. And by keeping it as part of the platform that holds the data and that truth, there are also fewer access points. Security is another point we're not going to go into today, but one I feel extremely strongly we need to be aware of. There is a whole area of, I will say, practice that I'm seeing grow out of folks misusing data and messing up the rules on data. Not that we want to go down that rabbit hole, but it's another consideration of why we truly have to be deliberate and understand what we're doing when we engage AI, in whatever format it is.
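A toy model of the two-way sync being described: the draft is rendered from the case record (the ground truth), and a correction caught during review is written back to the record rather than patched only in the document, so every later draft inherits the fix. The field names and functions are invented for illustration; this is not Filevine's implementation.

```python
# Two-way sync between drafts and the case record: fixes flow back to
# the source of truth instead of dying inside one document.

case_record = {"client_name": "J. Doe", "incident_date": "2023-05-04"}

def render_draft(record: dict) -> str:
    """Generate a draft paragraph from the case record."""
    return (f"Our client, {record['client_name']}, was injured in a "
            f"collision on {record['incident_date']}.")

def correct_field(record: dict, field: str, value: str) -> None:
    """Reviewer spots an error in the draft; the fix updates the record."""
    record[field] = value

print(render_draft(case_record))              # reviewer notices the wrong date
correct_field(case_record, "incident_date", "2023-05-14")
print(render_draft(case_record))              # regenerated from corrected truth
```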
[00:22:22] Speaker A: Absolutely. And just the quick pitch: look, if you're not in the cloud yet, you're not going to get the use out of the AI. You said AI is the first substantial change, but I would say that getting law firms into the cloud is what unlocked the ability to do things like this. To have Filevine, to have a referral network like Lexamica, you need to be in the cloud. Our audience is in the cloud already; if you're not, go do that and then come back and start listening to the podcast. I try to leave people with one actionable takeaway here. This has been a very theoretical, very enjoyable discussion. Is there anything specific you want to say that people should be thinking about with their AI application as they walk into the office on Monday morning? [00:23:15] Speaker B: Look, I'm going to go back to the boring, because we are still not doing it right. It's: identify. And when I say identify, I mean express it, write it out, for every single individual role. One, what are the AI tools we're using? Two, how do we use them? And three, along with that, the change in expectations for the output of all those individuals. If you have not written out each of those three points, so that every single individual contributor at the law firm knows exactly what's expected of their role and how they're supposed to use these tools, please start there. It's boring. It's not fun. Many of us are litigators; we like the action hot and fast. But we have to start by building that foundation, role by role. So all of you that have touched AI: please, those three points. Please institute them when you come into the office tomorrow. [00:24:11] Speaker A: Love it. Nathan, thank you so much. And as promised, there's a discount code for LEX Summit. It is a partner exclusive for $500 off. You can join; it is an absolutely fantastic conference. I have been the past two times, and it's not something you want to miss. You're not just going to get a Filevine sales pitch; you're going to get the leading edge of what legal tech is doing right now, along with the brightest minds in the space. And we've got Harvey Specter, maybe not as the keynote speaker, but as a speaker on stage. Excited to meet him. Big fan of Suits. Nathan, thank you so much for being on the show today. [00:24:49] Speaker B: Gabriel, thank you.
We're so excited to gather the Filevine community here in Salt Lake City at the beginning of September. We'll see you there. And everybody, check out what Filevine is doing with AI. It's going to be something with which you will be extremely pleased. Thanks, Gabriel.
