Episode Transcript
[00:00:00] Speaker A: Year three of the AI revolution, and I'm starting to see a real divide emerging. On one side, you've got firms that are genuinely getting leverage out of AI tools, saving time, improving work product, maybe even changing their economic model. And on the other side, you've got firms that have tried ChatGPT for a month, got nervous about hallucinations, and went back to doing things the old way. The challenge today is figuring out which AI tools are actually worth your time and money versus which ones are just riding the hype wave. Today I'm talking with Justin McCallon, the founder of StrongSuit AI. They've built what they're calling a complete legal AI toolkit for litigation workflows, claiming you can go from complex fact pattern to first draft of litigation documents in about 10 minutes. That's a big claim, and if it's true, it matters a lot for how you think about staffing, case selection, and what kind of work you keep in house versus referring out. But here's what I want to dig into. A lot of PI firms aren't running complex litigation on every case. So where does this litigation-focused AI tool fit? Is this solving real problems for the personal injury law firms of the world, or is it built for different types of practices? And if so, how are they using it? I'm going to push on that today, not because I'm skeptical of AI, but because I want to understand specifically how these new tools create value for law firms trying to scale efficiently. Let's find out. Justin, thanks for joining me today on the Relay.
[00:01:20] Speaker B: Thanks so much for hosting me.
[00:01:22] Speaker A: Absolutely, excited to have you here. So as we dive in, let's start by talking about something concrete. Walk me through an actual use case. You have this very, very broad tool in a litigation sense. What does it look like to use it? How do attorneys enter into a work stream where now they're using AI for litigation, and what does that mean?
[00:01:44] Speaker B: Right. So when you look at our platform, we have about 13 different workflows that align to litigation. So think about the most common types of tasks that you'll do as a litigator. We try to think through what those tasks are and then build very useful workflows that are very much aligned to those tasks. We try to repeat the same types of things that attorneys do in their day-to-day work to complete a task, but do it with AI and then keep the lawyer in the loop. And so it's.
[00:02:10] Speaker A: Justin, let me just cut in right there, because I want to dive into that. I think attorneys are notoriously persnickety about their workflows, right? If you ask any attorney, well, how does this work? Well, it depends, right? Are you finding that attorneys have replicable workflows? Because I think that's where you're probably going to find your first set of objections: well, no, I don't have workflows. Everything I do is bespoke. I use my judgment on it. I wouldn't be a litigator if it wasn't a human judgment issue for everything.
Is it true that there are repeatable workflows specifically for litigators? And how often do you get pushback of, well, no, I don't have workflows. Everything I do is bespoke.
[00:02:49] Speaker B: We certainly think that there are very frequent and repeatable patterns in litigation. So let's take an example. Let's say that you're doing legal research to find the most relevant case law. At some point you obviously need to look for the right cases that align to the point you're trying to prove, and to be able to set your data up in such a way to do that well, it's very helpful to have some sort of litigation outline where you're drawing out, okay, these are the elements we want to prove, and these are kind of the background facts. Let's make sure that we find a case that supports each of these pieces that we want to prove. So those types of things are very common and very frequent. We do have ways that lawyers can adjust and change the workflows a bit to be a bit more tailored. But we think that generally, yes, when you're doing legal research, there are some very common patterns that attorneys work with.
[00:03:40] Speaker A: So you could have a platform that says, listen, you have a workflow, we will replicate that flow because you're the expert, your workflow is the best. On the other side, you could have a platform that is highly opinionated: we believe that this is the way that good research works, good litigation process works, and maybe you can tweak it a little bit, but it's highly opinionated versus very deferential. What do you see as the best way to approach that? You've talked to hundreds or thousands of litigators. Do you think that there is a meta pattern that works as the best practice for these litigation flows, or is it really that each litigator has a different enough workflow that they should be building it from scratch themselves?
[00:04:26] Speaker B: I think if you try to do the latter, it's just not very practical for how you build software. So, so what we.
[00:04:31] Speaker A: Sorry, which one is the latter? The opinionated or the deferential?
[00:04:34] Speaker B: The latter, the view that you tailor to a specific firm's very specific, unique workflow. I think that if you try to do that, it's just not very practical for how you build software. So what we try to do is talk with many different practitioners, try to understand what is the most common way of moving through a specific workflow, and then build that in a way that is still somewhat customizable so that users can understand and adjust it as they need to.
[00:05:02] Speaker A: What kind of pushback do you get from lawyers when you're hearing that approach? Because you're saying it's not practical for software development. But my pushback would be, well, that's fine. I'm going to run my law firm the best way possible. I don't really care if it's practical for software development. I want a great tool.
So from a litigator's perspective, why would I say, okay, let me take Justin's workflow, or, you know, the Gabriel-flavored version of Justin's workflow, and implement it at my firm? What are the benefits that I get from taking that slightly off-the-shelf approach and then implementing it internally? What are you seeing in terms of results, performance, outcomes?
[00:05:42] Speaker B: Yeah. And you highlighted it at the beginning. One example would be, let's say that you want to start with an initial client intake call and you have that client intake call transcript. We can take that and move you all the way through a full litigation outline and a first draft of a litigation document, a pretty wide variety of them, in about 10 minutes. That process, and it obviously depends on the case and how complex and serious it is, might take you a week to do properly in the old world. Here you at least get to a high-quality outline and a quality first draft very quickly, and then you can adjust it however you want to tailor it. But the loss in your perfect tailoring versus the benefit you gain from 10 minutes of effort is just such a night-and-day difference that we haven't heard a whole lot of complaints that people want us to adjust to their exact workflow.
[00:06:34] Speaker A: Yeah. So, a week. I mean, you're not saying 40 hours of work, presumably, but there are gaps in time between this thing happening and this piece of feedback.
Why? How does it get that much shorter? Is it the collection of data? Is it synthesizing it? Is it having the right law at hand? Here's the thing: for my industry, personal injury, time on desk obviously matters a ton because it's contingency. You're not billing hours out. If I can get something done in 10 minutes versus a week, that's just money in my pocket, right? And if I can do it at the same quality level, I should do that every single time, every day, because my goal is to save money and keep the work product at least as good as it was before. So walk me through how you functionally shorten time that much, when a lot of personal injury firms are really focused on these metrics and they are trying to drive faster outcomes and create higher performance.
[00:07:34] Speaker B: It depends on the solution, but let's take legal research as the one example to go deep on. When you're doing legal research, first you have to figure out what are the areas of law and causes of action that you want to consider and look into, what are the legal authorities on point, what are the cases on point, and what are the elements you need to prove for your case. Then finding the individual cases on point for each of those elements is very important. And then you're starting to pull that all together into a good litigation outline, ideally, and then draft documents off of that. Maybe you don't do the litigation outline. This is an example where our tool might do something that you don't, but we kind of have to do it so that we can build a wide range of documents on the back end, so we build that nice corpus of information that's highly structured to do that well. I was finding myself as a junior attorney spending quite a long time even just looking for the relevant cases for each element. And I was always worried: did I miss one of the important cases for proving this element, or did I just have a mediocre case and that's the best there is? You have that lack of confidence all the time. But I think that we can really help with that. What's happening generally is the AI is synthesizing and working through huge amounts of data. There are 11 million US precedential cases; we have all of that in our database. The AI is looking through those, finding the relevant points of analogy to your case, and surfacing the relevant ones right away. Whereas going through that process in Westlaw or Lexis just takes a long time, and doing that for each point is very repetitive and time consuming.
[00:09:09] Speaker A: And so it's like, it'll automate the research process for you. It'll speed up your time to have that research done. And you find that the research can be built into a workflow well enough that it will both reduce the time and increase the quality of the research itself. Is that the premise?
[00:09:35] Speaker B: That's right. And there are a couple of things that we do uniquely. One is we very much believe in highly visual workflows.
So instead of a chatbot, think more like TurboTax meets litigation. We want the lawyer in the driver's seat. We want everything to be highly interactive. We want to show our work and let you audit us. And we want to make sure that the steps are multi-step, visual, and so forth. And so there's a lot that we're doing where we know it might take three or four minutes for the AI to crunch the analysis; we might kick that off really early and do it in the background while you're working on other pieces.
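A minimal sketch of that background-kickoff idea, in Python with asyncio. The function names here are hypothetical placeholders for illustration, not StrongSuit's actual API:

import asyncio

# Illustrative only: start a long-running AI analysis early and collect the
# result later, after the user has worked through the interactive steps.
async def run_workflow(run_analysis, gather_user_input):
    analysis_task = asyncio.create_task(run_analysis())  # kicks off in the background
    user_choices = await gather_user_input()             # lawyer keeps working meanwhile
    analysis = await analysis_task                        # result is ready (or nearly) by now
    return user_choices, analysis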
[00:10:11] Speaker A: There's two things that, to me, are the big gnarly issues with AI and adoption right now, and if I'm a buyer of technology, I want to know the answers to these. One of them, and maybe we'll talk about this, is more broad, which is just hallucinations, guardrails. How do I know that this thing's not going to do that?
I'd love to hear your actual technical answer for that in a little bit. But I want to get to one which I actually think might be more difficult, which is the generic nature of AI on the market right now. Which is to say, look, Justin, I can't tell if I'm on your website versus someone who got out of Y Combinator a week ago with a six-week proof of concept. I can't tell the difference between the two from the website, because honestly, I can build a website. I could build a StrongSuit competitor website that looks just as good as yours, with a bunch of fake quotes and logos on it, in about 15 minutes. And I've been in the room where lawyers who are running giant firms have seen their logo on products that they've never used, that no one's ever signed up for in their firm. So even the social proof is disintegrating before our eyes. How am I supposed to make a buying decision? What questions should I be asking as a buyer? And as a seller, what do you think are the proof points that are relevant today, when anyone can launch AI really fast and the flavor, the demo, will be the same? You could push back, but I will disagree with anyone who says that, Justin, your demo is going to look substantively different from a Vercel v0 demo that got whipped together. If I'm a lawyer, I don't know the difference, because I'm not a tech guy like you are. So how do I as a buyer understand what questions to ask to figure out if you're the real deal versus some guy who just walked out of a closet with a prototype?
[00:11:56] Speaker B: I think the most important piece is: test it. So I would take a bit of time and very quickly put together a list of sanitized documents, or legal questions, or example fake client cases, and try to say, okay, this is kind of what I would expect a great answer to look like, this is the input, and let's see what tools are able to go from input to something close to that great answer in a way that is intuitive and easy to use.
Now, you're still going to want to narrow down to a few solutions; there are something like a thousand legal AI solutions on the market and you don't have time to test all of them. A few things you can do to try to find good social proof. One, where we stand out: we won the award for the best legal research solution on the market. We've got that on the front of our page. You can see this.
[00:12:43] Speaker A: And who gave you that? Because I could also make up a whole bunch of fake awards and throw those on my website too.
[00:12:49] Speaker B: That's fair. I mean, in our case, we link back to it, so you're able to go and say, okay, this was the Legal Tech Breakthrough award. Let me go and research this group and make sure it's legit. So, yeah, for sure.
[00:13:03] Speaker A: Yeah.
[00:13:04] Speaker B: So there's that. That helps. Obviously it does help to see testimonials, it does help to see the logos, but to some degree, when you're buying any piece of software, if someone's just outright lying, you can't do a whole lot about that. But the test will prove out those types of issues.
[00:13:17] Speaker A: Which I think is the unique challenge of our moment. And I've been in this industry for about three years now. To be clear, it was unsexy as all get-out when I came in; it was like, we're going through the server-to-cloud transformation. In 2019, no one really knew about legal tech, and now it's a super hot vertical, which means there are all of these new entrants. The opacity of legal tech before helped keep a lot of the scammier, prototype-stage players out, and now you have a lot of entrants and it's hard to tell them apart. I think a lot of lawyers are overwhelmed, and that's why I'm really pushing on this: what are the litmus tests you would recommend, where you'd say, here are the things that a lawyer, a buyer, should ask me, or ask me to prove, that someone who has a prototype, or a piece of trash to be more clear, wouldn't be able to do? And so you're saying the testing: give me documents where you know what the output should look like and ask me to run those for you. I think that's a great way of looking at it, because again, we're moving into this era where AI can really replicate testimonials, logos, awards, a great website, even the working prototype in the demo. So it becomes harder to get that proof point of, is this something that I should move into my firm to run a pilot? That sales cycle becomes difficult for the buyer in a really difficult way. So that's great. Are there any other recommendations? It sounds like it's not so much questions as, hey, let's give this a road test, ask me to prove this out, because that will show you whether this thing can deliver the results that I'm promising.
[00:15:00] Speaker B: Yeah, I think that's right. So two parts. So what else could you do to look for those proof points? I mean obviously if someone just outright lies to you and lies on their website, there's not a whole lot you can do about that. But you can see, are they SOC 2 compliant?
What do their testimonials look like? Are they getting traction in the market? Do they have a meaningful amount of customers? Do they have decent reviews? Stuff like that I think is helpful. But to some extent you're going to have to narrow down to a list, and probably a couple of groups on the list will look good on paper but will really fall flat.
[00:15:32] Speaker A: And Justin, SOC 2 I think is an interesting one.
Again, let's assume that you're not blatantly lying about SOC 2.
That is one that actually I'm surprised how few people ask us about SOC 2 in the personal injury space. It is a security standard that we meet and that you get audited on. But I do agree with you that it's actually not just about the security, it's about the level of legitimacy of the company, because it takes a boatload of resources to achieve SOC 2 compliance. Which means not only do you have the security in place, but you have the resources as a company to go pursue something beyond just whipping together a product. So I think that's a really good point too. Just asking about SOC 2 will help you understand the kind of resourcing the company has. Are they able to think about security standards, pass audits, deal with a third-party auditing service? That's a great point too, so I would highlight that.
[00:16:23] Speaker B: Yeah. And then on the second part, what can you do to ensure that your test is robust? I would go through and think about what are the five or so biggest types of things that your firm does and where you spend the most time. Is it on discovery and doc review? Is it on putting together a factual analysis? Is it on research? Is it on drafting, whatever it is. Come up with that list of areas where you really want to investigate, and then come up with some sanitized documents for each of those areas. Come up with an interesting fact pattern that you want to solve and see through the research and drafting phase, have a good idea of what good looks like and what the inputs are, and then just put the inputs into the different tools that you might want to test and see which one comes out ahead.
And you can run these tests fast. I mean, you can get a good sense of a tool within an hour, and be pretty robust in your testing of it, if you know what to look for and what to do.
[00:17:19] Speaker A: Yeah, I think that's a good point, and you kind of alluded to this obliquely. Don't just go shopping for AI tools. Understand where your time in the law firm is going so that you know what's worth spending time vetting. Don't just go look at shiny objects and window shop a lot. Understand:
Does our team spend a lot of time on discovery? Do we spend a lot of time drafting documents? Is it responding to RFPs? Is it trial prep? Those are different categories. You have to understand your problems before you go looking for solutions. That's well said. And I do think that understanding the scope of the problem also means that once you understand it's a big enough use of your time, it becomes rational to spend time vetting solutions for that, versus trying a hundred pieces of software because they showed up on your Instagram feed and looked cool.
[00:18:11] Speaker B: Totally agree.
And as you're doing this, one other suggestion would be: don't theorize too much about this stuff. Go test it. We've noticed that lawyers really want to strategize about the right law firm strategy for picking a tool, and really, until you're using this stuff, you don't get it. Go try some. Put together some sanitized documents, spend a few hours doing that, then go test some useful tools that seem like market leaders. You'll pretty quickly figure out, okay, here's how it works, here's the benefits, here's how we can use it in the firm. And then you'll be able to work together to figure out what is the right tool for us that makes the biggest dent in what we're doing.
[00:18:50] Speaker A: Yeah, I think that's really helpful. As our law firm owners are listening to this and thinking, there are so many solutions, every conference there are more AI vendors, how do I think about it? It's: start with the problem. Where is your time going? Where is your energy being spent? Then look for tools that fall into that bucket, then identify market-leader tools, and then actually run tests. And I think what you're saying, too, and I agree with it, is that it's a robust process. But I think the walls have to get taller and the gates a bit narrower for law firms as more and more of these tools come to market. So this is great, Justin, I really appreciate it. This is a really helpful way to think about how you adopt AI tools and how you look at AI vendors. It's really rigorous, and I appreciate it because it's helpful to hear from your perspective. You know, we have more of a platform marketplace tool.
I don't necessarily know how to tell every buyer how to look at the AI tools that are out there. So it's helpful to hear it from your perspective, as someone who's saying, yeah, I'll put my money where my mouth is, try this thing out. Ask me to prove X is a really helpful piece of advice.
[00:20:03] Speaker B: About hallucinations do you mind if I.
[00:20:05] Speaker A: Yeah, yeah, let's talk about that.
[00:20:06] Speaker B: Okay, so let me walk through technically what we're doing and why it's so important. I think most folks now, but not everybody, understand that a hallucination is where the AI might invent a case that just doesn't exist to prove a point. It will look very much like a real case and function very much like a real case in a brief or in legal research, and it can become indistinguishable. If you were to turn that in, you risk serious ethical issues; you could even potentially be disbarred. So how do we prevent that? First, when we're doing legal research, we have AI suggest relevant cases. Some of them are going to be hallucinated, but we're not showing those to the user yet. Then we take the Bluebook citation for each of those cases and throw them into our database checker, where we have a database of all relevant US cases, or all US cases, and we see, okay, does this Bluebook citation match something in our database? If so, we're allowed to show it to the user. If not, we need to go back, go through another loop, and figure out another relevant case instead of the one we suggested. We've actually been doing that for a couple of years now.
This has been live, it's been tested, and to my knowledge we have never had a hallucinated case in about two years.
This is something that you need a domain-specific tool to do well, but you definitely need this protection, so I would certainly verify that whatever tool you're using has it. That's stage one. We go a little bit further than that. We don't want to just not make up a case; we also want to make sure that the case is relevant and the holding is relevant. So we've already gone through every case in our database and figured out what the holding of the case is, what the material facts are, what the summary of the case is, and so forth, about 15 different pieces of information on each case. And then we're always going to ensure, okay, here's the holding, and here's the actual wording from the court that gets to the holding. And we're going to check quotes, make sure that the quotes lead to that holding and that the quotes actually exist. We can use traditional programming to check that: search for this quote in the case, make sure it exists. If so, we can show it to the user and then we.
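A rough sketch of that two-stage guardrail in Python. The helpers citation_db (a Bluebook-citation-to-case-record lookup) and get_case_text are hypothetical stand-ins for a real case-law database, not StrongSuit's actual interfaces:

# Hypothetical sketch of the citation and quote guardrail described above.

def verify_citations(suggested_cases, citation_db):
    """Keep only AI-suggested cases whose Bluebook citation exists in the database."""
    verified = []
    for case in suggested_cases:
        record = citation_db.get(case["bluebook_citation"])  # exact-match lookup
        if record is not None:
            verified.append({**case, "record": record})
        # otherwise the case may be hallucinated: drop it and ask the model for another
    return verified

def verify_quotes(record, quotes, get_case_text):
    """Use plain string search to confirm each quoted passage appears in the opinion."""
    full_text = get_case_text(record)
    return [q for q in quotes if q in full_text]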
[00:22:11] Speaker A: So just to recap where we are so far, this is great. You're basically treating the AI like someone who is allowed to go to the library, look for books, pull a book off the shelf, and hand it to you. Not the way that ChatGPT would do it, which is to say, hey, I'm just going to ask you from memory: what do you remember about all the books that you've seen in the library? It's a very different process. You're saying, hey, tell me all the books that you think we should look for. Maybe two of those books out of five are made up, but until they're pulled down and passed to the person who's doing the research, it doesn't mean anything. You made up a few books, fine, but all that's getting passed along are the actual books: here's the book, here's the page number. And then there's another layer saying, hey, AI, not only do you have to pull the book from the shelf, you also have to go to the right page, go to the table of contents, and point to the concepts that are relevant in that book, so to speak, and then you can present those to the user. But you're not using your own words; you're presenting against an actual database of ground truth. The LLM is not itself allowed to present the ground truth of its own accord, which is amazing. That makes a ton of sense. So you're basically saying, look, you can be a recommender. You are recommending these things. You are not the source of truth directly.
[00:23:33] Speaker B: That's exactly right, yep. And then we give the audit trail. We let users see the case, and you can click the link. So if you have any amount of doubt, you can always check and see, okay, what are the actual quotes from the case that led to this conclusion, what's the case link, and so forth.
[00:23:51] Speaker A: Which is really the danger for anyone who's using these general tools.
Even if you take ChatGPT, for example, you can't throw all the case law in, so that doesn't work. Even if you throw in some relevant documents, ChatGPT is still not perfectly trained to say, hey, I know the difference between pointing to the place in the document and making up quotes from the document that you gave me. Which is why you really need a tool that understands that, the broader tool itself.
Don't trust the LLM; only go and look where the LLM is suggesting things, but ultimately you're pulling from the ground truth, from the actual case law, the cases citing those, et cetera. And that's not available off the shelf, which is an important distinction. If your attorneys are using ChatGPT or Claude, that's fine, but they can't let citations through into their work product that have been generated directly by an AI. That's dangerous, and it just isn't going to go away. How long has it been? Three years. Hallucinations are not leaving. They're an inherent byproduct of an inferential model. And if you don't know what that means, it means you need to go and find someone who does know what that means and can solve for it.
[00:24:59] Speaker B: That's right. Yep.
[00:25:01] Speaker A: So sorry, please, please continue. This, this is interesting.
[00:25:04] Speaker B: So I think that answers the hallucination part, but the other piece that's really unique about domain-specific software, and us in particular, is that by making these flows very specific, we're able to do engineering on each step. So this was one example of an engineered step, where we were checking for hallucinated cases. When we look up legal authorities, we're doing similar checks, not just to weed out anything that's hallucinated, but also to find the most relevant authorities and search extensively, using good databases and so forth that are legal specific. So we're much more likely to get a good result that's aligned to your fact pattern than ChatGPT is, which will be much more cursory. And because we know what our goal is on every step, we can do heavy engineering using the best model for that step. We're considering a wide range of potential models to pick the best one. So we're using the best model, we're using additional heavy engineering, and then oftentimes we're running models in parallel to give you a fantastic result. These are things that just are not possible by using ChatGPT off the shelf, and your result just won't be as good.
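A minimal illustration of the "run models in parallel, keep the best result" idea mentioned here. The model clients and the scoring function are assumed placeholders, not a description of StrongSuit's actual engineering:

import asyncio

# Illustrative only: send the same prompt to several models concurrently and
# keep the draft that scores best on a task-specific rubric.
async def best_of_models(prompt, models, score):
    drafts = await asyncio.gather(*(m.complete(prompt) for m in models))
    return max(drafts, key=score)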
[00:26:11] Speaker A: Yeah. And kind of in defense of the model that you're talking about, and I would assume that you have a significantly higher price point than ChatGPT.
You're doing good old-fashioned engineering, which is going and running searches and finding cases, and you're using AI as a way to enhance that process but not replace it, which is what a chatbot will do, or a chat wrapper that's just, you know, a fancy UI for ChatGPT or something like that. So basically it's less AI, but it's more good old-fashioned engineering that works and has checks and balances built into it, which is, in my view, more credible than saying this whole thing is just AI. What you're really saying is: we use AI to speed up a process that was done by people, but is also now largely engineered, and AI helps fill the gaps to make it run faster.
[00:27:04] Speaker B: That is very well said. And we're also always surfacing our work so that the lawyer is in the loop and the lawyer is still making a lot of strategic decisions. We try to become that cape for the lawyer and make them a multiple of themselves. And in that process, the AI synthesizing information and the lawyer making strategic decisions, we think that works very well.
[00:27:24] Speaker A: And let me ask you this, and I think this is a really important question because if you answer it wrong, everyone's going to say, oh, well, this guy's just trying to sell me on something.
What are the key limitations? What are the things that it doesn't do today that you're like, man, I really wish that it would, or we can't do this yet, but this is something that people really want it to be able to do.
[00:27:40] Speaker B: So over the next six months, one of the big focus areas that we're pushing is trying to make everything talk to each other really well. Let me give an example. We do depositions, we do legal research.
After you've done legal research with us, we have background information on your case that might span hundreds of pages. When we go to do deposition prep and you're working on questions you might ask an expert witness, those questions could help you prove the elements of your case very well, but right now those pieces don't talk to each other. So we want to take all that context we have from research and feed it into depositions. Same thing with every other step, whether it's timelines or discovery or similar; we want them all to speak to each other. What we're doing over the next six months is saying, okay, let's have a way of storing memory for your case that brings in the important information, has agents that can sift through the documents you've uploaded and the work that you've done before, and then brings the relevant context into every individual API call we make, so that you get an output that reflects all the other work that you've done. So that's still something that we need to do, but over the next six months we feel very confident we'll be able to do it.
[00:28:51] Speaker A: And why is that hard today? Why is that a hard problem to solve in a product?
[00:28:57] Speaker B: A lot of it comes down to speed and then context. The AI tools today, depending on the tool you use, can only handle between about 700 and 1,000 pages of context.
But you might upload discovery that's 40,000 pages. Or when you build your timeline, you might have thousands of pages of record information, plus a lot of research you've done on all the cases and the background of the opinions and so forth. Being able to bring the right amount into context is very challenging. But we have an agent-based system that we've been building that will work very well for that.
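A simplified sketch of how a system like this might pull only the most relevant slices of a very large record into a limited context window. The chunk format and the embed function are assumptions for illustration, not the actual agent design described here:

# Illustrative sketch: split a huge record into chunks, rank them against the task,
# and pack only the top-ranked chunks into a fixed token budget.

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def select_context(chunks, task_description, embed, budget_tokens):
    """Rank chunks by similarity to the task and keep as many as fit the budget."""
    target = embed(task_description)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c["text"]), target), reverse=True)
    selected, used = [], 0
    for chunk in ranked:
        if used + chunk["tokens"] <= budget_tokens:
            selected.append(chunk)
            used += chunk["tokens"]
    return selected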
[00:29:34] Speaker A: Yeah, yeah. But essentially the amount of context is difficult to traverse. It's like asking a person, hey, remember this, and then listing a hundred things for them; it just doesn't work. At this point, that's a limiting factor for being able to know what to surface and bring it all together so that everything is presented to you at the right time.
[00:30:00] Speaker B: That's right.
[00:30:01] Speaker A: Interesting.
Well, Justin, I really appreciate you being on the podcast. I have certainly not let you off easy. You've been a great sport and I appreciate your insights both into the buying process, where we are with structuring AI, so that it's creating really, really reliable results on the output, and also what some of the limitations are and where you're headed with the product. So thank you for being on the show. I always learn a lot and I really appreciate your time and your expertise.
[00:30:33] Speaker B: Thanks for hosting. I really appreciate it.