Bridging AI & Law: Technology's Role in Legal Services with Thomas Daley and Donna Mitchell

In today's episode, we're discussing the intersection of law and artificial intelligence with our esteemed guest, Thomas J Daley. Thomas is a board-certified family law attorney from Texas, a former high-frequency trading software developer, and an innovator with a keen interest in leveraging technology to improve legal services.
In this episode, Thomas shares fascinating insights and first-hand experiences about the evolving role of AI in the legal industry. We'll discuss an incident where an AI, possibly ChatGPT, generated a legal brief with incorrect case citations, emphasizing the critical need for diligent fact-checking. We'll explore AI's potential to streamline legal tasks like document classification and discovery and the ethical considerations crucial for its responsible use.
Thomas will also discuss the future of AI in law by 2025, ways to bridge the gap between self-represented litigants and professional legal representation, and the importance of gradually integrating technology into legal practices. We'll discuss smart contracts, AI's efficiency in document handling, and his mantra of "dream big, start small" for adopting new tech.
Whether you're a legal professional or tech enthusiast, this episode is packed with valuable insights on how AI and large language models are poised to revolutionize the legal landscape. So, sit back and join us for an enlightening conversation with Thomas J Daley on the Pivoting to Web3 Podcast!
About Thomas J. Daley:
Thomas Daley is a board-certified family law attorney who practices throughout Texas and beyond. With a dedication to safeguarding what his clients value most—their relationships with their children, their financial stability, and their reputations—Tom combines razor-sharp legal acumen with cutting-edge innovation.
Before revolutionizing the practice of family law, Tom was a trailblazer in financial technology, crafting proprietary algorithms for high-frequency trading that earned him over 20 patents worldwide.
Today, he leverages his unique expertise in technology and the law to redefine the legal landscape, harnessing the power of Artificial Intelligence to help attorneys deliver better service at lower cost. He is a frequent speaker on the topic to national organizations.
Tom loves answering questions about the behind-the-scenes workings of the legal system and training other attorneys and litigants to be more effective advocates.
Connect with Thomas J. Daley:
Twitter: https://x.com/famlawyer
Instagram: https://www.instagram.com/mongovlad/
LinkedIn: https://www.linkedin.com/in/tomdaley/
About Donna Mitchell:
Donna Mitchell has achieved two impressive careers in her lifetime. She dedicated 24 years to aviation as a change agent with US Airways (now American Airlines), and 16 years with Johnson & Johnson.
As an industry-level
Connect with Donna Mitchell:
Podcast - https://www.PivotingToWeb3Podcast.com
Book an Event - https://www.DonnaPMitchell.com
Company - https://www.MitchellUniversalNetwork.com
LinkedIn: https://www.linkedin.com/in/donna-mitchell-a1700619
Instagram Professional: https://www.instagram.com/dpmitch11
Twitter/ X: https://www.twitter.com/dpmitch11
YouTube Channel - http://Web3GamePlan.com
Want to learn more: Pivoting To Web3 | Top 100 Jargon Terms
00:00 - Lawyer, formerly software entrepreneur in stock trading.
03:47 - Prioritize human judgment over AI in law.
08:57 - LLMs narrow gap, aiding equitable legal judgments.
11:13 - Efficiency with less; debate on achieving AGI.
13:04 - AI enhances you; its legality in law is questioned.
16:28 - Can't trust blindly; requires human oversight.
21:45 - AI excels in human-assisted discovery tasks.
23:59 - LLMs efficiently classify documents, aiding litigation.
26:37 - AI used in judicial decisions raises concerns.
29:35 - AI can't discern reasonable from unreasonable biases.
33:48 - Older leaders must explore and adopt new technologies.
37:38 - Mediator used for unfamiliar real estate transactions.
Thanks for checking in to the Pivoting to Web3 podcast. Go to pivotingtoweb3podcast.com to download and listen, or Web3GamePlan.com to check out the videos. Thank you. Good morning, good afternoon, good evening. Welcome, welcome, welcome to Pivoting to Web3. And today we have something special in the law and legal field. We have Thomas Daley. Thomas Daley is a board-certified family law attorney who practices throughout Texas and beyond.
With a dedication to safeguarding what his clients value most: their relationships with their children, their financial stability, and their reputations. Tom combines razor-sharp legal acumen with cutting-edge innovation. Before revolutionizing the practice of family law, Tom was a trailblazer in financial technology, crafting proprietary algorithms for high-frequency trading that earned him over 20 patents worldwide. Today he leverages his unique expertise in technology and law to redefine the legal landscape, harnessing the power of artificial intelligence to help attorneys deliver better service at lower cost, and is a frequent speaker on the topic to national organizations. Tom loves answering questions about the behind-the-scenes workings of the legal system and training other attorneys and litigants in how to be more effective advocates. Now, that's why Tom is here, because what Tom is doing in the legal field today I found very intriguing. So I'd like to introduce you to Tom Daley.
And Tom, say hello to your audience and ours, and let them know exactly who you are and how you really got into this space, using AI.
Well, thank you, that was a kind introduction. I'm glad to be here this morning. So what brought me here: for me, law was a second career, and my first career was developing software for stock traders. Essentially I worked at brokerage firms and stock exchanges, ran a nationwide consulting firm, and ultimately founded my own software company in 1998 that harnessed some pretty sophisticated algorithms. We would have called some of them AI based. It's not the same sort of AI that people use today, but it would certainly have been state of the art in 1998. And then I sold that company in 2002. So I know you've got a background in the airline industry, where everything has to be done exactly once. You know, when somebody makes a plane reservation, we don't have 15 other people typing it back in again and making mistakes with it, with all the labor and cost involved in that.
And the financial industry works the same way. We call it straight-through processing. So you come with that mindset, and you think about the efficiencies and the cost savings and the throughput that you can get with that kind of efficiency, and you get to the practice of law, which hasn't changed a lot since the Norman Conquest of 1066. So you're thinking, what in the world happened here? So bringing that affinity for efficiency, and what it can mean to clients and the practice, to the practice of law has really become my passion.
Well, your passion is definitely what's needed today, with everything and all the changes. There's a lot of concern about AI and its role in the workforce, in financials, and in security. Cybersecurity is in just about everything. So you've done a lot of work with algorithms. For those of us that don't know, how do those algorithms really play a part in the legal space, with family law and everything that you're doing?
Okay, yeah, that's probably the most important question: what do they do? What can they do? What should we ask them to do, and what should we not ask them to do? I was talking to a gentleman last night, and he was really emphasizing the human side of the practice of law. And so he kind of got me out of my little techie head, and I started thinking about this human side that he kept pressing on. And I said, you know what, we talk about artificial intelligence all the time, and we should, but we don't talk about artificial judgment, because we shouldn't. We still need to have that human-in-the-loop human judgment, because at the end of the day the law is about people. So what can AI do for us? The number one thing to look out for with these new language models, these LLMs as we call them, from, say, ChatGPT or OpenAI, or Anthropic, or Gemini from Google, is that they're very confident. So if you ask them a question, particularly if you don't pose it right, they will very often give you a very confident answer. And if you don't have the experience or the judgment to know the scope of the usability of that answer, you could be sent down a pretty dark, difficult path. So what are they good for? We know the way they work.
What they've done is they've scooped up all the text they can find anywhere on the web, and they've made a map. It's not quite this simple, but they've made a grid, basically, that says: if I hear these three words in a row, the most likely fourth word is going to be this. And so that's really what the LLMs are doing, this kind of statistical processing. Every time you ask the LLM a question, you give it what, behind the scenes, they call a creativity factor (the APIs call it temperature). I always use zero for creativity, because we're in law; we don't use creativity. We want the right answer, not an interesting answer.
So in that case, what happens is: these three words in a row, what is the most likely fourth word? Pick that one. Whereas if you give it a higher creativity factor, like you might do if you're writing a novel or a bio about yourself, it'll say: these three words in a row, here are the two or three most likely words to follow, and let's just randomly pick from amongst those. Now we have a new set of words to look at, and so it keeps generating the next part of the answer. So when you're typing in a question, or a prompt as we call it, to an LLM, you'll see, kind of like in a movie, the answer just comes out word by word. That's how it's thinking: word by word.
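Tom's "creativity factor" is what the LLM APIs expose as a temperature parameter. Here is a toy, self-contained sketch of the idea he describes; the word probabilities are made up, and real models work over tens of thousands of tokens rather than a lookup table:

```python
import random

# Made-up next-word probabilities for one three-word context.
NEXT_WORD_PROBS = {
    ("the", "court", "may"): [("grant", 0.6), ("deny", 0.3), ("consider", 0.1)],
}

def pick_next_word(context, temperature, rng=random):
    """Temperature 0: always take the single most likely next word.
    Higher temperature: sample randomly among the top candidates."""
    candidates = sorted(NEXT_WORD_PROBS[context], key=lambda p: p[1], reverse=True)
    if temperature == 0:
        # Deterministic: "the right answer, not an interesting answer."
        return candidates[0][0]
    # Keep a couple of likely words, then pick among them by weight.
    top = candidates[:2]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights)[0]

print(pick_next_word(("the", "court", "may"), temperature=0))  # always "grant"
```

With temperature zero, the same prompt always yields the same continuation, which is why he sets it to zero for legal work.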
If you say, what did you just say? It doesn't know. So if we know that it's just generating responses based on some sort of statistical probability of what came prior, we know that it's not a source of truth, and that's okay. Truth is for Google. Who is the number one podcaster in the United States by audience share amongst 18-to-24-year-old gamers? There is an objectively true answer to that, and an LLM is not designed to give you the objectively truthful answer to that. Google is, or the search engines are. Now, whether they are successful or not when you do the query, that remains to be seen.
But they're designed for truth; these are designed for creativity, at different levels. So where does that help you? Well, here's an example. I was representing a mom who didn't want her child to spend much time with dad. Dad has some pretty serious mental health issues and had a bad relationship with his girl, who was 15 or 16. You know, children get to an age where you really don't want to fight that wave. So in Texas we can ask the judge to interview a child who's 12 years or older about the child's preferences. It's not binding on the court, but it is sort of advisory. So he didn't want that to happen, because he knew how that was going to go.
So what he did was he drafted his own response to it. And again, I said he has mental health issues, and I don't think English is his first language. And so what he submitted to the court was just this insane, useless diatribe. So I file an absolutely perfect motion up here that I know is going to get granted, and he files a response down here that calls into question his sanity. And I thought, if he had just gone to ChatGPT and typed in these three sentences, would that have been better? So I did. I went in, and one of the first things you always tell an LLM is who it is, right? It's gathered up all the information that's available on the Internet, so it adopts every persona you want it to. If you tell it to become a teenage social media influencer,
it's going to respond to you with "yes girl, let's go." If you tell it you're a Texas litigation attorney, it'll button itself up a little bit, right? So you tell it: you're a Texas litigation attorney representing a dad in a custody battle. Mom has filed a motion for the child to confer with the court. You don't want that to happen. Draft a response arguing against the interview. And so those are the three sentences. And what it generated was the most coherent and, honestly, from my perspective, persuasive argument. I read what it generated.
I thought, should I really be doing this? So it wasn't generating truth, but it was generating an argument. And so if he had done that... you know, I kind of use my hand gestures here. We're performing up here as seasoned professionals, with staffs and research experience and training, and the self-represented person is down here, not because they're not as smart, but because they just don't have access to the tools and experience. These LLMs can get you up to here. They're not going to completely level the playing field, but they can narrow the gap. And that gap, in the field of law, is what we call the judge's discretion to act in an equitable fashion. So when you're this far apart, the judge can't give you what you want, because you're not even asking for the right thing, and the judge is constrained by the law. But if you can narrow that gap enough, the judge can look at it and say, okay, that's not exactly the right thing, but you're in the ballpark.
And now I can exercise my discretion to say, is this right? Is this the right thing to do? And they exercise that equitable discretion. And now you can start prevailing or at least not losing on every single issue that comes before the court. So that's a very long response to your question, like, what can they do for you? What they can do is level this playing field. They can do that for every single listener that you have. I apply it to law. I, I saw the other day, I think last month you were interviewing Mr. Anon, the branding sensei, I think was the title sensei. Yeah.
And I listened to that because, of course, branding is something we all know there's value in, but I'm kind of like that pro se litigant. I'm the guy down here on branding. You know, my education and background is in technology and stock trading and law, not in creating the public's mental image of my service offerings. Right? Like, what is my brand? And so I listened to that podcast with a lot of interest. And I think what he kept saying, and you may have your own takeaway from it, is kind of the same thing. You can use these LLMs to do better than you might do on your own. But it's not the same as hiring Mr.
Anand with his Saatchi and Saatchi background and his big brand background.
He's got some big brands. He's there with me or even further. Yeah, I rely on him.
Yeah, he's pretty impressive. But one thing he said that I took with me is: if you know what you're doing, you can do a lot more with a lot less. And that's the same thing, I think, with these AI models and helping people who are self-represented. I think one of the engineers at one of the big AI companies said the other day that he thinks they've achieved artificial general intelligence, AGI, which is their academic way of saying a human level of intelligence. And I don't like to get really into semantic battles with people, but I think we could probably talk for three solid days without breathing and not come to an agreement on what intelligence is, yet we think we've got an agreed definition of artificial general intelligence. So, whatever. I think the first brick hasn't been laid yet, and we've got a wall on top of it.
But what he said was: I don't think our models are better than every human at every task, but I think they're better than most humans at most tasks. And I don't know if that's accurate or not, because my use of it is very limited; it's in the field of family law litigation. But I think he may be right. I tell some of our attorneys who don't do a great job of writing emails, either their emails ramble, or they may not be in the best English that we were taught in 10th grade, or they may be a little too acerbic.
Why not paste that email draft into Claude or ChatGPT or Gemini, whatever you like, every time before you send it, and tell it: you're a Texas family law litigation attorney who speaks in a succinct and direct manner but is also polite; redraft the above email. You can almost copy and paste the result. Of course, we're all going to start with "I hope this email finds you well"; you've got to get rid of that sentence. But other than that, they will improve your performance.
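The email-redraft recipe is just string assembly before the text ever reaches a model. A minimal sketch of that pattern, with hypothetical helper names (`build_redraft_prompt`, `strip_stock_opener`); the persona wording is taken from Tom's description, and the actual call to Claude, ChatGPT, or Gemini is deliberately left out:

```python
PERSONA = (
    "You are a Texas family law litigation attorney who speaks in a "
    "succinct and direct manner, but is also polite."
)

def build_redraft_prompt(email_draft: str) -> str:
    # Persona first, then the draft, then the instruction: the three parts
    # Tom says to paste in every time before sending.
    return f"{PERSONA}\n\n{email_draft}\n\nRedraft the above email."

def strip_stock_opener(redraft: str) -> str:
    # Models love this stock opener; delete it before the email goes out.
    return redraft.replace("I hope this email finds you well.", "").lstrip()

prompt = build_redraft_prompt("need them discovery docs by friday or we file a motion")
```

The same template works for any recurring tone you want: change the persona line once and reuse it across drafts.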
Well, at the end of the day, you're absolutely right, because AI is really going to enhance who you are. It's going to enhance your intelligence; it's going to bring you to the next level. It's not going to replace you, but it's definitely going to help you, no matter what sector or industry you're in. But with that said, everyone is using it in different ways and learning how to use it at this point. In the field of law, should people really be looking at the ChatGPTs of the world, putting their case in there, or learning how to utilize it as an argument when they talk to their attorneys? Is that ethical? Is that legal? Is that going to cause a lot of problems? Is there any governance in that area yet? Is it moral, ethical? I guess all of that's my question: where is that falling right now in the conversations, internally, with governance and justice and the attorneys in the legal field? What does the conversation look like that we don't know about?
So that's a powerful set of questions, because that conversation is going on all up and down, from the Supreme Court of Texas (I don't know outside of Texas) all the way down to the individual paralegals working in each firm. And really we have two things we concern ourselves with. The first is disclosing confidential client information. You can go to these LLM models' websites and read their privacy policies, but you need to read them every day, because they change every day. So, I work at a pretty large law firm, but I work in the Plano office, and the first thing I did when I wanted to start having our attorneys and paralegals there use AI was to have them go through a Coursera course that was sponsored by Google. So it's very Gemini biased, but, you know, who cares? It was about the ethical, responsible use of AI. It didn't teach them anything about statistical models or the math. They don't need to know that any more than I need to know how a spark plug works to get out of town. So the two things we learned. Say we have a client document, like a settlement offer, and we want to... well, I'll give you a better example. Let's say we have a pleading; somebody's filed for divorce. What are they asking for? What are they upset about? A quick way to find out: you can just drag that up to one of these LLMs and say, read the document carefully and tell me what the salient issues are. And it'll do a pretty good job of that. But it also has everybody's names in it. It's got children's names and children's birth dates and that sort of thing.
So the first thing I told them: look, the privacy policies, when you visit those websites, would lead you to believe that it's safe to do that. But can you imagine what will happen to your reputation and your business if that turns out to be wrong? So the first thing you do is just go in there and do a search and replace and get rid of the names. Change Tom Daley, you know, to husband or father, change the other party's name to wife or mother, and change the kids' names to, you know, thing one, thing two, thing three. Because the names of these people aren't important. Their addresses are not important; their birthdays are not important. It's the legal substance that's important, and you don't change that. Then drag it up. First lesson, then, is to protect client confidentiality.
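That search-and-replace sanitization step can be scripted. A minimal sketch, assuming a simple name-to-role mapping; a real pipeline would also scrub addresses, birth dates, and account numbers, and would still need a human check for nicknames and misspellings:

```python
import re

def sanitize(text: str, replacements: dict) -> str:
    """Swap party and child names for neutral roles before any text
    leaves the firm for an LLM. Whole-word, case-insensitive."""
    for name, role in replacements.items():
        text = re.sub(rf"\b{re.escape(name)}\b", role, text, flags=re.IGNORECASE)
    return text

pleading = "Tom Daley seeks custody. Tom Daley and Jane Doe have one child, Sam."
clean = sanitize(pleading, {"Tom Daley": "Father",
                            "Jane Doe": "Mother",
                            "Sam": "Child 1"})
# clean: "Father seeks custody. Father and Mother have one child, Child 1."
```

The legal substance survives untouched; only the identifying details are gone, which is exactly what the privacy concern requires.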
The second is that, you know, you cannot trust this thing blindly. It's like having a brilliant associate attorney working for you with zero life experience, kind of a Rain Man type person working for you, right? They've got infinite mental capacity but no life experience. And so human in the loop is a concept we throw around a lot. Once you have sanitized any data that you're going to send to the LLM, so that you protect client confidentiality, ask it your questions. And I teach them how to ask questions in a manner that's designed to get responses that are useful.
Look at it with your own eyes. There's a really famous example, I think it may have been in New York, but I might be wrong about that, of an attorney who went to, I think it was ChatGPT, it was the first one, and he typed in something and it wrote a brief complete with case and statute citations.
But it wasn't real.
Wasn't even real. Wasn't even real. And that was. What was that? That was probably over a year ago, wasn't it?
It was about a year ago. Those are really what they call hallucinations, you know, where it really just compiles something. When I heard that, I thought: you do have to double-check your work, or at least research it a little bit, so you know you're getting accurate information. That's whether you're behind a privacy wall with AI or using something that's out there in the public. A lot of people are still working in public; I have kind of, you know, gravitated to something with more privacy. But yes, please continue. But that was recent.
In the last 12 months, it's still recent enough for everybody to take heed.
Well, they should, because I frequently ask the LLMs the same sorts of questions. And like I said, one of the problems with them is they are confident little suckers. I had a question the other day. You know, because law was my second career, I've been practicing law about 17 or 18 years now. And you'd think in 17 or 18 years in a narrow field I should know, maybe not everything, but a whole lot of everything, right? And every once in a while you bump into a real fundamental question.
When does property stop accumulating in the marriage? When the judge says from her mouth, from the bench, you're divorced, or when she signs an order? Because those are very often two different dates. So I asked the LLM: when do we stop accumulating community property in Texas, Texas being a community property state, upon rendition, which is the oral pronouncement of judgment, or upon entry, which is the signing of the order? And it came back with a confident answer, and it gave me case cites. So, you know, I swiped the case cite, I go to Westlaw, I'm going to look at it myself.
Okay. The first case does exist, and it was topically relevant, but it didn't stand for the proposition that it was cited for. The second case I swiped and looked up was a criminal case. It had nothing to do with when you divide property and divorce in Texas. It had to do with some sort of criminal appeal. I have no idea. It was completely irrelevant. So I typed back, hey, you know that citation you just gave me above? And I pasted back in because it's still in my copy buffer.
Do you really think that stands for the proposition for which you cited it? It churns: no, I'm sorry, it doesn't. I should have checked closer. So those are the downfalls. One, they're very confident, so if you just adopt that confidence without checking, you're going to get hurt bad. And the second thing is, they don't have feelings, but it feels like they do.
Particularly if you're in a transactional world, where a lot of the people you deal with don't seem to be very warm and fuzzy. You know, when I'm busy, I can be very transactional. I don't mean to be all the time, but you're trying to move things along. And so then you type into this computer and it says something like, oh, that's horrible, yes, of course I'll help, and you think, oh, there's some warmth out there. So it'll be confident when it shouldn't be. And then you'll start to think that it feels, that it understands, and it really doesn't. Remember, all it's doing is that statistical processing. I heard these three words.
What's the most likely fourth word? And I use three as an example. When you see these new models come out, one of the things they brag about is their look-back buffers, how much prior context they can take into consideration, and it's thousands of words now. It can remember a whole lot about you when it answers. It can remember stuff about you from five days ago when it answers the next question. But those are the two big downfalls we taught them about in that course. We didn't even teach them how to use it.
Like how to type in a prompt accurately. It's more that these are things to look out for: overconfidence and fake empathy.
So what do you think is going to happen with the legal field and attorneys in 2025? What trends do you see coming your way?
So the trend I see, and I hope we can get the ethics and appropriateness piece added to this: there are some people being very successful in applying AI to the field of law. And so when you hear, oh, Tom is using AI to handle incoming discovery requests, doing it very successfully, saving clients money, and moving things along quicker, you think: let me do the same thing.
Well, how does that happen? What do you mean by that? Paint a picture on what that means.
Sure. So what are good tasks for the AI right now? Good tasks are tasks where you can throw a human in the loop and check for ground truth. Let me give you an example. In the practice of litigation law, certainly in family law, probably in any litigation field, one place where you burn money is in discovery: requesting and obtaining documents you need to help prove up your case. And it can be a disaster. And there's a pace at which that works. Let's say I need a bunch of financial information, and so I send you the requests. I say, Donna, on behalf of your client, please provide the following bank statements, and you have 30 days to do it. You'll wait till 11 p.m. on the 30th day, like, ah, I'll tell you what I'm going to do.
I'm going to give this to Tom at the last possible second. I'm going to dump 2,000 documents on him, and that'll keep him out of my hair for a couple of weeks while he sorts through that. You know, a lot of people think that way; I've thought that way. So you dump those 2,000 documents on me. What happens next? In the traditional path, we pay a paralegal or a clerk to go through those 2,000 documents. Now work with me on the math here. Let's say you pay this clerk, or bill this clerk out, at say $200 an hour.
We don't bill them out at that much, it's slightly less, but in your mind it's easier to multiply something by 200 than by 167.50. So anyway, say you're billing out at $200 an hour. Let's say they're going to take anywhere from 15 to 25 hours of billing time to go through those documents; at $200 an hour, that's $3,000 to $5,000. And to put in that billing time may take them a week or two because of other projects. So you've got two weeks of work and several thousand dollars to go through all those documents. That is a classic document classification problem, one that people have been working on since the 1950s.
And so the LLMs can be very useful there when you get a mass of documents. I work with the LLMs through their APIs, their programming interfaces, as opposed to typing into the web browser, so I have different safety and firewall capabilities there. And it can do a very good job. You send in a document and ask: what is this document? But we don't ask it that open-ended, because one time it'll say it's a bank statement, the next time it'll say it's a statement from a bank, and the next time it'll say something similar but not exactly the same. You want the labeling to be consistent.
So you give it a finite list and you say: is this a bank statement, a credit card statement, a retirement statement, a tax return? You know, give it a whole list of what things can be, and then it does that classification, and you'll find probably greater than 90% accuracy in a task like that. And it'll take maybe an hour, at a hard cost, by the time you pay the LLM's API fees and maybe amortize some Amazon EC2 or Microsoft or Google cloud-based computing costs, of about $50. So that's a great task for an LLM: classic document classification. It can be offloaded, and it addresses an enormously important part of the litigation process in terms of being able to respond quickly. Because now the model's going through the documents in an hour. I can go to lunch, and when I come back I've got what I call a compliance matrix that shows me everything that they didn't send me. I can send that to the opposing attorney two or three hours later. I do keep a human in the loop.
We do have somebody go through it to make sure we really didn't get it. And, you know, okay, so you thought you were going to dump those documents on me to keep me busy for two weeks? I'm back at you right after lunch with what you missed. So it's a big difference, a huge difference. And it gives our clients an advantage in the sense that we're able to move the game along at a slightly, or maybe even tremendously, greater pace. It lowers the cost of servicing the client, which is a benefit to us because we don't have to hire as many people to do it; we can do more cases with the same number of people we have, and the client doesn't have to pay as much.
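The classification and compliance-matrix workflow Tom outlines can be sketched end to end. The labels and the two helper functions below are illustrative, not his firm's actual system: in a real pipeline, each document's text would go to an LLM API with the same finite label list in the prompt, and a paralegal would verify the "missing" list (the human in the loop).

```python
LABELS = ["bank statement", "credit card statement",
          "retirement statement", "tax return"]

def constrain_label(model_output: str) -> str:
    """Force free-text model output onto the finite label list, so the
    labeling stays consistent ("bank statement", never "statement from a bank")."""
    lowered = model_output.lower()
    for label in LABELS:
        if all(word in lowered for word in label.split()):
            return label
    return "other"

def compliance_matrix(requested: set, produced: list) -> set:
    """Everything we asked for that the document dump did not contain."""
    return requested - set(produced)

# Pretend these strings came back from the model for two produced documents.
produced = [constrain_label("This is a statement from a bank"),
            constrain_label("2022 tax return, Form 1040")]
missing = compliance_matrix(set(LABELS), produced)
# missing: the credit card and retirement statements were never produced
```

The set difference at the end is the whole "compliance matrix" idea: requested categories minus produced categories equals the letter you send opposing counsel after lunch.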
It truly is win, win, win. But that didn't happen overnight. There were months and months of disappointing results, of different experiments, different ways of posing the questions and staging the data, and different mixes of technology. We've all dealt with PDFs. You get a nice downloaded, searchable PDF; anybody can figure that one out. Yet clients will take a document, and the way they scan it is they hold it sideways with their thumb across the most important part of it and send you a picture of that. You know, they'll say, hey, what does this mean? And with your own human eyes you're like, I can't even read that, because of the parallax distortion of the fonts. So you need some technology to help you straighten out the text. And it took months and months to pull together just the ancillary technologies to do that. But once you do, that is an excellent task for it.
Are judges using AI? Well, can you share what you can?
I don't think as much. There were some really famous, tragic stories of judges using AI in the criminal field, in some of these very large cities. Los Angeles in particular is where I remember one horror story. There was this company that said: hey, judges, you know how busy you are all the time? And some of them truly are very, very busy, and there's the whole concept that justice delayed is justice denied. They said: you know how you have to make hundreds or maybe thousands of bail decisions per week? What if we could gather data, apply it to our model, and tell you whether the defendant in front of you is a likely recidivism risk if they're released back into the community, or whether they're a safe member to send back to the community? Okay, that sounds great, doesn't it? Now we can start being objective, so we don't have the judge's bias in making those decisions. We can make those decisions quickly so people don't rot in jail waiting for a bail decision to be made.
And maybe we can keep dangerous people off the street while sending safe people who just had a bad day back to work to take care of their kids and our money. Sounds great. Here's the problem.
Yeah, but wouldn't there be biases in there too?
So that's the problem, and the biases were horrible. Before I tell you what happened, think through how these AI models work: they're statistical processors. They don't know an unreasonable bias from a reasonable bias. For example, if you were to ask an AI model, I'm going to go to war tomorrow, what weapons should I use? Statistically, the answer is stone axes and spears, because more wars have been fought and won in human history with those instruments of destruction than anything else we know. That's a good way to lose a war tomorrow. The model has this bias of 10,000 years of human history, and modern warfare, fighting wars with horses and bullets and bombs, only started recently. So now, and you're going to know where this goes as soon as I tell you this next fact, they had this sheet that they filled out about each defendant. And here are some of the questions on there: ethnicity and education. Well, we know in the criminal justice system we ram a lot of people of color and minorities through the system.
And if you had a system that rammed a lot of left-handed people through, even though left-handedness isn't what made them commit a crime, they just happened to be left-handed, we would not let left-handed people have bail. We would make adverse bail decisions against left-handed people, because more crimes in this database, because of the history of the data, are being committed by left-handed people. But left-handedness had nothing to do with it. In the same way, a person's ethnicity had nothing to do with why they committed that crime, but it was a question on there. Their education level probably didn't make them commit that exact crime that day. It may have put them in a place where they had to make tougher decisions than some of us have to make each day about how to put bread on our plates.
But the education level didn't make a person commit a crime. And when those factors were in there, the AI couldn't tell the difference between a reasonable bias and an unreasonable bias. We would think a reasonable bias would be: was the crime committed using a weapon? I think that's a fair one to know. If this person is running around our community brandishing guns, then maybe they should cool their heels in jail for a little while while they're waiting for their charges to come up. That would be a bias, but it would seem like a reasonable bias. Or: they've had six prior convictions for dangerous, violent behavior in the last eight months. That would seem like a reasonable bias.
This person's got a proclivity for that kind of activity based on his own conduct. But those other factors led to the denial of bail for thousands of people of color, people with lower education, and people who came from certain neighborhoods. And we know that's the wrong outcome. But judges just cranked that through. It became a national embarrassment, and I don't think anybody uses those sorts of systems anymore. At least certainly not judges. I don't know if prosecutors or police forces use them.
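Tom's left-handedness analogy can be made concrete with a tiny simulation. In this illustrative sketch (made-up numbers, not the real bail system's data), everyone reoffends at the same true rate regardless of handedness, but biased enforcement means left-handed reoffenders get recorded more often, and a model that just learns frequencies from that history scores left-handed defendants as higher risk.

```python
import random

random.seed(0)

def make_record():
    """One synthetic defendant. True reoffending is independent of
    handedness; only the chance of being *recorded* differs."""
    left_handed = random.random() < 0.5
    truly_reoffends = random.random() < 0.3      # same rate for everyone
    catch_rate = 0.9 if left_handed else 0.5     # enforcement bias
    recorded = truly_reoffends and random.random() < catch_rate
    return left_handed, recorded

history = [make_record() for _ in range(10_000)]

def risk_score(left_handed):
    """A naive 'model' that just learns conditional frequencies."""
    group = [rec for lh, rec in history if lh == left_handed]
    return sum(group) / len(group)

# The model inherits the enforcement bias: left-handed defendants
# look roughly twice as risky, though true behavior is identical.
left, right = risk_score(True), risk_score(False)
```

Nothing in the score distinguishes a causal risk factor from an artifact of who gets caught and charged, which is exactly the failure Tom describes.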
Yeah, I was just kind of curious about that, if any judges were really using it at this point in time.
Yeah.
From a perspective of the law.
And the thing to think through is, what is a judge in our legal system? There was a guy, I can't tell you his name right now, but he wrote the book Getting to Yes. I think he was a Harvard professor. Skinny little book. Really good book to read, if you're into skinny but profound books. And he had a TED Talk several years ago.
In his TED Talk, he talked about a woman who solved a community problem with her camel. I don't remember the exact story, but I've morphed it, and I use his story this way: in a family law case in particular, there are always at least three sides. We always think of two sides, but there are three. There's spouse one, there's spouse two, or parent one and parent two. But the third side is the surrounding community's interest in predictable and just outcomes.
Well, spouse one hires me to represent them. Spouse two hires you to represent them. Who represents the surrounding community's interest in predictable and just outcomes? The judge. That's the judge's job. And that's why I say we talk about artificial intelligence, but we should not be talking, in any positive manner, about artificial judgment, because that's where the judge closes the gap. In a place like Texas, we elect our judges.
So, you know, there's good and bad with that. But what it means is that the surrounding community here has said, Donna, we want you to have the authority to sit up there and resolve disputes in our community where two people can't agree and can't resolve their own problems. We want you to do it because we trust your moral compass, your judgment, your direction, your experience. Or maybe we want you to do it because you vote the way we do for President. Whatever our reasoning is, we want you to resolve our disputes for us.
And so when judges start using technology as a substitute for judgment, they're thwarting that democratic process and the intent behind the judge's role in the legal system.
So what keeps you up at night with all the AI and all the changes you're seeing? What really keeps you up that you're concerned about going forward into 2025?
Well, I'll be honest with you, the thing that keeps me up at night is excitement, because I see what's coming and it is going to be big. You know, I'm 60, and the field of law is governed by people who've been practicing law for 35, 40 years. Usually by the time they're my age, that's not my case, but it is the case for a lot of them. And they've done well. They've done a fantastic job with thousands of clients over their careers. And so it's, you know, this font works. Courier always worked in my pleadings, so that's what I still use. This works, that works. Why would I want to adopt some of these new technologies? That just sounds like so much hocus pocus.
So you don't think it's going to be adopted? Is that what keeps you up at night?
I think it will be, but the people who could be at the forefront of it aren't. When I look at my firm, just in my own office, we've got three people who are kind of my age, sort of looking toward retirement, who have had fantastic careers. But behind us there is a group of really smart attorneys who need us to be exploring the safe and ethical implementation of these technologies so that, as the legal landscape changes, they're not caught off guard and don't have to make a radical, experimental change in their lives. Things work fine the way they've been working for the last thousand years in terms of doing pleadings right now. But we see the promise of greater efficiency and, in some cases, even greater accuracy. So if you can get people my age to start adopting these technologies and these promises now, we can be leaders, so that when the 40-year-olds come along and reach our age, it will have been a smooth adoption.
The wheel stayed on the car, the car stayed out of the ditch, and everybody made it to their destination safely.
So that sounds like your wish list.
That's the wish list. Thus my evangelizing: you can do this, you know. Things don't have to be big. You don't have to start big. I have an uncle who's a strange dude, but he is a really smart guy, and I was talking to him about wind-generated power one time.
I thought I was well briefed on the topic and was able to hold my own, but man, he shot past me so quick, I could just kind of shake my head. But we had this conversation.
Well.
So later on, he sent me a Coke bottle with a little kind of pinwheel on top of it, hooked up to a little electric motor. And if you blew on it, it would make a light come on. And he said, dream big, start small. And that's really the point of my evangelizing here, is dream big, but start now and start small.
So that's your message to attorneys. And I've got a last question for you. I'm not sure if you're aware, because this is more on the blockchain technology side, but there are attorneys that are starting to do smart contracts. How do you feel about that? Are you familiar with smart contracts and that whole piece? What do you think about that? I'm just curious.
Yeah, so from my research and reading into smart contracts, I see that, say you've got an oil and gas contract, that makes sense. You pumped what you pumped on the day that you pumped it, and you're entitled to this sort of payment. That seems like something that works, and it's data-based, so data can be fed into the smart contract system. The contract can know when to execute the next step and kind of act as an automated escrow agent or payment agent. In family law, though, so much of our stuff is different. Let's say you have a parent who uses drugs. I believe in possession schedules, custody plans, that always give that parent a path to the very top, to where they would be if they were a healthy parent, but through phases, and it's up to them to work the phases. A lot of them don't, but let's give them this redemption plan so they can get there. So we'll have things like: dad, after a year of clean drug tests, will do an evaluation with Dr. So-and-so. And if Dr. So-and-so believes dad has taken responsibility for his addiction issues, and maybe some other things in there, then he'll move to the next level. Well, there's a lot of subjectivity in that, which I don't think is amenable to smart contracts. We do have property division issues, though, where once I receive this payment from you, you receive this piece of property from me.
And sometimes we do have to get a mediator to act as kind of an escrow on that. Let's say that my client is supposed to sign a deed conveying a piece of real estate to your client, and your client is supposed to transfer title to something else to me. Well, if I know you and we've been doing this together for years and years, you and I will just exchange documents. I know you're not going to hand it over until you've given me what I need. I trust you. But if we have two attorneys who don't know each other, maybe I'm working a case out of New Mexico with probably a fantastic attorney over there, but I don't know that person.
Then I want a third party to hold my client's title document, so it's not released until the escrow agent receives the payment, just like you would do at any kind of closing. And I can see that for things like payments and conveyance of real property. Payments are already very automated, but if conveyance of real property were a little more automated, I could see the promise of smart contracts there. And I could also see people learning how to use AI, the LLMs, to articulate the software, so to speak, that goes into a smart contract, so they state it in a manner that is objective and measurable and actionable.
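The closing-style escrow described above is the kind of objective, measurable logic a smart contract can encode. Here is a minimal sketch in plain code (the class, names, and amounts are illustrative, not a real blockchain API): the deed is released only once the contract holds both the signed title document and the full payment.

```python
class EscrowContract:
    """Automated escrow agent: hold the seller's deed until the
    buyer's payment arrives, then release it exactly once."""

    def __init__(self, price):
        self.price = price
        self.deed = None
        self.paid = 0
        self.released = False

    def deposit_deed(self, deed):
        self.deed = deed

    def deposit_payment(self, amount):
        self.paid += amount

    def try_release(self):
        # Objective, measurable conditions only: no Dr. So-and-so
        # judgment calls, which is why this fits property transfers
        # but not phased custody plans.
        if self.deed is not None and self.paid >= self.price and not self.released:
            self.released = True
            return self.deed
        return None

# Demo: nothing is released until both sides have performed.
escrow = EscrowContract(price=250_000)
escrow.deposit_deed("signed deed (illustrative)")
before = escrow.try_release()        # still None: no payment yet
escrow.deposit_payment(250_000)
after = escrow.try_release()         # deed released
```

On an actual chain this logic would live in contract code (for example Solidity), but the point survives in any language: every condition must be something a program can verify from data.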
Thank you so much for that. So I don't want to keep you too long, but my last question is how can people reach you? And what is it that you haven't shared that's burning in your heart that you want to make sure you get out to the audience?
I appreciate that. So I have a link on my bio page, which gets you to my law firm website and my bio on there. And that is the way to get hold of me. That's how my wife gets hold of me, and my mother, and clients, and people who are curious, and that sort of thing. And I'll tell you, the secret is that one of the links I think I gave you was to a site called Marriage Docs Store. That site is nascent; it's in its infancy.
But the idea there is to start taking some really simple legal documents that people frequently get wrong, or that they might hire an attorney and pay several thousand dollars to do as kind of a little one-off thing, like evicting a boyfriend from your apartment. You've got to go through that process, but do you really need to hire me to do it? I mean, I'll do it, but there's some paperwork that, if you'll file it, puts the thing on autopilot. So I'm trying to make some of that stuff available to people, so they can help themselves, and help themselves better, at a lower cost.
Oh, how exciting. Thank you so much for that. With that said, I really appreciate having you here. I hope everybody's learned something today from Tom Daley about the legal system, and thank you for really delving into AI and the law. So good afternoon, good evening, and good morning, and thank you for sharing and shaping tomorrow together on the Pivoting to Web3 Podcast. Thanks for checking in to the Pivoting to Web3 Podcast.
Go to pivotingtoweb3podcast.com to download and listen, or to Web3 Game Plan to check out the videos. Thank you.