Ethics in #legaltech – A Discussion

(The following is an adaptation of a talk I’m giving at the 5th Consortium for Access to Justice Conference on Friday, April 13, 2018.)


First, a disclaimer.  I am not speaking for the ABA at this moment or ever.  The ABA is a membership-driven organization, and the only people who speak for the ABA are its elected officials or their designees.  Which I am not.  They’re always pretty cautious about this, but I have a feeling that given the subject I’m going to be talking about, they’re going to be super-hinkey about it, so I just want to make it abundantly clear.  As for me, I’m just a schmuck who works in a cubicle on the 17th floor making websites and occasionally mouthing off on Twitter.  But I will take a moment to plug the fact that if you don’t like what the ABA does, become a member and work your way up the leadership ranks.  That’s how change will happen.

Secondly, keynotes and plenary talks are not really my bag.  I’m much more interested in communal learning and discussions and unconferences, not the “sage on the stage.”  Especially because I do not feel particularly sage-like with regards to what I’m going to talk about today.  I will be the first to admit to you that I am still early in the learning phase on this subject.  I am definitely not holding myself out as an expert, but the good news is that I don’t think anyone is.  From my research, this area is so new and elastic that no one has mastered it yet.  There are parts of it that people can claim to know a lot about, but it’s my belief that they’re not seeing the whole picture.  They’re like the blind men feeling the elephant.  And as far as experts go – and part of this is due to my open source preference for community-made decisions – I wouldn’t want a few people making these decisions about what’s right.  This is something that should be decided by the community.  I’m not promising any answers today.  I’m just hoping I help you think about the issues (some of which have hopefully occurred to you before) and get and keep the conversation rolling.

With all that said….

I’m going to start with a literary allusion.  Which I love.  They make me feel so smart and cultured.  And this one is actually a two-fer, which makes me feel extra special.  But I will explain them so we can all be in on the secret, in case they were a mystery to you.

My talk is titled “Legal Tech: The Modern Prometheus.”  We start in ancient Greece with the myth of Prometheus.  As with all myths, it varies from telling to telling, but the basics are this: Prometheus was a Titan, one of the race of gods that preceded the Olympians.  He may be responsible for making mankind.  But what he’s definitely responsible for is stealing the secret of fire from Zeus (the numero uno God) and giving it to man, which allowed mankind to become technologically advanced.  And possibly brought war to them.  For his trouble, he was strapped to a rock by Zeus, where every day an eagle eats his liver, which grows back overnight, and there he shall remain for eternity.  Don’t cross Zeus, y’all.

The second allusion is related.  The official title of Mary Shelley’s Frankenstein is Frankenstein: The Modern Prometheus.  You may be more familiar with this story than Greek mythology, but still I’ll review.  Dr. Frankenstein decides to sew together a bunch of parts of corpses, which he then reanimates.  Two problems: 1) The creature is hideous.  No one can bear to look at him, including the good doctor.  And 2) prompted by number 1, the monster goes on a murderous rampage.  Oops.  They end up in the Arctic, where Dr. Frankenstein dies pursuing the monster, and the monster, his creator gone, goes off to end his own existence.

What does this have to do with Legal Tech?  Well, both of them ask a question and it’s something I’ve been wondering for a while now with regards to legal tech.  That is…. “What responsibilities do we have when we make and use legal tech?”  And by that, of course, I mean “What ETHICAL responsibilities do we have when we make and use legal tech?”

Yes.  Ethics.  And this is where people get hinkey.  It’s such a loaded term, especially in the professions that have an ethical code, but I find that people bring a lot of baggage to the discussion as well.  There are internal codes of conduct – almost connected to one’s religious beliefs – and professional codes of conduct that are immortalized by our professional associations and governing bodies, and then there’s the “well, that’s just not what a [insert profession here] does!”  Legal tech ethics is extra confusing because we have three distinct groups of users and creators of technology.

There are the technology creators, who may or may not be lawyers.  I always think the question for them is like that quote from cinema classic “Jurassic Park”: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”  When your primary goal is to push product and push the realm of what’s possible, it’s not always easy to think about the consequences of those actions.

Then we have lawyers using (and possibly creating) technology.  I don’t know if he’s still saying it, but a few years ago at least my friend Ed Walters (the CEO of Fastcase, adjunct professor at a couple of T-14 schools and all around awesome guy) was fond of saying in talks that “data is the new oil and we should start drilling.”   And that’s true!  Especially when it comes to things like legal data such as what’s produced by courts, but there’s also a lot of data housed internally in law firms that can be analyzed and used to make decisions about how things could operate more efficiently.

But as I saw recently – and forgive me, I don’t remember who said it or where I got it, but I can guess it’s from Twitter since that’s where I get 90% of all my information – “data was the new oil, but really it’s the new nuclear waste.”  Think especially in terms of user data.  You have to protect that stuff!  And dispose of it properly.  Let’s not forget, the Panama Papers leak – probably the biggest leak (by which of course we mean theft) of private information ever – happened because a law firm got hacked.  So do you keep data so you can analyze it when you know it’s a ticking time bomb that is liable to get stolen at any time?

Finally, we have the non-legal-professional user of legal tech – the consumer-facing tools and systems that make it easier (hopefully) for an individual to participate in the legal system.  What types of duties are owed to them by the makers of legal technology?  It seems like so much of the dialog is about protecting them, but do they have any responsibilities as users?

Given the prevalence of “technology” in our lives – said with quotes because, as an anthropologist and a librarian, I can give a whole other talk on “What Do We Mean When We Say Technology?” (I have actually given a talk called “Books Were the Original Legal Tech”), but for today we can assume I mean computers and things that operate on computers – you’d think the field of tech ethics would be a little more developed.  I mean, it is; there are reams of academic works on the subject.  But very few practical guides.

Tech ethics seems to start with World War II.  And it was easy then, although it failed.  There was an obvious Big Bad: Hitler.  And tech helped him!  IBM supplied punch-card tabulating systems that made the Holocaust operate more efficiently.  Clearly very, very bad.  No excuse for doing it.  But what about the scientists on the US side making atomic weapons?  Yes, that’s where the navel-gazing begins.

Today, depending perhaps on your political views, some Big Bads do still exist – and technologists puzzlingly aren’t uniformly against them – but the “bad guys” maybe aren’t as obvious.  There are hackers to be aware of and guard against.  But maybe you yourself are the bad guy or gal, doing something with technology that is putting someone vulnerable at risk.

But those are general technology ethics.  I want specific legal tech ethics.  I looked around and I found a couple of different things that came close, but nothing that directly answered my question of “what the heck am I supposed to do????”

First of all, I found that everyone really wants there to be ethics involved in the use and creation of legal technology.  A lot of the criticism of legal tech makers is that they are somehow violating lawyer ethics, even while the makers maintain that they are not lawyers and what they do cannot take the place of lawyers.  I don’t know how familiar you are with GamerGate, but sometimes – just sometimes – it feels like a GamerGate-esque “Actually, it’s about ethics in legal technology” type of straw man argument.

Let’s be clear: I want there to be ethics in legal technology.  I’m a German-Catholic-Lawyer-Librarian.  I like rules and feeling guilty. I’m just saying that it’s easy to pretend to have the moral high ground in professional discussions when you frame yourself as the only one that cares about ethics.

Some companies have vague statements that show their commitment to behaving in an “ethical” way.  Google’s “Don’t be Evil” comes to mind.   And that’s good, I guess, but there’s a lot of grey area between “evil” and “not evil” and ironically, I don’t think Google has managed to stay out of the black.

Of course, for those of us that are lawyers, we have the ABA Model Rules, which are used by the states to craft their professional rules of conduct.  And we’ve recently (I mean, it’s a couple years on now) had some big news on this front.  ABA Model Rule 1.1 is about lawyer competence.  Well, there’s now an official comment that says that as part of that, lawyers need to be technologically competent.   This has been adopted in over half the country.  Bob Ambrogi keeps an ongoing list of adoptions if you’re curious.

I don’t feel like the model rules are quite sufficient.  In some ways, applying them here is trying to pound something into a shape it’s not meant to take.  These are rules for lawyers, not technologists, and as such they’re never going to be a perfect fit.

That’s not to say we should ignore the model rules.  No, I have a mortgage so I would like to be very clear that I am pro-model rules.  This just means that for people like me who are both lawyers and technology creators, we have twin masters that we must obey.

Then we have the fun thought experiments like “What happens when something like the Terminator really happens and robots start killing us?  Who’s legally responsible for that?”

And finally we see some proactive actions by various industry participants like the Legal Cloud Computing Association that’s come up with a set of standards for its members to use in their products.  These aren’t quite ethics, but they’re close.

And that’s it.

It’s really tempting to want to make a list of absolute dos and don’ts for using and creating legal technology.  But there are problems with that.  For one thing, when you get exacting rules, there are exacting ways around them.  Also, tech changes fast and the processes around making rules are…not.  I mean, the ABA just came out with a rule about blogging in 2018.

So what’s better than rules?  I must tell you, there’s a little voice inside my head screaming “Nothing!  Nothing is better than rules!” but for the reasons above, and from my research, I know that’s not true.  What’s better is a framework that analyzes any particular situation and allows you to come to the proper or ethical decision.  Ethical frameworks are available and highly googlable, and I particularly liked this one.  So let’s break it down.

1) Recognize there’s an ethical issue.  This is the most important step and also the hardest.  You’ll have to know the rules that do exist, such as the ABA Model Rules, but also the ones that haven’t been written down yet.  Think about your responsibilities and obligations to others.

2) Consider the parties involved.  For technology creators, this requires a lot of empathy, which means putting yourself in the shoes of your user.  Of course, as the designer Mike Monteiro once said, “Empathy is a pretty word for exclusion.”  Diversity of viewpoints and diversity of experiences are so important in the creation of technology.  You will miss issues if you don’t have a diverse work room.

3) Gather all the relevant information.  You may notice a theme here…once again you have to step outside of yourself and do some research.  Not only is your experience important, but so are the experiences of others who may or may not be like you.

4) Formulate actions and consider alternatives. Of all the pieces in the framework, this is the one that gets most ethic-ish. Ethic-y?  Ask “Which option does the most good?”  “Which option does the least harm?” “Which option leads me to be the person I should be?”  I mean, really dig deep with these questions.

5) Make a decision and consider it.   Which option best addresses the situation?  What would the reaction be if I told someone my decision?

6) Act!  You can’t stare at your navel forever.

7) Finally, reflect on the outcome.  Did your actions have the intended outcome?  Were people affected that you didn’t consider?

So in the tech world, there’s the maxim “fail fast.”  But as you can see, behaving ethically is a deliberative process.  Like I said, I don’t have all the answers, but I think a framework like this is a good place to start the conversation.

Okay, so you know how I said “rules were pointless”?  Well, I lied.  Sort of.  One of my favorite shows of all time is The Wire.  If you’re familiar with the show, you may understand when I say that I feel like McNulty a lot of the time, but my favorite character was Omar Little.  Omar was a thief, a stick-up artist.  He made his living robbing drug dealers.  But he was a Robin Hood, anti-hero type who was one of the good guys.  In a thieving, murderous type of way.  And one of his iconic lines in the show was “A man got to have a code.”  And that’s something I greatly believe.  So while a bunch of set-in-stone rules for using and creating technology probably won’t work, that doesn’t mean we can’t have a code.  Or a “Statement of Values” if you want to get really hippy-dippy about it.

Interestingly, when looking for inspiration for some values that I thought would be good to embrace in our world, I found examples in two very disparate locations.

First was the hacker/punk (and hacker punk) worlds.  The term hacker, especially lately, has such a negative connotation, but really there’s a vibrant community of people that just like to mess around with technology.  And they have a pretty strong ethical tradition.

Second was my home turf – librarians.  Yes, give a librarian a chance and they will go to jail rather than give up the fact that you like to check out romance novels.  Librarians have been thinking about the ethics of information use – which really is all the practice of law is, when you think about it – for a long, long time.

I started brainstorming a “statement of values” – a sort of list of touch points that one might build upon when going through their ethical framework.  It’s not exhaustive, and I’m probably forgetting something big.  But that’s okay.  My main goal is to start a conversation.  Values and morals and ethics and all their related terms are very personal and at the same time need, in some respects, to be decided on by the community.

The first thing one needs to accept is that tech is not neutral.  Arthur C. Clarke very famously said that “Any sufficiently advanced technology is indistinguishable from magic.”  The 21st century Nuremberg Defense is “The algorithm told me to do it.”  We’re at the point in technology’s evolution where a lot of people are quite ready to accept that what technology does is magic, untouched by the biases and wishes of the people who created it.  And that’s bunk.  When you Google something, the number one hit is not number one because “it’s the best.”  It’s because the creator of that algorithm made choices.  The librarian Angela Galvan has a quote that goes something like “show me your bugs and I’ll show you what your priorities are.”  This is already affecting people in legal tech.  Algorithms are deciding who gets bail and who stays in jail, and guess what?  They’re just as racially biased as human decision makers.

I’ll never say that lawyers need to code.  It’d be nice if some learned to code, but definitely not all of them.  More important is that lawyers learn to talk to coders.  Any piece of legal technology needs to have lawyer ethics baked in from the very beginning.  I’ve always held lawyers in high esteem – we don’t have nobility or royalty in this country; we have lawyers to maintain the rule of law and keep the balance between the sovereign and the people.  That sense of duty needs to be present in our technology, in addition to the Professional Rules that we operate under.  For example, is there a way to make privilege kick in when a consumer uses a piece of technology to connect to an attorney?  If so, let’s do that so their information is protected.

I’m a big fan, user and creator of Open Source tools and technology.  Open basically means that the underlying code that runs a piece of software is free – free of cost, free to share, free to adapt.  If I were Queen of the Legal World, I would demand that all legal technology be Open Source.  But I’m not, and it’s not looking like I’ll have that position any time soon.  But I will say that any piece of technology that is used by the State or that deprives a person of life, liberty or property needs to be transparent.  People should be able to examine it for the aforementioned biases.  There should be no black boxes in this technology.

The next are three very interrelated pieces.  The privacy of the user is paramount.  This is of course closely related to attorney-client privilege, but I’m not even sure anonymized data is okay to share.  Mainly that’s because I don’t believe truly anonymized data is really possible in many circumstances.  David Colarusso has also questioned whether it’s legally ethical to allow your client’s data to be used anonymously in certain data analytics situations that may eventually help opposing counsel.
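To sketch why I’m skeptical of “anonymized” data, here’s a toy, entirely hypothetical example of the classic linkage attack: stripping names doesn’t help if the records still carry quasi-identifiers (ZIP code, birth date, sex) that can be joined against some public dataset that pairs those same fields with names.  All the records and names below are made up.

```python
# "Anonymized" case records: names removed, but quasi-identifiers kept.
anonymized_cases = [
    {"zip": "60601", "dob": "1980-03-14", "sex": "F", "matter": "eviction"},
    {"zip": "60614", "dob": "1975-11-02", "sex": "M", "matter": "bankruptcy"},
]

# A public dataset (think: a voter roll) pairing those same fields with names.
voter_roll = [
    {"name": "A. Example", "zip": "60601", "dob": "1980-03-14", "sex": "F"},
    {"name": "B. Example", "zip": "60622", "dob": "1990-07-21", "sex": "M"},
]

def reidentify(cases, public_records):
    """Link records on (zip, dob, sex) -- the classic linkage attack."""
    index = {(r["zip"], r["dob"], r["sex"]): r["name"] for r in public_records}
    matches = []
    for case in cases:
        key = (case["zip"], case["dob"], case["sex"])
        if key in index:
            matches.append({"name": index[key], "matter": case["matter"]})
    return matches

print(reidentify(anonymized_cases, voter_roll))
```

One of the two “anonymous” records comes back with a name attached to a legal matter.  Real linkage attacks work the same way, just at scale and with messier joins.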

When protecting privacy, the easiest and most important step is to always use encryption.
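As a minimal sketch of what that looks like in practice, here’s encryption of client data at rest using the third-party `cryptography` package’s Fernet recipe (authenticated symmetric encryption).  The note’s contents are invented, and I’m waving away the genuinely hard part, which is key management: where and how you store the key.

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store this somewhere safer than the data itself
fernet = Fernet(key)

note = b"Client matter 12-345: settlement strategy notes."
token = fernet.encrypt(note)  # this is what you write to disk or the database

# Only the key holder can recover the plaintext.
assert fernet.decrypt(token) == note
```

If the database or laptop is stolen, the attacker gets ciphertext, not client confidences – which is the whole point.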

And when thinking about user data, we need to think about who ultimately owns it.  I think the user should always be able to port out their information and use it however they see fit – whether we’re talking about a lawyer using a legal tech product or a consumer using one.

Legal tech companies need to be honest about the capabilities of their software.  There was a consumer-facing “robot lawyer” (barf) a few months ago that promised it would “file a lawsuit for the user against Equifax.”  Well, that’s not what it did.  It prepared some paperwork, but the user still had to go to court and actually file the case.  From what I understand, it’s been very successful for those who did.  I just wish it had been more upfront about what it really did.

Finally, know and protect your user, especially if they come from marginalized groups.  If you’re creating software that assists people in the immigration process in the United States in the year of our Lord 2018, well then you have to know that this information is highly sensitive and very damaging to your user should the information fall into the government’s hands.

Alright.  So let’s talk about ethics in legal technology.   Thank you.
