
Abstract

In this paper I discuss my transition from legal positivism to legal realism and how this has impacted upon my construction of legal decision support systems. As a child living with parents who were heavily engaged in politics, and who had disastrous experiences with the twin evils of fascism and communism, I was encouraged to become a scientist. But my interest was always in law and politics. Constructing legal decision support systems was a pragmatic balance between my skills and interests. So I began constructing rule-based systems. But gradually I became aware of the discretionary nature of legal decision making and the need to model legal realism. Through the use of machine learning I have been able to develop useful systems modelling discretion. The advent of the world wide web has allowed the wider community to become more aware of legal decision making. It has fostered the concept of online dispute resolution and provided tools for self-represented litigants. Most importantly, we have become aware that the major impediment to the use of technology in law is not the lack of adequate software. Rather it is the failure of the legal profession to address user-centric issues.

 

Summary

1. My beginnings

2. My life as a university student and assistant professor of mathematics

3. Making the transition to artificial intelligence and law research

4. Using machine learning to support legal decision making

5. How information technology can assist dispute resolution

6. From legal positivism and rule-based reasoning to legal realism and online dispute resolution

7. References

1. My beginnings

Both my parents were born in Poland between the two world wars1. Their parents had been born in the late nineteenth and early twentieth centuries, in various parts of the old Russian Empire. They were Eastern European Ashkenazi Jews who despised the Czar and joined the BUND, a socialist non-Zionist Jewish political party. Indeed my paternal grandfather was a trade union organiser and member of the Jewish Community Council of Vilna. He spent many years in a czarist prison for his pains.

A detailed discussion by me of my paternal grandparents can be found in Zeleznikow (2011). They were both murdered before I was born, by the twin evils of Fascism (Hitler) and Communism (Stalin). This led my father to see everything as black and white!

My parents met when they were students at the University of Lodz in 1946. Their intention had been to create a new Jewish life in Poland. However, when it became obvious in 1948 that Poland would have a Communist Government, they fled to Paris, where they lived as refugees for three years. I was born in Paris in 1950, the first of three children. A detailed history of my parents’ experiences in the Holocaust can be found in the book Café Scheherazade by Arnold Zable (Zable 2003).

My parents arrived in Melbourne in March 1951—a place they considered as the end of the world. But things continued to remain black—I contracted polio in 1953. Fortunately, I survived with only very minor impediments.

Growing up in Melbourne in the 1950s and 1960s was not easy. My father, who had trained as a Yiddish teacher in Poland, worked in labouring jobs. My mother, who had trained as a doctor, was home caring for me. Initially I went to school on crutches, and later had a limp. And there were innumerable visits to the Royal Children’s Hospital. There was also the anger my father had imbued me with—apparently, in an incident I cannot recall, I beat a child at school, because his parents were German.

In Melbourne, my father continued his involvement in politics. He joined the New Australian branch of the Australian Labour Party and was in constant conflict with the left-wing administration of the Victorian branch. After a few years, the branch was disbanded, and the secretary, Bono Wiener, my father’s best friend, was expelled from the party.

My parents’ involvement in politics2 influenced my interests. I was to become Vice President of the Victorian Branch of Young Labour in 1974, but ended any active involvement in politics once I went to the United States to become an assistant professor of mathematics.

At school, I excelled in mathematics and history. I loved reading about law and was passionate about becoming a barrister! However, the 1960s were the era of the space race, culminating in the first moon landing in July 1969. Males were encouraged, if possible, to study science.

My mother did not want me to study law—she felt that whilst I might have the skills to be an excellent barrister, I would be a disastrous solicitor. She pointed out my lack of organisational skills, my impatience with performing trivial tasks and my ability to lose everything. She felt there would be a greater future for me in science.

As luck would have it, the timetable of Matriculation subjects at Elwood High School in 1968 only allowed me to take the Renaissance History subject. I had wanted to take Revolutions—after all, I was passionate about politics and revolutions. A Renaissance history class, focussing upon literature and art, was not what a sports-loving, politics-mad boy wanted to learn about. I obtained only second-class honours for my history subject, but received first-class honours in Pure Mathematics.

2. My life as a university student and assistant professor of mathematics

So in March 1969, I commenced study in a Bachelor of Science at Monash University. I had zero interest in science and abhorred conducting laboratory experiments. I enrolled in two mathematics subjects, chemistry and psychology. Yes, psychology was a laboratory-based science subject at Monash University in 1969. From 1970 onwards, I only studied mathematics – so I have a First Class Bachelor of Science degree, having only studied two laboratory-based subjects3. Both of these laboratory-based subjects were first year subjects.

By December 1972, simultaneously with the election of the first Australian Labour Party government in twenty-three years, I graduated with a first-class honours degree. I was uncertain what to do next! I liked studying and working at a university (I had my first experience tutoring at a university in 1972), so I abandoned my idea of being a barrister and decided to study for a PhD.

My research was in abstract algebra—essentially showing that if the multiplicative semigroup of a semiring had certain properties, then the additive semigroup would have additional structure (Zeleznikow 1979). Whilst this research has now been found to have implications for automata theory and computer science, the research was very theoretical. According to Google Scholar it has only been cited 17 times over the last forty years—and half of these citations came from my follow-up work.4

During my postgraduate student years, I became very involved in politics. I was elected Vice President of Victorian Young Labour (1974-5) and to the Caulfield City Council (1977-9). My experience in party and electoral politics led me to the conclusion that politics did not reward performance—but rather connections and dogmatic adherence to the party line. Ability and competence were not necessarily virtues.

Thus I decided not to pursue a political career. Even though I was on the Public Office Selection Committee of the Victorian Branch of the Australian Labour Party, which was an ideal platform for seeking a position in parliament, I felt I could make more valuable contributions to society via academia. Further, I felt university life would be more certain.

In June 1979, I submitted a thesis for the PhD degree and soon after was offered an Assistant Professorship in Mathematics at Northern Illinois University in DeKalb, Illinois. At that time there was a worldwide glut of pure mathematicians and the likelihood of my receiving an academic position in Australia was limited.

This led me to an existential crisis—would I take up the Northern Illinois offer, or take the safe route and stay home? I decided on the first choice, leaving my family and any potential political career. It is a decision that I have never regretted.

Over the next six years, I immersed myself in travel, running marathons5, theatre and US politics. When I had the time, I wrote the occasional research article to appear in mathematical journals. However, I was always aware that there were no mathematics academic jobs in Australian universities and I did want to return to Australia.

The idea of studying to be a lawyer persisted. In 1982, I took the Law School Admission Test6. I was accepted to study Law at Monash University in 1983. But at that time, I had commenced a relationship with an Australian psychologist who had been awarded a postdoctoral fellowship at the Massachusetts Institute of Technology. She did not want me becoming a student and returning to Australia. I managed to find a position as an Assistant Professor of Mathematics at Mount Holyoke College in South Hadley MA. Mount Holyoke is one of the Seven Sisters—prestigious, private, all-women colleges.

My partner and I returned to Melbourne in January 1985, when I started a graduate diploma in Computer Science. My goal was to retrain as a computer science academic! Any notion of being either a lawyer or politician had been abandoned.

3. Making the transition to artificial intelligence and law research

In March 1985, I was back with first year students, studying Computer Science at the University of Melbourne. I had great difficulty mastering computer hardware and software engineering. But I was fascinated by the notion of artificial intelligence. I very soon decided that artificial intelligence was the area in which I would conduct research.

Even before reaching the halfway stage of my graduate diploma in computer science, I was offered a lectureship at the Royal Melbourne Institute of Technology. A year later, I received a French Government Scientific Fellowship to conduct research at Université Paris VI.

But exactly what research would I conduct? There was much interest in logic programming at the University of Melbourne. Professor John Lloyd, who had worked with me in the university’s mathematics department ten years previously, encouraged me to work in the domain. At that time, significant work was being conducted at Imperial College London on using logic programming to analyse the British Nationality Act 1981, which appeared in the paper by Sergot et al. (1986). I looked at the application with awe! With my then positivist outlook, influenced by parental mentoring, political action and a pure mathematics PhD, I believed this was the future of law—having robots replace judges in making legal decisions. It took me many years to abandon this approach. But my exposure to legal decision making influenced this change.

At the same time, I moved to the Department of Computer Science at La Trobe University. There I was fortunate to attract computer science students who wanted to work with me on artificial intelligence. They included George Vossos, Andrew Stranieri, Mark Gawler, Emilia Bellucci and Jean Hall. I also attended a Victorian Society for Computers and Law meeting, where I met a young lawyer, Dan Hunter. Dan also had a computer science degree.

Following discussions with Dan, I started to realise that the work by Sergot et al. (1986) failed to address many of the inherent difficulties of using logic programming to model law—such as the imprecision and vagueness of legal language. Investigating these issues led to a book and many papers by Dan Hunter and myself7. Dan Hunter has since become a prominent legal scholar, focusing upon research in intellectual property8.

Perhaps my most fortuitous action was to read a paper in the Communications of the Association for Computing Machinery by Don Berman and Carole Hafner (Berman and Hafner 1989) about the benefits of artificial intelligence for law. Don was a law professor at Northeastern University in Boston, Massachusetts, whilst Carole was a computer science professor at the same university. They became the co-founders of the artificial intelligence and law community. I wrote to Don about his seminal work. He immediately replied and invited me to Boston. I stayed with Linda (Don’s wife) and Don in Brookline MA in December 1990. Since then, my wife, children, Dan Hunter and I have, in various combinations, stayed with Don and his family almost every year. Even though Don passed away in 1997, I will never forget his compassion, intelligence and mentoring of me.

Our laboratory on artificial intelligence and law at La Trobe University was named after Don Berman. Members included Andrew Stranieri, George Vossos, Dan Hunter, Mark Gawler, Emilia Bellucci, Jean Hall and Subha Viswanathan9. It folded in 2002, after I had left for the University of Edinburgh and Andrew Stranieri left for the then University of Ballarat. During its time, the laboratory taught graduate courses on artificial intelligence and law (Don Berman taught the inaugural course in 1992), hosted visitors, received numerous large Australian Research Council grants, built systems for Victoria Legal Aid, published research articles and graduated PhD and honours students.

One of the attendees at Donald Berman’s course at La Trobe University was Domenico Calabro, then Director of Education at Victoria Legal Aid (VLA). Domenico saw the potential benefits that artificial intelligence had for enhancing access to justice, especially for public interest law organisations. Over the next fifteen years, we partnered with VLA to build them useful systems10. In return VLA gave us important legal advice.

My first work in the domain of artificial intelligence and law was to model the then Victorian Workers Compensation Act. The work was suggested by Alan Schwartz at Anstat Legal Publishers and conducted in conjunction with a Melbourne solicitor, Graeme Taylor11. As we said in Zeleznikow (2003):

Given our desire to move beyond rule-based systems13 when modelling law, we commenced the IKBALS (Intelligent Knowledge Based Legal Systems) project. IKBALS (Zeleznikow 1991) used the object-oriented approach to build a hybrid rule-based/case-based system14 to advise upon open texture in the domain of Workers Compensation. IKBALSI and IKBALSII both deal with statutory interpretation of the Accident Compensation (General Amendment) Act 1989 (Vic). The Act allows a worker who has been injured during employment to gain compensation for injuries suffered. These compensation payments are called WorkCare entitlements. IKBALS focuses on elements giving rise to an entitlement.

The original prototype IKBALSI was a hybrid/object-oriented rule-based system. Its descendant, IKBALSII, added case-based reasoning and intelligent information retrieval to the rule-based reasoner, through the use of a blackboard architecture.

The defeat of the Victorian Labour Government in October 1992 led to significant changes in the relevant legislation and abandonment of the specific system dealing with Workers’ Compensation. However, we were still determined to use a hybrid agent architecture to build a legal knowledge-based system and thus searched for suitable application areas and domain experts. We were fortunate to find an interested legal partner in the Credit Law domain (Allan Moore of Allan Moore & Co). The resulting integrated deductive and analogical system was called IKBALSIII (Zeleznikow et al. 1994).
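
To make the hybrid approach concrete, the sketch below illustrates the general pattern of combining rule-based and case-based reasoning: hard statutory rules are applied first, and when a predicate proves open textured the system falls back on the most similar decided case. The predicates, precedent cases and similarity measure are invented for illustration; they are not the IKBALS knowledge base or its blackboard architecture.

```python
# Illustrative sketch only: a generic hybrid of rule-based and case-based
# reasoning. The predicates, cases and similarity measure are invented;
# they do not reproduce the IKBALS knowledge base or architecture.

from dataclasses import dataclass

@dataclass
class Case:
    name: str
    facts: dict        # feature -> value (1 = present, 0 = absent)
    entitled: bool     # outcome of the decided case

def rule_based_entitlement(facts: dict):
    """Hard statutory rules of the form IF <condition(s)> THEN <action>.
    Returns True/False, or None when the predicate is open textured."""
    if not facts.get("is_worker"):
        return False
    if facts.get("injury_during_employment") is None:
        return None    # open texture: fall back on decided cases
    return bool(facts["injury_during_employment"])

def case_based_entitlement(facts: dict, precedents) -> bool:
    """Nearest-precedent reasoning: follow the most similar decided case."""
    def similarity(case: Case) -> int:
        return sum(1 for k, v in facts.items() if case.facts.get(k) == v)
    return max(precedents, key=similarity).entitled

def advise(facts: dict, precedents) -> bool:
    answer = rule_based_entitlement(facts)
    return answer if answer is not None else case_based_entitlement(facts, precedents)

precedents = [
    Case("P1", {"is_worker": 1, "travelling_to_work": 1}, entitled=True),
    Case("P2", {"is_worker": 1, "injury_at_home": 1}, entitled=False),
]
print(advise({"is_worker": 1, "travelling_to_work": 1,
              "injury_during_employment": None}, precedents))   # True
```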

Meanwhile I was also working with Dan Hunter15 trying to justify to legal practitioners that they should be interested in the application of artificial intelligence to law. Whilst numerous journal articles and a book resulted from our collaboration, there were no substantive practical applications. We had to wait a further two decades for this to occur.

My discussions with Dan Hunter and Don Berman gradually changed my legal philosophy. I became aware of the concept of legal realism: that judges make decisions for a range of reasons which cannot be articulated, or at least are not apparent on the face of the judgement given. Under this paradigm, there are unwritten or unrecorded reasons for judicial decision-making. Our challenge was to construct legal decision support systems based upon legal realism. Our approach to this challenge was to consider using machine learning.

4. Using machine learning to support legal decision making

Don Berman challenged us to investigate whether there was any possibility of using machine learning to model law. Machine learning is that branch of artificial intelligence in which the system attempts to learn automatically from data (Lodder and Zeleznikow 2010). Previously, law had primarily been modelled using rule-based reasoning and case-based reasoning. Indeed, in the early 1990s, our laboratory published many articles on rule-based and case-based legal expert systems.

Dr. Richard Ingleby, then a senior lecturer in law at the University of Melbourne, suggested that we might use machine learning to investigate how Australian Family Court judges exercise their discretion when distributing marital property following divorce. Dr. Ingleby introduced us to Family Court Judge Tony Graham, who assisted us in obtaining access to the appropriate data.

In Stranieri et al. (1999) we claimed that, at that time, few automated legal reasoning systems had been developed in domains of law in which a judicial decision maker has extensive discretion in the exercise of his or her powers (and this is still the case).

We argued that judicial discretion adds to the characterisation of law as open textured in a way which had not been addressed in depth by artificial intelligence and law researchers. We demonstrated that systems for reasoning with this form of open texture can be built by integrating rule sets with neural networks trained with data collected from standard past cases. The obstacles to this approach include difficulties in generating explanations once conclusions have been inferred, difficulties associated with the collection of sufficient data from past cases and difficulties associated with integrating two vastly different paradigms. The resulting system, Split-Up, was the first computer software to use machine learning to provide legal advice in a discretionary domain.

The aim of the approach used in developing Split-Up was to identify, with domain experts, relevant factors in the distribution of property under Australian family law. We then wanted to assemble a dataset of values on these factors from past cases that could be fed to machine learning programs such as neural networks.
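
The sketch below illustrates this general approach in Python: each decided case is encoded as a vector of factor values, and a small neural network (here scikit-learn's MLPRegressor) learns to predict the percentage of the common pool awarded. The factors, data and network are invented for illustration; they are not the actual Split-Up variables, dataset or model.

```python
# A minimal sketch of the general approach, not the actual Split-Up model:
# each decided case becomes a vector of factor values and a small neural
# network learns to predict the wife's percentage of the common pool.
# The factors and figures below are synthetic, for illustration only.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical factors: [relative_contributions, relative_future_needs,
# wealth_of_marriage], each coded on a 0..1 scale.
X = np.array([
    [0.5, 0.5, 0.3],
    [0.7, 0.6, 0.2],
    [0.3, 0.8, 0.5],
    [0.6, 0.4, 0.9],
])
# Percentage of the pool awarded to the wife in each (synthetic) past case.
y = np.array([50.0, 62.0, 58.0, 45.0])

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

new_case = np.array([[0.55, 0.7, 0.4]])
print(f"Predicted share of pool: {model.predict(new_case)[0]:.1f}%")
```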

Twenty-five years later, computer hardware is much cheaper and more powerful, and hence computer software can make decisions much more quickly. In 1994, we needed to be very efficient with our use of data, for both the above-mentioned computing reasons and the fact that the Family Court of Australia would not allow us to take any data out of their registry.

Hence, we chose one hundred and three commonplace cases16 from the Melbourne Registry of the Family Court of Australia. Three researchers carefully read these free-text cases and placed the relevant data in a carefully constructed database. The database was constructed following:

  1. Discussions with our family law domain experts Richard Ingleby (University of Melbourne), Dorothy Kovacs (Monash University) and Renata Alexander (Victoria Legal Aid);
  2. Reading judgements from the Melbourne Registry of the Family Court of Australia; and
  3. Speaking with Family Court of Australia judges.

In consultation with experts, ninety-four variables were identified as relevant to a determination. The way the factors combine was not elicited from experts as rules or complex formulas. Rather, values on the 94 variables were to be extracted from cases previously decided, so that a neural network could learn to mimic the way in which judges had combined the variables.

However, according to neural network rules of thumb, the number of cases needed to identify useful patterns given 94 relevant variables is in the many tens of thousands. Data from this number of cases is rarely available in any legal domain.

Furthermore, few cases involved all 94 variables. For example, childless marriages have no values for the variables associated with children, so a training set would be replete with missing values. In addition, it became obvious that the 94 variables were in no way independent.

In the Split-Up system, the relevant variables were structured as separate arguments following the argument structure advanced by Toulmin (1958). Toulmin concluded that all arguments, regardless of the domain, have a structure that consists of six basic invariants: claim, data, modality, rebuttal, warrant and backing.

Every argument makes an assertion based on some data. The assertion of an argument stands as the claim of the argument. Knowing the data and the claim does not necessarily convince us that the claim follows from the data. A mechanism is required to act as a justification for the claim. This justification is known as the warrant.

The backing supports the warrant and in a legal argument is typically a reference to a statute or a precedent case. The rebuttal component specifies an exception or condition that obviates the claim.

In twenty of the thirty-five arguments in Split-Up, claim values were inferred from data items with the use of neural networks, whereas heuristics were used to infer claim values in the remaining arguments. The neural networks were trained on data from only 103 commonplace cases. This was possible because each argument involved a small number of data items due to the argument-based decomposition.
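
The sketch below illustrates the decomposition idea: each Toulmin argument carries its own data items, warrant, backing and a small inference procedure (a heuristic here, though a trained neural network could fill the same slot), and its claim value feeds the arguments above it. The argument, weights and backing shown are illustrative only; they are not Split-Up's actual argument tree.

```python
# Illustrative sketch of the argument-based decomposition (not Split-Up's
# actual argument tree): each Toulmin argument infers its claim from a few
# data items, using either a heuristic or a trained model, and its claim
# feeds the arguments above it.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Argument:
    claim: str
    data_items: list                 # inputs: raw facts or child arguments' claims
    warrant: str                     # why the data supports the claim
    backing: str                     # statute or precedent supporting the warrant
    infer: Callable[[dict], float]   # heuristic or trained model

# Hypothetical heuristic argument: relative contributions from homemaker and
# wage-earner roles (weights invented for illustration).
contributions = Argument(
    claim="relative_contributions",
    data_items=["homemaker_role", "wage_earner_role"],
    warrant="Both financial and non-financial contributions count",
    backing="e.g. Family Law Act 1975 (Cth) s 79(4)",
    infer=lambda facts: 0.5 * facts["homemaker_role"] + 0.5 * facts["wage_earner_role"],
)
# A trained neural network could fill the same slot, e.g.
#   needs = Argument(..., infer=trained_network.predict_one)

def evaluate(argument: Argument, facts: dict) -> float:
    value = argument.infer(facts)
    # Explanation is generated after the event, from the argument's own
    # data items, warrant and backing.
    print(f"{argument.claim} = {value:.2f} (data: {argument.data_items}; "
          f"warrant: {argument.warrant}; backing: {argument.backing})")
    return value

evaluate(contributions, {"homemaker_role": 0.8, "wage_earner_role": 0.4})
```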

The Split-Up system produces an inference by the invocation of inference mechanisms stored in each argument. However, an explanation for an inference is generated after the event, in the legal realist tradition, by first invoking the data items that led to the claim. Additional explanatory text is supplied by reasons for relevance and backings. If the user questions a data item value, he/she is taken to the argument that generated that value as its claim.

The Split-Up system performed favourably on evaluation, despite the small number of samples.

Because the law is constantly changing, it is important to update legal decision support systems. The original hybrid rule-based/neural network version of Split-Up was constructed in 1996. In 2003, the tree of arguments was modified in conjunction with domain experts from Victoria Legal Aid to accommodate changes in legislation including:

  1. The then tendency by Family Court judges to view domestic abuse17 as a negative financial contribution to a marriage.
  2. The re-introduction of spousal maintenance as a benefit to one of the partners. Under the clean-break philosophy, Family Court judges were reluctant to award spousal maintenance, since it would mean one partner would continue to be financially dependent on his/her ex-partner. However, the increasing number of short, asset-poor, income-rich marriages led to a re-consideration of the issue of spousal maintenance.
  3. The need to consider superannuation and pensions separately from other marital property.

The argument-based representation facilitated the localization of changes and made maintenance feasible. The use of the argument-based representation of knowledge enabled machine learning techniques to be applied to model a field of law widely regarded as discretionary. The legal realist jurisprudence provided a justification for the separation of explanation from inference.

With the provision of domain expertise and financial support from VLA, we developed a web-based version of Split-Up using the web-based shell ArgShell and the knowledge management tool JustReason. As a web-based system, Split-Up informed divorcees of their rights and supported them in commencing negotiations pertaining to their divorce.

The shell and the knowledge management tool were further developed by the JUSTSYS company18. The company was formed by Andrew Stranieri in 2002. It was based at the Global Innovations Centre at the University of Ballarat (Zeleznikow 2003). Systems were built in:

  1. Refugee Law—Embrace;
  2. Eligibility for Legal Aid—GetAid;
  3. Copyright entitlements—RightCopy;
  4. Plea bargaining—Sentencing Information System; and
  5. Eye Witness Identification—ADVOKATE.

The Split-Up system was the focus of much publicity. Late in the evening of Wednesday 3 July 1996, I received a telephone call from the London Daily Telegraph. The newspaper had received a press release from La Trobe University about our Split-Up system. It wanted to use our software on the then forthcoming divorce of Prince Charles and Princess Diana. I was initially reluctant to meet their request because:

  1. Split-Up operated in the domain of Australian Family Law, and Charles and Diana were not Australian residents;
  2. The goal of Split-Up was to provide advice about commonplace cases. The marriage of Charles and Diana was anything but commonplace; and
  3. No-one had any idea of the common pool of marital assets held by Charles and Diana.

I informed the Daily Telegraph I could not use the Split-Up system to provide an accurate solution. The Daily Telegraph journalists told me that they were not concerned about the validity of the result – all they wanted was an interesting article.

After thinking about the issue, I decided that the project would receive much valuable publicity by providing the Daily Telegraph with a solution. The journalists gave me an estimate of the common pool property and the contributions and needs of the couple. The system ended up classifying Diana as a single mother who had lost her job. It thus suggested awarding her 70% of the common pool. The heading of one article was “SOFTWARE TAKES A HARD LINE ON THE PRINCE”. A second article had as its heading “Computer to help divorce couple’s assets”. Of course, in 1996, the idea of using machine learning and artificial intelligence to make legal decisions was very futuristic!

The Daily Telegraph article led to much media coverage, primarily in Australia, but also globally19. On Monday 26 August 1996, we had a ten-minute simulation on the GTV9 network news show A Current Affair. The take-away message from the session was that negotiation rather than litigation should be the logical first step in trying to resolve family disputes.

5. How information technology can assist dispute resolution

Marc Galanter, in his work on the vanishing American trial, indicated that whilst litigation in the USA might be increasing, the number of cases decided after fully contested trials is rapidly decreasing (Galanter 2004). Alternative Dispute Resolution has increasingly become the preferred form of dispute resolution.

Fisher and Ury (1981) introduced the concept of Principled Negotiation—principled negotiation promotes deciding issues on their merits rather than through a haggling process focussed on what each side says it will and will not do. Central to the idea of principled negotiation is that of a BATNA (Best Alternative to a Negotiated Agreement). The reason you negotiate with someone is to produce better results than would otherwise occur. If you are unaware of what results you could obtain if the negotiations are unsuccessful, you run the risk of entering into an agreement that you would be better off rejecting; or rejecting an agreement you would be better off entering into.

We soon realised that Split-Up provided useful advice about BATNAs in Australian Family Law Property distribution. But given a BATNA, how can Information Technology provide useful support to disputants?
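
A toy illustration of this BATNA logic, with invented figures: an offer is only worth accepting if it leaves a party at least as well off as the outcome they could expect should negotiation fail.

```python
# Toy illustration of BATNA reasoning (all figures invented): accept an offer
# only if it is no worse than the predicted fallback outcome.

def should_accept(offer_value: float, batna_value: float) -> bool:
    """Accept only if the offer is at least as good as the fallback outcome."""
    return offer_value >= batna_value

# Suppose a Split-Up style prediction: 60% of a $500,000 pool if the matter
# went to court, less an assumed $40,000 in legal costs.
batna = 0.60 * 500_000 - 40_000      # $260,000 expected from litigation
offer = 275_000                      # settlement offer on the table
print(should_accept(offer, batna))   # True: the offer beats the BATNA
```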

Our focus upon BATNAs, negotiation and evaluation led us to apply for and receive four Australian Research Council Linkage Grants.

  1. An Australian Postdoctoral Award (Industry) to Andrew Stranieri to build intelligent web based legal decision support systems. In conjunction with Software Engineering Australia, we built a number of web-based systems, including a generic shell Webshell. A spin-off company JUSTSYS was formed.
  2. An Australian Postgraduate Award (Industry) to Jean Hall to work on the Evaluation of Legal Decision Support Systems.
  3. An International Research Exchange Award with Uri Schild at Bar Ilan University in Israel to work on computational models of discretion (Kannai et al. 2007).
  4. An Australian Postgraduate Award (Industry) with Victoria Legal Aid to Andrew Vincent to work on Plea Bargaining Decision Support (Hall et al. 2005).

Walton and McKersie (1965) propose that negotiation processes can be classified as distributive or integrative. In distributive approaches, the problems are seen as zero sum and resources are imagined as fixed: divide the pie. In integrative approaches, problems are seen as having more potential solutions than are immediately obvious and the goal is to expand the pie before dividing it. Traditional negotiation decision support has focused upon providing users with decision support on how they might best obtain their goals. Such advice is often based on Nash’s principles of optimal negotiation or bargaining (Nash 1953). Game theory, as opposed to behavioural and descriptive studies, provides formal and normative approaches to model bargaining. One of game theory’s key distinctive features is the consideration of zero-sum and non-zero-sum games. These concepts were adopted to distinguish between distributive and integrative processes. Game theory has been used as the basis for the Adjusted Winner algorithm (Brams and Taylor 1996) and the negotiation support system Smartsettle (Thiessen and McMahon 2000).

We decided to adapt the Adjusted Winner algorithm to negotiation in Australian Family Law. Family Winner (Bellucci and Zeleznikow 2006) takes a common pool of items and distributes them between two parties based on the value of associated ratings. Each item is listed with two ratings (one posted by each party), which signify the item’s importance to that party. The algorithm that determines which items are allocated to whom works on the premise that each party’s ratings sum to 100, thereby forcing the parties to set priorities. The basic premise of the system is that it allocates items based on whoever values them more.
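
The sketch below captures only this core allocation premise (each item goes to whoever rates it higher), omitting Family Winner's subsequent trade-off and compensation steps; the items and ratings are invented.

```python
# A minimal sketch of the core allocation premise, not the full Family Winner
# algorithm: each party spreads 100 points across the items, and each item is
# allocated to whoever values it more. Items and ratings are invented.

def allocate(ratings_a: dict, ratings_b: dict) -> dict:
    assert abs(sum(ratings_a.values()) - 100) < 1e-6, "Party A's ratings must sum to 100"
    assert abs(sum(ratings_b.values()) - 100) < 1e-6, "Party B's ratings must sum to 100"
    allocation = {"A": [], "B": []}
    for item in ratings_a:
        winner = "A" if ratings_a[item] >= ratings_b[item] else "B"
        allocation[winner].append(item)
    return allocation

ratings_a = {"house": 50, "car": 10, "superannuation": 30, "boat": 10}
ratings_b = {"house": 40, "car": 25, "superannuation": 20, "boat": 15}
print(allocate(ratings_a, ratings_b))
# {'A': ['house', 'superannuation'], 'B': ['car', 'boat']}
```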

Originally, the system was developed to meet clients’ interests, with no concern for legal obligations. In Zeleznikow (2014), we incorporated principles of justice into the new Asset-Divider system. The ideas behind the Family Winner system have also been used to build systems providing advice upon plea bargaining (Hall et al. 2005) and the Israel-Palestinian dispute (Zeleznikow 2014).

In 2005, there was further media interest in our work on artificial intelligence and law. In February, we had an article in the MIT Technology Review on logging on to your lawyer.20 In March, the Information Technology supplement of The Economist had a focus upon our research, with the title AI am the law.21

In September 2005, the Boston Globe contacted me regarding the choice of a new US Supreme Court Chief Justice. Chief Justice William Rehnquist had recently died and President George Walker Bush needed to choose a replacement. Ever since the choice of Chief Justice Earl Warren by President Dwight Eisenhower in 1953, presidents had been worried about the predictability of Supreme Court Justices. The Boston Globe wondered whether a computer system could predict how a Justice would act. On 11 September 2005, the Boston Globe published an article about our work: Do we have the technology to do a better legal system?22

There was much ensuing publicity, including the Sydney Morning Herald article23 (a shorter version appeared in the Age) Divorce? Let the computer be the judge, as well as coverage on BBC Radio 5 and the BBC World Service, and in the Times of London, which discussed our work on using game theory for negotiation support.24

On Wednesday 16 November 2005, our software was demonstrated on the Australian Broadcasting Corporation’s science show The New Inventors.25 We won our heat and received invaluable publicity. This included March of the robolawyers (9 March 2006), from The Economist print edition (pp. 9-10)26 and Desktop Divorce by Ben Tinker on the CNN Money Program (12 October 2007).27

As a result of such publicity Relationships Australia Queensland and Victoria Body Corporate Services contacted me wishing to conduct collaborative research. The end result was two Australian Research Council Linkage Grants—a postdoctoral fellowship for Brooke Abrahams (Abrahams et al. 2012) and a PhD fellowship for Peter Condliffe (Condliffe and Zeleznikow 2014).

6. From legal positivism and rule-based reasoning to legal realism and online dispute resolution

The granting of two SPIRT Grants (now called Linkage Grants) by the Australian Research Council extended our collaboration with Victoria Legal Aid (VLA). At that time, a major issue for VLA was to determine when potential clients should receive legal aid assistance. The task consumed 60% of VLA’s operating budget, yet provided no services to its clients. After passing a financial test, applicants for legal aid needed to pass a merit test. An ensuing system, GetAid, was developed in conjunction with web-based lodgement of applications for legal aid (Hall et al. 2002). It was expected that, commencing in the middle of 2003, VLA clients would use the GetAid system. This never occurred. The system was used in-house for five years before being discarded.

The work with VLA had us thinking of how to help self-represented litigants and what were appropriate techniques for building web-based legal decision support systems. At the opening session of the Third International Symposium on Judicial Support Systems held at Chicago Kent College of Law, in May 2001, the theme was What can judicial decision support systems do to improve access to justice? I presented an article at the symposium with the title Legal Aid and Unrepresented Litigants: Building Legal Decision Support Systems for Victoria Legal Aid. In Zeleznikow (2002) I discussed the demands that the rise of pro se litigation poses for the judicial system and how community legal services can help meet these challenges through the development of web-based decision support systems. This commenced our interest in Online Dispute Resolution (ODR). In particular we wished to develop a process for developing Intelligent ODR systems.

In Lodder and Zeleznikow (2005) we advocated a three-step process in the development of ODR systems. The proposed three steps conform to the following sequence.

  1. First, the negotiation support tool should provide feedback on the likely outcome(s) of the dispute if the negotiation were to fail—i.e., the BATNA28.
  2. The tool should attempt to resolve any existing conflicts using argumentation or dialogue techniques.29
  3. For those issues not resolved in step two, the tool should employ decision analysis techniques and compensation/trade-off strategies in order to facilitate resolution of the dispute.30

Finally, if the result from step three is not acceptable to the parties, the tool should allow the parties to return to step two and repeat the process recursively until either the dispute is resolved, or a stalemate occurs. A stalemate occurs when no progress is made when moving from step two to step three or vice versa. Even if a stalemate occurs, suitable forms of ADR (such as blind bidding or arbitration) can be used on a smaller set of issues.
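
The control flow just described can be summarised in the sketch below, in which the three steps are deliberately left as stubs; a real system would plug a BATNA advisor, dialogue tools and a trade-off engine into the corresponding placeholders.

```python
# A sketch of the control flow of the three-step model described above.
# The three steps are placeholder stubs, not real BATNA, dialogue or
# trade-off components.

def provide_batna_advice(dispute):
    print("Step 1: advise parties of the likely outcome if negotiation fails")

def dialogue_step(dispute) -> set:
    """Step 2: argumentation/dialogue tools; returns the issues still open."""
    return dispute["unresolved"]

def tradeoff_step(dispute, issues: set) -> set:
    """Step 3: decision analysis and trade-offs; returns what remains open."""
    resolved = {issues.pop()} if issues else set()   # stub: settle one issue
    dispute["unresolved"] -= resolved
    return dispute["unresolved"]

def three_step_odr(dispute, max_rounds: int = 10) -> str:
    provide_batna_advice(dispute)                            # step 1
    previous_open = None
    for _ in range(max_rounds):
        open_issues = dialogue_step(dispute)                 # step 2
        open_issues = tradeoff_step(dispute, set(open_issues))  # step 3
        if not open_issues:
            return "resolved"
        if open_issues == previous_open:                     # no progress: stalemate
            return f"stalemate on {sorted(open_issues)}: refer to blind bidding or arbitration"
        previous_open = set(open_issues)
    return "stalemate"

print(three_step_odr({"unresolved": {"parenting time", "house", "superannuation"}}))
```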

By narrowing the issues, time and money can be saved. Further, the disputants may feel it is no longer worth the pain of trying to achieve their initially desired goals.

A truly helpful ODR system should provide the following facilities:

  1. Case management: the system should allow users to enter information, ask them for appropriate data and provide for templates to initiate the dispute;
  2. Triaging: the system should make decisions on how important it is to act in a timely manner and where to send the dispute;
  3. Advisory tools: the system should provide tools for reality testing: these could include books, articles, reports of cases, copies of legislation and videos; there would also be calculators (such as to advise upon child support) and BATNA advisory systems (to inform disputants of the likely outcome if the dispute were to be decided by a decision-maker, e.g. a judge, arbitrator or ombudsman);
  4. Communication tools—for negotiation, mediation, conciliation or facilitation. This could involve shuttle mediation if required;
  5. Decision Support Tools—if the disputants cannot resolve their conflict, software using game theory or artificial intelligence can be used to facilitate trade-offs;
  6. Drafting software: if and once agreement is reached, software can be used to draft suitable agreements.

Of course, no single dispute is likely to require all six processes. However, the development of such a hybrid ODR system would be very significant, but costly. Such a platform would be an excellent starting point for expanding into a world where artificial intelligence is gainfully used.

Having spent twenty-five years (1990-2015) developing intelligent legal decision support systems, I came to the realisation that the major problem in the domain was not building such systems but designing and regulating their use. Artificial intelligence software arose from the pioneering work of Turing31 and Nash (1951) in the 1950s. Even machine learning has a history of more than thirty years (Quinlan 1986). The reason that artificial intelligence and machine learning are finally being used in the legal profession is that recent developments in computer hardware enable such systems to be much faster and easier to use.

With the general availability of such systems, we need to become cognisant of more user-centric issues:

  1. Ethics—what should be the remit of such systems, who should use them, to what extent should they be relied upon (Ebner and Zeleznikow 2015);
  2. Fairness—how can we ensure the negotiation advice offered is based on issues of justice rather than merely the interests of the disputants (Zeleznikow and Bellucci 2012);
  3. Governance—currently ODR can be seen as the “wild west”—anyone can develop any system without regulation. In Ebner and Zeleznikow (2016) we propose four models of how to govern Online Dispute Resolution: No Governance, Self-Governance, Internal Governance and External Governance;
  4. Security—in Abedi and Zeleznikow (2019) we identify three elements of information security, privacy and authentication as standards for an appropriate ODR legal framework;
  5. Trust—in Abedi et al. (2019) we identify three elements as standards to measure trust in ODR systems: knowledge, expectations of fairness, and the existence of a code of ethics.

When I commenced research in artificial intelligence and law thirty years ago, my emphasis was upon using rule-based and case-based reasoning to develop legal decision support systems based upon a legal positivist approach.

Over time I realised that there are often “undetermined reasons” why legal decisions are made and that blindly adhering to legal positivism has its negatives. I gradually became aware that law was more than a mere robotic application of rules. Law is used as a social device to reflect society’s changing attitudes. Nowhere is this more so than in family law. Until recently children were seen as the property of their parents—especially their mothers. But fortunately, society has gradually transitioned to the notion that parents have obligations to children and that family law decision making should reflect the paramount interests of the children. But if judges are encouraged to exercise discretion in their decision-making, how can we model this exercise of discretion?

I then realised that machine learning could be used to try and understand the reasons why legal decisions are made. This more closely aligns to notions of legal realism. I also became aware that the major impediment to the use of technology in law was not the lack of adequate software. Rather it has been the failure of the community to address user centric issues.

Endnotes

  • 1. My father was born in Vilno, Poland, which is now known as Vilnius, Lithuania.
  • 2. Both of them received life memberships of the Australian Labour Party.
  • 3. To receive a Bachelor of Science degree at Monash University in 1969, one had to complete at least two laboratory-based subjects.
  • 4. Abawajy et al. (2013), Hannah et al. (1980), Zeleznikow (1980), Zeleznikow (1981), Zeleznikow (1984).
  • 5. As of May 2019, I have run 197 full marathons.
  • 6. This was a requirement for potential law students who had matriculated more than ten years previously. It did not matter that in the thirteen years since matriculating I had completed a first-class honours degree and a PhD and had taught at Australian and US universities for ten years.
  • 7. These include Zeleznikow and Hunter (1994), Hunter et al. (1993), Hunter and Zeleznikow (1994), Vossos et al (1993), Zeleznikow and Hunter (1992), Zeleznikow and Hunter (1995a), Zeleznikow and Hunter (1995b).
  • 8. See https://www.swinburne.edu.au/business-law/staff/profile/index.php?id=dhunter last accessed 12/7/2019.
  • 9. Now Dr. Subha Chandar.
  • 10. In particular with regards to eligibility for legal aid (Zeleznikow, J. and Stranieri, A. 2001. “The use of Legal Decision Support Systems at Victoria Legal Aid.” Proceedings of ISDSS2001 – Sixth International Conference on Decision Support Systems, Brunel University, London: 186-192) and plea bargaining (Hall et al. 2005).
  • 11. Of Tony O’Brien and Associates, Solicitors.
  • 12. Open textured legal predicates contain questions that cannot be structured in the form of production rules or logical propositions and which require some legal knowledge on the part of the user in order to answer.
  • 13. Rule-based reasoning involves using a system of rules of the form: IF <condition(s)> THEN <action>.
  • 14. Case-based reasoning is the process of using previous experience to analyse or solve a new problem, explain why previous experiences are or are not similar to the present problem and adapting past solutions to meet the requirements of the present problem.
  • 15. First at Freehills, then Deakin University and finally at the University of Melbourne.
  • 16. Most decisions in any jurisdiction are commonplace, and deal with relatively minor matters such as vehicle accidents, small civil actions, petty crime, divorce, and the like. These cases are rarely, if ever, reported upon by court reporting services, nor are they often made the subject of learned comment or analysis. More importantly, each case does not have the same consequences as the landmark cases. Landmark cases are therefore of a fundamentally different character to commonplace cases. Landmark cases will individually have a profound effect on the subsequent disposition of all cases in that domain, whereas commonplace cases will only have a cumulative effect, and that effect will only be apparent over time. Commonplace cases are those used in training sets for machine learning algorithms.
  • 17. There are six types of domestic abuse: physical abuse, sexual abuse, psychological abuse, social abuse, economic abuse and spiritual abuse. See http://www.aic.gov.au/publications/current%20series/rip/1-10/07.html last accessed 19/1/2016; See also the definition of family violence in section 5 of the Family Violence Protection Act 2008 (Vic).
  • 18. See https://www.bloomberg.com/research/stocks/private/snapshot.asp?privcapId=36277668 accessed 2/7/2019.
  • 19. In particular on the BBC and Canadian newspapers.
  • 20. http://www.technologyreview.com/articles/05/02/issue/forward_lawyer.asp, last viewed 10/7/2019.
  • 21. https://www.economist.com/technology-quarterly/2005/03/12/ai-am-the-law last viewed 10/7/2019.
  • 22. http://www.boston.com/news/globe/ideas/articles/2005/09/11/robo_justice/ last accessed 10/7/2019.
  • 23. https://www.smh.com.au/national/divorce-let-the-computer-be-the-judge-20050921-gdm3t8.html, last accessed 10/7/2019.
  • 24. http://www.timesonline.co.uk/article/0,,8163-1806165,00.html, accessed 10/7/2019.
  • 25. See http://www.youtube.com/watch?v=YOZczuvrou4, accessed 10/7/2019, for an edited version.
  • 26. http://economist.com/displaystory.cfm?story_id=E1_VVSTQRG, accessed 10/7/2019.
  • 27. http://money.cnn.com/video/news/2007/10/12/tinker.desktop.divorce.cnnmoney/, accessed 10/7/2019.
  • 28. As we did with the Split-Up system.
  • 29. As in Lodder (1999).
  • 30. As we did with the Family Winner System.
  • 31. See https://weightagnostic.github.io/papers/turing1948.pdf, last accessed 12/7/2019.

References

  1. Abawajy, J., Kelarev, A. V. and Zeleznikow, J. 2013. “Centroid sets with largest weight in Munn semirings for data mining applications.” Semigroup Forum, 87: 617-626.
  2. Abedi, F. and Zeleznikow, J. 2019. “Developing Regulatory Standards for the Concept of Security in Online Dispute Resolution Systems.” Computer Law & Security Review: The International Journal of Technology Law and Practice, accepted on 14 January 2019.
  3. Abedi, F., Zeleznikow, J. and Bellucci, E. 2019. “Universal Standards for the Concept of Trust in Online Dispute Resolution Systems in E-Commerce Disputes.” International Journal of Law and Information Technology, Oxford University Press, accepted on 6 December 2018.
  4. Abrahams, B., Bellucci, E. and Zeleznikow, J. 2012. “Incorporating Fairness into Development of an Integrated Multi-agent Online Dispute Resolution Environment.” Group Decision and Negotiation, 21 (1): 3-28.
  5. Bellucci, E. and Zeleznikow, J. 2006. “Developing Negotiation Decision Support Systems that support mediators: a case study of the Family_Winner system.” Artificial Intelligence and Law, 13 (2): 233-271.
  6. Berman, D.H. and Hafner, C.D. 1989. “The potential of artificial intelligence to help solve the crisis in our legal system.” Communications of the ACM, 32 (8): 928-938.
  7. Brams, S. J. and Taylor, A. D. 1996. Fair Division: from cake cutting to dispute resolution. Cambridge, UK: Cambridge University Press.
  8. Condliffe, P. and Zeleznikow, J. 2014. “What process do disputants want? An experiment in disputant preferences.” Monash University Law Review, 40 (2): 305-339.
  9. Ebner, N. and Zeleznikow, J. 2015. “Fairness, Trust and Security in Online Dispute Resolution.” Hamline University's School of Law's Journal of Public Law and Policy, 36 (2): Article 6. http://digitalcommons.hamline.edu/jplp/vol36/iss2/6.
  10. Ebner, N. and Zeleznikow, J. 2016. “No Sheriff in Town: Governance for the ODR Field.” Negotiation Journal, 32 (4): 297-323.
  11. Fisher, R. and Ury, W. 1981. Getting to YES: Negotiating Agreement Without Giving In. Boston: Houghton Mifflin.
  12. Galanter, M. 2004. “The vanishing trial: An examination of trials and related matters in federal and state courts.” Journal of Empirical Legal Studies, 1 (3): 459-570.
  13. Hall, M. J. J., Calabro, D., Sourdin, T., Stranieri, A. and Zeleznikow, J. 2005. “Supporting discretionary decision making with information technology: a case study in the criminal sentencing jurisdiction.” University of Ottawa Law and Technology Journal, 2 (1): 1-36.
  14. Hall, M. J. J., Stranieri, A. and Zeleznikow, J. 2002. “A Strategy for Evaluating Web-Based Discretionary Decision Support Systems.” Proceedings of ADBIS2002 - Sixth East-European Conference on Advances in Databases and Information Systems. Bratislava, Slovak Republic: Slovak University of Technology, September 8-11, pp. 108-120.
  15. Hannah, J., Richardson, J. and Zeleznikow, J. 1980. “Completely semisimple ring-semigroups.” J. Austral. Math. Soc. (Series A), 30: 150-156.
  16. Hunter, D., Tyree, A. and Zeleznikow, J. 1993. “There is less to this argument than meets the eye.” Journal of Law and Information Science, 4 (1): 46-64.
  17. Hunter, D. and Zeleznikow, J. 1994. “An overview of some reasoning formalisms as applied to law.” Think, 3: 24-40.
  18. Kannai, R., Schild, U. and Zeleznikow, J. 2007. “Modeling the evolution of legal discretion – an Artificial Intelligence Approach.” Ratio Juris, 20 (4) December: 530-558.
  19. Lodder, A.R. 1999. Dialaw: On Legal Justification and Dialogical Models of Argumentation. Amsterdam: Kluwer Academic Publishers.
  20. Lodder, A. and Zeleznikow, J. 2005. “Developing an Online Dispute Resolution Environment: Dialogue Tools and Negotiation Systems in a Three Step Model.” The Harvard Negotiation Law Review, 10: 287-338.
  21. Lodder, A. and Zeleznikow, J. 2010. Enhanced Dispute Resolution through the use of Information Technology. Cambridge, UK: Cambridge University Press.
  22. Nash, J. 1951. “Non-cooperative games.” Annals of Mathematics, 54 (2): 286-295.
  23. Nash, J. 1953. “Two Person Cooperative Games.” Econometrica, 21: 128-140.
  24. Quinlan, J.R. 1986. “Induction of decision trees.” Machine Learning, 1 (1): 81-106.
  25. Sergot, M.J., Sadri, F., Kowalski, R.A., Kriwaczek, F., Hammond, P. and Cory, H.T. 1986. “The British Nationality Act as a logic program.” Communications of the ACM, 29 (5): 370-386.
  26. Stranieri, A. and Zeleznikow, J. 2005. Knowledge Discovery from Legal Databases. Law and Philosophy Library, Volume 69. Dordrecht: Springer.
  27. Stranieri, A., Zeleznikow, J., Gawler, M. and Lewis, B. 1999. “A hybrid rule–neural approach for the automation of legal reasoning in the discretionary domain of family law in Australia.” Artificial Intelligence and Law, 7 (2-3): 153-183.
  28. Thiessen, E. M. and McMahon, J. P. 2000. “Beyond Win-Win in Cyberspace.” Ohio State Journal on Dispute Resolution, 15 (3): 643-667.
  29. Toulmin, S.E. 1958. The Uses of Argument. Cambridge, UK: Cambridge University Press.
  30. Vossos, G., Zeleznikow, J. and Hunter, D. 1993. “Building intelligent litigation support tools through the integration of rule-based and case-based reasoning.” Law, Computers and Artificial Intelligence, 2 (1): 77-93.
  31. Walton, R. E. and McKersie, R. B. 1965. A Behavioral Theory of Labor Negotiations. New York: McGraw-Hill.
  32. Zable, A. 2003. Café Scheherazade. Melbourne: Text Publishing.
  33. Zeleznikow, J. 1979. On regular semigroups, semirings and rings. PhD thesis, Faculty of Science, Monash University, Australia.
  34. Zeleznikow, J. 1980. “Orthodox Semirings and Rings.” J. Austral. Math. Soc. (Series A), 30: 50-54.
  35. Zeleznikow, J. 1981. “Regular Semirings.” Semigroup Forum, 23: 119-136.
  36. Zeleznikow, J. 1984. “Regular ring-semigroups.” Commentationes Mathematicae Universitatis Carolinae, 25 (1): 129-141.
  37. Zeleznikow, J. 1991. “Building intelligent legal tools—The IKBALS project.” Journal of Law and Information Science, 2 (2): 165-184.
  38. Zeleznikow, J. 2002. “Using Web-based Legal Decision Support Systems to Improve Access to Justice.” Information and Communications Technology Law, 11 (1): 15-33.
  39. Zeleznikow, J. 2003. “An Australian Perspective on Research and Development required for the construction of applied Legal Decision Support Systems.” Artificial Intelligence and Law, 10: 237-260.
  40. Zeleznikow, J. 2011. “Life at the end of the world was an anti-climax – memories of sixty years of life of a Jewish Partisan in Melbourne.” Holocaust Studies: A Journal of Culture and History, 16 (3): 11-32.
  41. Zeleznikow, J. 2014. “Comparing the Israel – Palestinian dispute to Australian Family Mediation.” Group Decision and Negotiation, 23 (6): 1301-1317.
  42. Zeleznikow, J. and Bellucci, E. 2012. “Legal Fairness in Alternative Dispute Resolution Processes – Implications for Research and Teaching.” Australasian Dispute Resolution Journal, 23 (4): 265-273.
  43. Zeleznikow, J. and Hunter, D. 1994. Building Intelligent Legal Information Systems: Knowledge Representation and Reasoning in Law. Computer/Law Series 13. Amsterdam: Kluwer.
  44. Zeleznikow, J. and Hunter, D. 1992. “Rationales for the continued development of legal expert systems.” Journal of Law and Information Science, 3: 94-110.
  45. Zeleznikow, J. and Hunter, D. 1995a. “Reasoning paradigms in legal decision support systems.” Artificial Intelligence Review, 9 (6): 361-385.
  46. Zeleznikow, J. and Hunter, D. 1995b. “Deductive, Inductive and Analogical Reasoning in Legal Decision Support Systems.” Law, Computers and Artificial Intelligence, 4 (2): 141-160.
  47. Zeleznikow, J., Vossos, G. and Hunter, D. 1994. “The IKBALS project: Multimodal reasoning in legal knowledge-based systems.” Artificial Intelligence and Law, 2 (3): 169-203.
