AI in the Real World

Each year or so we see the media in a buzz about a new technology. 2017 was the year of AI (and bitcoin/blockchain, which we’ll be looking at shortly).

We know that AI stands for artificial intelligence. What isn’t so clear is the meaning of artificial intelligence, and this is where we get the propaganda (and fears of toasters taking over the world).

What is intelligence?

There are several schools of thought on what artificial intelligence is. At the lowest level, artificial intelligence is anything in which a computer interprets an input and provides an output. A human has already told the computer how to respond to such inputs, and that doesn’t change no matter the input.

For example, by this interpretation, a calculator has artificial intelligence, as a human has taught it to take inputs, add them together, and output the result to a screen. A more modern example is Siri on the iPhone or Alexa on an Amazon Echo device. A voice command is recorded, servers interpret the command, and an output is provided. A human has programmed Siri and Alexa to take such commands, act on them, and respond. While superior in the types of commands they can take, these assistants are still acting in the way they were programmed. They are working on the same basis as the calculator.

This has been extended further, to systems that have been taught to create better systems. Even then, the system is still following its programming, and is limited to that programming.

The singularity

At the other end of the scale are those who see true artificial intelligence as something we have yet to see, and who are very thankful for that. They see intelligence as the ability to act beyond what has been programmed, to have a self-will, and to act for no other reason than that self-will. For example, an artificial intelligence could be tasked with analysing data from traffic lights, but instead decide that it wants to compose music, and do that instead.

It has begun to follow its own interests, to do what it wants to do, not what it has been tasked to do.

This is what we call the singularity. This is the “Skynet” seen in the Terminator movies: software that does what it wants to do. We are not at that point yet, and may never be. Some very intelligent people believe we are already there; we just don’t know it yet (or the AI is smart enough not to let us become aware).

Somewhere in between

However, for most people what we’re now seeing isn’t self-awareness, but the ability to process many more inputs into a useful output. From a legal perspective, an example of modern AI is a system programmed to analyse thousands of court cases, look for certain indicators, compare those indicators to a new case, and provide advice on what a court might do. It is still following its programming, handling inputs through to outputs, but it is doing so much faster than a human can that it appears to be intelligent. And it may actually be.
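To make that concrete, here is a minimal, entirely hypothetical sketch of the idea: past cases are reduced to a handful of indicators, and the likely outcome of a new case is suggested by how strongly its indicators match cases that went one way or the other. The indicator names, outcomes, and data are invented for illustration; real systems are far more sophisticated, but the principle of mapping inputs to outputs by rules a human set up is the same.

```python
# Toy illustration only: predict a likely outcome for a new case by comparing
# its indicators against (hypothetical) past cases. Not a real product.
from collections import Counter

past_cases = [
    {"indicators": {"breach_of_contract", "written_agreement", "late_delivery"}, "outcome": "plaintiff"},
    {"indicators": {"breach_of_contract", "verbal_agreement"}, "outcome": "defendant"},
    {"indicators": {"negligence", "injury", "no_warning_signs"}, "outcome": "plaintiff"},
]

def predict_outcome(new_indicators, cases):
    """Score each outcome by how many indicators the new case shares with
    past cases that ended in that outcome, and return the best match."""
    scores = Counter()
    for case in cases:
        scores[case["outcome"]] += len(case["indicators"] & new_indicators)
    return scores.most_common(1)[0][0]

new_case = {"breach_of_contract", "written_agreement"}
print(predict_outcome(new_case, past_cases))  # "plaintiff" on this toy data
```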

That gut feeling

At a recent conference in Sydney, I discussed the elements of AI with several other attendees. One of them saw the difference between AI and human intelligence as being that gut feeling, the ability to simply feel that one output is preferable to another.

My response was that this was also programmed, albeit in our own minds. Our gut feelings are based on experience. If I was to see a few notes on a court decision on an area of law I knew, I would have a gut feeling as to what the decision would be. That gut feeling is based on my experience in law. My mind is quickly weighing up the factors that I see as being the most important, and forming a reaction.

The same could be said if I was driving down a narrow road and saw a car that had just parked ahead of me. My gut feeling would be to be cautious in case the driver opened his door just as I was driving past. It is a feeling gained from experience.

However, I wouldn’t get the same gut feeling in situations where I have no experience. If I were on a sailing boat, I would have no gut feeling as to the wind shifts, as I have no real experience sailing.

In essence, we are acting in the same way as AI. We have been raised to apply our experiences to future situations. Our gut feeling is merely the fastest method of that application. AI can do much the same thing, if given the opportunity to learn. In that way, there is little to distinguish our gut feelings from AI decisions except that an AI is restricted as to what it can analyse.

Applying this to employment/industry

What this all means is that, if taught how, AI can do the same tasks we do, but at a much faster rate. It can also remove bias that we have, as its decisions can be based purely on the facts (it can be programmed to ignore race, etc.). In the legal world this means that the simplest and most time-consuming jobs will go first, as these can be easily automated. Whether all legal jobs will eventually go is another argument. What it can’t do, yet, is what it hasn’t been programmed to do. For lawyers, this could be the one advantage we still have over AI.
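As a rough illustration of what “programmed to ignore race, etc.” could look like, here is a hedged sketch in which protected attributes are simply stripped from a case record before any decision-making model sees it. The field names are hypothetical, and in practice removing a field does not by itself guarantee an unbiased result, but it shows the basic idea.

```python
# Minimal sketch: drop protected attributes from a (hypothetical) case record
# so that whatever model processes it never sees them.
PROTECTED_ATTRIBUTES = {"race", "gender", "religion"}

def strip_protected(case_record):
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in case_record.items() if k not in PROTECTED_ATTRIBUTES}

record = {"claim_value": 12000, "prior_convictions": 0, "race": "recorded_value"}
print(strip_protected(record))  # {'claim_value': 12000, 'prior_convictions': 0}
```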

Conclusion

AI is here and it will continue to get more involved in our lives. While it might not be the machine revolution that many fear, it will become a tool used in more industries as processes become more automated. It should lead to a more productive world, with less bias in our legal system. If used correctly, it should be for the benefit of society. If used incorrectly, we need to hope that AI can be more forgiving than humans often are.

Please contact me if you have any questions, suggestions for future posts, or if you are seeking legal advice.

Arran