FSG Blog
January 15, 2019

Intelligent Technology and the Coming Liability Crisis

Kevin McDermott
Principal

Intelligent technology is already pervasive

We live lives saturated in distributed intelligence.  In ways we seldom think much about, intelligent technology is making consequential choices for us all the time, whether in our health care, our hiring decisions, our personal finances, even our police work.

Intelligent technology is so woven into our lives that we no longer see it.  It is not us.  Yet we permit it to decide for us.

It’s no stretch at all to think about AI futures in which the pervasiveness of intelligent technology—and the phenomenon of its invisibility—accelerates. Unintended consequences will accelerate as well.  Among those could be a liability-insurance crisis like that in the United States in the mid-1980s.

An expansion of consumer rights in US courts in the 1970s created a ticking time bomb for insurers, who had no scenarios in which to price their expanded exposure.  Ten years later the bomb went off.  Losses mounted.  So did premiums for general liability, which tripled between 1984 and 1987.  Casualty insurance looked likely to become extinct as a category.  The consequence was a prolonged convulsion in tort law.

Intelligent technology creates ethical dilemmas

So now picture a driverless car 50 years later, in 2035.  Say the car jumps a curb and strikes a child.  How will insurers decide where liability resides?  Who will they say was “driving”: the owner of the car, the car’s manufacturer, the software designer, the person who installed sensors in the curb?

It’s the sort of problem that, in AI scenario thinking, engineers typically discuss in terms of “ethics”.  Engineers use “ethics” not in its conventional moral sense but to describe a response to unintended biases hidden in the datasets on which systems are trained: a car, for example, that can’t understand the voice of a woman, or that doesn’t recognize dark-skinned people because none of the engineers who built it were women or dark-skinned.  If your ambition is to sell cars, this is more of an engineering problem than a moral one.
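
To make this engineering sense of “ethics” concrete, here is a minimal sketch of the kind of audit it implies: disaggregating a trained model’s error rate by demographic group.  Everything in it (the model outputs, the group labels, the numbers) is invented for illustration rather than drawn from any real system.

```python
# A sketch of the bias audit engineers mean by "ethics": compare a model's
# error rate across demographic groups.  All data here is hypothetical.
from collections import defaultdict

def error_rate_by_group(predictions, labels, groups):
    """Return the fraction of wrong predictions for each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Imagined outputs from a voice-recognition system trained mostly on male
# voices, checked against what each speaker actually said.
predictions = ["stop", "go", "go", "stop", "go", "stop"]
labels      = ["stop", "go", "stop", "stop", "stop", "stop"]
speakers    = ["male", "male", "female", "male", "female", "female"]

print(error_rate_by_group(predictions, labels, speakers))
# {'male': 0.0, 'female': 0.666...} -- a gap like this, caught before the
# product ships, is the "engineering problem" version of ethics.
```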

For the moral sense of “ethics”, try this analogy for the car that’s jumped the curb: if my kid punches another kid in the nose, I have not behaved unethically (which is more than I can say for my kid).  But I am responsible.  I can deny responsibility, but only with difficulty, a difficulty that will before long become unsupportable.

Such a view of liability among creators and sellers of distributed technologies is not an alternative future; it’s already the norm.

Look at how reflexively, for example, social-media platforms deny responsibility for the frightening purposes to which users sometimes put those platforms. The builders and maintainers of those platforms portray themselves as bystanders.  To use a term of moral philosophers, they deny agency.

In a rather different context, Donald Trump said not long ago that “the buck stops with everyone”.  And when the buck stops with everyone, it stops with no one.

See you in court.


1 thought on “Intelligent Technology and the Coming Liability Crisis”

  1. Kevin: Very nicely argued.  I never really thought about how engineers “distance” themselves from moral ethics when they consider system design choices.  Additionally, while it did not succeed in the courts (if I recall correctly), an effort to shift responsibility for gun deaths to gun manufacturers did gain some public acknowledgment.  I think your warning on liability is well taken.

