Elizabeth Kiehner, who leads design operations globally for IBM Interactive Experience with multi-disciplinary expertise and an appetite for business growth, highlights the weight our biases and values carry when it comes to programming artificial intelligence. What is their role, and where does algorithmic accountability come into play?
“When we think about artificial intelligence, we arrive at this crossroads where we often hear a quote like this, a very apocalyptic picture painted for us about what the future is going to be like.
I’m here to fly in the face of that. You might recall this Stephen Hawking quote, or similar quotes from Elon Musk or even from Bill Gates. I have a much more optimistic outlook that I want to talk about with you here today.
So when we think about AI, the term was first coined in 1956. That’s 61 years ago, and since then we’ve done very little to educate the general public on the nuances of artificial intelligence: what it actually means, and what its impact is on our lives.
So the first thing I want to draw a distinction around is the difference between Artificial Intelligence and Artificial General Intelligence, a newer acronym: AGI. If you remember one thing tonight, come away with AGI, and tell your friends. Because artificial general intelligence is what people typically think of when they imagine that robot buddy they’re going to have in the future: one with full autonomy and total consciousness, which might be as intelligent as them, or even a little bit more intelligent.
We are so far away from that today. And it’s something that we may not even see in our lifetime. I think that we’re focusing our attention on the wrong thing. I want to argue that we should be scrutinizing people. It’s the people who are doing the programming of AI today that really have an impact on driving the future of where this technology goes.
And people are wonderful, but we all have so much baggage, don’t we? We have biases, we have prejudices, and we have all sorts of different ethical values, beliefs, and religions that inform what we do every day. We bring all of that with us, quite frankly, to our work.
But luckily, we have new rules for the cognitive era. At IBM we have established the role of an ethics adviser, we have field testing in our production process before anything gets deployed, and we have explanation-based collateral systems.
So if you’re working with Watson and you ask Watson a question, you will not only get a very quick answer. You’ll find out the level of confidence Watson has in that answer, and you’ll find out all of the rationale behind the decision-making process. So that’s really great, right?
But what about bias? What happens when we talk about bias? Who here in this room would purport to be biased? Raise your hand. That’s great. It’s everyone. It’s inevitable: humans are all biased. It’s part of what makes us human, quite frankly. These gut instincts, the feelings we sometimes get in the pit of our stomach that give us a sense of “I should do this thing instead of that,” all of these underlying fundamental drivers, which sometimes include religion, help inform the up to 35,000 decisions we make on average each and every day.
What about turning our attention now to religion? Who here in the room believes in God? Whether you do or not, you’ve certainly heard this quote before; we all have. And I would argue that we stand here today at a point where we’re creating artificial intelligence in our own image, and we need to be very, very careful about how we’re doing that and how we’re programming for it.
If we turn the tables the other way and ask Watson, “Hey Watson, do you believe in God?”, how would Watson respond? Well, that would actually depend on which instance of Watson you’re talking to. Watson is trained in several different domains and has very deep domain knowledge in each of those areas. So are we talking to financial services Watson, or are we talking to legal Watson? We might be lucky enough to talk to the Watson that won Jeopardy!
But today let’s pretend that we’re talking to the Watson that’s very skilled in understanding medicine and oncology. When we think about that, we’re definitely treading in the territory of life and death, and with life and death the topic of religion inevitably comes up.
Just think back a few years to all of the controversy surrounding the Terri Schiavo case. Then pivot for a moment to other legal aspects, such as the fact that there are 13 countries in the world where being an atheist is illegal. Not only is it illegal, it is punishable by death. You see how important it is for us to consider what religion means when we train our cognitive systems. So we have these people that we call quality experts, and it’s pretty easy for us now to take structured and unstructured data and feed it into Watson.
But what we have to understand, and where the subjectivity comes in, is another acronym: URL. That’s our ability to help Watson understand, reason, and learn. That’s the area, again, where we need to focus our attention: what we’re programming into the systems, as well as algorithmic accountability.
What’s happening in our data sets? Where is the data coming from? Is it coming from a safe source? Are our data sets biased? Ultimately, when we look at the values we’re embedding into the system, we’re going to need to agree on what those values are and where they’re coming from. Are they reflective of Judeo-Christian values? Are they reflective of secular values? Or are they reflective of values that we haven’t even defined yet?
“That is what keeps me up at night. Despite that, I have a very, very positive feeling about the future. In just three years we’ll see 5 billion people connected to the Internet. That’s more than half of the world’s population. All these different people of different races, religions, and colours, working together online, be it with AI or with quantum computing on the cloud. It’ll be a miraculous time for us to think together. So what I ask of you here today, the question on the table that I want to leave you with, is really this: The question is not whether machines think, but whether humans do.”
Elizabeth Kiehner is the Director of Global Design Services and Chief of Staff at IBM, with multi-disciplinary experience in user experience, design, and technology.
Elizabeth Kiehner is a design thinking evangelist who believes in harnessing the power of creativity to solve global business challenges. By unearthing human insights, Liz co-creates customer-centric experiences that have the power to transform the enterprise.
“She’s the co-creator of GM’s new OnStar Go offering—what the car company is calling the “first cognitive mobility platform” that uses IBM’s Watson learning supercomputer to plug drivers into connected services and pick up skills based on people’s patterns. For instance, it could pre-order a coffee for pickup from a drive-in window, or use listening habits to create a personalized radio station,” according to TechCrunch.
Her creativity and passion have impacted some of the world’s most recognized brands through regular engagements and workshops with the C-suite. She recently co-created OnStar Go, the world’s first cognitive mobility platform, with General Motors.
For two decades Elizabeth has led creative, design, and technology teams to produce groundbreaking ideas for organizations including Thornberg & Forester (which she co-founded), Havas, Suissa Miller, Trollback & Company, and Freestyle Collective. Her portfolio includes campaigns and game-changing digital, mobile, and wearable platforms for brands such as Google, Microsoft, Viacom, American Express, Apple, Turner, Fidelity, Schwab, GM, GE, Khan Academy, and many more.