Wednesday, November 7, 2018

A Morality Setting for Artificial Intelligence

Many of us interact with artificial intelligence (AI) on a daily basis without realizing it. If you look at the recommendations provided by Netflix or Amazon, you are being served results from some sort of AI. Most of the time, those recommendations are based on your previous purchases or actions. If you watched one of the Marvel superhero movies, you might be recommended another one you haven't seen yet.
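To make that concrete, here is a minimal sketch of how such a recommendation might be computed. This is item-based collaborative filtering in its simplest form, not how Netflix or Amazon actually do it; all the names and watch histories below are made up for illustration.

```python
# A toy sketch of "people who watched what you watched also watched..."
# All users and titles are hypothetical.
from collections import Counter

# Each user's watch history (made-up data).
histories = {
    "alice": {"Iron Man", "Thor", "Avengers"},
    "bob": {"Iron Man", "Avengers", "Notting Hill"},
    "carol": {"Thor", "Avengers"},
}

def recommend(user):
    """Rank unseen titles by how much their watchers overlap with this user."""
    seen = histories[user]
    scores = Counter()
    for other, other_seen in histories.items():
        if other == user:
            continue
        overlap = len(seen & other_seen)      # shared titles = similarity
        for title in other_seen - seen:       # candidates the user hasn't seen
            scores[title] += overlap
    return [title for title, _ in scores.most_common()]

print(recommend("carol"))  # → ['Iron Man', 'Notting Hill']
```

Notice that nothing in this logic knows *why* you watched those movies or whether you still want to; it only extrapolates from past behavior, which is exactly the problem discussed below.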

Now imagine that you want to change your behavior. Perhaps you have decided that smoking is bad for your health and want to quit, or that you want to stop watching violent movies. At first you may have a firm resolve not to do those things anymore, but then you get an e-mail telling you about a sale on violent movies or smoking products. You may not give in immediately, but you may find it hard to resist the temptation and slip back into your old ways.

So how do you tell online retailers and service providers that you would like to change your behavior, and to please stop sending you recommendations for products you are no longer interested in purchasing? Right now, you can't, short of doing something drastic like changing your phone number and e-mail address and creating all new accounts.

This also leads to the question of whether service providers should build in some sort of morality setting. When you go to watch the latest superhero movie, how would you feel if you received the message, "Violent crimes are up; perhaps you should watch a romantic comedy"? Or you go to purchase cigarettes online (I don't even know if that is possible) and see, "Smoking is bad for you; how about some nicotine gum instead?" My instinct says nobody would be happy, even though we might all be better off with such suggestions.
