The conversation is just starting: morality in AI

I went to an amazing presentation earlier this week. 

Harry Glaser of Periscope Data gave a speech on the moral responsibility data professionals have to safeguard the proper use of AI. According to Harry, "AI unchallenged runs a strong risk of delivering immoral outcomes."


He's right. What happened at Cambridge Analytica shows how powerful the use and misuse of AI technology can be.

Thought leaders like Harry are starting to talk about the role data and other professionals play in being moral custodians of AI technology.

Artificial intelligence needs human intelligence behind it—moral guardrails to guide it. Early adopters of AI have the responsibility to set the tone for its professional and moral use.

A recent op-ed in the New York Times takes this idea further: digital marketers are excited about the economic and commercial potential of AI, but they have all but ignored its potential to be used with ill intent.

"Despite these myriad risks, industry professionals seem to have turned a blind eye to the oncoming specter of A.I., likely because they are optimistic about its commercial potential." (New York Times, 26 March 2018)

From where I sit, I see many marketers grappling with the 'how' of AI. They're getting their heads around what the technology is, what it can do, and how it might impact the scope, scale and delivery of their marketing treatments.

Some marketers are more experienced with AI and have already integrated it into their routine work. These more seasoned AI marketers are the ones who need to be leading the conversation about its moral use.

There are no easy answers here, except to say that the conversation about the moral use of AI technology is only just starting. We need to put aside our excitement about what AI can do in a commercial sense and start debating what the moral use of AI in marketing looks like, how we should uphold it, and what should happen to those who don't.