Episode 4: Building creative restrictions to curb AI abuse

For better or worse, AI is changing our world. Thanks in part to imaginative dystopian science-fiction films about AI systems taking on a life of their own beyond human control, a discourse about the ethical use of AI has begun. However, this discourse is currently dominated by powerful companies, governments and elite universities. Often excluded from these conversations are those with less power, like individual citizens and even software developers.

In this podcast, Stefano Maffulli, executive director of the Open Source Initiative, chats with David Gray Widder, a Ph.D. student in the School of Computer Science at Carnegie Mellon University. Widder has been investigating AI from an ethical perspective, studying the challenges software engineers face related to trust and ethics in AI. He’s conducted his research at Intel Labs, Microsoft and NASA’s Jet Propulsion Laboratory.

Drawing on Widder’s research and findings, this episode explores some intriguing questions, such as:

  • What is a deepfake, and how is deepfake technology being used today, for good and for ill?
  • What ethical responsibility do corporations bear for the technology they develop or use, and what power or control do they have over it?
  • In what ways do open source licenses limit developers’ sense of responsibility for, and control over, how their software is used downstream?
  • Some developers subscribe to a notion of “technological inevitability.” What does that mean, and how does this philosophy shape their ethical approach to AI technology?
  • Similarly, some developers believe in “technological neutrality.” How does this ethical perspective play out?
  • With respect to ethical AI, why is open source well suited to governing how software is implemented, but poorly suited to governing how it is used?
  • What are we, the broader open source community, to take away from these findings as we frame a discussion about AI ethics?
  • How might the open source community help bring about the best future for AI and make it as trustworthy as possible?

This episode sounds a warning: we can’t let the conversations about ethical AI be driven by the interests of big tech companies alone, because their profit motives bias the outcomes. Open Source communities can be a balancing force, adding the voice of a broad, diverse community that is not beholden to profit. However, Open Source has its own issues to address when it comes to AI ethics: publishing software that can be used for harm, and letting anyone use it for anything, can cause harm in ways that proprietary, closed source software does not.

If you’re a software developer, you are going to love this Deep Dive episode, but be warned: it will definitely challenge some of your perspectives, and it may even inspire you to make your voice heard. We double-dog dare you to listen!

Subscribe: Apple Podcasts | Google Podcasts | PocketCasts | RSS | Spotify

In its Deep Dive: AI event, OSI is diving deep into the topics shaping the future of open source business, ethics and practice. Our goal is to help OSI stakeholders frame a conversation to discover what is acceptable for AI systems to be called “Open Source.” A key component of Deep Dive: AI is our podcast series, in which we interview academics, legal experts, policy makers, developers of commercial applications and representatives of non-profits.