Episode 5: Why Debian won’t distribute AI models any time soon

When “artificial intelligence” is mentioned in casual conversation, much of the world thinks about chess. Alan Turing, widely considered to be the father of computer science and artificial intelligence, published the first chess-playing program in 1951. In the decades that followed, the science of artificial intelligence progressed, and AI-based chess engines improved to the point that IBM’s “Deep Blue” finally beat reigning world chess champion Garry Kasparov in 1997. Fast forward a quarter century, and where is AI today?

Stefano Maffulli, executive director of OSI, recently discussed modern AI applications with Mo Zhou, a postdoctoral AI researcher at Johns Hopkins University. Mo has also been a Debian volunteer since 2018 and currently maintains Debian’s machine learning policy, giving him a distinctive perspective on the intersection of AI and open source software.

In this podcast, Maffulli and Mo chat about how AI has evolved, exploring questions such as these:

  • How are recent advances in other technologies, such as big data and hardware capacity, impacting the scope and capabilities of AI today?
  • What is a neural network and what role does training data play in machine learning?
  • How does the need for massive training data sets complicate the notion of open source AI and machine learning software?
  • What is dataset bias, and how can that compromise AI systems? What are some of the ways experts are trying to overcome this problem?
  • What is Debian? What challenges has the Debian community faced with regard to enabling the hardware capacity and speed required for AI applications?
  • What kinds of licensing schemes are most popular in the AI research community?
  • What optimistic outcomes should we aspire to with respect to honoring the principles of open source and free software as AI technology progresses and use cases become even more pervasive?

This discussion sheds light on how “the nuts and bolts” of modern AI systems make AI software fundamentally different from other software, thereby straining our understanding of how Open Source principles could or should apply. If you hate that feeling of not knowing what you don’t know, this podcast is for you. Listen in to learn what’s inside AI and why it matters.


Subscribe: Apple Podcasts | Google Podcasts | PocketCasts | RSS | Spotify

In its Deep Dive: AI event, OSI is diving deep into the topics shaping the future of open source business, ethics and practice. Our goal is to help OSI stakeholders frame a conversation about what it takes for AI systems to be considered “Open Source.” A key component of Deep Dive: AI is our podcast series, in which we interview experts from academia, legal and policy experts, developers of commercial applications, and non-profits.