Balancing Innovation and Safety: Navigating the Dilemma of Open-Sourcing AI Models
338 | Sun 04 Aug 10:45 a.m.–11:30 a.m.
Presented by
Abhishek Gupta
@atg_abhishek
https://abhishek-gupta.ca
Abhishek Gupta is the Director for Responsible AI at the Boston Consulting Group (BCG), advising clients on building end-to-end Responsible AI programs. He is also the Founder & Principal Researcher at the Montreal AI Ethics Institute, an international non-profit research institute with a mission to democratize AI ethics literacy. He holds the BCG Henderson Institute Fellowship on Augmented Collective Intelligence, studying the complementary strengths of hybrid collectives of humans and machines, and is a Senior Fellow at UNIDIR working on national security and technology issues.
Through his work as Chair of the Standards Working Group at the Green Software Foundation, he led the development of the Software Carbon Intensity (SCI) specification, now an ISO standard, which enables comparable and interoperable measurement of the environmental impacts of AI systems.
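For context, the SCI specification defines emissions as a rate rather than a total: SCI = ((E × I) + M) per R, where E is the energy consumed by the software (in kWh), I is the carbon intensity of that energy (in gCO₂e/kWh), M is the embodied emissions of the underlying hardware, and R is a functional unit such as an API call, a training run, or a user. Expressing the score per functional unit is what makes measurements comparable across systems.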
He serves as a technical expert and member at the NIST AI Safety Institute Consortium, the Standards Council of Canada, Accessibility Standards Canada, the Partnership on AI, the AI Alliance, and the Linux Foundation (AI & Data).
His work focuses on applied technical, policy, and organizational measures for building ethical, safe, and inclusive AI systems and organizations, specializing in the operationalization of Responsible AI and its deployments in organizations and assessing and mitigating the environmental impact of these systems.
He has advised national governments, multilateral organizations, academic institutions, and corporations across the globe. His community-building work has been recognized by governments across North America, Europe, Asia, and Oceania. He is a highly sought-after speaker, with talks at the United Nations, the European Parliament, the G7 AI Summit, TEDx, Harvard Business School, and the Kellogg School of Management, amongst others. His writing on Responsible AI has been featured in the Wall Street Journal, Forbes, MIT Technology Review, Protocol, Fortune, and VentureBeat, amongst others.
He is an alumnus of the US State Department International Visitor Leadership Program (IVLP), representing Canada, and received The Gradient Writing Prize 2021 for his work on The Imperative for Sustainable AI Systems. His research has been published in leading AI journals and presented at top-tier ML conferences such as NeurIPS, ICML, and IJCAI. He is the author of the widely read State of AI Ethics Report and The AI Ethics Brief. He formerly worked at Microsoft as a Machine Learning Engineer in Commercial Software Engineering (CSE), where his team helped solve the toughest technical challenges faced by Microsoft's biggest customers; he also served on the CSE Responsible AI Board. You can learn more about his work at https://abhishek-gupta.ca
Abstract
The recent trend of leading AI labs either open-sourcing their models or restricting access has ignited a critical debate: How should we share increasingly capable AI models? Open-sourcing, the practice of making model architecture and weights freely accessible, has been a cornerstone of software and AI development, fostering external oversight, accelerating progress, and decentralizing control. Yet, as AI models become more powerful, the risks of misuse and unintended consequences escalate.
This talk scrutinizes the delicate balance between the benefits and risks of open-sourcing highly capable foundation models. While open-sourcing has historically been a boon, we contend that for certain future models, the potential dangers may outweigh the advantages. We propose that such models should not be open-sourced, at least initially, to mitigate these risks.
We then explore alternative, non-open-source model-sharing options that could preserve the benefits of open source while minimizing potential harms. Our recommendations aim to guide developers, standard-setting bodies, and governments in establishing safe and responsible model-sharing practices, ensuring that the progress of AI remains both open and secure.