An Atlantic article published on August 7th brings several experts together to discuss how we get a handle on AI to protect ourselves from its potential harms.
Sarah Myers West, Executive Director of the AI Now Institute, talks about the importance of third-party validation of agreed-upon metrics relating to bias, accuracy, and interpretability.
Alexandra Reeve Givens, President of the Center for Democracy & Technology, calls out the importance of requiring audits of certain types of AI systems.
Ellen Goodman, a professor at Rutgers Law School, calls for audits not only after the technology is built but also during the development of the models.
The author, Matteo Wong, rightly summarizes that we can’t trust builders of this technology when they declare their AI “safe”.
A few of the experts, including Alexandra Reeve Givens and Alex Hanna, Ph.D., of the Distributed AI Research Institute (DAIR), push back against the need for a new federal AI agency, believing that responsibility lies with existing bodies such as the FDA. I disagree with them on this point: the technology is built on highly complex concepts that most experts struggle to understand (at best), and the field is advancing so rapidly that experts in existing regulatory agencies will not fully comprehend the potential harms.
The article goes on to say that tamper-proof seals should be required for image and video content, along with a way for web browsers to detect and validate those seals. Additionally, there is a dire need to evaluate and update copyright protection and to define what it means in the world of generative AI, for both the original creators and the creators using generative AI. We can't wait for litigation to conclude to get these answers, as that will likely take years.
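For readers wondering what a "tamper-proof seal" means in practice, here is a minimal sketch in Python. It is my own illustrative assumption, not a scheme the article prescribes: the publisher signs the raw media bytes with a private key, and any verifier (such as a browser) checks the signature against the publisher's public key. Real provenance standards like C2PA embed richer metadata, but the underlying cryptographic idea is the same. The sketch uses the `cryptography` package.

```python
# Sketch of a content "seal": a detached cryptographic signature over media bytes.
# Hypothetical helper names (seal, verify_seal); not from the article.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature


def seal(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Produce a seal (signature) over the media content."""
    return private_key.sign(media_bytes)


def verify_seal(media_bytes: bytes, signature: bytes,
                public_key: Ed25519PublicKey) -> bool:
    """Return True only if the media is byte-identical to what was sealed."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False


# Usage: seal an image, then show that any edit breaks verification.
key = Ed25519PrivateKey.generate()
image = b"...raw image bytes..."
signature = seal(image, key)

assert verify_seal(image, signature, key.public_key())               # intact
assert not verify_seal(image + b"edit", signature, key.public_key()) # tampered
```

A browser implementing this would only need the signature and the publisher's public key (say, distributed via certificates) to flag content that has been altered since it was sealed; the hard problems the article points at are standardization and adoption, not the cryptography itself.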