A recently launched initiative called Fairly Trained aims to bring more accountability to the development of AI systems by certifying models that obtain proper consent for their training data.
Everything you need to know:
- Fairly Trained is a new certification program launched to promote transparency and ethical practices around how AI models are trained.
- The initiative emphasizes obtaining consent from creators for any copyrighted works used to develop AI systems.
- By highlighting companies that prioritize consent, Fairly Trained aims to distinguish between approaches and incentivize respecting creator rights.
Fairly Trained was created by Ed Newton-Rex, who previously resigned from his role as VP of Audio at generative AI company Stability AI over its use of copyrighted works in model training. Newton-Rex serves as CEO, with guidance from advisors including Tom Gruber, co-creator of Apple’s Siri and co-founder of AI music startup LifeScore, as well as lawyer Elizabeth Moody and leaders in the music publishing industry.
The initiative launched with backing from Universal Music Group and organizations like the Association of American Publishers.
It debuted by certifying nine AI companies, eight of them focused on music generation: Beatoven, Boomy, Endel, LifeScore, Rightsify, Somms.ai, Soundful and Tuney. The ninth certification went to image generator Bria, signaling that Fairly Trained intends to cover multiple creative fields.
In discussing the new certification scheme, Newton-Rex explained: “Everyone gets very hung up on the legal question. And the legal question is important. It will be decided, and it may well be that it’s not fair use. Probably it will be super-nuanced. That is all really, really important.”
He went on to say: “But at the same time you can set yourself apart from the legal question and just ask: ‘Really? If you’re training a system – and I think people get hung up on this word ‘training’, so let’s just use the word ‘building’ – if you’re building a product, and you’re using this [copyrighted] material to build that product, and then the product you’re building then competes with that material?’ I just can’t square that, even setting the legal argument apart.”
Initiatives like Fairly Trained aim to build trust between creators, the public and the AI industry by establishing best practices for sourcing training data with consent.
Newton-Rex noted: “Fundamentally, I think you’re in a better position if people have more information. One of the problems is that there is a big, big spectrum of approaches people take.” He added: “To me, though, there is such a clear divide between companies who scrape, essentially, and do so without consent, and companies who are going and getting consent. There’s such a clear break there, I think that’s worth highlighting.”
Newton-Rex expressed his hope that the certification would show that “hey, there are already [AI] companies getting this consent. So go and work with them. And hopefully it’s a spur for more of that.”
In summary, efforts to promote open discussion on questions of consent, licensing and transparency around training data will be key to cultivating understanding between creatives and the AI industry going forward.