SoundCloud has quietly updated its terms of use to permit the platform to train artificial intelligence (AI) models on the audio content users upload. As tech ethicist Ed Newton-Rex noted, the new terms, last updated on February 7, include a provision stating that users explicitly agree that their content may be used to “inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies.”
This change has raised concerns, especially since the platform is already collaborating with AI vendors on AI-powered tools for remixing tracks, generating vocals, and creating custom samples. According to a SoundCloud blog post, however, these collaborations include safeguards such as content ID solutions to ensure creators are properly credited and compensated.
The company has not yet said whether users can opt out of the new AI training clause, leaving some worried about their rights over how their content may be used for AI.
SoundCloud’s Commitment to Ethical AI Use

Despite the new provision, SoundCloud says it has never used artist content to train AI models and does not allow third parties to scrape its content for AI purposes. In a subsequent statement, the platform said the February update was intended to clarify how content interacts with AI technologies used within SoundCloud itself, such as personalized recommendations, content organization, and fraud detection. SoundCloud also reassured users that it has safeguards in place to prevent unauthorized use of content for AI training.
A SoundCloud spokesperson explained that the updates reflect the platform’s evolving relationship with AI and that future applications will be designed to support human artists, including AI tools for improving music recommendations, generating playlists, and detecting fraudulent activity. The statement stressed, however, that tools from SoundCloud’s partner Musiio are used solely for artist discovery and content organization, not for training generative AI models.
This move follows similar changes at platforms such as X (formerly Twitter), LinkedIn, and YouTube, which have also updated their policies to allow AI training on user content. Many users have voiced concerns about these shifts, calling for opt-in policies and proper compensation for their contributions to AI datasets.