Meta has introduced AudioCraft, a generative artificial intelligence tool that turns text prompts into musical compositions, and has released it as open source for research purposes. AudioCraft is built from a trio of models – MusicGen, AudioGen, and EnCodec.
To train MusicGen, Meta used a staggering 20,000 hours of licensed music, including 10,000 “high-quality” licensed music tracks.
Notably, Meta’s researchers acknowledged during the unveiling the ethical issues they confronted while building their generative AI models. It is hard to judge how quickly AudioCraft will affect life as an independent label, but it seems set to change how music is created, licensed and shared.