AI entertainment has been fully integrated into our lives, or into mine at least. Whether it’s on Instagram Reels or YouTube Shorts, my daily rotation wouldn’t be complete without an AI animation or, more recently, a hyper-realistic video that gives itself away as fake only because of how funky the trees look.
This new form of media has redefined how we view entertainment — whether by making us more aware of the content in every scroll or by redirecting our scrutiny to how tree branches look.
Many have come to accept that this change is inevitable, and that AI will be a growing part of our lives, but few expected its tangible presence in music.
Controversy surrounding AI in the music industry flared with the release of KATSEYE’s single “Gnarly.” Following the song’s music video, many fans grew skeptical of its content, pointing out inconsistencies in the design and apparent signs of AI technology. That same discussion recently resurfaced with their newest single, “Internet Girl,” with listeners critiquing the production’s experimental elements and viewers of the song’s visualizer raising concerns over the appearance of its 3D models.
While HYBE, the South Korean music label that KATSEYE is contracted under, hasn’t directly credited any of the girl group’s production to artificial intelligence, the possibility that AI assistance could undercut artistic originality makes the fanbase’s sentiments understandable.
The truth of the matter is, AI was integrated into music long before these recent concerns. HYBE itself acquired the AI audio software company Supertone in 2022 and debuted its musical project MIDNATT in May 2023. Starting with the voice of South Korean singer Lee Hyun, Supertone modified his vocals using sample data from native speakers of different languages to release the single “Masquerade” in six languages: Korean, English, Spanish, Chinese, Japanese, and Vietnamese.
Similarly, Vocaloid, a singing voice synthesis software developed by Yamaha Corporation, gave rise to well-known virtual singers like Hatsune Miku and Kasane Teto. In recent years, users have shifted from manually stitching voice samples together to relying on AI technology to analyze the samples in the bank, ensuring that the finished product also carries accurate nuances like vibrato and even breathing patterns.
In some ways, I stand with the critics of KATSEYE, worrying about how AI will hinder artistic originality, but I also have reasons to believe that AI will help us become more creative in the future. Every time I pick up a camera to take photos for the Torch, I’m reminded of how current sentiments about AI parallel those from when the camera was first invented. Much like today, critics called the camera a soulless invention that would be the end of traditional art. As we now know, the opposite came true: we embraced it as a form of art, creating countless new opportunities for self-expression and artistry.
However, unlike the camera, AI generation relies on computer models capable of learning on their own, with little to no human input.
To put it simply, the integration of AI into music, or any media, is complex, but I will say this: I truly believe that in the same way ChatGPT can recognize different speech patterns, people too can adapt to this rapidly evolving landscape, finding ways to navigate that complexity while keeping the interests of the world close to heart.
