The tracks on my latest album, Painting Music, are the results of my experiments with online platforms that produce both sounds and visuals from a user’s input. One of these is “Blob Opera”. All of these platforms are experimental: as users work with them and upload their creations, the database expands and the system “learns”, producing different – better – results. So the quality and pleasantness of the output varies, depending on how the signals are processed. “Blob Opera” is a machine learning experiment by David Li, in collaboration with Google Arts and Culture, using the latest web audio technology. The user interface consists of four blobby shapes with rolling eyes, wriggly bodies and moving mouths that the user can move around to generate sound.
The Blob Quartet
“Blob Opera” has a wide range of keys, notes and harmonies that the user can generate. With sufficient experimentation and practice, “Blob Opera” users can design quite long pieces of music using the soprano, mezzo-soprano, tenor and bass simulated voices of the blobs.
What came from Blob Opera?
The composition is called A.I. Opera. It grew from snatches of notes into a song for a quartet of operatic voices, with fully developed, original scores for the melody and vocalization, plus string and horn sections.
Next to be released: A.I. Opera
Try it – it’s fun!
The creators of the program collaborated with four opera singers to teach a machine learning model how to sing. Tenor Christian Joel, bass Frederick Tong, mezzo‑soprano Joanna Gamble and soprano Olivia Doutney recorded 16 hours of singing to train an algorithm called a convolutional neural network. Additional singing was provided by Ingunn Gyda Hrafnkelsdottir and John Holland-Avery.
In the experiment, you don’t hear the singers’ actual voices, but the machine learning model’s understanding of what opera singing sounds like, based on what it learnt from them. (It actually sounds pretty good – except, of course, when your cursor wobbles and the blobs emit off-key shrieks, or when you drag too fast or too slow and all you get are doleful moans.)
The user generates sound by dragging the singing blobs on the screen up and down to change pitch, or forwards and backwards for different sounds and vocalizations. Another machine learning model lets the blobs respond to and harmonize with your input in real time.
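To get a feel for the pitch half of that mapping, here is a minimal sketch – not Blob Opera’s actual code, whose internals aren’t public – assuming the drag’s vertical position is normalized to a 0.0–1.0 range spanning two octaves above a base note:

```python
# Illustrative sketch only: the real Blob Opera mapping is not public.
# Assumption: vertical drag position is normalized to 0.0 (bottom) .. 1.0 (top)
# and covers two octaves, sliding continuously between notes.

def position_to_frequency(y: float, base_midi: int = 48, octaves: int = 2) -> float:
    """Map a normalized vertical drag position to a frequency in Hz."""
    y = max(0.0, min(1.0, y))            # clamp to the blob's reachable range
    midi = base_midi + y * 12 * octaves  # continuous pitch, like a vocal slide
    return 440.0 * 2 ** ((midi - 69) / 12)  # standard equal-temperament MIDI-to-Hz formula

print(round(position_to_frequency(0.0), 1))  # bottom of the drag: C3, about 130.8 Hz
print(round(position_to_frequency(1.0), 1))  # top of the drag: C5, about 523.3 Hz
```

Dragging halfway up would land on middle C (about 261.6 Hz); the forwards/backwards axis, which shapes the vowel sound, would need a separate mapping into the model’s vocal parameters.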
The developers demonstrate that the blobs can sing properly in a choir by providing recordings of popular songs on the “Take the Blobs on Tour” page of the site, which shows what can be achieved with accuracy and control.