If you can’t be filmed, a meta-human will stand in for you!

So, ladies and gentlemen: this is the state of A.I. for the purposes of visualizing music. Now you can generate video of a computer-generated figure “lipsynching” to your song. And believe you me, it looks as real as damnit, as my Papa used to say. The particular platform I subscribe to is Runway A.I., and the program is Gen-2. The trick to getting realistic renderings without morphing or physical abnormalities is to feed the program clear, clean, perfectly deep-etched graphics and clean, perfectly trimmed audio clips. The result is what you can see in the video below, of my latest release, “To Travel Hopefully”.

This is a solution to my much-maligned problem of not being able to sing myself, and of my vocalists being unwilling to appear in the music videos of the songs they have recorded for me (because they are session musicians with other personas). In this song, Ben Alexander (who actually looks disconcertingly like a very young Elvis Presley) now appears as a blonde, muscled, square-jawed hunk. Occasionally his neck muscles flex like he’s got a snake in there. But watch the lips and the teeth for the sibilants and plosives. Spot-on.

Why bother with creating these A.I. figures? Well, for one, people prefer to look at people in videos, rather than patterns or landscapes, and they make more sense of the lyrics if they can see someone form the words with their lips. In this case, the fact that the “singer” is somewhat artificial fits the song, because it is about outer space travel and the far future. But, so I’ve read, A.I. is no longer the future. It’s the present. I’ve just learned how useful it can be, so long as your input is intelligent. As with all computer programs: rubbish in, rubbish out.

To Travel Hopefully music video

Producer details in the video titles at the end.