
Artificial intelligence reshaping art, music

IN the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls “punk-influenced bluegrass” — “Johnny Rotten crossed with Johnny Cash”. But, what he really wanted to do was combine his days and nights, and build machines that could make their own songs.

“My only goal in life was to mix AI (artificial intelligence) and music,” Eck said.

It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, Gödel, Escher, Bach: An Eternal Golden Braid.

Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.

But, during the next two decades, working on the fringes of academia, Eck kept chasing the idea, and eventually AI caught up with his ambition.

Last spring, a few years after taking a research job at Google, Eck pitched the same idea he had pitched to Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music, but also many other forms of art, including sketches, videos and jokes.

With its empire of smartphones, apps and Internet services, Google is in the business of communication, and Eck sees Magenta as a natural extension of this work.

“It’s about creating new ways for people to communicate,” he said during a recent interview at the Google AI research headquarters in Mountain View, California.

The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behaviour by analysing vast amounts of data.

By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognise a bike. This is how Facebook identifies faces in online photos, how Android phones recognise commands spoken into them, and how Microsoft Skype translates one language into another. But, these complex systems can also create art. By analysing a set of songs, for instance, they can learn to build similar sounds.
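To make the idea of learning from examples concrete, here is a minimal, hypothetical sketch in Python using the PyTorch library (not what Google's own systems use): a tiny network is shown labelled synthetic "images" and gradually adjusts its internal weights until it can tell two classes apart. The data, network size and settings are illustrative assumptions only.

```python
# Minimal sketch (not Google's system): a tiny neural network that learns to
# separate two classes of synthetic "images" purely from labelled examples.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Fake data: class 0 "images" are darker on average, class 1 are brighter.
images = np.concatenate([rng.normal(0.3, 0.1, (500, 64)),
                         rng.normal(0.7, 0.1, (500, 64))]).astype("float32")
labels = np.concatenate([np.zeros(500), np.ones(500)]).astype("int64")

x = torch.from_numpy(images)
y = torch.from_numpy(labels)

# A small feed-forward network: it starts with random weights and adjusts
# them to reduce its classification error on the examples it is shown.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

accuracy = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {accuracy:.2%}")
```

The same principle scales up: swap the synthetic arrays for millions of real photos, or for recorded songs, and the network learns the patterns that define them.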

As Eck says, these systems are at least approaching the point — still many, many years away — when a machine can instantly build a new Beatles song or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.

But, that end game — as much a way of undermining art as creating it — is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists, but to give them tools that allow them to create in entirely new ways.

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But, this is different. Rather than layering sounds, Eck and his team are combining them to form something that did not exist before, creating new ways for artists to work.

“We’re making the next film camera,” Eck said. “We’re making the next electric guitar.”

Called NSynth, this particular project is only just getting off the ground. But, across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other AI techniques.

“This work has exploded over the last few years,” said Adam Ferris, a photographer and artist in Los Angeles. “This is a totally new aesthetic.”

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyses a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.
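The underlying mechanism is roughly gradient ascent on the image itself: rather than adjusting the network, the pixels are nudged so that whatever the network already detects becomes more pronounced, then the process repeats. The sketch below is a simplified, hypothetical illustration of that idea in Python with PyTorch and a stock pretrained network; it is not Google's DeepDream code, and the file names and settings are made up.

```python
# Rough sketch of the DeepDream idea (simplified, not Google's original code):
# nudge the pixels of an image so that whatever a chosen network layer already
# "sees" in it becomes stronger, then repeat.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].eval()
for p in model.parameters():
    p.requires_grad_(False)

img = Image.open("dog.jpg").convert("RGB")          # any input photo (placeholder name)
x = T.Compose([T.Resize(224), T.ToTensor()])(img).unsqueeze(0)
x.requires_grad_(True)

for step in range(20):
    activations = model(x)
    # The "score" is the strength of the layer's activations; raising it
    # exaggerates whatever patterns the layer detects (fur, eyes, swirls...).
    score = activations.norm()
    score.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)
        x.grad.zero_()

T.ToPILImage()(x.detach().squeeze(0).clamp(0, 1)).save("dog_dream.jpg")
```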

At the same time, a number of artists — like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris — are exploring neural networks in other ways.

In January, Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move.

While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analysed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s.

“Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him, these new computational techniques create a broader palette for artists.”

The veteran tech writer covers emerging technologies for The New York Times from its San Francisco bureau.
