This Post Was Written By A Bot


Machine learning could be the next generation of tools for creatives.

Machine learning is everywhere. Social feeds, media streams, dating sites, and large-scale problem solvers are leveraging it to tie abstract data to concrete decisions. It's one of the most talked-about topics in the tech world, yet most people are just starting to grasp the influence it has on our lives, including in the creative world. This was the topic of a panel I attended at last week's Resonate festival in Belgrade: "Machine Learning for Artists, Musicians, Gamers and Makers."

So why machine learning? This is a valid question, especially if you are used to a more traditional creative workflow. Are Cylons going to take my job, design the next groundbreaking experience, and build it in a fraction of the time? Is the machine the artist, or am I? Here are a few reasons, gleaned from the panel, why machine learning is worth a deeper look.

Machine learning poses an exciting new direction for art and creative work.

A recent use of machine learning in the arts was a #deepdream project by Memo Akten. His psychedelic effects were created by flipping image classification around: instead of asking a neural network what an image contains, you ask it to make a source image look "more like" a certain image category. The algorithm looks for groups of pixels in the source whose features resemble the target category, then nudges those pixels to make each local feature match the target more closely. Do this enough times and you get those trippy dog-squirrel images. Of course, this is just one facet of the machine learning story. The overall takeaway is that machine learning poses an exciting new direction for art and creative work, and I for one am excited to dive in.
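To make the "more like" idea concrete, here is a minimal sketch of class-score gradient ascent in PyTorch. This is not Akten's actual code; the model choice, step size, iteration count, input file name, and target class index are all assumptions for illustration.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained ImageNet classifier (torchvision downloads the weights).
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "source.jpg" is a placeholder for whatever image you want to transform.
img = preprocess(Image.open("source.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

target_class = 207  # an ImageNet dog breed; the index is an assumption

# Gradient ascent on the pixels: repeatedly nudge the image so the
# network's score for the target category goes up, i.e. make the image
# look "more like" that category to the network.
for _ in range(20):
    model.zero_grad()
    score = model(img)[0, target_class]
    score.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Real Deep Dream implementations typically maximize the activations of an intermediate layer rather than a single class score, and add tricks like multi-scale processing, but the pixel-nudging loop above is the core idea.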

You can map endless inputs to create endless outputs.

In most interactive work we are used to a standard 1:1 or 1:many interaction model. If we move a slider, maybe the volume of a sound changes. If we raise our arm and a Kinect camera is tracking us, maybe the pitch and the filter cutoff change. The process of mapping these relationships by hand is grueling at best. Using machine learning tools like the Wekinator application by Dr. Rebecca Fiebrink (which I was fortunate enough to play with), we can map any number of inputs to any number of outputs. Anything from a touchscreen to a Kinect to a Leap Motion, or any other kind of sensor, can feed a system whose output is a multidimensional regression over all those variables. More simply put: when my hands are in this position, the output should look or sound sort of like this; when my hands are in this other position, the output should look or sound more like that. This approach allows for much faster translation of ideas into actual creative production. Rather than spending hours mapping specific inputs to specific outputs, you give the system a handful of examples and let the algorithms do the work.
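Wekinator itself is a GUI application driven by OSC messages, but the underlying idea, training a regression from example input poses to example output parameters, can be sketched in a few lines. Here is a toy version using scikit-learn; the sensor values, sound parameters, and network size are all made up for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy training data: each input row is a sensor reading (say, x/y/z
# positions of two hands), and each target row is a set of sound
# parameters (pitch in Hz, filter cutoff in Hz, volume). All values
# here are invented examples.
X = np.array([
    [0.1, 0.9, 0.2, 0.8, 0.1, 0.5],   # "hands up" pose
    [0.7, 0.1, 0.6, 0.2, 0.9, 0.4],   # "hands down" pose
    [0.4, 0.5, 0.4, 0.5, 0.5, 0.5],   # neutral pose
])
y = np.array([
    [880.0, 4000.0, 0.9],   # bright and loud
    [110.0,  300.0, 0.3],   # dark and quiet
    [440.0, 1000.0, 0.6],   # somewhere in between
])

# A small neural-network regressor learns a mapping from any 6-D input
# to a 3-D output in one shot.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)

# A new, unseen pose gets an interpolated set of sound parameters.
print(model.predict([[0.2, 0.8, 0.3, 0.7, 0.2, 0.5]]))
```

Feed it a pose it has never seen and it interpolates: intermediate hand positions yield intermediate pitch, cutoff, and volume, which is exactly the many-to-many mapping described above.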

Source: Wekinator

We still have a lot to learn from computers.

This year, master Go player Lee Sedol squared off against AlphaGo, an AI developed by Google. Go is a game long considered nearly unmasterable by a computer because the moves and strategies are so complex and require "more elements that mimic human thought than chess." In a fashion similar to the famous 1997 match between Garry Kasparov and Deep Blue, AlphaGo came out victorious. The computer apparently made some questionable moves along the way, which the commentators took note of; a few moves later it became obvious the computer knew exactly what it was doing. It was actually teaching master Go players new ways to play the game. Similarly, machines may be capable of teaching us new approaches to artistic techniques, whether as new source material for digital art or, in the case of Deep Dream, as a better understanding of how a computer "sees."

You can choose how much control to give the machine.

Another interesting topic that came up was how much of the art is done by the computer versus the artist, with the machine as a collaborator. It is clear from the viral popularity of the Deep Dream images that outputs tend to look similar, depending on the model produced from training. In the case of Deep Dream, the outputs contain the strange images of dogs because the model was in fact trained on images of dogs; like the Baader-Meinhof phenomenon, once the network has learned dogs, it sees them everywhere. This is a system where much of the control has been given over to the machine. In fact, current deep learning techniques do not allow much of the user's input to affect the output of the system, which is why, after enough iterations, many of the images start to look similar.

On the other side of the spectrum would be something like Wekinator, where the training data is not so heavily weighted and much more noise and randomness can be introduced into the system. Depending on the output you wish to achieve, this may or may not be desirable, but the point is that it is up to the artist how much control to retain and how much to give to the machine. That said, it was the opinion of at least one panelist that we "do not need to value mathematical systems over tacit systems." In other words, emergent behavior and more abstract relationships between input and output are no less valid than a system with a 1:1 relationship, like a painter putting brush to canvas. If you accept this premise, the options for building a system to help you create your art are endless.

This post wasn't really written by a bot, but if a robot can write a novel, I'm sure it could jot down a few words in a blog post. Header image source: WallpaperCave.
