M WINGREN TAIKE PORTFOLIO

After getting bored one day listening to tracks the Spotify algorithm had selected for me, I wondered whether the whole process of creating and consuming music could be automated. To find out, I turned to a predictive-text model, typed “the name of my album is:”, and got back “Mood Tape.”
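
The predictive-text step can be reproduced with any off-the-shelf language model. The specific app and model used for Mood Tape aren’t named here, so the following is only a rough sketch using GPT-2 via the Hugging Face transformers library:

```python
# A minimal sketch of prompting a predictive-text model for an album name.
# GPT-2 stands in for whatever model was actually used.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "the name of my album is:"
result = generator(prompt, max_new_tokens=5, num_return_sequences=1)

# The continuation after the prompt becomes the candidate album title.
print(result[0]["generated_text"][len(prompt):].strip())
```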

Mood Tape’s music was generated with an AI app; my role was to select the timbres and decide when different sections began and ended. Track titles were also generated with predictive text. To create the album cover and artist image, I turned to GANs, or Generative Adversarial Networks. Cross-breeding generative models of concepts like “website,” “garden snake,” “puppy,” “flower,” and “bubble” produced imagery with an aesthetic of its own.
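
The cross-breeding step is essentially a blend of the class-conditioning vectors of a class-conditional GAN. Below is a minimal sketch of that idea using the pytorch-pretrained-biggan package; the package, the “soap bubble”/“daisy” class names, and the 50/50 mix are stand-ins for illustration, not the actual tool or recipe used for the album art:

```python
# Sketch: blend two ImageNet class vectors in BigGAN to "cross-breed" concepts.
# Class names are illustrative and must resolve to ImageNet labels.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample, save_as_images)

model = BigGAN.from_pretrained("biggan-deep-256")
truncation = 0.4

bubble = one_hot_from_names(["soap bubble"], batch_size=1)
flower = one_hot_from_names(["daisy"], batch_size=1)

# Mixing the one-hot class vectors conditions the generator on a hybrid concept.
mixed_class = torch.from_numpy(0.5 * bubble + 0.5 * flower)
noise = torch.from_numpy(truncated_noise_sample(truncation=truncation, batch_size=1))

with torch.no_grad():
    output = model(noise, mixed_class, truncation)

save_as_images(output, file_name="crossbreed")  # writes crossbreed_0.png
```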

Putting it all together, I now needed an army of bots to “listen” to Mood Tape, hype it up, and provide me with passive income. That part has yet to be figured out, but you can hear Mood Tape’s debut album “AI-Anm” here.

 

MOOD TAPE

2019

VR MUSIC PLAYGROUND

With the dematerialization of digitally created music, the only limit on how interfaces look, sound, and behave is our imagination. This project explores ways of playing music, with motion, in virtual space.

An open-world interface gives people the option to play with real (and unreal) physics to create sonic interactions. “Air drumming” lends itself to the VR environment: sounds in this world exist as objects, all of which can be activated by touch. Some of these objects float in the air and can be combined and played together to create multi-faceted sounds. Other sound-objects can be thrown or used in gravitational interactions. “Player objects” don’t produce sounds themselves; instead they activate sounds by bouncing, spinning, or being built into Rube-Goldberg-machine-like configurations.
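
As a conceptual sketch of that object model (in Python rather than a game engine, with hypothetical names; this is not the actual VR implementation), sound-objects carry a sample and respond to touch, while silent player-objects trigger whatever they collide with:

```python
# Conceptual sketch (hypothetical names, not the installation's code):
# sound-objects play when touched or struck; player-objects make no sound
# but trigger whatever they collide with.
from dataclasses import dataclass

@dataclass
class SoundObject:
    name: str
    sample: str              # path to an audio sample
    floats: bool = False     # floating objects can be grouped into chords
    def activate(self) -> None:
        print(f"play {self.sample} ({self.name})")

@dataclass
class PlayerObject:
    name: str
    def collide(self, target: SoundObject) -> None:
        # A player object is silent; the collision activates the target instead.
        target.activate()

# "Air drumming": touching floating objects plays them together as one gesture.
chord = [SoundObject("pad", "pad.wav", floats=True),
         SoundObject("bell", "bell.wav", floats=True)]
for obj in chord:
    obj.activate()

# A thrown ball bouncing through sound-objects, Rube-Goldberg style.
ball = PlayerObject("ball")
for obj in [SoundObject("drum", "kick.wav"), SoundObject("cymbal", "crash.wav")]:
    ball.collide(obj)
```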


 

Demo at the Helsinki XR Center

Aurora Portal

2019

A prototype for magnetism researcher Eija Tanskanen and the Sodankylä observatory, this "Aurora Portal" is a bespoke volumetric mist display designed to bring auroras into a gallery space as they occur.

synthetic selfie

What makes a human face? What makes your face? See a version of yourself as made by an AI system, and take a synthetic-selfie. Use your synthetic-selfie as a profile picture and see if people notice: a deepfake-selfie. How does it work? A generative model is trained to create portraits by looking at tens of thousands of photos; the software seeks you out in its vast internal space of human faces and then creates its own impression of you.
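
That search through the model’s “internal space” resembles GAN latent projection: a latent code is optimized by gradient descent until the generator’s output matches the camera image. The sketch below shows only the shape of that loop, with a tiny untrained stand-in generator and a plain pixel loss; the project’s actual face model and loss functions are not specified here:

```python
# Sketch of latent projection: optimize a latent code z so that G(z) matches a target image.
# The generator below is a tiny untrained stand-in; a real system would use a
# pretrained face generator (e.g. a StyleGAN-family model) and a perceptual loss.
import torch
import torch.nn as nn

generator = nn.Sequential(                  # stand-in for a pretrained face generator
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
)
for p in generator.parameters():            # only the latent code is optimized
    p.requires_grad_(False)

target = torch.rand(3 * 32 * 32) * 2 - 1    # stand-in for the webcam frame, in [-1, 1]

z = torch.zeros(64, requires_grad=True)     # latent code to be optimized
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.mean((generator(z) - target) ** 2)  # pixel loss; real systems add perceptual terms
    loss.backward()
    optimizer.step()

# generator(z) is now the model's "impression" of the target face.
print(f"final reconstruction error: {loss.item():.4f}")
```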

How are your unique features captured by this machine learning technology? Are you well represented by the images used to train the model? Technology is a mirror of culture, and in a project that is literally mirror-like, the data sets employed are directly reflected. Diverse representation in the training set is one key requirement for an AI model meant to mimic any human face it sees. Another is data privacy, specifically in relation to face tracking. One could use a synthetic-selfie as a mask to confuse tracking online. On the other hand, how will the software for this project use your likeness? Your image will not be retained, although the generative model of your face will be available online.

In an exhibition setting, visitors see a screen with a continuously morphing series of synthetic faces generated by the software. When the webcam detects your face, the deep learning model creates a generative model of it based on its training data. This uncanny "reflection" follows your movements and mimics your facial expressions. To capture a "selfie," you simply press a button and the image is saved to a website, where it can then be downloaded.
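
The installation’s capture pipeline isn’t published; as a rough sketch of the webcam detection and button-triggered capture described above, assuming OpenCV and a standard Haar-cascade face detector:

```python
# Hypothetical sketch of the exhibition loop's input side:
# watch the webcam, detect a face, and save a snapshot when a key is pressed.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    faces = detector.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
    # In the installation, a detected face would drive the generative "reflection";
    # here we just draw a box around it.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 2)
    cv2.imshow("reflection", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):          # "selfie" button: save the frame for upload
        cv2.imwrite("synthetic_selfie.png", frame)
    elif key == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```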