Ren + Bacon


Ren + Bacon is a generative adversarial network (GAN) that trains on digitally collaged images and generates new image content synthesizing the expressive elements of the training data. This project plays with the affordances of machine learning to generate visual representations of pain.
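
Under the hood, a GAN pits two networks against each other: a generator that fabricates images from random noise, and a discriminator that learns to tell fabrications apart from the training images. As a rough illustration of that adversarial loop (a minimal PyTorch sketch, not the project's actual code; the layer sizes and names are illustrative assumptions):

    import torch
    import torch.nn as nn

    # Illustrative dimensions, not the project's real settings
    LATENT_DIM, IMG_PIXELS = 100, 64 * 64 * 3

    # Generator: maps random noise to a flattened image
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 512), nn.ReLU(),
        nn.Linear(512, IMG_PIXELS), nn.Tanh(),
    )

    # Discriminator: scores an image as real (1) or generated (0)
    discriminator = nn.Sequential(
        nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
        nn.Linear(512, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_images):
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Train the discriminator on real vs. generated images
        fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
        d_loss = (loss_fn(discriminator(real_images), real_labels)
                  + loss_fn(discriminator(fake_images), fake_labels))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2) Train the generator to fool the discriminator
        noise = torch.randn(batch, LATENT_DIM)
        g_loss = loss_fn(discriminator(generator(noise)), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

Training alternates these two updates until the generator's output starts to carry the texture of the training set, here, the expressive marks of the collages.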


Reflection

This process was a crazy-making cycle: learning how to use virtual machines on various cloud services, testing GANs from git repositories to see which service made the most sense for the code, watching YouTube tutorials, building training libraries, deploying tests, and repeating it all.


The iteration shown here was adapted from a model coded for generating computer-vision-imagined Pokemon (the results were hilarious and beautiful).

This particular iteration was a first technological step for me. It didn't yield legible results relative to my initial intent, but I plan to continue the experiment by manipulating other open-source GANs. Next steps are:

  • train on a bigger image library

  • train for longer (the current iteration trained for 5,000 epochs; see the sketch below)

  • train using Robbie Barrat’s nude painting GAN, and then iterate on that

I sank a whole lot of time into trying to make that last step work in time for the project delivery but didn’t make it. The model is coded in Lua, and I was trying to deploy it on Google Cloud, both of which were completely new to me. But I’m really excited and kind of obsessed with getting it up and running this summer. This experience has been the most joyful I've ever felt about a learning process that otherwise made me feel like a big frustrated Godzilla monster, and it definitely sparked a new obsession with this technology.
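
To make the first two steps concrete, here is where dataset size and training length enter the loop, continuing the PyTorch sketch above (the folder path, image size, and batch size are illustrative assumptions, not the project's actual configuration):

    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    EPOCHS = 5000           # "train for longer" means raising this number
    DATA_DIR = "collages/"  # hypothetical path; a bigger image library goes here
                            # (ImageFolder expects images grouped in subfolders,
                            # e.g. collages/all/)

    dataset = datasets.ImageFolder(
        DATA_DIR,
        transform=transforms.Compose([
            transforms.Resize((64, 64)),
            transforms.ToTensor(),
            transforms.Normalize([0.5] * 3, [0.5] * 3),  # match the generator's Tanh range
            transforms.Lambda(torch.flatten),            # flatten to match the sketch above
        ]),
    )
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    for epoch in range(EPOCHS):
        for real_images, _ in loader:
            train_step(real_images)  # adversarial update from the earlier sketch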


Materials Used:


Additional Inspiration: