Abstract Paintings with StyleGAN
Do androids dream of digital abstract paintings?
AI algorithms help us with many tasks, such as predicting future sales, classifying diseases, and translating sentences. These algorithms are built mainly on linear algebra, probability, and statistics. The question is: can we create AI models with a hint of creativity using the same instruments? Many artistic activities involve a certain level of creativity, such as writing, composing, drawing, and painting. In this post, we investigate the creative capacity of an AI model that dreams of abstract paintings.
What is abstract art?
Abstract art uses shapes, colors, forms, and gestural marks to evoke a feeling instead of depicting visual reality. It discards real-world objects as components and instead attempts to invoke a response directly. Since the early 1900s, abstract art has formed a central stream of modern art. Abstract expressionism is the term applied to the new forms of abstract art developed by American painters such as Jackson Pollock, Mark Rothko, and Willem de Kooning in the 1940s and 1950s. It is often characterized by gestural brush-strokes or mark-making, and by an impression of spontaneity that makes it emotionally appealing. Below, see an abstract painting and examine its characteristics in terms of style and expression.
The Dataset
The dataset is the only source of information the model is exposed to. In this respect, it is very similar to how a person learns painting by examining a limited number of paintings. In this project, the Painter by Numbers dataset and its subsets (specific genres) are used. An older project uses all genres to generate artworks without targeting a specific genre. We preprocess the paintings with size-filtering and padding steps. The final dataset contains ∼30k paintings from various genres and ∼3k paintings from the abstract art category.
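The padding step can be sketched as follows. The exact preprocessing pipeline is not spelled out in the post, so this is a minimal illustration of padding an image to a square canvas; the function name and fill color are assumptions.

```python
import numpy as np

def pad_to_square(img: np.ndarray, fill: int = 255) -> np.ndarray:
    """Pad an H x W x C image with a constant color so it becomes square.

    Centered padding keeps the painting in the middle of the canvas;
    the white fill value is an illustrative choice.
    """
    h, w, c = img.shape
    side = max(h, w)
    canvas = np.full((side, side, c), fill, dtype=img.dtype)
    top = (side - h) // 2
    left = (side - w) // 2
    canvas[top:top + h, left:left + w] = img
    return canvas

# Example: a 2 x 4 RGB image padded to a 4 x 4 square.
img = np.zeros((2, 4, 3), dtype=np.uint8)
square = pad_to_square(img)
print(square.shape)  # (4, 4, 3)
```

Resizing every padded square to the model's fixed input resolution would then complete the preprocessing.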
What is GAN?
The main aim of Generative Adversarial Networks (GANs) is to generate data from noise. The data domain may be images, music, text, etc. A GAN is composed of two networks: one for generating new data, and another for discriminating between generated and real data. During training, the generator and discriminator try to outperform each other; this competition is the adversarial relationship between the two parts.
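The alternating two-player training can be illustrated with a deliberately tiny 1-D example. Everything here (linear generator, logistic discriminator, learning rates) is a toy sketch of the adversarial loop, not how an image GAN is actually trained:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: real data ~ N(3, 1), generator G(z) = a*z + b,
# discriminator D(x) = sigmoid(w*x + c). All parameters and the
# learning rate are illustrative choices.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    x = rng.normal(3.0, 1.0, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)   # latent noise
    g = a * z + b                     # generated (fake) samples

    # Discriminator step: gradient ascent on log D(x) + log(1 - D(G(z))).
    p_real, p_fake = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * (np.mean((1 - p_real) * x) - np.mean(p_fake * g))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: gradient ascent on log D(G(z)) (non-saturating loss).
    p_fake = sigmoid(w * g + c)
    a += lr * np.mean((1 - p_fake) * w * z)
    b += lr * np.mean((1 - p_fake) * w)

# The generator's offset b drifts toward the real data's mean.
print(round(b, 2))
```

The same structure, with deep networks in place of the linear models, is what a real GAN training loop does: each player's update uses the other's current parameters as its opponent.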
The training objective of GANs is equivalent to minimizing the Jensen-Shannon divergence between the generated and true data distributions. Therefore, in theory, the generator learns the true data distribution.
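Concretely, the minimax objective introduced in the original GAN paper is

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] +
  \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right],
```

and substituting the optimal discriminator $D^{*}(x) = p_{\mathrm{data}}(x) / (p_{\mathrm{data}}(x) + p_g(x))$ gives

```latex
V(D^{*}, G) = -\log 4 + 2 \cdot \mathrm{JSD}\!\left(p_{\mathrm{data}} \,\|\, p_g\right),
```

so minimizing over $G$ indeed minimizes the Jensen-Shannon divergence, which is zero exactly when $p_g = p_{\mathrm{data}}$.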
The StyleGAN Model
This artificial intelligence art project utilizes the Style Generative Adversarial Network (StyleGAN), developed by NVIDIA researchers. StyleGAN extends the traditional GAN architecture with drastic changes to the generator model: it learns an intermediate (disentangled) latent space, uses the disentangled latent vector to control style in the generator, and injects Gaussian noise at multiple layers of the generator as a source of variation. In other words, it achieves unsupervised separation of high-level attributes (e.g., scenery and theme when trained on a painting dataset) from stochastic variation in the generated images (e.g., brushstrokes), and it enables scale-specific control of the synthesis. For further reading, check out ProGAN and Wasserstein GAN.
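The three generator changes can be sketched in miniature. This is an illustrative toy, not NVIDIA's implementation: real StyleGAN uses 512-d latents, convolutional feature maps, and learned noise strengths, while the dimensions and weights below are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (real StyleGAN: 512-d latents, conv feature maps).
z_dim, w_dim, feat = 8, 8, 16

# Mapping network: a small MLP that turns z into the intermediate
# (disentangled) latent w.
M1 = rng.normal(size=(z_dim, w_dim))
M2 = rng.normal(size=(w_dim, w_dim))
def mapping(z):
    return np.maximum(z @ M1, 0.0) @ M2

# Learned affine transform that turns w into per-channel styles.
A = rng.normal(size=(w_dim, 2 * feat))
noise_strength = 0.1

def synthesis_layer(x, w, rng):
    # Normalize the features, then scale and shift them with the style
    # derived from w (an AdaIN-like operation).
    x = (x - x.mean()) / (x.std() + 1e-8)
    style = w @ A
    scale, shift = style[:feat], style[feat:]
    x = x * (1.0 + scale) + shift
    # Per-layer Gaussian noise: the source of fine stochastic
    # variation (e.g., brushstroke-level detail).
    return x + noise_strength * rng.normal(size=feat)

z = rng.normal(size=z_dim)
w = mapping(z)
out = synthesis_layer(np.ones(feat), w, rng)
print(out.shape)  # (16,)
```

Because the style comes from w while the noise is drawn fresh at every layer, running the layer twice with the same w yields images that share their high-level attributes but differ in fine detail.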
Throughout the project, multiple models are trained. The most appealing results come from a model first trained on paintings of all genres, then fine-tuned on abstract paintings via transfer learning. This approach helps preserve composition in the generated images.
The artworks are selected by tweaking the disentangled latent vector. I let you decide whether or not androids dream of digital abstract paintings.
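One simple way to explore the space of paintings is to interpolate between two intermediate latent vectors and feed each in-between vector to the generator. The latents below are random stand-ins for vectors produced by a trained mapping network:

```python
import numpy as np

def lerp(w_a, w_b, t):
    """Linear interpolation between two intermediate latent vectors.

    With a trained StyleGAN, feeding each interpolated w to the
    synthesis network yields a smooth morph between two paintings.
    """
    return (1.0 - t) * w_a + t * w_b

rng = np.random.default_rng(1)
w_a, w_b = rng.normal(size=512), rng.normal(size=512)

# Seven latents morphing from one painting's style to another's.
frames = [lerp(w_a, w_b, t) for t in np.linspace(0.0, 1.0, 7)]
print(len(frames), frames[0].shape)  # 7 (512,)
```

Nudging individual components of w instead of blending whole vectors gives finer control, since the disentangled space ties directions to high-level attributes.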