
GAN-aided creation and blurry authorship

Ganbreeder is a “massively collaborative creative tool and network” developed by Joel Simon. The proposition is to create images by playing with the “genes” of a generative adversarial network (GAN) [1].


The interface is minimal: after creating an account, you are invited to pick an existing “image” or start one from scratch, edit its genes, weight them, or mix two existing configurations.

Above, a series of images showing the evolution of a configuration, from left to right. The strange blobfish of the first image is a mixture of these images:

The blobfish image is a child of an image by Mark Grim (see below), which already contained these genes: assault rifle, bell cote, hard disc & rifle.
The blobfish has different weights and more genes: shield, tank, hay & gyromitra have been added.
The lineage of the image is accessible via the diagram icon next to the image.
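Conceptually, each “gene” can be thought of as a weighted entry in the GAN's class-conditional vector, and breeding two images as blending those vectors before the network decodes the result. The sketch below illustrates that idea only; the gene names are taken from the example above, but the class indices, function names, and use of NumPy are illustrative assumptions, not Ganbreeder's actual code:

```python
import numpy as np

# Hypothetical mapping from gene names to class indices
# (indices here are made up for illustration).
GENES = {"assault rifle": 413, "bell cote": 442, "hard disc": 592,
         "rifle": 764, "shield": 787, "tank": 847, "hay": 958}

def genes_to_class_vector(weights, n_classes=1000):
    """Turn a {gene_name: weight} dict into a dense class vector."""
    v = np.zeros(n_classes)
    for name, w in weights.items():
        v[GENES[name]] = w
    return v

def crossover(parent_a, parent_b, mix=0.5):
    """Blend two class vectors; the GAN would decode the result to an image."""
    return (1 - mix) * parent_a + mix * parent_b

# Breed a child from two weighted gene configurations.
a = genes_to_class_vector({"assault rifle": 0.8, "bell cote": 0.4})
b = genes_to_class_vector({"shield": 0.6, "tank": 0.5})
child = crossover(a, b, mix=0.5)
```

Editing a gene's weight then simply moves the configuration through the GAN's input space, which is why small changes produce the gradual morphing visible in the image series.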


The question of “who is the author of the image” is problematic here. First of all, you cannot add images of your own to the process; only a predefined set of genes is accessible, and these genes are themselves GAN-generated images. The process has been built upon TensorFlow, turned into a web service thanks to Node.js & a PostgreSQL database, all of them open-source tools.
Another important aspect of the proposition is that new series can be based on images bred by anybody on the website, without any restriction! This kind of practice is common on developer platforms such as GitHub or GitLab. What’s special about Ganbreeder is that it applies this way of collaborating to image creation, allowing anybody to fork the work of someone else and deform it. Obviously, the whole site system is available on Git under the GPL v3 license, closing the loop by ensuring that the tool AND the results are and will stay libre.

When you go to the download interface, a license is attached to the different resolutions:

  • CC BY 4.0 for thumbnails;
  • CC0 for high resolution (1024×1024, small high res);
  • And an unclear “Upscaling included” when printed.

Computer-aided creation

Ganbreeder also proposes a new approach to collaborative image creation. Via a simple interface controlling a complex process, Joel intimately merges the highly statistical approach of a computer with human intuition. As a user, you cannot fully grasp the influence of each gene and its weight, and the computer processing the image has no consciousness of why it is doing it. Program and human brain genuinely collaborate in the creation of the images, instead of competing for control or victory. There is no clear goal here, and the way Joel has architected the breeder invites us to rethink what we believe the work of a creative professional is, or how it can be different with this kind of algorithm to help you.
Garry Kasparov did something similar 20 years ago. After losing against Deep Blue in 1997 [2], he conceived a blended approach to computers in chess: instead of confronting humans with computers, he built human-computer teams competing with other human-computer teams, humans, or computers alone.
He called this Advanced Chess or Centaur Chess [3].
Like Kasparov, Joel is not trying to beat the computer but to imagine a much smarter approach to the technology: let’s work together with computers on what they are good at, which makes the human part even more important and visible!



UI for rotation editing #1

I wanted to enable rotation editing via a simplified 2D UI for futari-app. I started with the most obvious approach: 3 concentric circles (one per XYZ axis).

Even if the design was cool and super simple, there was a huge problem with this widget: NOTHING is circular in the UI! Obviously, this only became apparent once the graphical and functional research was nearly over.

I decided to trash the whole design, start from scratch, and rebase the widget on an element already present in the interface: the color picker. By representing the rotation as a 2D gradient, it is easy to modify two axes of rotation at the same time. The only missing parts were the plane selection, the display of axis constraints, and the current and target values (the rotation will be smoothed on the object).
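The mapping I have in mind can be sketched as follows, assuming the picker reports normalized (u, v) coordinates in [0, 1] and the selected plane decides which two Euler axes are driven. The plane table, ranges, and function names are illustrative assumptions, not the actual futari-app code:

```python
import math

# Which two Euler axes each picker plane drives (illustrative choice).
PLANES = {"XY": (0, 1), "XZ": (0, 2), "YZ": (1, 2)}

def picker_to_rotation(u, v, plane, current=(0.0, 0.0, 0.0)):
    """Map normalized picker coords (u, v) in [0, 1] to target Euler
    angles in radians, leaving the third (constrained) axis untouched.
    The center of the gradient means no rotation; the edges map to +/- pi."""
    target = list(current)
    ax_a, ax_b = PLANES[plane]
    target[ax_a] = (u - 0.5) * 2.0 * math.pi
    target[ax_b] = (v - 0.5) * 2.0 * math.pi
    return tuple(target)

# Dragging to the right edge of the XY gradient rotates +90° around X.
print(picker_to_rotation(0.75, 0.5, "XY"))
```

The returned tuple is the *target* value; the object itself would interpolate from the current rotation toward it, which is where the smoothing mentioned above comes in.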

The visual impact is better. It is confusing at first sight, but I’m quite sure it will be intuitive to use and will remove the usual difficulty of setting rotations (most 3D artists use shortcuts instead of the UI to rotate their objects, as it is always tricky to click on the right axis). Starting the implementation now.


B4D – blender for graphic designers

Ready with the notes for tomorrow’s quick training session, focused on font manipulation, SVG import/export, and rendering in Blender.




A small training on how to use fonts and curves in Blender, targeted at the graphic designers of OSP. The intent is also to evaluate the influence and interest of 3D software in print workflows.

This is an internal event.