Computer-generated images are everywhere, from television and cinema to computer games. Doing a half-hearted job of them, though, can be worse than not attempting them at all. As MIT researcher and former Aalto doctoral student Miika Aittala points out, “Each detail on the screen must be in order so that nothing out of place sticks out. Viewers are really good at noticing if the graphics are done wrong.”
This requirement that every computer-generated item in a scene look convincing is why the credits at the end of films have grown substantially longer, especially the sections listing everyone responsible for the visual effects. Automating the laborious process of making computer-generated objects look “just right” would let CGI artists spend more time actually creating art rather than making their creations look passable. To achieve this goal, researchers in Professor Jaakko Lehtinen’s group in the Department of Computer Science at Aalto have turned their A.I. algorithms to the problem.
One of the common ways a computer-generated image gives itself away as fake is when its surface looks unconvincing. In real life, the way the surface of an object appears depends on how light interacts with the material, and if the computer gets this wrong the object on screen may appear either too shiny or too dull.
The reason surfaces are hard to perfect is that their appearance is dominated by microscopic imperfections: the lumps and bumps too small to perceive with the naked eye, but which together give each surface its distinct appearance. Currently, artists creating computer-generated replicas have to manually adjust a variety of unintuitive parameters that simulate properties such as bumpiness until the surface looks right. If a computer could predict what these parameters should be from a sample image, the whole process would be much quicker. For example, if an artist wants a CGI leather briefcase in a scene, it would be ideal to feed the computer a few images of a leather surface and have it generate the briefcase’s appearance directly from those images. Better still, the artist could upload simple phone photos without having to worry about high resolution or controlled lighting.
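To make the idea of these parameters concrete, the sketch below shows the kind of per-surface quantities a typical physically based material model exposes and how an artist might nudge one of them by hand. The names, shapes and adjustment function are illustrative assumptions, not the specific parameterisation used in the Aalto research.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SurfaceMaterial:
    """A simplified, hypothetical set of per-pixel material maps.

    Production material models differ, but most expose quantities
    along these lines, which artists currently tune by hand.
    """
    diffuse_albedo: np.ndarray   # base colour, shape (H, W, 3)
    specular_albedo: np.ndarray  # strength of mirror-like reflection, (H, W, 3)
    roughness: np.ndarray        # 0 = polished, 1 = matte, shape (H, W)
    normal_map: np.ndarray       # per-pixel surface orientation ("bumpiness"), (H, W, 3)


def scale_bumpiness(material: SurfaceMaterial, factor: float) -> SurfaceMaterial:
    """Exaggerate or flatten the bumps by scaling how far each normal
    deviates from a perfectly flat surface, then re-normalising."""
    flat = np.array([0.0, 0.0, 1.0])
    bumped = flat + factor * (material.normal_map - flat)
    bumped /= np.linalg.norm(bumped, axis=-1, keepdims=True)
    return SurfaceMaterial(material.diffuse_albedo, material.specular_albedo,
                           material.roughness, bumped)
```

An artist would typically apply a tweak like this, re-render the object, eyeball the result and repeat, which is exactly the loop the researchers want to shortcut.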
To solve this, the researchers have been training a machine learning algorithm on a series of images of real-world surfaces paired with computer-generated objects bearing the same surface. The computer works out which parameters make the CGI object a good match for the real-world surface by predicting what the parameters should be and then checking whether its predictions match the values of the real-world example. Once the computer can predict these parameters correctly, it can start producing CGI models for completely new real-world photos, such as the photos of leather in the examples above. These results, including the generation of convincing CGI surfaces from simple phone images, continue to influence Dr Aittala’s current research.
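As a rough illustration of that predict-and-check loop, the sketch below shows a tiny network that maps a photo to a flat vector of material parameters and is penalised when its predictions disagree with the known values for that example. This is a minimal sketch under assumed names and shapes, not the actual network, dataset or code used in the research.

```python
import torch
import torch.nn as nn


class MaterialPredictor(nn.Module):
    """Hypothetical convolutional network: surface photo in, material parameters out."""

    def __init__(self, num_params: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, photo: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(photo).flatten(1))


def train_step(model, optimizer, photos, true_params):
    """One training step: predict parameters from photos, then compare
    the predictions with the known values for those examples."""
    predicted = model(photos)
    loss = nn.functional.mse_loss(predicted, true_params)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


model = MaterialPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Stand-in data: a batch of surface photos and their matching parameter vectors.
photos = torch.rand(8, 3, 64, 64)
true_params = torch.rand(8, 4)
print(train_step(model, optimizer, photos, true_params))
```

After enough such steps the model can be given a photo it has never seen and asked to output parameters directly, which is the behaviour described above for new real-world images.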
Neural networks and machine learning have begun radically changing many areas of human-computer interaction in unpredictable ways, and computer art is no exception. “From a researcher’s perspective, things are happening rapidly and in entirely new ways. Neural networks allow us to tackle problems that were difficult to grasp in any way earlier. The operating model as a whole will start to become clearer once neural networks have been utilised for a sufficiently long time,” says Dr Aittala.