News

Artificial intelligence brings efficiency to processing pictures and sound

Deep neural networks enable the opening of image layers and distinguishing between audio signals.

Artificial intelligence researchers have found a new way of approaching a visual reasoning principle called perceptual grouping. A robot learned to group its observations meaningfully in an unsupervised fashion, that is, without being taught any grouping criteria separately.

“When a robot sees pictures, it learns not only to distinguish the independent parts of a picture, but also to combine the parts that belong together into wholes and, if necessary, to fill in missing parts of the picture. For example, a household robot could learn to navigate around furniture and other obstacles and to work out which objects are situated behind others. The task set for the household robot could be to take hold of a mat whose edges are visible from under the edges of a sofa. To carry out the task, the robot has to understand that the two visible pieces of mat are parts of one complete mat, and that it is enough to take hold of the mat from one side of the sofa”, explains Antti Rasmus, who is carrying out the research for his doctoral dissertation.

Unsupervised grouping of observations has so far received little research attention, but it could be used, for example, in image processing to open different image layers and to select which layers form the final image. This property makes it easy, for instance, to remove disturbing elements from an image.
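The layer idea can be pictured as giving each pixel a soft membership in one of several groups; an unwanted layer is then simply left out when the image is recomposed. The sketch below is a hypothetical illustration, not code from the research itself, and it assumes the grouping masks have already been produced by some model.

```python
import numpy as np

def compose(masks, layers, keep):
    """Compose an image from the selected layers only.

    masks:  (K, H, W) soft group masks, summing to 1 over the K groups.
    layers: (K, H, W) per-layer pixel values.
    keep:   iterable of layer indices to include in the output.
    """
    return sum(masks[k] * layers[k] for k in keep)

# Toy example: a 2x2 background layer plus a "disturbing element"
# that occupies one corner of the picture.
masks = np.array([[[1., 1.], [1., 0.]],    # group 0: background
                  [[0., 0.], [0., 1.]]])   # group 1: the element
layers = np.array([[[5., 5.], [5., 5.]],   # background pixel values
                   [[9., 9.], [9., 9.]]])  # element pixel values

full = compose(masks, layers, keep=[0, 1])  # image with the element
clean = compose(masks, layers, keep=[0])    # element layer removed
```

Dropping group 1 from `keep` removes the element's contribution while leaving the background untouched.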

“The property can also be utilised in noisy situations, when there is a need to concentrate on only one sound. In this case the robot can separate one audio signal from another”, observes Rasmus.
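In audio terms, the same grouping idea amounts to estimating, for each moment of the mixture, how much of the signal belongs to each source, and then applying those weights as masks. The following is a minimal hypothetical sketch under that assumption; the mask values here are hand-written, whereas in practice they would be learned.

```python
import numpy as np

def separate(mixture, masks):
    """Split a mixed signal into sources using soft masks.

    mixture: (T,) samples of the mixed signal.
    masks:   (K, T) nonnegative weights per source, summing to 1
             at each sample.
    Returns a (K, T) array of estimated source signals.
    """
    return masks * mixture  # broadcasting applies each mask to the mixture

# Toy mixture of two sources that happen not to overlap in time.
s1 = np.array([1., 1., 0., 0.])
s2 = np.array([0., 0., 2., 2.])
mix = s1 + s2
masks = np.array([[1., 1., 0., 0.],   # weights for source 1
                  [0., 0., 1., 1.]])  # weights for source 2
est = separate(mix, masks)
```

With ideal masks the two estimated rows recover the original sources exactly; real masks are soft, so the separation is approximate.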

Deep neural networks, which have previously required large amounts of data, learn more efficiently with the new visual reasoning. Every image contributes more information to the robot's learning task, so each individual image is used more efficiently and fewer images are needed than before.

“Movement is also a strong clue for the robot about things that belong together, because parts that are linked to one another always move in the same direction. For example, the robot finds it easier to spot a dog behind a fence when the dog starts to move”, he adds.
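The motion cue can be illustrated very simply: pixels whose displacement between two frames is the same are candidates for the same group. This toy sketch (not the method used in the research) clusters pixel coordinates by their shared motion vector.

```python
def group_by_motion(flow):
    """Group pixel coordinates by shared motion vector.

    flow: dict mapping (row, col) -> (dy, dx) displacement
          between two consecutive frames.
    Returns a dict mapping each motion vector to the list of
    pixels that share it.
    """
    groups = {}
    for pixel, vector in flow.items():
        groups.setdefault(vector, []).append(pixel)
    return groups

# Static fence pixels vs. a dog moving one pixel to the right.
flow = {(0, 0): (0, 0), (0, 1): (0, 0),   # fence: no motion
        (1, 0): (0, 1), (1, 1): (0, 1)}   # dog: moves right
groups = group_by_motion(flow)
```

Once the dog moves, its pixels share a motion vector that differs from the static fence, so the two sets fall into separate groups.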

The research is being carried out by Antti Rasmus, Mathias Berglund and Tele Hotloo Hao from the Department of Computer Science and The Curious AI Company, Klaus Greff and Jürgen Schmidhuber from IDSIA, the Swiss research laboratory that specialises in artificial intelligence, and The Curious AI Company's CEO Harri Valpola. The research is part of Mr Rasmus' and Mr Berglund's doctoral research.

More information:

Antti Rasmus
Doctoral student
Aalto University, Department of Computer Science
[email protected]

Article: Tagger: Deep Unsupervised Perceptual Grouping

