March 7, 2019 08:45 PM

Researchers Are Training Image-Generating AI With Fewer Labels

An anonymous reader shares a report: Generative AI models have a propensity for learning complex data distributions, which is why they're great at producing human-like speech and convincing images of burgers and faces. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply. The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper [PDF] published on the preprint server arXiv.org ("High-Fidelity Image Generation With Fewer Labels"), they describe a "semantic extractor" that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled images. These self- and semi-supervised techniques together, they say, can outperform state-of-the-art methods on popular benchmarks like ImageNet.

"In a nutshell, instead of providing hand-annotated ground truth labels for real images to the discriminator, we ... provide inferred ones," the paper's authors explained. In one of several unsupervised methods the researchers propose, they first learn a feature representation -- an automatically discovered encoding of the raw data -- for a target training dataset using the aforementioned semantic extractor.
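To make the idea concrete, here is a minimal, hypothetical Python sketch of semi-supervised label inference, not the paper's actual implementation: embeddings from a self-supervised feature extractor, a small hand-labeled subset, a simple classifier fit on that subset, and inferred labels for the whole training set, which would then stand in for ground-truth labels when conditioning the GAN discriminator. The synthetic data, the logistic-regression classifier, and all variable names are assumptions for illustration only.

```python
# Sketch: infer labels for a full training set from a small labeled subset.
# Assumes features were already produced by some self-supervised encoder
# (the paper's "semantic extractor"); here they are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in features for every training image (toy data, not real embeddings).
n_images, feat_dim, n_classes = 10_000, 128, 10
features = rng.normal(size=(n_images, feat_dim))

# Only a small fraction of images carry human annotations.
n_labeled = 500
labeled_idx = rng.choice(n_images, size=n_labeled, replace=False)
subset_labels = rng.integers(0, n_classes, size=n_labeled)  # toy labels

# Fit a lightweight classifier on the labeled subset's features ...
clf = LogisticRegression(max_iter=1000).fit(features[labeled_idx], subset_labels)

# ... then infer labels for the entire training set. These inferred labels,
# rather than hand-annotated ones, would be fed to the conditional GAN
# discriminator during training.
inferred_labels = clf.predict(features)  # shape: (n_images,)
print(inferred_labels[:10])
```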

Read more of this story at Slashdot.


Original Link: http://rss.slashdot.org/~r/Slashdot/slashdot/~3/MGlsuMHSdSM/researchers-are-training-image-generating-ai-with-fewer-labels


Slashdot

Slashdot was originally created in September 1997 by Rob "CmdrTaco" Malda. Today it is owned by Geeknet, Inc.
