A team of researchers from the State University of New York at Albany, the State University of New York at Buffalo and Keya Medical has found a common flaw in computer-generated faces that can be used to identify them.
The group has written a paper describing its findings and uploaded it to the arXiv preprint server.
Over the past few years, deepfake pictures and videos have been in the news as amateur and professional editors alike have created images and videos depicting people doing things they never actually did.
Less widely reported, but related, is the growing use of computer-generated images of people who look human but do not actually exist.
Such images are created using generative adversarial networks (GANs), and they have reportedly begun showing up on fake social media profiles, enabling catfishing and other sorts of nefarious activity.
GANs are a type of deep-learning technology in which a neural network is trained on images of people to learn what human heads and faces look like, and then generates new faces from scratch.
The output can be thought of as an average of all the faces the network studied. Each generated face is then passed to a second neural network that tries to determine whether it is real or fake.
Faces deemed fake are sent back for revision. This process continues for a number of iterations, with the resulting images growing ever closer to realism.
At some point, the images are deemed finished. But the process is not perfect, as the researchers behind this new effort report.
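The generate-judge-revise loop described above can be sketched schematically. The toy below stands in for image generation with one-dimensional numbers: a linear "generator" proposes samples, a fixed hand-written "critic" scores how distinguishable they are from real data, and proposals that fool the critic better are kept. In a real GAN both networks are trained jointly with gradients; every name and number here is illustrative, not the researchers' setup.

```python
import random

random.seed(0)

# "Real" data the generator must imitate: samples from N(4.0, 0.5).
# Stand-in critic: scores how far a batch's mean and variance are from
# the real distribution (0 = indistinguishable). A real GAN trains this
# discriminator network too; here it is fixed for clarity.
def critic(batch):
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return abs(mean - 4.0) + abs(var - 0.25)

# Toy generator: a linear map of noise, g(z) = a*z + b.
def generate(a, b, n=256):
    return [a * random.gauss(0.0, 1.0) + b for _ in range(n)]

# Adversarial-style loop: propose a revision of the generator's
# parameters and keep it only if it fools the critic better
# (a random-search stand-in for gradient updates).
a, b = 1.0, 0.0
best = critic(generate(a, b))
for _ in range(2000):
    na = a + random.gauss(0.0, 0.2)
    nb = b + random.gauss(0.0, 0.2)
    score = critic(generate(na, nb))
    if score < best:  # revision accepted: closer to "realism"
        a, b, best = na, nb, score

# b drifts toward the real mean (4.0) and the critic score shrinks.
print(best)
```

The same feedback structure, only with convolutional networks and gradient descent in place of the toy pieces, is what pushes GAN faces toward photorealism.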
Using software they wrote, the researchers found that most GANs tend to create pupils that are not perfectly round, which, they note, can serve as a marker of computer-generated faces.
The researchers note that in most cases, users can simply zoom in on the eyes of a person they suspect might not be real to spot the pupil irregularities.
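One simple way to quantify how round a pupil is uses the standard circularity metric, 4πA/P², which is 1.0 for a perfect circle and lower for irregular shapes. The sketch below applies it to synthetic contours; it is a generic shape measure chosen for illustration, not the detector the researchers actually built.

```python
import math

def circularity(points):
    """Circularity of a closed contour: 4*pi*area / perimeter^2.
    A perfect circle scores 1.0; irregular shapes score lower."""
    n = len(points)
    area = 0.0
    perimeter = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0                # shoelace formula
        perimeter += math.hypot(x1 - x0, y1 - y0)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / perimeter ** 2

def contour(rx, ry, wobble=0.0, n=360):
    """Sample an ellipse-like pupil boundary; `wobble` adds the kind of
    irregularity reported in GAN-generated eyes (synthetic example)."""
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        r = 1.0 + wobble * math.sin(7 * t)
        pts.append((rx * r * math.cos(t), ry * r * math.sin(t)))
    return pts

round_pupil = circularity(contour(10.0, 10.0))             # near 1.0
odd_pupil = circularity(contour(10.0, 7.0, wobble=0.08))   # noticeably lower
print(round_pupil, odd_pupil)
```

A detector along these lines would segment the pupil from a face image first, then flag faces whose pupil contours fall below some circularity threshold.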
They note that it should not be difficult to write software to identify such errors, and for social media sites to use it to remove such content.
Unfortunately, they add, now that these irregularities have been identified, those creating fake images can simply add a step that ensures the roundness of pupils.