
OpenAI Teases AI-Detector for Images

January 30th, 2024

OpenAI, a key player in the AI game, has announced a series of actions aimed at preventing malicious use of its tools. The upcoming election year in the United States is a major motivation for these actions. Among them, OpenAI is experimenting with an image provenance classifier, which should help secure the integrity of the electoral process.

Generated imagery is flooding the visual media world. This trend holds immense potential, for better or for worse, and OpenAI is one of the most prominent players in the field. The company is responsible for ChatGPT, Dall-E, and more. The year 2024 may shape up to be among the most influential years for AI, but it is also an election year in the United States. As AI-generated imagery becomes more accessible and less distinguishable from authentic, photon-based imagery, a conflict is emerging: as much as generative algorithms contribute to the democratization of visual storytelling, they pose a major threat to the authenticity of visual information.

OpenAI's prospects for 2024

In a recent blog post, the company clarified some of its tactics and strategies for combating fraudulent conduct surrounding worldwide elections. The company promises new tools that will try to prevent impersonation. (It was less successful last year with a tool that was supposed to identify AI-generated writing, which was quietly shut down.) Abuse of personalized persuasion is also on the roadmap, as the company tries to determine how potent its tools may be in this field. These measures are mostly relevant to written content.

You probably won’t fall for this one… Created by Dall-E 2 and Adobe Firefly

OpenAI – what about generated imagery?

OpenAI states that it is working on several image provenance efforts. The company has joined forces with the Coalition for Content Provenance and Authenticity (C2PA). Our loyal readers may remember this initiative, which we have covered in several articles and which includes some key players in the filmmaking business: Adobe, Canon, Nikon, Sony, Leica, and more. Leica was the first to launch a camera compliant with the initiative's guidelines, the M11-P, and Sony will soon join the fray with several existing cameras.

This effort should result in provenance metadata, known as Content Credentials, being embedded in every file originating from the company's image-generating app, Dall-E.
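For readers curious about what that means in practice: in JPEG files, C2PA manifests are carried in APP11 segments as JUMBF boxes. The sketch below is a heuristic illustration only, not OpenAI's implementation or an official C2PA tool; the function name has_c2pa_manifest and the simple substring check are our own assumptions. It scans a JPEG's header segments for such a box.

# Illustrative sketch: check whether a JPEG appears to carry a C2PA
# (Content Credentials) manifest. C2PA data lives in APP11 segments
# as JUMBF boxes; this is a heuristic, not signature verification.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # no SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                 # SOS: image data begins, stop scanning
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        # APP11 (0xEB) carries JUMBF boxes; C2PA boxes are labelled "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length                    # length includes its own 2 bytes
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))

Note that a positive result only means provenance metadata is present. Whether that metadata is authentic and unmodified can only be established by verifying the manifest's cryptographic signatures, which dedicated C2PA tooling handles.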

Dall-E provenance classifier

Another front, a bit more innovative and arguably more interesting, is the Dall-E provenance classifier. The tool is still in its experimental stage, but once completed, it should be able to identify imagery generated by Dall-E. How reliably it will do so is yet unclear. At this stage, the classifier also targets only Dall-E and cannot identify the output of other image generators, but if this sort of classification works, it may have a profound effect on the field.

A long road ahead

Though these measures seem like just a drop in the ocean, they point to a trend that may (and, in my opinion, should) gain momentum. A level of authenticity is required to make informed choices, and those stand at the core of the society in which I wish to live.

Do you believe such initiatives can turn the tide of fake news based on generated imagery? Would you consider buying a camera that supports these measures? Let us know in the comments.

Featured image credit: AI-generated with MidJourney by CineD.
