OpenAI has no release plans for its ‘highly accurate’ AI content detection technology
The company worries its detection method could stigmatize the use of AI writing tools among people who are not native English speakers.
OpenAI appears to be delaying the release of a new “highly accurate” tool that can detect ChatGPT-generated text, citing concerns that it could be circumvented and that it might discourage non-English speakers from using AI models to produce text.
In May, the company disclosed in a blog post that it was developing several methods to identify content generated specifically by its own products. On Aug. 4, The Wall Street Journal published an exclusive report suggesting that the tools’ release had been delayed over internal disagreements about the potential consequences.
In response to the WSJ’s report, OpenAI updated its May blog post with new details on the detection tools. In short, the company has not set a release date, even though at least one tool for determining text provenance is “particularly accurate and even effective against localized manipulation.”
However, the company maintains that malicious actors could still circumvent the detection system, and it is therefore reluctant to release it to the public.
In another passage, the company suggests that non-English speakers could be “stigmatized” for using AI products to write, in part because of an exploit in which English text is translated into another language to evade detection.
“Our research indicates that the text watermarking method has the potential to disproportionately affect certain groups, which is another significant risk we are currently considering. For instance, it could potentially stigmatize the utilization of AI as a valuable writing instrument for individuals who are not native English speakers.”
Although many products currently claim to detect AI-generated content, none has yet demonstrated a high degree of accuracy across general tasks in peer-reviewed research, to the best of our knowledge.
OpenAI’s system would be the first to rely on proprietary detection methods and invisible watermarking for content produced specifically by the company’s own models.
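OpenAI has not disclosed how its watermarking or detection works. As a rough illustration only, the sketch below shows the “green list” statistical watermarking idea from the academic literature, which is one way invisible text watermarks can be built: a hash of each token’s context deterministically splits the vocabulary into “green” and “red” halves, generation is biased toward green tokens, and a detector checks whether a suspicious text contains more green tokens than chance would allow. The function names (`is_green`, `detect_z_score`) are hypothetical and not OpenAI’s.

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (previous token, token) pair to assign it to the "green"
    # or "red" list. A watermarking generator would bias sampling toward
    # green tokens; the detector only needs to recompute this same hash.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def detect_z_score(tokens: list[str]) -> float:
    # z-score against the null hypothesis "unwatermarked text", under
    # which each bigram lands on the green list with probability 1/2.
    n = len(tokens) - 1
    if n < 1:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - n / 2) / math.sqrt(n / 4)
```

A large positive z-score suggests the text was produced by the watermarked sampler. This toy also hints at the translation exploit the article describes: translating the text into another language replaces every token, destroying the green/red statistics and defeating the detector.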