In a world where AI technology is developing at a breakneck pace, it is increasingly difficult to distinguish between human-made and machine-made content. OpenAI, the company behind the popular ChatGPT, has announced that it is working on adding an “imperceptible secret signature” to its AI-generated content in order to identify it and detect possible fraud. Such a signature could be used to detect exam cheating or convincing malicious content. However, academics and experts have expressed doubts about its effectiveness, as it might not work everywhere and could easily be circumvented by using synonyms.

Faced with this situation, how can we be sure of the veracity of the information we receive? What solutions exist for identifying AI-generated content? In this article, we will try to answer these questions and better understand the issues involved in distinguishing human work from machine work.

OpenAI develops a signature for ChatGPT

OpenAI, the company behind ChatGPT, has announced that it is working on adding an “imperceptible secret signature” to its AI-generated content.

This signature could be used to detect exam cheating or convincing malicious content. A prototype of the tool already exists.

However, scholars and experts have expressed doubts about its effectiveness, as it might not work everywhere and could easily be circumvented by using synonyms.

Why is this signature necessary?

Launched at the end of November 2022, ChatGPT can generate text in response to a wide range of prompts and adapt it instantly.

This ability can be used, for example, to write an essay in a given style or to find an error in a piece of code.

That said, ChatGPT can also be used to generate malicious content or to obscure the true origin of a text.

Universities are particularly concerned about academic cheating, and so OpenAI is looking for ways to identify ChatGPT-generated content.

What are the limits of this signature?

Scott Aaronson, a visiting researcher at OpenAI, declined to divulge the details of this signature. OpenAI revealed only that the signature, similar to a watermark for images, is part of a “bundle of solutions” the company is developing to identify content generated by ChatGPT or other AI-based text generators.
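OpenAI has not published how its signature works, but recent academic research gives a sense of what a text watermark of this kind could look like. The toy sketch below is not OpenAI’s scheme; it illustrates a “greenlist” watermark as described in public research: at each step, a pseudorandom half of the vocabulary, seeded by the previous token, gets a sampling boost, and a detector later counts how many tokens fall in their predecessor’s green set. The vocabulary, bias strength, and numbers are all invented for the illustration.

```python
import hashlib
import math
import random

# Toy vocabulary standing in for a real model's token set (invented for this example).
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]
GREEN_FRACTION = 0.5   # fraction of the vocabulary favoured at each step
BIAS = 2.0             # sampling-weight boost applied to "green" tokens

def green_set(prev_token: str) -> set:
    """Pseudorandomly split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def generate(n_tokens: int, rng: random.Random) -> list:
    """Sample tokens, nudging probability mass toward each step's green set."""
    out = ["the"]
    for _ in range(n_tokens):
        greens = green_set(out[-1])
        # Uniform base weights stand in for a real language model's probabilities.
        weights = [math.exp(BIAS) if tok in greens else 1.0 for tok in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out

def z_score(tokens: list) -> float:
    """Detect the watermark: count green-set hits and compare against chance."""
    hits = sum(tok in green_set(prev) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(variance)

rng = random.Random(0)
print(f"watermarked z = {z_score(generate(300, rng)):.1f}")                  # large positive
print(f"plain text  z = {z_score(['the'] + rng.choices(VOCAB, k=300)):.1f}")  # near zero
```

Note that detection depends on the exact tokens chosen: replacing words with synonyms removes them from their green sets, which is precisely why experts warn that paraphrasing could defeat such a signature.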

However, Jack Hessel, a researcher at the Allen Institute for Artificial Intelligence, pointed out that adding a signature would not completely solve the problem of identifying AI-generated content, as it would still be possible to “falsify” this signature.

What are the other possible solutions to identify AI-generated content?

Several academics and experts have suggested other ways to identify AI-generated content. Among these solutions:

  • Use language models to detect differences between human language and AI-generated language: This method could flag AI-generated content that tries to pass as human-written. However, it has its limits, as the detecting models may not be good enough to catch every difference between human and AI-generated text (a sketch of this approach follows this list).
  • Use Turing tests: These tests, proposed by Alan Turing in 1950, assess a machine’s ability to “pass as” a human being. However, this method also has its limits, as Turing tests can be fooled by high-quality AI-generated content.
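To make the first suggestion concrete, here is a minimal sketch of perplexity-based detection, assuming the Hugging Face transformers library and the public gpt2 checkpoint. The idea, used by several public detectors, is that machine-generated text often looks unusually predictable (low perplexity) to a language model; the threshold below is purely illustrative, and detectors of this kind remain unreliable in practice.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small public model; any causal language model would do.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return its next-token cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Illustrative threshold only: machine-generated text often scores *lower*
# perplexity (more predictable) than human text on the same topic.
THRESHOLD = 40.0

def looks_machine_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD

print(perplexity("The cat sat on the mat."))
```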

It is important to note that, for now, none of these solutions is perfect, and AI-generated content may well remain difficult to identify in the near future, whether by search engines such as Google or by human readers.

However, as technologies evolve, new solutions may be developed to help identify such automatically generated content.

In conclusion: OpenAI’s secret signature and the detection of AI-generated content

OpenAI has announced that it is working on adding an “imperceptible secret signature” to its AI-generated content to identify it and detect possible fraud.

However, academics and experts have expressed doubts about the effectiveness of this signature, which may not work everywhere and could be easily circumvented by using synonyms.

There are several other possible ways to identify AI-generated content, such as the use of language models, Turing tests, or CAPTCHAs.

However, none of these solutions is perfect, and AI-generated content may continue to be difficult to identify in the near future.