We will explore the problem of detecting text generated automatically by models such as GPT. These models can produce highly realistic text, so researchers have developed various ways to detect it. We will take a high-level look at the main techniques they have devised: watermarking, statistical heuristics, and classification models.
These techniques are not perfect, however: they can be circumvented, and we will see how. Finally, we will turn to the ethical considerations surrounding their possible misuse.