AI-generated text is overwhelming organizations – fueling a no-win “arms race” with AI detectors

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many of them were generated by artificial intelligence. As far as the editors could tell, many submitters had simply pasted the magazine’s detailed story guidelines into an AI model and sent in the results. And Clarkesworld was not alone: other fiction magazines also reported large numbers of AI-generated submissions.
This is just one example of a broader trend. The old system relied on the effort required to write a submission to keep volume manageable. Generative AI overwhelms that system because the humans on the receiving end can’t keep up.
It is happening everywhere. Newspapers are being inundated with AI-generated letters to the editor, as are academic journals. Lawmakers have been flooded with AI-generated public comments. Courts around the world are filling with AI-generated filings, especially from people representing themselves. AI conferences are swamped with research papers generated by AI. Social media is awash in AI-generated spam. In music, open-source software, education, investigative journalism and employment, it’s the same story.
As with Clarkesworld’s initial response, some of these organizations have closed their submission processes. Others have met the AI onslaught with defensive measures of their own, often involving counter-AI. Academic peer reviewers increasingly use AI to evaluate papers that may themselves have been generated by AI. Social media platforms turn to AI moderation. Court systems use AI to triage and process caseloads swollen by AI filings. Employers deploy AI tools to screen candidates’ applications. Teachers use AI not only for grading class papers and administering exams, but also as a feedback tool for students.
These are all arms races: rapid, aggressive iteration in which the same technology is applied to opposing purposes. Clearly, many of these arms races have harmful effects. Society suffers if the courts are filled with trivial cases fabricated by AI. There is also harm if the established measures of academic performance – publications and citations – accrue to the researchers most willing to fraudulently submit AI-written dissertations and papers, rather than to those whose ideas have the most impact. Ultimately, the fear is that fraudulent behavior enabled by AI will undermine the systems and institutions that society relies on.
Advantages of artificial intelligence
However, some AI arms races have surprising hidden upsides, and the hope is that at least some organizations can change in ways that make them stronger.
Science seems likely to emerge stronger thanks to AI, but it faces problems when AI gets things wrong. Consider one example: attempts to filter out telltale AI-generated phrases from scientific papers.
A scientist using AI to help write an academic paper can be a good thing, if the tool is used carefully and its use disclosed. AI is increasingly an essential tool in scientific research: for literature review, programming, data coding and analysis. For many, it has become a crucial support for scholarly expression and communication. Before AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, this kind of help has been a costly necessity. AI provides it to everyone.
In fiction, AI-generated works submitted fraudulently cause harm, both to human authors now facing increased competition and to readers who may feel cheated after unwittingly reading a machine’s work. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under clear guidelines, and use AI themselves to evaluate them on criteria such as originality, relevance and quality.
Others may reject AI-produced work, but this will come at a price. It is unlikely that any human editor or technology will be able to reliably distinguish human writing from machine writing. Instead, outlets that want to publish only human authors will need to limit submissions to a pool of writers they trust not to use AI. If these policies are transparent, readers can choose the model they prefer and read happily from either or both types of outlet.
We also see no problem if a job seeker uses AI to polish a resume or write better cover letters: the wealthy and privileged have long had access to human assistance for these tasks. But it crosses a line when AI is used to lie about identity and experience, or to cheat in job interviews.
Likewise, democracy requires that citizens be able to express their opinions to their representatives, or to one another through a medium such as a newspaper. The rich and powerful have long been able to hire writers to turn their ideas into compelling prose, and in our view AI providing this assistance to more people is a good thing. Here, though, AI errors and bias can be harmful. Citizens may be using AI as more than a time-saving shortcut; it may be extending their knowledge and abilities, enabling them to make claims about historical, legal or political matters that they cannot be expected to verify independently.
Fraud booster
What we don’t want is lobbyists using AI in covert influence campaigns, generating many messages and passing them off as the opinions of individual citizens. This, too, is an older problem that AI is making worse.
What distinguishes the positive from the negative here is not any inherent aspect of the technology, but the power dynamic. The same technology that reduces the effort required for a citizen to share their life experience with a legislator also enables corporate interests to misrepresent the public on a massive scale. The former is an application of AI that equalizes power and strengthens participatory democracy; the latter is one that concentrates power and threatens it.
In general, we believe that the writing and cognitive assistance long available to the rich and powerful should be available to everyone. The problem comes when AI systems make fraud easier. Any response needs to balance embracing this new democratization of access with preventing fraud.
There is no way to stop this technology. Highly capable AI systems are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help – for those acting in good faith. But there is simply no way to prevent academic writers, job seekers or citizens from using these tools, whether as legitimate assistance or to commit fraud. That means more comments, more messages, more applications and more submissions.
The problem is that those on the receiving end of this AI-fueled deluge cannot handle the growing volume. What can help is the development of assistive AI tools that benefit organizations and society while also reducing fraud. This may mean embracing AI assistance within these adversarial systems, even though defensive AI will never achieve decisive superiority.
Balancing harms with benefits
The sci-fi community has been grappling with AI since 2023. Clarkesworld eventually reopened submissions, claiming it had a workable way of separating human-written stories from AI-generated ones. No one knows how long, or how well, this will continue to work.
The arms race continues. There is no simple way to know whether the potential benefits of AI will outweigh its harms, now or in the future. But as a society, we can influence the balance between the harms these technologies cause and the opportunities they present as we make our way through a changing technological landscape.