In envisioning the role of an AI-agent journalist of the future, I draw from my experience as the editor-in-chief of a small newsroom with limited resources and a strong focus on social media, particularly Telegram and Instagram, where information can be disseminated quickly and cost-effectively. Our work format relies on the rapid, high-quality adaptation of external news texts into a more reader-friendly form, along with occasional exclusive reports based on insider sources that tend to go viral through reposts.
Functions of the AI-Agent
What could an AI agent contribute to such a newsroom model? First and foremost, it would help us establish a full-fledged news feed comparable to those of much larger and better-funded media outlets. The agent would take over the time-consuming tasks of news scanning and rewriting, which currently consume valuable human resources without generating original content.
In addition, delegating the technical aspects of fact-checking to the AI, specifically the search for corroborating or refuting sources, would be highly effective. That said, I believe interpretive functions (e.g., evaluating the reliability of sources) should remain under human oversight for as long as possible.
An AI agent could also provide invaluable support in sourcing stock imagery and generating original illustrations. Even the most traditional news outlets (and ours is not one of them) are now obliged to adapt to the multimedia era: text without formatting, emojis, or images becomes less and less readable each year.
Our newsroom already employs automated translation and voiceover for all our website materials, from Russian into English and Uzbek. However, we must admit that the technology we use is far from cutting-edge. A well-developed AI agent could handle these tasks with far greater finesse.
Modes of Interaction
The central mechanic of the AI agent, as I envision it, would be its “multiplicity of personas.” In other words, the agent should operate in several distinct modes, each trained on relevant data and optimized for specific tasks. These personas could include:
- AI-News-guy (finds and rewrites news)
- AI-Factchecker (searches for factual evidence and presents it to the human operator)
- AI-Designer (formats longreads and posts, selects emojis and images, creates illustrations, advises on multimedia formatting)
- AI-Editorial-Assistant (generates creative headlines, offers fresh perspectives, locates overlooked details, and suggests questions for public figures and government bodies)
Creating separate personas for translation or voiceover might be unnecessary, as those are mostly mechanical processes. However, if emotionally expressive voiceovers become a priority, such a persona might be justified. Otherwise, these functions could be handled in a standard AI chat interface.
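In practice, the "multiplicity of personas" could be as simple as routing each newsroom task through a persona-specific system instruction before it reaches the underlying model. The sketch below is purely illustrative: the persona names and prompt texts are my own assumptions, not an existing product or API.

```python
# Hypothetical sketch: routing newsroom tasks to persona-specific prompts.
# Persona names and instruction texts are illustrative assumptions only.

PERSONAS = {
    "news": "You find and rewrite news into reader-friendly posts.",
    "factcheck": "You search for corroborating or refuting sources "
                 "and present them to a human operator.",
    "design": "You format longreads and posts, select emojis and images, "
              "and advise on multimedia layout.",
    "assistant": "You generate creative headlines, fresh angles, and "
                 "questions for public figures and government bodies.",
}

def build_prompt(persona: str, task: str) -> str:
    """Prepend the chosen persona's instruction to a concrete task."""
    if persona not in PERSONAS:
        raise ValueError(f"Unknown persona: {persona}")
    return f"{PERSONAS[persona]}\n\nTask: {task}"
```

The resulting string would then be sent to whichever model the newsroom has licensed; the point is that one agent can present several specialized "faces" without separate systems for each.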
Currently available AI models such as ChatGPT, Gemini, and Claude are already capable of performing most or all of these tasks. If providers make it feasible for even small media outlets to build customized corporate chatbots, access will no longer be the bottleneck. From personal experience, Gemini has proven to be the best for long-form translation, while ChatGPT excels at crafting formal correspondence, editorial stylization, and, of course, illustration.
The training of a specialized agent should ideally combine vendor-provided resources with in-house materials and expertise. However, smaller outlets suffer from limited datasets. A promising and journalistically valuable initiative would be for established media organizations to offer parts of their training data to help smaller ones, preferably as a low-cost or philanthropic service.
Risks
The most obvious challenge in working with AI agents is, of course, hallucinations. Invented or distorted facts, critical omissions, and interpretations steeped in racism, bias, or outdated so-called “common sense” all pose risks. This problem is not unique to journalism; it affects every AI application, and its solution will likely lie in technological and legal developments rather than editorial ones. For now, human review remains the only truly effective safeguard.
Another issue concerns copyright and legal ambiguity. Many such risks can be mitigated through clear agreements with the AI provider regarding ownership of generated content, as well as by training the agent to flag materials requiring licensing or carrying commercial-use restrictions for human review.
Risks related to the human factor will be addressed in the Conclusion.
Accountability: Who Is Responsible?
First, I would exclude AI developers from the chain of responsibility, unless malicious intent can be proven. Responsibility should lie with the users, based on the simple market principle: “Why did you buy a faulty product?”
In well-structured workflows, the bulk of responsibility should fall on AI operators and the editors who approve their output. If an AI hallucinates or misrepresents facts and the operator passes the content on for publication, they should be held accountable.
However, this requires fair task distribution in terms of volume and expertise, clear editorial protocols, and a well-trained AI agent. If these conditions are unmet, the operator and editor become victims of circumstance, and responsibility shifts to those in charge of AI selection, training, and task delegation.
Where Are We Headed?
When it comes to the future, which, in my view, will inevitably intertwine journalism with AI agents, my main concerns are not hallucinations or the spread of misinformation by the agent. In fact, I believe we already have a fairly good grasp of how to combat these issues, which makes them seem less frightening.
Instead, I would highlight three key problems for which clear answers are still far from obvious:
- Journalist job displacement
- The “dead internet theory”: AI-flooded content ecosystems
- Operator apathy and eroded responsibility
The first problem is perhaps the most obvious. A journalist’s career still often begins with the role of news writer. Yes, “...rewriting the news. The fingers of a dead man tapping at the keys in a stuffy office,” as the rap group Makulatura once recited: hardly the most exciting kind of work, and it’s only natural to want to free people from it.
And yet, the journalistic community still hasn’t fully developed a clear understanding of what will now serve as the professional school for young journalists. Where will they learn to respond quickly to breaking news, to collaborate effectively in a team, to instantly translate complex ideas into plain language, and to navigate the overwhelming flood of information?
It’s likely that a new profession will emerge: that of the AI news operator. But such a role would already seem to require the knowledge and skills that a traditional news writer only acquires over time. Moreover, one AI operator might replace five to ten junior reporters, yet that doesn’t mean the desire among young people to become journalists will decline just as sharply. And so the question remains hanging in the air.
The second issue concerns the increasing saturation of the internet with AI-generated content: material produced at various stages of the technology’s development. So far, the volume of such content hasn’t been high enough for us to fully grasp its impact, somewhat paradoxically, not only on us, the users, but also on AI training itself. I suspect that we may eventually encounter a closed-loop effect, where these “averaged,” subtly unnatural AI-generated texts and media flood the internet so thoroughly that “havens” will start to emerge: platforms where AI-generated content is strictly banned, much like some websites currently prohibit NSFW material.
What kind of internet, including its news environment, will exist when content is predominantly created by AI (which can produce it on a scale humans could never match) remains to be seen. A striking real-world example, and perhaps a forewarning, is DeviantArt, where AI has already pushed many human artists aside, prompting mass protests, though ultimately futile ones.
The third concern is that of responsibility. As AI becomes “almost perfect,” any AI operator will be tempted to let go of manual control and stop reviewing the agent’s output. The same temptation, or rather necessity, may arise when an operator faces an overwhelming volume of content to review.
In such cases, missed hallucinations and dangerous errors will be inevitable. Sadly, it is precisely in this space that machine unreliability fuses with human fallibility, and I don’t believe we’ll ever be able to fully resolve that problem.
To sum up these reflections, I want to share a feeling: when I think about the present and future of journalism in the age of AI, I am seized by a sense of standing on a threshold, a blend of excitement and frustration, of fear and wonder, of the joy of a pioneer and the dread of witnessing the end of an era.
And my main takeaway, for now, is this: there’s still far too much we don’t know, or even suspect. And maybe that’s the most fascinating thing about being alive in our time.