This is a human-to-human conversation


Audiences can no longer be sure whether the text they read in the media was written by humans or by artificial intelligence, or whether images and videos are authentic or generated. Journalist Ivana Kragulj tells “Vreme” how these tools are used in Serbian media

 

Journalists have a knack for uncritically adopting the very tools that undermine them. Artificial intelligence (AI) is now knocking on the door. And it is a strange moment – programs like ChatGPT are just good enough to help with some tasks, but still not good enough to write entire articles.

 

Where are we headed, and how much can audiences rely on media rushing to adopt AI? Ivana Kragulj, a journalist with the Independent Journalists’ Association of Serbia who follows this topic closely, discusses it for our monthly newsletter Redakcija.

 

 

VREME: What do journalists in Serbia already use artificial intelligence for?

 

IVANA KRAGULJ: Most have tried it, but fewer use AI in their daily work. Investigative journalists, for example, use it to analyze large piles of data. Others most often use it to generate headlines, transcribe recorded interviews, and produce illustrations, graphics, even video.

 

AI can also be used for so-called text optimization – making sure an article ranks highly on Google. Take the weather forecast for Belgrade: everyone publishes it, but how do you get your article ranked first? AI can recommend which keywords to insert, which links to include, and so on.

 

 

How is this regulated in the journalists’ code of ethics?

 

The revised version of the code has a section on artificial intelligence. It is the only document regulating its use in journalism – AI is not mentioned anywhere in the media laws. A law on artificial intelligence is also in the works, but there are no representatives of journalists’ associations in that working group.

 

And the provision in the code is quite general. It says that the media must use artificial intelligence transparently, responsibly and proportionately, but that the media themselves remain responsible for the content.

 

 

So if an entire text or an entire image is generated by artificial intelligence, it would have to be labeled as such?

 

For illustrations, of course – you need to know whose prompt it is, that is, who gave the AI program the task, and which tool was used.

 

As for articles, it is of course not advisable to publish an entire article generated by AI. It would be transparent to disclose that AI was used, but the journalist has to verify all the information, because tools like ChatGPT tend to hallucinate and produce incorrect data. It is another matter if AI merely checks the style and spelling, or polishes the text a bit.

 

Large global organizations recommend that newsrooms adopt an AI usage policy stating how and for which tasks artificial intelligence is used, and exactly which tools.

 

 

Do you think journalists are making themselves redundant?

 

It depends. Certainly not those who work to build credibility and trust. But there are big media houses where it is noticeable that they use AI heavily and check nothing.

 

For instance, I believe Kurir published a transcription of one of Ana Brnabić’s appearances without any editing – it was unreadable. Or the example of Blic, which copied an entire text along with the question the journalist had asked the artificial intelligence. They neither disclose that they use AI, nor does the text pass any editorial control.

 

As for how all this will affect journalists themselves and their employment, we shall see.

 

 

In a country of fake news, how long before someone churns out entire websites and portals this way? Say, instructing the program to produce only texts praising Vučić.

 

I wouldn’t want to give them ideas (laughs). Although for that you need to know the tools well, because artificial intelligence gathers all kinds of data from the Internet, so it could also pick up things that are not flattering, say, about Vučić.

 

 

To what extent is the audience able to recognize that a piece of content was produced by AI?

 

I have noticed that people, even though they are not sufficiently media literate, somehow recognize when content was created with the help of artificial intelligence. There is a lot of discussion about it on social networks, and there are people who are somewhat more expert, or fact-checkers. So someone notices that fingers are missing in a photo, or that a video has strange details.

 

It is probably harder with text. You have to dig deeper and look for sources, and in our circumstances that can be demanding.

 

Source: Vreme
