“You absolutely have to use AI”: AI researcher on Czech-US alignment in AI regulation, and more
Today, I am speaking with Lea-Ann Germinder, an AI researcher based in Missouri, US, who focuses on the responsible use of artificial intelligence. Germinder researches areas where AI technologies raise ethical concerns, particularly in how they impact privacy, decision-making, and bias. Alongside her work in Missouri and other projects, Germinder recently shared insights at the Czech Academy of Sciences and Charles University in Prague. We'll also explore how AI research is viewed in the US compared to Czechia, and how emerging AI technologies are influencing the world today.
Let’s begin with your areas of focus and research. Could you explain your specific areas, and also some of the ethical concerns, since we're talking about ethics and AI in your research at the University of Missouri?
“Well, I'm proud to be at the journalism school at the University of Missouri, where my focus is on strategic communication and public relations. I have decades of experience, and I decided to go back and pursue my doctoral degree, specifically to research how public relations practitioners can practice responsible AI in their use of generative AI, and how they can best counsel their clients or organizations. With respect to ethical practice, I focus on trust, truth, and transparency. I’m a fellow of the Public Relations Society of America, and we have an ethical code of conduct similar to the journalism code.”
What are some ways people are concerned?
“When we're talking about trust, we're focusing on how to protect confidentiality, privacy, and security. How do we avoid copyright violations? That’s a big issue with lawsuits here. In terms of establishing truth, it’s about how we fact-check and address the ‘hallucinations’ that AI sometimes generates, making sure we correct them. We also need to correct for bias when we see it and when we’re able to do so. Finally, on transparency, which I think is particularly intriguing right now, it’s about disclosing the use of AI to maintain trust with people and ensure that we are telling the truth. The issue of disclosure is a very interesting aspect of the research.”
And are you noticing disagreement on this? Is there unanimous agreement on these issues, or are you seeing different discussions across the Atlantic, let's say in the EU and Czechia versus the US?
“Oh, no, there’s no full alignment. There is alignment in terms of the AI guidelines that the associations have put out on these issues. For example, I'll mention disclosure. The association guidelines say we absolutely must disclose, but what’s intriguing to me is that both in the US and the Czech Republic, there are still questions about it. So, for example, if I need to disclose my use of AI, does that apply if I’m using it just to generate ideas and then fully developing the final product myself? The guidelines would indicate yes, but there’s still discussion around that. Or, for example, using a tool like Grammarly to correct grammar? Largely, no, of course, you wouldn’t disclose that.
“So, while the associations say we should disclose, I don’t think it’s universal, and I think there’s a lot of discussion about how and when to do it. Certainly, when it comes to agencies, they definitely need to disclose it to their clients, but even that varies in how it’s disclosed. Again, this is the case both in the Czech Republic and the US, where there’s no real agreement, if that makes sense.”
I've used Grammarly liberally as an undergraduate and graduate student. But correct me if I'm wrong, Grammarly, DeepL, or Google Translate, these are all types of AI, right? So, what would someone who proposes disclosing, let's say, ChatGPT or other generative AI tools have against something like Grammarly? What distinction do they draw between them?
“Well, on the one hand (and again, my research started in 2022, so I’m still continuing it), the initial consensus was clear: you need to disclose. But it wasn’t exactly clear how to disclose. For example, do I say, ‘I used Grammarly in producing this article’? But now, there’s less concern about that, and more focus on situations like, some experts say, ‘I used ChatGPT to get ideas for this blog post, but the final product was created by the student.’ These are the kinds of discussions happening.
“Others, particularly in public relations and academia, argue that if you only use AI to get ideas, you don’t need to disclose it because the final product is yours. In an agency setting, it’s more about the agreement with the client, and in academia, it comes down to the university’s policies on disclosure.”
And then you're also presenting this research very broadly, even at Charles University [in Prague]. So, how is your research received by the Czech audience? Is there a difference with the US audience you’ve noticed?
“I presented several lectures at Charles University and the Czech Academy of Sciences in December. I haven’t formally presented this research in the US yet, but I’m planning to.
“The reaction at both Charles University and the Czech Academy of Sciences was very positive, particularly regarding the need to use AI ethically and follow the guidelines. In the academic setting, there wasn’t as much focus on disclosure because, with students, that depends on Charles University’s policy.
“At the Czech Academy of Sciences, however, there were many comments about the ethics of AI use and who determines what is ethical. That sparked an interesting discussion. In my research, I reference Kant, the German philosopher, and his concept of the categorical imperative: essentially, do what is right. And to be honest, Jakub, in business we always say that ethics should guide what is right every day, not merely what is legal. Ethics should be above legality. So, there was quite a bit of discussion about that in Prague.”
At the moment, some people still frown upon AI because they view it as detrimental to the creative spirit in fields like journalism, public relations, and others. However, there are many areas where I can immediately see it as something very productive.
"For example, when I was in grad school, I was a teaching assistant, and I could see the human flaw in grading essays. You’ve probably noticed this as well, where grading can be quite subjective from one teaching assistant to another. Maybe between professors, grading can be more consistent, but for newcomers grading essays, I saw it as very imbalanced. There’s probably a plethora of ways AI can assist in making our work more objective."
So, what would you say is a constructive approach to the proper use of AI? Because you don’t seem like someone who’s suggesting we reject using it.
“No, I think from both my practitioner experience and now my academic experience, what I’m seeing is that you absolutely have to use AI. The question is how you use it and how you use it responsibly. Also, are the tech companies being responsible in creating products that are safe and secure, with the appropriate guardrails for safety and ethical use?
“I’ll go back to Grammarly, because this just came up. One of the big concerns with Grammarly is the scraping of your information when confidential data is put into the tool. If you’re using free tools, how are they handling that information? There’s a concern about not putting confidential information into these free tools. As for the tools with guardrails, if you will, it’s important to make sure they’re safe. I thought it was interesting that this issue just came up at my university regarding the use of Grammarly.”
What do you think about the productivity or economics of this? Many people are saying that we've been lagging in this area [in the industrialized world]; we don't know how to increase productivity, especially given demographic decline and shrinking populations. Could AI technologies potentially help us revolutionize our productivity?
“I see both sides. In the US, the focus is on innovation and productivity. Many of the people I spoke to are confident that AI will absolutely increase productivity. As long as we maintain human oversight, there are tremendous productivity gains to be made in terms of speed and efficiency.
“For example, let me take you through a process: you ask a question (we call this ‘ideation’), and AI gives you an answer. You might say, ‘Okay, I didn’t think about that,’ and it’s so fast that there’s no reason why this won’t boost productivity. The biggest barrier, I think, isn’t just safety but also understanding how to use it effectively. With any new technology, there’s always the question of what percentage of its potential is actually being utilized.
“This is a very exciting time, and it’s moving incredibly fast. From an ethical standpoint, we do need to be concerned about issues like privacy, copyright, and confidentiality to ensure we’re moving forward responsibly.
“President Biden gave a speech, and while there was plenty of political commentary around it, there was one line I really took to heart: ‘We must make sure AI is safe, trustworthy, and good for all humankind.’ Again, I can’t stress that enough: Yes, the technology is great, but how can we make it work for the best interests of humankind? And this sentiment has been echoed both in the US and in the Czech Republic—individuals believe in that idea.”
Do you think the EU or the Czech Republic is better at protecting digital privacy rights than the US?
“Well, that’s interesting, because I did think it would be different, especially in the Czech Republic with the EU AI Act. Yes, privacy was described as important, but my concern is that a significant number of people are using free tools. So, I do have a question there.
“Certainly, there’s no suggestion that anyone is entering confidential information, but I do have concerns about the free tools and exactly what information is being entered [and what the free tools are doing with that information]. On a personal level, yes, individuals are concerned. So, I think there needs to be more education on this, if you will.”