The advent of AI has undeniably opened a vast new world of possibilities in science and education, which can potentially save time and money. However, should we entirely rely on the ‘generated knowledge’ from any AI platform?
Most of us have heard about various AI platforms used as language models, spelling and grammar checkers, and image creation tools. In fact, many master's students already use ChatGPT and Grammarly to quickly create programming scripts, improve the spelling and readability of a text, and even generate attractive pictures for the cover of their master's thesis report.
I am in no way against the use of such AI tools, although I think we ought to be more aware of, and cautious about, the risks of generative AI. Privacy and data security are among them. Since AI is not well regulated worldwide, there is a high risk of information leakage, which can cascade into more serious issues such as violations of intellectual property and privacy laws, and breaches of confidentiality agreements. Other problems in science and education involve the generation of plagiarised or fabricated scientific literature for profit, unethically inflating publication metrics.
Innovative technologies like AI can be great for the progress of science and education, provided there are clear guidelines about their use. Fortunately, WUR has put considerable effort into supporting students and employees, via the GenAI intranet and ai.education@wur.nl, which provide access to relevant resources and clear guidelines, and thus promote the correct use of these technologies.
Willy Contreras-Avilés (34), from Panama, is a second-year PhD candidate in the horticulture and biochemistry of medicinal cannabis. He likes to dance (perrear), cook Italian food, and swim.