Responsible Use of AI in Research and Publishing: 
Ethical Guidelines for WNE UW Researchers and Students

Introduction

Artificial Intelligence (AI) tools – from machine learning algorithms to generative models like ChatGPT – are increasingly used at various stages of economic research and publishing. While these tools can enhance productivity and insights, their use comes with ethical responsibilities. Researchers at the Faculty of Economic Sciences, University of Warsaw, must ensure that AI is used responsibly, transparently, and with scholarly integrity. The following guidelines summarize key ethical considerations for using AI in data analysis, literature review, and academic writing, drawing on international principles, best practices, and key insights from recent reports.

Ethical Use of AI Throughout the Research Process

Researchers should integrate AI tools in ways that supplement human expertise without replacing it. At every stage – from data collection and analysis to writing – maintain human oversight and critical judgment (EC Digital Strategy). Key considerations include:

  • Data Analysis and Modeling: Use AI-driven analytics (e.g., machine learning models) responsibly. Validate AI-generated results with conventional methods and domain knowledge to avoid blind trust in automated outputs (a sketch of such a cross-check follows this list). Always ensure a “human-in-the-loop” approach for important decisions (EC Digital Strategy, Korinek 2023), and be prepared to explain and justify any AI-assisted methodology.
  • Literature Review: AI tools can help survey vast literature and even draft summaries. However, researchers must verify any AI-provided references or summaries. Do not assume accuracy without checking sources – large language models sometimes produce plausible-sounding but incorrect or even fabricated information (WAME). Ensure that important works are not omitted due to biases in the AI’s training data. 
  • Writing and Editing: When using AI for text generation or translation (e.g., to polish language or generate a draft), treat the output as a suggestion requiring careful review. The researcher is ultimately accountable for all content (WAME). Edit AI-generated text to ensure clarity, accuracy, and originality, and never allow an AI to inject uncited material: all factual claims and quotations must be verified and properly referenced by the human author. AI can assist with ideation, drafting, and revision, but human researchers remain responsible for the final output (Korinek 2023).
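
To make the first point concrete, the following minimal sketch shows one way to cross-check an AI/ML model against a conventional baseline before trusting its output. It assumes Python with NumPy and scikit-learn; the synthetic data, the model choices, and the 0.10 divergence threshold are purely illustrative, not a prescribed procedure.

    # Cross-check an ML model against a conventional baseline before
    # trusting its output. Data and thresholds are illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))  # stand-in for real covariates
    y = X @ np.array([1.0, 0.5, 0.0, 0.0, -0.3]) + rng.normal(scale=0.5, size=200)

    # Out-of-sample R^2 for the ML model and for an OLS benchmark.
    ml_score = cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=5).mean()
    ols_score = cross_val_score(LinearRegression(), X, y, cv=5).mean()
    print(f"ML model CV R^2:  {ml_score:.3f}")
    print(f"OLS baseline R^2: {ols_score:.3f}")

    # Human-in-the-loop check: a large, unexplained gap in either direction
    # calls for manual inspection, not automatic acceptance of either model.
    if abs(ml_score - ols_score) > 0.10:
        print("Models diverge - review before reporting results.")

The point of the comparison is not that one model is superior, but that an unexplained divergence is a signal for human review rather than automated acceptance.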

Transparency and Disclosure of AI Assistance

Transparency is a fundamental ethical principle in research (UNESCO, EC Digital Strategy). Researchers must openly disclose if and how AI tools were used in conducting or writing the research. This aligns with the European Code of Conduct for Research Integrity, which explicitly calls for reporting methods “including the use of external services or AI and automated tools” in a way that facilitates verification. To adhere to transparency requirements:

  • Follow Journal and Publisher Policies: Many journals now require authors to confirm whether AI was used. International editorial ethics bodies (e.g., COPE/WAME) recommend that authors explicitly disclose AI assistance and provide details (the tool name, version, and query or usage method) in the manuscript (WAME). Always check and comply with the specific guidelines of the target journal (The Econometric Society 2025).
  • Acknowledge AI Tools in Methodology: In the paper’s methods section or acknowledgments, state which AI tools were used (e.g., a specific software package, algorithm, or language model) and for what purpose (data analysis, generating summaries, proofreading, etc.). For example, if ChatGPT assisted in drafting a section, note this in the acknowledgments or a footnote, per journal guidelines; a sample wording follows this list.
  • Do Not List AI as an Author: Authorship implies responsibility for a publication, which an AI cannot bear. Consistent with global recommendations, “Chatbots cannot be authors” (WAME). Only human researchers who made intellectual contributions and can take responsibility should be listed as authors.
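
For illustration, an AI-use disclosure in the acknowledgments might read as follows; the wording is only a suggestion, since many journals mandate their own phrasing:

    “During the preparation of this manuscript, the authors used ChatGPT
    (OpenAI) to improve the readability of the introduction. The authors
    reviewed and edited the generated text and take full responsibility
    for the content of the publication.”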

Addressing Biases and Limitations of AI Tools

AI systems are not infallible. They learn from existing data and thus can embed and even amplify biases, errors, or gaps present in that data (UNESCO). Researchers must be vigilant about these limitations:

  • Bias Awareness: Be aware of potential biases in AI outputs. For example, an economic forecasting model might systematically err if its training data overlook certain groups or regimes, and a language model might reflect gender or cultural biases in how it summarizes literature. International guidelines on AI (UNESCO, EU) stress that unfair bias must be avoided to prevent discrimination or marginalization of groups (EC Digital Strategy). To mitigate this, use diverse and representative data when training models, critically evaluate AI-generated results for bias or anomalies, and make understanding the tool’s limitations part of routine due diligence (Korinek 2023).
  • Verification of AI Outputs: Studies have shown that generative AI can produce fabricated references or incorrect facts (WAME). Treat AI suggestions as hypotheses or drafts – not confirmed truth – until validated (Korinek 2023; The Econometric Society 2025). Always cross-check AI-derived insights with traditional analysis or external sources. If an AI tool provides a statistical result or a bibliographic reference, verify its correctness through independent calculations or by locating the referenced source (see the verification sketch after this list).
  • Explainability and Accountability: Prefer AI tools that offer explainability (or use techniques to interpret the model’s decisions), especially for high-stakes analysis. Understanding why an algorithm produced a given result is crucial for trust and ethical use (UNESCO). The recent report from The Econometric Society (2025) recommends that journals adopt clearer editorial guidelines to enhance accountability, including better oversight of AI's role in the publication process.
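
As a concrete aid to the verification point above, the sketch below checks whether an AI-suggested reference actually resolves, using the public Crossref REST API (api.crossref.org). It assumes Python with the requests library; the DOI shown is a placeholder to be replaced with the one the AI tool supplied.

    # Check that an AI-suggested reference resolves before citing it.
    import requests

    def crossref_title(doi: str) -> str | None:
        """Return the registered title for a DOI, or None if it does not resolve."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            return None
        titles = resp.json()["message"].get("title", [])
        return titles[0] if titles else None

    doi = "10.1234/placeholder"  # replace with the DOI the AI tool supplied
    title = crossref_title(doi)
    if title is None:
        print("DOI not found - the reference may be fabricated; verify manually.")
    else:
        print("Registered title:", title)  # compare with the AI's claimed title

Note that a resolving DOI only confirms the record exists; whether the cited source actually supports the claim attributed to it still requires reading it.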

Ensuring Academic Integrity and Preventing Misuse

Maintaining academic integrity is paramount when using AI. All existing principles of research ethics – honesty, rigor, and responsibility – apply to work involving AI, just as they do to any other research tool (AEA Code of Professional Conduct). Key practices include:

  • Avoid Plagiarism and Fabrication: Do not use AI to generate content that you pass off as your own ideas, or to create fake data. If AI produces text or translations, review or rewrite them in your own scholarly voice. Any verbatim material taken (whether from AI output or other sources) must be properly quoted and cited. Remember that you, as the researcher, are responsible for ensuring that the final work contains no plagiarism or misinformation (WAME; Korinek 2023).
  • Acknowledge Limitations of Expertise: The AEA’s professional code highlights the importance of “acknowledgment of limits of expertise” and exercising honesty and care in research (AEA Code of Professional Conduct). Using AI does not substitute for expertise. Be candid about what tasks were aided by AI and where human judgment was applied (Korinek 2023; The Econometric Society 2025).
  • Privacy and Data Security: Ensure that using AI does not violate privacy or data protection standards. Do not upload sensitive or confidential data to external AI platforms without proper authorization, as this could breach ethical standards or legal agreements. Use secure, approved tools, especially when dealing with personal data, in line with institutional and EU data protection regulations (Korinek 2023; The Econometric Society 2025); a pseudonymization sketch follows this list.
  • Responsible Collaboration with AI: Treat AI as a junior assistant – helpful for speeding up tasks, but requiring supervision. It should augment, not replace, the intellectual work of the researcher. Retain a critical eye: if an AI-generated output seems suspect, investigate further rather than simply using it. Cultivate the mindset that final accountability lies with the human researcher, and misuse of AI (such as letting it generate entire papers with minimal oversight) can constitute academic misconduct.
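
In connection with the privacy point above, the sketch below illustrates one simple precaution: stripping or pseudonymizing direct identifiers before any data excerpt leaves a secure environment. It assumes Python with pandas; the column names and the hard-coded salt are illustrative, and this is not a complete de-identification procedure.

    # Strip or pseudonymize direct identifiers before data leaves a secure
    # environment (e.g., before use with an external AI tool). Illustrative only.
    import hashlib
    import pandas as pd

    df = pd.DataFrame({
        "name": ["A. Kowalska", "J. Nowak"],      # direct identifier - drop
        "pesel": ["90010112345", "85050554321"],  # national ID number - drop
        "respondent_id": [101, 102],              # keep, but pseudonymize
        "income": [5200, 4100],                   # analytical variable - keep
    })

    def pseudonymize(value: object, salt: str = "project-secret") -> str:
        """One-way hash keeps records linkable without exposing the raw ID."""
        return hashlib.sha256(f"{salt}{value}".encode()).hexdigest()[:12]

    safe = df.drop(columns=["name", "pesel"]).assign(
        respondent_id=df["respondent_id"].map(pseudonymize)
    )
    print(safe)

Bear in mind that under EU data protection law pseudonymized data are still personal data; hashing identifiers reduces risk but does not by itself amount to anonymization.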

Alignment with International Ethical Guidelines

These recommendations echo principles from leading organizations. UNESCO’s Recommendation on the Ethics of AI emphasizes protecting human dignity through values like transparency, fairness, and human oversight (UNESCO). The European Commission’s Ethics Guidelines for Trustworthy AI likewise call for transparency, non-discrimination, and accountability in all AI applications (EC Digital Strategy). The American Economic Association’s Code of Professional Conduct stresses honesty, transparency in research, and acknowledgment of one’s limitations (AEA Code of Professional Conduct) – which extends to disclosing and responsibly using AI tools. Additionally, guidance from bodies like the Committee on Publication Ethics (COPE) and the World Association of Medical Editors (WAME) makes clear that AI assistance should be fully disclosed and that authors bear responsibility for AI-generated content (WAME). Researchers are encouraged to stay updated with such evolving guidelines to ensure their practices remain ethical and acceptable in the global academic community.

Conclusion

AI technologies offer exciting opportunities to advance economic research, but they must be employed conscientiously. By following these ethical guidelines – using AI thoughtfully across research stages, being transparent about its use, guarding against biases, and upholding academic integrity – faculty researchers can harness AI’s benefits while maintaining trust, accountability, and excellence in scholarship. These principles will help ensure that the integration of AI into research and publishing at the University of Warsaw upholds the highest standards of ethics and integrity, consistent with international best practices.