AI, data and GDPR: how to build an ethical, low-risk survey for companies


Why ethics mattered from day one in AI IMPACT SK

In AI IMPACT SK, we knew that researching AI adoption in companies isn’t only about tools, budgets, or use cases. It’s also about trust. Companies may share information that is sensitive, for example about internal governance, compliance readiness, or perceived risks, so we designed the survey to be ethical, GDPR-aligned, and low-risk for every respondent.

Throughout the entire research process, we rigorously followed ethical principles and legal requirements, especially those related to personal data protection under GDPR. For us, ethics was not a formal appendix – it was a core part of research quality, credibility, and social responsibility.

Our ethical framework: Responsible Research and Innovation

We conducted the project in line with the principles of Responsible Research and Innovation (RRI), as reflected in European Commission strategic guidance. RRI shaped the full lifecycle of the research: how we communicated with respondents and how we collected, stored, and interpreted data.

Ethical integrity and transparency

We committed to research integrity and transparency across all steps. That meant collecting and interpreting data objectively, following ethical standards of scientific work and academic honesty, and avoiding conflicts of interest that could bias results.

Voluntary participation and informed consent

Participation in the questionnaire was fully voluntary. Before starting, each respondent received clear information about the purpose of the research, the scope of data being processed, and the measures taken to protect data and confidentiality.

Anonymity and confidentiality by design

To reduce risk and increase trust, we ensured that personal data remained non-identifiable throughout the research. Data were aggregated and anonymized at the point of storage, so individual respondents could not be traced through datasets or outputs.
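The idea of releasing only aggregated, non-traceable results can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: the threshold value (k=5) and the helper name `aggregate` are assumptions chosen for the example, in the spirit of k-anonymity-style suppression of small groups.

```python
from collections import Counter

# Hypothetical sketch: publish only aggregated counts, and suppress any
# group smaller than k respondents so no individual can be singled out.
# k=5 is an assumed illustrative threshold, not the project's setting.

def aggregate(responses, key, k=5):
    """Count responses by `key`, keeping only groups of size >= k."""
    counts = Counter(r[key] for r in responses)
    return {value: n for value, n in counts.items() if n >= k}

responses = [{"sector": "IT"}] * 7 + [{"sector": "retail"}] * 2
print(aggregate(responses, "sector"))  # the small 'retail' group is suppressed
```

Because individual rows never leave the secure environment, outputs like these counts are all a reader of the results can ever see.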

Risk minimization and proportionality

We designed the survey so that no respondent would face physical, psychological, or reputational risk. We also followed proportionality: we collected only the information strictly necessary to meet the scientific objectives, and we avoided collecting anything extra.

Social responsibility and value for practice

AI IMPACT SK is meant to generate tangible benefits for economic policy and business practice, while adhering to principles of social benefit, non-discrimination, and equitable access to innovation. This matters because AI research has real-world implications beyond academic results.

GDPR compliance: turning legal principles into survey practice

Because a corporate survey inevitably involves data processing, we ensured full compliance with Regulation (EU) 2016/679 (GDPR) and Slovakia’s Act No. 18/2018 Coll. on the Protection of Personal Data. Importantly, compliance was implemented as practical rules that shaped the survey design and the data lifecycle.

Here are the essentials we applied:

  • Lawfulness, fairness, and transparency: the purpose of processing (scientific research) was clearly defined and communicated before the questionnaire began.
  • Data minimization: we did not collect direct identifiers (name, address, email). Where needed, identifiers were replaced with coded labels.
  • Accuracy and data integrity (secure processing): data were stored in a secure environment with access restricted to authorized research team members and used exclusively for this project.
  • Storage limitation: data were retained only for the time needed for scientific processing and then anonymized.
  • Accountability: the research team assumed full responsibility for compliance with all data protection principles and procedures.
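The "coded labels" approach from the data-minimization point can be sketched as follows. This is an illustrative pseudonymization pattern, not the project's actual implementation: the function name `coded_label`, the label format, and the use of a salted hash are all assumptions made for the example. The key design point is that the salt is stored separately from the survey data, so the dataset alone cannot be linked back to a respondent.

```python
import hashlib
import secrets

# Hypothetical sketch: replace a direct identifier (e.g. a contact email)
# with a coded label. The salt lives in a separate, access-restricted
# store; the survey dataset keeps only the label.

def coded_label(identifier: str, salt: str) -> str:
    """Derive a stable, non-reversible code for an identifier."""
    digest = hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()
    return f"RESP-{digest[:8]}"

salt = secrets.token_hex(16)  # kept apart from the survey data
label = coded_label("company@example.com", salt)
record = {"respondent": label, "sector": "manufacturing", "uses_ai": True}
# The stored record contains a coded label, never the identifier itself.
```

Once scientific processing ends (the storage-limitation step), deleting the salt makes the labels permanently unlinkable, completing the anonymization.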

AI-specific ethics: assessing adoption and readiness, not just usage

Because the project focuses on AI adoption, we also considered ethical questions in a broader technological and societal context. We drew on the Ethics Guidelines for Trustworthy AI (European Commission’s High-Level Expert Group on AI), which define seven key requirements:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental well-being
  • Accountability

In AI IMPACT SK, these principles served as an interpretive lens. We did not evaluate only whether enterprises adopt AI, but also how ethically and regulatorily prepared they are for implementation. That includes signals such as internal ethical policies, data governance practices (including personal data protection), approaches to explainability, and the presence of risk assessment procedures.

What this approach delivers

By integrating ethical and regulatory dimensions directly into the research methodology, we achieved three critical outcomes: stronger protection of respondents, higher credibility of findings, and alignment with European expectations for scientific integrity and socially responsible research.


Stay with Us on the Journey

Follow AI-ImpactSK on LinkedIn
