Designing a questionnaire like a research study: how we tackled validity, reliability, and objectivity


Behind the scenes of building a questionnaire that actually works

When we started this project, our main goal was clear: to understand the real state of artificial intelligence adoption in Slovak companies. How exactly are businesses implementing AI? How does this influence their internal processes? Which factors support technology acceptance and which barriers slow everything down? And of course, many more related questions. But getting reliable answers is not as simple as putting together a list of questions and publishing an online survey. If we want data that truly reflects what is happening in organizations, every step of the research process matters. And the creation of the questionnaire – our main data-collection tool – requires careful preparation and a scientifically grounded approach.

Designing a good questionnaire means more than just writing questions that “sound right.” It requires working with methodological principles, reviewing existing literature, analyzing previous studies, and understanding how similar instruments were constructed and validated. That is exactly the approach our team chose. In this article, we want to share the foundation behind a strong research instrument: validity, reliability, and objectivity. We’ll explain what these concepts mean, why they are essential, and how we ensured each of them in our questionnaire.

Objectivity

Objectivity is one of the cornerstones of high-quality research. Every person – including researchers – has their own beliefs, experiences, and assumptions about a topic. These subjective factors can subtly influence both how questions are formulated and how responses are later interpreted. That’s why, when designing our questionnaire, we made a deliberate effort to minimize any possible researcher influence that could distort the results.

First and foremost, the entire questionnaire was grounded in scientific literature and findings from previous studies on AI implementation. Every question had a clear theoretical justification, and its inclusion was discussed and agreed upon within the team. None of the items were based on personal intuition or guesswork – they went through several rounds of team review, expert consultations, and adjustments after pilot testing.


We also strengthened objectivity by standardizing the questionnaire. All respondents received the same questions, in the same order, with identical instructions and explanations. Thanks to the electronic format, the researcher could not influence how participants understood or interpreted the items.

To avoid subjective interpretation during data analysis, we minimized open-ended questions. Most items were closed-ended, using Likert scales to standardize responses and ensure consistent coding. The coding scheme and evaluation criteria were defined in advance to remove any ambiguity or personal bias during the analysis phase.
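A fixed coding scheme like the one described above can be sketched as a simple mapping. The item names and scale labels below are hypothetical, not taken from the actual questionnaire; the point is that every verbal answer maps to exactly one number, decided before analysis begins:

```python
# Sketch of a pre-defined Likert coding scheme (item names and labels
# are hypothetical, not the actual questionnaire items).
LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

# Negatively worded items are reverse-scored so that a higher number
# always means stronger agreement with the underlying construct.
REVERSE_CODED = {"barrier_1"}

def code_response(item: str, answer: str, scale_max: int = 5) -> int:
    """Convert a verbal Likert answer to its numeric code."""
    value = LIKERT[answer]
    if item in REVERSE_CODED:
        value = scale_max + 1 - value
    return value

print(code_response("adoption_1", "Agree"))   # → 4
print(code_response("barrier_1", "Agree"))    # → 2 (reverse-scored)
```

Defining the mapping once, in code, means two analysts coding the same dataset cannot arrive at different numbers, which is exactly the kind of personal bias the pre-registered scheme removes.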

Finally, our questions were formulated to be short, clear, and unambiguous, without emotional wording, hidden suggestions, assumptions, or terms that could be interpreted differently by different respondents. This was achieved through collaborative writing, expert feedback, and pilot testing, which helped us verify that each question was easy to understand and interpret.

Validity

To ensure that our results truly reflect how Slovak companies are using artificial intelligence, it was essential to confirm that our questionnaire actually measures the variables we aimed to investigate. This is where validity comes in – the degree to which an instrument measures what it is intended to measure, and the extent to which the results correspond to reality.

Validity has several forms, and each of them plays a crucial role in ensuring the accuracy and meaningfulness of a research instrument.

Content Validity

Content validity determines whether the questionnaire fully covers all key aspects of the phenomenon we want to measure. To achieve this, we based our items on a comprehensive review of scientific literature and previous research on AI implementation and technology acceptance in organizations.

Each question underwent team discussions where we evaluated its relevance, clarity, and distinctiveness. Pilot testing helped us identify gaps or formulations that did not fully capture the constructs or could lead to misunderstandings.

We also paid close attention to terminology. To prevent meaning distortion, we used the forward-backward translation method. First, an independent translation into Slovak was completed. Then the team compared different versions and agreed on the most precise wording. Finally, a reverse translation back into English helped us detect any semantic deviations. This approach ensured accuracy and preserved the full meaning of each construct.


Construct Validity

At the same time, we worked on ensuring construct validity, confirming that each question accurately reflects the theoretical concepts behind it. Every scale and every item was linked to a specific theoretical model or framework.

We carefully checked that no question blended multiple constructs into one. After pilot testing, we examined the factor structure of our scales, which confirmed that items clustered in line with theoretical expectations. In other words, the questionnaire genuinely measures what it is supposed to measure.
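A full exploratory factor analysis requires a statistics package, but the underlying intuition of "items clustering as theory predicts" can be illustrated more simply: items belonging to the same construct should correlate strongly with each other and weakly with items from other constructs. The sketch below uses made-up responses and a plain Pearson correlation, not our actual data or analysis:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy responses (one list per item, one entry per respondent):
# two items of hypothetical construct A and one item of construct B.
a1 = [5, 4, 2, 5, 1, 3]
a2 = [4, 5, 2, 4, 1, 2]   # tracks a1 closely
b1 = [1, 2, 5, 1, 4, 3]   # moves independently of construct A

print(pearson(a1, a2))  # high: same-construct items cluster together
print(pearson(a1, b1))  # low/negative: cross-construct items do not
```

When items blend two constructs, this pattern breaks down: the offending item correlates moderately with everything and strongly with nothing, which is one of the signals a factor analysis makes visible.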

Criterion Validity

We also focused on criterion validity, which shows how well respondents’ answers align with real-world indicators. To verify this, we compared questionnaire responses with characteristics of the companies that could influence AI adoption, such as organization size or industry sector.

We added several control questions to check consistency and detect potential contradictions. This allowed us to ensure that the questionnaire not only makes sense theoretically, but also produces results that align with the behaviors and attributes of real companies.
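One common form of control question pairs an item with an opposite-worded counterpart; after reverse-scoring, a careful respondent's two answers should be close. The sketch below shows the idea with a hypothetical item pair and a 1-5 Likert coding; the pair names, threshold, and logic are illustrative assumptions, not our actual instrument:

```python
# Hypothetical pairs of (item, opposite-worded control item).
CONTROL_PAIRS = [("ai_benefit_1", "ai_benefit_control")]

def flag_contradictions(responses: dict, max_gap: int = 2, scale_max: int = 5):
    """Return pairs whose original and (reverse-scored) control answers
    diverge by more than max_gap points, suggesting careless answering."""
    flags = []
    for item, control in CONTROL_PAIRS:
        reversed_control = scale_max + 1 - responses[control]
        if abs(responses[item] - reversed_control) > max_gap:
            flags.append((item, control))
    return flags

# Strongly agreeing with an item AND with its opposite-worded control
# is a contradiction and gets flagged:
print(flag_contradictions({"ai_benefit_1": 5, "ai_benefit_control": 5}))

# A consistent respondent produces no flags:
print(flag_contradictions({"ai_benefit_1": 5, "ai_benefit_control": 1}))  # → []
```

Flagged response sets can then be inspected or excluded before analysis, so contradictory answers do not distort the aggregate results.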

Reliability

Ensuring the reliability of a questionnaire is essential. If the instrument is not stable and consistent, the entire study loses credibility. Without a reliable measurement tool, it becomes impossible to trust the results – you might think you’re measuring a specific phenomenon, but the responses could fluctuate randomly, leading to inaccurate conclusions about respondents or the factors being examined.

That’s why we focused on making every question as clear and straightforward as possible, so that all respondents would interpret them in the same way. As mentioned earlier, each item went through multiple rounds of internal team discussions and expert consultations to eliminate ambiguity or multiple interpretations. We also conducted pilot testing, which helped us identify items that worked well and those that required refinement.

For our scales, we assessed internal consistency using Cronbach’s alpha and McDonald’s omega, and we further evaluated the constructs through exploratory factor analysis. These steps allowed us to confirm that the questionnaire produces consistent results and reliably measures the intended constructs.

Why This Framework Matters

Building and implementing this research framework is a long-term process. The careful preparation of the questionnaire – along with a thorough evaluation of its objectivity, validity, and reliability – forms the foundation of the entire study. These steps allow us not only to accurately capture the phenomena we are researching, but also to offer practical, data-driven recommendations for organizations.

If you’re curious about how our research unfolded, what insights we uncovered, and how these findings can benefit Slovak companies, we invite you to join us on this journey. Follow our upcoming updates, where we will share both the results and practical takeaways from our work.


Stay with Us on the Journey

Follow AI-ImpactSK on LinkedIn
