From Adoption to Impact: Why Our AI Survey Combines Stages, Technologies, KPIs, and Mini-Questionnaires by Business Area
AI adoption inside companies rarely happens as one big, uniform “digital transformation project”. It usually spreads unevenly: a marketing team might start using AI for segmentation and content workflows, operations might focus on predictive maintenance, and HR might test tools for recruitment or onboarding. If we ask only one generic set of questions, we risk getting a shallow answer that reflects just the loudest use case in the company – not the real picture across functions. That’s why AI Impact SK is built around mini-surveys by business area, layered on top of a clear adoption-stage logic.
Why a single AI survey doesn’t capture reality
When surveys ask broad questions like “Does AI improve efficiency?” companies tend to respond from whatever they’ve seen most recently—often a pilot, a chatbot, or a single department’s initiative. But AI behaves differently depending on where it is applied. The goals, data needs, ownership, risks, and KPIs of AI in marketing are not the same as AI in operations, HR, finance, or security. If we want results that are comparable, actionable, and genuinely useful for Slovak companies, we need a structure that reflects that diversity—without turning the survey into an endless questionnaire.
Step 1: segment by adoption stage first
Before we go into any department-level questions, the survey first determines where the company stands in AI adoption. Companies are split into three groups based on the gateway question “Do you use AI in your company?” and then receive a tailored set of questions.
- Companies already using AI (including those testing or piloting) answer about real implementation, outcomes, and organizational changes.
- Companies planning AI answer similar questions, but framed around expectations, planned technologies, and timeframes.
- Companies not using and not planning AI answer about barriers, what would change their mind, and how familiar they are with AI solutions.
We also focused the sample on sectors, classified according to the NACE system, where AI adoption has the greatest economic impact according to Eurostat, since the level of AI use varies significantly across sectors.
This branching matters because it avoids forcing irrelevant questions and lets us compare “what companies expect” versus “what companies actually experience”.
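In code terms, the gateway works like a simple router. Below is a minimal sketch of that branching logic in Python; the answer codes and module names are illustrative assumptions, not the questionnaire’s actual wording.

```python
# A minimal sketch of the gateway branching described above (illustrative only).
# The answer codes and module names are assumptions, not the survey's wording.

GATEWAY_QUESTION = "Do you use AI in your company?"

# Hypothetical mapping from a gateway answer to the question module served next.
BRANCHES = {
    "using_or_piloting": "module_current_use",   # implementation, outcomes, org changes
    "planning":          "module_planned_use",   # expectations, technologies, timeframes
    "not_planning":      "module_non_adopters",  # barriers, familiarity, deal-changers
}

def route_respondent(gateway_answer: str) -> str:
    """Return the question module a company receives, based on the gateway answer."""
    if gateway_answer not in BRANCHES:
        raise ValueError(f"Unexpected answer to {GATEWAY_QUESTION!r}: {gateway_answer!r}")
    return BRANCHES[gateway_answer]

print(route_respondent("planning"))  # -> module_planned_use
```

Routing respondents up front keeps each company inside one coherent module, which is what makes the expectation-versus-experience comparison possible later.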
Step 2: define AI in a way respondents can answer consistently
“AI” can mean very different things to different respondents. To reduce ambiguity, the questionnaire asks companies to rate the degree to which specific technology types are used (from “not used” to “key component”). The list includes, for example, machine learning, NLP, computer vision, predictive analytics, RPA, chatbots/virtual assistants, and expert systems, with an option to add additional technologies.
This helps separate “we tried a chatbot” from “we embed predictive models in core processes” without making the survey overly technical.
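To show how such ratings separate a pilot from embedded use, here is a small sketch. Only the scale endpoints (“not used”, “key component”) come from the questionnaire; the intermediate labels and numeric codes are assumptions for illustration.

```python
# Encoding the technology-usage ratings for analysis (illustrative sketch).
# Only the endpoints "not used" and "key component" come from the questionnaire;
# the intermediate labels and numeric codes below are assumptions.

USAGE_SCALE = {
    "not used": 0,
    "testing / piloting": 1,      # assumed intermediate level
    "used in some processes": 2,  # assumed intermediate level
    "key component": 3,
}

TECHNOLOGIES = [
    "machine learning", "NLP", "computer vision", "predictive analytics",
    "RPA", "chatbots / virtual assistants", "expert systems",
]

# A made-up respondent who has only tried a chatbot:
answers = {tech: "not used" for tech in TECHNOLOGIES}
answers["chatbots / virtual assistants"] = "testing / piloting"

# Depth of adoption separates "we tried a chatbot" from "AI is a key component".
depth = max(USAGE_SCALE[rating] for rating in answers.values())
print("adoption depth:", depth)  # -> 1
```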
Step 3: capture the “why” and the “who” behind AI adoption
Adoption isn’t only about technology; it’s also about motivations and governance. The survey maps motivating factors using the TOE logic (technological, organizational, environmental) and also asks who acts as the decision-making authority (an individual specialist, a specific department such as IT/innovation/marketing, a cross-department process, or top management).
That’s essential if we want recommendations that land in the right place: the barriers and enablers look very different in a bottom-up experiment compared to a top-down strategic rollout.
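As a small illustration of how these two dimensions could be recorded together for analysis, here is a sketch; the field names, types, and example factors are assumptions, not the survey’s wording.

```python
# Storing motivations (TOE) and governance together (illustrative sketch).
# The TOE grouping and authority options reflect the survey design above;
# field names, types, and the example factors are assumptions.

from dataclasses import dataclass, field

@dataclass
class AdoptionContext:
    technological: list[str] = field(default_factory=list)   # e.g. tool maturity, data availability
    organizational: list[str] = field(default_factory=list)  # e.g. cost pressure, internal skills
    environmental: list[str] = field(default_factory=list)   # e.g. competition, regulation
    # One of: "individual specialist", "specific department",
    # "cross-department process", "top management"
    decision_authority: str = "unspecified"

ctx = AdoptionContext(
    environmental=["competitors adopting AI"],
    decision_authority="top management",
)
print(ctx.decision_authority)  # -> top management
```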
Step 4: measure impact through KPIs, not vibes
A key part of the design is comparing expected vs. actual impact on economic and operational indicators. Respondents evaluate the direction and magnitude of change (from significant decrease to significant increase) for indicators such as sales revenue, costs (materials/energy, personnel), operating profit, operating cash flow, CAPEX, labor productivity, turnover ratios (inventory, receivables, payables), decision-making speed, customer service quality, and business risk exposure. To understand whether companies are able to manage AI performance, the survey also asks whether these indicators are used as strategic KPIs (from “we do not use and do not plan to use it” to “a key KPI”).
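One way to see why both questions matter is to compute an “expectation gap” per indicator. The sketch below assumes a symmetric five-point coding from −2 to +2 and invented intermediate labels; the questionnaire itself only fixes the verbal endpoints.

```python
# Comparing expected vs. actual impact for one indicator (illustrative sketch).
# The questionnaire fixes the verbal endpoints; the symmetric -2..+2 coding
# and the intermediate labels are assumptions made for this example.

CHANGE_SCALE = {
    "significant decrease": -2,
    "slight decrease": -1,
    "no change": 0,
    "slight increase": 1,
    "significant increase": 2,
}

def expectation_gap(expected: str, actual: str) -> int:
    """Positive = outcomes exceeded expectations; negative = they fell short."""
    return CHANGE_SCALE[actual] - CHANGE_SCALE[expected]

# Made-up example for labor productivity:
print(expectation_gap("slight increase", "significant increase"))  # -> 1
```

Aggregated across respondents, this kind of gap is what lets the survey contrast what companies expect with what they actually experience.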
So why mini-surveys by business area?
Once the survey has a shared foundation (adoption stage, technologies, motivations, governance, impact measurement), we zoom into where AI actually lives: inside functions.
The questionnaire asks respondents to indicate, in two dimensions, where AI is:
- already integrated into processes, and
- planned for future adoption.
This “current vs. intended” view is the reason we call them mini-surveys: each business area has its own short module that reflects the reality of that function.
The functional areas covered include:
- Marketing & sales
- Production & operations (manufacturing)
- Administration & management
- Human resources
- Logistics
- ICT / information security
- Finance
- Research, development & innovation
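To make the two-dimensional view concrete, here is a minimal sketch of how the per-area answers might be encoded; the area keys mirror the list above, and the boolean encoding is an illustrative assumption.

```python
# The two-dimensional "current vs. planned" view per business area
# (illustrative sketch; the keys mirror the list above, the boolean
# encoding is an assumption).

AREAS = [
    "marketing_sales", "production_operations", "administration_management",
    "human_resources", "logistics", "ict_security", "finance", "rdi",
]

# A made-up respondent: default to "no AI, none planned", then override.
company = {area: {"integrated": False, "planned": False} for area in AREAS}
company["marketing_sales"] = {"integrated": True, "planned": True}
company["production_operations"] = {"integrated": False, "planned": True}

integrated_now = [a for a, v in company.items() if v["integrated"]]
expansion_next = [a for a, v in company.items() if v["planned"] and not v["integrated"]]
print("integrated:", integrated_now)  # where AI already lives
print("planned:", expansion_next)     # where it is headed next
```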
What this reveals in practice (a few examples)
In marketing & sales, AI often shows up as customer profiling, segmentation, personalization, dynamic pricing, demand prediction, sentiment analysis, and chatbots for customer support, plus automation across campaigns and CRM processes. These use cases are often fast to pilot, but they also come with trust, privacy, and fairness risks, and with a need to measure impact on customer experience, conversion, churn, and brand perception.
Manufacturing and operations often concentrate on predictive maintenance, process optimization, quality control through computer vision, inventory planning, digital twins, energy and resource monitoring, or real-time AI assistants (“copilots”). Here, the limiting factors are frequently infrastructure costs, data quality, and reliability—because false alarms or wrong predictions can directly impact downtime and output.
HR use cases typically include recruitment support, résumé screening, workforce planning, turnover prediction, onboarding and training personalization, and automation of HR processes. HR modules need to explicitly reflect concerns around discrimination, algorithmic bias, employee trust, depersonalization, and perceived monitoring—risks that are qualitatively different from, say, marketing experimentation.
The same logic applies to logistics, information security, finance, and R&D: each has different processes, different benefits, different risk profiles, and different KPIs worth tracking.
What about companies that don’t want AI at all?
This is where segmentation again matters. For companies that neither use nor plan AI, the survey focuses on barriers and conditions that might shift their decision in the next three years, including open-ended questions to capture concerns that a fixed list might miss.
It also tests familiarity with AI solutions (from “never heard of it” to “could assist with a pilot”), because in many cases resistance is driven by lack of information rather than a firm “no”. Finally, the survey checks which developments could realistically change the decision, such as:
- significant reduction in AI solution prices
- public subsidies or tax incentives
- “AI as a Service” offers with maintenance
- centralized/cleaned internal data after digital transformation, etc.
Results that lead to action, not just statistics
A function-specific structure turns survey results into something companies can actually use. Instead of “AI helps efficiency”, we can later talk about where it helps, under what conditions, which KPIs move, what risks appear, and who inside the company needs support to scale it responsibly. That’s the practical reason every business area gets its own mini-survey in AI Impact SK.
Stay with Us on the Journey
Follow AI-ImpactSK on LinkedIn