Like many, I’m already convinced by the potential of Generative AI to offer significant support to qualitative research; it can already act like a fairly convincing researcher by automating key aspects of research design, moderation, analysis and reporting. It can even act like fairly realistic participants by creating synthetic data.
This makes it easier for classically trained and novice researchers alike to do more projects, faster and with less investment, than ever before. Overall, I see this as a net positive for classically trained qual researchers, since they have the expertise to get more value from AI. It’s also an opportunity for people who want to do research but previously lacked the budget or expertise.
However, as with every decision a qual researcher makes, there’s always a trade-off. The more we rely on automation to do faster, larger-scale projects with less investment, the more we lose depth, rigour, nuance and tailoring. So clients and researchers need to be aware of the trade-offs, know when it is appropriate to rely more or less on AI tools, and price and resource projects appropriately.
What is 'qual light'?
So for me the first big impact of AI on the qual research industry is the emergence of what has at times been called ‘qual light’: projects tackling straightforward objectives that do not require as much nuance, depth or tailoring. The second big change, at a more macro level, is the democratisation of qual, as more people with less expertise start to offer qual research services. These changes present both risks and rewards to expert and novice qual practitioners alike, depending on whether and how they use the tools. They highlight how critical it is that researchers popularise best-practice principles to avoid the kind of misuse of qual that could damage the industry’s reputation.
Is this a race to the bottom?
I believe a new wave of lighter qual briefs (with smaller budgets, faster timelines and simpler reporting) will coexist alongside the more nuanced, complex, traditional qual briefs, which require more hands-on, human-led, deeper approaches.
Those “lighter qual” briefs can be answered with more AI-reliant approaches, simple research designs and straightforward lines of questioning, resulting in a more reportage-style summary output. This doesn’t have to be a race to the bottom, as some clients will always need the more bespoke, nuanced, human-reliant work that requires more investment to deliver properly, for example when dealing with complex concepts, subtle creative or deeper emotional models of humanity.
When is 'good enough' good enough?
This is similar to what we have seen in other industries such as web design, where DIY platforms like Squarespace and Wix offer simple templated websites that are often good enough and come at a fraction of the time and cost.
If AI-led lighter projects can offer ‘good enough’ qual that informs ‘good enough’ decisions at a ‘low enough’ price - then there will be a market for it.
If qual researchers want to be part of this lighter qual market (which presents obvious upsell potential for deeper, traditional qual briefs) then they should embrace AI tools and develop their own lighter qual products. AI-led summaries may be enough to answer simple briefs: for example, objectives to develop basic hypotheses about the range of behaviours, attitudes and emotions around a straightforward topic, or to gauge basic responses to executional details in simple stimulus (e.g. colour preferences), perhaps to inform the development of survey questions or to inspire a deeper study design.
AI vs 6 million years of evolution
The emergence of ‘qual light’ does not have to be bad news for qual researchers. These lighter versions of qual that lean more on AI will still benefit from an expert human in the loop acting as a gatekeeper of quality: directing AI design suggestions, guiding AI probes and asking the right questions of AI summary tools in order to curate strategically valuable and valid narratives and recommendations.
Thanks to over six million years of evolution and many hours spent doing qual research, it is difficult for algorithms to outperform an expert human researcher when it comes to empathy, storytelling, creativity, cultural sensitivity, tailoring, intuition and strategic interpretation. So qual researchers are well placed to offer these lighter qual projects.
The democratisation of qual
The second big change is the democratisation of qual. I think we’ll see more consultants offering qual research services even though they aren’t necessarily classically trained: management consultancies, design, innovation and marketing agencies, as well as in-house client teams. More people without classical training or hands-on experience will be able to do more qual work. This could be a risk to the industry’s reputation if the work is over-reliant on AI, lacks validity and nuance, and so leads to poor outputs and bad decisions. This is similar to what has happened with other democratising technologies that were once the preserve of experts; you don’t have to look far to find dodgy DIY surveys designed on SurveyMonkey, wonky websites built with Wix or skewed interfaces made in Squarespace. So expert qual researchers will still be highly valued in a democratised qual research market.
The Uber paradox
At the risk of mixing metaphors, I think many qual researchers see AI as something of a double-edged sword of Damocles: on the one hand, AI offers us efficiencies; on the other, it seems like an existential threat. We could call this the “Uber paradox”: Uber drivers use and benefit from the Uber app in the short term, despite the company’s stated long-term ambition to eventually replace them with autonomous self-driving vehicles. In a similar way, the more we use AI tools, and the better they get at qual, the more it feels like they could eventually replace us. However, I think we need to get over this fear and embrace change. As AI improves, we can improve with it.
The farmer and the plough
That foreboding feeling towards AI is born from a distrust of the unknown in general, and of automation in particular, rather than from reality. This distrust of automation is deeply ingrained in our culture and psyche and goes back generations, all the way to the industrial revolution and beyond. While the Luddites of the 19th century who sabotaged machines of production were heroes to some, the term has become pejorative in a global economy driven by innovation. I don’t think we’re heading into a future of autonomous robot quallies doing all the work; AI is just another tool that practitioners need to master, like the farmer and the plough before it.
Rather than seeing this as a threat, I think trained qual researchers, as true experts, will be able to get more value from these tools than novices can. Just as professional photographers can take better pictures with an iPhone than a non-professional can, and Jimi Hendrix could knock out a better tune on a ukulele than I ever could on a Fender Stratocaster (if I knew how to play guitar).
Quallies can get the most from AI
Some compare AI to the printing press in its radical democratising impact. At the time, the printing press was opposed by religious leaders who feared it would make the monks who copied religious texts by hand lazy, and that they would lose control over the dissemination of knowledge. AI could present similar risks if used badly: it could encourage shortcuts and errors, and therefore the dissemination of unreliable findings. Because the brain likes to conserve energy, people may be tempted down the path of least resistance when using AI.
But qual researchers are a diligent breed by nature. Authenticity is the currency of our craft. We understand best practice and how to unearth authentic insight with rigour. So while AI might democratise qual just as the printing press democratised media production, the risk of error and superficiality from over-relying on AI-powered qual is lower when trained qual researchers are at the helm. In this way, I see AI as more akin to laser eye surgery: the better your understanding of the human eye and of surgery with scalpels, the better you’ll be at using the laser technology. The advent of laser eye surgery didn’t destroy the careers of experts who used scalpels for eye surgery; many of them retrained and still applied those same skills, albeit with a different toolkit.
Overall, then, AI can be an opportunity for qual researchers who embrace it, since we are the ones who can get the most value from it compared to non-experts, as long as we maintain and promote the high levels of discipline required to do authentic qualitative research, which will help protect the reputation of the craft.