Let me be honest: AI in market research feels a bit like when we first got dial-up internet in the ’90s. We knew it was going to change everything, but we also spent way too much time debating whether it was safe to buy things online. Spoiler alert: it was, and now we order groceries from our phones while wearing pyjamas.
After watching our industry grapple with AI for the past few years, I have learned that most of our anxiety stems from not understanding the basics. So, here is what I have observed, without the corporate fluff.
The “Is It Safe?” Question (Spoiler: It Depends)
Remember when your parents told you not to get in cars with strangers, then Uber became a thing? AI security follows similar logic. The question is not whether AI is safe; it is about knowing which version you are using and what you are sharing.
Here is the reality: free ChatGPT is like shouting your ideas across a crowded mall. Great for brainstorming your next campaign theme, terrible for discussing proprietary client data. Paid enterprise versions are more like having a private conversation in a soundproof booth: still not perfect, but significantly more secure.
The rule that should be adopted (and honestly, I wish more people would embrace it): if it is not yours, do not share it. Client data belongs to clients. Your company’s strategic insights belong to your company. Pretty straightforward, yet convenience has a pull of its own: pasting everything into a free chatbot is the drive-through of workflows, while pausing to scrub the data first is shopping for groceries and cooking your own meal. The trend toward convenience is a real thing.
The ‘Test Kitchen’ Approach
One of the best pieces of advice I have heard is to treat AI like a test kitchen. Experiment with innovative ideas in open environments, refine them in secure spaces, then validate by going back to the open world with sanitized versions. It’s similar to how we used to burn CDs; you would evaluate the mix, perfect it, and then share the final cut (while keeping the original safe).
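For teams that want to make the "sanitize before sharing" step concrete, here is a minimal sketch of what scrubbing a prompt might look like before it leaves your secure environment. The client list and patterns are hypothetical placeholders, not a complete PII scrubber; a real implementation would need far broader coverage.

```python
import re

# Hypothetical client names to redact; in practice this list would
# come from your own account records.
CLIENT_NAMES = ["Acme Corp", "Globex"]

def sanitize(text: str) -> str:
    """Replace obvious identifiers with placeholders before text is
    shared with an external AI tool. A starting point, not a guarantee."""
    # Known client names (case-insensitive)
    for name in CLIENT_NAMES:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    # Email addresses
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    # Dollar figures (budgets, revenue, fees)
    text = re.sub(r"\$\s?\d[\d,]*(\.\d+)?", "[AMOUNT]", text)
    return text

print(sanitize("Acme Corp's Q3 budget is $1,250,000; contact jane@acme.com."))
# → [CLIENT]'s Q3 budget is [AMOUNT]; contact [EMAIL].
```

The point is the workflow, not the regexes: refine with real data inside the soundproof booth, then run the sanitized version through the open tools.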
This prevents the echo chamber effect, where your internal AI becomes as isolated as that friend who only watches Netflix recommendations and wonders why everything feels the same.
Beyond the Administrative Stuff
Yes, AI can transcribe meetings and set agendas. But that is like using a smartphone only to make calls: technically correct but missing the point entirely.
The real value comes from treating AI as a colleague who never gets tired, does not take things personally, and always has time to brainstorm. But here is the catch: not all AI colleagues are created equal. Using the wrong AI for the wrong task is like asking your artsy friend to fix your computer: you might get creative solutions, but probably not the ones you need.
I have started thinking about it this way: Excel manages the counting, AI oversees the “what if” scenarios. Different tools, different strengths.
The Power of the Prompt (And Why Your Experience Matters More Than Ever)
Here is something that took me a while to realize: the quality of AI output is not really about the AI; it is about the person asking the questions. Your years of experience, industry knowledge, and understanding of context are what make AI useful.
Think of it like this: AI is incredibly good at providing answers, but it has no idea whether those answers make sense in your specific situation. That is where your expertise becomes invaluable. You need to know enough about your industry, your clients, and your data to craft questions that will generate meaningful responses. More importantly, you need the experience to recognize when AI is giving you solid insights versus when it is essentially playing truth or dare with your business decisions.
Truth responses are grounded in logic, backed by recognizable patterns, and aligned with industry knowledge you can verify. Dare responses sound impressive but push you toward conclusions that could be risky if you do not validate them first. The difference is not always obvious, which is why your professional judgment becomes more critical, not less.
This evolution of our work is fascinating. We can now move faster than ever before, generating multiple analysis scenarios and exploring ideas at speeds that would have taken weeks in the past. But this speed comes with a responsibility to think more critically about what we are getting. The human experience is not being replaced; it is becoming more sophisticated. We are evolving from information gatherers to information validators and strategic interpreters.
Understanding What We’re Actually Working With
Current AI is essentially a very sophisticated autocomplete. It is like that friend who finishes your sentences but sometimes gets it completely wrong because they are guessing based on what usually comes next, not what you meant to say.
The newer reasoning models are different; they walk through their thinking process, kind of like showing their work on a math test. These are the ones worth paying attention to because they are starting to think more like humans and less like very clever parrots.
The Transparency Thing (It Is Not Going Away)
Clients are getting smarter about AI, which means we need to get better at explaining how we use it. This is not about legal coverage; it is about maintaining trust in an industry built on credibility.
I have started including a simple “How We Work” section in proposals that explains our AI usage the same way we would define any other methodology. Turns out, most clients appreciate the honesty and want to understand how these tools enhance our work.
And here is something I did not expect: sustainability concerns are becoming real. Running AI for everything is like leaving all your electronics plugged in; it adds up. If you are not going to use those meeting transcripts, maybe skip the AI note-taker.
The Speed of Change (Buckle Up)
Remember how quickly we went from Blockbuster to Netflix to “Netflix and chill” becoming a cultural phenomenon? AI development is moving faster than that. New capabilities appear monthly, not yearly.
This creates a dilemma: move too fast and risk security issues, move too slow and risk irrelevance. The market researchers I respect most are taking a “learn while doing” approach, implementing thoughtfully but consistently, rather than waiting for the perfect solution.
What Actually Works
After watching various approaches succeed and fail, here is what I have noticed works:
Start with tedious tasks. Let AI manage the administrative work while you figure out its quirks. Once you trust it with scheduling, you will feel more comfortable asking it to analyze data patterns.
Treat it like training a new team member. Be patient, provide context, and do not expect perfection immediately. The AI assistants that work best are the ones that have been “taught” a company’s style and preferences over time.
Keep humans in the loop. AI can suggest, analyze, and generate ideas, but humans still need to interpret, strategize, and maintain client relationships. We are not being replaced; we are being augmented.
The Bottom Line
AI in market research is not about replacing human insight—it is about amplifying it. The researchers who will thrive are those who learn to work with these tools thoughtfully, not those who either embrace them blindly or avoid them entirely.
We are still in the preliminary stages of figuring this out, and that is okay. The key is staying curious, being transparent with clients, and remembering that our value has always been in connecting data to human understanding. AI gives us better tools to make those connections.
The future belongs to researchers who can leverage AI’s capabilities while maintaining the critical thinking, ethical standards, and genuine human connection that define outstanding market research.
And honestly? That sounds like an exciting future to me.

From the desk of Shonna Caldwell.