Note from John-Carlos:

Four years ago, when we started exploring how AI could transform research at maslansky + partners, I kept hearing the same two reactions: “This will never be smart enough” and “This will replace everything.”

Both were wrong. But both have a place in the conversation.

Today, as Director of Product + Innovation at m+p, I’ve been involved in every stage of building CHORUS, our suite of synthetic research methodologies. I’ve seen firsthand what works, what doesn’t, and what separates rigorous AI research from performative guesswork.

Here’s what I’ve learned: Synthetic research isn’t magic. It’s not snake oil, either. It’s a tool that can deliver insights faster, more affordably, and with audiences that were previously out of reach. But like any tool, it works best when you know how and when to use it.

The research projects that will win aren’t the ones that blindly adopt AI research or stubbornly avoid it. They’re the ones that ask the right questions first.

The Current Landscape: Hype, Fear, and Opportunity

Synthetic research is having a moment. AI-powered audience simulations promise to deliver insights in days instead of weeks, at a fraction of the cost, with access to audiences that traditional research can’t reach.

But the noise is loud. Some vendors are overselling capabilities. Some researchers are dismissing the entire category. And a lot of communicators are stuck in the middle, unsure whether to lean in or stay away.

Here’s the real risk: it isn’t whether you use synthetic research. It’s jumping in without asking the right questions first.

I’ve seen companies rush into it with unproven methodologies that deliver results as reliable as a coin flip. I’ve also seen teams avoid it entirely based on outdated assumptions, missing opportunities to move faster and smarter.

The truth is more nuanced. Synthetic research has a real place in the research toolkit – but it’s not right for everything. And not all synthetic research is created equal.

Before you commit to it – or write it off entirely – here are five steps that will help you be successful.

1. Consider the Novelty or Complexity of Your Topic

Not all research challenges are created equal.

If you’re testing something entirely new – a concept that’s never been in market, a deeply controversial topic, or something that relies on yesterday’s breaking news – you may need the immediacy and reliability of real human reactions, regardless of cost. Synthetic research has come a long way, but human insight is still irreplaceable in certain contexts.

The good news is that most research isn’t exploring uncharted territory. Most of the time, you’re refining, iterating, and optimizing within known spaces. You’re testing variations of existing messaging. You’re diagnosing why one message performs better than another. You’re exploring how different audiences interpret the same language.

In those cases? Synthetic research can deliver fast, reliable insights.

The key insight:

The question isn’t “Is synthetic research as good as human research?” It’s “Does the challenge require real people reacting to something completely new or controversial? Or can we get what we need faster and more affordably with a rigorous synthetic approach?”

If your challenge falls somewhere in the middle – novel but not unprecedented, complex but not entirely unexplored territory – consider a hybrid approach. Use synthetic research to explore early, then validate with human research. Or do human research first, then use synthetic to scale and iterate.

Ask yourself:

Are we testing something brand new, or refining something already in market?

How much cultural or emotional nuance is involved?

Would a hybrid approach give us the best of both worlds?

2. Identify Where Validation Is Most Important

What level of certainty do you need to take action?

Every research challenge sits somewhere on a spectrum. At one end, you’re finalizing a major rebrand, navigating a recent crisis, or crafting messaging for a very controversial topic. These are moments where you need deep confidence in your insights before you commit. That doesn’t automatically rule out synthetic research. But it does mean you need a higher bar for rigor, validation, and expertise.

At the other end, you’re in the early stages of exploration—testing initial concepts, iterating on messaging, or refining language that’s already in market. Here, synthetic research can give you fast, actionable feedback without the time and cost of traditional methods. It can help you move forward with the confidence you need at that stage.

The key insight:

Confidence isn’t binary. Needing high confidence doesn’t mean “never use synthetic.” It means you need proven methodologies, validation against real-world benchmarks, and experts who know how to interpret the results.

And needing to move quickly doesn’t mean you compromise on quality. Even exploratory research should be rigorous. The real question is: What level of validation do you need to feel confident taking the next step?

Ask yourself:

How final is this decision? Are we exploring or executing?

What would give us the confidence to move forward?

Do we need to validate once, or iterate multiple times before we’re ready?

3. Rigorously Define Your Audience

Some audiences are easy to reach. Others? Difficult and expensive.

If you need feedback from policy leaders, investment analysts, medical specialists, or C-suite executives, traditional research in the exploratory phase quickly becomes too expensive – if you can manage to recruit them at all.

This is where synthetic research shines.

By building high-fidelity audiences using agent-based modeling—not just generic LLM personas—you can test language with the people who matter most to your campaign, even when they’d normally be out of reach. You can simulate how pension fund managers react to ESG messaging, or how state policy leaders interpret healthcare reform language, without the logistical nightmare of traditional recruitment.

The key insight:

Synthetic research opens doors that were previously closed. But the quality of your insights depends entirely on the quality of your audience model. A well-constructed synthetic audience – built on real-world data, validated against human samples, and designed with complex attributes – can deliver reliable insights. A poorly constructed one may be barely more accurate than a coin flip.
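
To make “validated against human samples” concrete, here is a minimal sketch of the kind of side-by-side check a rigorous provider might run. The scores, thresholds, and decision rule below are illustrative assumptions, not a description of how any particular product (including CHORUS) does it.

```python
# Minimal sketch: comparing a synthetic panel to a human benchmark sample.
# All numbers are hypothetical; in practice both panels would score the
# same set of messages in parallel studies.
import numpy as np

# 0-100 persuasion scores for the same six messages from each method.
synthetic_scores = np.array([62, 71, 55, 48, 80, 66], dtype=float)
human_scores = np.array([58, 74, 52, 51, 77, 69], dtype=float)

# Two simple agreement checks: do the methods rank the messages the same
# way, and how far apart are the absolute scores on average?
r = float(np.corrcoef(synthetic_scores, human_scores)[0, 1])
mae = float(np.mean(np.abs(synthetic_scores - human_scores)))

print(f"Correlation with human sample: r = {r:.2f}")
print(f"Mean absolute score gap: {mae:.1f} points")

# Assumed decision rule (the thresholds are illustrative, not a standard):
if r >= 0.8 and mae <= 5:
    print("Agreement is strong; the synthetic audience looks usable here.")
else:
    print("Agreement is weak; lean on human research for this audience.")
```

The specific thresholds aren’t the point. The point is that agreement with real human samples is something a vendor should be able to show you, not just assert.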

Ask yourself:

How hard is it to recruit this audience in the real world?

What’s the cost and timeline for traditional research with this group?

Do we need access to niche segments or hard-to-reach decision-makers?

4. Pick the Right Methodology for the Challenge

Here’s the uncomfortable truth: Not all synthetic research will produce the right answers.

Asking ChatGPT what “consumers think” is not research. Building a single persona and treating it as representative of millions of people is not research. Running a quick AI simulation without validation, benchmarks, or expertise is not research.

It’s guessing. It’s guessing with the letters “AI” in front.

Real rigor in synthetic research requires:

Expertise in research and language

Technology is only as good as the people using it. The best synthetic research is guided by researchers who know how to design studies, interpret results, and translate insights into action. Before AI, did you design your own studies, or did you rely on expert guidance? The same standard should apply now.

Validation against real-world data

Methodologies should be tested side-by-side with human samples to ensure accuracy and reliability.

Normative benchmarks

Without context, a score is just a number. Rigorous synthetic research compares results to category norms, so you know whether your message is strong, weak, or somewhere in between.

Agent-based modeling (ABM)

Not just LLM personas. ABM generates thousands of individual agents, each with unique attributes that reflect the variability of real-world samples. This allows you to analyze differences between segments—Democrats vs. Republicans, millennials vs. boomers—just like you would in traditional research.
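
To show the difference between a single LLM persona and an agent-based panel, here is a minimal sketch. The attribute weights, the toy scoring function, and the category norm are all illustrative assumptions; a real implementation would draw attributes from survey data and generate each agent’s reaction with a language model rather than a formula.

```python
# Minimal sketch of the agent-based idea: thousands of agents with varied
# attributes, scored individually, then analyzed by segment.
import random
from statistics import mean

random.seed(7)

PARTIES = ["Democrat", "Republican", "Independent"]
GENERATIONS = ["Gen Z", "Millennial", "Gen X", "Boomer"]

def build_agent():
    """One agent = one simulated respondent with its own attribute mix."""
    return {
        "party": random.choices(PARTIES, weights=[0.33, 0.33, 0.34])[0],
        "generation": random.choices(GENERATIONS, weights=[0.17, 0.28, 0.27, 0.28])[0],
        "risk_aversion": random.random(),  # a latent trait that shifts reactions
    }

def score_message(agent):
    """Toy stand-in for an LLM-driven reaction: a 0-100 persuasion score."""
    base = 60.0
    if agent["party"] == "Republican":
        base -= 8.0
    if agent["generation"] in ("Gen Z", "Millennial"):
        base += 5.0
    base -= 10.0 * agent["risk_aversion"]
    return max(0.0, min(100.0, base + random.gauss(0, 6)))

# Generate a panel of agents and score each one individually.
panel = [build_agent() for _ in range(5000)]
scores = [(agent, score_message(agent)) for agent in panel]

# Segment-level read-outs, just like the cuts in a traditional survey.
for party in PARTIES:
    seg = [s for a, s in scores if a["party"] == party]
    print(f"{party:<12} n={len(seg):<5} mean score={mean(seg):.1f}")

# Compare the overall result to a (hypothetical) category norm.
CATEGORY_NORM = 58.0
overall = mean(s for _, s in scores)
print(f"Overall {overall:.1f} vs. category norm {CATEGORY_NORM:.1f}")
```

Because each agent carries its own attributes, you can cut the results by party, generation, or any other variable you modeled, and compare the overall score to a norm, just as you would with a traditional survey.
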
The key insight:

The question isn’t whether to use AI. It’s whether your methodology has been proven to work. If a vendor can’t explain how they validate their approach, how they build audiences, or how they benchmark results—walk away.

Ask yourself:

How does the vendor build synthetic audiences? (Single persona? Agent-based modeling?)

Has the methodology been validated against real-world research?

Can they provide normative benchmarks and context for interpreting results?

Do they have research expertise, or are they just a tech platform?

5. Determine the Research Sequence

Here’s where a lot of people get stuck. They think of synthetic research as either a full replacement for traditional research or something irrelevant to it.

That’s a false choice.

The best research strategies often combine traditional and synthetic methods. Use synthetic to explore early-stage ideas, then validate top performers with human research. Or start with traditional research to establish a baseline, then use synthetic to scale, iterate, and test variations.

Synthetic research doesn’t have to replace your existing approach. It can enhance it – giving you speed when you need it, access to audiences that were once out of reach, and flexibility to iterate as your needs change.

The key insight:

Synthetic and traditional research aren’t competitors. They’re complements. The most effective teams use both strategically.

Ask yourself:

Do we need to replace our current research approach, or enhance it?

Could we use synthetic research for early exploration and traditional research for final validation?

Are there parts of our audience or process where synthetic would add the most value?

How CHORUS Navigates These Steps

At maslansky + partners, we built CHORUS to capture the opportunity synthetic research offers – grounded in proven methods and real expertise.

Match the confidence you need

CHORUS Solo is great for early exploration. It simulates your target audience to test ideas quickly when you’re still figuring things out. CHORUS Survey delivers structured, quantitative results with benchmarks when you need stronger validation to make a final decision. And if you need both synthetic and traditional research working together, CHORUS Hybrid can do that too.

Reach the audiences that matter

Policy leaders. Healthcare professionals. Investors. CHORUS lets you test language with audiences that are hard or expensive to reach through traditional research. We’ve built realistic synthetic audiences on decades of primary research data and language insights. They’re not simple AI personas – they’re trained on real data to reflect how real people respond to language.

Use proven methods, not guesswork

Most synthetic research lacks real rigor. CHORUS is different: it’s built from the ground up with the rigor of our Persuasive Strength methodology. We’ve done the work to validate our approach against dozens of real human samples. And with CHORUS Survey, we measure the Persuasive Strength of your language against category benchmarks – so you know whether your message will actually move your audience, not just how it compares to your other options.

Complement what you’re already doing

CHORUS doesn’t have to replace your current research. Use it to explore early, then validate with traditional methods. Or use it alongside your existing research to test more audiences or ask follow-up questions with CHORUS Hybrid. CHORUS Community even lets you build an always-on audience you can return to over time.

Get Language Strategy expertise

Technology alone isn’t enough. CHORUS is guided by the team that invented Language Strategy – experts who have spent decades finding the exact right words that make audiences listen, care, and act. You get more than just the data. We tell you what to say, what not to say, and why it matters.

The result? Synthetic research that’s fast, credible, and built to help you win.

The Bottom Line

Synthetic research is powerful. But it’s not for everything. And not all synthetic research delivers on its promise.

The teams that will win are the ones who approach it strategically – asking the right questions, demanding rigorous methodologies, and knowing when to use synthetic, traditional, or hybrid approaches.


To learn more about CHORUS:


John-Carlos Saponara is Director of Product + Innovation at maslansky + partners, where he leads the development of CHORUS and all enterprise AI initiatives. Over the past four years, he has been instrumental in building synthetic research methodologies that combine cutting-edge AI technology with proven Language Strategy expertise.
