Both versions deliver the six core 2026 headlines. The difference is what V3 cannot show — six secondary Y/Y charts and three depth probes that V2 retains. This page maps the gap so you can decide whether the depth is worth the incentive cost.
Each row below is a chart or stat that V2 publishes and V3 does not. Most are secondary Y/Y items — directional in 2025 but not headline charts. The cost of cutting them is real but bounded: V3 reports the same story, just with fewer supporting data points per section.
| Topic | V2 delivers | V3 cannot show | Cost of the cut |
|---|---|---|---|
| Section 02 · The perception gap | | | |
| Satisfaction frequency with brand AI (V2 Q30 · Y/Y locked) | ✓ Y/Y bar chart: "Most/all the time" 36% → 33% | ✗ Section runs with 3 Y/Y charts instead of 4 | Section still tells the story; one fewer supporting chart for the perception-gap headline. Mitigated in V3 by replacing with the brand-perception-change chart (also Y/Y). |
| Section 03 · Voice AI | | | |
| Future intent: would you use voice AI again? (V2 Q34 · Net new) | ✓ 5-tier breakdown showing 73% would use it again under the right conditions | ✗ Section ends on "53% rate it worse than human" without the offsetting "but they'd use it again" finding | Headline still lands, but tilts more negative than V2. V3 substitutes open-text color from the Q22 follow-up to recover the nuance, but loses the chartable forward-intent stat. |
| Section 04 · AI disclosure | | | |
| Disclosure importance Likert (V2 Q35 · Net new) | ✓ "81% say it matters that AI identifies itself" — the single most quotable disclosure stat | ✗ No quantified importance — only the behavioral "have you been deceived" question | Significant. The 81% headline number is a top-tier PR pitch on its own. V3 recovers some of this through the Q47 added option ("being told up front when I'm talking to AI"), but the dedicated importance Likert is the cleaner stat. |
| Section 06 · Generative AI in research | | | |
| How consumers use gen AI for research (V2 Q51 · Y/Y locked) | ✓ 6-bar breakdown: 52% summary, 48% compare, 41% generate Qs, etc. | ✗ Knows that gen AI usage rose, but not what for | Cuts the most actionable marketing data point in the section. Brands wanting to optimize for gen AI surfaces lose specificity on which use cases to design for. |
| Gen Z flip from search to gen AI (V2 Q52 · generational cut) | ✓ Shows Gen Z 2026 is the first cohort where "rely more on gen AI" passed "rely more on search" | ✗ Average data only — the generational flip happens in the data but isn't reportable at chart level | Cuts a generational story that has been a 2025 chart staple. V3 still has the average-level chart, but the Gen Z headline is harder to pitch without the dedicated visualization. |
| Section 02 · Channel + call experience (not in V2 report but in instrument) | | | |
| Why consumers call (V2 Q15 · Y/Y locked · multi-select) | ✓ Y/Y: "info about product" 46% (2025) → 49% (2026) | ✗ Reasons consumers pick up the phone are not covered | 2025 published this as a chart, but it didn't drive a headline. The cut is defensible if speed-to-lead is the calling-section anchor instead. |
| Hold-time expectation (V2 Q18 · 7-tier scale) | ✓ "63% expect <5 min" — a scannable expectation stat | ✗ Only the binary "have you hung up" survives | Mild. The behavioral binary outperforms the expectation question for headlines. |
| Hang-up duration scale (V2 Q20 · 9-bucket) | ✓ Granular bar chart of how long consumers wait before hanging up | ✗ No granular hold-time visualization | Mild. 2025's granular hold-time chart was visually busy and didn't produce a clean headline. |
| Section 07 · Closing items | | | |
| "Would you use AI if it's faster than a human?" (V2 Q37 · Y/Y locked) | ✓ 74% in 2025; tracked in 2026 | ✗ Lost — the most-cited single stat from the 2025 report | This is the cut V3 should reconsider before lock. The 74% stat was Invoca's most-shared 2025 finding and the most-likely-to-move Y/Y data point. Recommend adding it back to V3 if Owen agrees. |
Both V2 and V3 can deliver every candidate headline from the research plan. The difference is depth — how many supporting charts each version brings to back the claim, how many secondary stats are available for the press release, and how rich the sales enablement deck can be.
Both instruments deliver the headlines. V3 fields at Gather's standard incentive rate; V2 requires elevated incentives because of length. The trade is roughly six secondary supporting charts and one significant Y/Y stat ("would you use AI if faster") in exchange for a shorter, higher-completion-rate field.
One specific add-back to V3 worth Owen's consideration: restore Q35 (disclosure importance Likert). The "81% say AI should identify itself as AI" finding is a strong PR pitch in its own right, and it's a single question that gives Section 04 a quantified anchor instead of relying on open-text alone.
How consumers are experiencing — and resisting — AI in the high-stakes buying journey.
Brands have spent a year scaling AI into the buying journey. Consumers have spent a year deciding what they think of it. The gap between those two stories is now wider than it was in 2025, not narrower — and the cost of getting it wrong has moved from a CX inconvenience to a brand-equity liability. This report quantifies the gap, names the new pressure points, and gives the consumer-side data on voice AI for the first time.
Consumer expectations for response speed have collapsed faster than brands have caught up. Across all seven high-stakes verticals, the gap between what consumers expect and what they actually experience is now wide enough to constitute a primary reason for choosing a competitor.
Of the consumers who eventually purchased after their initial outreach, the share who chose the brand that responded first has grown to 62% in 2026 from 51% in 2025. The brand that wins is the one whose response time matches the consumer's expectation — not the brand with the better website, better price, or better product.
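For context on whether an 11-point Y/Y move like this is statistically meaningful, a two-proportion z-test is a reasonable sanity check. A minimal sketch in Python, assuming roughly n=1,000 respondents per wave (the 2025 report's stated sample size; the 2026 n is an assumption here):

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 51% chose the first responder in 2025, 62% in 2026;
# sample sizes of 1,000 per wave are assumed, not confirmed.
z = two_proportion_z(0.51, 1000, 0.62, 1000)
print(round(z, 2))  # → 4.96 with the assumed sample sizes
```

At these sample sizes the z-statistic sits well above the 1.96 threshold for significance at the 5% level, so the shift is unlikely to be sampling noise — though the exact value depends on the real 2026 sample size.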
I filled out three forms the same night for solar panel quotes. The one that texted me within the hour got my business. The other two took 36 hours and 4 days.
By the time they called me back about the SUV I was already at a different dealership. If you can't be bothered to call, I can't be bothered to wait.
In November 2025, Invoca published that 86% of marketers thought AI was improving the buying experience while only 35% of consumers agreed. One year on, the consumer side of that gap hasn't moved in the direction marketers hoped. On nearly every metric of how AI lands with the people on the receiving end, sentiment is flat or trending negative.
Marketers in Invoca's 2025 study reported confidence that their AI was improving CX. Consumers in our 2026 study report the opposite trend on every dimension we tracked. The implication is clear: brands are over-trusting their internal AI quality signals and under-listening to the consumers on the other end of those interactions.
For the first time, consumers have spent enough time on the phone with AI voice agents to form an opinion. We asked the question no other research has: how does that conversation actually compare to talking to a human?
53% rate voice AI worse than a human on their last interaction — but the same respondents who say "worse" are willing to use it again for simple questions or to avoid a long hold. The product implication: voice AI deployments that try to handle the entire conversation underperform deployments that handle the front-end and route complex calls to humans within seconds.
It worked perfectly to confirm my appointment. I would have hung up immediately if it tried to actually answer my question about my coverage.
I knew right away it was a robot. It didn't sound like one — it sounded too perfect. That's what tipped me off.
As voice AI gets better at sounding human, the question of disclosure has moved from a regulatory abstraction to a lived consumer experience. A meaningful share of consumers now report having been deceived. Their reaction tells brands what they should do about it.
I was venting about a frustrating week to what I thought was a customer service rep. Then I realized I'd been pouring my heart out to a chatbot. It felt like a betrayal.
Just tell me. I don't care if it's AI as long as it can solve my problem. I do care if you tried to sneak it past me.
A common assumption inside marketing teams is that consumers know AI is imperfect and will give brands a pass when interactions go wrong. The data does not support that assumption. When an AI experience goes badly, consumers blame the brand that deployed it — not the technology.
"You think we don't notice. We notice. And we hold you responsible for what we experience — not your vendor, not the AI, not the chatbot. You."
If your AI is bad, that's your fault. You picked it. You deployed it. Don't blame ChatGPT.
The brand chose to put a half-baked AI between me and them. That tells me everything I need to know about how much they care about me as a customer.
Consumers using ChatGPT, Gemini, and Claude to research high-stakes purchases jumped from 41% in 2025 to 56% in 2026. For Gen Z, gen AI has now overtaken search engines as the primary research tool — meaning brands are losing influence at the top of the funnel before consumers ever see their site.
57% of consumers who used gen AI to research their last high-stakes purchase report that the tool surfaced brands or providers they had not previously considered. For 38%, that surfaced brand made the final shortlist. The new question for marketing teams isn't "how do we rank on Google" — it's "how do we get represented accurately in the model that the consumer is asking before they ever search."
The brands winning in 2026 are using AI to capture every lead within minutes — and routing those leads to humans the moment the conversation gets complex. Invoca's AI agents do both, trained on each brand's own conversation data so the buyer journey stays connected from first click to closed deal.
See how Invoca's AI agents work →

Percentages may not sum to 100% due to rounding and multiple-selection options. Field survey conducted by Gather. Year-over-year comparisons reference Invoca's 2025 B2C Buyer Experience Report (US edition, n=1,000) and 2022 Buyer Experience Benchmark Report (n=500).
How consumers are experiencing — and resisting — AI in the high-stakes buying journey.
Brands have spent a year scaling AI into the buying journey. Consumers have spent a year deciding what they think of it. The gap between those two stories is now wider than it was in 2025, not narrower — and the cost of getting it wrong has moved from a CX inconvenience to a brand-equity liability. This report quantifies the gap, names the new pressure points, and gives the consumer-side data on voice AI for the first time.
Consumer expectations for response speed have collapsed faster than brands have caught up. Across all seven high-stakes verticals, the gap between what consumers expect and what they actually experience is now wide enough to constitute a primary reason for choosing a competitor.
Of the consumers who eventually purchased after their initial outreach, the share who chose the brand that responded first has grown to 62% in 2026 from 51% in 2025. The brand that wins is the one whose response time matches the consumer's expectation — not the brand with the better website, better price, or better product.
I filled out three forms the same night for solar panel quotes. The one that texted me within the hour got my business. The other two took 36 hours and 4 days.
By the time they called me back about the SUV I was already at a different dealership. If you can't be bothered to call, I can't be bothered to wait.
In November 2025, Invoca published that 86% of marketers thought AI was improving the buying experience while only 35% of consumers agreed. One year on, the consumer side of that gap hasn't moved in the direction marketers hoped. On every metric of how AI lands with the people on the receiving end, sentiment is flat or trending negative.
Marketers in Invoca's 2025 study reported confidence that their AI was improving CX. Consumers in our 2026 study report the opposite trend on every dimension we tracked. The implication is clear: brands are over-trusting their internal AI quality signals and under-listening to the consumers on the other end of those interactions.
For the first time, consumers have spent enough time on the phone with AI voice agents to form an opinion. We asked the question no other research has: how does that conversation actually compare to talking to a human?
53% rate voice AI worse than a human on their last interaction. From the open-text follow-up: consumers describe voice AI as "good for confirmation, bad for problems." The product implication: voice AI deployments that try to handle the entire conversation underperform deployments that handle the front-end and route complex calls to humans within seconds.
It worked perfectly to confirm my appointment. I would have hung up immediately if it tried to actually answer my question about my coverage.
I knew right away it was a robot. It didn't sound like one — it sounded too perfect. That's what tipped me off.
As voice AI gets better at sounding human, the question of disclosure has moved from a regulatory abstraction to a lived consumer experience. A meaningful share of consumers now report having been deceived. Their reaction tells brands what they should do about it.
I was venting about a frustrating week to what I thought was a customer service rep. Then I realized I'd been pouring my heart out to a chatbot. It felt like a betrayal.
Just tell me. I don't care if it's AI as long as it can solve my problem. I do care if you tried to sneak it past me.
A common assumption inside marketing teams is that consumers know AI is imperfect and will give brands a pass when interactions go wrong. The data does not support that assumption. When an AI experience goes badly, consumers blame the brand that deployed it — not the technology.
"You think we don't notice. We notice. And we hold you responsible for what we experience — not your vendor, not the AI, not the chatbot. You."
If your AI is bad, that's your fault. You picked it. You deployed it. Don't blame ChatGPT.
The brand chose to put a half-baked AI between me and them. That tells me everything I need to know about how much they care about me as a customer.
Consumers using ChatGPT, Gemini, and Claude to research high-stakes purchases jumped from 41% in 2025 to 56% in 2026. Brands are losing influence at the top of the funnel before consumers ever see their site.
From the open-text follow-up on Q35, consumers describe gen AI as the way they shortcut research that used to take hours: comparing brands, generating questions to ask salespeople, interpreting reviews. The new question for marketing teams isn't "how do we rank on Google" — it's "how do we get represented accurately in the model that the consumer is asking before they ever search."
The brands winning in 2026 are using AI to capture every lead within minutes — and routing those leads to humans the moment the conversation gets complex. Invoca's AI agents do both, trained on each brand's own conversation data so the buyer journey stays connected from first click to closed deal.
See how Invoca's AI agents work →

Percentages may not sum to 100% due to rounding and multiple-selection options. Field survey conducted by Gather. Year-over-year comparisons reference Invoca's 2025 B2C Buyer Experience Report (US edition, n=1,000) and 2022 Buyer Experience Benchmark Report (n=500).