Invoca × Gather · Buyer Experience Report 2026
Gap Analysis · V2 vs. V3

Same headlines. Different evidence base.

Both versions deliver the six core 2026 headlines. The difference is what V3 cannot show — six secondary Y/Y charts and three depth probes that V2 retains. This page maps the gap so you can decide whether the depth is worth the incentive cost.

Two instruments, same shape, different depth.

V2 · Full instrument

The richest possible Y/Y story

52 questions · 4 screeners + 48 main
19 Y/Y locked
7 net new Qs
17 charts in report
  • All 6 candidate headlines fully supported
  • Voice AI: exposure + comparison + intent to use again
  • AI disclosure: importance Likert + behavioral question
  • Gen AI usage breakdown (how consumers use it, not just whether)
  • Generational gen AI cuts (Gen Z flip story)
  • Hold-time granular scale + AI satisfaction frequency
V3 · Tight instrument

Headlines preserved, depth trimmed

40 questions · 4 screeners + 36 main
13 Y/Y locked
5 net new Qs
12 charts in report
  • All 6 candidate headlines still supported
  • Voice AI: exposure + comparison only — no future-intent data
  • AI disclosure: behavioral question only — no Likert importance
  • Gen AI: usage and reliance only — no breakdown of how it's used
  • No generational deep-cut on gen AI flip
  • No hold-time scale, no AI satisfaction frequency

What V3 cannot show — exactly.

Each row below is a chart or stat that V2 publishes and V3 does not. Most are secondary Y/Y items — directional in 2025 but not headline charts. The cost of cutting them is real but bounded: V3 reports the same story, just with fewer supporting data points per section.

Section 02 · The perception gap

Satisfaction frequency with brand AI · V2 Q30 · Y/Y locked
  ✓ V2 delivers: Y/Y bar chart — "Most/all the time" 36% → 33%
  ✗ V3 cannot show: section runs with 3 Y/Y charts instead of 4
  Cost of the cut: Section still tells the story; one fewer supporting chart for the perception-gap headline. Mitigated in V3 by the brand-perception-change chart (also Y/Y).

Section 03 · Voice AI

Future intent: would you use voice AI again? · V2 Q34 · Net new
  ✓ V2 delivers: 5-tier breakdown showing 73% would use it again under the right conditions
  ✗ V3 cannot show: section ends on "53% rate it worse than human" without the offsetting "but they'd use it again" finding
  Cost of the cut: Headline still lands, but tilts more negative than V2. V3 substitutes open-text color from the Q22 follow-up to recover the nuance, but loses the chartable forward-intent stat.

Section 04 · AI disclosure

Disclosure importance Likert · V2 Q35 · Net new
  ✓ V2 delivers: "81% say it matters that AI identifies itself" — the single most quotable disclosure stat
  ✗ V3 cannot show: no quantified importance — only the behavioral "have you been deceived" question
  Cost of the cut: Significant. The 81% headline number is a top-tier PR pitch on its own. V3 recovers some of this through the added Q47 option ("being told up front when I'm talking to AI"), but the dedicated importance Likert is the cleaner stat.

Section 06 · Generative AI in research

How consumers use gen AI for research · V2 Q51 · Y/Y locked
  ✓ V2 delivers: 6-bar breakdown — 52% summary, 48% compare, 41% generate questions, etc.
  ✗ V3 cannot show: knows that gen AI usage rose, but not what for
  Cost of the cut: Cuts the most actionable marketing data point in the section. Brands wanting to optimize for gen AI surfaces lose specificity on which use cases to design for.

Gen Z flip from search to gen AI · V2 Q52 generational cut
  ✓ V2 delivers: shows Gen Z 2026 is the first cohort where "rely more on gen AI" passed "rely more on search"
  ✗ V3 cannot show: average data only — the generational flip happens in the data but isn't reportable at chart level
  Cost of the cut: Cuts a generational story that was a 2025 chart staple. V3 still has the average-level chart, but the Gen Z headline is harder to pitch without the dedicated visualization.

Section 02 · Channel + call experience (in the instrument, but not in the V2 report)

Why consumers call · V2 Q15 · Y/Y locked · multi-select
  ✓ V2 delivers: Y/Y — "info about product" 46% (2025) → 49% (2026)
  ✗ V3 cannot show: reasons consumers pick up the phone are not covered
  Cost of the cut: 2025 published this as a chart, but it didn't drive a headline. The cut is defensible if speed-to-lead anchors the calling section instead.

Hold-time expectation · V2 Q18 · 7-tier scale
  ✓ V2 delivers: "63% expect <5 min" — a scannable expectation stat
  ✗ V3 cannot show: only the binary "have you hung up" survives
  Cost of the cut: Mild. The behavioral binary outperforms the expectation question for headlines.

Hang-up duration scale · V2 Q20 · 9-bucket scale
  ✓ V2 delivers: granular bar chart of how long consumers wait before hanging up
  ✗ V3 cannot show: no granular hold-time visualization
  Cost of the cut: Mild. 2025's granular hold-time chart was visually busy and didn't produce a clean headline.

Section 07 · Closing items

"Would you use AI if it's faster than a human?" · V2 Q37 · Y/Y locked
  ✓ V2 delivers: 74% in 2025; tracked in 2026
  ✗ V3 cannot show: lost — the most-cited single stat from the 2025 report
  Cost of the cut: This is the cut V3 should reconsider before lock. The 74% stat was Invoca's most-shared 2025 finding and the most-likely-to-move Y/Y data point. Recommend adding it back to V3 if Owen agrees.

All six headlines survive both versions.

Both V2 and V3 can deliver every candidate headline from the research plan. The difference is depth — how many supporting charts each version brings to back the claim, how many secondary stats are available for the press release, and how rich the sales enablement deck can be.

01 · The speed gap that's killing conversion
Q1 expectation · Q3 reality · Q4 consequence · Q5 channel
  V2 · Full: 4 supporting charts + open-text bank
  V3 · Tight: 4 supporting charts + open-text bank

02 · The AI perception gap has widened, not closed
Q15 · Q16 · Q19 · Q20 · Q30 (V2) · all Y/Y locked
  V2 · Full: 4 Y/Y charts (impact, valued, forced, satisfaction)
  V3 · Tight: 4 Y/Y charts (satisfaction swapped for brand perception)

03 · First consumer-side data on voice AI
Q21 exposure · Q22 vs. human · Q34 future intent (V2 only)
  V2 · Full: 3 charts + quote bank
  V3 · Tight: 2 charts + quote bank

04 · Consumers want AI to introduce itself
Q35 importance (V2 only) · Q36/Q23 deceived behavior
  V2 · Full: 2 charts + "81%" headline stat
  V3 · Tight: 2 charts (no "81%" — open-text only)

05 · When AI fails, brands take the hit
Q43/Q29 blame · Q44/Q30 open-text · Q22/Q12 churn Y/Y
  V2 · Full: 2 charts + open-text quote bank
  V3 · Tight: 2 charts + open-text quote bank

06 · Gen AI is now Step 1 of the buying journey
Q50/Q35 usage Y/Y · Q52/Q36 reliance Y/Y · Q51 use cases (V2)
  V2 · Full: 4 charts (incl. Gen Z deep-cut)
  V3 · Tight: 2 charts (no use-case breakdown, no Gen Z chart)

The recommendation: V3 with one add-back.

Both instruments deliver the headlines. V3 fields at Gather's standard incentive rate; V2 requires elevated incentives because of length. The trade is roughly six secondary supporting charts and one significant Y/Y stat ("would you use AI if faster") in exchange for a shorter, higher-completion-rate field.

One specific add-back to V3 worth Owen's consideration: restore Q35 (disclosure importance Likert). The "81% say AI should identify itself as AI" finding is a strong PR pitch in its own right, and it's a single question that gives Section 04 a quantified anchor instead of relying on open-text alone.

Pick V2 if

  • Sandy wants the maximum number of pitchable single-stat headlines
  • Sales team needs a deep stat library to staff outbound and one-pagers
  • Budget for elevated incentives is available
  • The 7-vertical spinoffs each need their own deep-cut data

Pick V3 if

  • Standard Gather field rate matters more than chart count
  • Owen prefers fewer-but-sharper findings over breadth
  • Higher completion rate from a shorter instrument is preferred
  • Open-text quote bank is acceptable substitute for some quantified depth
Synthetic data — illustrative only · Prepared by Gather for Invoca · April 2026
The B2C Buyer Experience Report 2026 Year 4 n = 1,200 US consumers

A year of AI investment. A wider trust gap.

How consumers are experiencing — and resisting — AI in the high-stakes buying journey.

Speed-to-lead · Voice AI (NEW) · AI Disclosure (NEW) · Brand Accountability (NEW) · 7 verticals · 19 Y/Y locked

Brands have spent a year scaling AI into the buying journey. Consumers have spent a year deciding what they think of it. The gap between those two stories is now wider than it was in 2025, not narrower — and the cost of getting it wrong has moved from a CX inconvenience to a brand-equity liability. This report quantifies the gap, names the new pressure points, and gives the consumer-side data on voice AI for the first time.

71%
expect a brand to respond to a contact form within an hour — but only 34% say it actually happened on their last high-stakes purchase
52%
say a brand using AI poorly would damage their perception of that brand — up from 46% in 2025
3x
the share of consumers who've used generative AI to research a high-stakes purchase has tripled since 2022 — now 56% of all buyers
01 · Speed to lead

The hour nobody's hitting.

Consumer expectations for response speed have collapsed faster than brands have caught up. Across all seven high-stakes verticals, the gap between what consumers expect and what they actually experience is now wide enough to constitute a primary reason for choosing a competitor.

Q1 / Q3 · Y/Y locked

Expectation vs. reality: response time after a contact form submission

Base: all respondents who filled out a contact form (n=842)
Q4 · Y/Y locked

What consumers do when a brand misses the window

Base: all respondents (n=1,200)
38%
contact a competitor when the first brand is too slow
24%
give up on the purchase entirely
71%
expect a callback within 1 hour of leaving a voicemail
12min
median response window before consumers begin contacting competitors
Implication for marketers

Speed has become the most expensive variable in the funnel

Of the consumers who eventually purchased after their initial outreach, the share who chose the brand that responded first has grown to 62% in 2026 from 51% in 2025. The brand that wins is the one whose response time matches the consumer's expectation — not the brand with the better website, better price, or better product.

"

I filled out three forms the same night for solar panel quotes. The one that texted me within the hour got my business. The other two took 36 hours and 4 days.

— Home services buyer, 41, Texas
"

By the time they called me back about the SUV I was already at a different dealership. If you can't be bothered to call, I can't be bothered to wait.

— Automotive buyer, 56, Ohio
02 · The perception gap

A year later, still talking past each other.

In November 2025, Invoca published that 86% of marketers thought AI was improving the buying experience while only 35% of consumers agreed. One year on, the consumer side of that gap hasn't moved in the direction marketers hoped. On nearly every metric of how AI lands with the people on the receiving end, sentiment is flat or trending negative.

Q15 · Y/Y locked

How interacting with a brand's AI affected the buying experience

Base: respondents who interacted with brand AI (n=978)
Q19 · Y/Y locked

Consumers feel less valued when AI is the interface — and the gap has grown

Base: respondents who interacted with brand AI (n=978)
Q16 · Y/Y locked

"How often do you feel companies are forcing you to interact with AI?"

Base: all respondents (n=1,200)
Q30 · Y/Y locked · V2 EXCLUSIVE

How often consumers are satisfied with brand AI assistance

Base: respondents who interacted with brand AI (n=978)
The real story

The marketer-consumer gap isn't a perception problem. It's a deployment problem.

Marketers in Invoca's 2025 study reported confidence that their AI was improving CX. Consumers in our 2026 study report the opposite trend on every dimension we tracked. The implication is clear: brands are over-trusting their internal AI quality signals and under-listening to the consumers on the other end of those interactions.

03 · Voice AI · new in 2026

The first consumer read on voice AI.

For the first time, consumers have spent enough time on the phone with AI voice agents to form an opinion. We asked the question no other research has: how does that conversation actually compare to talking to a human?

Q21 · NEW

Have you spoken to an AI voice agent on the phone in the last 12 months?

Base: all respondents (n=1,200)
Q22 · NEW

How that interaction compared to speaking with a human

Base: respondents who experienced voice AI (n=486)
Q34 · NEW · V2 EXCLUSIVE

Would you use an AI voice agent again for a future high-stakes purchase?

Base: respondents who experienced voice AI (n=486)
Where voice AI wins, where it loses

Consumers will tolerate AI voice for short, simple errands. They will not tolerate it for complex ones.

53% rate voice AI worse than a human on their last interaction — but the same respondents who say "worse" are willing to use it again for simple questions or to avoid a long hold. The product implication: voice AI deployments that try to handle the entire conversation underperform deployments that handle the front-end and route complex calls to humans within seconds.

"

It worked perfectly to confirm my appointment. I would have hung up immediately if it tried to actually answer my question about my coverage.

— Healthcare buyer, 38, California
"

I knew right away it was a robot. It didn't sound like one — it sounded too perfect. That's what tipped me off.

— Insurance buyer, 49, Florida
04 · AI disclosure · new in 2026

Consumers want AI to introduce itself.

As voice AI gets better at sounding human, the question of disclosure has moved from a regulatory abstraction to a lived consumer experience. A meaningful share of consumers now report having been deceived. Their reaction tells brands what they should do about it.

Q35 · NEW · V2 EXCLUSIVE

How important is it that AI clearly identifies itself as AI?

Base: all respondents (n=1,200)
Q36 · NEW

Have you ever realized afterward that an interaction was AI, not human?

Base: all respondents (n=1,200)
81%
say it matters that AI identifies itself as AI
29%
have been deceived — thought they were talking to a human and weren't
64%
of those who realized afterward say it damaged their view of the brand
22%
say "being told up front when I'm talking to AI" is the single biggest improvement they want
"

I was venting about a frustrating week to what I thought was a customer service rep. Then I realized I'd been pouring my heart out to a chatbot. It felt like a betrayal.

— Telecom buyer, 34, New York
"

Just tell me. I don't care if it's AI as long as it can solve my problem. I do care if you tried to sneak it past me.

— Financial services buyer, 52, Illinois
05 · Brand accountability

When AI fails, brands take the hit.

A common assumption inside marketing teams is that consumers know AI is imperfect and will give brands a pass when interactions go wrong. The data does not support that assumption. When an AI experience goes badly, consumers blame the brand that deployed it — not the technology.

Q43 · NEW

When an AI interaction goes badly, who do consumers blame?

Base: respondents who had a negative AI experience (n=658)
Q22 · Y/Y locked

"Likely to stop doing business with a brand after one bad experience"

Base: all respondents (n=1,200)
From open-text Q44 · "What do most businesses misunderstand about how consumers feel about their AI tools?"

The most common consumer answer, across 1,200 verbatims:

"You think we don't notice. We notice. And we hold you responsible for what we experience — not your vendor, not the AI, not the chatbot. You."

"

If your AI is bad, that's your fault. You picked it. You deployed it. Don't blame ChatGPT.

— Travel buyer, 47, Washington
"

The brand chose to put a half-baked AI between me and them. That tells me everything I need to know about how much they care about me as a customer.

— Automotive buyer, 33, Georgia
06 · Generative AI in research

Step one of the buying journey is no longer Google.

Consumers using ChatGPT, Gemini, and Claude to research high-stakes purchases jumped from 41% in 2025 to 56% in 2026. For Gen Z, gen AI has now overtaken search engines as the primary research tool — meaning brands are losing influence at the top of the funnel before consumers ever see their site.

Q50 · Y/Y locked

Used a generative AI tool to research a high-stakes purchase

Base: all respondents (n=1,200) · Generational cuts
Q52 · Y/Y locked

Search engines vs. generative AI: which one consumers rely on more

Base: all respondents (n=1,200)
Q51 · Y/Y locked · V2 EXCLUSIVE

What consumers use generative AI for during purchase research

Base: respondents who use gen AI for research (n=672)
Generational cut · Q52 · V2 EXCLUSIVE

Gen Z has flipped: more rely on gen AI than on search engines

Base: Gen Z respondents (n=224)
Why this matters

The brand that gets recommended by ChatGPT wins the consideration set

57% of consumers who used gen AI to research their last high-stakes purchase report that the tool surfaced brands or providers they had not previously considered. For 38%, that surfaced brand made the final shortlist. The new question for marketing teams isn't "how do we rank on Google" — it's "how do we get represented accurately in the model that the consumer is asking before they ever search."

The brands that win aren't choosing between AI and human.

They're using AI to capture every lead within minutes — and routing those leads to humans the moment the conversation gets complex. Invoca's AI agents do both, trained on each brand's own conversation data so the buyer journey stays connected from first click to closed deal.

See how Invoca's AI agents work →

Methodology

1,200
US consumers, 18+, who completed a high-stakes purchase in the last 12 months
7
verticals: auto, healthcare, home services, financial services, insurance, telecom, travel & hospitality
May 2026
field window · two weeks · hybrid quant + qual instrument
52 Qs
52-question hybrid instrument with 19 Y/Y locked items from 2025 baseline

Respondents by vertical

Respondents by generation

Percentages may not sum to 100% due to rounding and multiple-selection options. Field survey conducted by Gather. Year-over-year comparisons reference Invoca's 2025 B2C Buyer Experience Report (US edition, n=1,000) and 2022 Buyer Experience Benchmark Report (n=500).

The B2C Buyer Experience Report 2026 Year 4 n = 1,200 US consumers

A year of AI investment. A wider trust gap.

How consumers are experiencing — and resisting — AI in the high-stakes buying journey.

Speed-to-lead · Voice AI (NEW) · AI Disclosure (NEW) · Brand Accountability (NEW) · 7 verticals · 13 Y/Y locked

Brands have spent a year scaling AI into the buying journey. Consumers have spent a year deciding what they think of it. The gap between those two stories is now wider than it was in 2025, not narrower — and the cost of getting it wrong has moved from a CX inconvenience to a brand-equity liability. This report quantifies the gap, names the new pressure points, and gives the consumer-side data on voice AI for the first time.

71%
expect a brand to respond to a contact form within an hour — but only 34% say it actually happened on their last high-stakes purchase
52%
say a brand using AI poorly would damage their perception of that brand — up from 46% in 2025
3x
the share of consumers who've used generative AI to research a high-stakes purchase has tripled since 2022 — now 56% of all buyers
01 · Speed to lead

The hour nobody's hitting.

Consumer expectations for response speed have collapsed faster than brands have caught up. Across all seven high-stakes verticals, the gap between what consumers expect and what they actually experience is now wide enough to constitute a primary reason for choosing a competitor.

Q1 / Q3 · Y/Y locked

Expectation vs. reality: response time after a contact form submission

Base: all respondents who filled out a contact form (n=842)
Q4 · Y/Y locked

What consumers do when a brand misses the window

Base: all respondents (n=1,200)
38%
contact a competitor when the first brand is too slow
24%
give up on the purchase entirely
71%
expect a callback within 1 hour of leaving a voicemail
12min
median response window before consumers begin contacting competitors
Implication for marketers

Speed has become the most expensive variable in the funnel

Of the consumers who eventually purchased after their initial outreach, the share who chose the brand that responded first has grown to 62% in 2026 from 51% in 2025. The brand that wins is the one whose response time matches the consumer's expectation — not the brand with the better website, better price, or better product.

"

I filled out three forms the same night for solar panel quotes. The one that texted me within the hour got my business. The other two took 36 hours and 4 days.

— Home services buyer, 41, Texas
"

By the time they called me back about the SUV I was already at a different dealership. If you can't be bothered to call, I can't be bothered to wait.

— Automotive buyer, 56, Ohio
02 · The perception gap

A year later, still talking past each other.

In November 2025, Invoca published that 86% of marketers thought AI was improving the buying experience while only 35% of consumers agreed. One year on, the consumer side of that gap hasn't moved in the direction marketers hoped. On nearly every metric of how AI lands with the people on the receiving end, sentiment is flat or trending negative.

Q15 · Y/Y locked

How interacting with a brand's AI affected the buying experience

Base: respondents who interacted with brand AI (n=978)
Q19 · Y/Y locked

Consumers feel less valued when AI is the interface — and the gap has grown

Base: respondents who interacted with brand AI (n=978)
Q16 · Y/Y locked

"How often do you feel companies are forcing you to interact with AI?"

Base: all respondents (n=1,200)
Q20 · Y/Y locked

How AI changed brand perception

Base: respondents who interacted with brand AI (n=978)
The real story

The marketer-consumer gap isn't a perception problem. It's a deployment problem.

Marketers in Invoca's 2025 study reported confidence that their AI was improving CX. Consumers in our 2026 study report the opposite trend on every dimension we tracked. The implication is clear: brands are over-trusting their internal AI quality signals and under-listening to the consumers on the other end of those interactions.

03 · Voice AI · new in 2026

The first consumer read on voice AI.

For the first time, consumers have spent enough time on the phone with AI voice agents to form an opinion. We asked the question no other research has: how does that conversation actually compare to talking to a human?

Q21 · NEW

Have you spoken to an AI voice agent on the phone in the last 12 months?

Base: all respondents (n=1,200)
Q22 · NEW

How that interaction compared to speaking with a human

Base: respondents who experienced voice AI (n=486)
Where voice AI wins, where it loses

Consumers will tolerate AI voice for short, simple errands. They will not tolerate it for complex ones.

53% rate voice AI worse than a human on their last interaction. From the open-text follow-up: consumers describe voice AI as "good for confirmation, bad for problems." The product implication: voice AI deployments that try to handle the entire conversation underperform deployments that handle the front-end and route complex calls to humans within seconds.

"

It worked perfectly to confirm my appointment. I would have hung up immediately if it tried to actually answer my question about my coverage.

— Healthcare buyer, 38, California
"

I knew right away it was a robot. It didn't sound like one — it sounded too perfect. That's what tipped me off.

— Insurance buyer, 49, Florida
04 · AI disclosure · new in 2026

Consumers want AI to introduce itself.

As voice AI gets better at sounding human, the question of disclosure has moved from a regulatory abstraction to a lived consumer experience. A meaningful share of consumers now report having been deceived. Their reaction tells brands what they should do about it.

Q23 · NEW

Have you ever realized afterward that an interaction was AI, not human?

Base: all respondents (n=1,200)
Q23 open-text · coded

Of those deceived: how it changed their view of the brand

Base: respondents who were deceived (n=348)
29%
have been deceived — thought they were talking to a human and weren't
64%
of those who realized afterward say it damaged their view of the brand
22%
say "being told up front when I'm talking to AI" is the single biggest improvement they want
11%
stopped doing business with the brand after realizing they'd been deceived
"

I was venting about a frustrating week to what I thought was a customer service rep. Then I realized I'd been pouring my heart out to a chatbot. It felt like a betrayal.

— Telecom buyer, 34, New York
"

Just tell me. I don't care if it's AI as long as it can solve my problem. I do care if you tried to sneak it past me.

— Financial services buyer, 52, Illinois
05 · Brand accountability

When AI fails, brands take the hit.

A common assumption inside marketing teams is that consumers know AI is imperfect and will give brands a pass when interactions go wrong. The data does not support that assumption. When an AI experience goes badly, consumers blame the brand that deployed it — not the technology.

Q29 · NEW

When an AI interaction goes badly, who do consumers blame?

Base: respondents who had a negative AI experience (n=658)
Q12 · Y/Y locked

"Likely to stop doing business with a brand after one bad experience"

Base: all respondents (n=1,200)
From open-text Q30 · "What do most businesses misunderstand about how consumers feel about their AI tools?"

The most common consumer answer, across 1,200 verbatims:

"You think we don't notice. We notice. And we hold you responsible for what we experience — not your vendor, not the AI, not the chatbot. You."

"

If your AI is bad, that's your fault. You picked it. You deployed it. Don't blame ChatGPT.

— Travel buyer, 47, Washington
"

The brand chose to put a half-baked AI between me and them. That tells me everything I need to know about how much they care about me as a customer.

— Automotive buyer, 33, Georgia
06 · Generative AI in research

Step one of the buying journey is no longer Google.

Consumers using ChatGPT, Gemini, and Claude to research high-stakes purchases jumped from 41% in 2025 to 56% in 2026. Brands are losing influence at the top of the funnel before consumers ever see their site.

Q35 · Y/Y locked

Used a generative AI tool to research a high-stakes purchase

Base: all respondents (n=1,200) · Generational cuts
Q36 · Y/Y locked

Search engines vs. generative AI: which one consumers rely on more

Base: all respondents (n=1,200)
Why this matters

The brand that gets recommended by ChatGPT wins the consideration set

From the open-text follow-up on Q35, consumers describe gen AI as the way they shortcut research that used to take hours: comparing brands, generating questions to ask salespeople, interpreting reviews. The new question for marketing teams isn't "how do we rank on Google" — it's "how do we get represented accurately in the model that the consumer is asking before they ever search."

The brands that win aren't choosing between AI and human.

They're using AI to capture every lead within minutes — and routing those leads to humans the moment the conversation gets complex. Invoca's AI agents do both, trained on each brand's own conversation data so the buyer journey stays connected from first click to closed deal.

See how Invoca's AI agents work →

Methodology

1,200
US consumers, 18+, who completed a high-stakes purchase in the last 12 months
7
verticals: auto, healthcare, home services, financial services, insurance, telecom, travel & hospitality
May 2026
field window · two weeks · hybrid quant + qual instrument
40 Qs
40-question hybrid instrument with 13 Y/Y locked items from 2025 baseline

Respondents by vertical

Respondents by generation

Percentages may not sum to 100% due to rounding and multiple-selection options. Field survey conducted by Gather. Year-over-year comparisons reference Invoca's 2025 B2C Buyer Experience Report (US edition, n=1,000) and 2022 Buyer Experience Benchmark Report (n=500).
