#11 Impact of AI on CX
- Julie Ask

A lot has happened in the month since my last newsletter. Today, I am going to choose a few notable news items and press releases and offer a perspective on what it means for customer experiences.
(NY Times headline)
While this is just one case, there are thousands pending. At a high level, here is why it matters to those creating consumer experiences. Until now, Section 230 of the Communications Decency Act (1996) shielded online platforms (social media sites, forums, and other internet services) from legal liability for content posted by their users. Meta, like other social media and news platforms, uses algorithms to drive engagement and reap financial rewards. In another ruling, a jury found Meta liable for violating state law by failing to safeguard users of its apps from child predators. This goes way beyond mental health and addiction.
Consumers who are deeply engaged with chatbots (e.g., OpenAI’s ChatGPT) have died by suicide and committed murder. One of the more prominent cases occurred in February 2026 in British Columbia, where an 18-year-old killed her mother and step-brother before driving to a school and killing more people. Staff at OpenAI acknowledged seeing the online chats and discussing whether they should alert authorities. They ultimately decided not to, though they did suspend the account.
The open question is: how much liability do conversational interfaces have for the actions their users take in the real world? Or for their health?
(NY Times headline)
There is a lot one could unpack here, from how the US government might use AI to whether a tech company can or should control how its product is used. What I want to focus on is how difficult it will be to uninstall or avoid what is soon to become ubiquitous infrastructure powering thousands of applications, if not more. A few of the challenges:
Large language models (LLMs) are akin to infrastructure like broadband or wireless service. A consumer or enterprise may know that they buy service from Verizon or Comcast. That doesn’t mean their calls or internet traffic run exclusively on that infrastructure. Is using an artifact (e.g., an email, image, or data chart) created by Anthropic the same as using Anthropic?
Many tech platforms use AI models from outside companies. They let customers choose the one they prefer, or they optimize across accuracy, speed, and cost. Some platforms might let customers opt out of specific AI providers (e.g., a “never use Anthropic” setting), but those customers may not realize the business consequences (e.g., cost, performance). At this point, it would be difficult to tell developers they can’t use Claude.
Switching costs for most consumers are still very low when it comes to choosing AI tools. While applications can get to know customers over time and offer more relevance, most consumers aren’t at that stage. When OpenAI agreed to work with the Department of Defense, ChatGPT uninstalls surged 295% the next day - a dramatic increase compared to the company’s typical day-over-day uninstall rate of 9%. Overall installs that day also fell 13%.
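To make the opt-out trade-off above concrete, here is a minimal sketch of how a platform-side model router with a provider exclusion list might work. Everything in it is hypothetical: the provider names, metric values, and scoring weights are illustrative and do not reflect any real platform’s routing logic or real benchmark data.

```python
# Hypothetical sketch of platform-side model routing with a provider opt-out.
# All provider names, metric values, and weights are illustrative, not real data.
from dataclasses import dataclass


@dataclass
class Provider:
    name: str
    accuracy: float      # quality score, higher is better (0-1)
    latency_ms: int      # lower is better
    cost_per_1k: float   # USD per 1k tokens, lower is better


PROVIDERS = [
    Provider("anthropic", accuracy=0.95, latency_ms=800, cost_per_1k=0.015),
    Provider("openai",    accuracy=0.90, latency_ms=600, cost_per_1k=0.010),
    Provider("google",    accuracy=0.89, latency_ms=500, cost_per_1k=0.008),
]


def pick_provider(excluded, w_acc=1.0, w_lat=0.0001, w_cost=1.0):
    """Pick the best allowed provider by a weighted accuracy/latency/cost score."""
    allowed = [p for p in PROVIDERS if p.name not in excluded]
    if not allowed:
        raise ValueError("customer has opted out of every provider")
    return max(allowed, key=lambda p: w_acc * p.accuracy
                                      - w_lat * p.latency_ms
                                      - w_cost * p.cost_per_1k)


# With no exclusions the highest-scoring model wins; a customer who clicks
# "never use Anthropic" silently falls back to a lower-scoring alternative.
default_choice = pick_provider(excluded=set())
opted_out_choice = pick_provider(excluded={"anthropic"})
```

The point of the sketch is that the fallback is invisible to the customer: the opt-out still returns an answer, just from a different (possibly cheaper or slower) model, which is exactly the business consequence most customers would not anticipate.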
In the first two months of the year, I felt like I could barely keep up with the product and protocol announcements coming out of Anthropic, Google, and OpenAI. News also included retail and payment infrastructure partners signing on (a lot of noise with limited impact).
On March 16, 2026, OpenAI announced that it was cutting back on side projects. On March 24th, they announced they were pulling the plug on their video platform, Sora. There are also rumors of discontinuing Instant Checkout. The list is likely longer. They plan to focus on productivity tools for employees and individuals (i.e., combining its ChatGPT desktop app, coding tool Codex, and browser into one superapp). (Not to sound snarky, but OpenAI is starting to look like a fast follower.) A few thoughts:
There is more money in enterprise IT spend than consumer derivatives such as advertising or search. To put this in perspective, the global digital ad spend is about $700B with Search being about $255B. In comparison, worldwide IT spend is more than $6T. OpenAI is expected to spend more than $25B in 2026 on operating costs.
Working with any start-up has risks - even ones as large as OpenAI (if we can call it a start-up). It was only 3 months ago - December 2025 - when Disney announced a $1 billion equity investment that would allow OpenAI users to generate videos using its characters. That deal must be dead if Sora is.
Just because you can build a service doesn’t mean anyone will want to use it - or use it as you intend. Just ask Meta. They have over three billion monthly active users on their WhatsApp platform. While consumers in Brazil and India do transact on it, most consumers don’t. Third-party platforms often struggle with transactions even if they capture consumer attention higher in the purchase funnel.
OpenAI and other LLM providers (e.g., xAI, with Grok) take on significant risk when they allow consumers to create fake images and videos. In October 2025, OpenAI announced that it would let verified adults create erotica on its platform. In Sam Altman’s words, “We are not the elected moral police of the world.” I’ll leave this topic here, but we all understand the broader potential harm to society. There is also the issue of energy consumption: creating a high-quality five-second video uses about as much energy as running a microwave for an hour.
McKinsey published a report titled “Europe’s agentic commerce moment: Decision influence is here; execution is coming.” Based on my 25 years of studying consumer digital behavior, I’m not convinced by the findings. I don’t mean to disrespect the authors; the quality of the research looks excellent (e.g., they disclose survey questions and sample sizes). I struggle with the interpretation of the results. In my own research, I found a significant delta between “I have done this once” and “I always do this,” and the authors make that leap. That said, the recommendations are good, though I question the urgency. Every action must be put into the context of a brand’s target audience, products, business objectives, and other priorities.
Quick notes:
Apple is on pace to surpass $1B in AI revenue this year - mostly from ChatGPT. What does this mean? Despite not owning AI infrastructure (or carrying the burden of those costs), Apple is making money. This demonstrates the power of owning the endpoint: the smartphone, where so many consumers engage with digital experiences.
The White House released a set of AI legislative recommendations which favor doing less and letting the courts handle cases as they come. Laws have long been unable to keep pace with the issues technology creates. To be continued at another time.
The WSJ pitted the top models - Claude, Gemini, and ChatGPT - against its pool of readers in a March Madness bracket. The brackets created by each of the three models are outperforming or matching the average WSJ reader, but not the WSJ pool leader.
For those who believe an AI bubble is being created by circular funding structures, the WSJ published a great graphic of money flow in and out of Nvidia.
Anthropic published a large study on people’s attitudes toward AI. In a first (using the technology itself), they “interviewed” 80,508 Claude users across 159 countries. This should blow your mind. Worth a read.

