Newsletter #12
- Julie Ask
- 2 days ago
- 5 min read
Updated: 1 day ago
TLDR: AI models can be dangerous, and they are evolving too quickly for most people, organizations, and governments to process the impact. Surveys of US adults confirm this. And despite billions in spending, compute power remains scarce.
AI News & What It Means for Customer Experiences
Given the pace of world events, AI headlines are starting to feel predictable. The decisions driving the industry — what gets built, who gets access, what gets shelved — are almost always rooted in financial constraints and compute limitations. Today I'm picking a few notable stories and offering a perspective on what they mean for customer experiences. Many of these developments are one step removed from consumers, so bear with me.
Anthropic Has a Model So Powerful It Hasn't Released It Publicly
On April 7, 2026, Anthropic announced Claude Mythos Preview. The gist: the model can quickly and cost-effectively find and exploit security vulnerabilities, even ones buried deep within decades-old software. The intended use is improving cybersecurity capabilities. The risk is obvious — in the hands of bad actors, the same tool could do serious damage to governments, banks, utilities, and critical infrastructure.
A few notable details: Anthropic has launched "Project Glasswing" to help enterprises prepare for what the tool could unleash, and has hand-picked 50 companies — including Amazon, Microsoft, Apple, and Google — for consultation and early testing. Industry commentators are already reaching for Manhattan Project analogies, questioning whether a private company should hold such a powerful tool and unilaterally determine who gets access to it. The fact that Anthropic is simultaneously in a legal dispute with the U.S. government while collaborating with it on national security matters adds a layer of complexity worth sitting with.
What it means #1: AI development is now outpacing our ability to absorb it — for consumers and organizations alike. Not long ago, month-long development backlogs were the norm, and product managers were anxious just to roll out a new feature. Now code is being written faster than it can be tested or understood. Do you want a meaningfully new experience in your car or your mobile banking app every time you open it? Speed without guardrails creates its own category of risk.
What it means #2: Our government and institutions are not equipped to keep pace — whether the challenge is national security, chatbots targeting vulnerable teens and adults, harmful synthetic content, or unreliable medical advice. (Studies suggest AI medical guidance is wrong roughly half the time.) We're at risk of another Tide Pod moment: a technology that outpaces our collective ability to understand its consequences before people get hurt.
What it means #3: Consumers and enterprises are operating in the wild west. Navigate carefully. According to a Quinnipiac University report released April 15, 2026, more than 50% of adults believe AI will do more harm than good, and nearly 75% say they don't trust it.
Compute Power Is Not Unlimited — and We've Already Started to Ration It
Over the past two years, we've witnessed a Hunger Games–style mania to raise money, buy chips, build data centers, and lock in compute power — not only to train models, but to serve the millions (soon billions) of enterprise and consumer users who depend on them. In an April 13, 2026 Wall Street Journal article, OpenAI's CFO was quoted saying, "I do spend a lot of time trying to find any last-minute compute available." Token usage on OpenAI's API rose from six billion per minute in October to 15 billion per minute in March.
The scale of capital commitments is staggering:
- OpenAI and Anthropic alone expect to spend $65 billion in 2026.
- Meta's $21 billion deal with CoreWeave — once eye-popping — now looks like a rounding error.
- Capital spending plans across 51 investor-owned utilities, per a WSJ study, top $1.4 trillion over the next five years.
- Several of these companies are designing their own chips.
- And yes, the cost of electricity is already outpacing inflation.
The consequences are starting to show. OpenAI has narrowed its focus, shutting down its video creation app and canceling its deal with Disney. Anthropic's Claude API has experienced outages and is beginning to throttle usage.
What it means: We may need to rethink the economics of how we use generative AI — and what we hand to AI agents. These are not unlimited resources. I've had to throttle my own usage of Claude Cowork. I've heard dozens of stories along the lines of "I just asked AI to do it for me" — from drafting business plans, to creating social media posts in five languages, to syncing calendars, to planning a vegetable garden. Humans are often better at these tasks, and possibly less expensive. We should also ask ourselves which use cases are important enough to justify the resource cost — and whether some of that capacity should be reserved for applications that benefit society broadly, like healthcare.
Amazon's CEO Addresses AI Concerns and Opportunities
In his annual shareholder letter, Andy Jassy covered a lot of ground. Here are the takeaways most relevant to customer experiences:
Consumers will benefit from AI even when they never see it. Much of Jassy's letter focused on logistics, supply chain, and warehouse operations. Consumers demand convenience — fast delivery, low prices. AI's ability to analyze vast data sets and create efficiencies is essential to delivering that, even if it's invisible to the end user. So too are the robots increasingly working alongside humans in fulfillment centers.
"Every customer experience will be reinvented by AI, and there will be a slew of new experiences only possible because of AI." I agree with him. In the same paragraph, he offers proof that we are not in an AI bubble.
GenAI is finally making Alexa genuinely useful. Alexa has 600 million active endpoints across devices, cars, TVs, and more. According to Jassy, customers are now talking to Alexa twice as much and for longer, completing purchases on devices three times more often, streaming music 25% more, and using smart home functionality 50% more. The virtual assistant landscape is getting interesting again.
Stanford's 2026 AI Index Report
Stanford released “The 2026 AI Index Report.” It’s a long read, but worth downloading. It covers investments, consumer adoption, model performance, robotics, geopolitical dynamics, safety, and more. A few highlights with direct relevance to consumer experiences:
AI experts and the general public see the future very differently. Pew Research found similar results. The gap likely stems from a significant difference in hands-on experience — only 28.3% of Americans regularly use AI tools. Meanwhile, media coverage over-indexes on fears and job losses. Brands should be thoughtful about how they position AI in consumer experiences; for most people, it's the thing they don't trust that also wants their job.
The internet now has more AI-generated content than human-generated content. A useful wake-up call for anyone not yet using AI to create content where the stakes are lower.
The United States has more than 5,000 data centers — nearly 10 times the number in Germany or China.
The environmental cost is real and underacknowledged. Page 38 of the report shows how much power and water these models consume. It will likely take a crisis before we collectively pause and reckon with the impact. The tools are still too cheap for users to feel the true cost.
AI accuracy is improving rapidly — but hasn't caught up with humans yet. General AI assistants improved from 20% accuracy in January 2025 to 74.5% in September 2025. Human accuracy sits at 92%. Stay tuned.
Public sentiment remains anxious. Page 362 echoes findings from other research: consumers are worried about job loss, don't trust AI models, and are more anxious than optimistic about what's ahead. One interpretive note — hypothetical survey questions tend to yield directional answers, not precise ones. Use this data as a compass, not a map.
