CES 2026: What It Means for Consumers
- Julie Ask
Teaching consumers about new devices and solutions to problems they didn't know they had is challenging. Apple succeeded with iPods, tablets, and smartphones through massive marketing budgets. Few brands have those resources.
The holiday season boasted a slew of TV commercials attempting to communicate the value of genAI tools to consumers. Anthropic positioned Claude as a problem-solver. OpenAI positioned ChatGPT to young adults as an assistant for dating, travel, and dreaming. Google tugged on the emotions of young parents with “Mr Fuzzy’s Big Adventures” to show off the power of its Gemini app. Perplexity encouraged us to explore our curiosity. The timing is good, as CES is upon us.
Executive Summary
Few, if any, products or services will see the consumer adoption success of core, utilitarian LLM applications from companies such as Anthropic, Google, or OpenAI. Microsoft is making headway in the workplace, while other platforms do so with developers.
Net new consumer electronics devices will be a hard sell. Smartphones not only do “everything” for us, but carriers also subsidize them. Consumers are reluctant to adopt or buy new devices requiring setup or charging. Moreover, the new devices - especially the ones that listen and process language - are technologically amazing, but lack compelling value propositions.
Consumers don’t much care HOW their services and products work. Many product descriptions lean heavily into AI language and the benefits of agentic abilities. Agentic seems to refer to proactive analysis and notifications (ok, cool) and automation. It’s fun to geek out on the technology; however, consumers will mostly be satisfied with massive computing power to get stuff done. Adding “AI” to product names feels short-term, much as “broadband” did 20 years ago. Hopefully, consumers are not numb to the term before engineers and product designers use it to do new things.
Physical AI (think robots or autonomous machines) is fascinating, with compelling use cases in factories. It’s too early for consumer products beyond autonomous driving features in their Teslas. The talk here reminds me a lot of expensive, connected home appliances that have barely reached critical mass in consumer adoption after a decade. Too often, they are expensive and depend on remodels for implementation.
Here are my key takeaways broken down into consumer AI product categories:
1. Utility Applications and Emerging Platforms
This category is evolving as major LLM players (OpenAI, Meta) bring hardware and services to consumers while market leaders (Apple, Amazon) extend their reach with platform plays like Apple Intelligence and Alexa+. AI enables conversational interfaces to the internet or devices. (Note: 1) Amazon, Meta, and OpenAI each have their own foundation model. 2) Apple, Google, and Meta typically announce products at their own events - not CES.)
Notable: Amazon, Apple, and Meta announced third-party integrations to broaden their ecosystems. Samsung outlined an aggressive Galaxy AI roadmap dependent on Google.
My take: Positive for consumers. Most of these announcements add up to distribution or more points of presence for these conversational interfaces. Two key points: 1) consumers need utility applications because they lean into the simplicity of one tool to do a long tail of their activities (e.g., fitness plans, advice, editing, research, recipes); 2) consumers will still use tools tuned to specific tasks, especially ones that demand sensitive context (e.g., finances, health, relationships).
2. Listening and Speech-Processing Devices
OpenAI's Sam Altman advocates for non-smartphone AI devices. These products position themselves as helpers or virtual assistants offering convenience through speech processing, contextual answers, and interaction summaries.
Notable: Friend pendant, Humane AI pin (canceled), Plaud.ai NotePin S
My take: I hesitate to doubt the combined power of Sam Altman’s persuasiveness + resources and Jony Ive’s track record. Today, consumers make heavy use of the screens on their now (again) subsidized smartphones to play games, watch videos, and engage with social media. I believe an audio-only device has potential on a much smaller scale. Here’s why: 1) Privacy concerns of the non-device user. Remember Google Glass? Rationally, we are under surveillance by disconnected devices, from speed cameras to home monitoring systems to billions of smartphones. Individuals recording us without our consent feels different. Besides, who has time to go back and review summaries? Are we really saving time? 2) The context collected through audio inputs is an incomplete picture and has yet to offer compelling use cases or value at scale. 3) Natural language interfaces are an “and,” not an “or,” to the GUI. Humans have become very accustomed to clicking on options in front of them - not imagining what they might want.
3. New Consumer Devices with a Vertical or Niche Focus, Such as Healthcare
These next-generation wearables target general fitness or specific healthcare issues: hearing loss, diabetes, gait analysis, and dental hygiene. Some seek FDA approval or insurance reimbursement. GenAI's role often involves on-device data processing for immediate feedback.
Notable: Dentomi's GumAI, Miraii's Aura Ring, ORPHE Inc.'s AI Insole, VibeBrux Bruxism Guard, Vital Health's VITAL Belt, Vivoo AI Nutrition
My take: These devices intrigue me. I want to trial each one to collect data and improve my health. Generally, consumers lean into convenience and what insurance covers. When it comes to their health, care providers sometimes offer at-home data collection for diagnostics to save money by allowing consumers to stay at home rather than in hospitals. One interesting angle is the positioning of these devices as agentic in one of two ways: 1) using gestures to control devices or 2) using voice commands to trigger digital activities (e.g., ordering an Uber). Agentic? Maybe. Agentic AI? Not clear.
4. Lightweight Devices with Cameras and/or Screens
Eyewear or heads-up displays dominate this category. Think of headsets such as Apple Vision Pro or smart glasses such as Meta’s Ray-Bans. Some focus on niche categories, offering real-time coaching in the form of swim guidance or skiing navigation. Doing so relies both on sophisticated sensor systems and the computing power of AI to crunch data and generate content for the displays. The Looki L1 plays the same role as Anne Hathaway did for Meryl Streep in “The Devil Wears Prada,” i.e., helping her navigate faces and names throughout events.
Notable: Form Smart Swim goggles, Fraimic Smart Canvas, Meta Ray-Ban glasses, Looki dashcam, Rabbit R2
My take: I understand the real and tangential value in the business or industrial ecosystem, and I am tempted to be dismissive of this category for consumer use cases - at least at scale. In my personal experience, I find it difficult to process information overlays on the world beyond simple, static information. The headsets also make me feel disconnected from my environment. While those are my feelings based on my experiences, consumer survey data also shows that extended reality (XR, which is AR, VR, or a combination thereof) has the lowest consumer adoption among all interfaces. On the other hand, Meta’s Ray-Ban glasses apparently have a months-long wait. And Rabbit - a product I didn’t even have the patience to set up - claims 100,000 orders and only 5% returns. I didn’t even think to return mine - just recycled it.
5. Vertical-Focused Assistants (Hardware and Applications)
These LLMs are tuned to proprietary data and use the consumer’s data as context to deliver more relevant results. These assistants will primarily manifest as AI-enabled chatbots today. Consumers will find them as stand-alone services (e.g., Phi.health), embedded in products they own (e.g., Samsung’s Bespoke AI Refrigerator Family Hub), or as a service when they reach out to contact centers. The services use AI for conversations, image recognition, and more. These products claim agentic AI abilities when they proactively analyze information to generate an alert or take action (e.g., automatically updating grocery lists based on refrigerator contents or purchased groceries).
Notable: Assistants from banks, insurers, retailers, travel companies; Perfect Corp.'s YouCam AI Agent, Phi.health's assistant
My take: Use of AI in the contact center is here, working well, and delivering results to brands and consumers. Narrow conversational assistants such as Phi.health are helpful and become more relevant with time. I am optimistic and hopeful about the agentic features these services offer. As a counterexample, my Friend pendant listens to me all day but seldom says anything (i.e., notifies me of a situation or asks for clarification). Otherwise, many of these connected “agentic AI devices” sound gimmicky and expensive. Using voice commands to open my refrigerator door when my hands are full … I’m not sure what I would be willing to pay to solve that problem. Connected home appliances still haven’t reached mass market adoption due to cost and complexity relative to value.
6. Established Products Enhanced with AI
Connected doorbells, earbuds, laptops, smartphones, speakers, and thermostats existed before manufacturers added genAI features. AI enables conversational interfaces with broader vocabulary, live services (language translation), and expanded display capabilities.
Notable: Amazon's Echo Hub Gen 2, Google's Nest Hub AI, Meta's Orion Dev Kit, smartphones, translation-enabled earbuds
My take: I have two primary expectations. First, with every browser, search engine, and open white box within an app using genAI to converse with consumers, I don’t expect the use of the letters “AI” to persist. Feels like the old days of discussing Internet speeds. Second, just because genAI dramatically changes what is possible, it doesn’t mean consumers have unmet needs that the services will solve. For example, I have not yet tried the AI Form Smart Swim goggles. I imagine the tech is amazing. The idea that I could correct my stroke in real time is powerful. And yet today, I often swat away grammar suggestions. Remember Clippy? I don’t always want to be perfect or to be improving.
7. Robots, Autonomous Vehicles, and Physical AI
Nvidia CEO Jensen Huang emphasized Physical AI—AI that reasons, navigates, and acts in the physical world. San Francisco has experienced Waymo's autonomous vehicles for 5-6 years. GenAI transforms both the economics and quality of robot programming, historically challenged by complex scenarios and nuanced movements—even simple tasks like unloading dishwashers.
Notable: Birdfy Bath Pro, Hisense G2 Humanoid, Narwal Flow 2 Robot, Samsung's Bespoke AI Refrigerator, Unitree Go2 Robot Dog
My take: Cutting to the chase here … the stories, reports, and images of robots coming out of China’s factories are mind-blowing. One article in the Economist reported that hundreds of thousands of citizens were applying for limited tickets to tour these factories. Think of Amazon warehouses. Robots can do repetitive work, work all day and night, and tackle dangerous jobs. They don’t need clean air to breathe. Employment rules don’t apply to them. For consumers, though, these machines are too expensive for most. Consumers will most likely experience agentic or autonomous machines in cars they already own - not new robots they’ll purchase to help at home.
