Case Study · Amazon · 2022

Alexa Smart Home
Hunches

Designing a proactive suggestion system that learned your smart home patterns and surfaced automations before you knew to ask for them.

Isometric overview of all Alexa Hunches app screens showing the full product ecosystem
Role
Lead Product Designer
Team
2 PM · 1 EM · 4 Eng · 2 Data Scientists
Timeline
2022 – 2023
Company
Amazon

Background

01

Problem

Only a small fraction of Alexa smart home owners had ever set up a routine. The configuration model required users to anticipate their own habits in advance — a fundamental mismatch with how habits actually form. The automation power existed; the on-ramp didn't.

02

Goal

Build a system that observed real usage patterns and surfaced proactive suggestions at the right moment — without requiring any setup. Make automation something that happens to you, not something you build.

03

Non-goal

Hunches was not a general assistant feature and not a replacement for the full Routines editor. It would not suggest calendar actions, shopping reminders, or anything outside the smart home device graph. Scope containment was intentional — it kept the trust bar achievable.

Target Users

Primary

Casual Smart Home Owner

Has 3–5 smart devices. Uses Alexa for music and timers. Has never set up a single routine — the configuration felt unnecessary or confusing.

Power User

Has 15+ devices and some routines configured, but finds them brittle. Wants smarter automations without rebuilding everything from scratch.

New Device Owner

Just bought a smart bulb or plug. Wants to "make it smart" but has no model for what that means in practice.

Privacy-Conscious User

Skeptical of always-on listening. Values smart home convenience but will disengage entirely if a suggestion feels like surveillance.

Research

Ran diary studies across 24 households to map the line between “helpful” and “creepy.”

Four-week diary studies with 24 households revealed that the same hunch — word for word — landed as either delightfully prescient or deeply unsettling depending on when it appeared and how the reasoning was framed. The device behavior was identical. The perception was not.

We benchmarked against Nest, IFTTT, SmartThings, and HomeKit automations to understand where the category had already built trust — and which patterns had trained users to tune out proactive suggestions entirely.

Design Principles

01

Show the reasoning

"I noticed you usually turn off the kitchen lights at 10pm." Surfacing the pattern behind the suggestion — not just the suggestion — was the single biggest trust driver in usability testing.

02

Make dismissal as easy as action

One tap to act. One tap to dismiss forever. If declining a hunch required more work than accepting it, users would stop engaging with hunches entirely rather than manage the noise.

03

Earn trust before acting autonomously

V1 asked before doing anything. Only after users had accepted multiple hunches did we introduce proactive actions — and only for low-stakes devices like lights. Locks and alarms required explicit opt-in.
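The gating rule above can be sketched as a small decision function. This is an illustrative sketch only — the device categories, threshold, and names (`LOW_STAKES`, `accepted_hunches`, `opted_in`) are assumptions, not the shipped Alexa logic:

```python
# Hypothetical sketch of the trust-gating rule: ask-first by default,
# act proactively only after earned trust, and never for high-stakes
# devices without explicit opt-in. All names/thresholds are illustrative.

LOW_STAKES = {"light", "plug", "speaker"}
HIGH_STAKES = {"lock", "alarm", "camera"}

def can_act_proactively(device_type: str,
                        accepted_hunches: int,
                        opted_in: bool,
                        min_accepted: int = 3) -> bool:
    """Return True only when acting without asking is permitted."""
    if device_type in HIGH_STAKES:
        return opted_in                      # explicit opt-in only
    if device_type in LOW_STAKES:
        return accepted_hunches >= min_accepted
    return False                             # unknown device class: always ask
```

The key design choice this encodes: trust is earned per device class, so a user who has accepted many lighting hunches still sees an ask-first flow for their front-door lock.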

Process

Q1 · Discovery + Diary Studies
Q2 · Concept Testing
Q3 · Prototype + Usability Testing
Q4 · V1 Launch + Instrumentation
Q1+ · Proactive Mode Rollout

Design Debate 01

Notification vs. In-App Card

We launched with in-app cards. Push notifications for hunches had strong engagement numbers in early testing, but diary study participants described them as "nosy" — especially when the hunch arrived mid-task. An in-app card in the Alexa home feed let users engage on their own terms, which reduced dismissal rates by 30%.

Push notification
Chosen: In-app Hunches card
Design Debate 02

Confidence Signal

We moved confidence into the language itself rather than showing a percentage. "I noticed you usually…" vs. "You sometimes…" vs. "It seems like…" carried implicit confidence without adding cognitive load. Users who saw numeric scores in testing fixated on the number rather than the suggestion.
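The confidence-to-copy mapping described above can be sketched as a simple threshold function. The thresholds and exact phrasings here are assumptions for illustration, not the shipped values:

```python
# Illustrative mapping from model confidence to hunch copy.
# Confidence is expressed through language, never a numeric score.

def hunch_opener(confidence: float) -> str:
    """Pick an opener whose hedging matches the model's confidence."""
    if confidence >= 0.8:
        return "I noticed you usually"
    if confidence >= 0.5:
        return "You sometimes"
    return "It seems like you might"

def hunch_copy(confidence: float, pattern: str) -> str:
    """Compose the full hunch sentence, pattern-first."""
    return f"{hunch_opener(confidence)} {pattern}."
```

For example, `hunch_copy(0.9, "turn off the kitchen lights at 10pm")` yields "I noticed you usually turn off the kitchen lights at 10pm." — the same suggestion at lower confidence would read as "You sometimes…", carrying the hedge implicitly.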

Show confidence score
Chosen: Hide confidence, use copy
Design Debate 03

Action Model

Ask first won for V1, even though proactive-with-undo had higher task completion in lab testing. The lab didn't capture the trust erosion that happened when participants reflected on it afterward — "it did something without asking me" — especially for users who joined mid-session and had never seen the onboarding. We shipped ask-first, then introduced optional proactive mode after 6 months of earned trust.

Act proactively, allow undo
Chosen: Ask first, then act

Final Design

The Hunch Card

Pattern + suggestion in one sentence

The card led with the observed pattern before the action. “I noticed you usually turn off the kitchen lights at 10pm” came first — not the ask. This framing reduced the perceived surveillance quality and increased acceptance rates in the final round of usability testing.

Automation Promoted

From hunch to routine in one tap

Accepting a hunch immediately created a named routine in the Routines editor. Users could see it, edit it, or delete it — which removed the “black box” concern that surfaced repeatedly in research.
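The promotion step above can be sketched as a small constructor: an accepted hunch becomes an ordinary, fully editable routine object. The field names and trigger format here are hypothetical, chosen only to illustrate the "no black box" property:

```python
# Hypothetical sketch of promoting an accepted hunch into a routine
# that appears in the Routines editor. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class Routine:
    name: str        # visible, human-readable name
    trigger: str     # e.g. a time of day
    actions: list    # device actions the routine performs

def promote_hunch(pattern: str, trigger: str, action: str) -> Routine:
    # The result is a plain routine the user can inspect, edit, or
    # delete -- identical to one they could have built manually.
    return Routine(name=f"Hunch: {pattern}", trigger=trigger, actions=[action])
```

Because the output is indistinguishable from a hand-built routine, users retain full control after accepting — which is what addressed the "black box" concern from research.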

Hunches History

Full control over what Alexa has learned

A dedicated Hunches history screen let users see every pattern Alexa had observed, dismiss patterns permanently, and control which device types were eligible. Privacy-conscious users engaged more after finding this screen — the transparency itself was the trust signal.

Outcomes

3M+

consumers reached across households with smart home devices

12M+

hunch interactions logged in the first year post-launch

increase in automation adoption vs. manual routine setup

“The ask-first model wasn’t a compromise — it was the product. Users who accepted three or more hunches had meaningfully higher long-term retention than those who set up routines manually.”