Designing for Intelligence
For decades design made tools usable; now it must make them intelligent
Welcome to Designing for Intelligence, a three-part series exploring the paradigm shift reshaping product design in the age of AI. The core thesis is simple but profound: creating truly helpful products is no longer about designing static, usable interfaces. Instead, it’s about choreographing adaptive, intelligent systems that anticipate our underlying intent and respond to the rich context of our lives. This series will unpack what that means for the skills and responsibilities of designers and product teams.
Designing for Intelligence
My teenage daughter, after recently moving from Spotify to Apple Music, described a fundamental disconnect with the new experience. She considered Spotify's ability to surface new tracks she'd love to be an almost "magical" yet standard feature. To her, this was a baseline expectation, an experience she found missing in a service that felt less personal and slower to adapt.
Her perspective gets to the heart of a new design paradigm. For her generation, a user's intent isn't just to "listen to music". It's to find the perfect vibe for what they're doing—to score the movie of their life, effortlessly. This generational expectation has been built from years of algorithmic predictions that try to anticipate what you want before you even know you want it.
That generational expectation of products and services that are less reactive and more proactive is further accelerated by the emergence of powerful machine intelligence.
A New Framework for a New Era
The powerful foundation that successful product design has rested on for decades is no longer enough. That foundation was the Era of User-Centered Design, where we balanced Desirability, Feasibility, and Viability. But with the emergence of powerful machine intelligence, that framework is now dangerously incomplete.
This new Era of Design for Intelligence demands two critical shifts in our approach.
First, we must expand our foundational framework. The ability of AI to anticipate our needs means that trust has become the new currency of user experience. We must add a fourth, non-negotiable pillar: Responsibility.
Second, our definition of a "desirable" product must evolve. For years, designers masterfully crafted the User Interface (UI) to deliver a seamless User Experience (UX). Now, we must shift to a more powerful tandem: UI/UC—the integration of the User Interface with a deep understanding of User Intent and User Context.
This three-part series is a guide to navigating these two fundamental shifts. We will explore how to solve for a user's underlying intent by harnessing the rich context of their lives. We will then introduce a new collaborative mindset—the 'Four Hats'—that every product team member must wear to choreograph these intelligent systems. Finally, we will navigate the critical ethical frontiers that make Responsibility an essential pillar for building a more human AI.
The Quiet Limit of Usability
The 'smart' devices in our lives are not intelligent; they are obedient, following commands with precision but lacking comprehension. A smart speaker can follow a recipe, but it doesn't know you're making a special meal for a homesick friend. This is the core limitation of the foundational paradigm: it's passive, it waits for explicit instruction, a collection of reactions, not comprehension.
For decades, User-Centered Design was our North Star, and for good reason. Pioneered by Don Norman, it taught us to listen, to observe, and to build products that felt intuitive and were often a joy to use. It’s a principle Don Norman captured perfectly in The Design of Everyday Things: "Good design is actually a lot harder to notice than poor design, in part because good designs fit our needs so well that the design is invisible." It worked beautifully. But the ground is shifting beneath our feet.
The age of AI demands more than just usability, more than just frictionless interfaces. It demands genuine intelligence delivered in a responsible manner.
The principles of Desirability, Viability, and Feasibility perfected the art of creating competent, reactive tools. But in the AI era, users expect a proactive partnership.
A desirable calendar shows you your appointments clearly.
An intelligent one sees back-to-back meetings across town, alerts you to traffic, and tells you when to leave.
A desirable camera takes the picture you frame beautifully.
An intelligent one waits for the moment everyone is smiling with their eyes open.
This leap from a reactive tool to a proactive partner is the new frontier of desirability. Our job is no longer just arranging pixels; it's orchestrating living experiences that learn, adapt, and collaborate with the user to achieve their goals.
As a friend and colleague of mine so eloquently put it: “As designers, we must stop designing for tasks and start designing for relationships between human intent and machine intelligence.”
Standing on the Shoulders of Giants
So, do we throw out everything we've learned from the masters of User-Centered Design? Not at all. That would be like a master chef discarding their knowledge of basic knife skills. Instead, we must see this new paradigm as an evolution, building upon the foundations we know.
The groundwork for this evolution was laid decades ago by Don Norman, as his own thinking progressed from User-Centered to the broader scopes of Human-Centered and Humanity-Centered Design. This layered framework provides an essential roadmap for responsible innovation in the age of AI, transforming core design concepts into new mandates:
User-Centered Design (UCD) gives us the fundamentals of usability and clarity. In the age of AI, where systems are often proactive and opaque, these principles evolve. We must now treat trust as a fundamental design material. It is no longer just an outcome of good usability; it is the core component we must consciously build with to create a successful user-product relationship.
Human-Centered Design (HCD) expands our view to the user's broader social and emotional context. When AI is introduced, this focus forces us to confront bias as a critical design risk. Because intelligent systems learn from data that reflects real-world inequities, HCD demands that we proactively identify, manage, and mitigate the risk that our products will amplify societal harm.
Humanity-Centered Design scales our perspective to the global stage. This framework is essential for AI, making us recognize systemic impact as a design obligation. The power of AI to reshape economies and societies means we are obligated to consider the long-term, large-scale consequences of our work from day one.
This progression isn't just academic; it shows that our responsibility expands as technology's impact grows. The principles of UCD don't disappear—they are amplified and applied at each of these higher levels. AI acts as a powerful catalyst, transforming core design practices into something new:
Contextual Inquiry blossoms into Continuous Contextual Awareness. Instead of observing a user for a few hours, AI systems can learn from a constant, privacy-respecting stream of environmental and behavioral cues.
Designing for a Mental Model becomes Dynamically Adapting to a Mental Model. Where we once designed for a static persona, AI systems can now adjust their behavior in real-time as a user’s goals and expertise evolve.
Iterative Feedback Loops transform into Continuous Automated Learning Loops. The classic design-test-iterate cycle that took weeks can now happen in milliseconds, as AI systems test, learn, and refine their responses automatically.
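To make that last shift tangible, here is a minimal sketch of a continuous learning loop, assuming a simple score-and-update model; the class, method names, and signals are illustrative, not any particular system's API.

```typescript
// A minimal sketch of a continuous learning loop: the system suggests,
// observes implicit feedback, and updates its preferences in milliseconds
// rather than waiting for the next design-test-iterate cycle.
// All names here (Suggestion, FeedbackLoop, etc.) are hypothetical.

type Suggestion = { id: string; score: number };

class FeedbackLoop {
  // Learned preference weights per suggestion, updated continuously.
  private weights = new Map<string, number>();
  constructor(private learningRate = 0.1) {}

  // Rank candidate suggestions by what the loop has learned so far.
  rank(candidates: string[]): Suggestion[] {
    return candidates
      .map((id) => ({ id, score: this.weights.get(id) ?? 0.5 }))
      .sort((a, b) => b.score - a.score);
  }

  // Fold an implicit signal (a tap, a dismissal, dwell time) back into the model.
  observe(id: string, reward: number): void {
    const current = this.weights.get(id) ?? 0.5;
    this.weights.set(id, current + this.learningRate * (reward - current));
  }
}

// Usage: every interaction becomes a micro-iteration of the design loop.
const loop = new FeedbackLoop();
loop.observe("quiet-cafe", 1); // user accepted the suggestion
loop.observe("loud-bar", 0);   // user dismissed it
console.log(loop.rank(["quiet-cafe", "loud-bar"]));
```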
Ultimately, the foundational principles of good design remain our bedrock. AI simply gives us more powerful leverage to build upon it, demanding we think not just about the user, but about the human and humanity at the center of our work.
The New Pillars of Intelligent Design
To meet the challenges magnified by the scale, autonomy, and complexity of AI-infused products and solutions, we need a more robust framework than human-centered design and usability alone can provide.
To meet modern user expectations, we must move beyond exceptional UI/UX and elevate our focus to designing for user intent and user context. They are the levers that allow us to steer these powerful systems, ensuring that their scale, autonomy, and complexity serve human needs, rather than overwhelming them.
1. The Leap from ‘Need’ to ‘Intent’
A need is what someone says they want. It's the explicit request. "I need to find a coffee shop." It's a simple, surface-level task, a problem to be solved.
An intent is the why behind the need—the underlying, often unstated goal. It's the story behind the task. "I need to find a quiet coffee shop with reliable Wi-Fi so I can prep for a huge meeting in 45 minutes, and I'm feeling a little stressed, so a place with a calming atmosphere would be a huge plus."
This distinction is the core of Alan Cooper's Goal-Directed Design methodology, introduced in his book About Face: The Essentials of Interaction Design. Cooper argues that focusing on user goals (their intent) rather than the tasks they perform (their needs) leads to dramatically better product outcomes. This aligns with what Julie Zhuo, former VP of Design at Facebook, calls understanding the "job to be done." In The Making of a Manager, she emphasizes focusing on the user's desired outcome, a principle AI can now address with unprecedented nuance.
Designing for the need gets you a list of nearby cafes, sorted by distance. Functional, but generic. Designing for the intent gets you the right cafe, a hidden gem with comfortable chairs and fast Wi-Fi, with directions that guarantee you'll arrive on time and unstressed. This is the heart of Goal-Directed Design, now supercharged by AI's ability to perceive and process nuance. When we focus on the user’s ultimate objective, their desired end-state, we can deliver solutions that aren't just functional, but profoundly, almost magically, helpful.
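To make the distinction concrete for a product team, here is a minimal sketch of how the same request might be modelled as a need versus an intent; the types, fields, and scoring are illustrative assumptions rather than a prescribed schema.

```typescript
// A hypothetical sketch of the gap between a stated need and an inferred intent.

// The need: the explicit, surface-level request.
type Need = { query: string }; // "coffee shop near me"

// The intent: the underlying goal and the constraints that actually matter.
type Intent = {
  goal: string;                    // "prep for a huge meeting"
  deadlineMinutes: number;         // 45
  requirements: string[];          // ["reliable wifi", "quiet"]
  mood?: "stressed" | "relaxed";   // shapes the atmosphere we optimize for
};

type Cafe = { name: string; distanceKm: number; wifiScore: number; noiseLevel: number };

// How well a cafe fits the intent: wifi matters, noise is penalized more when
// the user is stressed, and distance still counts but is no longer the only signal.
const fit = (cafe: Cafe, intent: Intent): number =>
  cafe.wifiScore - cafe.noiseLevel * (intent.mood === "stressed" ? 2 : 1) - cafe.distanceKm;

// Designing for the need: a generic list, sorted by distance.
const forNeed = (cafes: Cafe[]): Cafe[] =>
  [...cafes].sort((a, b) => a.distanceKm - b.distanceKm);

// Designing for the intent: rank by fit with the user's actual goal.
const forIntent = (cafes: Cafe[], intent: Intent): Cafe[] =>
  [...cafes].sort((a, b) => fit(b, intent) - fit(a, intent));
```

The particular weights are beside the point; what matters is that the ranking function now has something richer than distance to optimize for.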
2. The Power of ‘Context’ as a Multiplier
If intent is the destination, context is the living, breathing map of the journey. It's the rich tapestry of information that allows an AI to make smart, relevant, and timely decisions. This is often the hardest part for machines to grasp. This mirrors Google's "micro-moments" framework, which identifies that people make decisions in fleeting moments of need—I-want-to-know, I-want-to-go, I-want-to-do. AI's ability to understand context allows us to serve these moments perfectly. It's not just one thing; it's a fusion of different layers that, when combined, create a high-fidelity picture of the user's world:
Temporal: What time is it? Is it 8 AM on a frantic weekday morning, or 2 PM on a lazy Sunday afternoon? The user's priorities and mindset are completely different.
Spatial: Where are you? Are you at home, in the office, on a crowded train, or in a new city for the first time? Each location carries its own set of constraints and opportunities.
Environmental: What's going on around you? Is it raining, which might change your travel plans? Is it loud, suggesting you'd prefer a text notification over a phone call?
Behavioral: What are your habits? What have you done before in similar situations? This is the layer of personal history that allows a system to learn your preferences and routines.
Digital: What else is happening in your digital world? What's on your calendar for the next hour? What did you just search for on the web? This digital footprint provides crucial clues about your immediate goals.
When you fuse these layers, a product can move beyond simple personalization (like remembering your favorite order) to true, meaningful adaptation. It can see you're heading to the gym after work (calendar + location) and proactively suggest a healthy, high-protein dinner spot on your route home (behavioral + temporal), perhaps even one that has a special offer you'd like. Designing for intent, activated by context, gets you the right offer in a convenient spot. This fusion is the engine that drives truly intelligent design. When you weave together a deep understanding of user intent with rich, multi-layered context, something magical happens. Products stop being passive tools and start feeling like active, thoughtful partners.
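As a rough illustration of that fusion, the sketch below combines the five layers for the gym-and-dinner scenario; the signals, thresholds, and field names are assumptions made for the example, and a real product would only gather them with explicit, privacy-respecting consent.

```typescript
// A minimal sketch of context fusion: several weak signals combine into one
// timely, proactive suggestion. Names and thresholds are hypothetical.

type Context = {
  temporal: { hour: number; isWeekday: boolean };
  spatial: { nearRouteHome: boolean };
  environmental: { raining: boolean };
  behavioral: { usuallyEatsAfterGym: boolean };
  digital: { nextCalendarEvent?: string };
};

type ProactiveSuggestion = { message: string; confidence: number };

// Fuse the layers into a single suggestion, or stay silent.
function suggestDinner(ctx: Context): ProactiveSuggestion | null {
  const headedToGym = ctx.digital.nextCalendarEvent === "Gym";
  const dinnerWindow = ctx.temporal.hour >= 17 && ctx.temporal.isWeekday;

  if (!headedToGym || !dinnerWindow || !ctx.behavioral.usuallyEatsAfterGym) {
    return null; // no relevant intent detected; a thoughtful partner knows when to be quiet
  }

  return {
    message: ctx.environmental.raining
      ? "High-protein dinner spot with indoor seating on your route home?"
      : "High-protein dinner spot on your route home?",
    confidence: ctx.spatial.nearRouteHome ? 0.9 : 0.6,
  };
}
```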
Beyond the 'What': The 'How' and the 'Should'
This shift from a reactive tool to a proactive partner marks a new horizon for product design. Achieving this level of intelligence, however, requires more than just new technology; it demands a fundamental evolution in how our teams think and work together.
In Part 2: Choreographing Intelligent Experiences, we will explore the new collaborative mindset—the 'Four Hats'—that every member of a product team must wear to choreograph these intelligent experiences.
Following that, in Part 3: Navigating the New Frontier, we will navigate the critical ethical frontiers and introduce a new pillar of responsibility essential for building a more human AI.