Choreographing intelligent experiences & the designer's evolving role

When intent and context converge, technology starts collaborating

In Part 1, we explored how the foundational design principles of Desirability, Viability, and Feasibility are being expanded by the Era of Design for Intelligence. To build the next generation of truly helpful products, we must add a new layer to our process: the ability to solve for a user's underlying Intent by harnessing the power of real-time Context.

————

The Seeds of Intelligence: Early Signals in Today's Products

This shift from reactive tools to proactive partners isn't a speculative future; the foundations for it were laid years ago by key thinkers, and its early signals are already appearing in the products we use every day.

Two foundational concepts are key. The first is the leap from understanding a user's request to understanding their true intent. This idea was championed by Alan Cooper in his Goal-Directed Design methodology, which argues for focusing on the why behind a task, not just the task itself. Julie Zhuo builds on this by framing it as the "job to be done"—the ultimate outcome a user wants to achieve.

The second concept is the power of context. An AI's ability to act on intent is unlocked by its grasp of a user's contextual data and environment. This mirrors Google's "micro-moments" framework, which identifies that people make decisions in fleeting moments of need—I-want-to-know, I-want-to-go, I-want-to-do.

When these foundational ideas are fused with AI, products begin to feel less like tools and more like partners. The following pioneering services offer a glimpse into this power, putting the principles of intent and context into practice. They are the proof points that this new paradigm works, even as they reveal how much runway is still ahead.

  • Google Maps as an EV Co-Pilot
    Current capability (early signals): Understands the driver's intent to avoid "range anxiety". It fuses context like the EV model, current charge, temperature, and real-time charger availability to choreograph the entire trip.
    Future potential (full intelligence): It could fuse calendar and user preference data to understand the why behind the journey. For a client meeting, it might proactively schedule a stop at a charger with a café to ensure a relaxed and timely arrival.

  • The Smart Home That Learns You
    Current capability (early signals): Moves beyond explicit commands by observing behavioral and environmental context, such as waking times and weather. It learns patterns to create a comfortable and efficient environment without constant input.
    Future potential (full intelligence): It could fuse biometric and digital context from health trackers. After detecting a restless night, it might proactively dim the morning lights and start a calming playlist to support the user's well-being.

  • Spotify, the Mind-Reading DJ
    Current capability (early signals): Excels at understanding the implicit intent to "find the perfect vibe". It moves beyond explicit signals (songs you've liked) to implicit ones (what people with similar tastes enjoy) to create a magical discovery experience.
    Future potential (full intelligence): It could fuse life context from calendars, location, and biometrics. Before a big presentation, it might detect an elevated heart rate and offer a playlist scientifically designed to improve focus, helping you achieve a goal.

As these pioneering services show, what once felt like magic is now becoming the baseline. They are proof points for a fundamental shift, meeting the new standard for a generation raised on predictive algorithms: the expectation of a service that anticipates what you want before you even know you want it. This is the power of designing for intent, activated by context. It allows us to move beyond static screens and choreograph dynamic systems that feel personal, proactive, and profoundly helpful—transforming passive tools into active, thoughtful partners.

Intelligence in Action – Examples of design with user intent infused by context

The Proactive Telco Voice Agent: A customer's intent when calling their provider isn't just to check their data balance; it's to feel secure and reliably connected. A standard bot answers the direct question. An intelligent agent fuses context: it sees the customer's plan details, their history of running low on data late in the month, and notes a complaint call they made last week about a now-resolved local network outage. Instead of just a data balance, the agent offers a profoundly helpful solution:

"Hi Alex. I can see you have 2 gigabytes left. I also know you tend to run low at this point in the month. Given the temporary network outage in your area last week—which I'm sorry about—we've added a complimentary 10-gigabyte data pack to your account to ensure you stay connected. Would you like me to activate it now?"

The Empathetic Retail Voice Agent: A customer's intent when asking "Where is my order?" isn't just to get a tracking number; it's to solve the problem that the package was meant for. They are feeling anxious. The agent fuses context: the order is a gift-wrapped baby shower present, the delivery is marked "delayed," and the customer recently visited a physical store. It acts on the user's true goal:

"Hi Sarah. I see you're asking about your gift order. I'm so sorry, it appears the delivery has been unexpectedly delayed and won't arrive until Monday. I know how important this must be. I've located the exact same item in stock at the store you visited last week, just ten minutes from your current location. I can have it held for you to pick up this afternoon, and I've already refunded your delivery fee. Would you like me to arrange that?"
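The fusion step behind the telco example can be sketched in a few lines of Python. Every field name and rule below is an illustrative assumption, not a real provider API; the point is that the agent answers the literal question and then acts on the inferred intent.

```python
from dataclasses import dataclass

# Hypothetical context signals for the telco scenario; the field names
# are illustrative assumptions, not a real provider's data model.
@dataclass
class CustomerContext:
    name: str
    data_left_gb: float
    usually_runs_low: bool        # inferred from billing-cycle history
    recent_outage_resolved: bool  # from network-incident records

def proactive_reply(ctx: CustomerContext) -> str:
    """Answer the literal question, then act on the inferred intent:
    the customer wants to feel reliably connected, not just informed."""
    reply = f"Hi {ctx.name}. You have {ctx.data_left_gb:g} GB left."
    if ctx.usually_runs_low and ctx.recent_outage_resolved:
        # Fused context unlocks a proactive, goodwill-building offer.
        reply += (" Given last week's outage, we've added a complimentary"
                  " 10 GB pack to your account. Activate it now?")
    elif ctx.usually_runs_low:
        reply += " You tend to run low around now. Want a top-up?"
    return reply  # a standard bot would stop after the first sentence

print(proactive_reply(CustomerContext("Alex", 2, True, True)))
```

The same shape applies to the retail example: the branch conditions are just different fused signals (gift order, delayed delivery, nearby store with stock).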

The Telco and Retail agents feel simple and magical to the end user, but achieving that magic requires a fundamental shift in how products are built. This level of proactive assistance isn't the result of a better user interface; it's the outcome of choreographing a complex system of data, logic, and personality. It demands that we move beyond designing static screens and embrace a systems-thinking mindset, along with expanded roles and behaviors.

Designing Systems, Not Screens

The foundational element of this new mindset is systems thinking. The shift from designing static objects to choreographing dynamic systems requires us to move beyond linear, cause-and-effect logic.

This framework was articulated best by Donella Meadows, whose work provides a Rosetta Stone for the AI era. Her most influential concept is the hierarchy of twelve "leverage points"—places to intervene in a system where a small shift can produce a large change. Meadows argues that the most powerful interventions involve changing the system's rules, its information flows, its overall goal, and, most powerfully, the underlying paradigm or mindset from which the system arises.

Designers and product managers often focus on the least effective leverage points, like tweaking UI elements; AI allows us to intervene at the rule and paradigm level.

Adopting this mindset is the first and most critical step. It prepares us for the specific skill shifts required to thrive in this new landscape.

Key Skill Shifts

To thrive in this new paradigm, designers, product managers (PMs), and engineers need not only new tools but new literacies:

  • Systems Thinking: We must stop thinking in pages and screens and start thinking in flows, feedback loops, and interconnected systems. We must become adept at mapping out how an AI will make decisions, how it will learn, and how it will adapt its behavior across an entire, non-linear user journey.

  • Data Literacy: While you don't need to be a data scientist, you must speak the language of data. Data is no longer just for A/B testing and validation; it's a primary design material. We must learn how to interpret data to find the stories within it, to understand context, and to shape the system's behavior with an evidence-based approach.

  • AI Training & Curation: Think of an AI as an apprentice; it's only as good as its teacher. Product teams will play a crucial, ongoing role in training these systems—crafting effective prompts, curating high-quality training data, and providing the constant, nuanced feedback needed to refine the AI's "judgment," its "personality," and its "common sense."

  • Designing for Graceful Failure: AI will mess up. It's inevitable, and it's okay. A critical new responsibility is designing for those moments of failure with empathy and foresight. How do we create clear, low-friction ways for users to correct the AI, to override its suggestions, and to teach it to do better next time? This is how you build trust and resilience into the system.

  • Qualitative Judgment: In a world drowning in quantitative data, a designer's qualitative judgment—our intuition, our empathy, our taste, our ethical compass—becomes our superpower. We are the voice of the user in the machine, the advocate for humanity, ensuring the system's behavior isn't just accurate, but also appropriate, considerate, and respectful.
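The "graceful failure" point above can be made concrete with a minimal sketch. The class and method names are hypothetical, not a real framework; the pattern is that every AI action is a suggestion carrying a low-friction override path, and corrections become training signal.

```python
# A minimal sketch of graceful failure: every AI action is a suggestion
# the user can accept or override, and overrides are recorded so the
# system can learn. All names here are illustrative assumptions.
class AdaptiveAssistant:
    def __init__(self):
        self.feedback = []  # corrections collected for later retraining

    def suggest(self, action: str, confidence: float) -> dict:
        # Low confidence could mean "ask first" rather than "act silently".
        return {"action": action, "confidence": confidence}

    def accept(self, suggestion: dict) -> str:
        self.feedback.append(("accepted", suggestion["action"]))
        return suggestion["action"]

    def override(self, suggestion: dict, user_action: str) -> str:
        # The user stays in control; the system records what "better" means.
        self.feedback.append(("corrected", suggestion["action"], user_action))
        return user_action

assistant = AdaptiveAssistant()
s = assistant.suggest("dim lights at 6:30", confidence=0.4)
done = assistant.override(s, "keep lights on")  # user corrects the AI
```

The design choice worth noting is that the override path is as cheap as the accept path; trust erodes quickly when correcting the system costs more than tolerating its mistake.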

These skills are cross-cutting, regardless of domain expertise. The necessity of these skills blurs the traditional lines between designer, product manager, and engineer. As each role adopts this new literacy, design ceases to be a specialized hand-off and becomes a deeply collaborative and continuous conversation—in short, a team sport.

Design as a Team Sport

This evolution from interface architect to system choreographer changes everything, reorienting the designer's role from crafting finite objects to orchestrating intelligent systems. We can see this shift captured in the language used by today's leading design thinkers. John Maeda labels it a move from User Experience (UX) to Agent Experience (AX).

The responsibility for good design no longer lies with a single role; it becomes a shared mindset across the entire product team. When the user experience is powered by data models and algorithms, the engineer and data scientist are making fundamental design choices. Success in this new era depends on every role embracing a human-centered perspective.

  • UI/UX Designers must move beyond static screens to design dynamic, adaptive systems, defining the rules for how an interface reconfigures itself based on user context.

  • Interaction Designers are now choreographing conversations. They must design the personality of an AI, define its tone, and create patterns for non-visual interactions like voice.

  • Product Managers must now define success not just in metrics, but in ethical guardrails for autonomous systems. Their deep understanding of AI's capabilities—and limitations—becomes the team's strategic compass.

  • Engineers and Data Scientists are now central to the design process, collaborating directly with designers to build, train, and refine the models that power these adaptive experiences.

Cheat codes for PMs, designers, and engineers

  • Product Managers must prioritise proactive outcomes, not just feature delivery. They focus on user intent by shifting personas from “tasks” to “end states”, and on user context by securing access to context-rich data streams.

  • Designers must learn to prototype behaviours, not just interfaces. They focus on user intent by shifting personas from “tasks” to “end states”, and on user context by mapping context as carefully as they map journeys.

  • Engineers must optimise for adaptability, not only efficiency. They focus on user intent by building systems that infer underlying goals rather than only parsing surface commands, and on user context by solving for privacy-first data fusion.

The Product Team's New Mindset: The Four Hats

To choreograph intelligent systems, the team needs a new mental model. We are no longer just architects of interfaces; we are choreographers of intelligence. This requires every member of the product team—PMs, engineers, data scientists, and designers—to become comfortable wearing four distinct hats, each defining a critical aspect of the final system.

1. The Strategist → Defines User Intent

Worn by Product Managers and UX Designers, this hat is obsessed with defining the core human intent the system must serve. The Strategist falls in love with the user's problem, not a predetermined solution; by focusing on the "Why," they set the system's ultimate goal. As Tony Fadell, creator of the iPod and Nest, emphasizes in Build: An Unorthodox Guide to Making Things Worth Making, you must "fall in love with the problem, not the solution".

Core Question: "How can intelligence help our user achieve their goal in a way that was previously impossible?"

2. The Architect → Designs System Behavior

Worn primarily by Engineers, Data Scientists, and Interaction Designers, this hat designs the system's functional behavior. The Architect maps the triggers, logic, and feedback loops that allow the system to interpret context, act on the user's intent, and improve over time.
Core Question: "How will this system understand what is happening, decide what to do, and learn from its actions?"
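The Architect's triggers, logic, and feedback loops reduce to a sense → decide → act → learn skeleton. The smart-home domain, signals, and rules below are assumptions for illustration only:

```python
# A sense -> decide -> act -> learn skeleton, the Architect's core concern.
# The smart-home domain, signals, and rules are illustrative assumptions.
def decide(context: dict) -> str:
    # Logic interprets context in service of the user's intent: a calm morning.
    if context.get("restless_night") and context.get("hour") == 7:
        return "dim_lights_and_play_calm_playlist"
    return "normal_morning_routine"

def learn(history: list, action: str, user_overrode: bool) -> None:
    # Feedback loop: outcomes, especially overrides, feed rule refinement.
    history.append({"action": action, "overrode": user_overrode})

history: list = []
action = decide({"restless_night": True, "hour": 7})  # trigger fires
learn(history, action, user_overrode=False)
```

However simple the rules start, the Architect's job is to keep this loop closed: every action the system takes should produce a signal it can later learn from.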

3. The Coach → Shapes Interaction Feel

The Interaction and User Experience Designers are the primary coaches, shaping the feel of the interaction. This hat defines the AI's persona, crafting its personality and tone to be appropriate for the user's emotional state. The Coach determines how the system's behavior is expressed to the user—whether as a formal expert, a friendly guide, or an invisible assistant.
Core Question: "Who should this AI be for the user in this context?"

4. The Guardian → Establishes Ethical Boundaries

While a team-wide responsibility, this hat is championed by the Product Manager and lead Designers. As the system's ethical conscience, the Guardian establishes its boundaries and guardrails. This role considers the societal context and potential for harm, defining what the system absolutely should not do.
Core Question: "We have the capability to build this, but should we?"

This shared mindset is critical because the design material is no longer just pixels; it's data and logic. The technical choices of the Architect—how a dataset is cleaned or a feedback loop is coded—directly create the system's behavior and shape the Coach's desired interaction feel. Grounding these technical decisions in the Strategist's core intent and within the Guardian's ethical boundaries is what ensures the final product is truly human-centric.

————

From Expanding Roles to Evolving Responsibility

Ultimately, wearing these four hats in concert is how teams can build systems worthy of human trust. But this new creative power is balanced by profound risk. While the first three hats give us the ability to design for intelligence, it is the fourth, the Guardian, that demands we design for responsibility.

In Part 3: Navigating the New Frontier, we confront the harder question: how to design for responsibility.

See Part 1: Designing for intelligence for a view on how we got here and the opportunities ahead of us.

Get in Touch