Choreographing intelligent experiences & the designer's evolving role

When intent and context converge, technology starts collaborating

The Paradigm in Practice

When you weave together a deep understanding of user intent with rich, multi-layered context, something magical happens. Products stop being passive tools and start feeling like active, thoughtful partners. This isn't some far-off, speculative future. It's happening right now, in subtle and powerful ways.

Just look around:

  • Google Maps as an EV Co-Pilot: A driver's intent isn't just to get from A to B; it's to complete a long journey without the dreaded "range anxiety." By understanding the context—their specific EV model, its current charge, the day's temperature (which affects battery life), and real-time data on charging station availability—Maps doesn't just give directions. It choreographs the entire trip. It weaves in optimal charging stops, tells you how long you'll need to charge, and dynamically reroutes if a station suddenly goes offline. It turns a stressful experience into a manageable one. The tool becomes a trusted co-pilot for the journey.

  • A Smart Home That Learns You: A user's intent isn't just to turn lights on and off; it's to create a comfortable and efficient living environment. A basic smart home relies on explicit commands and rigid schedules. An intelligent home observes behavioral context (when you wake up, when you leave, what temperature you prefer) and environmental context (the time of day, the weather outside). It then adapts. It learns to slowly raise the lights and start the coffee maker before your alarm on workdays. It knows to turn down the heat when everyone has left the house. It can even detect an unusual lack of motion and ask if everything is okay. The home moves from being a collection of remote controls to being a responsive, caring environment.

  • Spotify, the Mind-Reading DJ: The difference between anticipatory AI and traditional curation is perfectly captured in a personal anecdote from my own household. My teenage daughter, after recently moving from Spotify to Apple Music, has been vocal about how much she dislikes the new experience. She found Spotify's ability to surface new tracks she'd love almost "magical," and she feels that experience is completely missing from Apple Music, whose discovery engine can feel frustratingly static to users accustomed to a more dynamic system. Her frustration gets to the heart of this new design paradigm. A user's intent isn't just to "listen to music." It's to find the perfect vibe for what they're doing: to score the movie of their life. Spotify's strength lies in understanding this. The creation of Discover Weekly, led by Matthew Ogle, was a landmark in this space because it moved beyond explicit signals (songs you've liked) to implicit ones (what people with similar tastes are listening to). This data-driven, anticipatory model stands in stark contrast to competitors like Apple Music, which is often commended for its high-quality, human-curated playlists but critiqued for feeling less personal and slower to adapt. Spotify's system anticipates what you want before you even know you want it.
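The smart-home example above can be sketched as a simple rules-over-context loop: observe a snapshot of behavioral and environmental signals, then map it to adaptive actions. Everything below is a hypothetical illustration invented for this article, not any vendor's API; the field names, the 20-minute pre-wake window, and the 4-hour motion threshold are all assumptions.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    """A snapshot of behavioral and environmental signals (hypothetical)."""
    current_time: time
    is_workday: bool
    anyone_home: bool
    learned_wake_time: time   # inferred from past behavior, not a set alarm
    minutes_since_motion: int

def minutes(t: time) -> int:
    return t.hour * 60 + t.minute

def decide_actions(ctx: Context) -> list[str]:
    """Map the current context to adaptive actions."""
    actions = []
    # Pre-wake routine: ramp lights and start coffee shortly before the
    # wake time the system has learned, but only on workdays.
    minutes_to_wake = minutes(ctx.learned_wake_time) - minutes(ctx.current_time)
    if ctx.is_workday and 0 < minutes_to_wake <= 20:
        actions += ["ramp_lights", "start_coffee"]
    # Energy saving: lower the heat when the house is empty.
    if not ctx.anyone_home:
        actions.append("lower_heat")
    # Well-being check: an unusual lack of motion triggers a check-in.
    if ctx.anyone_home and ctx.minutes_since_motion > 240:
        actions.append("ask_if_ok")
    return actions
```

The design choice worth noting: the rules read *learned* values (like `learned_wake_time`) rather than user-entered schedules, which is precisely what separates the "responsive environment" from a collection of remote controls.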

In every case, the design isn't a static screen. It's a dynamic system, a carefully choreographed experience that feels personal, proactive, and profoundly helpful.

The Designer's New Job Description

This shift demands that we, as designers, fundamentally evolve. Our focus has to move beyond perfecting static layouts and toward defining the logic, the personality, and the very behavior of dynamic systems. John Maeda, in his Design in Tech Report 2025, captures this evolution with the idea of the "autodesigner" and the shift from User Experience (UX) to Agent Experience (AX). He argues that with AI, the user can "teleport directly to their goal with a simple prompt," bypassing much of the traditional interface. This change means designers are being "reoriented to orchestrate AI interactions" rather than just crafting finite objects. This echoes Jared Spool's idea that designers must transition from being "gatekeepers" of a fixed design to "gardeners" who cultivate and tend to an ever-evolving system. We are becoming the choreographers of intelligence, the directors of a digital dance between user and machine.

That means we need new skills and, just as importantly, a new way of thinking. The shift from designing static objects to choreographing dynamic systems requires us to move beyond linear, cause-and-effect logic and embrace the principles of systems thinking.

Perhaps no one has articulated this better than Donella Meadows, a pioneering environmental scientist whose work provides the foundational language for this approach. In her crucial text, Thinking in Systems: A Primer, she gives us the tools to understand the complex, interconnected nature of the systems we are now building. For designers, her insights are a Rosetta Stone for the AI era.

Meadows breaks down systems into three basic components: elements (the parts, like users, data, and UI components), interconnections (the relationships and rules that govern how the parts interact), and a purpose or function. She stresses that the purpose is the most crucial determinant of a system's behavior. This framework forces us to ask: What is the true purpose of the AI system we are creating?
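Meadows' three components translate directly into a working vocabulary for product teams. A minimal sketch, using a hypothetical music-discovery system as the example; the class shape, field names, and every string below are my own illustration, not anything from Meadows or Spotify:

```python
from dataclasses import dataclass, field

@dataclass
class System:
    """Meadows' three components, applied to a product (illustrative)."""
    purpose: str  # the most crucial determinant of the system's behavior
    elements: list[str] = field(default_factory=list)  # the parts
    # each interconnection is a (source, rule, target) triple
    interconnections: list[tuple[str, str, str]] = field(default_factory=list)

recommender = System(
    purpose="help the listener find the right music for this moment",
    elements=["listener", "listening history", "similar-taste cohort", "playlist UI"],
    interconnections=[
        ("listener", "generates implicit signals via", "listening history"),
        ("listening history", "is matched against", "similar-taste cohort"),
        ("similar-taste cohort", "feeds candidate tracks into", "playlist UI"),
    ],
)

def audit(system: System) -> list[str]:
    """Render each interconnection as a sentence, so the team can ask
    Meadows' forcing question: does this rule actually serve the purpose?"""
    return [f"{src} {rule} {dst}" for src, rule, dst in system.interconnections]
```

Writing the purpose down as a first-class field, rather than leaving it implicit, is the point of the exercise: every element and rule can then be audited against it.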

Her most influential concept is the hierarchy of twelve "leverage points"—places to intervene in a system where a small shift can produce a large change. Designers and product managers often focus on the least effective leverage points, like changing parameters (e.g., A/B testing a button color). Meadows argues that the most powerful interventions involve changing the system's rules, its information flows, its overall goal, and, most powerfully, the underlying paradigm or mindset from which the system arises.

Adopting this mindset is the first and most critical step. It prepares us for the specific skill shifts required to thrive in this new landscape.

Key Skill Shifts

  • Systems Thinking: We have to stop thinking in pages and screens and start thinking in flows, feedback loops, and interconnected systems. We must become adept at mapping out how an AI will make decisions, how it will learn, and how it will adapt its behavior across an entire, non-linear user journey.

  • Data Literacy: You don't need to be a data scientist, but you need to speak the language. Data is no longer just for A/B testing and validation; it's a primary design material. We must learn how to interpret data to find the stories within it, to understand context, and to shape the system's behavior with an evidence-based approach.

  • AI Training & Curation: An AI is like an apprentice. It's only as good as its teacher. Designers will play a crucial, ongoing role in training these systems—crafting effective prompts, curating high-quality training data, and providing the constant, nuanced feedback needed to refine the AI's "judgment," its "personality," and its "common sense."

  • Designing for Graceful Failure: AI will mess up. It's inevitable, and it's okay. A critical new responsibility is designing for those moments of failure with empathy and foresight. How do we create clear, low-friction ways for users to correct the AI, to override its suggestions, and to teach it to do better next time? This is how you build trust and resilience into the system.

  • Qualitative Judgment: In a world drowning in quantitative data, a designer's qualitative judgment—our intuition, our empathy, our taste, our ethical compass—becomes our superpower. We are the voice of the user in the machine, the advocate for humanity, ensuring the system's behavior isn't just accurate, but also appropriate, considerate, and respectful.
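Several of these skill shifts meet in one concrete mechanism: a low-friction correction loop, where a user's override is honored immediately and also recorded as feedback that shifts future behavior. A minimal sketch under that assumption; the class, method names, weights, and playlist names are all hypothetical:

```python
from collections import Counter

class SuggestionLoop:
    """A correct-and-learn loop: overrides are honored now and counted
    as feedback that reshapes what gets suggested next time."""

    def __init__(self, default: str):
        self.feedback = Counter({default: 1})

    def suggest(self) -> str:
        # Suggest whatever has accumulated the most positive feedback.
        return self.feedback.most_common(1)[0][0]

    def accept(self, suggestion: str) -> None:
        self.feedback[suggestion] += 1  # implicit positive signal

    def override(self, suggestion: str, correction: str) -> str:
        # Honor the user's correction immediately; an explicit
        # correction is a stronger signal than passive acceptance...
        self.feedback[correction] += 2
        # ...and gently demote, never harshly punish, the rejected idea.
        self.feedback[suggestion] = max(0, self.feedback[suggestion] - 1)
        return correction

loop = SuggestionLoop(default="focus_playlist")
loop.override("focus_playlist", "lofi_playlist")
loop.suggest()  # now "lofi_playlist": a single correction was enough to adapt
```

The asymmetric weights encode the "graceful failure" stance: the user's explicit teaching moment counts for more than any implicit signal, and a rejected suggestion is demoted rather than discarded, so the system stays correctable in both directions.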

The Four Hats a Designer Must Wear

To navigate this new world, designers need to get comfortable wearing four hats, often all at once, like a one-person product leadership team:

  1. The Strategist (The 'Why'): The Strategist looks past the immediate feature request to the core human intent. They are obsessed with the "job to be done." As Tony Fadell, creator of the iPod and Nest, emphasizes in Build: An Unorthodox Guide to Making Things Worth Making, you must "fall in love with the problem, not the solution." The Strategist asks: "What is the user really trying to accomplish here, and how can intelligence help them do it in a way that was impossible before?"

  2. The Architect (The 'How'): The Architect designs the system's logic, its cognitive framework. They map the contextual triggers, the decision trees, the feedback loops, and the learning pathways. They ask: "How will this system know what's going on? How will it decide what to do? And how will it learn and improve over time?"

  3. The Coach (The 'Who'): The Coach is the AI's acting coach, its personality designer. They are responsible for how the AI "feels" to the user. Is it a formal, all-knowing expert? A friendly, encouraging guide? Or a quiet, almost invisible assistant? They shape its tone of voice, its level of proactivity, and its entire interaction style.

  4. The Guardian (The 'Should'): The Guardian is the ethical conscience of the project. They are paid to be paranoid, to think like a skeptic. They anticipate potential harms, champion transparency, and fight relentlessly for user control and privacy. They ask the most important question in the room: "We can do this, but should we?"

By embracing these roles, we can move from being builders of interfaces to being shepherds of truly intelligent, responsible, and human-centric experiences.

Coming up in Part 3: We've explored the 'how' and 'who' of designing for intelligence. In the final part of this series, we'll tackle the most complex questions: the ethical challenges and profound responsibilities that come with this new power. We'll discuss why 'Responsibility' is the new, non-negotiable pillar of product development.
