Choreographing intelligent experiences & the designer's evolving role
When intent and context converge, technology starts collaborating
In Part 1, we explored how the foundational design principles of Desirability, Viability, and Feasibility need to be joined by Responsibility as an added principle in the Era of Design for Intelligence. To build the next generation of truly helpful products, shifting from reactive tools to proactive intelligent solutions, we must solve for a user's underlying Intent by harnessing the power of real-time Context.
————
The Seeds of Intelligence: Early Signals in Today's Products
The shift from reactive tools to proactive partners isn't theoretical—early signals already exist in products we use daily. The following services demonstrate Intent and Context in practice, while revealing how much further we can go:
| Pioneering Service | Current Capability (Early Signals) | Future Potential (Full Intelligence) |
| --- | --- | --- |
| Google Maps as an EV Co-Pilot | Understands the driver's intent to avoid "range anxiety". It fuses context like the EV model, current charge, temperature, and real-time charger availability to choreograph the entire trip. | It could fuse calendar and user preference data to understand the why behind the journey. For a client meeting, it might proactively schedule a stop at a charger with a café to ensure a relaxed and timely arrival. |
| The Smart Home That Learns You | Moves beyond explicit commands by observing behavioral and environmental context, such as waking times and weather. It learns patterns to create a comfortable and efficient environment without constant input. | It could fuse biometric and digital context from health trackers. After detecting a restless night, it might proactively dim the morning lights and start a calming playlist to support the user's well-being. |
| Spotify, the Mind-Reading DJ | Excels at understanding the implicit intent to "find the perfect vibe". It moves beyond explicit signals (songs you've liked) to implicit ones (what people with similar tastes enjoy) to create a magical discovery experience. | It could fuse life context from calendars, location, and biometrics. Before a big presentation, it might detect an elevated heart rate and offer a playlist scientifically designed to improve focus, helping you achieve a goal. |
As these services show, what once felt like magic is becoming the baseline. They have raised a generation of users on predictive algorithms, training them to expect anticipation over reaction. With advancing AI, this expectation will only intensify: users will gravitate toward solutions that feel personal, proactive, and profoundly helpful.
Intelligence in Action – examples of design with user intent infused by context
The Proactive Telco Voice Agent: A customer's intent when calling their provider isn't just to check their data balance; it's to feel secure and reliably connected. A standard bot answers the direct question. An intelligent agent fuses context: it sees the customer's plan details, their history of running low on data late in the month, and notes a complaint call they made last week about a now-resolved local network outage. Instead of just a data balance, the agent offers a profoundly helpful solution:
"Hi Alex. I can see you have 2 gigabytes left. I also know you tend to run low at this point in the month. Given the temporary network outage in your area last week—which I'm sorry about—we've added a complimentary 10-gigabyte data pack to your account to ensure you stay connected. Would you like me to activate it now?"
The Empathetic Retail Voice Agent: A customer's intent when asking "Where is my order?" isn't just to get a tracking number; it's to solve the problem that the package was meant for. They are feeling anxious. The agent fuses context: the order is a gift-wrapped baby shower present, the delivery is marked "delayed," and the customer recently visited a physical store. It acts on the user's true goal:
"Hi Sarah. I see you're asking about your gift order. I'm so sorry, it appears the delivery has been unexpectedly delayed and won't arrive until Monday. I know how important this must be. I've located the exact same item in stock at the store you visited last week, just ten minutes from your current location. I can have it held for you to pick up this afternoon, and I've already refunded your delivery fee. Would you like me to arrange that?"
The Telco and Retail agents feel simple and magical to the end user, but achieving that magic requires a fundamental shift in how products are built. This level of proactive assistance isn't the result of a better user interface; it's the outcome of choreographing a complex system of data, logic, and personality. It demands that we move beyond designing static screens and embrace a systems-thinking mindset, along with expanded roles and behaviors.
From Tweaks to Transformation: Applying Systems Thinking
The shift from designing static objects to choreographing dynamic systems requires us to move beyond linear, cause-and-effect logic and embrace the principles of systems thinking. These principles are nothing new, but they are now an essential mindset for success.
Systems thinking is best articulated by Donella Meadows, whose work provides a Rosetta Stone for the AI era. Her most influential concept is the hierarchy of twelve "leverage points": places to intervene in a system where a small shift can produce a large change. Meadows argues that the most powerful interventions involve changing the system's rules, its information flows, its overall goal, and, most powerfully, the underlying paradigm or mindset from which the system arises.
Designers and product managers often focus on the least effective levers, like tweaking UI elements. AI allows us to intervene at the level of rules and paradigms.
Low Leverage: Changing the Numbers
Meadows identifies "Parameters" (like numbers and settings) as the lowest-impact leverage point. This is the realm of most conventional UX design.
What it is: Tweaking UI elements, changing the number of search results, adjusting the size of a button, or slightly modifying an algorithm's weighting.
The Trap: We spend countless hours optimizing these parameters. While sometimes necessary, it’s like meticulously rearranging the deck chairs on the Titanic. It improves the immediate experience but does nothing to change the system's fundamental behavior or outcome. A better button doesn't make a reactive tool proactive.
High Leverage: Changing the Information Flow & Rules
This is where true intelligence begins to emerge. The proactive Telco and Retail agent examples above don't just have a better UI; they operate on a foundation of superior information flows and entirely new rules.
Lever 6: The Structure of Information Flows
This lever focuses on who has access to what information, and when. A lack of information in the right place at the right time is a primary cause of system failure.
The Old System (Retail Agent): The information flow was siloed. The customer had the anxiety, the tracking system had the location, and the inventory system had the stock levels. The agent was a simple conduit for one piece of that information (the tracking number).
The New System (Retail Agent): The AI agent fundamentally restructures this flow. It fuses previously separate streams of context—order status, gift-wrap flag, local store inventory, and the customer's location—into a single, actionable insight. It doesn't just pass information; it synthesizes it to create a new, superior solution. The magic isn't in the interface; it's in the choreography of data.
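The restructured information flow can be sketched in code. This is a minimal illustration, not a real implementation: the function, field names, and store identifiers are all hypothetical, standing in for what would be live order, inventory, and location services.

```python
from dataclasses import dataclass

@dataclass
class Order:
    item_id: str
    status: str         # e.g. "delayed", "in_transit"
    gift_wrapped: bool

def fuse_context(order, store_inventory, recent_store_visits):
    """Fuse previously siloed streams into one actionable insight."""
    if order.status != "delayed":
        # No problem to solve; the old conduit behavior is sufficient.
        return {"action": "report_status", "status": order.status}
    if order.gift_wrapped:
        # The underlying intent is a timely gift, not a tracking number:
        # look for the same item at a store the customer already knows.
        for store in recent_store_visits:
            if store_inventory.get((store, order.item_id), 0) > 0:
                return {"action": "offer_store_pickup", "store": store,
                        "remedy": "refund_delivery_fee"}
    return {"action": "apologize_and_expedite"}

order = Order(item_id="baby-gift-42", status="delayed", gift_wrapped=True)
inventory = {("downtown", "baby-gift-42"): 3}
print(fuse_context(order, inventory, ["downtown"]))
```

The value is not in any single lookup but in the join: each data source alone yields only a status report, while the fusion yields a remedy aligned with the customer's actual goal.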
Lever 5: The Rules of the System
This lever dictates the "if-then" logic that governs the system's behavior. AI allows us to write and execute far more sophisticated and empathetic rules at scale.
The Old Rule (Telco Agent): "IF a customer asks for their data balance, THEN provide the number from the database." This is a rigid, reactive rule.
The New Rule (Telco Agent): "IF a customer asks for their data balance, THEN analyze their usage history, recent network experience, and current plan, and proactively offer the most helpful solution to ensure their future connectivity."
This new rule, enabled by AI, transforms the agent's function from a simple information clerk into a proactive problem-solver. You've changed the agent's very definition of "a job well done."
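The contrast between the two rules can be made concrete with a sketch. The data shapes and thresholds below are assumptions for illustration; a production agent would draw on real usage analytics and network telemetry.

```python
# Old rule: rigid and reactive. "IF asked for balance, THEN return it."
def old_rule(customer):
    return f"You have {customer['balance_gb']} GB left."

# New rule (sketch): same trigger, but the response is synthesized from
# usage history and recent network experience to solve the real problem.
def new_rule(customer):
    reply = f"You have {customer['balance_gb']} GB left."
    # Hypothetical heuristic: ran out of data in two or more recent months.
    runs_low = sum(1 for m in customer["usage_history"] if m["ran_out"]) >= 2
    had_outage = any(e["type"] == "outage" for e in customer["recent_events"])
    if runs_low and had_outage:
        reply += (" Given last week's outage and your usage pattern, we've"
                  " added a complimentary 10 GB pack. Activate it now?")
    elif runs_low:
        reply += " Would you like a top-up before the end of the month?"
    return reply

alex = {
    "balance_gb": 2,
    "usage_history": [{"ran_out": True}, {"ran_out": True}, {"ran_out": False}],
    "recent_events": [{"type": "outage"}],
}
print(new_rule(alex))
```

Note that the trigger never changed; only the rule governing the response did. That is exactly Meadows's point about where the leverage lies.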
The Highest Leverage: Changing the Goal & The Paradigm
The most powerful way to change a system is to alter its ultimate purpose and the deep-seated beliefs from which it arises. This is the final and most crucial step in designing for intelligence.
Lever 3: The Goal of the System
These examples don't just fulfill a query better; they serve a completely different, more human-centric goal.
The Old Goal (Retail): "Fulfill the order and close the support ticket." The system is optimized for transactional efficiency.
The New Goal (Retail): "Ensure the customer achieves their underlying intent (giving a timely gift) and relieve their anxiety." The system is now optimized for the user's emotional and practical success.
When you change the goal, every component of the system—from the AI agent's rules to the data it accesses—realigns to serve this new, more profound purpose.
Lever 1: The Power to Transcend Paradigms
The ultimate leverage point is shifting the shared mindset from which the system was born. This is the core of this article's argument.
The old paradigm was Human-Computer Interaction. It viewed technology as a set of tools that a user must correctly operate to get a result.
The new paradigm is Human-AI Augmentation. It assumes the "product" is an active partner that understands intent, anticipates needs, and collaborates with the user to achieve their goals.
By moving beyond designing screens and applying these higher-level levers, we are not just building better products. We are fundamentally changing the paradigm of how technology can serve humanity—transforming passive tools into the active, thoughtful partners we envisioned from the start.
Key Skill Shifts
To thrive in this new paradigm, designers, product managers (PMs), and engineers need not only new tools but new literacies:
Systems Thinking: We must stop thinking in pages and screens and start thinking in flows, feedback loops, and interconnected systems. We must become adept at mapping out how an AI will make decisions, how it will learn, and how it will adapt its behavior across an entire, non-linear user journey.
Data Literacy: While you don't need to be a data scientist, you must speak the language of data. Data is no longer just for A/B testing and validation; it's a primary design material. We must learn how to interpret data to find the stories within it, to understand context, and to shape the system's behavior with an evidence-based approach.
AI Training & Curation: Think of an AI as an apprentice; it's only as good as its teacher. Product teams will play a crucial, ongoing role in training these systems—crafting effective prompts, curating high-quality training data, and providing the constant, nuanced feedback needed to refine the AI's "judgment," its "personality," and its "common sense."
Designing for Graceful Failure: AI will mess up. It's inevitable, and it's okay. A critical new responsibility is designing for those moments of failure with empathy and foresight. How do we create clear, low-friction ways for users to correct the AI, to override its suggestions, and to teach it to do better next time? This is how you build trust and resilience into the system.
Qualitative Judgment: In a world drowning in quantitative data, a designer's qualitative judgment—our intuition, our empathy, our taste, our ethical compass—becomes our superpower. We are the voice of the user in the machine, the advocate for humanity, ensuring the system's behavior isn't just accurate, but also appropriate, considerate, and respectful.
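The "Designing for Graceful Failure" skill above can be sketched as a simple correction loop. Everything here is illustrative: the class, method names, and the fixed placeholder suggestion are assumptions standing in for a real model and feedback pipeline.

```python
class GracefulAgent:
    """Sketch of graceful failure: every suggestion is easy to override,
    and every override is captured as a training signal."""

    def __init__(self):
        self.feedback_log = []  # hypothetical store feeding later retraining

    def suggest(self, context):
        # Placeholder "model": a fixed guess standing in for real inference.
        return {"action": "dim_lights", "confidence": 0.62}

    def resolve(self, suggestion, user_override=None):
        if user_override is not None:
            # Low-friction correction: accept immediately, remember why.
            self.feedback_log.append(
                ("corrected", suggestion["action"], user_override))
            return user_override
        self.feedback_log.append(("accepted", suggestion["action"], None))
        return suggestion["action"]

agent = GracefulAgent()
s = agent.suggest({"time": "07:00"})
print(agent.resolve(s, user_override="keep_lights_on"))  # user corrects the AI
```

The design choice worth noting: the override path is cheaper for the user than arguing with the system, and the correction is never discarded. That is how failure moments become the raw material for trust.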
These skills are cross-cutting, regardless of domain expertise. Their necessity blurs the traditional lines between designer, product manager, and engineer. As each role adopts this new literacy, design ceases to be a specialized hand-off and becomes a deeply collaborative and continuous conversation—in short, a team sport.
Design as a Team Sport
This evolution from interface architect to system choreographer changes everything, reorienting the designer's role from crafting finite objects to orchestrating intelligent systems. We can see this shift captured in the language used by today's leading design thinkers. John Maeda labels it a move from User Experience (UX) to Agent Experience (AX).
The responsibility for good design no longer lies with a single role; it becomes a shared mindset across the entire product team. When the user experience is powered by data models and algorithms, the engineer and data scientist are making fundamental design choices. Success in this new era depends on every role embracing a human-centered perspective.
UI/UX Designers must move beyond static screens to design dynamic, adaptive systems, defining the rules for how an interface reconfigures itself based on user context, whilst also becoming context curators.
Interaction Designers now choreograph conversations and the contours of responsive, intelligent applications. They must design the personality of an AI, define its tone, and create patterns for non-visual interactions like voice.
Product Managers must now define success not just in metrics, but in ethical guardrails for autonomous systems. Their deep understanding of AI's capabilities—and limitations—becomes the team's strategic compass.
Engineers and Data Scientists are now central to the design process, collaborating directly with designers to build, train, and refine the models that power these adaptive experiences.
Cheat codes for PMs, designers, and engineers
| | Product Managers | Designers | Engineers |
| --- | --- | --- | --- |
| Must… | Prioritise proactive outcomes, not just feature delivery | Learn to prototype behaviours, not just interfaces | Optimise for adaptability, not only efficiency |
| Focus on user intent by… | Shifting personas from “tasks” to “end states” | Shifting personas from “tasks” to “end states” | Building systems that infer underlying goals rather than only parsing surface commands |
| Focus on user context by… | Securing access to context-rich data streams | Mapping context as carefully as they map journeys | Solving for privacy-first data fusion |
The Product Team's New Mindset: The Four Hats
To choreograph intelligent systems, the team needs a new mental model. We are no longer just architects of interfaces; we are choreographers of intelligence. This requires every member of the product team—PMs, engineers, data scientists, and designers—to become comfortable wearing four distinct hats, each defining a critical aspect of the final system.
1. The Strategist → Defines User Intent
Worn by Product Managers and UX Designers, this hat is obsessed with defining the core human intent the system must serve. The Strategist falls in love with the user's problem, not a predetermined solution. By focusing on the "Why," they set the system's ultimate goal. As Tony Fadell, creator of the iPod and Nest, emphasizes in Build: An Unorthodox Guide to Making Things Worth Making, you must "fall in love with the problem, not the solution".
Core Question: "How can intelligence help our user achieve their goal in a way that was previously impossible?"
2. The Architect → Designs System Behavior
Worn primarily by Engineers, Data Scientists, and Interaction Designers, this hat designs the system's functional behavior. The Architect maps the triggers, logic, and feedback loops that allow the system to interpret context, act on the user's intent, and improve over time.
Core Question: "How will this system understand what is happening, decide what to do, and learn from its actions?"
3. The Coach → Shapes Interaction Feel
The Interaction and User Experience Designers are the primary coaches, shaping the feel of the interaction. This hat defines the AI's persona, crafting its personality and tone to be appropriate for the user's emotional state. The Coach determines how the system's behavior is expressed to the user—whether as a formal expert, a friendly guide, or an invisible assistant.
Core Question: "Who should this AI be for the user in this context?"
4. The Guardian → Establishes Ethical Boundaries
While a team-wide responsibility, this hat is championed by the Product Manager and lead Designers. As the system's ethical conscience, the Guardian establishes its boundaries and guardrails. This role considers the societal context and potential for harm, defining what the system absolutely should not do.
Core Question: "We have the capability to build this, but should we?"
This shared mindset is critical because the design material is no longer just pixels; it's data and logic. The technical choices of the Architect—how a dataset is cleaned or a feedback loop is coded—directly create the system's behavior and shape the Coach's desired interaction feel. Grounding these technical decisions in the Strategist's core intent and within the Guardian's ethical boundaries is what ensures the final product is truly human-centric.
————
From Expanding Roles to Evolving Responsibility
Ultimately, wearing these four hats in concert is how teams can build systems worthy of human trust. But this new creative power is balanced by profound risk. While the first three hats give us the ability to design for intelligence, it is the fourth, the Guardian, that demands we design for responsibility.
In Part 3: Navigating the New Frontier, we confront that harder question: how to design for responsibility.
See Part 1: Designing for intelligence for a view on how we got here and the opportunities ahead of us.