Navigating the new frontier

Delivering trust and more human-centric, AI-infused experiences in a responsible manner

This is Part 3 of Designing for Intelligence, a three-part series exploring the paradigm shift reshaping product design in the age of AI. The core thesis is simple but profound: creating truly helpful products is no longer about designing static, usable interfaces. Instead, it's about choreographing adaptive, intelligent systems that anticipate our underlying intent and respond to the rich context of our lives. This series will unpack what that means for designers, our skills, and our responsibilities.

With Great Power...

In Part 2, we explored the new roles that designers and product teams need to embrace in this era of designing for intelligence. Embracing these new roles—strategist, architect, coach, and guardian—gives product teams incredible, almost breathtaking power. The ability to understand someone's context, anticipate their intent, and act on their behalf is profound. And with that power comes an unavoidable responsibility. As we venture into this new paradigm, the classic product development framework of Desirability, Feasibility, and Viability feels dangerously incomplete, like a three-legged stool on the verge of toppling over. Each of those pillars is being reshaped by AI, and a fourth, non-negotiable pillar has emerged, demanding equal footing: Responsibility.

Desirability 2.0: From 'Usable' to 'Trustworthy'

In the age of AI, desirability hinges on trust, not just usability. As Rachel Botsman notes in Who Can You Trust?, "The currency of the new economy is trust." This applies tenfold to AI systems that operate with autonomy.

Agency & Control: Users must feel in the driver's seat. Provide clear "off-switches," ways to decline suggestions, and the ability to correct the system when it's wrong.

The 'Creepy' Line: Give users tools to draw their own boundaries through granular control over their data and how it's used.

Ethical Alignment: In the age of AI, ethical alignment requires a fundamental shift in our metrics of success—moving beyond simplistic measures of "engagement" to prioritizing genuine human well-being. As Tristan Harris, Co-Founder of the Center for Humane Technology, warns, the relentless optimization for engagement is not a neutral act. He argues it creates a "race to the bottom of the brain stem," a system that learns to exploit our primal instincts for distraction, outrage, and social validation to keep us hooked. This is not a hypothetical flaw; it's the business model that, as Harris points out in his recent TED talk, has already led to "the most anxious and depressed generation of our lifetime" through social media.

To avoid repeating these mistakes with the far more powerful technology of AI, our systems must be designed to empower, not just engage. This means designing for values like focus, reflection, and meaningful connection. It requires building systems that respect our attention, honor our intentions, and ultimately, help us live the lives we want to live, not just keep our eyes glued to a screen.

In Practice:

  • Conduct "trust audits" during design sprints to map vulnerable moments

  • Track trust metrics alongside engagement and conversion rates

  • Implement "agency checks" in design reviews: Can users see, undo, and understand what happened?

  • Research emotional responses—discomfort, unease, manipulation—not just usability friction

  • Define clear decision trees for when to ask permission vs. act autonomously (see the sketch after this list)

  • Establish ethical guardrails early in the roadmap, not as afterthoughts
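
To make the decision-tree point concrete, here is a minimal sketch of what such logic could look like once written down as code. Everything in it is an assumption for illustration: the thresholds, the impact levels, and the `ActionContext` fields would need to come from your own risk analysis. The value is that the permission logic becomes explicit, reviewable, and testable rather than implicit in scattered product decisions.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ACT_AUTONOMOUSLY = "act"      # low stakes, high confidence, easily reversible
    ASK_PERMISSION = "ask"        # meaningful impact or meaningful uncertainty
    SUGGEST_ONLY = "suggest"      # surface the option, take no action
    DO_NOTHING = "abstain"        # too uncertain or too risky to raise at all


@dataclass
class ActionContext:
    confidence: float      # model confidence in the proposed action (0-1)
    reversible: bool       # can the user easily undo it?
    impact: str            # "low" | "medium" | "high" (money, privacy, safety)
    user_opted_in: bool    # has the user granted autonomy for this category?


def decide(ctx: ActionContext) -> Decision:
    """Illustrative decision tree: when to act, ask, or stay quiet."""
    if ctx.impact == "high":
        # High-stakes actions always require explicit consent.
        return Decision.ASK_PERMISSION if ctx.confidence >= 0.75 else Decision.SUGGEST_ONLY
    if not ctx.user_opted_in:
        return Decision.SUGGEST_ONLY
    if ctx.confidence >= 0.9 and ctx.reversible:
        return Decision.ACT_AUTONOMOUSLY
    if ctx.confidence >= 0.6:
        return Decision.ASK_PERMISSION
    return Decision.DO_NOTHING
```

A team would tune these thresholds per action category, but the shape of the logic stays the same: stakes and reversibility, not just model confidence, decide whether the system acts, asks, or stays quiet.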

Feasibility 2.0: From 'Code' to 'Data and Models'

Technical feasibility is now inseparable from the data and models that power AI systems, introducing new challenges.

Data Feasibility: The biggest hurdle is often the data itself. As Kate Crawford notes in Atlas of AI, "AI systems are not autonomous, rational, or able to discern anything without extensive forms of human labor and data." Flawed or biased data produces flawed, biased results that perpetuate inequities.

Talent Feasibility: You can't build these systems with yesterday's teams. The designer-developer duo must expand to include ML engineers, data scientists, prompt engineers, AI ethicists, and even sociologists or psychologists.

In Practice:

  • Conduct data inventory and gap analysis early: What data exists? What is its quality? Whose perspectives are represented, and whose are missing? (See the sketch after this list.)

  • Create a "data roadmap" parallel to your feature roadmap

  • Factor in data preparation costs (typically 60-80% of AI development effort)

  • Reassess team composition: Do you have the interdisciplinary expertise needed?

  • Build shared language between technical and non-technical team members

  • Ensure ML engineers understand user needs and designers grasp technical constraints
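
As one concrete starting point for the data inventory bullet above, a gap analysis can begin with counting whose data you actually have. The sketch below is illustrative only: it assumes a tabular dataset with a hypothetical group column and a reference population mix that you supply, and flags segments that fall short of that reference.

```python
import pandas as pd


def representation_gaps(df: pd.DataFrame,
                        group_col: str,
                        reference_shares: dict[str, float],
                        tolerance: float = 0.05) -> pd.DataFrame:
    """Compare each group's share of the dataset to a reference share.

    `reference_shares` is an assumption you provide: the population mix the
    product should serve (e.g. from census or customer data).
    """
    observed = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "dataset_share": round(share, 3),
            "reference_share": expected,
            "gap": round(share - expected, 3),
            "under_represented": share + tolerance < expected,
        })
    return pd.DataFrame(rows)


# Hypothetical usage:
# gaps = representation_gaps(training_df, "age_band",
#                            {"18-24": 0.15, "25-54": 0.55, "55+": 0.30})
# print(gaps[gaps.under_represented])
```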

Viability 2.0: From 'ROI' to 'Risk'

Business viability now requires a sober risk calculus alongside return on investment. The potential for catastrophic reputational and financial damage has never been higher.

The New Cost Structure: Building and maintaining AI—from data storage and labelling to energy-intensive computational power—introduces major new factors in long-term financial viability.

Reputational Risk: One instance of harmful content, clear bias, or a major data leak can destroy a brand built over decades. The cost of getting it wrong has never been higher.

In Practice:

  • Model potential losses, not just revenue: What if your AI makes a viral discriminatory decision? What if regulations force restructuring?

  • Run "pre-mortem" exercises to imagine spectacular failures and work backwards to prevention

  • Plan staged rollouts with clear kill-switch criteria (see the sketch after this list)

  • Establish cross-functional risk review boards (legal, communications, ethics—not just product/engineering)

  • Create transparent communication plans for when things go wrong

  • Account for ongoing costs: infrastructure, model retraining, data storage, specialized talent
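
To turn kill-switch criteria from a slide into something checkable, one option is to encode them as configuration that every stage review runs against. The metric names and thresholds below are placeholders, not recommendations; the point is that the criteria are agreed before launch and evaluated mechanically, not debated after an incident.

```python
# Hypothetical kill-switch criteria for a staged rollout.
# Metric names and thresholds are illustrative placeholders.
KILL_SWITCH_CRITERIA = {
    "harmful_content_reports_per_10k": 5,    # user reports of harmful output
    "bias_disparity_ratio_max": 1.25,        # worst-case outcome ratio between groups
    "automated_action_error_rate": 0.02,     # autonomous actions later reversed by users
    "p95_unresolved_appeals_days": 7,        # recourse backlog
}

ROLLOUT_STAGES = [0.01, 0.05, 0.25, 1.0]     # share of traffic exposed at each stage


def should_halt(observed_metrics: dict) -> list[str]:
    """Return the list of criteria breached; any breach pauses the rollout."""
    breached = []
    for metric, threshold in KILL_SWITCH_CRITERIA.items():
        if observed_metrics.get(metric, 0) > threshold:
            breached.append(metric)
    return breached


# Hypothetical usage during a stage review:
# breaches = should_halt({"bias_disparity_ratio_max": 1.4})
# if breaches:
#     pause_rollout(reason=breaches)  # pause_rollout is your own escalation hook
```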

The Fourth Pillar: Responsibility

This is the big one. This is what separates the enduring companies from the fleeting ones. Because of AI's power to shape experiences at scale, Responsibility can't be an afterthought, a PR statement, or a checkbox on a project plan. It has to be a foundational pillar, as important and as well-resourced as Desirability, Feasibility, and Viability. Mike Monteiro, in his book Ruined by Design, puts it bluntly: "The world isn't broken. It's working exactly as it was designed to work. And we're the ones who designed it. We're the ones who need to fix it."

While Desirability ensures users trust your system, Feasibility ensures you can build it, and Viability ensures it survives in the market, Responsibility ensures you deliver AI that genuinely works for everyone—creating benefit for individual users, your organization, and society at large while minimizing potential harms.

User Responsibility: Protecting Individuals from Harm

AI systems interact with real people in vulnerable moments—making decisions, offering recommendations, shaping what they see and hear. At the user level, responsibility means safeguarding individual dignity, agency, and safety. Users must be protected from manipulation, exploitation, and systems that fail them in consequential ways.

Key considerations:

  • Safety & Protection: How might this system harm individual users? Who is most vulnerable to misuse—children, elderly users, people in crisis? What safeguards prevent exploitation?

  • Agency & Consent: Do users understand when AI is involved in their experience? Can they contest decisions? Do they have meaningful control, or just the illusion of choice?

  • Transparency: Can users understand why the system made a particular decision affecting them? Are explanations accessible and actionable, not just technically accurate?

  • Recourse: When the system fails someone, what happens? Is there a clear path to report problems, challenge decisions, or opt out entirely?

In Practice:

  • Design "agency checks" for every autonomous action: Can users see it, understand it, undo it?

  • Create red team exercises specifically focused on vulnerable user scenarios

  • Build accessible explanation mechanisms ("We suggested this because..."); see the sketch after this list

  • Establish clear escalation paths for users who feel harmed or manipulated

  • Test with diverse users to identify who your system might fail
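
The explanation bullet above can be prototyped with very little machinery. The sketch below assumes you already have a ranked list of the signals behind a recommendation (how you derive them, for instance from feature attributions, is out of scope here) and maps them to plain-language, user-facing reasons; the signal names and copy are hypothetical.

```python
# Hypothetical mapping from internal signals to plain-language reasons.
# The signal names and copy are illustrative, not a real product's schema.
REASON_COPY = {
    "recent_purchase_similarity": "it is similar to something you bought recently",
    "saved_for_later": "you saved a similar item for later",
    "location_relevance": "it is popular near your current location",
}


def explain_recommendation(top_signals: list[str], max_reasons: int = 2) -> str:
    """Build a short 'We suggested this because...' string from ranked signals."""
    reasons = [REASON_COPY[s] for s in top_signals if s in REASON_COPY][:max_reasons]
    if not reasons:
        # Never show users an explanation you cannot stand behind.
        return "We suggested this based on your activity. You can turn these suggestions off."
    return "We suggested this because " + " and ".join(reasons) + "."


# print(explain_recommendation(["saved_for_later", "location_relevance"]))
# -> "We suggested this because you saved a similar item for later
#     and it is popular near your current location."
```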

Organizational Responsibility: Building Accountable Teams & Processes

At the organizational level, responsibility means creating the structures, governance, and culture needed to build AI ethically. This isn't just about having good intentions—it's about embedding accountability into how teams work, make decisions, and respond when things go wrong.

Key considerations:

  • Governance & Oversight: Who is empowered to raise ethical concerns? Who has authority to pause or kill features? Are there clear decision rights when ethical and business priorities conflict?

  • Interdisciplinary Expertise: Do you have the right voices in the room—ethicists, diverse perspectives, domain experts who understand potential harms in your context?

  • Accountability Structures: When harm occurs, can you trace it back to specific decisions and decision-makers? Are there consequences for cutting ethical corners?

  • Continuous Learning: How does your organization learn from failures—yours and others'? Are post-mortems about blame or about improvement?

In Practice:

  • Establish cross-functional ethics review boards with real authority, not just advisory roles

  • Create "ethical kill switch" criteria: conditions that trigger pausing or rolling back features

  • Document decisions and their rationales, especially when overruling ethical concerns

  • Build ethics checkpoints into your development process, not just at launch

  • Invest in training teams on responsible AI practices and emerging risks

  • Create protected channels for team members to raise concerns without retaliation

Societal Responsibility: Considering Collective & Long-Term Impact

AI systems don't just affect individuals—they shape communities, amplify inequalities, and create path dependencies that echo into the future. Societal responsibility means grappling with collective impact: who benefits, who's excluded, and what world we're building through our choices.

Key considerations:

  • Equity & Inclusion: Whose voices, cultures, and perspectives are missing? Which communities will this system serve well, and which will it systematically fail? Are you creating technology that narrows or widens existing inequalities?

  • Systemic Effects: What happens when your AI operates at scale? Do feedback loops amplify biases? Does your system create network effects that harm competition or concentrate power?

  • Long-Term Stewardship: What are the environmental costs? How will this system evolve, and what unintended behaviors might emerge? What obligations persist after launch?

  • Regulatory & Ethical Norms: Are you staying ahead of or behind emerging standards? Are you waiting for regulation to force responsibility, or leading the way?

In Practice:

  • Actively seek out underrepresented users in research and testing

  • Model systemic effects: what happens if 10,000 companies adopt your approach?

  • Create transparency reports on equity metrics, not just business metrics

  • Establish ongoing monitoring for bias drift and emergent harms (see the sketch after this list)

  • Build feedback mechanisms that make it easy for marginalized users to report failures

  • Design for graceful degradation and the ability to sunset features that cause harm

  • Consider environmental impact in architecture choices: right-sized models, efficient deployment

  • Engage with external stakeholders—academics, civil society, affected communities
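
To ground the bias-drift monitoring bullet above, a first pass can be as simple as tracking an outcome-rate disparity across groups over time and alerting when it crosses a line agreed on in advance. The column names, metric, and threshold below are assumptions for illustration; a real deployment would choose fairness metrics suited to the decision being made.

```python
import pandas as pd


def disparity_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the highest to the lowest positive-outcome rate across groups.

    If any group's rate is zero the ratio is infinite, which will always flag.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() / rates.min())


def check_bias_drift(weekly_frames: list[pd.DataFrame],
                     group_col: str = "group",
                     outcome_col: str = "approved",
                     threshold: float = 1.25) -> list[int]:
    """Return the indices of weeks whose disparity ratio exceeds the threshold."""
    flagged = []
    for week_index, frame in enumerate(weekly_frames):
        if disparity_ratio(frame, group_col, outcome_col) > threshold:
            flagged.append(week_index)
    return flagged


# Hypothetical usage: weekly_frames is a list of per-week decision logs, each
# with a "group" column and a boolean "approved" outcome column.
# alert_weeks = check_bias_drift(weekly_frames)
```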

Responsibility at these three levels isn't about perfection—it's about building awareness, accountability, and safeguards into every stage of product development. The companies that will endure in the AI era are those that recognize this fourth pillar as non-negotiable, as fundamental to success as creating desirable, feasible, and viable products.

Embedding responsibility into our process from the start is the only way to ensure we're not just building things right, but building the right things for a better future. AI introduces path dependency: models trained today shape future capabilities. Shortcuts in data quality or ethical review can create long-term technical debt. Designers and PMs must weigh not just short-term ROI, but whether today's data and model choices lock the product into unsustainable futures.

The Designer's Burden and Advocacy

Designers have always been the champions of users within product development—advocating for clarity, accessibility, and delight when business pressures push toward shortcuts. But in the age of AI, this role carries exponentially greater weight. The blast radius of our decisions has expanded dramatically.

The amplification effect is real: A poorly designed button confuses one user at a time. A biased AI system can discriminate against thousands simultaneously, learning from and amplifying that bias with each interaction. A manipulative interface pattern might frustrate individuals. An AI optimized purely for engagement can reshape behavior and beliefs at societal scale. What used to be isolated usability problems can now become systemic harms that perpetuate themselves autonomously.

This reality transforms the designer's role from user advocate to something closer to guardian. We must now speak not just for usability, but for human impact across all four pillars of product development:

In Desirability conversations, we ask: Does this build genuine trust, or does it manipulate? Where does helpful assistance cross into creepy surveillance? Can users meaningfully consent, or are we designing dark patterns at scale?

In Feasibility discussions, we push back: Whose voices are missing from this training data? What biases are we baking into the foundation? Do we have the right expertise to understand the harms this might cause?

In Viability planning, we surface uncomfortable truths: What's the reputational cost if this goes wrong? Are we creating technical debt that becomes ethical debt? What happens when this scales beyond our ability to monitor?

In Responsibility decisions, we hold the line: Who could this harm? Who have we excluded? What safeguards are non-negotiable, even if they slow us down?

The hard truth: Designers are often not the ultimate decision-makers. We don't control budgets, timelines, or business models. But our advocacy has never been more critical. We are often the only voice in the room consistently asking: "But should we?" when everyone else is asking "Can we?" and "Will it make money?"

This expanded burden requires expanded courage. It means:

  • Naming potential harms explicitly, even when they're uncomfortable to discuss

  • Bringing receipts—showing examples of similar systems that failed and why

  • Building alliances with engineers, ethicists, and other stakeholders who share concerns

  • Documenting our objections when we're overruled, creating accountability

  • Sometimes drawing hard lines: "I cannot in good conscience design this"

Our advocacy matters, even when it doesn't "win." It slows down dangerous momentum. It creates paper trails that surface in post-mortems. It gives cover to others with concerns. It plants seeds that germinate later. Most importantly, it ensures that if harm occurs, it won't be because no one spoke up—it will be because someone chose to ignore clear warnings.

The stakes have never been higher. Our traditional role of championing the user has expanded into something more profound: we are the conscience of AI-infused systems. Not because we're more moral than our colleagues, but because our craft demands we stay connected to the human impact of what we build. This isn't a burden to endure—it's a responsibility to embrace. The products that will define the next era of technology are being designed right now, and designers must be at the table, speaking clearly and forcefully for the humans on the other side of every AI decision.

Conclusion: Designing a More Human AI

AI has unlocked something profound: the ability to understand what people truly want (their intent) and respond to the full context of their lives. When we fuse these two—intent and context—products transform from tools that wait for commands into partners that anticipate, adapt, and genuinely help. This is the opportunity before us.

Erika Hall, in Conversational Design, reminds us: "Start from the customer's intent, the questions they ask, the needs they express. How do you answer, and where do you answer?" This truth has always guided good design. AI hasn't changed it—it's made it more urgent and raised the stakes immeasurably.

But building these intelligent systems demands more than new technology. It requires new ways of working. The complexity of orchestrating data, logic, and behavior at scale means we can no longer think in terms of screens and buttons—we must think in systems, feedback loops, and emergent behaviors. Success depends on every role embracing shared responsibility: designers who understand data, engineers who grasp user needs, product managers who champion ethics alongside metrics. We're building complex systems to deliver simple, human experiences.

This expanded capability brings expanded obligation. The traditional three considerations—Desirability, Feasibility, and Viability—served us well when we built static tools. But AI's power to act autonomously at scale demands we add a fourth: Responsibility. These four dimensions help us navigate the inevitable tradeoffs in product development, balancing what users want with what we can build, what survives in the market, and what genuinely benefits everyone while minimizing harm. Responsibility isn't optional—it must operate at every level: protecting individual users, embedding accountability in our organizations, and considering collective impact on society.

This is where designers become guardians. The blast radius of our decisions has expanded exponentially. A poorly designed button once confused one user at a time; a biased AI system can discriminate against thousands simultaneously, learning from and amplifying that bias with each interaction. We are no longer just advocates for usability—we must be the conscience of AI-infused systems, asking hard questions across all four dimensions: Does this build genuine trust? Can we build it responsibly? Will it survive scrutiny? Does it deliver real benefit?

The ultimate goal isn't to replace human judgment, creativity, or connection, but to augment them. We can build systems that anticipate needs, respect values, and empower people to achieve their goals in ways we're only beginning to imagine. The work is complex and the stakes are high, but this is our defining opportunity. The products that will shape the next era are being designed right now. We must build them with creativity, conscience, and courage.

——

My deep appreciation and thanks to Tertia Tay and Gordon Candelin, who have helped shape these thought bubbles. It is a joy to be sharing my professional journey with you both!

Read Part 1: Designing for Intelligence, where we argue that the AI era demands an expanded framework of considerations and focus areas, moving beyond surface-level needs to user intent and context.

Read Part 2: Choreographing Intelligent Experiences, where we explored how to choreograph intelligent experiences and examined the evolving roles of designers and product teams, and the expanded capabilities needed to execute.

In this Part 3: Navigating the New Frontier, we navigate the critical ethical frontier and introduce a new pillar of responsibility, essential for building a more human AI.

Get in Touch