

Trust Series: Part 3
Rethinking permissions in a world of autonomous AI decisions
The advent of AI agents, operating on both the enterprise and consumer sides, means that we now need to think about how decisions get automated, and with what permission. This is going to impact experience, brand, compliance and a bunch of other things, so buckle up for a dip into the exciting emerging world of dynamic consent.
Think of an AI assistant that does more than just recommend the best route home; it spots a bargain on flights to Bali and books them instantly—no prompts or final checks. Exciting, or unsettling? As AI morphs from a passive helper into a decision-maker handling financial, healthcare, or personal tasks, the concept of dynamic consent becomes critical.
Informed by the ideas of thought leaders in the trust space, this article tackles how we can shift from static, one-off permissions to adaptive, user-guided controls that evolve alongside personal AI. We’ll reference Doc Searls’ VRM approach for user-led data, Cory Doctorow’s caution against “dark patterns,” Dave Birch’s insights on digital identity, Dr Joy Buolamwini’s focus on algorithmic fairness, and Jamie Smith’s forward-looking stance on Empowerment Tech and customer-centric futures.
What is dynamic consent?
Dynamic consent means users can revise or revoke permissions over time, rather than clicking a single "I agree" pop-up at the outset (thank you, GDPR). It recognises that user comfort, legal requirements, and AI capabilities all change, so permissions must be adaptable. If we don't design this properly, think of all those pop-ups. Eugh!
Personal AI agents: from suggestions to decisions
Rather than just telling you the best flight price, personal AI agents might negotiate deals or finalise transactions. For instance:
- A finance assistant that rebalances your investment portfolio each month.
- A healthcare chatbot refilling prescriptions or scheduling specialist appointments.
- A mortgage AI scouting lenders and locking in the best rate—sometimes without live human oversight.
Sarah Gold’s work underlines that as AI autonomy grows, so does the need for granular user controls—privacy, consent, and explainability. Similarly, Doc Searls emphasises the user’s power to “pull” services on their own terms, rather than passively accept push-based interactions.
How it works: designing for dynamic consent and autonomous decisions
Setting permission levels
Instead of a yes/no model, dynamic consent allows tiered autonomy (sketched in code after the list):
- Recommend only: The AI can propose ideas but needs explicit sign-off.
- Negotiate & return: The AI can explore options, but the user reviews the best deal before proceeding.
- Negotiate & buy: The AI fully executes transactions under pre-set criteria. This can get extremely complex around financial products or health data, of course.
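To make the idea concrete, here's a minimal sketch of how tiered autonomy might be encoded as structured policy data. Everything here is an illustrative assumption on our part (the type names, the `spendLimit` field, the expiry logic), not a reference to any particular platform's API:

```typescript
// A minimal sketch of tiered autonomy as data, not a definitive schema.
// All names and fields here are illustrative assumptions.

type AutonomyTier = "recommend_only" | "negotiate_and_return" | "negotiate_and_buy";

interface ConsentPolicy {
  tier: AutonomyTier;
  spendLimit?: number;   // hard ceiling for autonomous purchases, in the user's currency
  categories: string[];  // domains this policy covers, e.g. ["flights", "hotels"]
  grantedAt: Date;       // when the user last confirmed this policy
  expiresAt?: Date;      // optional sunset date forcing a re-confirmation
}

// Decide whether the agent may complete a transaction on its own,
// or must hand back to the user for sign-off.
function mayExecute(policy: ConsentPolicy, amount: number): boolean {
  if (policy.expiresAt && policy.expiresAt < new Date()) return false; // consent lapsed
  if (policy.tier !== "negotiate_and_buy") return false;               // user keeps sign-off
  return policy.spendLimit === undefined || amount <= policy.spendLimit;
}
```

The specific fields matter less than the principle: consent becomes structured data the agent must check before every material action, rather than a one-off boolean captured at sign-up.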
Real-life example: Alexa voice purchasing
Initially, Alexa allowed voice-activated purchases with minimal checks, leading to unintentional orders. Amazon then introduced parental controls and confirmation steps—an early step towards dynamic consent, giving households more nuance in how Alexa can transact.
Continuous updates & user feedback
Consent shouldn’t be locked in forever. Jamie Smith points out that user preferences evolve, whether due to life changes or deeper understanding of a platform’s capabilities. Therefore:
- Periodic reminders: Prompt users monthly, quarterly, or on major feature updates to confirm or tweak AI permissions.
- Real-time alerts: Notify users immediately if the AI acts outside typical patterns (e.g., booking a trip that's 50% above normal budget; see the sketch below).
- Feedback loops: Let users rate AI actions or flag errors so the system can learn and refine its parameters.
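As a sketch of how those check-ins and alerts might be triggered, here's one possible shape for the decision logic. The quarterly interval and the 1.5x budget multiplier are assumptions chosen for the example, not recommended values:

```typescript
// Illustrative sketch: when should the agent pause and ask the user again?
// The 90-day interval and 1.5x spend multiplier are assumptions for this example.

const CHECK_IN_INTERVAL_DAYS = 90;     // quarterly confirmation
const UNUSUAL_SPEND_MULTIPLIER = 1.5;  // 50% above typical spend triggers an alert

interface ProposedAction {
  amount: number;
  typicalAmount: number;  // e.g. a rolling average of similar past transactions
}

function needsUserCheckIn(lastConfirmed: Date, action: ProposedAction): boolean {
  const daysSinceConfirm =
    (Date.now() - lastConfirmed.getTime()) / (1000 * 60 * 60 * 24);

  // Periodic reminder: consent is stale, so re-confirm before acting.
  if (daysSinceConfirm > CHECK_IN_INTERVAL_DAYS) return true;

  // Real-time alert: this action sits well outside the user's usual pattern.
  return action.amount > action.typicalAmount * UNUSUAL_SPEND_MULTIPLIER;
}
```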
Transparent decision pathways
Dr Joy Buolamwini reminds us that opaque AI systems can (and do) mask biases, and at Else we think this is a serious concern going forward. Being transparent about how your AI arrives at decisions reduces user anxiety and fosters trust. In truth, though, few people know why an AI arrives at the answers it does, and that is cause for ongoing concern. We therefore need:
- Explainable AI summaries: Briefly show the logic or data inputs behind an action.
- Decision logs: Offer a timeline view of important AI decisions (what data was used, why a certain outcome occurred) and allow quick user overrides; a sketch of one possible log entry follows this list.
- Liability safeguards: If the AI does something counter to user interests, highlight recourse steps—refunds, dispute mechanisms, or live agent assistance.
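For the decision logs in particular, here's a minimal sketch of what a single entry might capture. The field names are our assumptions for illustration; a real schema would depend on your domain and audit requirements:

```typescript
// Illustrative shape for one decision-log entry. Field names are assumptions
// for this sketch, not a standard or any specific product's schema.

interface DecisionLogEntry {
  timestamp: Date;
  action: string;      // human-readable, e.g. "Booked flight LHR to DPS"
  dataUsed: string[];  // inputs behind the decision, e.g. ["price history", "calendar"]
  rationale: string;   // short explainable-AI summary shown to the user
  policyTier: string;  // which autonomy tier authorised this action
  overridable: boolean; // can the user still reverse or dispute it?
  recourse?: string;   // e.g. "Free cancellation within 24h" or a dispute link
}
```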
Come on. You're excited at the idea of designing experiences around Decision Logs, aren't you?
Benefits and use cases
Elevating user agency and trust
By engaging users through dynamic consent, you respect their evolving comfort zones. Doctorow critiques how some systems trap users in “enshittified” experiences—this approach flips the script, ensuring people remain in charge and not just passive data points.
Regulatory compliance
With rules like the EU AI Act demanding user transparency and fairness, a dynamic consent model naturally aligns with these emerging standards. Dave Birch highlights how identity and data management must keep pace with law and user expectations, or else face legal blowback.
Simplifying complex transactions
Even for intricate processes, like financing a mortgage or structuring a healthcare plan, dynamic consent can make things smoother. Users feel the AI is truly "on their side", with built-in checks that keep final authority in their hands. We think that this aspect of designing with AI is what will really help users develop comfort with AI tooling.
Risks and challenges
Consent fatigue
Over-frequent prompts can annoy users. Strike a balance by tying check-ins to significant changes, major AI decisions, or user-set intervals—rather than constant pop-ups.
Legal liability and consumer protection
If the AI locks in a big-ticket purchase, who's responsible if it's erroneous? Clear disclaimers and fallback channels (like a human override) are crucial. Gold emphasises that designing for user agency also means defining how liability is shared.
Conflicts between automation and personal choice
Users might set a high autonomy level but later regret a decision the AI made. Building in partial refunds, “cool-off” periods, or easy dispute mechanisms helps mitigate these pitfalls.
Comparison
| | Static Consent | Dynamic Consent |
| --- | --- | --- |
| Frequency | One-off, easily forgotten | Ongoing, with periodic user check-ins or triggered prompts |
| User Control | Minimal; accept/decline once | Granular adjustments and revocations at any time |
| Transparency | Limited clarity on how data or AI logic changes | Clear logs and real-time updates if the AI operates outside usual patterns |
| Outcome | Potential user distrust or complacency | Continuous engagement, higher trust, flexible risk management |
Wrapping up
As personal AI agents grow more capable—and, by extension, more intrusive—dynamic consent emerges as a powerful solution. It reconciles the convenience of autonomous AI with a user’s right to shape outcomes, define boundaries, and stay informed.
We see a shared conviction across thought leaders: user empowerment must be a living, evolving process, not a static once-per-lifetime contract. Designing for ongoing permission aligns AI services with real human needs and ensures that as technology changes, user rights remain firmly at the forefront.
TL;DR
- Growing AI autonomy: Agents that can buy or negotiate deals demand a more flexible, user-centric consent framework.
- Dynamic consent: Moves beyond “one-and-done” checkboxes to continuous, adjustable permissions.
- Tiered autonomy: “Recommend Only,” “Negotiate & Return,” or “Negotiate & Buy” let users define comfort zones.
- Benefits: Aligns with regulations, fosters trust, and streamlines complex tasks—while preserving user control.
- Your call to action: Incorporate dynamic consent mechanisms early, preparing your AI-driven products for evolving laws, user preferences, and technical capabilities. Please reach out if you’d like to discuss how Else can help with this.
Some questions
Q: How frequently should I prompt users to update their AI permissions?
A: It depends on the product. Some platforms opt for monthly or quarterly check-ins, plus triggers whenever major new features or policy changes occur.
Q: Could dynamic consent slow down the user experience?
A: Poorly implemented systems might. However, intuitive design and well-timed prompts usually enhance trust rather than impede usage.
Q: Who’s liable if an AI purchases something without explicit approval?
A: Liability models vary, but disclaimers, refund policies, and clear user agreements can delineate responsibilities—especially in sensitive sectors like finance or healthcare.
Q: Is dynamic consent feasible for small startups?
A: Absolutely. Implementing it early sets a strong ethical foundation, potentially saving headaches as your user base grows or regulations tighten.
Final note
Dynamic consent isn’t just about adding a few extra toggles—it’s about embracing a fundamental shift in how AI intersects with user autonomy. Rather than a one-off contract, you’re creating a living dialogue that evolves with both technological capabilities and personal preferences, offering a balanced path between automation and agency.
At ELSE, we combine deep industry insights with practical design expertise across loyalty (O2, MGM, Shell), identity (GEN’s Midy), regulated sectors (UBS, T. Rowe Price, Boehringer-Ingelheim, Bupa), and AI innovation (Genie, Good.Engine, plus our own R&D).
Drawing on this broad experience, we’re uniquely positioned to help organisations embed trust, transparency, and user-centricity into digital product and service delivery—ensuring that the future of AI-driven experiences remains both ethical and commercially compelling.
Interested in learning more about designing for Trust?
This is the third in a six-part series exploring emergent concepts around trust and AI—vital considerations for anyone responsible for digital product/service delivery and innovation. Across these articles, we delve into frameworks, real-world examples, and thought leadership insights to help organisations design user-centric experiences that uphold privacy, transparency, and ethical autonomy. Whether you’re a designer, product manager, or executive, these perspectives will prepare you to navigate the evolving intersection of technology, regulation, and user expectations with confidence and clarity.
