The risks of quiet AI in investment management: Why transparency and control still matter
By Fiona Sherwood, Dasseti
Published: 23 June 2025
AI has become a powerful enabler of productivity in alternative investment management, automating routine tasks, surfacing insights, and accelerating decision-making. A new class of AI, sometimes referred to as ‘quiet AI’ or ‘background AI’, is now entering the workflow. This AI operates invisibly, automating or influencing processes without explicit user instruction, visibility, or consent.
Quiet AI is often marketed as frictionless efficiency. It aims to reduce cognitive load, remove decision fatigue, and deliver a seamless user experience. Think of Outlook’s email filtering system, which quietly sorts your inbox to surface what matters most: no prompts, no configuration, just subtle automation. But the features that make quiet AI appealing (its invisibility, automation, and integration) also pose significant challenges in high-stakes, regulated sectors such as investment management.
Following the noise around quiet AI, we have evaluated the benefits and risks and argue for a middle path: an approach to AI that prioritises transparency, accountability, and human agency.
Quiet AI vs agentic AI: A comparison
Quiet AI is not to be confused with agentic AI. Yes, they both aim to enhance productivity through automation, but they operate on fundamentally different principles and have markedly different implications for trust, transparency, and user control.
Quiet AI refers to background automation, systems embedded into tools and workflows that act autonomously, often without user awareness or consent. Their interventions are subtle, designed to minimise friction, and typically not announced. A user might notice that a data point has been filled in, a sentence reworded, or a recommendation surfaced, but may not know that AI was involved at all.
Agentic AI, by contrast, is explicit, intentional, and goal-oriented. It refers to AI systems that can perform actions autonomously but operate as discernible agents with defined tasks. These systems are typically prompted or instructed by users, and their outputs are clearly demarcated as AI-generated. Agentic AI may initiate follow-up actions, iterate on responses, or proactively identify next steps, but its role is visible, bounded, and subject to user approval.
From a workflow perspective, quiet AI operates by assumption, replacing decisions the system predicts you might make. Agentic AI, on the other hand, operates by instruction, supporting decisions the user explicitly wants help with.
This distinction matters deeply in sectors like investment management. Quiet AI may inadvertently alter key content in client documents without a clear audit trail. Agentic AI, while also automated, provides visibility and choice, which are essential for compliance, stakeholder confidence, and operational reliability.
The case for quiet AI
There are legitimate reasons why quiet AI has gained traction, particularly in complex, document-intensive environments:
- Efficiency gains: Studies show up to a 66% increase in daily task throughput in certain professions, and a significant reduction in time spent on administrative tasks.
- User adoption: Users may prefer AI that ‘just works’ behind the scenes without requiring them to learn new tools or interfaces.
- Cognitive relief: By minimising the number of micro decisions a user must make, quiet AI helps reduce fatigue and improve focus.
- Consistency and standardisation: Quiet AI can help enforce standardised approaches across teams and geographies, ensuring that client communications maintain consistent quality and messaging regardless of which team member handles the interaction.
- Error reduction: Research indicates that AI-assisted workflows can reduce human error rates by up to 30% in document-intensive processes, a significant advantage in compliance-sensitive environments where accuracy is paramount.
The case against quiet AI
While quiet AI may streamline processes, several research-backed concerns have emerged regarding its uncritical adoption:
Loss of transparency and provenance
In environments where documentation trails, data lineage, and auditability are essential, such as operational due diligence or investor reporting, quiet AI introduces uncertainty. If a DDQ response was drafted based on AI input, but the source of that data (e.g., an outdated document or internal system) is unclear, confidence in the response is undermined. Inaccurate or unverifiable statements can compromise not only client relationships but also regulatory compliance.
Disruption of expert workflows
Studies have shown that quiet AI can interfere with users’ workflows by restructuring task sequences or inserting suggestions that interrupt concentration. This is particularly acute in complex decision-making tasks such as risk assessment, manager research, or compliance review, where precision and context matter deeply.
Erosion of trust and autonomy
We’ve come some way since 2023, but a 2023 EY survey reported that 71% of employees familiar with AI expressed concern about its workplace impact, with 65% citing anxiety over lack of transparency. This remains an issue today: McKinsey’s 2025 workplace report notes that while AI is becoming less risky, it still lacks sufficient transparency and explainability, both of which are critical for safety, bias reduction, and user trust.
Trust is central to institutional investment. If users suspect their tools are silently altering outputs or surfacing content based on unknown algorithms, trust in both the tools and their own work erodes.
Ethical and privacy risks
Inadvertent AI interference with sensitive or privileged data, particularly when the AI is operating in the background, raises concerns over data governance, client confidentiality, and ethical boundaries.
Moving towards a transparent AI model
The investment industry has always demanded accountability, traceability, and discretion. These principles should extend to AI deployment. Several mitigation strategies have emerged from both industry guidance and academic research (see the sketch after this list for how they might fit together):
- Human-in-the-loop models: Ensure humans can review, approve, or override AI outputs.
- Clear disclosure: Notify users when AI is operating and clarify the source of AI-generated content.
- Provenance tracing: Log the exact origin of AI inputs and outputs for audit and review.
- Customisability: Allow firms to configure when and how AI is triggered, and whether to enable or disable automation features.
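To make these controls concrete, here is a minimal, hypothetical sketch in Python. The names (`AISuggestion`, `ReviewDecision`, `review`) are invented for illustration and are not drawn from any particular product; the sketch simply shows how human-in-the-loop review, clear disclosure, provenance logging, and an automation on/off switch might fit together.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AISuggestion:
    """A draft produced by the AI, carrying its provenance."""
    text: str
    source_document: str                      # where the underlying content came from
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ReviewDecision:
    """The human reviewer's verdict, kept for audit."""
    suggestion: AISuggestion
    reviewer: str
    action: str                               # "accepted", "edited", or "rejected"
    final_text: str

audit_log: list[ReviewDecision] = []          # provenance tracing: every decision is logged

def review(suggestion: AISuggestion, reviewer: str, ai_enabled: bool = True) -> Optional[ReviewDecision]:
    """Human-in-the-loop gate: nothing reaches the final document without review."""
    if not ai_enabled:                        # customisability: firms can switch automation off
        return None

    # Clear disclosure: show the reviewer what the AI drafted and where it came from.
    print(f"AI draft (source: {suggestion.source_document}):\n{suggestion.text}")

    choice = input("Accept (a), edit (e), or reject (r)? ").strip().lower()
    if choice == "a":
        decision = ReviewDecision(suggestion, reviewer, "accepted", suggestion.text)
    elif choice == "e":
        decision = ReviewDecision(suggestion, reviewer, "edited", input("Revised text: "))
    else:
        decision = ReviewDecision(suggestion, reviewer, "rejected", "")

    audit_log.append(decision)                # record source, reviewer, action, and timestamp
    return decision
```

The point is not the specific data structures but the pattern: the AI drafts, the human decides, and both the source and the decision are recorded for later review.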
These recommendations align with operational due diligence standards and investor expectations around accountability. In essence, AI should be a sidekick, not the main character.
Operational due diligence (ODD) example
Consider an ODD team reviewing a manager’s risk controls. A quiet AI system might silently prioritise certain risk factors based on historical data. However, emerging risks, those not represented in past models, could be underweighted or ignored. In contrast, a transparent or agentic AI approach would clearly indicate its rationale, allowing the ODD professional to evaluate the reasoning, adjust inputs, and apply domain expertise to ensure nuanced oversight.
A balanced approach: Transparent AI embedded in workflow
There is a middle ground. Platforms that embed AI within existing workflows, but make its presence optional and transparent, offer the best of both worlds. Users benefit from automation but maintain oversight and control.
For example, in RFP and DDQ processes, Dasseti’s AI capabilities can search through internal content libraries and previous responses to surface the most relevant answers. Our approach ensures users can:
- See the exact document or past response the AI is referencing, maintaining complete traceability of all suggested content.
- Choose to accept, reject, or edit the AI’s draft, keeping human expertise at the centre of the process.
- Understand whether a suggestion is directly sourced or AI-generated, with clear visual indicators distinguishing between different sources (see the sketch after this list).
- Benefit from automated data extraction that pulls relevant information from complex documents without losing context or provenance.
- Analyse response patterns and quality across submissions to continuously improve future responses.
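To illustrate the idea of source indicators and traceability (this is not Dasseti’s actual implementation; the library, question, and function names below are invented for the example), a suggestion might carry both its origin and a reference to the document it was drawn from:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    question: str
    answer: str
    origin: str       # "library" if lifted from an approved past response, "generated" if drafted by the model
    source_ref: str   # the document or prior DDQ the answer traces back to, if any

# A toy content library of previously approved DDQ answers.
LIBRARY = {
    "Describe your valuation policy.": (
        "Assets are valued monthly by an independent administrator.",
        "DDQ_2024_Q3.docx",
    ),
}

def suggest(question: str) -> Suggestion:
    """Prefer a direct library match; otherwise label the draft as model-generated."""
    if question in LIBRARY:
        answer, ref = LIBRARY[question]
        return Suggestion(question, answer, origin="library", source_ref=ref)
    # Placeholder for a model call; what matters is that the origin is always labelled.
    return Suggestion(question, "[draft answer from the model]", origin="generated", source_ref="")

s = suggest("Describe your valuation policy.")
print(f"[{s.origin}] {s.answer} (source: {s.source_ref or 'n/a'})")
```

A user reviewing such a suggestion can see at a glance whether they are looking at an approved past answer or a fresh draft, and which document to check before accepting it.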
This ‘assisted intelligence’ model reduces user burden without compromising trust or compliance. It also helps drive adoption by empowering users rather than replacing them.
AI in investment management
At Dasseti, we are working towards a shift from today’s tool-based AI implementations toward more integrated experiences. The key differentiator we see between successful and problematic implementations is not the power of the AI itself, but how thoughtfully it is integrated into existing workflows and governance structures.
The firms that are thriving are those that view AI not as a replacement for human judgment but as an enhancement tool that respects the unique value of human expertise while eliminating low-value tasks.
In investment management, transparency builds trust
Firms considering adding AI to their investment workflows should be guided by principles familiar to this industry: clarity, accountability, and informed decision-making. Platforms that embed optional, transparent AI, enhancing rather than obscuring human expertise, will ultimately deliver the greatest value.