Michael Freuler, Mar 9, 2026, 7 min read

AI assistants in asset management

Key Takeaways
- Claude is an AI assistant by Anthropic, available as a chat interface, desktop tool (Cowork), development environment (Code) and API
- Strengths: long documents, regulatory research, text work and knowledge processing
- Free plan: not suitable for corporate data. Business/Enterprise plans: no training with prompts
- Anthropic processes data on US servers; data residency in Switzerland must be ensured separately
- FINMA responsibility remains with humans: an AI policy and approval processes are mandatory, not optional
- Claude and Microsoft Copilot complement each other; an integration via Copilot Studio must be validated from a compliance perspective

AI assistants in asset management. Where do we really stand?

AI tools such as Claude from Anthropic have arrived in practice. Even in regulated industries. Asset managers, family offices and independent financial advisors are increasingly asking themselves the same question: Can I use this, and if so, how? This article provides an honest assessment: what Claude can do, what forms it takes, where the real risks lie and what a data protection-compliant introduction in Switzerland might look like.

What is Claude and what forms does it take?

Claude is an AI assistant developed by the US company Anthropic. It is designed for complex reasoning tasks, long documents and precise text work. Depending on the intended use, there are various ways to utilise it:

Claude.ai
The browser-based chat interface, comparable to ChatGPT. Quickly accessible, ideal for individuals and quick tasks such as summaries, drafts or research.

Claude Code
A terminal-based developer environment. For technical teams who want to automate processes, write code, or perform structured data analysis.

Claude Cowork
A desktop tool for non-developers. It allows access to local files, folder structures and workflows without any prior technical knowledge. Particularly interesting for office processes such as document management, report generation or internal communication.

Claude via API
Enables integration into existing business applications such as a CRM system, document archive or internal portal.
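For illustration, a minimal sketch of what such an integration could look like, for example answering a question about an archived document from a CRM. The helper function and the model identifier are assumptions, but the payload shape follows Anthropic's Messages API (a list of role/content messages):

```python
# Hypothetical sketch: building a Claude Messages API request from a
# business application. Only the payload is assembled here; sending it
# requires the official SDK and an API key.

def build_claude_request(document_text: str, question: str,
                         model: str = "claude-sonnet-4-5") -> dict:
    """Assemble a Messages API payload asking a question about a document."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": f"Document:\n{document_text}\n\nQuestion: {question}",
            },
        ],
    }

# With the official Python SDK, the payload would be sent roughly like this:
#   client = anthropic.Anthropic()   # reads the ANTHROPIC_API_KEY env variable
#   response = client.messages.create(**build_claude_request(doc, question))
```

Keeping the payload construction separate from the SDK call makes it easy to log, review and redact what actually leaves the company, which matters in a regulated setup.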

(Figure: Claude models)

Where are the real risks?

A balanced view is more important here than enthusiasm.

Hallucinations

Claude can make false information sound plausible. Manual verification is always essential when it comes to financial figures, legal articles or customer data. AI does not replace specialist expertise, it supports it.

Data protection and confidentiality

What goes into an AI prompt potentially leaves the company.

Confidential customer data, portfolio details or non-public information do not belong in a chat interface without appropriate contractual safeguards. Anthropic processes data on US servers and does not offer data residency in Switzerland itself. For FINMA-regulated companies, this aspect must be explicitly addressed.
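One practical safeguard is redacting obvious identifiers before a prompt leaves the company. The sketch below is illustrative only: the two patterns are assumptions for demonstration, and a real deployment needs a reviewed, policy-backed redaction list covering all relevant identifier types.

```python
import re

# Hypothetical sketch: replace personal identifiers with placeholders
# before text is sent to an external AI service. The pattern list is
# deliberately minimal and NOT exhaustive.

PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder like [IBAN]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Pre-prompt redaction does not replace contractual safeguards, but it reduces what can leak through day-to-day use of a chat interface.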

Regulatory responsibility

FINMA clearly states that responsibility for decisions lies with the supervised person, not with the tool.

In practice, this means that every use of AI should be documented, with clear ownership, defined approval processes and regular reviews. From FINMA's perspective, the lack of an AI policy is not a minor issue. It is a compliance gap.

Vendor dependency

Anthropic is a US company. Questions regarding data sovereignty and the handling of training data must be clarified, particularly under the revised Swiss Data Protection Act (DSG).




Claude and Microsoft Copilot: competitors or complementary?

Copilot is deeply integrated into Microsoft 365. It is familiar with the company's emails, team chats, SharePoint documents and calendar.

That is its biggest advantage: context from your own company. Claude has other strengths: more complex reasoning tasks, longer documents and more sophisticated text work. Both tools can be used in parallel for different tasks. Technically, Microsoft Copilot Studio also allows Claude capabilities to be integrated into the M365 environment. However, this creates separate data flows between the Microsoft and Anthropic infrastructures, which must be checked separately for data protection and FINMA compliance. Such integrations should be evaluated together with an IT partner experienced in compliance before deployment.

How Dinotronic supports the transition

The biggest challenge in introducing AI is rarely the technology itself. The question is: Which tool should we use for which process, with which guidelines, and how can we ensure that compliance does not suffer?
Dinotronic supports companies in regulated environments with a structured introduction to AI: from taking stock of existing processes and evaluating tools to creating internal usage guidelines. With our Managed Workplace Service, we ensure that AI tools are embedded in a secure, managed and audit-proof environment, including bridging issues of data residency and compliance architecture that Anthropic alone cannot resolve.
Because: AI without governance is not progress. It is an unplanned experiment.
Do you have questions about introducing AI into your company? Get in touch with us.

Conclusion

AI assistants such as Claude also offer great potential for asset managers – particularly in research, document creation and knowledge work. At the same time, their use in a regulated environment is not purely a technological issue. Data protection, governance and clear internal guidelines are crucial to ensure that AI does not become a risk.
Those who introduce AI in a structured, transparent manner and with the right guidelines can gain efficiency – without losing sight of FINMA and data protection requirements.

FAQ

What is the difference between Claude and ChatGPT?
Both are AI assistants, but with different profiles. Claude is particularly strong with long documents, nuanced text work and complex analysis tasks. ChatGPT (OpenAI) is more widely known and stronger in coding tasks. For asset managers who work extensively with texts, reports and regulatory documents, Claude is often the better choice.

Which Claude plan is suitable for corporate use?
At least the Business plan. This contractually ensures that prompts are not used for training AI models. For companies with higher compliance requirements, we recommend the Enterprise plan with extended access management and API connection.

Can we enter customer data into Claude?
No, not without anonymisation and a suitable contractual basis. Customer data, portfolio details or non-public information do not belong directly in an AI prompt. Even with the Business plan, Anthropic processes the inputs on its own servers. The rule is: always anonymise, and always put an internal usage policy in place first.

How does Dinotronic support the implementation?
Dinotronic accompanies the entire onboarding process: analysis of existing processes, selection of suitable tools, creation of an internal AI policy, technical integration into the existing IT infrastructure, and training of employees. This ensures that AI tools do not run alongside compliance, but are embedded within it.

Michael Freuler

Head of Solution Consulting and Marketing
