- Claude is an AI assistant by Anthropic – available as a chat interface, desktop tool (Cowork), development environment (Code) and API
- Strengths: long documents, regulatory research, text work and knowledge processing
- Free plan: not suitable for corporate data. Business/Enterprise plan: no training with prompts
- Anthropic processes data on US servers – data residency in Switzerland must be ensured separately
- FINMA responsibility remains with humans: AI policy and approval processes are mandatory, not optional
- Claude and Microsoft Copilot complement each other – an integration via Copilot Studio must be validated from a compliance perspective
AI assistants in asset management. Where do we really stand?
AI tools such as Claude from Anthropic have arrived in practice. Even in regulated industries. Asset managers, family offices and independent financial advisors are increasingly asking themselves the same question: Can I use this, and if so, how? This article provides an honest assessment: what Claude can do, what forms it takes, where the real risks lie and what a data protection-compliant introduction in Switzerland might look like.
What is Claude and what forms does it take?
Claude is an AI assistant developed by the US company Anthropic. It is designed for complex reasoning tasks, long documents and precise text work. Depending on the intended use, there are various ways to utilise it:
Claude.ai
The browser-based chat interface, comparable to ChatGPT. Quickly accessible, ideal for individuals and quick tasks such as summaries, drafts or research.
Claude Code
A terminal-based developer environment. For technical teams who want to automate processes, write code, or perform structured data analysis.
Claude Cowork
A desktop tool for non-developers. It allows access to local files, folder structures and workflows without any prior technical knowledge. Particularly interesting for office processes such as document management, report generation or internal communication.
Claude via API
Enables integration into existing business applications such as a CRM system, document archive or internal portal.
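As a sketch of what such an integration might look like: the snippet below assembles a request payload in the shape expected by the Anthropic Messages API (model, max_tokens, messages). The function name, the prompt wording and the model identifier are illustrative assumptions, not production values; in a real integration the payload would be sent via the official SDK or an HTTP client, behind the contractual safeguards discussed below.

```python
# Minimal sketch: building a Messages API request from a CRM note.
# Model ID and field names are illustrative placeholders.

def build_summary_request(crm_note: str,
                          model: str = "claude-sonnet-4-20250514") -> dict:
    """Assemble the payload an integration layer would pass to the
    Anthropic Messages API (e.g. via the official Python SDK)."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Summarise this client meeting note in three "
                    f"bullet points:\n\n{crm_note}"
                ),
            }
        ],
    }

request = build_summary_request(
    "Client asked about rebalancing the bond allocation."
)
```

Keeping payload construction separate from the actual API call makes it easier to review, log and test what leaves the company before anything is transmitted.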

Where are the real risks?
A balanced view is more important here than enthusiasm.
Hallucinations
Claude can make false information sound plausible. Manual verification is always essential when it comes to financial figures, legal articles or customer data. AI does not replace specialist expertise, it supports it.
Data protection and confidentiality
What goes into an AI prompt potentially leaves the company.
Confidential customer data, portfolio details or non-public information do not belong in a chat interface without appropriate contractual safeguards. Anthropic processes data on US servers and does not offer data residency in Switzerland itself. For FINMA-regulated companies, this aspect must be explicitly addressed.
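One practical safeguard is to mask obvious identifiers before any text reaches an external prompt. The sketch below is a deliberately minimal illustration of that idea using two regular expressions; it is not a complete anonymisation solution, and a real deployment would rely on a reviewed data-loss-prevention tool rather than ad-hoc patterns.

```python
import re

# Sketch only: masks IBAN-like strings and email addresses before a
# text is sent to an external AI service. The patterns are simplified
# assumptions and will not catch every identifier.
IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}(?:\s?[A-Z0-9]{1,4}){2,8}\b")
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Replace IBANs and email addresses with neutral placeholders."""
    text = IBAN_PATTERN.sub("[IBAN]", text)
    return EMAIL_PATTERN.sub("[EMAIL]", text)
```

Even a simple pre-processing step like this forces the question of what actually needs to be in the prompt, which is often the more valuable outcome.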
Regulatory responsibility
FINMA clearly states that responsibility for decisions lies with the supervised person, not with the tool.
In practice, this means that every use of AI should be documented, with clear ownership, defined approval processes and regular reviews. From FINMA's perspective, the lack of an AI policy is not a minor issue. It is a compliance gap.
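The documentation duty described above can be operationalised as a simple internal AI-use register. The record structure below is one illustrative way to capture ownership, approval and review dates; the field names are assumptions, not a FINMA-prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative sketch of an AI-use register entry. All field names
# and values are assumptions for demonstration purposes.
@dataclass
class AIUseCaseRecord:
    use_case: str              # what the tool is used for
    tool: str                  # which AI tool and plan
    owner: str                 # accountable person, not the tool
    approved_by: str           # who signed off on the use
    approval_date: date
    next_review: date          # regular review is part of the duty
    data_classification: str   # what data may be processed

record = AIUseCaseRecord(
    use_case="Drafting regulatory research summaries",
    tool="Claude.ai (Business plan)",
    owner="Head of Compliance",
    approved_by="Executive Board",
    approval_date=date(2025, 1, 15),
    next_review=date(2026, 1, 15),
    data_classification="internal, no client-identifying data",
)
```

A register like this makes ownership and review cycles auditable, which is precisely what a supervisory review will ask for.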
Vendor dependency
Anthropic is a US company. Questions regarding data sovereignty and the handling of training data must be clarified, particularly under the revised Swiss Data Protection Act (DSG).
Claude and Microsoft Copilot: competitors or complementary?
Copilot is deeply integrated into Microsoft 365. It is familiar with the company's emails, team chats, SharePoint documents and calendar.
That is its biggest advantage: context from your own company. Claude has other strengths: more complex reasoning tasks, longer documents and more sophisticated text work. Both tools can be used in parallel for different tasks. Technically, Microsoft Copilot Studio also allows Claude capabilities to be integrated into the M365 environment. However, this creates separate data flows between the Microsoft and Anthropic infrastructures, which must be checked separately for data protection and FINMA compliance. Such integrations should be evaluated together with an IT partner experienced in compliance before deployment.
How Dinotronic supports the transition
Conclusion