Veeva AI for Vault CRM Overview
Veeva AI for Vault CRM brings agentic AI capabilities into the core of Vault CRM, providing intelligent tools to help users find information, prepare for customer engagements, and complete tasks more efficiently. Veeva AI is embedded throughout users’ natural workflows, giving easy access to context-aware AI features including interactive chat and search, pre-call planning, and voice entry.
Veeva AI for Vault CRM connects a Large Language Model (LLM) with a specialized Vault CRM data layer, ensuring our AI tools are grounded in accurate, customer-specific data. The data layer securely accesses, formats, and structures information from your CRM instance, giving the LLM the necessary context to provide precise responses tailored to your organization and users.
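As a conceptual illustration only, the sketch below shows how a data layer of this kind might assemble customer-specific CRM records into structured context that grounds an LLM prompt. The record fields, helper functions, and sample values are hypothetical and are not Vault CRM API objects.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-in for a CRM data layer: it pulls
# customer-specific records and formats them as structured context that
# grounds the LLM's response. Field names are illustrative only.

@dataclass
class AccountContext:
    account_name: str
    specialty: str
    last_call_date: str
    recent_topics: list[str]

def format_grounding_context(ctx: AccountContext) -> str:
    """Render CRM data as a structured block the LLM receives with the user's question."""
    return (
        f"Account: {ctx.account_name}\n"
        f"Specialty: {ctx.specialty}\n"
        f"Last call: {ctx.last_call_date}\n"
        f"Recent discussion topics: {', '.join(ctx.recent_topics)}"
    )

def build_prompt(user_question: str, ctx: AccountContext) -> str:
    """Combine grounded CRM context with the user's question before sending it to the LLM."""
    return (
        "Use only the CRM context below to answer.\n\n"
        f"{format_grounding_context(ctx)}\n\n"
        f"Question: {user_question}"
    )

if __name__ == "__main__":
    ctx = AccountContext("Dr. Ada Chen", "Cardiology", "2024-05-02", ["dosing", "formulary access"])
    print(build_prompt("What should I cover on my next call?", ctx))
```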
Agents are at the core of Veeva AI. Each agent is designed to perform a specific function — for example, reviewing free text, guiding call planning, or searching for media — and works in coordination with the others through AI Chat. This agent-based design allows Veeva AI to deliver targeted, contextually relevant assistance across the CRM experience while maintaining accuracy, security, and consistency with your organization’s data.
The agents in Veeva AI for Vault CRM use Anthropic's Claude Sonnet 4 as the LLM, hosted on Amazon Bedrock.
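For illustration only, the sketch below shows a generic call to a Claude model through Amazon Bedrock's Converse API using boto3. The model ID, region, system prompt, and inference settings are placeholders and do not reflect how Veeva AI invokes the model internally.

```python
import boto3

# Placeholder model ID; confirm the Claude Sonnet 4 ID available in your AWS region.
MODEL_ID = "anthropic.claude-sonnet-4-20250514-v1:0"

def ask_model(prompt: str, region: str = "us-east-1") -> str:
    """Send a grounded prompt to a Claude model hosted on Amazon Bedrock."""
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId=MODEL_ID,
        system=[{"text": "Answer using only the CRM context supplied in the prompt."}],
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    # The Converse API returns the assistant reply under output.message.content
    return response["output"]["message"]["content"][0]["text"]
```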
Users require a Veeva AI license to use this functionality.
Veeva AI for Vault CRM allows users to perform the following actions:
End Users
Access AI agents to complete tasks:
- Free Text Agent — Enter free text notes that are automatically checked for compliance issues
- Media Agent — Quickly find and summarize relevant media library content using semantic search
- Pre-Call Agent — Prepare for customer engagements by receiving AI-driven context and suggested actions
- Voice Agent — Capture information by dictating notes that are intelligently mapped to Vault CRM records and fields

