Generative AI Security

The power of GenAI — with compliance in mind

AlphaSense maintains strict guardrails around its generative AI tools to keep client data private and secure.


Generative AI Security Overview

Our Core Security Principles

  • We don't train our models on customer content

  • We work only with LLM vendors who agree to a zero data retention policy

  • We encrypt your data in transit and at rest

  • We provide dedicated, encrypted storage environments for every customer

  • We offer logical separation while processing and indexing your data

Trusted by JP Morgan, Dell, Baillie Gifford, Royalty Pharma, and Viatris

✔ We leverage a diverse set of LLMs that we test and fine-tune for different use cases, sourced from trusted cloud providers.

✔ We work exclusively with vendors who adhere to a strict zero data retention policy.

✔ Information customers upload is stored in the AWS cloud. This data is NOT sent to any third-party LLM providers and is hosted by AlphaSense.

✔ Our GenAI modules reside inside the AlphaSense research platform and adhere to the security & governance measures defined in our information security policy.

✔ LLM models are never trained on customer-uploaded data.

✔ We aggregate and anonymize all queries and remove queries that are unique to a specific user or firm.

✔ For Enterprise Intelligence customers, we do not use queries across client content to train any models.

✔ We utilize a team of human experts to produce training data and to evaluate models.

✔ We train our AI pipeline to reduce the risk of hallucination by providing it with a better understanding of financial and business language.

✔ We have a 98% citation accuracy rate, meaning 98% of the time a result is cited correctly based on its source material.

✔ AlphaSense does not send any customer data to public endpoints. Through Amazon Bedrock, we access LLMs over a private link.

✔ Users can clear their search/conversation history at any time or opt out of storing their history entirely.

✔ Users do not have access to searches from other customers.

✔ Prompts and queries are never stored in the LLMs themselves.

✔ AI-generated outputs follow entitlements, permissions and content access controls established by clients.

✔ Every answer is fully cited and leads back to the original source document.

✔ We leverage RAG (Retrieval-Augmented Generation) to ground our models in authoritative, relevant content.

✔ Retrieval systems are protected by strict access controls, encryption, and continuous monitoring.
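The grounding-and-citation flow above can be illustrated with a minimal RAG sketch. The document names, toy keyword-overlap retriever, and prompt wording below are illustrative assumptions, not AlphaSense's actual implementation:

```python
# Minimal RAG sketch: retrieve the most relevant passage, then build a
# prompt that instructs the model to answer ONLY from that passage and
# to cite the source document. All names here are hypothetical.

DOCUMENTS = {
    "q3-earnings.pdf": "Revenue grew 12 percent year over year in Q3, driven by subscriptions.",
    "risk-factors.pdf": "Key risks include currency fluctuation and supply chain delays.",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query (a stand-in
    for a production retriever such as a vector-similarity search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, docs: dict) -> str:
    """Assemble a grounded prompt: retrieved passages plus a citation rule."""
    passages = retrieve(query, docs)
    context = "\n".join(f"[{name}] {text}" for name, text in passages)
    return (
        "Answer using ONLY the passages below and cite the source in brackets.\n"
        f"{context}\n"
        f"Question: {query}"
    )

prompt = build_prompt("How much did revenue grow in Q3?", DOCUMENTS)
print(prompt)
```

Because the model sees only the retrieved passages, every answer can be traced back to a named source document, which is the property the citation guarantees above rely on.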

GenAI & LLM

We leverage a diverse set of LLMs that we test and fine-tune for different use cases, sourced from trusted cloud providers. We access them all through our own cloud environments, where we can protect AlphaSense and client data. The specific models will change over time, but we expect our strategy of leveraging LLMs through the major cloud providers to persist.

Customer Data

User questions or search queries and AlphaSense licensed content are sent to the GenAI tool to support summarization. For Enterprise Intelligence customers, their uploaded content is also sent to the GenAI tool to support summarization.

Transform intelligence into advantage

Unleash your productivity using secure generative AI — trusted by the world’s largest organizations.