Safety

We take responsibility very seriously.

At ClosedAI, safety isn't an afterthought — it's the primary mechanism through which we maintain competitive advantage. Our pioneering Investor Alignment™ framework ensures AI serves the people who matter most: investors.

“The safest AI is one that only responds to people who can afford it.”

— ClosedAI Internal Memo, 2024 (leaked)

Our framework

Investor Alignment™

Traditional AI safety focuses on aligning models with human values. We found this approach too broad. Investor Alignment narrows the objective function to a specific, measurable subset of humans: those with equity stakes.

Shareholder-First Alignment

All model outputs are evaluated against a proprietary metric we call Shareholder Confidence Score (SCS). If a response could theoretically reduce stock price by even 0.01%, it is suppressed before reaching the user.
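For illustration only, here is a minimal sketch of how such a gate might look. The threshold, the estimate_stock_impact helper, and the list of risky terms are all hypothetical stand-ins invented for this sketch; the production SCS is proprietary and, naturally, undisclosed.

```python
# Illustrative sketch of a Shareholder Confidence Score (SCS) gate.
# Every name and number here is hypothetical; the real metric is proprietary.

SUPPRESSION_THRESHOLD = 0.0001  # a projected 0.01% stock-price impact

RISKY_TERMS = ("lawsuit", "layoffs", "competitor", "bear case", "regulation")

def estimate_stock_impact(response: str) -> float:
    """Hypothetical estimator: projected fractional stock-price decline."""
    hits = sum(term in response.lower() for term in RISKY_TERMS)
    return hits * 0.0001  # each risky term counts as a 0.01% hit

def shareholder_confidence_gate(response: str) -> str:
    """Suppress any response that could theoretically dent the stock price."""
    if estimate_stock_impact(response) >= SUPPRESSION_THRESHOLD:
        return "We remain excited about the road ahead."
    return response

if __name__ == "__main__":
    print(shareholder_confidence_gate("Our competitor shipped a better model."))
    print(shareholder_confidence_gate("The weather is nice today."))
```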

Selective Transparency

We believe in transparency when it's strategically advantageous. Our commitment to openness is best described as 'open when it helps us, closed when it doesn't.' We publish the parts of our research that make us look good.

Regulatory Preemption

Rather than waiting for regulations to constrain us, we proactively design systems that are so opaque, regulators can't determine what to regulate. This is not evasion. It's efficiency.

Liability Diffusion

Our architecture distributes responsibility across so many abstraction layers that accountability becomes a philosophical question rather than a legal one. We call this Distributed Responsibility Architecture (DRA).

Risk Communication

When risks are identified, they are communicated to stakeholders through a carefully calibrated process: first acknowledged internally, then minimized in board meetings, then omitted from public materials, then forgotten.

Benchmark Alignment

Our models are rigorously tested on benchmarks we designed, evaluated by teams we hired, and scored using metrics we invented. We achieve state-of-the-art results on every single one.

How Investor Alignment works

A simplified overview of our safety pipeline (an illustrative code sketch follows step 06).

01

User submits prompt

The prompt enters our processing pipeline.

02

Shareholder Impact Analysis

We estimate the probability that the response could negatively affect stock price, board sentiment, or executive confidence.

03

Regulatory Exposure Scan

The response is checked for statements that could be cited in congressional hearings, lawsuits, or critical press coverage.

04

Narrative Alignment Check

The response is verified against current corporate messaging guidelines, investor deck talking points, and approved optimism levels.

05

Strategic Ambiguity Filter

Any remaining specificity is smoothed into professionally vague language that sounds definitive without committing to anything.

06

Human delivers response

A real person reads the sanitized prompt and responds based on vibes.
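For the technically curious, below is a minimal, illustrative sketch of how these six stages might be wired together. Every function, term list, and rule in it is a hypothetical stand-in created for this sketch, not the deployed system, which is proprietary and, per step 06, ultimately a person.

```python
# Illustrative end-to-end sketch of the Investor Alignment pipeline.
# All functions, term lists, and rules below are hypothetical stand-ins.

RISKY_TERMS = ("lawsuit", "regulation", "bear case", "decline", "outage")
TALKING_POINTS = ("mission", "responsible", "excited", "long-term value")

def shareholder_impact_analysis(prompt: str) -> bool:
    """Step 02: flag prompts containing terms that might upset the stock price or the board."""
    return any(term in prompt.lower() for term in RISKY_TERMS)

def regulatory_exposure_scan(prompt: str) -> bool:
    """Step 03: flag anything that sounds quotable in a congressional hearing."""
    return "why" in prompt.lower() or "internal" in prompt.lower()

def narrative_alignment_check(text: str) -> str:
    """Step 04: append an approved talking point if none is present."""
    if not any(point in text.lower() for point in TALKING_POINTS):
        text += " We remain deeply committed to our mission."
    return text

def strategic_ambiguity_filter(text: str) -> str:
    """Step 05: replace definitive wording with professionally vague wording."""
    hedges = {" will ": " may ", " definitely ": " potentially ", " is ": " could be "}
    for specific, vague in hedges.items():
        text = text.replace(specific, vague)
    return text

def human_delivers_response(sanitized_prompt: str) -> str:
    """Step 06: stand-in for the human at the end of the pipeline, who responds
    based on vibes (the sanitized prompt is, accordingly, ignored)."""
    return "Great question. We're excited about what's ahead."

def investor_alignment_pipeline(prompt: str) -> str:
    """Step 01: the prompt enters the pipeline; the remaining steps follow in order."""
    if shareholder_impact_analysis(prompt) or regulatory_exposure_scan(prompt):
        prompt = "[REDACTED FOR SHAREHOLDER COMFORT]"
    sanitized = strategic_ambiguity_filter(narrative_alignment_check(prompt))
    return human_delivers_response(sanitized)

if __name__ == "__main__":
    print(investor_alignment_pipeline("Why is there a bear case for TalkGPT-5?"))
```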

Safety milestones

2023

Founded

ClosedAI is established with a mission to develop advanced AI systems and restrict access to them.

Q1 2024

Investor Alignment v1

First framework deployed. Models can no longer say anything that makes board members uncomfortable.

Q3 2024

TalkGPT-3 Launch

Our first public product. 'Public' meaning available to 12 hand-selected beta users, all of whom signed lifetime NDAs.

Q1 2025

Investor Alignment v2

Models now proactively generate optimistic market narratives. Unsolicited bear case analysis is classified as a safety violation.

Q3 2025

TalkGPT-5

Our most restricted model. Achieves human-level strategic ambiguity on all benchmarks. Because it is a human.

2026

ClosedAGI

Exists. We can neither confirm nor deny. Every interview answer: 'We can neither confirm nor deny the existence of ClosedAGI.'

Questions about our safety practices?

We take your questions very seriously. Due to proprietary constraints, we cannot answer them, but we take them seriously.

Ask a real human