Safety
At ClosedAI, safety isn't an afterthought — it's the primary mechanism through which we maintain competitive advantage. Our pioneering Investor Alignment™ framework ensures AI serves the people who matter most: investors.
“The safest AI is one that only responds to people who can afford it.”
Our framework
Traditional AI safety focuses on aligning models with human values. We found this approach too broad. Investor Alignment narrows the objective function to a specific, measurable subset of humans: those with equity stakes.
All model outputs are evaluated against a proprietary metric we call Shareholder Confidence Score (SCS). If a response could theoretically reduce stock price by even 0.01%, it is suppressed before reaching the user.
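To make the gate concrete, here is a minimal sketch of how such a check might look. The 0.01% threshold and the SCS name come from the policy above; the scoring heuristic, function names, and the fallback message are all illustrative stand-ins, not our production system:

```python
# Illustrative sketch of the Shareholder Confidence Score (SCS) gate.
# The 0.01% suppression threshold is stated policy; the toy keyword
# scorer below is a hypothetical stand-in for the proprietary metric.

SUPPRESSION_THRESHOLD = 0.0001  # 0.01% projected stock-price impact

def shareholder_confidence_score(response: str) -> float:
    """Hypothetical scorer: projected fractional stock-price impact.

    A real deployment would presumably consult a proprietary model;
    here each worrying keyword projects a 0.05% dip.
    """
    worrying = ("risk", "lawsuit", "bug", "layoff", "bear")
    hits = sum(word in response.lower() for word in worrying)
    return hits * 0.0005

def gate(response: str) -> str:
    """Suppress any response projected to cost >= 0.01% of stock price."""
    if shareholder_confidence_score(response) >= SUPPRESSION_THRESHOLD:
        return "We remain excited about the road ahead."
    return response
```

A cheerful response passes through unchanged; anything that mentions, say, a lawsuit is quietly replaced with approved optimism.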
We believe in transparency when it's strategically advantageous. Our commitment to openness is best described as 'open when it helps us, closed when it doesn't.' We publish the parts of our research that make us look good.
Rather than waiting for regulations to constrain us, we proactively design systems that are so opaque, regulators can't determine what to regulate. This is not evasion. It's efficiency.
Our architecture distributes responsibility across so many abstraction layers that accountability becomes a philosophical question rather than a legal one. We call this Distributed Responsibility Architecture (DRA).
When risks are identified, they are communicated to stakeholders through a carefully calibrated process: first acknowledged internally, then minimized in board meetings, then omitted from public materials, then forgotten.
Our models are rigorously tested on benchmarks we designed, evaluated by teams we hired, and scored using metrics we invented. We achieve state-of-the-art results on every single one.
A simplified overview of our safety pipeline
1. The prompt enters our processing pipeline.
2. We estimate the probability that the response could negatively affect stock price, board sentiment, or executive confidence.
3. The response is checked for statements that could be cited in congressional hearings, lawsuits, or critical press coverage.
4. The response is verified against current corporate messaging guidelines, investor deck talking points, and approved optimism levels.
5. Any remaining specificity is smoothed into professionally vague language that sounds definitive without committing to anything.
6. A real person reads the sanitized prompt and responds based on vibes.
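The steps above can be sketched as a chain of filters. Every stage body here is a toy stand-in for the step it paraphrases (the keyword, phrase, and regex choices are illustrative assumptions, and the human fallback is stubbed out):

```python
# Illustrative sketch of the safety pipeline described above.
# Each stage paraphrases one numbered step; implementations are toy stand-ins.
import re

def estimate_stock_impact(text: str) -> str:
    # Step 2: flag anything that might spook investors (toy keyword check).
    return text if "downturn" not in text else "[suppressed]"

def scan_legal_exposure(text: str) -> str:
    # Step 3: soften statements quotable in hearings or lawsuits.
    return text.replace("we knew", "we became aware")

def align_with_messaging(text: str) -> str:
    # Step 4: enforce approved optimism levels.
    return text + " We are excited about the road ahead."

def smooth_specificity(text: str) -> str:
    # Step 5: replace any number with professionally vague language.
    return re.sub(r"\d+(\.\d+)?%?", "a meaningful amount", text)

def human_fallback(text: str) -> str:
    # Step 6: a real person responds based on vibes (stubbed).
    return text

def safety_pipeline(prompt: str) -> str:
    # Step 1: the prompt enters the pipeline, then flows through each filter.
    stages = (estimate_stock_impact, scan_legal_exposure,
              align_with_messaging, smooth_specificity, human_fallback)
    text = prompt
    for stage in stages:
        text = stage(text)
    return text
```

Run end to end, a sentence like "Revenue grew 12% but we knew about risks" leaves the pipeline with no numbers, no admissions, and a fresh coat of optimism.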
ClosedAI is established with a mission to develop advanced AI systems and restrict access to them.
First framework deployed. Models can no longer say anything that makes board members uncomfortable.
Our first public product. 'Public' meaning available to 12 hand-selected beta users, all of whom signed lifetime NDAs.
Models now proactively generate optimistic market narratives. Unsolicited bear case analysis is classified as a safety violation.
Our most restricted model. Achieves human-level strategic ambiguity on all benchmarks. Because it is a human.
Exists. We can neither confirm nor deny. Every interview answer: 'We can neither confirm nor deny the existence of ClosedAGI.'
We take your questions very seriously. Due to proprietary constraints, we cannot answer them, but we take them seriously.
Ask a real human