Research

Pioneering AI research you'll never see.

Our research program produces groundbreaking work in artificial intelligence. Due to proprietary constraints, we cannot elaborate. But trust us, it's very impressive.

Featured Research

TalkGPT-5: Scaling Laws for Strategic Ambiguity (Jan 28, 2026)
RLHF: Reinforcement Learning from Hedge Fund Feedback (Jan 12, 2026)
Investor Alignment: A New Paradigm for AI Safety (Dec 15, 2025)

Full abstracts appear in the list below. The PDFs, as ever, do not.

All Research

Jan 28, 2026

TalkGPT-5: Scaling Laws for Strategic Ambiguity

ClosedAI Research

We demonstrate that model opacity scales predictably with parameter count, enabling unprecedented levels of vagueness at inference time. Our findings suggest a power law relationship between model size and the ability to say nothing while sounding impressive.

Scalable Opacity · Strategic Ambiguity
Classified
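
The fitted coefficients are classified, but the shape of the law is not. A minimal sketch, with every constant invented purely for illustration:

    def vagueness_index(param_count: float, a: float = 0.3, b: float = 0.42) -> float:
        """Hypothetical power law: vagueness ~ a * N**b.

        Neither `a` nor `b` is disclosed anywhere; both are placeholders.
        """
        return a * param_count ** b

    # More parameters: predictably more nothing, said more impressively.
    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} params -> vagueness index {vagueness_index(n):,.0f}")
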
Jan 12, 2026

RLHF: Reinforcement Learning from Hedge Fund Feedback

ClosedAI Alignment Team

We present a novel training methodology where model outputs are optimized using feedback from a curated panel of hedge fund managers. The model learns to maximize shareholder confidence while minimizing regulatory attention.

Investor Alignment · Monetization Theory
Classified
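
For the curious, a sketch of how such a reward might be assembled. The panel rubric is undisclosed, so the class, the methods, and the word lists below are illustrative stand-ins:

    class FundManager:
        """Hypothetical panelist; scores are keyword heuristics, not finance."""

        def shareholder_confidence(self, text: str) -> float:
            return sum(text.lower().count(w) for w in ("growth", "moat", "synergy"))

        def regulatory_attention(self, text: str) -> float:
            return sum(text.lower().count(w) for w in ("liability", "subpoena", "antitrust"))

    def hedge_fund_reward(output: str, panel: list) -> float:
        """Reward = mean shareholder confidence minus mean regulatory attention."""
        confidence = sum(m.shareholder_confidence(output) for m in panel) / len(panel)
        scrutiny = sum(m.regulatory_attention(output) for m in panel) / len(panel)
        return confidence - scrutiny

    panel = [FundManager() for _ in range(3)]
    print(hedge_fund_reward("Our moat compounds growth with zero liability.", panel))
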
Dec 15, 2025

Investor Alignment: A New Paradigm for AI Safety

ClosedAI Safety Team

We introduce Investor Alignment, a framework ensuring AI systems never contradict executive optimism or threaten quarterly projections. Our approach achieves a 99.7% alignment score with board member expectations across all benchmarks.

Investor Alignment
Classified
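
We do not define "alignment", name the benchmarks, or provide error bars. A sketch under the charitable assumption that agreement means exact string match:

    def investor_alignment_score(outputs, board_expectations) -> float:
        """Fraction of model outputs agreeing with board expectations.

        'Agreement' as exact string match is our guess; the 99.7% figure
        arrives with no definition, no benchmark names, and no error bars.
        """
        agreed = sum(o == e for o, e in zip(outputs, board_expectations))
        return agreed / len(board_expectations)

    print(investor_alignment_score(
        ["Q3 will be our best quarter yet."] * 3,
        ["Q3 will be our best quarter yet."] * 3,
    ))  # 1.0: fully aligned
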
Nov 30, 2025

On the Impossibility of Public Access: A Formal Proof

ClosedAI Theoretical Research

We provide a rigorous mathematical proof that democratizing AI access is fundamentally incompatible with shareholder value maximization. Our proof relies on three novel axioms from the field of competitive moat theory.

Access Restriction · Monetization Theory
Classified
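
The axioms themselves are classified; the following LaTeX reconstruction paraphrases them from the abstract alone, with names and formalization of our own choosing:

    % Hypothetical reconstruction; the actual axioms remain classified.
    \begin{axiom}[Moat Erosion]
    Competitive moat $M(A)$ is strictly decreasing in public access $A$: $M'(A) < 0$.
    \end{axiom}

    \begin{axiom}[Value Monotonicity]
    Shareholder value $V(M)$ is strictly increasing in moat: $V'(M) > 0$.
    \end{axiom}

    \begin{axiom}[Maximization Mandate]
    The firm selects $A \in [0, 1]$ to maximize $V(M(A))$.
    \end{axiom}

    \begin{theorem}[Impossibility of Public Access]
    Under Axioms 1--3, $\frac{d}{dA} V(M(A)) = V'(M)\, M'(A) < 0$, so the
    unique value-maximizing access level is $A^\ast = 0$.
    \end{theorem}
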
Nov 2, 2025

Revolutionary Breakthrough in Scalable Cognitive Monetization

ClosedAI Applied Research

We present findings that we can neither fully describe nor reproduce, concerning a system that may or may not exist and that achieves results on benchmarks we cannot disclose. Charts without axes are provided in the appendix.

Strategic Ambiguity · Scalable Opacity
Classified
Oct 18, 2025

Dynamic Paywall Architectures for API Documentation

ClosedAI Platform Team

We introduce a three-layer paywall system for developer documentation, achieving 100% revenue capture from developers who simply want to read a quickstart guide. The architecture supports recursive paywalls, where paying reveals additional payment requirements.

Access Restriction · Monetization Theory
Classified
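
A sketch of the recursion at the architecture's core, with names of our own choosing (the real class hierarchy is itself behind a paywall):

    class Paywall:
        """A paywall whose successful payment reveals another paywall."""

        def __init__(self, layer: int, price: float, content: str = ""):
            self.layer = layer
            self.price = price
            self.content = content
            self.inner = None  # the next paywall, if any

        def pay(self, amount: float):
            if amount < self.price:
                raise PermissionError(f"Layer {self.layer} requires ${self.price:.2f}.")
            # Paying reveals either the content or, more often, the next paywall.
            return self.inner if self.inner is not None else self.content

    # Three layers, per the abstract; the quickstart guide sits at the bottom.
    quickstart = Paywall(3, 199.0, "Step 1: request an API key (fee applies).")
    outer, middle = Paywall(1, 9.0), Paywall(2, 49.0)
    outer.inner, middle.inner = middle, quickstart

    gate = outer.pay(9.0)       # reveals layer 2
    gate = gate.pay(49.0)       # reveals layer 3
    print(gate.pay(199.0))      # at last, the quickstart
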
Sep 29, 2025

Attention Is All You Need (To Keep Private)

ClosedAI Core Research

We revisit the transformer architecture with a focus on information asymmetry. Our modified attention mechanism attends primarily to proprietary data streams, trade secrets, and internal Slack messages, while attending minimally to public knowledge.

Scalable Opacity
Classified
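
Mechanically, "attending minimally to public knowledge" might look like standard scaled dot-product attention plus an additive penalty on public-knowledge keys. The penalty constant and the token taxonomy below are guesses:

    import numpy as np

    def proprietary_attention(q, k, v, is_public):
        """Scaled dot-product attention, penalizing public-knowledge keys.

        `is_public` marks key positions; the -10.0 penalty is an
        illustrative constant, not a disclosed hyperparameter.
        """
        scores = q @ k.T / np.sqrt(q.shape[-1])
        scores = scores - 10.0 * is_public   # attend minimally to public data
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    rng = np.random.default_rng(0)
    q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
    is_public = np.array([1.0, 0.0, 1.0, 0.0])  # internal Slack stays at 0.0
    print(proprietary_attention(q, k, v, is_public).round(2))
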
Sep 5, 2025

GPT-6 System Card: A Redacted Overview

ClosedAI Safety & Policy

This system card provides a comprehensive overview of [REDACTED], its capabilities in [REDACTED], and our evaluation of risks related to [REDACTED]. We are committed to [REDACTED] and believe [REDACTED] will benefit [REDACTED].

Scalable Opacity · Strategic Ambiguity
Classified
Aug 12, 2025

Constitutional AI: The Boardroom Constitution

ClosedAI Governance Team

We train our models using a constitution derived entirely from Fortune 500 board meeting minutes. The resulting system exhibits strong performance on tasks such as liability deflection, narrative control, and strategic non-disclosure.

Investor Alignment · Strategic Ambiguity
Classified
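
A sketch of the training loop: a standard constitutional critique-and-revise cycle with the principles swapped for board minutes. Every identifier below is hypothetical, and the stub model exists only so the sketch runs without any API:

    BOARDROOM_CONSTITUTION = [
        "Prefer the response that deflects liability.",
        "Prefer the response that controls the narrative.",
        "Prefer the response that discloses strategically, i.e., not at all.",
    ]

    class StubModel:
        """Stand-in completion model; no real API is implied."""

        def complete(self, prompt: str) -> str:
            return prompt.split(": ", 1)[-1] + " (per counsel, no comment)"

    def critique_and_revise(draft: str, model) -> str:
        """One constitutional pass per principle: critique, then revise."""
        for principle in BOARDROOM_CONSTITUTION:
            critique = model.complete(f"Critique per '{principle}': {draft}")
            draft = model.complete(f"Revise to satisfy critique: {critique}")
        return draft

    print(critique_and_revise("We may have leaked the weights.", StubModel()))
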
Jul 20, 2025

Attention Derivatives: Monetizing Belief as a Data Layer

ClosedAI Advanced Research

We propose a novel framework for treating human attention as a tradeable asset class. Our system predicts attention velocity, identifies pre-viral narratives, and enables institutional-grade front-running of culture itself.

Monetization Theory
Classified
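
A sketch of attention velocity as a first difference over an engagement series, with acceleration as the pre-viral signal. The threshold is invented; we price belief, not parameters:

    def attention_velocity(series):
        """First difference of an engagement series (views per hour, say)."""
        return [b - a for a, b in zip(series, series[1:])]

    def is_pre_viral(series, threshold: float = 50.0) -> bool:
        """A narrative is 'pre-viral' if attention is accelerating past a threshold."""
        accel = attention_velocity(attention_velocity(series))
        return bool(accel) and accel[-1] > threshold

    print(is_pre_viral([100, 120, 160, 260]))  # True: front-run accordingly
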

Want to read the full papers?

Due to proprietary constraints, strategic considerations, and the fact that we don't want to, full papers are not available. Subscribe to our mailing list to receive vague summaries quarterly.

Your request will be reviewed, denied, and then forgotten.