The deterministic layer for a probabilistic world.

We are opening the black box of artificial intelligence. CTGT understands what AI models know and controls what they do with that knowledge.
Who We Are

CTGT is a product-focused frontier interpretability lab solving a fundamental problem everyone else has worked around.

For years, the field treated AI, particularly generative models, as inscrutable. You could train them. You could prompt them. But you couldn't understand them. So the industry built guardrails, filters, and hope.

We took a different path. We opened the black box.

CTGT's research revealed how to isolate and modify the specific features inside neural networks that govern behavior. Not through retraining or prompting, but through direct, surgical intervention at the representation level.
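As an illustrative sketch only, and not CTGT's actual method, representation-level intervention can be pictured as shifting a model's hidden activations along a known feature direction at inference time. The function and values below are hypothetical:

```python
import numpy as np

def steer(hidden_state, feature_direction, strength=-1.0):
    """Shift a hidden activation along a known feature direction.

    A strength of -1.0 projects the feature out entirely;
    positive values amplify it instead.
    """
    unit = feature_direction / np.linalg.norm(feature_direction)
    # How strongly the activation currently expresses the feature.
    coeff = hidden_state @ unit
    # Move the activation along the direction, scaled by that coefficient.
    return hidden_state + strength * coeff * unit

# Toy example: a 4-dimensional activation and a one-hot feature direction.
h = np.array([1.0, 2.0, 0.5, -1.0])
d = np.array([0.0, 1.0, 0.0, 0.0])
h_suppressed = steer(h, d, strength=-1.0)  # feature component removed
```

The key property, under these assumptions, is that no retraining or prompting is involved: the weights are untouched, and the edit is a direct linear operation on the representation.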

By connecting tried-and-tested, centuries-old statistical methods with modern machine learning, we uncovered unprecedented insights into deep neural network behavior that yielded dividends even for closed-weight models. We are building the architecture that allows high-trust organizations to rely on AI not just for probability, but for truth.

Why It Matters

CTGT: Connecting Through Generative Thinking

Our name hearkens back to storied institutions that rapidly facilitated world-changing innovation: PARC, NASA. Four letters, fundamental mission.

CTGT was built to offer a different path

Enterprises don’t fail at AI because models aren’t smart enough. They fail because they can’t control how AI behaves in the real world. Fine-tuning, prompt engineering, and human review were never designed to provide durable governance. They are fragile, expensive, and impossible to scale across dozens of workflows.

CTGT was founded on a simple insight

The challenge isn’t getting AI to generate content; it's getting it to do so reliably and in alignment with real-world constraints. Regulations change, business rules evolve, and most AI systems have no native concept of policy or accountability. This forces teams to bolt on governance after the fact.

Our Approach

Govern the output, not the model

The industry's current approaches to AI reliability (prompt engineering, RLHF, output filtering) are duct tape on a leaky pipe. These methods are fragile, expensive, and fundamentally limited because they operate at the wrong layer of abstraction.

Instead, we introduce a governance layer that evaluates every AI output against policy and intervenes when necessary.
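A minimal sketch of what such a layer could look like, assuming a simple rule-based design; the `Policy` structure, rule, and names here are hypothetical and not CTGT's implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A governance rule: a violation check plus a remediation action."""
    name: str
    violates: Callable[[str], bool]
    remediate: Callable[[str], str]

def govern(output: str, policies: list[Policy]) -> tuple[str, list[str]]:
    """Evaluate an AI output against every policy and intervene on violations.

    Returns the (possibly remediated) output plus an audit trail of which
    policies fired, so every intervention is traceable after the fact.
    """
    audit = []
    for policy in policies:
        if policy.violates(output):
            output = policy.remediate(output)
            audit.append(policy.name)
    return output, audit

# Toy policy: soften unqualified financial claims.
no_guarantees = Policy(
    name="no-financial-guarantees",
    violates=lambda text: "guaranteed return" in text.lower(),
    remediate=lambda text: text.replace("guaranteed return", "potential return"),
)

result, trail = govern("This fund has a guaranteed return of 8%.", [no_guarantees])
```

Because the check runs on every output rather than inside the model, the same policy applies unchanged across any model behind it, which is the point of governing at this layer.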

Define behavioral constraints once and apply them universally

Maintain mathematically verifiable guarantees on AI outputs

Deploy AI in regulated environments with full auditability

Eliminate the tradeoff between capability and control

What We Built

CTGT is real-time infrastructure for controllable AI. Our model-agnostic platform sits seamlessly on top of any existing AI deployment.

Our Philosophy

The values behind our system design

First principles over best practices

The field has been cargo-culting techniques that work without understanding why. We insist on mathematical foundations.

Determinism in high stakes domains

Highly probabilistic outputs are fine for some applications. They're unacceptable for financial decisions, medical advice, and journalistic integrity.

Control without compromise

The false choice between capable AI and controllable AI is a symptom of operating at the wrong abstraction layer. We dissolved it.

Transparency as architecture

Every intervention we make is explainable, auditable, and defensible. Trust is the foundation, not a feature.
Impact

Across enterprise deployments and pilots

80–90% reduction in manual AI review effort

70% faster deployment of compliant AI workflows

Zero fine-tuning required to enforce policy

<10 ms real-time remediation latency

100% traceability across governed outputs
Leadership

The team opening the black box

Cyril Gorlla
CEO
Trevor Tuttle
CTO
Ethan Yang
Head of Operations & Strategy
Phelimon Sarpaning
Engineer
Investors

Backed by people who built the tools

Our investors built the foundations we're extending.

François Chollet
Creator of Keras, ARC-AGI
Paul Graham
Co-founder, Y Combinator
Michael Seibel
Co-founder, Twitch
Peter Wang
Co-founder, Anaconda
Mike Knoop
Co-founder, Zapier, Creator, ARC-AGI
Wes McKinney
Creator, Pandas
Taner Halıcıoğlu
First employee, Meta
Geoff Ralston
Creator, Yahoo! Mail. Former President, Y Combinator
Sam Blond
Former CRO, Brex
Institutional Investors

Supported by global institutional capital

Capital partners with a track record of building enduring markets.

Gradient (Google's AI Fund)
General Catalyst
Y Combinator