
For years, the field treated AI, particularly generative models, as inscrutable. You could train them. You could prompt them. But you couldn't understand them. So the industry built guardrails, filters, and hope.
We took a different path. We opened the black box.
CTGT's research revealed how to isolate and modify the specific features inside neural networks that govern behavior. Not through retraining or prompting, but through direct, surgical intervention at the representation level.
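To make "intervention at the representation level" concrete: the general idea (illustrated here in the abstract, not as CTGT's actual method) is to identify a direction in a model's hidden-state space associated with a behavior and shift activations along it at inference time. Every name in this sketch is hypothetical.

```python
import numpy as np

def steer(hidden_state: np.ndarray, feature_dir: np.ndarray, strength: float) -> np.ndarray:
    """Shift a hidden state along a feature direction.

    hidden_state: activation vector from one model layer
    feature_dir:  vector associated with a behavior-linked feature
    strength:     positive amplifies the feature, negative suppresses it
    """
    unit = feature_dir / np.linalg.norm(feature_dir)
    return hidden_state + strength * unit

# Toy example: suppress a feature that is present in an activation.
rng = np.random.default_rng(0)
direction = rng.normal(size=8)                      # hypothetical feature direction
activation = 3.0 * direction + rng.normal(size=8)   # activation containing that feature

# Remove the activation's component along the feature direction.
unit = direction / np.linalg.norm(direction)
coef = float(activation @ unit)
edited = steer(activation, direction, -coef)

# The edited activation now has (numerically) zero projection onto the feature.
print(abs(float(edited @ unit)) < 1e-9)
```

No retraining or prompting is involved: the model's weights are untouched, and the edit is applied directly to the intermediate representation.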
By connecting tried-and-tested, centuries-old statistical methods with modern machine learning, we uncovered unprecedented insights into deep neural network behavior that paid dividends even for closed-weight models. We are building the architecture that allows high-trust organizations to rely on AI not just for probability, but for truth.
Our name hearkens back to storied institutions that rapidly facilitated world-changing innovation: PARC, NASA. Four letters, fundamental mission.
Enterprises don’t fail at AI because models aren’t smart enough. They fail because they can’t control how AI behaves in the real world. Fine-tuning, prompt engineering, and human review were never designed to provide durable governance. They are fragile, expensive, and impossible to scale across dozens of workflows.
The challenge isn’t getting AI to generate content; it’s getting it to do so reliably and in alignment with real-world constraints. Regulations change, business rules evolve, and most AI systems have no native concept of policy or accountability. That forces teams to bolt governance on after the fact.

The industry's current approaches to AI reliability (prompt engineering, RLHF, output filtering) are duct tape on a leaky pipe. These methods are fragile, expensive, and fundamentally limited because they operate at the wrong layer of abstraction.
Instead, we introduce a governance layer that evaluates every AI output against policy and intervenes when necessary.
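As a schematic illustration only, not a description of CTGT's actual system, a governance layer of this kind can be modeled as a wrapper that evaluates each output against declarative policy rules and intervenes when one is violated. All rule names and patterns below are hypothetical.

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PolicyRule:
    name: str
    violates: Callable[[str], bool]  # returns True when the text breaks the rule

def govern(output: str, rules: List[PolicyRule]) -> Tuple[bool, List[str]]:
    """Evaluate one AI output against every rule.

    Returns (allowed, violated_rule_names). A caller would release the
    output only when allowed is True, and otherwise intervene (block,
    redact, or escalate for review).
    """
    violations = [r.name for r in rules if r.violates(output)]
    return (len(violations) == 0, violations)

# Hypothetical policy: no US Social Security numbers, no internal codenames.
rules = [
    PolicyRule("no-ssn", lambda t: bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", t))),
    PolicyRule("no-codenames", lambda t: "Project Falcon" in t),
]

allowed, violated = govern("The customer's SSN is 123-45-6789.", rules)
print(allowed, violated)   # False ['no-ssn']
```

Because the rules are data rather than prompt text, they can be updated as regulations and business rules change, without retraining or re-prompting the underlying model.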

CTO
Trevor built hyperscale distributed systems for machine learning at MLSys@UCSD.
CEO
Cyril left his research at Stanford at 23 to found CTGT. His work on efficient and interpretable AI was presented at ICLR while he was the Endowed Chair's Fellow at UC San Diego.
Cyril has briefed the White House and Congress on AI safety and the future of American AI competitiveness. His work has been covered by the Wall Street Journal, TechCrunch, and InfoWorld.
He is a Nordson Leadership Scholar and Ivory Bridges Fellow.
Our investors built the foundations we're extending.

Capital partners with a track record of building enduring markets.