Discover how CTGT enables fine-grained control over AI without retraining, ensuring models remain both accurate and adaptable.
It's no secret that we are reaching the limits of AI compute. This challenge has been highlighted by some of the brightest minds in AI, including Ilya Sutskever, now at SSI. Training runs for large models can cost tens of millions of dollars because of the cost of chips. The costs have climbed so high that Anthropic has estimated it could cost as much to update Claude as it did to develop it in the first place. Companies like Amazon are spending billions to build new AI data centers in an effort to keep up with compute demands.

But maybe all of this isn't necessary. With a better foundational understanding of how AI works, we can approach model training and deployment in new ways that require a fraction of the energy and compute. As the Endowed Chair's Fellow at the University of California San Diego, I spent the past five years singularly focused on solving this problem. In 2023, I was invited to give a presentation at ICLR examining a new way of evaluating and training AI models that was an order of magnitude faster and achieved three nines (99.9%) of accuracy, a huge leap over current methods.