An open toolkit for composable, automatic, and scalable learning
Composable
To quickly assemble your applications
Automatic
To automatically tune your models
Scalable
To efficiently train your large models
For machine learning in the real world
Learn more
Projects
Examples
Scale Across GPUs with Minimal Coding
A novel TensorFlow training engine for distributed deep learning
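As a rough illustration of what "minimal coding" means here, the sketch below mirrors the AutoDist getting-started pattern: a resource spec file (the conventional `resource_spec.yml`, listing the nodes and GPUs) is passed to `AutoDist`, ordinary TensorFlow code is wrapped in `autodist.scope()`, and a distributed session runs it across the cluster. The toy linear model and file name are illustrative only; consult the AutoDist documentation for the current API.

```python
import numpy as np
import tensorflow as tf
from autodist import AutoDist

# resource_spec.yml (conventional name in the AutoDist docs) lists the
# machines and GPUs to distribute training across.
autodist = AutoDist(resource_spec_file="resource_spec.yml")

# Toy data for a linear model y = 3x + 2 + noise (illustrative only).
inputs = np.random.randn(1000).astype(np.float32)
targets = 3.0 * inputs + 2.0 + 0.1 * np.random.randn(1000).astype(np.float32)

# Ordinary single-node TensorFlow code, wrapped in autodist.scope();
# AutoDist chooses and applies a distribution strategy automatically.
with tf.Graph().as_default(), autodist.scope():
    W = tf.Variable(0.0)
    b = tf.Variable(0.0)
    optimizer = tf.optimizers.SGD(0.01)

    def train_step():
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(W * inputs + b - targets))
        grads = tape.gradient(loss, [W, b])
        train_op = optimizer.apply_gradients(zip(grads, [W, b]))
        return loss, train_op

    fetches = train_step()
    # The distributed session executes the same graph across the cluster.
    session = autodist.create_distributed_session()
    for _ in range(10):
        loss, _ = session.run(fetches)
        print(loss)
```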
CASL Updates
Latest updates and news about CASL
![Forte logo](assets/updates/forte-dark-text.png)
Building a Question Answering System Part 3: Answer Extraction
![Forte logo](assets/updates/forte-dark-text.png)
Building a Question Answering System Part 2: Document Retrieval
![Tuun logo](pages/casl-summary-assets/assets/tuun.png)
Introducing Tuun, an Open Source System for Hyperparameter Tuning via Uncertainty Modeling
![Forte logo](assets/updates/forte-dark-text.png)
Building a Q&A System Part 1: Query Understanding in 18 lines
![Tuun logo](pages/casl-summary-assets/assets/tuun.png)
Improving AI models through Automatic Data Augmentation using Tuun
![AdaptDL logo](assets/updates/adaptdl-text.png)
![PyTorch logo](assets/updates/pytorch.png)
Optimizing Elastic Deep Learning in GPU Clusters with AdaptDL for PyTorch
![AdaptDL logo](assets/updates/adaptdl-text.png)
![PyTorch logo](assets/updates/pytorch.png)
AdaptDL is now featured in the PyTorch Ecosystem!
![Forte logo](assets/updates/forte-dark-text.png)
Forte: Building Modular and Re-purposable NLP Pipelines
![NNI AdaptDL logo](assets/updates/nni_adaptdl_logo.jpeg)
We have integrated AdaptDL with NNI for cost-effective hyperparameter tuning
![AdaptDL logo](assets/updates/adaptdl.png)
![Association for the Advancement of Artificial Intelligence (AAAI) logo](assets/updates/aaai-small_logo.jpg)
![AutoDist logo](assets/updates/autodist.png)
AdaptDL and AutoDist Tutorial (AAAI 2021)
Simplifying and Automating Parallel Machine Learning via a Programmable and Composable Parallel ML System
![Texar logo](assets/updates/texar.png)
![KDD 2020 logo](assets/updates/kdd_2020_logo.png)
![Forte logo](assets/updates/forte-logo.png)
Texar and Forte Tutorial (KDD 2020)
Learning from All Types of Experiences: A Unifying Machine Learning Perspective (KDD 2020)
![Texar logo](assets/updates/texar.png)
![PyTorch logo](assets/updates/pytorch.png)
Introducing Texar-PyTorch: An ML Library Integrating the Best of TensorFlow into PyTorch
Research and Technology
![USENIX Symposium on Operating Systems Design and Implementation (OSDI) logo](assets/updates/osdi-22-logo.png)
OSDI 2022
Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning
![Association for the Advancement of Artificial Intelligence (AAAI) logo](assets/updates/aaai_logo.png)
AAAI 2021
BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search
Pollux: Co-adaptive Cluster Scheduling for Goodput-Optimized Deep Learning
![Pollux line plot showing lower average JCT (hours) than Optimus+Oracle and Tiresias+TunedJobs across relative job loads](assets/updates/pollux_plot.png)
![Association for the Advancement of Artificial Intelligence (AAAI) logo](assets/updates/aaai_logo.png)
AAAI 2020 Tutorial
Tutorial: Modularizing Natural Language Processing
![Journal of Machine Learning Research (JMLR) logo](assets/updates/jmlr_logo.png)
Journal of Machine Learning Research (JMLR), 2020
Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly
![Neural Information Processing Systems (NeurIPS) logo](assets/updates/neurips_logo.png)
NeurIPS 2020
A Study on Encodings for Neural Architecture Search
![Association for Computational Linguistics (ACL) logo](assets/updates/acl_logo.png)
EMNLP 2020
A Data-Centric Framework for Composable NLP Workflows
![Association for the Advancement of Artificial Intelligence (AAAI) logo](assets/updates/aaai_logo.png)
AAAI 2021
On Trustworthiness of ML Algorithms -- and implications in AI-driven healthcare
![ASYML logo](assets/updates/asyml_logo.png)
ASYML
Machine Learning as Machine Assembly