Amid the craze surrounding large language models (LLMs) and the drive to create ever-bigger neural networks, the field of TinyML and small machine learning models has almost flown under the radar.
One big development to follow in this field is “liquid neural networks,” a new architecture developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
Liquid neural networks use a new mathematical formulation and wiring pattern to create deep learning models that are compact, energy-efficient, and causal. The architecture can address some of the key challenges of current deep learning models and open up new directions for AI research.
I had the pleasure of speaking with Daniela Rus, the director of MIT CSAIL, about her team’s work on liquid neural networks.
Key findings:
The inspiration for LNNs was to create machine learning models that can run on robots and other resource-constrained edge devices
LNNs use a mathematical formulation that is less computationally expensive than traditional ANNs and that stabilizes neurons during training (a rough sketch of the update appears right after this list)
LNNs also use a wiring architecture that is different from traditional neural networks and allows for lateral and recurrent connections within the same layer (see the second sketch after the list)
Rus and her colleagues were able to train an LNN with just 19 neurons to keep a car in its lane, a task for which a traditional ANN would require ~100,000 neurons and ~500,000 parameters
With so few neurons, LNNs are much more interpretable: you can extract a decision tree that corresponds to the firing patterns and, essentially, the decision-making flow in the system (see the last sketch after the list)
Experiments show that LNNs learn causality and focus on the task instead of learning spurious patterns in the environment
One characteristic to take note of is that LNNs only work with time series and sequential data—you can’t use them for static datasets such as ImageNet
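
To make the formulation point a bit more concrete, here is a rough sketch of the kind of update a liquid time-constant neuron performs. This is not MIT’s code; the variable names, the sigmoid gate, and the fixed-step solver are my own illustrative assumptions. The key idea is that a small nonlinear function of the input and the hidden state modulates each neuron’s time constant, and the fused update keeps the state bounded, which is part of why training stays stable.

    import numpy as np

    def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.02):
        # x: hidden state (n,), u: input at this time step (m,)
        # tau: per-neuron base time constants (n,), A: per-neuron ODE bias (n,)
        # Nonlinear gate driven by the input and the recurrent state;
        # the sigmoid keeps it in (0, 1), so the denominator below stays > 1.
        f = 1.0 / (1.0 + np.exp(-(W_in @ u + W_rec @ x + b)))
        # Fused Euler-style step of dx/dt = -(1/tau + f) * x + f * A.
        # The effective time constant shrinks or grows with the input,
        # which is where the "liquid" in liquid neural networks comes from.
        return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

Because the denominator is always greater than one, the state gets pulled toward a bounded range instead of blowing up, and each step only costs a couple of small matrix-vector products.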
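
On the wiring point: in a standard fully connected recurrent layer, every neuron is connected to every other neuron. One simple way to picture the sparser LNN-style wiring, with only selected lateral and recurrent connections within a layer, is a binary mask over the recurrent weight matrix. Again, this is an illustrative sketch with made-up numbers, not the actual wiring algorithm the CSAIL team uses.

    import numpy as np

    n = 19                                    # e.g. the 19-neuron lane-keeping network
    rng = np.random.default_rng(0)
    W_rec = rng.standard_normal((n, n)) * 0.1
    # Keep only ~25% of the possible same-layer (lateral) connections.
    mask = (rng.random((n, n)) < 0.25).astype(float)
    np.fill_diagonal(mask, 1.0)               # let each neuron feed back on itself
    W_rec_sparse = W_rec * mask               # disallowed connections are zeroed out

In the real architecture the sparsity pattern is structured rather than random, but the principle is the same: the mask decides which same-layer connections exist, and everything else is simply absent.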
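
And on interpretability: with only 19 neurons, you can log the activations while the model drives and fit a shallow decision tree to approximate its decision flow. The snippet below uses fake data purely for illustration; a real analysis would use recorded neuron activations and the corresponding steering commands.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    activations = rng.standard_normal((5000, 19))             # fake logged activations of 19 neurons
    steer_left = (activations[:, 2] - activations[:, 7] > 0)  # fake steering decisions
    tree = DecisionTreeClassifier(max_depth=3)
    tree.fit(activations, steer_left.astype(int))
    # Print the tree as readable if/else rules over individual neurons.
    print(export_text(tree, feature_names=[f"neuron_{i}" for i in range(19)]))

With a network of 100,000 neurons, a surrogate tree like this tells you very little; at 19 neurons, the firing patterns map to something close to a readable flow chart.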
Read my full piece on VentureBeat.
Some goodies:
If you want to dig deep into how different ML algorithms work, I strongly recommend Machine Learning Algorithms, 2nd Edition by Giuseppe Bonaccorso. You’ll find the inner workings of more than a dozen different algorithms. (Read my review of the book here.)
If you want to review the history of deep learning and how we went from simple perceptrons to where we are today, I recommend The Deep Learning Revolution by Terrence Sejnowski, one of the pioneers of the field. (Read my review of the book and interview with Dr. Sejnowski here.)
For more on AI research: