Build Your Own Liquid Neural Network with PyTorch

By Tim Cvetko, Apr 2024


Why LNNs are so Fascinating — 2024 Overview

For the past 35 years, we have built probabilistic models that output predictions based on data and learned parameters (θ). Each neuron acts as a logistic regression gate. Tie that to backpropagation, the mechanism that updates parameter weights based on model loss, and you get neural networks.
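To make that concrete, here is a minimal PyTorch sketch of one such neuron and a single backpropagation step (the shapes, batch size, and learning rate are illustrative assumptions, not from the article):

```python
import torch
import torch.nn as nn

# A single "neuron": a logistic regression gate over its inputs.
neuron = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())

x = torch.randn(8, 4)                     # batch of 8 examples, 4 features each
y = torch.randint(0, 2, (8, 1)).float()   # binary targets

# Backpropagation: compute the loss, differentiate it with respect to the
# parameters (θ), and nudge the weights against the gradient.
loss = nn.functional.binary_cross_entropy(neuron(x), y)
loss.backward()
with torch.no_grad():
    for p in neuron.parameters():
        p -= 0.1 * p.grad
```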

Image: Neural Network Architecture

Neural networks, however, have some limitations in the modern world:

  1. They perform well on a single, well-defined task but cannot generalize knowledge across tasks, i.e., their learned states are fixed ("solid").
  2. They process data non-sequentially, all at once rather than as a stream, making them inefficient at handling real-time data.

Solution: “a type of neural network that learns on the job, not only during the training phase.”

That’s what we refer to as LNNs — Liquid Neural Networks.

Liquid Neural Networks (LNNs) are a type of neural network that processes data sequentially and adapts to changing data in real-time, much like the human brain.

Image: LNN Architecture

A Liquid Neural Network is a time-continuous Recurrent Neural Network (RNN) that processes data sequentially, keeps a memory of past inputs, adjusts its behavior based on new inputs, and can handle variable-length inputs, enhancing the task-understanding capabilities of NNs.

This adaptability lets them continue learning after deployment and, ultimately, process time-series data more effectively than traditional neural networks.
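To ground the idea, here is a minimal PyTorch sketch of a liquid cell, loosely following the liquid time-constant formulation; the class name, layer sizes, and explicit Euler step are assumptions for illustration, not the authors' reference implementation:

```python
import torch
import torch.nn as nn

class LiquidCell(nn.Module):
    """A minimal liquid time-constant (LTC-style) cell sketch."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gate = nn.Linear(input_size + hidden_size, hidden_size)
        self.tau = nn.Parameter(torch.ones(hidden_size))   # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_size))    # equilibrium targets

    def forward(self, x, h, dt=0.1):
        # ƒ(x, h) modulates both the decay rate and the pull toward A,
        # so the effective time constant depends on the current input.
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        dh = -h / self.tau + f * (self.A - h)   # dh/dt of the liquid ODE
        return h + dt * dh                      # one explicit Euler step
```

Because the state update depends on the input itself, the cell keeps adjusting its dynamics as new data arrives, which is the "learning on the job" behavior described above.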

A continuous-time neural network is a neural network ƒ whose hidden state x(t) evolves as

dx(t)/dt = ƒ(x(t), I(t), t, θ)

where I(t) is the input at time t (equation from the LNN authors).

If ƒ parameterizes the derivatives of the hidden state, we move from a discrete computational graph to a continuous-time graph. This gives LNNs the following two properties:

  1. The space of possible functions is much larger due to liquid states.
  2. Arbitrary time steps: the state can be advanced by any time increment, so inputs need not arrive at fixed intervals (see the sketch below).
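Property 2 follows from the state being advanced by an explicit time increment: the same cell can consume samples that arrive at irregular intervals. A short usage sketch with the illustrative LiquidCell above (timestamps and sizes are made up):

```python
import torch

cell = LiquidCell(input_size=3, hidden_size=16)
h = torch.zeros(1, 16)

# Irregularly sampled stream, e.g. sensor readings with uneven gaps.
timestamps = [0.1, 0.15, 0.4, 1.0]
inputs = [torch.randn(1, 3) for _ in timestamps]

prev_t = 0.0
for t, x in zip(timestamps, inputs):
    h = cell(x, h, dt=t - prev_t)   # step the ODE by the actual time gap
    prev_t = t
```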


