For the past 35 years, we have built probabilistic models that output predictions based on data and learned parameters (θ). Each neuron is, in effect, a logistic regression unit. Tie that to backpropagation, the algorithm that adjusts parameter weights based on the model's loss, and you get neural networks.
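To make the "neuron as a logistic regression gate plus backpropagation" picture concrete, here is a minimal sketch: a single sigmoid unit trained by gradient descent on a toy dataset. All names and data are illustrative, not from the original text.

```python
import math

# One "neuron" as a logistic regression gate: sigmoid(w*x + b),
# with w and b updated by gradient descent on the cross-entropy loss.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: learn to predict y = 1 when x > 0 (illustrative only)
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b, lr = 0.0, 0.0, 0.5

for _ in range(200):
    for x, y in data:
        p = sigmoid(w * x + b)   # forward pass
        grad = p - y             # dLoss/dz for cross-entropy + sigmoid
        w -= lr * grad * x       # backpropagate to the weight
        b -= lr * grad           # ... and to the bias

print(sigmoid(w * 2.0 + b))      # high probability of class 1 for x = 2
```

Stacking many such units in layers, and backpropagating the loss gradient through all of them, is exactly the classical neural network the article describes.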
Neural networks, however, have some limitations in the modern world:
- They perform well on the specific tasks they were trained for, but cannot generalize knowledge across tasks, i.e., they have fixed ("solid") states.
- They process data non-sequentially, making them inefficient at handling real-time, streaming data.
Solution: “a type of neural network that learns on the job, not only during the training phase.”
That’s what we refer to as LNNs — Liquid Neural Networks.
Liquid Neural Networks (LNNs) are a type of neural network that processes data sequentially and adapts to changing data in real-time, much like the human brain.
A Liquid Neural Network is a time-continuous Recurrent Neural Network (RNN) that processes data sequentially, keeps the memory of past inputs, adjusts its behaviors based on new inputs, and can handle variable-length inputs to enhance the task-understanding capabilities of NNs.
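The sequential, memory-keeping behavior described above is inherited from recurrent networks. A minimal sketch of that idea (all weights and function names here are illustrative): a vanilla RNN cell folds a variable-length sequence into a single hidden state, one step at a time.

```python
import math

# Minimal vanilla RNN cell: the new state depends on BOTH the past
# state h and the current input x, so order and history matter.

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    return math.tanh(w_h * h + w_x * x)

def encode(sequence, h0=0.0):
    """Fold a variable-length sequence into one hidden state."""
    h = h0
    for x in sequence:      # one step per element, in order
        h = rnn_step(h, x)
    return h

# Variable-length inputs are handled naturally, and reordering
# the same values produces a different final state.
print(encode([1.0, 0.0, -1.0]))
print(encode([-1.0, 0.0, 1.0]))
```

An LNN keeps this sequential structure but, as the next section explains, replaces the discrete update step with continuous-time dynamics.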
Their adaptable nature gives them the ability to continually learn and adapt and, ultimately, process time-series data more effectively than traditional neural networks.
A continuous-time neural network is a neural network ƒ that, instead of computing the hidden state directly, parameterizes the derivative of the hidden state:

dx(t)/dt = ƒ(x(t), I(t), t, θ)

where x(t) is the hidden state, I(t) is the input, t is time, and θ are the learned parameters. Because ƒ parameterizes the derivative of the hidden state, we can go from a discrete computational graph to a continuous-time graph. This gives LNNs the following two properties:
- The space of possible functions is much larger due to liquid states.
- Arbitrary time step…
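The arbitrary-time-step property can be illustrated with a simple numerical sketch. Since ƒ defines dh/dt, we can integrate the hidden state with any step size, including the irregular gaps between real-world timestamps. The dynamics, parameters, and Euler integrator below are illustrative assumptions, not the LNN formulation itself.

```python
import math

# Sketch of a continuous-time hidden state: f parameterizes dh/dt,
# so the step size can come from the data's own (irregular) timestamps.

def dh_dt(h, x, tau=1.0, w=1.0, b=0.0):
    # Leaky dynamics: decay toward 0 plus an input-driven nonlinearity.
    return -h / tau + math.tanh(w * x + b)

def integrate(inputs, times, h0=0.0):
    """Forward-Euler integration over irregularly spaced timestamps."""
    h, t_prev = h0, times[0]
    for x, t in zip(inputs, times):
        dt = t - t_prev          # arbitrary time step, taken from the data
        h += dt * dh_dt(h, x)
        t_prev = t
    return h

# Gaps of 0.1s, 0.5s, and 2.0s are all handled by the same update rule.
print(integrate([0.5, 1.0, -0.3, 0.0], [0.0, 0.1, 0.6, 2.6]))
```

A discrete RNN would need one update per fixed tick; here the same network handles any sampling pattern, which is what makes LNNs a natural fit for real-time and time-series data.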