💭 My experience as an Applied Scientist Intern at Amazon
📖 TL;DR: Predictive coding makes the loss landscape of feedforward neural networks more benign and robust to vanishing gradients.
📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.
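For intuition on what "1st-order updates on neurons" means, here is a minimal sketch of one standard predictive-coding formulation for a feedforward network (my own illustration under the usual Gaussian-energy assumption, not the exact setup analyzed in the post; the layer sizes, step sizes `gamma` and `alpha`, and the tanh nonlinearity are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                                     # input, two hidden layers, output
Ws = [rng.normal(0, 0.3, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]
f = np.tanh
df = lambda z: 1.0 - np.tanh(z) ** 2

def pc_step(x_in, y, Ws, n_infer=50, gamma=0.1, alpha=0.01):
    """One predictive-coding step on a single (input, target) pair.
    Energy: F = sum_l 0.5 * ||x_l - W_l f(x_{l-1})||^2, with x_0 and x_L clamped."""
    L = len(Ws)
    # Initialize neuron activities with a feedforward pass, then clamp the output to the target.
    xs = [x_in]
    for W in Ws:
        xs.append(W @ f(xs[-1]))
    xs[-1] = y

    # Inference phase: 1st-order (gradient) updates on the hidden-layer activities x_1 .. x_{L-1}.
    for _ in range(n_infer):
        eps = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]   # prediction errors per layer
        for l in range(1, L):
            grad_x = eps[l - 1] - df(xs[l]) * (Ws[l].T @ eps[l])
            xs[l] = xs[l] - gamma * grad_x

    # Weight update: local, Hebbian-like gradient of the energy w.r.t. each W_l.
    eps = [xs[l + 1] - Ws[l] @ f(xs[l]) for l in range(L)]
    for l in range(L):
        Ws[l] = Ws[l] + alpha * np.outer(eps[l], f(xs[l]))
    return Ws

Ws = pc_step(rng.normal(size=4), np.array([1.0, -1.0]), Ws)
```

After the activities have relaxed, the weight rule is no longer the plain backprop gradient; the post explains in what sense it behaves like a 2nd-order update.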
This is the first post of a short series on the infinite-width limits of deep neural networks (DNNs). We start by reviewing the correspondence between neural networks and Gaussian Processes (GPs).
This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian Processes (GPs), finding that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.
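The width-to-Gaussian claim is easy to check numerically. The sketch below (my own illustration, not code from the posts) samples many randomly initialized one-hidden-layer tanh networks at increasing widths and tracks the excess kurtosis of a single output, which should approach zero as the output distribution approaches a Gaussian; the 1/sqrt(width) scaling and the choice of nonlinearity are assumptions made for the demo.

```python
import numpy as np

def random_net_output(x, width, rng):
    """Output of a random one-hidden-layer tanh network with 1/sqrt(fan-in) weight scaling."""
    d = x.shape[0]
    W1 = rng.normal(size=(width, d)) / np.sqrt(d)
    W2 = rng.normal(size=(1, width)) / np.sqrt(width)
    return (W2 @ np.tanh(W1 @ x)).item()

rng = np.random.default_rng(0)
x = rng.normal(size=3)                       # one fixed input

for width in [1, 10, 100, 1000]:
    samples = np.array([random_net_output(x, width, rng) for _ in range(2000)])
    # Excess kurtosis tends to 0 as the output distribution becomes Gaussian.
    kurt = np.mean((samples - samples.mean()) ** 4) / samples.var() ** 2 - 3.0
    print(f"width={width:5d}  var={samples.var():.3f}  excess kurtosis={kurt:+.3f}")
```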
🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks? I was too, so here’s a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).
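As a rough sketch of the structural difference (my own illustration, not the paper's code): an MLP layer applies a learnable linear map followed by a fixed nonlinearity on each node, while a KAN layer places a learnable univariate function on every edge and sums them per output node. Below, the edge functions are parameterized with a few fixed Gaussian bumps and learnable coefficients, standing in for the B-splines used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_basis = 3, 2, 5

def mlp_layer(x, W, b):
    """MLP layer: learnable linear map, fixed nonlinearity on each output node."""
    return np.tanh(W @ x + b)

def kan_layer(x, coeffs, centers, width=0.5):
    """KAN layer: a learnable univariate function phi_ij on each edge (i, j), summed per output i.
    Here phi_ij(t) = sum_k coeffs[i, j, k] * exp(-((t - centers[k]) / width) ** 2)."""
    basis = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)   # (d_in, n_basis)
    return np.einsum("ijk,jk->i", coeffs, basis)                      # out[i] = sum_j phi_ij(x[j])

x = rng.normal(size=d_in)
W, b = rng.normal(size=(d_out, d_in)), np.zeros(d_out)                # MLP parameters
coeffs = rng.normal(size=(d_out, d_in, n_basis))                      # KAN edge-function coefficients
centers = np.linspace(-2.0, 2.0, n_basis)

print("MLP layer output:", mlp_layer(x, W, b))
print("KAN layer output:", kan_layer(x, coeffs, centers))
```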
I recently came across the paper Thermodynamic Natural Gradient Descent by Normal Computing. I found it very interesting, so below is my brief take on it.