Posts by Tags

Amazon

Applied Scientist

Backpropagation

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.
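A minimal numerical sketch of the idea (my own illustrative two-layer linear example, not the post's code): neuron activities are relaxed by 1st-order gradient descent on an energy, and the equilibrated prediction errors then drive purely local weight updates.

```python
import numpy as np

# Toy two-layer linear predictive-coding network. Energy to minimize:
#   E = 1/2 ||x1 - W1 x0||^2 + 1/2 ||y - W2 x1||^2
rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(4, 3))
W2 = 0.1 * rng.normal(size=(2, 4))
x0 = rng.normal(size=3)  # input
y = rng.normal(size=2)   # target

def energy(x1):
    return 0.5 * np.sum((x1 - W1 @ x0) ** 2) + 0.5 * np.sum((y - W2 @ x1) ** 2)

# Inference phase: 1st-order gradient descent on the hidden activity x1.
x1 = W1 @ x0             # start from the feedforward prediction
E0 = energy(x1)
for _ in range(50):
    e1 = x1 - W1 @ x0    # prediction error at the hidden layer
    e2 = y - W2 @ x1     # prediction error at the output
    x1 -= 0.1 * (e1 - W2.T @ e2)  # dE/dx1 = e1 - W2^T e2
E1 = energy(x1)          # energy after relaxing the neurons

# Learning phase: local weight updates from the equilibrated errors.
lr = 0.01
W1 += lr * np.outer(e1, x0)  # -dE/dW1 = e1 x0^T
W2 += lr * np.outer(e2, x1)  # -dE/dW2 = e2 x1^T
```

The weight update looks 1st-order, but because it uses activities relaxed to the energy minimum rather than the feedforward values, it differs from plain backpropagation.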

Bayesian Inference

Bayesian Neural Networks

Central Limit Theorem

Deep Information Propagation

Deep Neural Networks

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).
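A toy sketch of the structural difference (my own illustrative example with a Gaussian basis; the paper itself uses B-splines plus SiLU): an MLP puts learnable linear weights on edges and a fixed nonlinearity on nodes, while a KAN puts a learnable univariate function on each edge and only sums on nodes.

```python
import numpy as np

def mlp_layer(x, W):
    # MLP: learnable linear weights on edges, fixed nonlinearity on nodes.
    return np.tanh(W @ x)

def kan_layer(x, coeffs, centers, width=1.0):
    # KAN: a learnable univariate function phi_ij on each edge (here a small
    # sum of Gaussian bumps), and plain summation on the nodes:
    #   out_i = sum_j phi_ij(x_j)
    # coeffs: (n_out, n_in, n_basis), centers: (n_basis,)
    basis = np.exp(-((x[:, None] - centers) / width) ** 2)  # (n_in, n_basis)
    return np.einsum("oib,ib->o", coeffs, basis)

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(2, 3))                 # MLP parameters: one scalar per edge
centers = np.linspace(-2, 2, 5)
coeffs = 0.1 * rng.normal(size=(2, 3, 5))   # KAN parameters: one function per edge

y_mlp = mlp_layer(x, W)                # shape (2,)
y_kan = kan_layer(x, coeffs, centers)  # shape (2,)
```

Note the parameter count: the MLP layer learns one scalar per edge, while the KAN layer learns a whole curve (here 5 basis coefficients) per edge.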

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Fisher Information

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Gaussian Processes

Gradient Descent

Industry

Inference Learning

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Infinite Width Limit

Internship

Interpretability

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

KAN

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Kolmogorov-Arnold Networks

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Kolmogorov-Arnold Representation Theorem

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Local Learning

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Loss Landscape

Machine Learning

Multi-layer Perceptrons

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Natural Gradient Descent

Neural Scaling Laws

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Normal Computing

PhD

Predictive Coding

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Saddle Points

Saddles

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Second-Order

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Second-order Methods

Splines

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Thermodynamic AI

Trust Region

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

Vanishing Gradients