Posts by Tags

Amazon

Bayesian inference

Bayesian neural networks

Fisher information

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.
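Roughly, the mechanism looks like the minimal sketch below (a toy single-hidden-layer setup; names such as lr_z and infer_steps are illustrative, not the post's derivation): neuron activities are relaxed by gradient descent on a prediction-error energy, and the weights are then updated locally from the equilibrated errors.

```python
import numpy as np

# Toy predictive coding step for a single hidden layer, x -> z -> y.
# Names (W1, W2, infer_steps, lr_z, lr_w) are illustrative, not from the post.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
x, y = rng.normal(size=3), rng.normal(size=2)

z = W1 @ x                      # initialise hidden activity at the feedforward value
lr_z, lr_w, infer_steps = 0.1, 0.01, 50

# 1st-order (gradient) updates on the neuron activities z, minimising the
# energy F = 0.5*||z - W1 x||^2 + 0.5*||y - W2 z||^2.
for _ in range(infer_steps):
    e1 = z - W1 @ x             # prediction error at the hidden layer
    e2 = y - W2 @ z             # prediction error at the output
    z -= lr_z * (e1 - W2.T @ e2)

# Local weight updates from the equilibrated errors (gradient descent on F).
W1 += lr_w * np.outer(z - W1 @ x, x)
W2 += lr_w * np.outer(y - W2 @ z, z)
```

Because the activities have already moved to reduce the energy before the weights are touched, the resulting weight step deviates from the plain backprop gradient in a curvature-aware way; the post makes this precise.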

Gaussian processes

KAN

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).
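As a rough sketch of that difference (illustrative only: a learnable cubic polynomial stands in for the paper's B-spline edge functions, and all names here are made up): an MLP layer learns linear weights and applies a fixed nonlinearity at the nodes, whereas a KAN layer learns a univariate function on every edge and simply sums them at the nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 3, 2
x = rng.normal(size=d_in)

# MLP layer: learned linear weights, *fixed* nonlinearity on the nodes.
W = rng.normal(size=(d_out, d_in))
mlp_out = np.tanh(W @ x)

# KAN layer: a *learned* univariate function on every edge (here a cubic
# polynomial standing in for B-splines), and the node just sums the edge
# outputs -- no fixed activation.
coeffs = rng.normal(size=(d_out, d_in, 4))      # one cubic per edge
powers = x[None, :, None] ** np.arange(4)       # (1, d_in, 4)
kan_out = (coeffs * powers).sum(axis=(1, 2))    # sum_j phi_ij(x_j)
```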

Kolmogorov-Arnold networks

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Kolmogorov-Arnold representation theorem

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

Normal Computing

PhD

applied scientist

backpropagation

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

central limit theorem

deep information propagation

deep neural networks

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.
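A quick numerical sanity check of that claim (the widths, sample counts and ReLU choice below are arbitrary assumptions, not from the post): the excess kurtosis of a random one-hidden-layer network's output should shrink towards the Gaussian value of zero as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)  # a fixed input

def random_net_output(width, n_samples=2000):
    # One-hidden-layer ReLU network with 1/sqrt(width) output scaling,
    # sampled n_samples times at random initialisation.
    W1 = rng.normal(size=(n_samples, width, x.size))
    w2 = rng.normal(size=(n_samples, width))
    h = np.maximum(W1 @ x, 0.0)                 # hidden activations
    return (w2 * h).sum(axis=1) / np.sqrt(width)

def excess_kurtosis(s):
    s = (s - s.mean()) / s.std()
    return (s**4).mean() - 3.0                  # zero for a Gaussian

for width in (2, 16, 256):
    print(width, excess_kurtosis(random_net_output(width)))
```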

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

depth-mup

dynamical mean field theory

energy-based models

Energy-based Transformers

3 minute read

Published:

📖 TL;DR: Energy-based Transformers (EBTs) learn a scalar energy function parameterised by a transformer. Empirically, EBTs show promising scaling and reasoning properties on both language and vision tasks.
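Schematically, the idea looks something like the sketch below, where a small MLP stands in for the transformer energy head and all names and sizes are illustrative: a scalar energy is assigned to a (context, candidate) pair, and inference is gradient descent on the candidate, so "thinking longer" means running more inner optimisation steps.

```python
import torch
import torch.nn as nn

# Schematic energy-based model: a network assigns a scalar energy to a
# (context, candidate-output) pair; a small MLP stands in for the transformer.
d = 16
energy_net = nn.Sequential(nn.Linear(2 * d, 64), nn.GELU(), nn.Linear(64, 1))
for p in energy_net.parameters():
    p.requires_grad_(False)     # only the candidate is optimised at inference

def energy(context, candidate):
    return energy_net(torch.cat([context, candidate], dim=-1)).squeeze(-1)

context = torch.randn(1, d)
candidate = torch.zeros(1, d, requires_grad=True)   # initial guess
opt = torch.optim.SGD([candidate], lr=0.1)

# Inference as optimisation: descend the energy with respect to the candidate.
for _ in range(20):
    opt.zero_grad()
    energy(context, candidate).sum().backward()
    opt.step()
```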

energy-based transformers

Energy-based Transformers

3 minute read

Published:

📖 TL;DR: Energy-based Transformers (EBTs) learn a scalar energy function parameterised by a transformer. Empirically, EBTs show promising scaling and reasoning properties on both language and vision tasks.

feature learning

gradient descent

hyperparameter transfer

implicit gradient descent dynamics

In-Context Learning Demystified?

4 minute read

Published:

📖 TL;DR: A transformer block implicitly uses the input context to modify its MLP weights.
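A minimal numerical check of the kind of identity behind this claim (the rank-1 construction below is illustrative; see the post for the actual result): adding the attention output to a token before the MLP's first layer is equivalent to applying a context-dependent, rank-1-updated weight matrix to the token alone.

```python
import numpy as np

# Feeding (token + attention output) into the MLP's first layer W is the same
# as feeding the token alone through a context-dependent, rank-1-updated W.
rng = np.random.default_rng(0)
d_model, d_ff = 8, 32
W = rng.normal(size=(d_ff, d_model))     # first MLP layer weights
x = rng.normal(size=d_model)             # current token representation
a = rng.normal(size=d_model)             # attention output (depends on the context)

delta_W = np.outer(W @ a, x) / (x @ x)   # rank-1, context-dependent weight update

lhs = W @ (x + a)                        # block as usually written
rhs = (W + delta_W) @ x                  # same thing as an implicit weight update
print(np.allclose(lhs, rhs))             # True
```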

in-context learning

In-Context Learning Demystified?

4 minute read

Published:

📖 TL;DR: A transformer block implicitly uses the input context to modify its MLP weights.

industry

inference as optimisation

Energy-based Transformers

3 minute read

Published:

📖 TL;DR: Energy-based Transformers (EBTs) learn a scalar energy function parameterised by a transformer. Empirically, EBTs show promising scaling and reasoning properties on both language and vision tasks.

inference learning

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

infinite width limit

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.

internship

interpretability

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

kernel methods

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.

large language models

In-Context Learning Demystified?

4 minute read

Published:

📖 TL;DR: A transformer block implicitly uses the input context to modify its MLP weights.

lazy learning

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.

linear regime

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.

local learning

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

loss landscape

machine learning

maximal update parameterisation

multi-layer perceptrons

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

mup

natural gradient descent

neural scaling laws

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

neural tangent kernel

♾️ Infinite Widths Part II: The Neural Tangent Kernel

7 minute read

Published:

This is the second post of a short series on the infinite-width limits of deep neural networks (DNNs). Previously, we reviewed the correspondence between neural networks and Gaussian processes (NNGP), showing that, as the number of neurons in the hidden layers grows to infinity, the output of a random network becomes Gaussian distributed.

optimisation theory

predictive coding

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

rich regime

saddle points

saddles

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

second-order method

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

second-order methods

splines

KANs Made Simple

2 minute read

Published:

🤔 Confused about the recent KAN: Kolmogorov-Arnold Networks paper? I was too, so here's a minimal explanation that makes it easy to see the difference between KANs and multi-layer perceptrons (MLPs).

system-2 thinking

Energy-based Transformers

3 minute read

Published:

📖 TL;DR: Energy-based Transformers (EBTs) learn a scalar energy function parameterised by a transformer. Empirically, EBTs show promising scaling and reasoning properties on both language and vision tasks.

tensor programs

thermodynamic AI

transformers

In-Context Learning Demystified?

4 minute read

Published:

📖 TL;DR: A transformer block implicitly uses the input context to modify its MLP weights.

Energy-based Transformers

3 minute read

Published:

📖 TL;DR: Energy-based Transformers (EBTs) learn a scalar energy function parameterised by a transformer. Empirically, EBTs show promising scaling and reasoning properties on both language and vision tasks.

trust region

🧠 Predictive Coding as a 2nd-Order Method

10 minute read

Published:

📖 TL;DR: Predictive coding implicitly performs a 2nd-order weight update via 1st-order (gradient) updates on the neurons, which in some cases allows it to converge faster than backpropagation with standard stochastic gradient descent.

vanishing gradients