We consider an inductive transfer setting and present a geometric characterization of inductive transfer using principles from functional analysis. Inductive transfer involves transferring knowledge to new, unseen tasks while keeping the data distribution the same. For instance, classifying images according to a new, previously unseen class for which only a few examples are provided after training. While prior works have studied transfer learning, gaps remain in identifying when learned models will succeed and when they will fail. We introduce a characterization of inductive transfer, based on Hilbert spaces, which provides intuition about the difficulty of a given transfer learning problem.
Specifically, we characterize transfer using three types:
(Type 1) Interpolation within the convex hull. Tasks that can be represented as a convex combination of observed source tasks.
(Type 2) Extrapolation to the linear span. Tasks that are in the linear span of source tasks, which may lie far from observed data but share meaningful features.
(Type 3) Extrapolation to the Hilbert space. Tasks that are outside the linear span of the source predictors in an infinite-dimensional function space. Type 3 transfer is the most important and challenging form of transfer.
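The three types above can be illustrated in a toy finite-dimensional setting. In this sketch (an illustration, not the paper's method), each source task is represented by its predictor evaluated on a fixed input grid, and a target task is classified by solving a least-squares problem over the source tasks: a small span residual with nonnegative coefficients summing to one indicates Type 1, a small residual with unconstrained coefficients indicates Type 2, and a large residual indicates Type 3. The specific tasks and grid are hypothetical.

```python
import numpy as np

# Toy sketch (assumed finite-dimensional setting): represent each source task
# by its predictor's values on a shared input grid, then test whether a target
# lies (approximately) in the convex hull (Type 1), in the linear span
# (Type 2), or outside the span (Type 3) of the source tasks.

xs = np.linspace(0, 1, 50)                  # shared input grid (hypothetical)
sources = np.stack([xs, xs**2])             # source predictors f1(x)=x, f2(x)=x^2

def classify(target, sources, tol=1e-6):
    # Least-squares coefficients over the linear span of the source tasks.
    coef, *_ = np.linalg.lstsq(sources.T, target, rcond=None)
    span_err = np.linalg.norm(sources.T @ coef - target)
    if span_err > tol:
        return "Type 3 (outside the span)"
    # Inside the span: check the convex-combination constraints.
    if np.all(coef >= -tol) and abs(coef.sum() - 1) <= tol:
        return "Type 1 (convex hull)"
    return "Type 2 (linear span)"

print(classify(0.5 * xs + 0.5 * xs**2, sources))  # Type 1
print(classify(2.0 * xs - 1.0 * xs**2, sources))  # Type 2
print(classify(np.sin(3 * xs), sources))          # Type 3
```

In the infinite-dimensional Hilbert-space setting of the paper, the grid of evaluations is replaced by true inner products between functions, but the same convex hull / span / complement distinction applies.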
Basis functions are a natural solution to inductive transfer learning in a Hilbert space. We build upon the function encoder algorithm, a method for learning neural network basis functions to span an arbitrary function space. While analytical approaches such as Fourier series scale poorly with the dimensionality of the input and output spaces, function encoders scale to high-dimensional spaces because the basis functions are parameterized by neural networks.
We make several improvements to the function encoder algorithm. First, we generalize all definitions to use inner products only, allowing the function encoder to work on any Hilbert space. This generalization allows us to tackle new problems, such as few-shot classification. Second, we introduce a novel training scheme for function encoders using least-squares optimization. This training scheme greatly improves convergence rate and accuracy. Lastly, we prove a universal function space approximation theorem for function encoders, showing that they can approximate any function in a separable Hilbert space to any desired accuracy.
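A minimal sketch of the inner-product-only, least-squares coefficient solve described above. Here the learned neural network basis functions are replaced by fixed hypothetical bases, and the empirical L2 inner product estimated from sampled inputs is an assumption for illustration; only inner products are used, as in the generalization described above.

```python
import numpy as np

# Sketch (illustrative assumptions): fixed bases g_j stand in for learned
# neural network basis functions, and <u, v> is approximated by the empirical
# mean of u*v over sampled inputs.

def inner(u, v):
    # Empirical inner product from function samples.
    return np.mean(u * v)

def coefficients(f_vals, basis_vals):
    # Least-squares fit in the span of the basis: solve G c = b,
    # where G_ij = <g_i, g_j> (Gram matrix) and b_i = <g_i, f>.
    k = len(basis_vals)
    G = np.array([[inner(basis_vals[i], basis_vals[j]) for j in range(k)]
                  for i in range(k)])
    b = np.array([inner(g, f_vals) for g in basis_vals])
    return np.linalg.solve(G, b)

xs = np.random.default_rng(0).uniform(0, 1, 1000)   # sampled inputs
basis = [np.ones_like(xs), xs, xs**2]               # hypothetical basis functions
f = 2 + 3 * xs - xs**2                              # target function samples

c = coefficients(f, basis)
approx = sum(ci * gi for ci, gi in zip(c, basis))
print(np.max(np.abs(approx - f)))                   # near zero: f lies in the span
```

Because the solve touches the data only through inner products, the same routine applies to any Hilbert space once an appropriate inner product is supplied, which is what enables settings such as few-shot classification.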
@article{ingebrand_2025_fe_transfer,
author = {Tyler Ingebrand and
Adam J. Thorpe and
Ufuk Topcu},
title = {Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces},
year = {2025}
}