Function Encoders:
A Principled Approach to Transfer Learning in Hilbert Spaces

The University of Texas at Austin
Under Review

Abstract

A central challenge in transfer learning is designing algorithms that can quickly adapt and generalize to new tasks without retraining. Yet the conditions under which algorithms can effectively transfer to new tasks are poorly characterized. We introduce a geometric characterization of transfer in Hilbert spaces and define three types of inductive transfer: interpolation within the convex hull of the training tasks, extrapolation to their linear span, and extrapolation outside the span. We propose a method grounded in the theory of function encoders to achieve all three types of transfer. Specifically, we introduce a novel training scheme for function encoders based on least-squares optimization, prove a universal approximation theorem for function encoders, and compare comprehensively against existing approaches such as transformers and meta-learning. Across four diverse benchmarks, the function encoder outperforms state-of-the-art methods on all three types of transfer.

A Geometric Characterization of Inductive Transfer

An illustration of the three types of transfer. Type 1 transfer is within the convex hull of the training functions. Type 2 transfer is extrapolation to the linear span. Type 3 transfer is extrapolation to the rest of the Hilbert space.

We consider an inductive transfer setting and present a geometric characterization of inductive transfer using principles from functional analysis. Inductive transfer involves transferring knowledge to new, unseen tasks while the data distribution stays the same; for instance, labeling images according to a new, previously unseen class for which only a few examples are provided after training. While prior works have studied transfer learning, gaps remain in identifying when learned models will succeed and when they will fail. We introduce a characterization of inductive transfer, based on Hilbert spaces, that provides intuition about the difficulty of a given transfer learning problem.

Specifically, we characterize transfer using three types:

(Type 1) Interpolation within the convex hull. Tasks that can be represented as a convex combination of observed source tasks.


(Type 2) Extrapolation to the linear span. Tasks that are in the linear span of source tasks, which may lie far from observed data but share meaningful features.


(Type 3) Extrapolation to the Hilbert space. Tasks that lie outside the linear span of the source tasks in an infinite-dimensional function space. Type 3 transfer is the most important and most challenging form of transfer.
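The three types above can be made concrete in a finite-dimensional Hilbert space, where each training task is a vector. The following is a minimal sketch (not the paper's code): it classifies a target relative to the training tasks by checking span membership with a least-squares residual and convex-hull membership with a linear-program feasibility test. The function names and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def transfer_type(train, target, tol=1e-8):
    """Classify a target vector relative to training vectors (rows of train).

    Returns 1 (convex hull), 2 (linear span only), or 3 (outside the span).
    Illustrative sketch in R^n, standing in for a general Hilbert space.
    """
    A = train.T  # columns are training tasks
    # Type 2 test: is the target (numerically) in the span?
    coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
    if np.linalg.norm(A @ coeffs - target) > tol:
        return 3
    # Type 1 test: do convex weights exist? (LP feasibility, zero objective)
    k = train.shape[0]
    res = linprog(c=np.zeros(k),
                  A_eq=np.vstack([A, np.ones((1, k))]),  # A w = target, sum(w) = 1
                  b_eq=np.append(target, 1.0),
                  bounds=[(0, None)] * k,                # w >= 0
                  method="highs")
    return 1 if res.success else 2

basis = np.array([[1.0, 0.0], [0.0, 1.0]])
print(transfer_type(basis, np.array([0.3, 0.7])))   # convex combination: Type 1
print(transfer_type(basis, np.array([2.0, -1.0])))  # in span, outside hull: Type 2
```

In the paper's infinite-dimensional setting the same tests apply with inner products replacing the finite-dimensional arithmetic.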

The Function Encoder


Basis functions are a natural solution to inductive transfer learning in a Hilbert space. We build on the function encoder algorithm, a method for learning neural network basis functions that span an arbitrary function space. While analytical bases such as Fourier series scale poorly with the dimensionality of the input and output spaces, function encoders scale gracefully because the basis functions are learned neural networks.

We make several improvements to the function encoder algorithm. First, we generalize all definitions to use inner products only, allowing the function encoder to work on any Hilbert space. This generalization allows us to tackle new problems, such as few-shot classification. Second, we introduce a novel training scheme for function encoders using least-squares optimization. This training scheme greatly improves convergence rate and accuracy. Lastly, we prove a universal function space approximation theorem for function encoders, showing that they can approximate any function in a separable Hilbert space to any desired accuracy.
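To illustrate the least-squares step, a sketch of how a new function's coefficients can be computed against a set of basis functions: form the Gram matrix of pairwise inner products and solve the resulting linear system. This is a toy illustration, not the paper's implementation; fixed polynomial functions stand in for the learned neural network bases, and the empirical inner product is an assumption.

```python
import numpy as np

# Stand-ins for learned neural network basis functions (illustrative).
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

def inner(u, v):
    """Empirical L2 inner product estimated from evaluations on samples."""
    return np.mean(u * v)

def coefficients(xs, ys):
    """Least-squares representation of f: solve the Gram system G c = b,
    where G[j, l] = <g_j, g_l> and b[j] = <g_j, f>."""
    G = np.array([[inner(g(xs), h(xs)) for h in basis] for g in basis])
    b = np.array([inner(g(xs), ys) for g in basis])
    return np.linalg.solve(G, b)

xs = np.linspace(-1, 1, 200)
ys = 2.0 - 3.0 * xs + 0.5 * xs**2          # a "new" function, given by examples
c = coefficients(xs, ys)                    # recovers [2.0, -3.0, 0.5]
approx = sum(ci * g(xs) for ci, g in zip(c, basis))
```

Because only inner products appear, the same computation carries over to any Hilbert space, which is what enables applications such as few-shot classification.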

Experimental Results

BibTeX

@article{ingebrand_2025_fe_transfer,
  author       = {Tyler Ingebrand and
                  Adam J. Thorpe and
                  Ufuk Topcu},
  title        = {Function Encoders: A Principled Approach to Transfer Learning in Hilbert Spaces},
  year         = {2025}
}