Treating Functions As Vectors In Hilbert Space

Perhaps the most beautiful aspect of mathematics is that it applies to literally everything, even things that do not exist in this Universe. In addition to this, there are a number of alternative ways to represent reality, with Fourier space and its related transforms being one of the most well-known examples. An alternative to Euclidean vector space is Hilbert space: a real or complex inner product space that is complete in the metric induced by its inner product, used in, among other things, functional analysis and quantum mechanics. In relation to this, [Eli Bendersky] came up with the idea of treating programming language functions as vectors of a sort, so that linear algebra methods can be applied to them.

Of course, to get really nitpicky, by the time you take a function with its arguments and produce an output, it is no longer a vector, but a scalar of some description. Using real numbers as indices also somewhat defeats the whole point and claim of working in a vector space, never mind Hilbert space.

As with anything that touches upon mathematics, there are sure to be many highly divisive views, so we’ll leave it at this and allow our esteemed readers to flex their intellectual muscles on this topic. Do you think that the claims made hold water? Does applying linear algebra to everyday functions make sense in this manner, perhaps even hold some kind of benefit?

29 thoughts on “Treating Functions As Vectors In Hilbert Space”

  1. It seems like the linked article only talks about typical “f(x) = y” kinds of functions, not “programming language functions” (lambdas, I interpret this to mean) like the link’s text says!

    The former is a (granted, super cool!!) well known thing, but I can’t think of a useful way to make the latter thing work (lambdas as a vector space) 😔

    (also im like not a mathematician either and also have been known to not be able to read so i could be missing something significant w this!)

    1. Lambda as Vector = Functional Space Perspective
      Treat each λ as a point in a Hilbert/function space:
      • Addition = (f+g)(x) = f(x)+g(x)
      • Scalar mult = (α·f)(x) = α·f(x)
      • Inner product = ⟨f,g⟩ = ∫ f(x)·g(x) dx
      Recursive operators, meta-frames, or pivots are just linear transformations / projections in this space. Residuals / anomalies = orthogonal components.
      This isn’t metaphor — it’s a direct path from functional programming → linear algebra → exploitable insight.
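
      As an illustrative sketch (the example functions, interval, and quadrature step are arbitrary choices, not from the comment above), those three operations look like this in Python:

      import math

      def add(f, g):          # (f+g)(x) = f(x) + g(x)
          return lambda x: f(x) + g(x)

      def scale(a, f):        # (a·f)(x) = a·f(x)
          return lambda x: a * f(x)

      def inner(f, g, a=0.0, b=1.0, n=10_000):
          """Approximate <f,g> = integral of f(x)*g(x) over [a, b] by a midpoint Riemann sum."""
          h = (b - a) / n
          return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

      f, g = math.sin, math.cos
      v = add(scale(2.0, f), g)        # the "vector" 2·sin + cos
      print(inner(f, g))               # ~0.354 on [0, 1]
      print(inner(v, v) >= 0)          # <v,v> is never negative, as an inner product requires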

  2. I’ve read Eli’s article, and did not find any suggestion of “treating programming language functions as vectors”.

    But, contributing to this line of reasoning:

    · Those “programming language functions” must behave like “mathematical functions” (the output should be the same for the same arguments, in every evaluation. C-like functions don’t fulfil this rule: they can depend on a global var not passed as an argument, or there can be a dependence on an internal mutable static var. They can also change data which is not their output, i.e. they have side effects);

    · If we use the standard definitions for the sum and product-by-scalar of functions as vectors (for f and g functions with the same domain and b a scalar: f + g defined as (f+g)(x) = f(x) + g(x), and b·f defined as (b·f)(x) = b·(f(x)) for all x in the domain), then their output must be a vector in some vector space. It’s important to say that in traditional computing it should not be a vector space over the real or complex numbers, as we have only finite representations of numbers. That still isn’t an end to the idea, as we have vector spaces over F_2 (scalars are single binary digits, vectors are binary numbers of a fixed bit length, with each digit as a coordinate; the sum is bitwise xor, product-by-scalar is “and”-ing each coordinate with the scalar, and you even have a natural product of vectors: bitwise and), and over every F_p for p a prime number.
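
    A quick sketch of that F_2 structure with 8-bit vectors (the width and the sample values are arbitrary choices):

    WIDTH = 8                     # vectors are 8-bit binary numbers, one coordinate per bit
    MASK = (1 << WIDTH) - 1

    def vadd(u, v):               # vector sum: bitwise xor
        return (u ^ v) & MASK

    def smul(b, v):               # scalar b in {0, 1} times vector: "and" every coordinate with b
        return v & (MASK if b else 0)

    u, v = 0b10110010, 0b01100110
    assert vadd(u, u) == 0                      # every vector is its own additive inverse over F_2
    assert vadd(u, v) == vadd(v, u)             # addition commutes
    assert smul(1, u) == u and smul(0, u) == 0  # scalar multiplication behaves as expected
    assert (u & v) == (v & u)                   # the natural bitwise-and "product of vectors"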

    Got tired of writing babble, bye.

    1. Uhhh, global vars in C could be considered implicit inputs to a “C type function”. The only sticking point off the top of my head is, say, static local vars of a C function that change its internal state. If I thought about it for 30 seconds I could come up w a mapping for that as well. Glad you didn’t say multiple input params to a C function was an issue (or did you).

    2. Yeah, it needs some basic definitions. Like, what would linearity be: would f(a) + f(b) = f(a+b) be the requirement? You need to be able to do proofs, I would think. Loops and recursion can perform the same function. Would they be the same vectors?
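
      One way to make the loop/recursion question concrete (an illustrative sketch, not a definition): if two implementations agree on every input, they denote the same mathematical function, so they would be the same “vector”, even though the algorithms differ.

      def triangular_loop(n):
          """Sum of 0..n, computed with a loop."""
          total = 0
          for k in range(n + 1):
              total += k
          return total

      def triangular_rec(n):
          """Sum of 0..n, computed with recursion."""
          return 0 if n == 0 else n + triangular_rec(n - 1)

      # Extensionally the two agree on every input, so as mathematical functions
      # (points in a function space) they are one and the same element.
      assert all(triangular_loop(n) == triangular_rec(n) for n in range(200))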

      I knew some people in the old days who worked on this kind of thing and got involved with Smalltalk and Yerk and Neon, trying to apply the logical ideas from Loglan to computer language structures. I went to some of their meetings, and the discussions all pretty quickly turned into noise as far as my comprehension went.

  3. The article seems fine as a high-level first introduction to analysis (the mathematical theory of functions, real or complex), but I don’t see how it applies to programming language functions, which aren’t mentioned. In general, most functions aren’t square integrable, a concept explained on the page, so they won’t be part of a Hilbert space.
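
    For a concrete (illustrative) example of that square-integrability point: f(x) = 1/x is not square integrable on (0, 1], so it is not an element of the Hilbert space L²(0, 1); a crude numerical check shows its norm blowing up as the lower limit approaches zero.

    def l2_norm_sq(f, a, b, n=200_000):
        """Midpoint-rule estimate of the integral of f(x)**2 over [a, b]."""
        h = (b - a) / n
        return sum(f(a + (i + 0.5) * h) ** 2 for i in range(n)) * h

    f = lambda x: 1.0 / x
    for eps in (1e-1, 1e-2, 1e-3, 1e-4):
        # the exact value of the integral from eps to 1 is 1/eps - 1, which diverges as eps -> 0
        print(eps, l2_norm_sq(f, eps, 1.0))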

  4. “..with fourier space and its smurfed smurfs being one of the most well-known examples”
    I love the diversity of HAD.

  5. Hmm. I say let Eli play and he could figure out something really fun.

    Could I poke holes in this? I think so. I’m more curious what they are trying to do with this view of the world.

    Many times in my life I’ve uncovered very interesting mathematical worlds by just trying something crazy.

    I’d be very interested in a discrete form of Hilbert space. Or a mapping of arguments to information theory. Or why the cardinality of function arguments matters. I don’t think they should stop, but I do want to see more reasons for me to suspend disbelief. Hopefully there’s a follow-up.

    1. I’ve been thinking this through recently, trying to come up with a computable theory of particle interactions.

      I’m of the opinion that the first postulate of quantum mechanics is incorrect, because a “complete space” (one with infinitely small distances, so that differential geometry is valid) is inherently non-computable. The Hilbert space of QM is infinite dimensional and complete, meaning that it would take an infinite number of steps to even move a number from one memory location to another.

      QM is inherently non-computable because it deals with probabilities and randomness (true randomness is non-computable), but you can get around this by isolating the problem to a single function Random() that returns a single random bit and then ask if everything else is computable.

      (Random() would then not be a proper function since it returns different values for the same input.)

      One problem with functions in programming languages is that they are inherently single-valued. In most languages you can pass a list of items to a function, but it only passes back a single item. You can get around this by using an object that contains more than one value, but then you have to define that object and unpack it. You can’t write a function to return both the quotient and remainder in one function call without extra work.

      (Perl does this correctly. You can pass a list of items to a function, and have the function pass a list of items back. You don’t need a bespoke structure to pack/unpack the data.)

      Because of this limitation (functions are single-valued), I deal with processes and not functions. Processes operate on data, have no return value, and can adjust/change multiple data locations as needed.
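
      To make the contrast concrete, a small illustrative sketch (not the commenter’s code) of the two styles, using quotient/remainder as the example:

      def divide_fn(a, b):
          """Function style: both results come back in one call, as a single tuple value."""
          return a // b, a % b

      def divide_proc(state):
          """Process style: no return value; the shared data is adjusted in place."""
          state["quotient"] = state["a"] // state["b"]
          state["remainder"] = state["a"] % state["b"]

      q, r = divide_fn(17, 5)        # unpacks to 3, 2
      state = {"a": 17, "b": 5}
      divide_proc(state)             # nothing returned; state now holds both results
      print(q, r, state)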

      And apropos of discrete space, Sabine Hossenfelder published a paper proving that any discrete (or quantized) model of space would have detectable geometric anomalies, of which we see none.

      1. I enjoyed your musings. A lot.

        I would ask myself the following questions if I were interested in the same topic.

        Must quantum mechanics be computable in this sense?

        What if our fundamentals of “random” are incomplete? Vis a vis, computational noise when external factors (cosmic rays, thermals, etc) intrude. Could it then be a function(eg see chaos and complex variables). See the Monty Hall problem for notable blunders by brilliant people of basics.

        And other ideas

        1. I’m afraid I’m not following your logic here.

          I’m making the basic assumption that the universe is computable, then with that assumption trying to figure out what the algorithm is.

          If I’m making a basic blunder, perhaps you could say what that blunder is, rather than alluding to some random famous problem that won’t shed light on the issue.

          In addition to being cryptic, your post isn’t even grammatically correct. Also, “computational noise when external factors intrude”? What does that even mean?

  6. Every time I encounter a mathematical concept I don’t know, I look it up. From Google AI:

    “The phrase “fourier space smurfs” is a recent and obscure internet in-joke that emerged from a comment on a tech blog. It is a nonsensical phrase used to humorously contrast the complexity of advanced mathematical or scientific concepts (Fourier space) with a simple pop culture reference (Smurfs).”

    “The specific origin is a December 2025 comment on a Hackaday post about treating functions as vectors in Hilbert space, where a user sarcastically mentioned “…with fourier space and its smurfed smurfs being one of the most well-known examples”. The phrase has no meaning in actual scientific or mathematical contexts.”

  7. A function in programming can be a mathematical function if it takes an input and returns a uniquely assigned output. This is exactly what a mathematical function describes: a unique assignment in which each element of one set is assigned exactly one element of another set. Writing mathematical functions in vector notation in a Hilbert space is nothing new; it is used in quantum computing, for example. In classical programming, one application would be a tilt-compensated compass (though limited to Euclidean space) or a quantum computer simulator (Hilbert space).
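
    For the quantum-computing case, a minimal one-qubit sketch (illustrative, not a full simulator): a state is a unit vector in the complex Hilbert space C², a gate is a linear map on it, and measurement probabilities come from inner products.

    import math

    ket0 = [1 + 0j, 0 + 0j]                         # the basis state |0>
    H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],     # Hadamard gate as a 2x2 matrix
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]

    def apply(gate, state):
        """Matrix-vector product: a gate is just a linear transformation of the state."""
        return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

    def inner(u, v):
        """Hilbert-space inner product <u|v>, with complex conjugation on the left."""
        return sum(a.conjugate() * b for a, b in zip(u, v))

    psi = apply(H, ket0)                            # (|0> + |1>) / sqrt(2)
    print(abs(inner(ket0, psi)) ** 2)               # probability of measuring 0: ~0.5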

  8. First comment, on the coverage. The concept of Hilbert spaces is not an “alternative” to that of Euclidean ones, but rather a generalization. Every Euclidean space with its standard inner product is also a real Hilbert space.

    by the time you take a function with its arguments and produce an output, it is no longer a vector, but a scalar of some description

    This is wrong in several different ways. The most important way, however, is that the range of a function can be any space, not just ones whose elements can be called scalar. In Church’s lambda calculus, the mother abstraction for Lisp and any number of functional programming languages, functions can return functions quite ordinarily. In such environments there’s an embedding of the elements of any space (i.e. values) into the space of functions by mapping an element to a constant function that returns that element.
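
    An illustrative sketch of that embedding (the names are arbitrary): a value v is mapped to the constant function that ignores its argument and returns v.

    def embed(v):
        """Embed a value into the space of functions as a constant function."""
        return lambda _x: v

    five = embed(5)
    print(five("anything"), five(3.14))   # 5 5 -- the value is recovered whatever we apply it to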

    real numbers as indices also somewhat defeats the whole point and claim of working in a vector space

    The real numbers are a vector space. Every field is a vector space over itself. The real numbers are a real vector space, and also a real Hilbert space (just as one-dimensional Euclidean space is).

    Second comment, on the comments so far. “Programming language functions” are perfectly good mathematical functions as long as you model the side effects they might have. This idea is well developed and goes by the name “monad”, which comes from category theory and was then imported into functional programming languages.
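
    A tiny illustrative sketch of that idea (a Writer-style monad, with logging standing in for the side effect; the names are made up):

    def unit(x):
        """Wrap a plain value together with an empty log."""
        return (x, [])

    def bind(m, f):
        """Feed the value inside m to f, concatenating the logs along the way."""
        value, log = m
        new_value, new_log = f(value)
        return (new_value, log + new_log)

    def double(x):
        return (2 * x, [f"doubled {x}"])

    def increment(x):
        return (x + 1, [f"incremented {x}"])

    result = bind(bind(unit(3), double), increment)
    print(result)   # (7, ['doubled 3', 'incremented 6']) -- the "side effect" is just data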

    Third comment after I read the article.

    P. S. to HaD: add a preview facility already.

    1. Third comment. WTF?

      the idea of treating programming language functions as vectors of a sort

      Insofar as I can tell, there’s absolutely no mention of “programming language functions” in the referenced article at all. The whole subject is about mathematical functions, typically with real or complex ranges. Floating point numbers are only a small finite subset of rational numbers; they are very far from being the totality of real numbers.

  9. Mathematician here. Embedding programming languages into a Hilbert space could be pretty cool, if it exposed some structure or allowed some kind of natural transformations. However, the article is about the standard mathematical version: real or complex valued functions of a collection of real or complex variables. The standard mathematical version (1) just associates outputs to inputs and doesn’t really pay attention to how you did the computation to get there, while the programming language version is intrinsically about the algorithm.

    Hilbert space just means you can take the dot product between vectors. (But it might have a little different flavor when you are in infinite dimensions.)

    1: That’s the totally abstract version; most of the time, mathematicians care about how you represent your function, e.g. polynomial vs. Taylor series vs. Fourier series vs. …, and that’s at least partly an algorithmic question.
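
    An illustrative sketch of how the representation question meets the dot-product view (the square wave and the quadrature step are arbitrary choices): Fourier coefficients are just inner products of a function with the sine basis.

    import math

    def inner(f, g, n=100_000):
        """Approximate the L2 inner product of f and g over [0, 2*pi]."""
        h = 2 * math.pi / n
        return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

    square = lambda x: 1.0 if x < math.pi else -1.0

    for k in (1, 2, 3):
        b_k = inner(square, lambda x, k=k: math.sin(k * x)) / math.pi
        print(k, b_k)   # ~4/pi, ~0, ~4/(3*pi): the familiar odd harmonics of a square wave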

  10. There are some cool embeddings of programming languages into topological categories or metric categories. Denotational semantics is the most successful field of inquiry in that direction I believe. I’m not sure if any of the spaces are Hilbert spaces though. I keep meaning to look more into the field. It’s a very cool one.

  11. Photo! There are ~10^80 to 10^82 atoms in the universe, we read on the internet.

    There are more 133×133 nonsingular binary matrices than atoms in the universe, Google AI informs us.

    Each atom in the universe can be put into 1-to-1 correspondence with the integers 0, 1, 2, …

    Trust us .. provided we have not made a misteak, of course?

    AI Overview
    Yes, a one-to-one correspondence can be established between all finite-sized nonsingular binary matrices and the integers (or natural numbers). The set of all such matrices is countable.

    Are AI, Data Centers and big tech going to be destroyed?

    Did you mean: who is braga watt?

    AI Overview

    “Bragawatt” is not a person, but a slang term used within the data center and AI infrastructure industries. The term describes the tendency of companies to announce massive, gigawatt-scale data center projects, often to generate hype or intimidate competitors, rather than for realistic, immediate execution. The term was coined by the head of infrastructure at the investment firm KKR, Waldemar Szlezak, who has advised clients to “look beyond the bragawatts”.

    Key aspects of “bragawatts” include:

    Hype vs. Reality: The announcements often involve projects for which the necessary permits, financing, or power interconnections have not been fully secured.

    Market Signaling: Companies may overstate their capacity to secure a place in the power grid’s interconnection queue or to impress investors.

    Industry Context: The term reflects the current race among tech giants (like those in the AI space) to build immense data center capacity, leading to a lot of noise and making it difficult to distinguish between speculative and shovel-ready projects.

  12. Hilbert-space thinking helps when you can phrase your program’s goal as:
    • Approximate a function
    • Minimize an energy / squared error
    • Project onto a subspace
    • Use inner products without explicit coordinates (kernels)

    If your program is mostly branching logic + side effects, Hilbert space won’t buy you much. If it’s estimation, learning, signals, simulation, or control, it’s a superpower.
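
    As an illustrative sketch of the “project onto a subspace” item (the signal and basis are arbitrary choices): the projection is built from plain inner products, and the residual ends up orthogonal to the subspace.

    import math

    N = 64
    xs = [2 * math.pi * i / N for i in range(N)]
    signal = [math.sin(x) + 0.3 * math.sin(3 * x) + 0.1 for x in xs]   # a sampled "measurement"

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    b1 = [math.sin(x) for x in xs]   # an orthogonal (on this grid) basis for the subspace
    b2 = [math.cos(x) for x in xs]

    c1 = dot(signal, b1) / dot(b1, b1)       # expansion coefficients via inner products
    c2 = dot(signal, b2) / dot(b2, b2)
    proj = [c1 * b1[i] + c2 * b2[i] for i in range(N)]

    residual = [s - p for s, p in zip(signal, proj)]
    print(dot(residual, b1), dot(residual, b2))   # both ~0: the residual is orthogonal to the subspace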

  13. Linear mathematics provides the superposition structure; geometry, including toroidal and hyperbolic forms, emerges when we impose normalization, phase modularity, and metrics on that linear space.

  14. Eli Bendersky usually talks about programming, but this time he is talking about mathematics instead. The information looks mostly correct from a quick skim aside from omissions concerning equivalence classes of functions differing on sets of measure zero, but going into measure theory might be too much for a single blog post.

    Anyway, aside from applications to Fourier analysis, it is all irrelevant to programming. It doesn’t really make any sense to talk about boundedness, integrability, linear independence, convergence, and so on for things like gettimeofday() or sort(), and these things aren’t true functions, but instead they are procedures.

  15. Multiplying two 64-bit binary numbers using only 8-bit instructions on low-watt nanoputers, with gcc C as the assembler, might be valuable for speed comparisons against Big Tech high-watt Modern Software computers doing the multiply in hardware.

    The 128-bit product is stored in a square binary matrix.

    unsigned char row0[8];  /* 8 bytes of 8 bits? */
    unsigned char row1[8];
    /* … */
    unsigned char row63[8]; /* correct dimensions? */

    Only add-with-carry, shift-left-with-carry, and single-bit branch instructions may be used. Or the equivalent in the RISC-V ISA and some other ISAs.

    No for(…) loops, because we now operate in giga/terabyte memories.

    The goal is to multiply as fast as possible using minimal instructions.
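
    A sketch (in Python rather than 8-bit assembly, and not the commenter’s code) of the shift-and-add scheme described above: a 64 × 64 → 128-bit multiply using only byte-wide add-with-carry, shift-left-through-carry, and a single-bit test. The Python loops just stand in for what would be straight-line code on the nanoputer.

    def add_with_carry(dst, src):
        """dst += src, byte by byte with an explicit carry flag (little-endian limbs)."""
        carry = 0
        for i in range(len(dst)):
            s = dst[i] + src[i] + carry
            dst[i], carry = s & 0xFF, s >> 8
        return carry

    def shift_left_1(limbs):
        """Shift a little-endian byte array left by one bit, carrying between bytes."""
        carry = 0
        for i in range(len(limbs)):
            s = (limbs[i] << 1) | carry
            limbs[i], carry = s & 0xFF, s >> 8
        return carry

    def mul64(a, b):
        """Multiply two 64-bit integers into a 128-bit product, 8 bits at a time."""
        acc = [0] * 16                                                # 128-bit accumulator
        mcand = [(a >> (8 * i)) & 0xFF for i in range(8)] + [0] * 8   # a, zero-extended to 128 bits
        for bit in range(64):
            if (b >> bit) & 1:                                        # single-bit test and branch
                add_with_carry(acc, mcand)
            shift_left_1(mcand)
        return sum(byte << (8 * i) for i, byte in enumerate(acc))

    assert mul64(0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF) == 0xFFFFFFFFFFFFFFFF ** 2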
