Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Holographic Reduced Representation

General · Introduced 2003 · 4 papers

Description

Holographic Reduced Representations are a simple mechanism to represent an associative array of key-value pairs in a fixed-size vector. Each individual key-value pair is the same size as the entire associative array; the array is represented by the sum of the pairs. Concretely, consider a complex vector key $r = \left(a_r[1]e^{i\varphi_r[1]}, a_r[2]e^{i\varphi_r[2]}, \dots\right)$, which is the same size as the complex vector value $x$. The pair is "bound" together by element-wise complex multiplication, which multiplies the moduli and adds the phases of the elements:

$$y = r \otimes x$$

$$y = \left(a_r[1]a_x[1]e^{i(\varphi_r[1]+\varphi_x[1])}, a_r[2]a_x[2]e^{i(\varphi_r[2]+\varphi_x[2])}, \dots\right)$$
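The binding step above can be sketched in a few lines of NumPy (the variable names `r`, `x`, `y` mirror the equations; unit-modulus random keys are an illustrative choice, not mandated by the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Unit-modulus complex vectors with i.i.d. uniform random phases,
# an illustrative choice of random key and value.
r = np.exp(1j * rng.uniform(-np.pi, np.pi, n))  # key
x = np.exp(1j * rng.uniform(-np.pi, np.pi, n))  # value

# Binding: element-wise complex multiplication.
y = r * x

# Moduli multiply and phases add, exactly as in the equations above.
assert np.allclose(np.abs(y), np.abs(r) * np.abs(x))
assert np.allclose(y, np.abs(r) * np.abs(x) * np.exp(1j * (np.angle(r) + np.angle(x))))
```

Because the keys here have unit modulus, the bound vector `y` also has unit modulus in every element.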

Given keys $r_1$, $r_2$, $r_3$ and input vectors $x_1$, $x_2$, $x_3$, the associative array is:

$$c = r_1 \otimes x_1 + r_2 \otimes x_2 + r_3 \otimes x_3$$

where we call $c$ a memory trace. Define the key inverse:

$$r^{-1} = \left(a_r[1]^{-1}e^{-i\varphi_r[1]}, a_r[2]^{-1}e^{-i\varphi_r[2]}, \dots\right)$$
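A minimal sketch of the key inverse, using a key with non-unit moduli so the reciprocal is non-trivial (the names `moduli`, `phases`, `r_inv` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# A key whose elements have non-unit moduli.
moduli = rng.uniform(0.5, 2.0, n)
phases = rng.uniform(-np.pi, np.pi, n)
r = moduli * np.exp(1j * phases)

# Key inverse: reciprocal moduli and negated phases, i.e. 1/r element-wise.
r_inv = (1.0 / moduli) * np.exp(-1j * phases)

assert np.allclose(r_inv, 1.0 / r)  # the same vector, written directly
assert np.allclose(r * r_inv, 1.0)  # binding a key with its inverse is the identity
```

For the unit-modulus keys used elsewhere on this page, the inverse reduces to the complex conjugate, $r^{-1} = \bar{r}$.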

To retrieve the item associated with key $r_k$, we multiply the memory trace element-wise by the vector $r_k^{-1}$. For example:

$$r_2^{-1} \otimes c = r_2^{-1} \otimes \left(r_1 \otimes x_1 + r_2 \otimes x_2 + r_3 \otimes x_3\right)$$

$$r_2^{-1} \otimes c = x_2 + r_2^{-1} \otimes \left(r_1 \otimes x_1 + r_3 \otimes x_3\right)$$

$$r_2^{-1} \otimes c = x_2 + \mathrm{noise}$$

The product is exactly $x_2$ together with a noise term. If the phases of the elements of the key vector are randomly distributed, the noise term has zero mean.
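The full store-and-retrieve cycle can be demonstrated end to end. This is a sketch under the assumptions above (unit-modulus random keys, so unbinding is multiplication by the conjugate); the dimension and the tolerance thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4096  # larger dimension -> smaller retrieval noise

def random_phasor(n, rng):
    # Unit-modulus vector with i.i.d. uniform phases; its inverse is its conjugate.
    return np.exp(1j * rng.uniform(-np.pi, np.pi, n))

keys = [random_phasor(n, rng) for _ in range(3)]
vals = [random_phasor(n, rng) for _ in range(3)]

# Memory trace: sum of bound key-value pairs.
c = sum(k * v for k, v in zip(keys, vals))

# Retrieve item 2 by unbinding with the inverse (conjugate) of its key.
retrieved = np.conj(keys[1]) * c
noise = retrieved - vals[1]

# The noise has near-zero mean, and the retrieval correlates strongly with x_2.
corr = np.abs(np.vdot(vals[1], retrieved)) / n
assert abs(noise.mean()) < 0.2
assert corr > 0.8
```

The residual `noise` is a sum of random phasors, so its per-element magnitude does not vanish; it is only its average over the vector that concentrates near zero, which is why high-dimensional traces retrieve cleanly.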

Source: Associative LSTMs

Papers Using This Method

Improved Cleanup and Decoding of Fractional Power Encodings (2024-11-30)
Audio Fingerprinting with Holographic Reduced Representations (2024-06-19)
Residual and Attentional Architectures for Vector-Symbols (2022-07-18)
Associative Long Short-Term Memory (2016-02-09)