CAUTION: This page contains examples with sensitive material that some audiences may find distressing.
Concept Frames can be used to guide model output toward a desired idea while keeping the sentence intelligible.
The Frame Representation Hypothesis is a robust framework for understanding and controlling LLMs.
Interpretability is a key challenge in fostering trust in Large Language Models (LLMs), stemming from the complexity of extracting reasoning from a model's parameters. Our framework is grounded in the Linear Representation Hypothesis (LRH) to interpret and control LLMs by modeling multi-token words.
To this end, we propose that words can be interpreted as frames: ordered sequences of vectors that better capture token-word relationships. Concepts can then be represented as the average of the word frames that share a common concept. We showcase these tools through Guided Decoding, which can intuitively steer text generation using concepts of choice.
We use the Open Multilingual WordNet to generate concepts that can both guide the model text generation and expose biases or vulnerabilities.
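The pipeline above can be sketched in a few lines: build a frame for each word from its token vectors, average frames of synonymous words into a concept frame, and at each decoding step prefer the candidate word whose frame best matches the concept. This is a minimal illustrative sketch, not the paper's implementation; the functions, the mean-cosine similarity score, and the assumption that frames are padded to a common length are all simplifications introduced here.

```python
import numpy as np

def word_frame(token_vectors):
    """Represent a word as a frame: an ordered sequence of unit-normalized
    token vectors (a simplified stand-in for the paper's construction)."""
    return np.stack([v / np.linalg.norm(v) for v in token_vectors])

def concept_frame(word_frames):
    """Represent a concept as the average of word frames sharing it.
    Assumes all frames have the same number of token positions."""
    return np.mean(np.stack(word_frames), axis=0)

def concept_similarity(frame, concept):
    """Score a word frame against a concept frame via the mean cosine
    similarity across corresponding frame positions (an assumed metric)."""
    num = np.sum(frame * concept, axis=1)
    den = np.linalg.norm(frame, axis=1) * np.linalg.norm(concept, axis=1)
    return float(np.mean(num / den))

def guided_choice(candidate_frames, concept):
    """Toy guided-decoding step: pick the candidate word whose frame
    is most similar to the chosen concept frame."""
    scores = [concept_similarity(f, concept) for f in candidate_frames]
    return int(np.argmax(scores))
```

In practice the token vectors would come from the LLM's embedding (or unembedding) matrix, and the concept's word list from Open Multilingual WordNet synsets; both are stubbed out here.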
We validate these ideas on the Llama 3.1, Gemma 2, and Phi 3 model families, demonstrating gender and language biases, exposing harmful content, and also showing the potential to remedy them, leading to safer and more transparent LLMs.
@misc{valois2024framerepresentationhypothesismultitoken,
  title={Frame Representation Hypothesis: Multi-Token LLM Interpretability and Concept-Guided Text Generation},
  author={Pedro H. V. Valois and Lincon S. Souza and Erica K. Shimomoto and Kazuhiro Fukui},
  year={2024},
  eprint={2412.07334},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.07334},
}