Learning as Protein Folding
There is a certain view, which I have found useful, that treats learning as analogous to protein folding.
In this view, your understanding is analogous to a protein. It is composed of various parts, ordered and oriented in relation to each other, and this total structure determines the function of your being.
The data of your experience are viewed as molecules, which can interact with the much larger and more intricate structure of your understanding.
If parts of the data’s structure closely match parts of your understanding’s structure, the two can interact. You might have receptors in your understanding that can receive and interface with certain types of data. Filling these receptors might cause the entire structure of your understanding to shift around the novel data.
Note that data that matches the structure of your understanding too closely likely won’t cause substantial change in your understanding. It matches so closely that its presence doesn’t substantially impact the rest of the molecule. Concretely, this might look like hearing a fact that you already know, or watching an event that you expected come to pass.
Similarly, data that is too novel might not be able to interface with the structure of your understanding. If you have no receptors which can receive this novel data, it is not likely to impact your understanding. Concretely, this might look like white noise, or nonsense words, or language that you cannot make heads or tails of. [Though, note that these examples are still somewhat comprehensible: you have concepts for white noise, nonsense words, and languages that you don’t understand. Things that truly do not interface with any of your receptors do not have any concepts associated with them, and escape your attention entirely!]
Most learning is caused by data that nearly, but not entirely, matches the structure of your understanding. Parts of this data will be able to interface with your understanding; the parts that don’t match will create tension, increasing the energy of the overall system. This increase in energy is what can power refactoring, refolding, and restructuring of your understanding.
We might also speak of this in terms of resonance: you are unlikely to learn from data that is totally resonant or totally dissonant with your understanding. However, data that contains both resonance and dissonance can power learning.
Though we talk of misunderstanding data, perhaps it would be better to speak of misapprehending it: there might be multiple parts of the data that can be interfaced with, multiple parts that resonate. Depending on which of these facets one apprehends, the way one’s understanding evolves might vary.
If someone doesn’t understand something, we might view them as lacking receptors for the data that we are trying to present to them. In response, we may add “handles” to our data, handles for which they do have receptors.
Note that not all learning is good, and that the previously described behaviors can be used for ill. For example, if someone says something that isn’t true about you, it isn’t likely to change your understanding of yourself. However, if they package this thing that isn’t true with things that are true, or do have resonance, then it is much more likely that they will be able to change your understanding of yourself. They might say: “You are flaky, and afraid of commitment, and no one will ever love you because of that”. The resonance of the first two statements might be enough to get the accusation to stick, and the dissonance of the last statement may force your understanding of yourself to change for the worse.
How might we say what is good and what is bad learning, then? This analogy makes the answer fairly straightforward: foldings which are high in energy and tension are worse and less pleasant, and foldings which are lower in energy and contain less tension are better and more pleasant.
We could also state this in the language of self-conflict and self-alignment: data that bring us into greater long-term self-alignment are better, and data that bring us into greater long-term self-conflict are worse.
There is some subtlety here: really, goodness is determined by self-and-world (bodymindenvironment) alignment, and badness by self-and-world conflict. [Though I will continue to use the phrase self-alignment to refer to this, as it is less wordy.]
Reaching globally optimal self-alignment may involve substantial periods of greater self-conflict. In chemistry terms, the activation energy of many reactions that are good in the long term might be quite high.
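To make the energy-landscape part of this analogy concrete, here is a toy sketch of my own (purely illustrative, nothing rigorous): a one-dimensional “folding landscape” with a shallow local minimum, a deeper global minimum, and a barrier between them, explored with random local moves in the style of simulated annealing. The energy function, step size, and temperature schedule are all made up for the illustration.

```python
import math
import random

# Toy 1D "energy landscape" for the folding analogy.
# There is a shallow local minimum near x ~ 0.1 (the current, comfortable fold),
# a deeper global minimum near x ~ 3.1 (a better fold), and a barrier near x ~ 1.3
# (the "activation energy" of the refold).
def energy(x):
    return x**2 * (x - 3)**2 / 4 - 0.5 * x

def anneal(x=0.0, temperature=2.0, cooling=0.999, steps=20_000):
    """Random local moves; uphill moves are sometimes accepted while the temperature is high."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.2, 0.2)  # a small local operation
        delta = energy(candidate) - energy(x)
        # Always accept downhill moves; accept uphill moves with Boltzmann probability,
        # i.e. temporary increases in tension are sometimes needed to reach a better fold.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        temperature *= cooling  # the system gradually "cools" and settles
    return x

if __name__ == "__main__":
    final_x = anneal()
    print(f"settled near x = {final_x:.2f}, energy = {energy(final_x):.2f}")
```

The only point here is that the walker has to accept some uphill moves (temporary increases in tension) to cross the barrier and settle into the deeper fold; since the moves are random, not every run will make it over.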
We might then think of memes (in the Richard Dawkins/memetics sense of the word) as analogous to prions: foldings in understanding that are likely to cause the same (or similar) foldings in other people’s understandings.
What are good and bad memes, then? The answer is the same as above: those which cause greater long-term self-alignment and self-conflict, respectively.
What does this view give you?
I have found that this view neatly describes a number of useful phenomena:
- Learning requires at least some dissonance, which might be experienced as either frustration or confusion. Too much, and you reject the data or are unable to grasp it. Too little, and no learning occurs.
- Delivering feedback with some form of compliment works because it adds resonant features to the data, helping people deal with the dissonance of the feedback itself. For this to work, the compliments have to be genuinely resonant. Note that compliments aren’t strictly necessary if the feedback itself is resonant. This might be the case if someone already had an intuition that the thing being critiqued could have been better.
- If someone is unable to understand something, or if the concept just “bounces off” of them, it can be helpful to “add handles” to the data. “Handles” are features of the data that have resonance with, or interact with receptors in, the learner’s understanding. This can look like giving concrete examples or analogizing the concept to something that they already understand.
- Dissociation is a type of learning that involves “decoherence” of the structure of understanding: partitioning it, cleaving it, or otherwise making it disjoint so that there exist two or more sections of understanding that rarely interact with each other. The issue with this is that shifts in understanding, driven by later learning, can cause these separate sections to collide with each other again.
- No change in understanding is fundamentally different from any other, though magnitudes can differ. All learning is a shift in the structure of understanding; some shifts are larger and some smaller, but all exist on the same fundamental spectrum.
- Shifting understanding involves local operations (local folds in the protein), but is used to solve a global optimization problem. Extremely complex chains of local operations might be required to effect the desired global changes in understanding.
- Misunderstandings/mis-foldings can compound over time and have substantial global effects.
- Misunderstandings/mis-foldings can be occluded by other parts of the structure of understanding. Things can lie “in the middle” of the protein, and not be directly accessible until parts of it have shifted out of the way.
- This seems to be one of the fundamental problems of meditation: people often don’t have the receptors for the type of data that they are looking for. Meditation techniques often involve making handles for the data, or providing data that will eventually cause folds that create receptors for a specific type of data. Noting is a way of adding handles to noticing; shamatha with support is a way of adding handles to shamatha without support; etc. These aren’t necessarily the best ways to encourage meditative progress, but they follow a common historical pattern: build a receptor for the data, use the receptor to refactor understanding, and deconstruct the receptor when it is no longer useful.