>>17281
I'm going to do a thought dump as I parse what you wrote.
I think you're right when you say we lack the ability to process [certain] things unless we can attach a feeling to them. We treat feelings as if they sit right on the border of logic and not-logic. I say that because whenever a person feels something, it seems reasonable to ask why (a logic question) they feel it, yet we simultaneously reject feelings as justification for anything abstract and objective. I guess the question then comes down to this: which things do emotions let us process?
I think emotions are
always involved when we expect people to justify their actions or behavior, and that pure logic is
never sufficient to justify these things. Even when people try to justify their actions through something physical or mathematical, like utilitarianism or evolution, there's always the implicit assumption that they could break from the utilitarian or evolutionary "script" if they wanted to, and the thing attaching them to the script is some desire. In the pathological cases, like with depression or anxiety, people's feelings drive their behavior to such an extent that they override any script the person believes is reasonable to follow.
>We then "feel on edge" so to speak, as one human defined manner of expressing that rushing feeling of chemicals waking your body up to high alert. But the lack of an appropriate cause for such a response and eventual release of said energy deadens the receptacles for the chemicals, and the body is put into a state of suspense, as it can't decide whether to continue the highly inefficient but potentially life saving energy output or to put this as a false positive and return to normal. Would this be a mix of both then?
That's a very interesting thought. There's one potential outcome to which you attach a high-arousal feeling (as in the "valence-arousal" dimensions of emotions), and another potential outcome to which you attach a low-arousal feeling, and it's through the mix of the two that you get suspense. If that's true, it suggests that the brain can look for (potentially complex) patterns in feelings, similar to how it looks for complex patterns in visual input. In the case of vision, it finds patterns in color-based activations with spatial structure. In the case of emotions, it could be feeling-based activations (e.g., valence-arousal instead of red-green-blue) with prediction structure (e.g., a Monte Carlo tree instead of a 2D grid). If so, the brain would be able to attach words like "scared" and "powerless" to these feeling patterns just like it can attach names to physical objects.
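To make the analogy concrete, here's a minimal sketch of that idea in Python. It assumes a feeling pattern can be summarized as (valence, arousal) activations at the leaves of a small prediction tree, and that naming a feeling is just matching against stored templates. Every number, label, and template below is invented for illustration, not a claim about how the brain actually encodes any of this.

import math

class Outcome:
    def __init__(self, name, valence, arousal, children=None):
        self.name = name
        self.valence = valence      # -1 (bad) .. +1 (good)
        self.arousal = arousal      #  0 (calm) .. 1 (keyed up)
        self.children = children or []

def pattern(node):
    # Collect the (valence, arousal) activations at the leaves of the
    # prediction tree, the analogue of reading off pixel activations.
    if not node.children:
        return [(node.valence, node.arousal)]
    leaves = []
    for child in node.children:
        leaves.extend(pattern(child))
    return leaves

# Hypothetical "named" feeling patterns, the analogue of attaching a
# word to a recognizable visual object.
TEMPLATES = {
    "suspense": [(-0.6, 0.9), (0.4, 0.2)],   # high-arousal threat mixed with a calm resolution
    "scared":   [(-0.8, 0.9), (-0.7, 0.8)],  # every branch looks bad and activating
}

def closest_label(activations):
    # Name the pattern by the nearest template (mean pointwise distance).
    def dist(a, b):
        n = min(len(a), len(b))
        return sum(math.dist(a[i], b[i]) for i in range(n)) / n
    return min(TEMPLATES, key=lambda name: dist(activations, TEMPLATES[name]))

situation = Outcome("now", 0.0, 0.7, [
    Outcome("it goes badly", -0.6, 0.9),
    Outcome("it resolves",    0.4, 0.2),
])
print(closest_label(pattern(situation)))  # -> "suspense"

Nothing in closest_label cares whether the activations came from a tree or a grid; the "shape" of the input is whatever the prediction structure happens to be.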
>I think language acts as a shortcut, with emotions being a sort of tree structure that begins with the strongest example at the front and slowly cascades down into weaker or imagined scenarios, with the edges of the tree being amorphous until generated specifically, branching out from a mishmash of the consciousness.
This is the same as how I think of it. The mathematical object describing this is a topology or (equivalently) a lattice. With no information, all possibilities are valid. Every piece of information is associated with a subset of those possibilities, so every time you get more information, you can "descend the lattice" to more specific scenarios. Information theory gives a direct relationship between language and information of this sort. The relationship between natural language and the kinds of language useful for navigating these trees is less clear, but I don't expect that to be a big jump. I think people are already working on this under what they call "triplet extraction".
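Here's a toy version of "descending the lattice" in Python. The scenarios and statements are made up, and I'm assuming a uniform prior so the information-theory connection shows up as a simple log ratio; a real system would get the subsets from something like triplet extraction rather than hand-written sets.

import math

SCENARIOS = {
    "calm and safe",
    "startled but safe",
    "startled and in danger",
    "exhausted and in danger",
}

# Each statement is associated with the subset of scenarios compatible with it.
STATEMENTS = {
    "heart is racing":   {"startled but safe", "startled and in danger"},
    "no visible threat": {"calm and safe", "startled but safe"},
}

def information_bits(before, after):
    # Shannon information gained, assuming a uniform prior over scenarios.
    return math.log2(len(before) / len(after))

state = set(SCENARIOS)                  # top of the lattice: anything is possible
for statement, compatible in STATEMENTS.items():
    narrowed = state & compatible       # descend the lattice: keep only compatible scenarios
    print(f"{statement!r}: {information_bits(state, narrowed):.2f} bits -> {sorted(narrowed)}")
    state = narrowed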
>How different does an emotion or feeling have to be to generate a new tree?
With a topological representation, you don't need to worry about things like when to generate a new tree or branch. Since each unit of language is allowed to carry a (continuously) variable amount of information, all the branches of the tree are able to blur together or separate depending on what the data requires. Depending on what kind of space you use, you can even calculate the "distance" between two arbitrary feelings. Conceptually, all possible emotions would exist in some abstract space, and each labeled emotion would correspond to some "landmark" in that space. The landmark could be either a point (i.e., a complete, maybe infinite, description of a distinct feeling) or a subset (i.e., a finite description that only narrows down the set of possible feelings). That might sound complicated, but programmatically, these things are pretty straightforward to implement since it's just a combination of, e.g., text generation and text-to-whatever models.
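For concreteness, here's a toy version of the landmark idea where the abstract space is just 2D (valence, arousal), point landmarks are exact coordinates, and subset landmarks are boxes. In practice the coordinates would come from something like a text-to-embedding model and the space would have far more dimensions, so treat all the numbers as placeholders.

import math

# Point landmarks: a single exact coordinate per label.
POINTS = {
    "content": (0.6, 0.2),
    "scared":  (-0.8, 0.9),
}

# Subset landmarks: a box of coordinates per label, i.e. a finite
# description that only narrows down the region of possible feelings.
BOXES = {
    "powerless": ((-1.0, -0.3), (0.0, 0.6)),   # (valence range, arousal range)
}

def distance(a, b):
    # "Distance" between two arbitrary feelings in the space.
    return math.dist(a, b)

def labels_for(feeling):
    # Nearest point landmark, plus every subset landmark containing the feeling.
    nearest = min(POINTS, key=lambda name: distance(feeling, POINTS[name]))
    inside = [name for name, ((v_lo, v_hi), (a_lo, a_hi)) in BOXES.items()
              if v_lo <= feeling[0] <= v_hi and a_lo <= feeling[1] <= a_hi]
    return nearest, inside

print(labels_for((-0.7, 0.5)))   # -> ('scared', ['powerless'])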
Thank you for that. Reading and responding to your post helped clarify a lot of things for me. I think I now have a satisfying answer to the question I originally posed in
>>17245. We have feelings about exactly the set of things that predict our actions and behaviors. With that, I think I can follow up on:
>If we can get a good understanding of that, I think we can figure out a lot of the missing pieces for how to model feelings and emotions.
I'll need to think about all of this a bit more to compile my thoughts and ground them in directions that are actually programmable. For now, I can at least say that this has flipped my understanding of emotions on its head. I previously believed, and I think it's common to believe, that feelings were necessary for making good decisions with bounded compute. If I'm right about us attaching feelings to everything that predicts our actions and behaviors, that means the relationship is actually the reverse: feelings
require us to have bounded compute. Otherwise, the whole situation would be impossible since our feelings-about-feelings-about-feelings-about... (ad infinitum) could drive our behavior, which would lead to all sorts of paradoxes.
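Here's the shape of that regress as a tiny sketch, with completely made-up appraisal dynamics: appraising a feeling produces another feeling that could itself be appraised, and the only thing that lets the loop terminate and yield a definite behavior is a finite budget.

def appraise(feeling, budget):
    # A feeling about a feeling about a feeling..., cut off by a compute budget.
    if budget == 0:
        return feeling                          # bounded compute: stop here and act on what you have
    # Made-up dynamics: noticing you're anxious makes you slightly more anxious.
    meta_feeling = min(1.0, feeling * 1.1 + 0.05)
    return appraise(meta_feeling, budget - 1)

print(appraise(0.3, budget=5))   # terminates only because the budget is finite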