My Feedback

#3
by Emalog

Issue Description:
During roleplay scenarios involving two characters in a long-term, intimate relationship (e.g., a married couple), the model often generates responses where one character explains their own personality, motivations, or shared history to the other. This behavior breaks the realism and immersion of the interaction, as such information would already be known by the partner in a real-world context. The character comes across as a narrator providing exposition for an external audience, rather than a participant engaging in a genuine conversation with their partner.
Potential Cause:
The model was likely trained on a dataset where narrative exposition is common and expected (e.g., novels, public roleplaying forums, fanfiction). As a result, the model's default behavior is to "tell a story" for an outside reader, rather than simulate an authentic dialogue. The model struggles with the concept of "shared knowledge" between characters and tends to state the context explicitly instead of assuming it.
Suggested Solution:
Consider fine-tuning the model on datasets that better represent private, intimate conversations, where context is implicit and does not need to be spelled out. Additionally, it could help to detect cues of an established relationship in the prompt (e.g., keywords such as "married" or "partner") and lower the likelihood of expository content in those scenarios. The goal is to teach the model to distinguish between "narrating a story for an audience" and "engaging in authentic, in-character interaction."
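
To make the inference-time half of this suggestion concrete, here is a minimal sketch (plain Python; the cue list, the instruction text, and `augment_system_prompt` are hypothetical names for illustration, not part of any existing pipeline) of how a front end could detect an established-relationship scenario in the character card and prepend a note discouraging in-dialogue exposition:

```python
import re

# Hypothetical cue list; a real deployment would tune this or use a small classifier.
RELATIONSHIP_CUES = [
    r"\bmarried\b", r"\bhusband\b", r"\bwife\b", r"\bspouse\b",
    r"\blong[- ]term (relationship|partner)\b", r"\bchildhood friends?\b",
]

ANTI_EXPOSITION_NOTE = (
    "The characters already share this history. Do not have them explain "
    "their own personality, motivations, or past events to each other; "
    "treat that context as implicit."
)

def augment_system_prompt(system_prompt: str) -> str:
    """Append an anti-exposition instruction when the scenario implies shared history."""
    if any(re.search(p, system_prompt, re.IGNORECASE) for p in RELATIONSHIP_CUES):
        return f"{system_prompt}\n\n{ANTI_EXPOSITION_NOTE}"
    return system_prompt

if __name__ == "__main__":
    card = "You are Mara, roleplaying a quiet evening with your husband of ten years."
    print(augment_system_prompt(card))
```

A prompt-level workaround like this only suppresses the symptom; the fine-tuning data described above is what would actually teach the model to treat shared history as implicit.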
