No LLM ever brings its own opinion. It helps to go back to the basics of how LLMs work: there is nothing genuinely "new" an LLM can give you.
An LLM behaves a bit like a book: when you open a page, the content is already there. The model processes that existing information and generates probabilistic results based on its training data, not new insights.
LLMs are designed to process and generate text from vast training corpora, and their outputs are the product of statistical inference, not independent opinion. The "ingested data" combines the model's training knowledge with whatever information the user supplies or retrieves, and the model produces probabilistic results that follow the patterns in that data, not personal beliefs. So LLMs give you structured, data-driven outputs, not independent thoughts or opinions.
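To make the "statistical inference, not opinion" point concrete, here is a deliberately toy sketch (not a real LLM, and the table and function names are hypothetical): the "model" is nothing but a table of next-token counts learned from hypothetical training text, and generation is just sampling from the probabilities those counts imply.

```python
import math
import random

# Toy illustration: the "model" is only a table of next-token counts
# learned from hypothetical training text. At inference time all it can
# do is turn those counts into probabilities and sample from them --
# there is no mechanism for forming an opinion of its own.
next_token_counts = {
    "the sky is": {"blue": 90, "grey": 8, "falling": 2},
    "in my opinion": {"the": 60, "it": 30, "this": 10},
}

def sample_next_token(context: str, temperature: float = 1.0) -> str:
    counts = next_token_counts[context]
    # Softmax over log-counts: the output is purely a function of
    # training statistics plus a sampling temperature.
    logits = {tok: math.log(c) / temperature for tok, c in counts.items()}
    max_logit = max(logits.values())
    weights = {tok: math.exp(l - max_logit) for tok, l in logits.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[tok] / total for tok in tokens]
    return random.choices(tokens, weights=probs, k=1)[0]

print(sample_next_token("the sky is"))      # almost always "blue"
print(sample_next_token("in my opinion"))   # even an "opinion" is just a frequent continuation
```

Real LLMs do this with billions of parameters instead of a lookup table, but the principle is the same: the output is a probability-weighted continuation of the training data, not a belief the model formed.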
Those so-called "opinions" must align with the data the model was trained on.
Put it this way: if an LLM gives an "opinion", that opinion is 100% a product of, and biased toward, the data it was trained on.
You simply cannot get true opinions.