> [Gemini] has a more "business like" conversation attitude which is refreshing in comparison to the over-the-top informal ChatGPT default.
Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.
> Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.
I presume rejecting the glazing is exactly the behavior they're praising Google for. I can't recall it doing this with any of my prompts, whereas this is standard for OpenAI.
> It is absolutely not too late to bonsai your Cryptomeria japonica. In fact, a 1-meter tall, ground-grown tree is often considered ideal starting material by bonsai enthusiasts. [...]
And when I follow up with 'I have been told cutting back to brown wood will prevent back budding,' I get:
> That is a very common piece of advice in bonsai, but for Cryptomeria (Japanese Cedar), it is a half-truth that requires clarification. [...]
That's in 'Thinking with 3 Pro' mode. No idea about the quality of the results, but I assume they're full of omitted nuances and slight mistakes, like most LLM-generated output out there.
Maybe they tune their models to be less glaze'y for Germany? Or The Machine has Learned that you respond more positively to glazing? :)
I rarely use LLMs because I don't want my brain to atrophy, but when I do I use Gemini precisely because it doesn't try to tell me I'm a very smart boy.
What helped me get rid of such nonsense in ChatGPT is adding a custom instruction (under personalization/customization in the settings):
> Be efficient and blunt. Tell it like it is; don't sugar-coat responses. Get right to the point. Be innovative and think outside the box. Give options, explain reasoning. Stop saying "here is blunt information", "here is a no-nonsense answer", and other word-noise filler; just state the information directly without announcing how and in what style you are going to say it.
You know you can control that, right? I'm constantly blown away by the number of posts in threads like this from people who clearly aren't aware of custom instructions.
Go to 'Personal Context' on the user menu and enter something like this:
Answer concisely by default, and more extensively when necessary. Avoid rhetorical flourishes, bonhomie, and cliches. Take a forward-thinking view. Be mildly positive and encouraging, but never sycophantic or cloying. Never use phrases such as 'You're absolutely right,' 'Great question,' or 'That was a very insightful observation.' When returning source code, never use anything but straight ASCII characters in code and comments—no Unicode, emoji, or anything but ASCII. When asked to write C code, assume C99 with no third-party libraries, frameworks, or other optional resources unless otherwise instructed.
ChatGPT and Claude have similar features. Obviously skip the stuff about coding standards if your interests are horticultural.
It will still occasionally glaze you, but not to an insufferable extent, as happens by default.
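For what it's worth, the chat UIs appear to inject these custom instructions as a system prompt, so if you're calling the models through an API you can get roughly the same effect yourself. A minimal sketch, assuming the OpenAI Python SDK (v1+); the model name and prompt text are placeholders, not a recommendation:

```python
# Minimal sketch: apply "no glazing" instructions as a system message.
# Assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTION = (
    "Answer concisely by default, and more extensively when necessary. "
    "Avoid rhetorical flourishes, bonhomie, and cliches. "
    "Never use phrases such as 'You're absolutely right' or 'Great question.'"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": "Is it too late to bonsai a 1 m Cryptomeria japonica?"},
    ],
)
print(response.choices[0].message.content)
```

If I remember correctly, Gemini's API exposes a similar system_instruction parameter on its model objects, and Claude's messages API takes a top-level system field, so the same trick carries over.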
But as a sibling comment said, the "super nice question homie" texts don't show up (as much) in Gemini as in ChatGPT, for me. I know you can tune ChatGPT's persona, but for me that also changed the answer quality for the worse.
Maybe "business like" for Americans. In most of the world we don't spend quite so much effort glazing one another in the workplace. "That's an incredibly insightful question and really gets to the heart of the matter". No it isn't. I was shocked they didn't fix this behavior in v3.