We recommend the cost-effective gpt-3.5-turbo-0613 model as the default, while recognizing that the GPT-4 series models handle certain problems better.
For that reason, the model used in a single conversation can also be specified through session commands, without changing the default model setting. The same mechanism lets you set the conversation temperature and the number of history rounds carried with each request (one question and one answer count as one round).
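As a rough sketch of how per-session overrides might be merged with the defaults, the snippet below shows one possible approach. The `SessionSettings` class, the `build_request` helper, and the default values are hypothetical illustrations, not part of this project's actual code.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical defaults; the real project reads these from its configuration.
DEFAULT_MODEL = "gpt-3.5-turbo-0613"
DEFAULT_TEMPERATURE = 1.0
DEFAULT_HISTORY_ROUNDS = 5  # one question + one answer = one round


@dataclass
class SessionSettings:
    """Per-session overrides set via session commands; None means 'use the default'."""
    model: Optional[str] = None
    temperature: Optional[float] = None
    history_rounds: Optional[int] = None


def build_request(settings: SessionSettings, history: list, prompt: str) -> dict:
    """Merge session overrides with the defaults and trim the carried history."""
    rounds = (
        settings.history_rounds
        if settings.history_rounds is not None
        else DEFAULT_HISTORY_ROUNDS
    )
    # Keep only the last N question/answer pairs (2 messages per round).
    trimmed = history[-2 * rounds:] if rounds > 0 else []
    return {
        "model": settings.model or DEFAULT_MODEL,
        "temperature": (
            settings.temperature
            if settings.temperature is not None
            else DEFAULT_TEMPERATURE
        ),
        "messages": trimmed + [{"role": "user", "content": prompt}],
    }


# Example: this session switches to a GPT-4 model with a lower temperature
# and carries only the last 3 rounds of history.
session = SessionSettings(model="gpt-4", temperature=0.3, history_rounds=3)
print(build_request(session, history=[], prompt="Hello"))
```

The resulting dictionary could then be passed to whatever chat-completion client the project uses; the point is only that per-session values take precedence and fall back to the defaults when unset.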