In principle it should be quite easy: first a fast model evaluates the complexity of the task and picks a suitable model to perform it, then the request is executed on the best-suited model and the answer is returned to the user. Heavy multi-step reasoning models are obviously overkill for simple questions.
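A minimal sketch of what that router could look like, assuming the OpenAI Python SDK; the model names ("gpt-4o-mini", "o1") and the SIMPLE/COMPLEX classification prompt are my own illustrative assumptions, not anything OpenAI has announced:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROUTER_PROMPT = (
    "Rate the complexity of the following request as SIMPLE or COMPLEX. "
    "Reply with one word only.\n\nRequest: {question}"
)

def route_and_answer(question: str) -> str:
    # Step 1: a small, fast model classifies the request's complexity.
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for the cheap router model
        messages=[{"role": "user", "content": ROUTER_PROMPT.format(question=question)}],
    ).choices[0].message.content.strip().upper()

    # Step 2: pick the cheapest model that can plausibly handle the task.
    target = "o1" if verdict.startswith("COMPLEX") else "gpt-4o-mini"

    # Step 3: execute on the chosen model and return the answer.
    answer = client.chat.completions.create(
        model=target,
        messages=[{"role": "user", "content": question}],
    )
    return answer.choices[0].message.content

if __name__ == "__main__":
    print(route_and_answer("What is 2 + 2?"))                      # stays on the cheap model
    print(route_and_answer("Prove that sqrt(2) is irrational."))   # escalates to the reasoning model
```

The extra classification call adds a little latency and cost, but for simple questions that is still far cheaper than always running the big model.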
OpenAI's current CEO has laid out a roadmap for GPT-4.5 and GPT-5, with the goal of simplifying model selection for users and developers.