Cut your LLM costs by up to 200x. Offload parallel, batch, and research work to cheap Gemini Flash workers instead of burning tokens on your expensive primary model.
v1.3.5–1.3.7: Self-Reflection, Skeleton-of-Thought, Structured Output, and Majority Voting. Quality sprint complete.
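The majority-voting idea can be sketched as follows: fan the same question out to several cheap workers, then keep the most common answer. This is a minimal, self-contained illustration; `ask_worker` is a hypothetical stub standing in for a real Gemini Flash call, not this project's API.

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common answer across worker responses."""
    # Normalize so trivial formatting differences don't split the vote.
    normalized = [a.strip().lower() for a in answers]
    winner, _count = Counter(normalized).most_common(1)[0]
    return winner

def ask_worker(prompt: str, seed: int) -> str:
    # Hypothetical stub: in practice this would call a cheap worker model.
    canned = ["Paris", "Paris", "Lyon", "Paris", "Paris"]
    return canned[seed % len(canned)]

answers = [ask_worker("Capital of France?", i) for i in range(5)]
print(majority_vote(answers))  # -> paris
```

Because each worker call is independent, the fan-out parallelizes trivially, which is what makes cheap-model voting competitive with a single expensive call.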