Koog 0.4.0, an update to JetBrains' Kotlin framework for building AI agents, features native structured output intended to hold up in production. It also adds Apple's iOS as a target platform, along with support for the GPT-5 model and OpenTelemetry.
Announced August 28, Koog 0.4.0 is intended to make agents observable, seamless, deployable in a user's stack, and more predictable, while adding support for new platforms and models, JetBrains said. The code is available on GitHub. Native structured output addresses a common failure mode: a large language model (LLM) is asked for an exact data format, but its response breaks downstream parsing. Koog 0.4.0 adds native structured output for the LLMs that support it, with pragmatic guardrails such as retries and fixing strategies. When a model supports structured output, the framework uses it directly. Otherwise, Koog falls back to a tuned prompt-and-retry loop, with a fixing parser powered by a separate model, until the payload matches the expected format exactly.
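The fallback strategy described above can be sketched in plain Kotlin. This is an illustrative sketch only, not Koog's actual API: the `LlmClient`, `parseWeather`, and `structuredRequest` names are hypothetical stand-ins for the retry-then-fix flow.

```kotlin
// Hypothetical sketch of a Koog-style structured-output fallback.
// All names here are illustrative, not Koog's real API surface.

data class Weather(val city: String, val tempC: Int)

// Minimal stand-in for a model call: returns raw text that may be malformed.
fun interface LlmClient {
    fun complete(prompt: String): String
}

// Tiny "parser": expects "city,temp" and returns null otherwise.
fun parseWeather(raw: String): Weather? {
    val parts = raw.split(",").map { it.trim() }
    val temp = parts.getOrNull(1)?.toIntOrNull() ?: return null
    return Weather(parts[0], temp)
}

// Retry the main model; hand malformed output to a separate "fixing" model,
// mirroring the prompt-and-retry-with-fixing-parser strategy in the article.
fun structuredRequest(
    main: LlmClient,
    fixer: LlmClient,
    prompt: String,
    maxRetries: Int = 2,
): Weather {
    repeat(maxRetries) {
        val raw = main.complete(prompt)
        parseWeather(raw)?.let { return it }
        // Ask the fixing model to repair the malformed payload.
        parseWeather(fixer.complete("Fix this to 'city,temp': $raw"))?.let { return it }
    }
    error("No valid structured output after $maxRetries attempts")
}

fun main() {
    val flaky = LlmClient { "Oslo: four degrees" } // malformed payload
    val fixer = LlmClient { "Oslo, 4" }            // repaired payload
    println(structuredRequest(flaky, fixer, "weather in Oslo"))
    // prints Weather(city=Oslo, tempC=4)
}
```

In the real framework, the native path (a model that emits the schema directly) would replace this loop entirely; the sketch covers only the fallback branch.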
For iOS, Koog is now available on Apple's mobile operating system as part of a focus on Kotlin Multiplatform. Developers can build an agent once and ship it to iOS, Android, and JVM back ends, all with the same strategy graphs, observability hooks, and tests. Note, however, that developers must use Koog 0.4.1 to build for iOS.

Additionally, Koog 0.4.0 brings GPT-5 support and custom LLM parameters that help a model think harder on complex problems. Settings such as reasoningEffort enable a balance of quality, cost, and latency for each call, according to JetBrains. OpenTelemetry support, meanwhile, covers both the W&B Weave AI development toolkit and the Langfuse open source LLM engineering platform.
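The build-once, ship-everywhere claim for iOS, Android, and the JVM maps onto standard Kotlin Multiplatform target declarations. A minimal Gradle sketch follows; the module layout and the dependency coordinates are assumptions, so check Koog's repository for the exact values.

```kotlin
// build.gradle.kts for a shared agent module: a minimal
// Kotlin Multiplatform sketch, not an official Koog template.
kotlin {
    jvm()               // JVM back ends
    androidTarget()     // Android apps
    iosArm64()          // iOS devices (requires Koog 0.4.1)
    iosSimulatorArm64() // iOS simulator on Apple silicon

    sourceSets {
        commonMain.dependencies {
            // Hypothetical coordinates; see the Koog README for the real ones.
            implementation("ai.koog:koog-agents:0.4.1")
        }
    }
}
```

The agent's strategy graph then lives in `commonMain` and compiles for every declared target, which is what makes the shared observability hooks and tests possible.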
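To make the quality/cost/latency trade-off concrete, here is a hypothetical sketch of per-call parameters in the spirit of the reasoningEffort setting. The `ReasoningEffort` enum and `LlmParams` class below are illustrative stand-ins, not Koog's actual types.

```kotlin
// Illustrative sketch: per-call LLM parameters trading depth of
// reasoning against cost and latency. Not Koog's real API.

enum class ReasoningEffort { MINIMAL, LOW, MEDIUM, HIGH }

data class LlmParams(
    val model: String,
    val reasoningEffort: ReasoningEffort = ReasoningEffort.MEDIUM,
    val temperature: Double = 1.0,
)

// A cheap, fast triage call versus a slower, deeper analysis call:
// same model, different effort per request.
val triage = LlmParams(model = "gpt-5", reasoningEffort = ReasoningEffort.LOW)
val deepDive = LlmParams(model = "gpt-5", reasoningEffort = ReasoningEffort.HIGH)
```

The point of exposing this per call, rather than per agent, is that a single strategy graph can mix quick classification steps with expensive reasoning steps without switching providers.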