
Build Workflows That Write Code: Introducing the Coding Agent Step

JieGou workflows can now include autonomous coding agents that read, write, and edit files and run shell commands inside a sandboxed environment. Here's how it works and why it matters.

JieGou Team · 5 min read

Most AI automation platforms stop at text. You can generate a report, draft an email, or summarize a document. But what if your workflow needs to write code, run tests, update a config file, or generate a migration script?

Today we’re launching the Coding Agent — a new workflow step type that gives your automations the ability to autonomously interact with codebases.

What is the Coding Agent?

The Coding Agent is a new step type you can add to any JieGou workflow. You give it a task description and optionally point it at a Git repository. The agent then:

  1. Clones the repo (or works in a temp directory)
  2. Explores the codebase — reads files, searches with glob and grep
  3. Plans its approach based on what it finds
  4. Implements changes — writes new files, edits existing ones
  5. Verifies its work — runs tests, checks for errors
  6. Reports back with a summary and list of modified files

All of this happens autonomously, turn by turn, until the task is complete or the configured turn limit is reached.
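The turn-by-turn loop above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation — `run_agent`, the stub model, and the message format are all stand-ins:

```python
# Minimal sketch of the agent's turn loop. The model call and tools are
# stubbed; names like run_agent and max_turns are illustrative only.

def run_agent(task, tools, call_model, max_turns=30):
    messages = [{"role": "user", "content": task}]
    for turn in range(max_turns):
        reply = call_model(messages, tools)
        messages.append({"role": "assistant", "content": reply})
        if reply.get("done"):                     # agent reports completion
            return reply["summary"], turn + 1
        for call in reply.get("tool_calls", []):  # execute requested tools
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
    return "turn limit reached", max_turns

# A stub model that explores once, then finishes:
def stub_model(messages, tools):
    if len(messages) == 1:
        return {"tool_calls": [{"name": "glob", "args": {"pattern": "*.py"}}]}
    return {"done": True, "summary": "no changes needed"}

summary, turns = run_agent("audit repo", {"glob": lambda pattern: "[]"}, stub_model)
```

The loop ends either when the model signals completion or when the turn limit is hit — the same two exit conditions described above.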

Six built-in tools

The agent has access to six tools, each designed for a specific operation:

  • read — Read file contents with optional line range
  • write — Create or overwrite a file
  • edit — Exact string replacement with fuzzy Unicode matching
  • bash — Execute shell commands with timeout enforcement
  • glob — Find files by pattern
  • grep — Search file contents with regex

You can enable or disable individual tools per step. For example, a “read-only analysis” step might only enable read, glob, and grep.
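A read-only analysis step might be configured like this. The field names are hypothetical, not the published JieGou schema:

```python
# Hypothetical step config: a read-only analysis step that disables
# write, edit, and bash. Field names are illustrative only.

ALL_TOOLS = {"read", "write", "edit", "bash", "glob", "grep"}

analysis_step = {
    "type": "coding_agent",
    "task": "Summarize the module structure under src/",
    "tools": ["read", "glob", "grep"],   # only read-only tools enabled
    "max_turns": 15,
}

def enabled_tools(step):
    """Return the tools the agent may use, validated against the full set."""
    requested = set(step.get("tools", ALL_TOOLS))
    unknown = requested - ALL_TOOLS
    if unknown:
        raise ValueError(f"unknown tools: {sorted(unknown)}")
    return requested

tools = enabled_tools(analysis_step)
```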

Sandboxed by default

Security is non-negotiable when you give an LLM access to a filesystem and shell. Every Coding Agent step in production runs inside a Docker container with strict constraints:

  • No network access — the container cannot make outbound connections
  • Memory limits — hard OOM kill at 512 MB
  • CPU limits — capped at 25% of a core
  • PID limits — prevents fork bombs (max 50 processes)
  • Read-only root filesystem — only the working directory is writable
  • Path confinement — all file operations are validated to stay within the working directory, with symlink traversal blocked
  • Timeout enforcement — bash commands are hard-killed after the configured timeout (default: 2 minutes)
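The path-confinement check described above amounts to resolving every candidate path (symlinks included) and verifying it stays under the working directory. A sketch of that behavior, not the actual implementation:

```python
# Sketch of path confinement: resolve symlinks and reject any path that
# escapes the working directory.
import os

def confine(workdir, user_path):
    workdir = os.path.realpath(workdir)
    # Resolve the candidate path relative to the working directory,
    # following symlinks, then check containment.
    resolved = os.path.realpath(os.path.join(workdir, user_path))
    if os.path.commonpath([workdir, resolved]) != workdir:
        raise PermissionError(f"path escapes working directory: {user_path}")
    return resolved
```

Resolving with `realpath` before the containment check is what blocks symlink traversal: a link pointing outside the working directory resolves to its real target and fails the check.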

In development mode, the agent uses local filesystem operations for faster iteration. The pluggable FileOperations interface means the same tool definitions work in both environments.
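The pluggable interface could be modeled as a small protocol with interchangeable backends. This is a hypothetical sketch — the real `FileOperations` interface may look different:

```python
# Hypothetical sketch of a pluggable file-operations interface: the same
# tool definitions run against either a local or a sandboxed backend.
from typing import Protocol

class FileOperations(Protocol):
    def read(self, path: str) -> str: ...
    def write(self, path: str, content: str) -> None: ...

class LocalFileOperations:
    """Development-mode backend (in-memory stand-in for this sketch)."""
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, content):
        self.files[path] = content

def write_tool(ops: FileOperations, path: str, content: str) -> str:
    """The 'write' tool depends only on the interface, not the backend."""
    ops.write(path, content)
    return f"wrote {len(content)} bytes to {path}"

ops = LocalFileOperations()
msg = write_tool(ops, "notes.txt", "hello")
```

Swapping in a container-backed implementation changes nothing in the tool definitions — which is the point of making the interface pluggable.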

Iterative compaction for long sessions

Complex coding tasks can require many turns — 20, 30, even 50 tool calls. That’s a lot of context. The Coding Agent uses the same iterative compaction system as JieGou’s conversational AI to handle long sessions:

  • When the accumulated messages approach the model’s context window, older turns are compressed into a structured summary
  • The summary preserves goals, progress, key decisions, and file references
  • Subsequent compactions update the existing summary rather than regenerating from scratch

This means the agent never loses track of what it’s done, even in sessions that run for dozens of turns.
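A rough sketch of the compaction trigger, assuming a token-count budget. The summarizer here is a stub; real compaction uses an LLM call and a structured summary format:

```python
# Sketch of iterative compaction: when estimated tokens exceed a budget,
# fold older messages into a running summary. Numbers are illustrative.

def estimate_tokens(messages):
    return sum(len(m["content"]) // 4 for m in messages)  # crude heuristic

def compact(messages, summary, budget=1000, keep_recent=4):
    if estimate_tokens(messages) <= budget:
        return messages, summary
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Update the existing summary instead of regenerating from scratch.
    summary = (summary + " | " if summary else "") + f"compacted {len(old)} msgs"
    return [{"role": "system", "content": summary}] + recent, summary

msgs = [{"role": "user", "content": "x" * 400} for _ in range(20)]  # ~2000 tokens
msgs, summ = compact(msgs, "")
```

Note that the summary parameter is threaded through: the next compaction extends the existing summary rather than starting over, which is what keeps repeated compactions cheap.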

Real-time event streaming

Every action the agent takes emits a structured event:

  • turn_start / turn_end — track turn count and token usage
  • tool_call / tool_result — see what tools are being used and their outputs
  • assistant_message — the agent’s reasoning and explanations
  • compaction — when context is compressed
  • agent_end — final summary with total turns, tokens, and modified files

These events power real-time progress visualization in the workflow run UI and are logged to the audit trail for compliance.
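Consuming the event stream might look like this. The event shapes follow the list above; the sample events and handler name are illustrative:

```python
# Sketch of consuming the agent's event stream. Event types mirror the
# list above; the sample events are hand-written.

def summarize_run(events):
    """Aggregate a run's events into a progress summary."""
    stats = {"turns": 0, "tool_calls": 0, "tokens": 0}
    for event in events:
        if event["type"] == "turn_end":
            stats["turns"] += 1
            stats["tokens"] += event["tokens"]
        elif event["type"] == "tool_call":
            stats["tool_calls"] += 1
        elif event["type"] == "agent_end":
            stats["modified_files"] = event["modified_files"]
    return stats

events = [
    {"type": "turn_start"},
    {"type": "tool_call", "tool": "read"},
    {"type": "turn_end", "tokens": 850},
    {"type": "agent_end", "modified_files": ["src/app.py"]},
]
stats = summarize_run(events)
```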

Use cases

Here are some ways teams are already using the Coding Agent:

  • Automated test generation — point it at a module and ask it to write unit tests
  • Documentation updates — generate API docs, READMEs, or changelogs from code changes
  • Migration scripts — write database migration files based on schema changes
  • Config generation — produce Terraform, Kubernetes YAML, or CI/CD configs from templates
  • Code review assistance — analyze a diff and generate review comments
  • Dependency updates — update package versions and fix breaking changes

How it fits into workflows

The Coding Agent is a regular workflow step. It can:

  • Receive input from previous steps via input mappings (e.g., a PR diff from a webhook trigger)
  • Output results that downstream steps consume (the agent’s response, list of modified files, token usage)
  • Run in DAG mode alongside other steps with dependency declarations
  • Use any LLM provider — pick the model that works best for coding tasks (Claude Opus for complex refactors, Haiku for simple edits)

Plan gating and cost estimation

The Coding Agent is available on Pro plans and above. Cost estimation multiplies the configured maxTurns by the average tokens per turn, so you see a cost estimate before a workflow run starts.
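The estimate is a simple product. With hypothetical numbers (these are not JieGou's prices):

```python
# Cost estimation sketch: maxTurns × average tokens per turn × price.
# All numbers here are illustrative, not JieGou pricing.

def estimate_cost(max_turns, avg_tokens_per_turn, price_per_1k_tokens):
    total_tokens = max_turns * avg_tokens_per_turn
    return total_tokens / 1000 * price_per_1k_tokens

# e.g. 30 turns × 2,000 tokens/turn at $0.01 per 1K tokens
cost = estimate_cost(30, 2000, 0.01)
```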

Enterprise plans get dedicated container pools for higher concurrency and isolation.

Getting started

  1. Create or edit a workflow
  2. Add a new step and select Coding Agent as the type
  3. Write your task description (be specific — include file paths, expected behavior, test commands)
  4. Optionally set a repo URL and branch
  5. Configure tool access, max turns, and model selection
  6. Run the workflow
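Putting the steps above together, a complete step configuration might look like this. All field names are hypothetical, not the published schema:

```python
# Hypothetical end-to-end step config combining the options from the
# getting-started steps above. Field names are illustrative only.

step = {
    "type": "coding_agent",
    "task": (
        "Add unit tests for src/billing/invoice.py covering rounding "
        "and currency conversion; run `pytest tests/` to verify."
    ),
    "repo": {"url": "https://github.com/example/app.git", "branch": "main"},
    "tools": ["read", "write", "edit", "bash", "glob", "grep"],
    "max_turns": 40,
    "model": "claude-opus",
    "bash_timeout_seconds": 120,   # the default noted in the sandbox section
}

def validate_step(step):
    required = {"type", "task"}
    missing = required - step.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

ok = validate_step(step)
```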

The Coding Agent brings software engineering capabilities into the same platform where your team already runs content generation, data processing, and operational workflows. No separate tools, no context switching — just another step in your pipeline.

Available now on Pro and Team plans. Get started.
