Natera OpenCode Setup
This repository contains instructions for setting up macOS and OpenCode to work with Natera's AWS Bedrock account. The result is a CLI coding agent with the look and feel of Claude Code, but with access to a wide variety of open-source coding models (Claude Code supports only Anthropic models).
The guide is split into two independent parts:
- Part I — Core OpenCode + Bedrock setup (sections 1–7). This is everything you need to use opencode against Natera's Bedrock. Stop here if that's all you want.
- Part II — Optional: oh-my-openagent (sections 8–10). A separate, optional plugin layer that assigns different Bedrock models to different agent roles. You do not need this to use opencode. Skip Part II entirely if you just want a working CLI coding agent.
AI-assisted setup: If you already have an AI coding assistant (Claude, Cursor, opencode, etc.), you can point it at this README and say "Get me this setup on my Mac". The instructions below are written to be executable by an AI agent with shell access. Each section is ordered, contains verifiable success criteria, and calls out the exact files to create or edit. An AI agent should complete Part I first and only proceed to Part II if the user explicitly asks for oh-my-openagent.
Table of Contents
Part I — Core OpenCode + Bedrock (required)
- Prerequisites
- AWS Configuration
- OpenCode Configuration
- Selecting a Model
- Testing Bedrock Authentication (Without OpenCode)
- Shell Aliases for Wrapping OpenCode with aws-vault
- Core Verification Checklist
Part II — Optional: oh-my-openagent (not required to use opencode)
- What is oh-my-openagent, and do you need it?
- Installing and Configuring oh-my-openagent
- Swapping oh-my-openagent Configs with Aliases
Part I — Core OpenCode + Bedrock Setup
Everything in this part is required to use opencode with Natera's Bedrock account. When you finish Section 7's checklist, you have a working, self-sufficient setup. You can stop reading here.
1. Prerequisites
Install the following on macOS. If you have an AI assistant running the setup, it can run these sequentially and check each one with `which <cmd>`.
1.1 AWS CLI
Install per AWS instructions: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
Verify:
aws --version
1.2 Homebrew
Install from https://brew.sh/, then verify:
brew --version
1.3 OpenCode
brew install opencode
Verify:
opencode --version
1.4 aws-vault
aws-vault wraps arbitrary commands with credentials from a named AWS profile. We use it to inject
Bedrock-capable credentials into opencode.
brew install aws-vault
Verify:
aws-vault --version
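If an AI agent (or you) wants to run all four checks in one pass, here is a small preflight sketch. The tool list is exactly the four commands above; `check_tools` is our own helper, not a standard utility:

```shell
#!/bin/sh
# Preflight: report which required tools are on PATH.
# Prints ok/MISSING per tool; returns non-zero if anything is missing.
check_tools() {
  missing=0
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      printf 'ok: %s\n' "$cmd"
    else
      printf 'MISSING: %s\n' "$cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

check_tools aws brew opencode aws-vault || echo "install the missing tools before continuing"
```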
2. AWS Configuration
Create or edit ~/.aws/config to define a bedrock profile pointing at Natera's AWS SSO.
Profile name: We use `bedrock` as the canonical name in this guide. The profile name is arbitrary, but it must match consistently across `~/.aws/config`, `~/.config/opencode/opencode.json`, and any `aws-vault exec <profile>` invocations. If you pick a different name (e.g. `claude`), change it everywhere.
~/.aws/config:
[profile bedrock]
sso_start_url = https://d-9267b2b155.awsapps.com/start/#
sso_region = us-west-2
sso_account_id = 976619631140
sso_role_name = NateraReadOnly
region = us-west-2
output = json
Test SSO login:
aws sso login --profile bedrock
This should open your browser to the Natera SSO portal. Approve the request.
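To confirm the session actually yields credentials for the right account (the account ID from the profile above), you can pipe `aws sts get-caller-identity` through a small check. The `expect_account` helper is our own, not an AWS command; it assumes `python3` on PATH:

```shell
# Succeeds iff the caller-identity JSON on stdin is for the expected account.
expect_account() {
  python3 -c '
import json, sys
try:
    ok = json.load(sys.stdin).get("Account") == sys.argv[1]
except Exception:
    ok = False
sys.exit(0 if ok else 1)
' "$1"
}

aws --profile bedrock sts get-caller-identity | expect_account 976619631140 \
  && echo "SSO session OK for the Natera account" \
  || echo "wrong or expired session; rerun: aws sso login --profile bedrock"
```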
3. OpenCode Configuration
Create ~/.config/opencode/opencode.json. This file tells opencode which Bedrock model to use by
default, which AWS profile to authenticate with, and configures permissions so it doesn't run destructive
commands without asking.
~/.config/opencode/opencode.json:
{
"$schema": "https://opencode.ai/config.json",
"model": "amazon-bedrock/<MODEL-STRING>",
"permission": {
"bash": "ask",
"edit": "ask",
"webfetch": "ask"
},
"provider": {
"amazon-bedrock": {
"options": {
"region": "us-west-2",
"profile": "bedrock"
}
}
}
}
Notes:
- Replace `<MODEL-STRING>` with one of the model IDs from Section 4. The `amazon-bedrock/` prefix is required — it tells opencode which provider to route to.
- The `profile` field must match the profile name in `~/.aws/config`.
- The `permission` block opts out of opencode's "yolo mode" (auto-execute everything). This is strongly recommended, especially when you're evaluating unknown models.
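As a quick sanity check after writing the file, here is a sketch. It assumes `python3` on PATH; `check_opencode_cfg` is a hypothetical helper of ours, not part of opencode:

```shell
# Validate opencode.json: parses as JSON, model carries the amazon-bedrock/
# prefix, and the provider profile is the expected one.
check_opencode_cfg() {
  python3 - "$1" <<'EOF'
import json, sys
cfg = json.load(open(sys.argv[1]))
assert cfg["model"].startswith("amazon-bedrock/"), "model needs the amazon-bedrock/ prefix"
assert cfg["provider"]["amazon-bedrock"]["options"]["profile"] == "bedrock", "profile mismatch"
print("opencode.json looks sane")
EOF
}

check_opencode_cfg "$HOME/.config/opencode/opencode.json" 2>/dev/null \
  || echo "fix ~/.config/opencode/opencode.json before continuing"
```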
4. Selecting a Model
Refresh this list yourself. Natera's Bedrock roster is a moving target — new models land, old ones get retired, version suffixes get bumped, cross-region inference profile IDs change. Do not trust any hard-coded list in this README (including the snapshot below). Before hard-wiring a model into a config or alias, re-query Bedrock directly with the command in §4.1 and confirm the exact ID. A stale model ID will fail with a cryptic `ValidationException` or `AccessDeniedException` — not a friendly "model not found" error.
4.1 List all available text models
This is the source of truth. Run it any time you're about to edit a config:
aws --profile bedrock \
bedrock list-foundation-models \
--by-output-modality TEXT \
--query "modelSummaries[*].modelId" \
--output text
You may also want to list Bedrock's inference profiles (cross-region / system-defined), which are often what opencode actually needs to route to rather than the raw foundation model ID:
aws --profile bedrock \
bedrock list-inference-profiles \
--query "inferenceProfileSummaries[*].inferenceProfileId" \
--output text
If an amazon-bedrock/<foundation-model-id> string fails to invoke from opencode, try the
corresponding inference profile ID instead (these typically look like
us.anthropic.claude-..., global.anthropic.claude-..., etc.).
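A tiny helper can enumerate candidate prefixes for you. The `us.` and `global.` prefixes appear in the text above; `eu.` is a guess included for completeness, so always confirm candidates against the `list-inference-profiles` output rather than assuming they exist:

```shell
# Given a foundation model ID, print candidate cross-region
# inference-profile IDs to try from opencode.
candidates() {
  for prefix in us global eu; do
    printf '%s.%s\n' "$prefix" "$1"
  done
}

candidates anthropic.claude-sonnet-4-6
# us.anthropic.claude-sonnet-4-6
# global.anthropic.claude-sonnet-4-6
# eu.anthropic.claude-sonnet-4-6
```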
4.2 Snapshot of models seen recently (may be stale)
The IDs below were valid as of the last time this README was updated. Treat them as hints, not gospel — always cross-check against §4.1 output before using them.
Anthropic (closed-source, highest quality):
anthropic.claude-opus-4-6-v1
anthropic.claude-sonnet-4-6
anthropic.claude-haiku-4-5-20251001-v1:0
Open-source / open-weight (strong coders):
deepseek.v3.2
qwen.qwen3-coder-next
qwen.qwen3-coder-480b-a35b-v1:0
mistral.mistral-large-3-675b-instruct
mistral.devstral-2-123b
meta.llama3-3-70b-instruct-v1:0
google.gemma-3-27b-it
moonshot.kimi-k2-thinking
Small / cheap (good for basic chat, risky for agent work — see §4.3):
amazon.nova-2-lite-v1:0
amazon.nova-pro-v1:0
4.3 Not every model works for every task
Important caveat: being listed in Bedrock — and even being able to hold a conversation in opencode — does not mean a model is fit for agentic coding work. Models vary wildly along three independent axes:
- Conversational fluency — can it hold a basic chat turn? Most Bedrock text models can.
- Tool use / function calling — can it correctly emit structured tool calls that opencode understands? Some models can't, or will mis-format calls.
- Agentic discipline — can it orchestrate sub-agents, wait for them to finish, and compose their results? This is where small/cheap models consistently fall over, even when #1 and #2 look fine.
Cheap models like amazon.nova-2-lite-v1:0 will often pass test #1 cleanly and still fail test #3
by firing off sub-agents and then terminating the turn before the sub-agents come back.
Tip — ask a chatbot for a fresh fitness assessment. Model fitness for specific roles (coding, tool use, agent orchestration, long-context work, vision, etc.) changes faster than any static README can track. Before you commit a model to an alias or an oh-my-openagent slot, paste its exact ID into a general-purpose chatbot — Gemini, ChatGPT, Claude.ai, Perplexity, etc. — and ask something like "Is `qwen.qwen3-coder-480b-a35b-v1:0` a good choice for agentic coding with tool use? How does it compare to `anthropic.claude-sonnet-4-6`?". These assistants see benchmark chatter, release notes, and community feedback well after this README was last edited, and they're a fast second opinion on whether a given Bedrock model is known to handle orchestration, function calling, or long-horizon tasks. Treat their answer as a hint, then confirm empirically with the two-tier test below.
Use the two-tier test below before committing a model to any alias or oh-my-openagent slot.
4.3.1 Tier 1 — Basic conversation test
Can opencode even talk to this model? This is the cheapest possible smoke test:
aws-opencode --model amazon-bedrock/amazon.nova-2-lite-v1:0 run "say hello"
(Swap the model string for whatever you're evaluating.) If this returns a coherent reply, the model is wired up correctly — provider routing, credentials, and the response format all work. Most models pass this out of the box.
If Tier 1 fails, the problem is almost always plumbing (credentials, region, model ID typo, inference-profile mismatch) — not the model itself. Fix plumbing before proceeding.
4.3.2 Tier 2 — Agent orchestration test
Passing Tier 1 does not mean the model can drive opencode's task() delegation system. The
failure mode is sneaky: the model will happily spin up sub-agents and then end its own turn
immediately, ignoring their output.
This is the minimum test for agentic competence. It fires three sub-agents in parallel against three throwaway directory names and requires the orchestrator to wait and synthesize:
aws-opencode run "spin up 3 agents to analyze the directories @foo @bar @wuz, \
then compile the summaries into a single report. do not analyze the directories \
yourself, explicitly spin up agents to do it. do not terminate prematurely, wait \
for the agents to complete" \
--dangerously-skip-permissions
(Replace foo, bar, wuz with any three directory names in your current working tree.)
A capable orchestrator model will:
- Dispatch three parallel `task(...)` invocations.
- Pause until all three return.
- Read their summaries and emit a single combined report.
A model that's too small for agent work will typically:
- Fire the sub-agents and then immediately exit the turn without waiting.
- Or never spin up agents at all and silently do the analysis itself despite being told not to.
- Or spin up one agent, not three.
amazon.nova-2-lite-v1:0 — which passes Tier 1 perfectly — reliably fails Tier 2 this way.
That is exactly the case this test is designed to catch. Do not assign a Tier-2-failing model
to any orchestrator role (e.g. sisyphus, unspecified-high) in Part II.
`--dangerously-skip-permissions` is used here so the test is actually non-interactive and you can observe whether the model sits and waits. Only use it in evaluation runs like this — it disables the `"ask"` prompts you set up in §3.
5. Testing Bedrock Authentication (Without OpenCode)
Before wiring up opencode, verify your AWS profile can actually invoke Bedrock models. This isolates credential problems from opencode problems.
Qwen:
aws --profile bedrock \
bedrock-runtime invoke-model \
--model-id qwen.qwen3-coder-480b-a35b-v1:0 \
--region us-west-2 \
--cli-binary-format raw-in-base64-out \
--body '{"max_tokens": 1024, "messages": [{"role": "user", "content": "Hello world"}]}' \
output.json
Mistral:
aws --profile bedrock \
bedrock-runtime invoke-model \
--model-id mistral.mistral-large-3-675b-instruct \
--region us-west-2 \
--cli-binary-format raw-in-base64-out \
--body '{"max_tokens": 1024, "messages": [{"role": "user", "content": "Hello world"}]}' \
output.json
Anthropic: Closed-source Anthropic models (Claude Opus, Sonnet, Haiku) require an additional
anthropic_version field in the body:
aws --profile bedrock \
bedrock-runtime invoke-model \
--model-id anthropic.claude-sonnet-4-6 \
--region us-west-2 \
--cli-binary-format raw-in-base64-out \
--body '{"anthropic_version": "bedrock-2023-05-31", "max_tokens": 1024, "messages": [{"role": "user", "content": "Hi"}]}' \
output.json
Each command should write a non-empty output.json. If it fails, fix the AWS layer before touching
opencode.
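If you want a mechanical pass/fail instead of eyeballing the file, here is a sketch (the `check_invoke_output` helper name is ours; assumes `python3` on PATH):

```shell
# A real invoke leaves a non-empty, JSON-parsable output.json behind.
check_invoke_output() {
  [ -s "$1" ] && python3 -c 'import json, sys; json.load(open(sys.argv[1])); print("valid JSON:", sys.argv[1])' "$1"
}

check_invoke_output output.json || echo "invoke failed: fix credentials/model ID before touching opencode"
```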
6. Shell Aliases for Wrapping OpenCode with aws-vault
opencode won't see Bedrock credentials unless it's launched inside an aws-vault exec environment.
Rather than typing that prefix every time, define aliases in ~/.aliases (or wherever your shell
sources aliases from).
Add the following to ~/.aliases, then make sure your ~/.zshrc or ~/.bashrc sources it
(e.g. [ -f ~/.aliases ] && source ~/.aliases).
# ==================================================================
# OpenCode + AWS Bedrock (Natera)
# ==================================================================
# Plain opencode — uses the default model from ~/.config/opencode/opencode.json
alias aws-opencode="aws-vault exec bedrock -- opencode"
# ----- Anthropic (closed-source) -----
alias aws-opus="aws-vault exec bedrock -- opencode --model amazon-bedrock/anthropic.claude-opus-4-6-v1"
alias aws-sonnet="aws-vault exec bedrock -- opencode --model amazon-bedrock/anthropic.claude-sonnet-4-6"
alias aws-haiku="aws-vault exec bedrock -- opencode --model amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0"
# ----- Open / open-weight coders -----
alias aws-deepseek="aws-vault exec bedrock -- opencode --model amazon-bedrock/deepseek.v3.2"
alias aws-qwen="aws-vault exec bedrock -- opencode --model amazon-bedrock/qwen.qwen3-coder-next"
alias aws-mistral="aws-vault exec bedrock -- opencode --model amazon-bedrock/mistral.mistral-large-3-675b-instruct"
alias aws-llama="aws-vault exec bedrock -- opencode --model amazon-bedrock/meta.llama3-3-70b-instruct-v1:0"
alias aws-gemma="aws-vault exec bedrock -- opencode --model amazon-bedrock/google.gemma-3-27b-it"
alias aws-kimi="aws-vault exec bedrock -- opencode --model amazon-bedrock/moonshot.kimi-k2-thinking"
alias aws-nova="aws-vault exec bedrock -- opencode --model amazon-bedrock/amazon.nova-pro-v1:0"
# ----- Anthropic Claude CLI (not opencode) -----
# If you also use the official `claude` CLI, you can wrap it too.
# Requires setting ANTHROPIC_MODEL to a Bedrock-hosted model.
alias aws-claude="ANTHROPIC_MODEL='global.anthropic.claude-opus-4-6-v1' aws-vault exec bedrock -- claude"
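To wire up the sourcing line mentioned earlier without creating duplicates on repeat runs, here is an idempotent-append sketch (`ensure_sourced` is our own helper, and the target shell rc file is an assumption; use `~/.bashrc` if that's your shell):

```shell
# Append a line to a file only if that exact line isn't already present.
ensure_sourced() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

ensure_sourced "$HOME/.zshrc" '[ -f ~/.aliases ] && source ~/.aliases'
```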
Reload your shell (exec $SHELL -l) and run:
aws-opencode
If you haven't opened an SSO session recently, aws-vault will launch a browser and walk you through
SSO. Once authenticated, opencode starts with Bedrock creds injected.
Why separate aliases per model? Each alias is a one-keystroke switch between models. This is
especially useful for evaluating which model is best at a given task — start two terminals with
aws-opus and aws-qwen, hand both the same prompt, compare.
7. Core Verification Checklist
Run through this list to confirm a working Part I setup. If every box below is checked, you have a fully functional opencode + Bedrock environment and you are done. Part II is optional and unrelated to whether opencode works.
- [ ] `aws --version` prints a version.
- [ ] `aws-vault --version` prints a version.
- [ ] `opencode --version` prints a version.
- [ ] `~/.aws/config` contains a `[profile bedrock]` block (or your chosen name).
- [ ] `aws sso login --profile bedrock` completes without error.
- [ ] The `bedrock list-foundation-models` call from §4 returns a non-empty list.
- [ ] At least one `bedrock-runtime invoke-model` call from §5 returns a valid `output.json`.
- [ ] `~/.config/opencode/opencode.json` exists, has `amazon-bedrock/<model>` set, and references the correct profile.
- [ ] `~/.aliases` contains the opencode wrapping aliases from §6 and is sourced by your shell.
- [ ] `aws-opencode` launches opencode with Bedrock credentials (no "provider not configured" error).
If all boxes are checked, Part I is complete. You have a working setup. You do not need to read Part II unless you specifically want to try oh-my-openagent.
Part II — Optional: oh-my-openagent
THIS PART IS OPTIONAL. OpenCode works fine without oh-my-openagent. If you just want a working CLI coding agent, stop reading — you're already done after Part I.
Only continue if you specifically want to assign different Bedrock models to different agent roles (e.g. Opus for the orchestrator, Haiku for cheap tasks, Mistral for exploration).
8. What is oh-my-openagent, and do you need it?
oh-my-openagent (nicknamed omo) is a third-party opencode plugin. It is not part of opencode and is not required to use opencode against Bedrock — everything in Part I runs without it.
What it adds:
- Named agent roles — sisyphus, oracle, explore, librarian, hephaestus, atlas, prometheus, … — each of which can be assigned its own Bedrock model.
- Task categories — `quick`, `writing`, `visual-engineering`, `git`, `unspecified-low`, `unspecified-high` — used by opencode's `task()` delegation system, each assignable to its own model.
- A convention where your orchestrator model (the one you talk to) can delegate sub-tasks to cheaper or faster models, keeping cost down while keeping overall quality high.
When you might want it:
- You care about mixing models (expensive Claude Opus for planning, cheap Haiku for quick tasks).
- You want to evaluate how different providers behave in specific roles.
- You're building multi-agent workflows.
When you should skip it:
- You just want opencode to work with one Bedrock model.
- You're setting up a new teammate and just need them productive. Send them Part I only.
The rest of Part II assumes Part I is already working.
9. Installing and Configuring oh-my-openagent
9.1 Add the plugin to ~/.config/opencode/opencode.json
Edit the file you created in §3 and add a plugin array:
{
"$schema": "https://opencode.ai/config.json",
"model": "amazon-bedrock/qwen.qwen3-coder-480b-a35b-v1:0",
"permission": {
"bash": "ask",
"edit": "ask",
"webfetch": "ask"
},
"provider": {
"amazon-bedrock": {
"options": {
"region": "us-west-2",
"profile": "bedrock"
}
}
},
"plugin": [
"oh-my-openagent@latest"
]
}
When opencode starts, it will fetch the plugin into ~/.config/opencode/node_modules/ and look for
an oh-my-openagent.json file next to opencode.json.
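Once opencode has been launched at least once, you can confirm the fetch actually happened (`plugin_installed` is our own helper):

```shell
# The plugin is fetched into node_modules next to opencode.json.
plugin_installed() {
  test -d "$1/node_modules/oh-my-openagent"
}

plugin_installed "$HOME/.config/opencode" \
  && echo "oh-my-openagent installed" \
  || echo "plugin missing: launch opencode once (aws-opencode) to trigger the fetch"
```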
9.2 Create ~/.config/opencode/oh-my-openagent.json
This file is the omo config. A minimal working example, mapping every agent to Anthropic Claude:
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/dev/assets/oh-my-opencode.schema.json",
"disabled_hooks": ["comment-checker"],
"agents": {
"sisyphus": {
"model": "amazon-bedrock/anthropic.claude-opus-4-6-v1",
"ultrawork": {
"model": "amazon-bedrock/anthropic.claude-opus-4-6-v1",
"variant": "max"
}
},
"hephaestus": { "model": "amazon-bedrock/anthropic.claude-opus-4-6-v1" },
"atlas": { "model": "amazon-bedrock/anthropic.claude-opus-4-6-v1" },
"librarian": { "model": "amazon-bedrock/anthropic.claude-sonnet-4-6" },
"explore": { "model": "amazon-bedrock/anthropic.claude-sonnet-4-6" },
"oracle": { "model": "amazon-bedrock/anthropic.claude-opus-4-6-v1", "variant": "max" },
"prometheus": { "prompt_append": "Leverage deep & quick agents heavily, always in parallel." }
},
"categories": {
"quick": { "model": "amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0" },
"unspecified-low": { "model": "amazon-bedrock/anthropic.claude-sonnet-4-6" },
"unspecified-high": { "model": "amazon-bedrock/anthropic.claude-opus-4-6-v1", "variant": "max" },
"writing": { "model": "amazon-bedrock/anthropic.claude-opus-4-6-v1" },
"visual-engineering": { "model": "amazon-bedrock/amazon.nova-pro-v1:0" },
"git": {
"model": "amazon-bedrock/anthropic.claude-haiku-4-5-20251001-v1:0",
"description": "All git operations",
"prompt_append": "Focus on atomic commits, clear messages, and safe operations."
}
},
"tmux": { "enabled": true },
"providers": { "amazon-bedrock": { "enabled": true, "region": "us-west-2" } }
}
Key ideas:
- `agents` — named subagent roles that opencode/Sisyphus delegates to. Each gets its own model.
- `categories` — task buckets used by the `task()` delegation system. `quick` should be cheap and fast; `unspecified-high` should be your best model.
- `variant: "max"` — unlocks extended thinking budgets on supported Anthropic models.
- `visual-engineering` — used for UI/UX work; consider a vision-capable model here (e.g. `amazon.nova-pro-v1:0` or `qwen.qwen3-vl-235b-a22b`).
Restart opencode after editing. The plugin should log that it loaded your agent/category overrides.
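A common mistake in this file is dropping the `amazon-bedrock/` prefix from one of the many model strings. Here is a small lint sketch that walks the whole config (assumes `python3` on PATH; `lint_omo` is our own helper):

```shell
# Recursively collect every "model" value in the omo config and flag any
# that don't carry the amazon-bedrock/ provider prefix.
lint_omo() {
  python3 - "$1" <<'EOF'
import json, sys

def models(node):
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "model" and isinstance(value, str):
                yield value
            else:
                yield from models(value)
    elif isinstance(node, list):
        for item in node:
            yield from models(item)

bad = [m for m in models(json.load(open(sys.argv[1]))) if not m.startswith("amazon-bedrock/")]
sys.exit("bad model IDs: %s" % bad if bad else 0)
EOF
}

lint_omo "$HOME/.config/opencode/oh-my-openagent.json" 2>/dev/null \
  || echo "fix model prefixes (or create the file) first"
```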
10. Swapping oh-my-openagent Configs with Aliases
Once you have a few different omo configs (all-Anthropic, budget/mixed, open-source-heavy, etc.), the
easiest way to switch between them is to keep each as a named JSON file in a dedicated git repo and
cp it over the active ~/.config/opencode/oh-my-openagent.json.
10.1 Clone the starter config repo
A curated set of preset omo configs lives at:
git@gitlab.natera.com:creid/oh-my-openagent-config.git
Clone it somewhere stable — the swap aliases below assume ~/natera/creid/oh-my-openagent-config/:
mkdir -p ~/natera/creid
git clone git@gitlab.natera.com:creid/oh-my-openagent-config.git \
~/natera/creid/oh-my-openagent-config
Expected contents:
anthropic.json # all-Anthropic (Opus + Sonnet + Haiku)
budget.json # cheap mix — Sonnet orchestrator, small Mistrals for workers
mistral-test.json # Anthropic orchestrator, Mistral models for workers/categories
qwen-test.json # Anthropic orchestrator, Qwen models for workers/categories
multi-provider.json # mix of Anthropic + Mistral + Qwen across roles
Each file is a complete, drop-in replacement for ~/.config/opencode/oh-my-openagent.json.
10.2 Add swap aliases to ~/.aliases
# ==================================================================
# oh-my-openagent config swapper (OPTIONAL — only needed if you use omo)
# ==================================================================
# Each alias copies a preset from the config repo over the active omo config.
# Restart opencode after swapping for changes to take effect.
OMO_CFG_DIR="$HOME/natera/creid/oh-my-openagent-config"
OMO_ACTIVE="$HOME/.config/opencode/oh-my-openagent.json"
alias omo-anthropic="command cp $OMO_CFG_DIR/anthropic.json $OMO_ACTIVE && echo 'omo: anthropic'"
alias omo-budget="command cp $OMO_CFG_DIR/budget.json $OMO_ACTIVE && echo 'omo: budget'"
alias omo-mistral="command cp $OMO_CFG_DIR/mistral-test.json $OMO_ACTIVE && echo 'omo: mistral-test'"
alias omo-qwen="command cp $OMO_CFG_DIR/qwen-test.json $OMO_ACTIVE && echo 'omo: qwen-test'"
alias omo-multi="command cp $OMO_CFG_DIR/multi-provider.json $OMO_ACTIVE && echo 'omo: multi-provider'"
We use `command cp` to bypass any `cp -i` interactive alias (see the `alias cp='cp -i'` pattern many dotfiles ship with). Otherwise `cp` would prompt before overwriting.
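If you'd rather maintain one function than one alias per preset, here is a sketch using the same two paths. The `omo_use` helper is hypothetical, not shipped with the config repo:

```shell
OMO_CFG_DIR="$HOME/natera/creid/oh-my-openagent-config"
OMO_ACTIVE="$HOME/.config/opencode/oh-my-openagent.json"

# omo_use <preset>: copy <preset>.json over the active omo config.
# With no/unknown preset, list what's available instead.
omo_use() {
  if [ -n "$1" ] && [ -f "$OMO_CFG_DIR/$1.json" ]; then
    command cp "$OMO_CFG_DIR/$1.json" "$OMO_ACTIVE" && echo "omo: $1"
  else
    echo "presets:"; ls "$OMO_CFG_DIR" 2>/dev/null | sed 's/\.json$//'
    return 1
  fi
}

omo_use anthropic || echo "clone the config repo first (see 10.1)"
```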
10.3 Typical workflow
# Put opencode into all-Anthropic mode
omo-anthropic
# Start opencode (picks up the new omo config on launch)
aws-opencode
# Later: swap to a budget-friendly mix and restart
omo-budget
aws-opencode
10.4 Creating your own preset
- Edit `~/.config/opencode/oh-my-openagent.json` to taste.
- Verify with a real opencode session that agents route to the right models.
- Save the verified file into `~/natera/creid/oh-my-openagent-config/<your-preset>.json`.
- Add an `omo-<name>` alias in `~/.aliases`.
- Commit + push the config repo so the team can pull your preset.
10.5 Part II Verification Checklist
Only relevant if you chose to install oh-my-openagent.
- [ ] `~/.config/opencode/opencode.json` has `"plugin": ["oh-my-openagent@latest"]`.
- [ ] `~/.config/opencode/oh-my-openagent.json` exists and is valid JSON.
- [ ] `~/natera/creid/oh-my-openagent-config/` is cloned and contains preset files.
- [ ] `omo-anthropic` successfully overwrites the active omo config.
- [ ] After swapping and restarting opencode, delegated subagents use the models you assigned.
If Part II fails, it does not affect Part I — you can remove the plugin line from `opencode.json` at any time and opencode will go back to behaving as a plain Bedrock client.