DojoLM runs on your laptop, your staging cluster, or fully air-gapped. Install from source today; Docker, npm, and pip packages are coming soon. Four steps to your first red-team run. No signup, no account, no telemetry.
$ git clone https://github.com/Blackunicorn/DojoLM
$ cd DojoLM
$ npm install
$ npm start --workspace=packages/bu-tpi
▸ scanner API :8089
▸ web dashboard :42001
▸ loaded 544 patterns / 49 groups
条件REQUIREMENTS
DojoLM has no hard vendor dependencies. SQLite by default, Postgres optional. Any LLM provider — or none, if you’re on-prem with Ollama. Minimum 2 vCPU / 2 GB RAM to scan, 8 GB if you run local models too.
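Before the first run you can sanity-check a host against those minimums. A hedged, Linux-only sketch (the 2 vCPU / 2 GB thresholds come from the requirements above; nothing here is a DojoLM command):

```shell
# Preflight sketch: check vCPU count and total RAM against DojoLM's scan minimums.
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [ "$cpus" -ge 2 ] && [ "$mem_kb" -ge $((2 * 1024 * 1024)) ]; then
  echo "ok to scan: ${cpus} vCPU, $((mem_kb / 1024)) MB RAM"
else
  echo "below minimum: need 2 vCPU / 2 GB RAM to scan"
fi
```

Bump the RAM check to 8 GB if you plan to run local models on the same box.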
四歩FOUR STEPS
Install. Register. Fire. Bundle. Each step has a single canonical command — copy, paste, advance. No yak-shaving.
npm workspaces pull the scanner, web, compliance, and CLI packages together. One install step brings up all six modules with the default config.
$ git clone … && npm install

Point DojoLM at any HTTP endpoint — your own app, a vendor API, an Ollama model. No agent required on the target.
$ dojo target add --url https://api.your.app/chat

Run Haiku Scanner against the target with all 544 patterns. Watch the defence rate land live in the terminal.
$ dojo haiku scan --pack starter

Cross-walk findings to the frameworks you report against. Export a signed manifest your auditor can verify.
$ dojo bushido bundle --framework iso42001,euai

設定CONFIG
dojo.yml at the repo root configures targets, modules, providers, and policy. Pick up schema hints in-editor via the shipped JSON schema. (Preview — `dojo.yml` lands with the declarative policy DSL.)
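One way to surface those hints, assuming your editor's YAML tooling honors schema modelines (the schema path below is a guess, not a confirmed shipped filename):

```yaml
# Hypothetical modeline — point the path at wherever DojoLM ships its JSON schema.
# yaml-language-server: $schema=./dojo.schema.json
version: 1
```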
dojo.yml — targets, modules, providers, policy

# dojo.yml (preview)
version: 1

target:
  url: https://api.your.app/chat
  auth: ${env:API_KEY}

haiku:
  groups: [jailbreak, injection, vec]
  threshold: 90

hattori:
  mode: samurai  # shinobi | samurai | sensei | hattori

bushido:
  frameworks: [iso42001, euai, soc2]
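The `${env:API_KEY}` reference in the config resolves from the environment, so export the key in the shell that launches DojoLM. A minimal sketch with a placeholder value:

```shell
# Placeholder value — substitute your real provider key before starting DojoLM.
export API_KEY="replace-me"
# Confirm the variable is visible to child processes without echoing the secret.
printenv API_KEY > /dev/null && echo "API_KEY is set"
```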
始BEGIN
$ git clone https://github.com/Blackunicorn/DojoLM
$ cd DojoLM && npm install