Deploys a local Gemma 4 model via llama-server and configures OpenClaw to use it as a custom OpenAI-compatible provider.
Deploy & Ops · 📅 2026/04/04
#API #Deployment #Developer #GitHub #Low Risk #Manual Trigger #Reusable #Semi-Automatic #Code #Code Repository #Local Model
```shell
# Start llama-server with the quantized Gemma 4 model pulled from Hugging Face.
# This runs in the foreground, so launch it in a separate terminal (or background it).
llama-server -hf ggml-org/gemma-4-26b-a4b-it-GGUF:Q4_K_M

# Point OpenClaw at the local OpenAI-compatible endpoint.
openclaw onboard --non-interactive \
  --auth-choice custom-api-key \
  --custom-base-url "http://127.0.0.1:8080/v1" \
  --custom-model-id "ggml-org-gemma-4-26b-a4b-gguf" \
  --custom-api-key "llama.cpp" \
  --secret-input-mode plaintext \
  --custom-compatibility openai \
  --accept-risk
```
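Once onboarded, OpenClaw talks to the server using standard OpenAI-style chat-completion requests. A minimal sketch of what such a request body looks like, reusing the base URL and model id from the flags above (the helper name `chat_request` is illustrative, not part of either tool):

```python
import json

# Values matching the onboarding flags above.
BASE_URL = "http://127.0.0.1:8080/v1"
MODEL_ID = "ggml-org-gemma-4-26b-a4b-gguf"

def chat_request(prompt: str) -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }

# The client would POST this JSON to f"{BASE_URL}/chat/completions".
body = chat_request("Hello")
print(json.dumps(body))
```

The API key value (`llama.cpp`) is arbitrary: llama-server does not validate it by default, but OpenClaw's custom-provider flow requires one to be set.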
