Time? Stop! Don’t run away!
Yeah, where did all the time go? I wanted to write a blog entry about every little step, but now I see that only two entries exist, and I can barely remember what I did between the last entry and now.
That’s why I’ll shorten things a bit and just give a quick summary of some points.
LM Studio
Whatever I want to do on this machine, I will need an LLM provider. The decision to use LM Studio was made pretty quickly. I used text-generation-webui in the early days and later ollama, but LM Studio’s frontend, together with the ability to download and use any Hugging Face model, was a big plus for me.
I started by installing the desktop client, which was a bit weird, as it runs from an .AppImage rather than being installed. Each update brings a new .AppImage, and while it keeps all other settings, the server settings somehow get lost.
After setting it up, I looked for a way to run it as a server on boot and set up a headless installation of the daemon, so it starts automatically.
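For reference, a minimal sketch of what such a boot setup can look like as a systemd user unit. Everything here is an assumption on my part: the lms path (~/.lmstudio/bin/lms) varies per install, and whether `lms server start` puts itself into the background (hence Type=oneshot) depends on the version:

```ini
# ~/.config/systemd/user/lmstudio-server.service
[Unit]
Description=LM Studio headless API server
After=network-online.target

[Service]
# Assumption: `lms server start` launches the server and returns,
# so the unit is modelled as a oneshot that stays "active" afterwards.
Type=oneshot
RemainAfterExit=yes
ExecStart=%h/.lmstudio/bin/lms server start
ExecStop=%h/.lmstudio/bin/lms server stop

[Install]
WantedBy=default.target
```

After a `systemctl --user enable --now lmstudio-server`, the server comes up with the session; for it to start at boot without a login, lingering has to be enabled via `loginctl enable-linger`.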
The CLI tool lms is very handy and can be used to load/unload models, manage the server, and a few other things.
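A few of the lms subcommands I found useful; exact flags can differ between versions (and the model key below is just an example), so treat this as a sketch rather than a reference:

```shell
# List downloaded models / currently loaded models
lms ls
lms ps

# Load a model by its key, or unload everything
lms load qwen2.5-7b-instruct
lms unload --all

# Manage the local API server
lms server start
lms server status
lms server stop
```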
Later on, I created a shell script that lists the models along with some additional info, like their capabilities and the scores, tags, and comments I’ve added to them.
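At its core, that kind of script is just a join between the model list and a hand-maintained notes file. A simplified, hypothetical sketch (the notes format, file location, and function name are all my own inventions; real `lms ls` output has more columns than just the model key, so it would need some filtering first):

```shell
# annotate_models: merge model keys (one per line on stdin) with a
# tab-separated notes file: model-key<TAB>score<TAB>tags<TAB>comment
annotate_models() {
    awk -F'\t' '
        NR == FNR {                 # first input: the notes file
            score[$1] = $2; tags[$1] = $3; comment[$1] = $4; next
        }
        {                           # second input (stdin): model keys
            printf "%s\t%s\t%s\t%s\n", $1, score[$1], tags[$1], comment[$1]
        }
    ' "$1" -
}

# Example (assuming the model key is the first column of `lms ls`):
# lms ls | awk '{print $1}' | annotate_models ~/.config/model-notes.tsv
```

Models without a notes entry simply come out with empty columns, which doubles as a to-review list.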
OpenClaw
This point proves that I had no clear plan or roadmap for this. I went straight to the end of the line: OpenClaw as a full assistant system.
The problem: I didn’t (and still don’t) know what I actually want to use it for. XD
But… as a first step, I connected it to LM Studio anyway and started hooking it up to Discord.
The good thing is that I learned a bit more about how this works: I created an app and a bot, generated an OAuth token, and finally managed to connect everything.
It felt cool to be able to chat with my agent through Discord. It was interesting to see that it would answer questions about the system it was running on without any problems. 😀
The Discord connection has one major flaw, though: OpenClaw produced the answers just fine, but they only showed up on Discord after about 10 minutes.
In the end, I disabled OpenClaw again for now. I might come back to it once I have better use cases.
Coding assistants
I had some experience with Claude Code when I started this project, but I didn’t think it would be usable with a local model.
I’m still evaluating them, so I will just very briefly write down my first impressions.
Cline
This was the first one I tried to use. The results were not bad.
Pro: Fast, act and plan mode, deep-planning mode
Con: Did not work with some models; no model selection list (the model key has to be entered manually for a local provider)
OpenCode
This one looked slick and I recently checked out the desktop version as well, which looked even better (but did not work well). This is my current favorite.
Pro: Subagents, nicer UI, model selection list, themes, checklists while working on a plan
Con: Non-standard MCP config, weird keyboard shortcuts
PI
I only tested this one very briefly, so I can only say that it tends to run in YOLO mode and was quite fast, but did not manage to complete the task.
Claude Code
At first, I expected this to be the ne plus ultra, but it failed in the same ways the others did, and the UI is not that great either. It felt cool and modern (especially the funny verbs it shows while working), but the true worth comes from the model.
It even seemed slower for some things, so I went back to OpenCode (and Cline in some cases).
