Framework Desktop AI assistant, part 3

Recently I started playing with Nanobot, which is a bit like OpenClaw but, in my opinion, much better, since it is small and simple and has native integration with custom OpenAI-compatible API providers. I decided to use vLLM because I wanted to run Qwen 3.5, which, according to my quick research, is pretty good at this kind of agentic usage, and Qwen models are built with SGLang and vLLM integration in mind. AMD also tests its own drivers and libraries (ROCm) on Docker images with SGLang and vLLM, and the Framework Desktop is an AMD APU. So all of it seemed like a good fit once I decided to go with Strix Halo's unified memory architecture for my AI assistant. I bought the Framework Desktop motherboard and played with it a little to test model performance on ROCm inside Docker. It was not blazing fast, but fast enough to actually have a working solution.

After a while of playing with this device and learning about its capabilities, I was able to recognise my own mistakes and correct them, so I could do what I intended from the beginning: run Qwen 3.5 as my agentic model. I have already integrated some of my own smart home devices into it and taught my assistant to browse the web and recognise my voice commands.

So far it really feels like a great experience. And I already have big plans to make it even better by integrating it with my calendar, todo list, notifications and the like.

I will see how it goes, but maybe at some point I will add an external GPU to the Framework Desktop PC. For example, a Radeon Pro R9700 would be a good addition to run some medium-sized models really fast and leave slower reasoning for non-immediate tasks to the APU.

It would probably also help to have better storage for the heavy Docker images and model caches, for example on my other server with several TBs of storage. But for that, better networking would be needed first. Better networking sadly requires an external NIC, or a PCIe card with a NIC, to reach e.g. 10 Gb/s speeds. It would also be easier with a better switch – right now my whole network runs at 1 Gb/s.

Last but not least: I integrated my own model with the Rider IDE, and now I can run my own coding assistant 🙂 which is great, because I can now work even on some proprietary stuff without compromising security – everything stays on my own network.

It is not the best device for running LLMs, but I still think it was money well spent. If not for actual usability, then for the opportunity to play with some ‘AI’ stuff and do some hacking.

Running an AI assistant on Qwen 3.5

I made the mistake of installing my favorite Linux distribution, Debian, on the Framework Desktop PC I use to run my LLM models. With some hacking around the installation of vLLM and all the libraries and drivers I would probably have been able to make it work. But was it really worth it? I think not. So I switched to Ubuntu, and I have been running Qwen 3.5 for two days now.

For now I can say that I am very happy. It seems to reason much better than its predecessor, Qwen 3. It has pretty good vision capabilities. It is able to fix its own mistakes, which makes using it much easier.

Agent Loop

I do not know why, but I had some weird problems running Qwen as the model for an agent with loops and tool usage. It was working to some degree, but I had to change some parameters.

Tool calls only worked with:

vllm serve --tool-call-parser=llama3_json

Without it, the model was just sending the raw tool call to Matrix as a message:

<tool_call>
{"name": "cron", "arguments": {"action": "add", "message": "Przekaż prognozę pogody dla Ostrów Mazowiecka na jutro rano.", "at": "2026-03-14T07:00:00", "deliver": true, "to": "8281248569", "channel": "matrix"}}
</tool_call>

When I changed the parser to llama3_json, tool calls were working, but the agent loop was not… well… looping!

And this loop is kind of the whole point of agents.
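
To be clear about what I mean by the loop, here is a rough sketch of the pattern: call the model, execute whatever tools it asked for, feed the results back, and repeat until the model stops asking for tools. This is only an illustration of the idea (the run_tool helper and the tool list are placeholders), not Nanobot's actual code:

# Minimal sketch of an agent loop against a vLLM OpenAI-compatible server.
# Illustrative only: run_tool() and the tools list are placeholders.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def run_tool(name: str, arguments: dict) -> str:
    # Placeholder: dispatch to the real tool implementation here.
    return f"result of {name}"

def agent_loop(messages: list, tools: list, model: str, max_steps: int = 10) -> str:
    for _ in range(max_steps):
        reply = client.chat.completions.create(model=model, messages=messages, tools=tools)
        msg = reply.choices[0].message
        if not msg.tool_calls:
            # No more tool calls: the model produced its final answer.
            return msg.content
        messages.append(msg)
        for call in msg.tool_calls:
            result = run_tool(call.function.name, json.loads(call.function.arguments))
            # Feed the tool result back so the next iteration can use it.
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    return "stopped: too many steps"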

For example, I asked it to check the weather after configuring it with a new skill beforehand, with a full explanation of how to do that and all the information it needed to call the API… it worked at first, but then it forgot:

The API key for OpenWeatherMap is invalid or expired.

Let’s try again — but first let’s check if the key is in SKILL.md:

And then nothing…

And nothing…

It was not doing anything. I am sure of that, because I waited and then checked the Nanobot logs and the vLLM server activity, and both were idle. Again, I do not think I broke it. Maybe the custom tool parser did, but without it no tool was ever called, so it is better to have at least one loop iteration and one tool call than none whatsoever.

With Qwen 3.5, both work out of the box. For example, I asked it to check the opening hours of a tool shop near me:

And it kept looping and looping and looping… which is a bit frustrating on its own… but it is better to have long loops than no loops at all! I think 🙂

It tried to find the answer via:

  • my own custom web search based on Browsh
  • DuckDuckGo via Browsh
  • DDG one more time, but via cURL
  • Google
  • Google via Browsh
  • then the pure HTML version of DDG
  • then DDG one more time
  • Bing
  • Google Maps (this one was surprising!)
  • opening the shop's web page 5 times, a bit differently each time
  • Google Maps again
  • the shop's web page, like 20 more times

This is when I stopped it, because it was doing the same thing over and over, it was not getting anywhere, and I had already found the answer myself anyway. The whole process had taken my assistant maybe 25 minutes by then.

I felt a bit silly and confounded watching it. But on the other hand it was great to have a loop!

Vision capability

This one was surprising! And in a good way! It was a nice sunny day, which was welcome because we had a harsh winter and it is still, technically, winter. With almost 20℃ it was ideal for some activities outside. I took a picture of my wife and sent it to the bot just to check what would happen.

It answered with actual, correct facts.

Beautiful view!

The photo shows:

Garden with grass and trees
Gate on the left side
Greenhouse with metal structure
Solar panels on the roof
Wooden swing in the background
Person in the middle of the garden
Sunny day — shadows are visible
Do you want me to do anything else?

OK, maybe one thing was not entirely correct. The gate on the left side was in fact a fence. But still, quite impressive.

I will probably use it to tag and organize my photos. I also wonder if the model could be fine-tuned to recognize my family members in photos.

And it would be really cool to teach it to answer questions like: is my son in this image? If so, I could use it to tell me, for example, where my family members are inside and outside of my house; paired with my security cameras I could automate things like closing the property gate when I am outside and the car has just driven into the garage. No more doing it manually! But only if there are no people near the gate, since it is like 500 kg of moving metal. Of course there are other safeguards… but still!

As another test I sent it a captcha image.

It answered ‘tophones’. Almost perfect!

Self correcting

It is able to correct its own mistakes. To a degree, but still. For example, when I asked it to add a new skill, it wrote it to the wrong path, but then fixed it immediately.

Error — I wrote the file in the wrong path. I’ll fix it:

I updated the file name. Now I’ll test:

Or when I added another skill, it was at first unable to call an external OpenAI-compatible API. I had forgotten to specify the name of the model to use with this API, so it hallucinated one called whisper-1. Of course that was incorrect, but then it decided to call the API again without the model name, and this time it succeeded.

It was nice to see my virtual entity making mistakes and learning in the process.
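
For reference, the call it was trying to make looks roughly like this when the model name is spelled out explicitly, so the assistant never has to guess one. A minimal sketch, where the endpoint URL, file name and model name are all illustrative, not my actual setup:

# Sketch: calling an OpenAI-compatible transcription endpoint with an
# explicit model name; base_url, file and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9000/v1", api_key="unused")

with open("voice-message.ogg", "rb") as audio:
    result = client.audio.transcriptions.create(model="local-whisper", file=audio)

print(result.text)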

Initiative

I was able to teach my assistant to transcribe audio files that I send via Matrix. I was a little surprised to see that, instead of doing what I asked it to do:

  • I will send audio
  • Transcribe it
  • Take an action based on the transcription

My assistant transcribed the message and then just sent it to the chat as a text message. Which is still an achievement, even if it is not entirely what I asked for. I asked it to act on the transcription. When I asked why it sent it as a message, it corrected itself and showed initiative by transcribing the file again and acting upon it, but by changing the script that it wrote for transcription in the following way:

if(transcribedAudio == "do X")
{
   doX();
}

And then it showed initiative again and automatically transcribed the previous file once more. This time the transcription was automatically acted upon – not through a tool call generated by the model, but by the command hardcoded in the script. It left a bit of a sour taste in my mouth, but after a while I decided it was for the better, since it optimizes response times.

It would probably be better to have a small model trained on the most popular voice commands deciding what should be done, instead of a simple if. Or the comparison should at least be trimmed, lower-cased and restricted to plain ASCII characters, as sketched below. Otherwise “Open” and “open” will not match.
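
A minimal sketch of what I mean by a normalized comparison; the command table is just an example:

# Normalize a transcription before matching it against known commands,
# so "Open the gate." and "open the gate" end up equal.
import unicodedata

def normalize(text: str) -> str:
    # Decompose accented characters, drop non-ASCII, lowercase,
    # collapse whitespace and strip trailing punctuation.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    return " ".join(text.lower().split()).strip(".!? ")

# Example command table; the action names are placeholders.
COMMANDS = {
    "open the gate": "open_gate",
    "close the gate": "close_gate",
}

def match_command(transcription: str) -> str | None:
    return COMMANDS.get(normalize(transcription))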

Summary

After one day of using Qwen 3.5 as my AI assistant model, I am very pleasantly surprised by its capabilities, and I can’t wait to work with it further on automating some of the boring stuff I do every day.

Running Qwen 3.5 on AMD Ryzen AI Max+ 395

I recently bought a Framework Desktop with the intention of running Qwen 3.5 as the model for my AI assistant on Nanobot. At first I could not run this model on this hardware due to some weird bug in one of the libraries. I explained why, and how I fixed it, here. In this post I will just put up the list of packages I used to finally run Qwen, along with the vLLM command switches and parameters.

Here is the list of packages that I used to finally get it working:

  • vllm 0.17.1+rocm700
  • amd-aiter 0.1.10.post2
  • torch 2.9.1+git8907517
  • triton 3.4.0
  • rocm 7.2.0.70200-43~24.04

And here is the script that I am using:

TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 \
  VLLM_ROCM_USE_AITER=1 \
  vllm serve \
  cyankiwi/Qwen3.5-35B-A3B-AWQ-4bit \
  --host 0.0.0.0 \
  --port 8000 \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --dtype float16 \
  --max-model-len 128k \
  --gpu-memory-utilization 0.33
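
Once the server is up, a quick sanity check against the OpenAI-compatible endpoint can look like this (a minimal sketch; adjust the host if you run it from another machine):

# Quick smoke test of the freshly started vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

reply = client.chat.completions.create(
    model="cyankiwi/Qwen3.5-35B-A3B-AWQ-4bit",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(reply.choices[0].message.content)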

Happy hacking!

Switching my AI assistant OS to Ubuntu

I made a mistake trying to run a vLLM server on Debian. In theory everything should be OK: Debian is stable, it has Docker, and Docker has images for vLLM, so I should be able to run all the models in Docker on Debian. Also, it should not really matter which distro it is, as long as it has a new kernel and Docker works. That is the theory. In practice I could not get some models to run, and Qwen 3, even though it was capable enough, felt limiting at times. For example, I was unable to get it to work in agentic loops. I am not sure what the problem was, probably some configuration issue that I am still unable to understand and fix. Still, that was just a simple problem that only required me to give my assistant a nudge with another prompt. Another issue was not as benign though.

For example, when I was running Qwen 0.6B, I taught it to use some of the smart devices that I made myself, like my smart gate controller. It was working nicely until it somehow started to get confused and repeatedly asked me to give it an API key. But I already had. Everything it needed was already in the skill file. I tried to explain this to the model, but without luck. It felt like talking to a parrot: it kept repeating the same sentence over and over again.

“The gate has been opened! 🚪

If you have the API key secret, I can do that. If you want, I can help with other tasks. 😊”

I just could not explain to it that it was wrong. Also, those emojis everywhere were annoying. But let's leave that for another time.

I tested a few other things. For example, asking it to send me an email with a reminder to do a thing – let’s call it A. It was either sending me a Matrix message with the reminder to do A, or reminding me to send an email about A. Neither was correct. After that I decided to check out Qwen 3.0 27B. Even if it was still incorrect sometimes, I was able to steer it onto the correct path with a few more prompts.

And it was fine until it again started misbehaving in the same way: it kept asking me to give it an API key for my own devices.

I have no idea why. I did not make any changes or adjustments. At some point it just forgot what it should do. I tweaked the skill files, but without luck. I think it needs to be made aware of such changes, which I did not know before. I manually edited the history and memory to remove any mentions of API keys. Again no change, even after a restart. I decided that the easiest solution would be to run Qwen 3.5.

I played with my vLLM Docker images a bit, trying to debug why I could not run this model. There was no stack trace. No meaningful error of any kind. Just some logs saying that the main process crashed.

I fed the logs to Gemini and asked what might be the cause. Searching the internet for it turned up nothing. Gemini first wrote that it is “classic OOM exception behavior” caused by my system not having enough VRAM. The thing is, I was already running the Framework PC with 120GB of GPU memory. Running Qwen 3.5 in its 30B size should be totally safe and should leave some headroom too.

I explained that I have 120GB free, and then Gemini confirmed that this should not be the problem – and started to say that it is probably a problem with AMD's CUDA implementation. It asked me to add the --enforce-eager flag to see if that fixes it. I think these models' knowledge is rather outdated from the moment they exist, since they are trained on data gathered months or even years earlier – it takes time to scrape, label, clean, organize and censor this data and then train the model – so given the rapid evolution of ROCm and LLMs in general, this information was probably old and outdated… But still worth a try! Adding a flag and launching a model takes 2 minutes.

But it did not fix it.

Gemini was so convinced about this solution that, in case it did not work, it gave me an alternative in the same message: it asked me to enable the vLLM debug log flag.

Again, I did. I relaunched vLLM and saw new errors connected to the Hugging Face API. I asked about those, but the answer was that this is normal: sometimes models have files missing in Hugging Face storage.

I tried switching to other flavors of Qwen 3.5 – other sizes, quants, etc… It did not work. I fed the entire output with all the debug logs into the Gemini chat. It said that I need to open an issue in the vLLM GitHub repo.

Well, that was useless.

At this point it was a bit late and I needed to take care of the kids, so a break sounded good.

The next day I downloaded the Ubuntu Server ISO and started from scratch. I had used Ubuntu previously, both as a server and on my daily PC, but I did not like it, mostly because of forcing everything through snap and because of the upgrade process. On Debian I did the upgrade from 11 to 12 on 4 machines and had no problems. I later upgraded all of them from 12 to 13 and again: no problems. I upgraded an Ubuntu server to a new version a few years back and it stopped booting. It was not terribly broken, it just lost the boot partition with the Linux image and booted only into the GRUB emergency terminal. A few adjustments and I was able to fix it in a few minutes. The thing is, it was a headless PC that I was using for a few self-hosted applications that my wife and I use on a daily basis, and having no access to them in the morning usually makes your life a bit worse. You depend on something just being there, and Ubuntu broke it for me. After that, Debian was the way to go for me, because of its stability. It is much harder to break.

Or maybe it was me, a silly person, doing silly things to my Linux server, that broke it. That is also a possibility, though I remember that I just ran sudo do-release-upgrade prior to the reboot 🙂

Installation was pretty quick, so within a few minutes I was able to log in over SSH. After that I did the usual process of updating everything, installing the usual packages (tmux, mosh, docker), configuring the environment, SSH keys, etc. Then I installed the AMD GPU drivers and ROCm. The good thing is that it just worked on Ubuntu and there were no problems – I just followed this tutorial. Even amd-ttm worked, and I was able to set the VRAM limit to 120GB. I guess the tool is fine, it is just designed to work on Ubuntu.

After everything was configured and the libraries installed, I copied a few scripts I had saved from the Debian installation that I used for running models via Docker. I executed the one intended for Qwen 3.5 and vLLM… And it failed. In exactly the same way.

That was a bit of a letdown.

I tried SGLang and it did not work. I do not remember why, I just remember that the image was enormous, like 25 GB. It failed in exactly the same way: no meaningful error, it just stopped. I then tried running one of the AMD images that does not start vLLM directly as its entrypoint; instead you can run bash and experiment with the environment. After installing a new PyTorch, I was able to see some logs that actually had some meaning.

It was trying to allocate 250GB of VRAM! Why? Is there even a GPU with that much memory? The biggest I had seen were 200GB enterprise cards. Was it somehow making sure the engine fails, for some reason? Anyway, it was like that both for the full Qwen 3.5 model and for its quantized 4-bit version from cyankiwi. That was very weird, and I began to suspect it was just a bug in vLLM or one of the libraries.

Anyway, that was something I could use to search the web. I found this issue on GitHub. It seems that to actually run Qwen 3.5 on Strix Halo you have to run it with an experimental flag. I tried to do that, and it failed again.

But at least now it was complaining about some problem with the ROCm aiter library.

What now? Since I had ROCm installed, I could try to run vLLM directly. I could also try to change the Docker image to install and configure whatever was missing. The thing is, I was not sure what was missing. And even if I knew what was missing, I was not sure exactly which version was necessary. And I do not like building Docker images before testing the software directly in the system first – you may waste time installing everything in Docker only for it to fail anyway.

So I decided to install vLLM directly on Ubuntu and try to run the model from there. I followed this tutorial for building vLLM from source and it failed midway. I had installed triton in version 3.6, and vLLM needs 3.4. I hate Python packaging.

Anyway, I corrected the triton version, and I had already installed aiter. Maybe that was enough? I tried to run Qwen 3.5 and it worked!

Finally!

I asked my AI assistant a few questions to check if it works. And it was fine, though it felt a bit strange, but that happens when you change models on an AI assistant that already has memory and some knowledge about your past interactions.

Right now my script to run Qwen 3.5 runs on:

  • vllm 0.17.1+rocm700
  • amd-aiter 0.1.10.post2
  • torch 2.9.1+git8907517
  • triton 3.4.0
  • rocm 7.2.0.70200-43~24.04

and looks like this:

TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 \
  VLLM_ROCM_USE_AITER=1 \
  vllm serve \
  cyankiwi/Qwen3.5-35B-A3B-AWQ-4bit \
  --host 0.0.0.0 \
  --port 8000 \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --dtype float16 \
  --max-model-len 128k \
  --gpu-memory-utilization 0.33
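
If you want to double-check that a given environment matches this set before launching, something like this works. A minimal sketch; the distribution names are taken from the list above, so adjust them if pip reports different ones:

# Print installed versions of the packages that mattered for this setup.
from importlib.metadata import version, PackageNotFoundError

EXPECTED = {
    "vllm": "0.17.1+rocm700",
    "amd-aiter": "0.1.10.post2",
    "torch": "2.9.1+git8907517",
    "triton": "3.4.0",
}

for name, expected in EXPECTED.items():
    try:
        print(f"{name}: installed {version(name)}, expected {expected}")
    except PackageNotFoundError:
        print(f"{name}: not installed (expected {expected})")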

In retrospect, switching the OS of the device that runs the models was a good idea. AMD builds its software with Ubuntu and Fedora in mind. I could probably get the ROCm installation working on Debian too with some work, but I do not think it is worth my time. At least not now. With ROCm installed I should be able to install vLLM in a virtual environment there too. Or install all the libraries inside Docker and run that image on Debian – that should be possible as well.

Futuristic API design

A few days ago I was bragging here about how great I am, having taught my AI assistant to operate my property gate.

Today I noticed that it no longer works. I asked it again to:

Open the gate

while wondering why I am actually doing it, since I can just press the button on my phone… but well, people are sometimes ridiculous like that.

Anyway, I wrote the message above and got the answer below, which means that it does not work because the API key is not authorized:

No credentials to open the gate.

Error: Unauthorized – API key is not correct or does not have access.

But it worked before! And I did press the button in my mobile app after all, which uses the same API, and it actually worked! Since I had not changed anything lately, it all made perfect sense.

I did not have any time to investigate right then, but when I did a few hours later, the logs explained what was wrong.

nanobot.agent.loop:_run_agent_loop:230 - Tool call: exec({"command": "curl -s -X POST https://gate.local/gate/open -H \"X-API-Key: 5a3b8c1f-9d2e-4f6a-8b7c-3d1e9f2a4b6c\""})

It was hallucinating both the usage and the API key.

Even worse, all of this was explained inside the skill file. The only thing it got right was the URL of the endpoint.

I changed the skill file and asked it again to try to open the gate. And it failed again. What was most peculiar was the fact that the hallucinated API key was the same in every retry. But I had not provided an API key like that anywhere.

I asked my assistant about it: “Where does the API key come from?”. It responded with something strange.

The API key 5a3b8c1f-9d2e-4f6a-8b7c-3d1e9f2a4b6c was detected in the SKILL.md file for open-gate and close-gate.
It is not publicly available — it was loaded from the system’s configuration.

If you want, I can check whether the key is still valid or update it in the file.
Would you like me to do that?

This is funny, because I did not put anything like that in any of those files. Nor any others. Then I asked it to show me its memory entries. And now we were getting somewhere…

Saved facts: API key 5a3b8c1f-9d2e-4f6a-8b7c-3d1e9f2a4b6c

It had it in memory. But… again, I never provided anything like that. So it must have hallucinated it at some point and added it to memory.

I asked it to remove the API key from memory, and it worked from that point on.

What does this have to do with API design in the future?

API design for the models

Imagine you are running a big API for millions of users. You may have some legacy code that has been running for ages and is still used, so it cannot be changed easily – you can’t just remove a part of your product that is used by a significant share of your customers just because it is old and not up to modern standards.

Or maybe there is some new part of your API that has non-standard headers for some reason. Maybe you want to test a new flow, or a design that you believe is better, for some reason, than what the rest of the world is doing. You want to give it a try and show it to the world.

Or maybe your API returns responses in something other than JSON because it lets you handle strings better.

But then you see that a significant portion of requests are failing, because your API differs from what most of the world is doing now and those requests are trying to use what the common standard dictates. Your docs clearly state something else. There was never any version of your system that was meant to be called like that.

What is happening? You reach out to some of those users – the bigger customers – and you learn that the requests come from automated systems: LLMs trying to use your API. Some requests get corrected and then revert to being invalid because the models ‘forgot about the fix’. Week after week, month after month, the percentage of invalid calls rises. At some point you start to wonder whether it would not be easier to change your API and make this way of calling it valid after all. Maybe you can’t fight the tide.

You roll out a new version and the erroneous calls go away. Until another weird usage pops up, because apparently nobody reads the documentation anymore; they ask their models to read it and write some code to call your API. But models being models, they got it slightly wrong, and some edge case is causing requests to fail.

Do you change your API again? Will your users complain that they can’t use it? And then what, will you explain to them that this is not *THE RIGHT* way?

What about the case where you do not have some functionality in your API, but you are seeing thousands of requests trying to use it? Maybe it would be worthwhile to actually add it and then charge for it? After all, errors do not bring revenue.

Maybe you will have a model fixing stuff based on the errors in your logs, and it will be adding features and capabilities to your product that nobody really asked for, but that some other models hallucinated.

Or maybe your API was written by an AI. And then another AI wrote the documentation. And then another AI wrote the client. And another AI is reading whatever that client returns and presenting it to the user. And nobody is sure what is going on with anything anymore.

Future design

I see a couple of possibilities here.

One: this will significantly hinder how we can evolve software. After all, why bother trying something new and exciting if your users will stick to the ‘standard usage’ that their models hallucinated. ‘There is really no point in doing that. It won’t be used by AIs.’ The new thing could even be better, but until a significant part of the world is using that technology, it won’t be in the training data. And if it is not in the training data, the technology won’t be used.

Another possibility is that we will create a feedback loop of models feeding on themselves. Some models will be writing and improving code, and others will be trying to use it, sometimes incorrectly, and this usage will be fed back into the models that modify the software being used incorrectly. Maybe it will cause rapid evolution of such systems into something very different from what we are used to. Like an entirely new content type that is a binary serialization of the in-memory representation of tokens, normalized for transport over HTTP.

I think the first one is more likely. The second one would require rapid improvement in the capabilities of such automated systems. So far I have not seen any convincing example of a bigger product that was written by “AI”.

If that is true, then *future designs of APIs* will be rather safe, boring and more of ‘whatever the rest of the world is doing’. Which is how it already is now, with the ‘this is not RESTful’ or ‘do what Google and Meta are doing’ that you hear or read occasionally.

It is possible that innovation will slow down a bit, but on the other hand… maybe it is for the better? Sometimes it feels like everyone in the software development industry is chasing some vague goal of THE perfect software, but nobody really knows what that ideal piece of code would look like. For now, some people may think it will be a new model, a new AI system, or GenAI. I am not sure about that. After all, the term ‘human error’ comes from somewhere – from the flaw in our own ‘design’ that makes us make mistakes, forget stuff and take shortcuts, hoping that ‘it will be fine’. And now we are teaching our computers to do the same, but much, much worse, slower and less effectively.

So what will the future design of APIs be?

Seems like it will be: whatever works for the models, or you will get DDoSed with wrong calls.