Do you want a specific output structure, like JSON or TOML?
Do you want to align the model with your own dataset of question-and-answer pairs?
First of all, I think it is a great idea to give the model access to a map.
Unfortunately, it seems that the script is missing a large part at the end: the loop does not have any content, and the Tools class is missing.
I have found the problem with the cut-off: by default, aider only sends 2048 tokens to Ollama, which is why I have not noticed it anywhere else except for coding.
When running /tokens in aider:
```bash
$ 0.0000 16,836 tokens total
         15,932 tokens remaining in context window
         32,768 tokens max context window size
```
Even though it reports this, it will only send 2048 tokens to Ollama.
To fix it, I needed to add a .aider.model.settings.yml file to the repository.
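Going by aider's Ollama documentation, the file should look roughly like this (the model name is a placeholder for whatever model you actually run):

```yaml
# Sketch based on aider's Ollama docs; replace the model name with yours.
- name: ollama/qwen2.5-coder:32b
  extra_params:
    num_ctx: 32768  # raise Ollama's context window from the 2048 default
```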
The --rotate normal,inverted,left,right option does not work, but you can use the --transform option to achieve the same effect.
To create the transformation matrix, you can use something like https://angrytools.com/css-generator/transform/.
For translateXY, enter half the screen resolution.
Don't copy the generated code, though; it has the numbers in the wrong order, so just type out the matrix row-wise, as in the example below.
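For instance, flipping the picture upside down (the same effect as --rotate inverted) on an assumed 1920x1080 output would look like this; adjust the output name and resolution for your setup:

```bash
# Hypothetical example for a 1920x1080 screen. Matrix, row-wise:
#   [ -1  0  1920 ]
#   [  0 -1  1080 ]
#   [  0  0     1 ]
xrandr --output HDMI-1 --transform -1,0,1920,0,-1,1080,0,0,1
```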
Thanks for suggesting RNote; I always use Xournal++ to take notes, but it has some problems, and RNote seems to work much nicer with gestures.
The only thing that I am missing is an option for saving pen configurations, to easily switch between a black pen and a yellow marker.
LongNet handles that case better, in my opinion: it does not need as much memory as vanilla attention, but it also does not discard as much information as this implementation. Here is a very good video on how LongNet works: https://www.youtube.com/watch?v=nC2nU9j9DVQ
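The core trick, as I understand it from the video, is roughly this (a toy sketch with made-up sizes; the real model mixes several segment lengths and dilation rates so that every position is covered):

```python
import torch
import torch.nn.functional as F

def dilated_attention(q, k, v, w=8, r=2):
    # q, k, v: (n, d) with n divisible by w. Split the sequence into
    # segments of length w and attend only between every r-th token
    # inside each segment, so each score matrix is (w/r, w/r), not (n, n).
    n, d = q.shape
    out = torch.zeros_like(q)
    for start in range(0, n, w):
        idx = torch.arange(start, start + w, r)  # every r-th position
        scores = q[idx] @ k[idx].T / d ** 0.5
        out[idx] = F.softmax(scores, dim=-1) @ v[idx]
    return out  # other offsets stay zero in this toy version

q = k = v = torch.randn(16, 4)
print(dilated_attention(q, k, v).shape)  # torch.Size([16, 4])
```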
Thanks for the suggestion, I tried it, and the diff view is very good.
The setup was not really easy for my local models, but after I set it up, it was really fast.
The biggest problem with the tool is that the open-source models are not that good: I tested whether it could fix a bug in my code, and it only managed to make it worse.
On a more positive note, you at least do not need to copy all the text over to another window, and it generates boilerplate code nearly flawlessly every time.
It works OK for the most part.
The problem I have with it is that the inline completion is more annoying than helpful, because the AI only sees the last few lines that you wrote and therefore does not know the larger context of the project.
I also found this project; it looks promising.
Has anyone tested it?
Can you separate the server from the client?
I don't know what you mean by steering?
First of all, have you tried giving the model multiple examples of input-output pairs in the context? This alone already helps the model a lot with producing the correct format.
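For example, a few-shot prompt shaped like this (a made-up task, just to show the pattern):

```
Extract the city as JSON.

Input: I live in Berlin.
Output: {"city": "Berlin"}

Input: Greetings from Paris!
Output: {"city": "Paris"}

Input: The weather in Oslo is nice today.
Output:
```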
Second, you can force a specific output structure by using a regex or a grammar: https://python.langchain.com/docs/integrations/chat/outlines/#constrained-generation https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md
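With outlines, that looks roughly like this (a sketch of the pre-1.0 API from memory, so details may differ in newer versions; the model name is just an example):

```python
import outlines

# Constrain generation so the model can only ever emit "yes" or "no".
model = outlines.models.transformers("HuggingFaceTB/SmolLM2-135M-Instruct")
generator = outlines.generate.regex(model, r"(yes|no)")
print(generator("Is water wet? Answer yes or no: "))
```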
And third, in case you want to train a model to respond differently and the previous steps were not good enough, you can fine-tune. I can recommend this project to you, as it teaches how to fine-tune a model: https://github.com/huggingface/smol-course
Depending on the size of the model that you want to fine-tune and the amount of compute that you have available, you can either train by updating all parameters (e.g. with ORPO) or train via PEFT (LoRA).
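A LoRA setup with peft is only a few lines (the model name and hyperparameters below are placeholders; the smol-course covers the full training loop):

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Wrap a base model with small trainable LoRA adapters; the original
# weights stay frozen, which is what makes this cheap to train.
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```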