Here are the errors when loading the model in a new chat:
EBUSY: resource busy or locked, open 'C:\Users\[username]\AppData\Roaming\ai-navigator\logs\20240809055518_api-server.log'
API server exited with code 1
kill ESRCH
API server exited with code 3221226356
The log states this:
{"tid":"97396","timestamp":1723197326,"level":"INFO","function":"main","line":2495,"msg":"build info","build":0,"commit":"unknown"}
{"tid":"97396","timestamp":1723197326,"level":"INFO","function":"main","line":2502,"msg":"system info","n_threads":16,"n_threads_batch":-1,"total_threads":32,"system_info":"AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}
llama_model_load: error loading model: llama_model_loader: failed to load model from C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf'
{"tid":"97396","timestamp":1723197326,"level":"ERR","function":"load_model","line":683,"msg":"unable to load model","model":"C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf"}
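As a side note, the "failed to load model" message often points to an incomplete or interrupted download, so one quick sanity check (not part of AI Navigator) is to confirm the GGUF file is present, non-empty, and starts with the GGUF magic bytes. A minimal Python sketch, using the path from the log above with [username] left as a placeholder:

# Sketch: verify the GGUF model file referenced in the log looks intact.
from pathlib import Path

model_path = Path(r"C:\Users\[username]\.ai-navigator\models\mistralai"
                  r"\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf")

if not model_path.is_file():
    print("Model file not found:", model_path)
else:
    size_gb = model_path.stat().st_size / 1024**3
    with model_path.open("rb") as f:
        magic = f.read(4)  # a valid GGUF file begins with the ASCII bytes "GGUF"
    print(f"Size: {size_gb:.1f} GiB, first bytes: {magic!r}")
    if magic != b"GGUF":
        print("File does not start with the GGUF magic; the download may be truncated or corrupted.")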
Hi, just bumping this thread as the issue is not resolved.
Thanks!
lzhou (August 20, 2024, 3:34pm):
Thank you, @franksoapdish, for following up!
Our engineers are investigating this issue. I will post an update when we have findings.
Thank you for your patience.
lzhou (August 30, 2024, 2:22pm):
@franksoapdish, we released a new version, v0.7.3, yesterday with quite a few improvements and bug fixes.
Could you update and see if the issue is resolved?
Thank you!
Hello! I have updated and received this API server error. I am really not certain what's going on, but could it be my Avira firewall?
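Since the first error in the thread is an EBUSY on the api-server log file, one thing worth checking is whether another process (for example a security tool such as Avira) is holding that file exclusively while AI Navigator runs. A minimal Python sketch, assuming the log path from the first post ([username] and the timestamped file name are placeholders for the actual values):

# Sketch: test whether the api-server log file can be opened, or is locked by another process.
log_path = (r"C:\Users\[username]\AppData\Roaming\ai-navigator\logs"
            r"\20240809055518_api-server.log")

try:
    with open(log_path, "a"):
        print("Log file opened fine; it is not exclusively locked right now.")
except OSError as exc:
    print("Log file appears locked or inaccessible:", exc)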
lzhou (September 4, 2024, 12:13pm):
@franksoapdish, thanks for the update, and we are sorry that the latest release does not address your issue.
What model did you use when you encountered this error?
Have you tried different models, and do they all lead to the same error code?
lzhou (September 10, 2024, 2:05pm):
@franksoapdish, following up on the issue that has been bugging you: do you see the error only when loading specific model(s), or with every model you have tried?
Yes, both Mistral and Llama, for example.
Actually, I just found that some newly downloaded models sometimes work fine, so I guess that is an improvement.
lzhou (September 17, 2024, 3:19pm):
Thank you, @franksoapdish, for the additional information. Glad to hear that some models work fine.
What exact Mistral and Llama models, including the quantization levels, are you having problems with?
If you can share the error notification and the corresponding log, that would be great.
Thank you!