Here are the errors when loading the model in a new chat:
EBUSY: resource busy or locked, open 'C:\Users\[username]\AppData\Roaming\ai-navigator\logs\20240809055518_api-server.log'
API server exited with code 1
kill ESRCH
API server exited with code 3221226356
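As a side note (my own decoding, not from the original report): the last exit code is a Windows NTSTATUS value when viewed in hex, which is often more searchable than the decimal form. A quick way to convert it:

```python
# Decode the decimal exit code reported by the API server into hex.
exit_code = 3221226356
print(hex(exit_code))  # 0xc0000374, i.e. NTSTATUS 0xC0000374 (STATUS_HEAP_CORRUPTION)
```

0xC0000374 is the NTSTATUS code for heap corruption, which suggests the server process crashed rather than exiting cleanly after the load failure.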
The log states this:
{"tid":"97396","timestamp":1723197326,"level":"INFO","function":"main","line":2495,"msg":"build info","build":0,"commit":"unknown"}
{"tid":"97396","timestamp":1723197326,"level":"INFO","function":"main","line":2502,"msg":"system info","n_threads":16,"n_threads_batch":-1,"total_threads":32,"system_info":"AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | "}
llama_model_load: error loading model: llama_model_loader: failed to load model from C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf'
{"tid":"97396","timestamp":1723197326,"level":"ERR","function":"load_model","line":683,"msg":"unable to load model","model":"C:\Users\[username]\.ai-navigator\models\mistralai\Mixtral-8x7B-v0.1\Mixtral-8x7B-v0.1_Q6_K.gguf"}
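A "failed to load model" at this stage is often a truncated or corrupted `.gguf` download rather than a software problem. As a quick sanity check (a sketch of my own, not part of AI Navigator or llama.cpp), you could verify that the file at least starts with the GGUF magic bytes and a version field before re-downloading the whole 38 GB:

```python
import os
import struct
import tempfile

def looks_like_gguf(path):
    """Return True if the file begins with the GGUF magic and a plausible version number."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    (version,) = struct.unpack("<I", header[4:8])
    return 1 <= version <= 16  # known GGUF format versions are small integers

# Demo on a synthetic file; point this at the model path from the log instead.
demo = os.path.join(tempfile.mkdtemp(), "demo.gguf")
with open(demo, "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))
print(looks_like_gguf(demo))  # True for a well-formed header
```

If the magic check fails, or the file size does not match what the model host lists, re-downloading the file is the likely fix. The `EBUSY` on the log file separately suggests another process (perhaps a stale api-server instance) still held the file open, so a reboot or killing leftover processes may also be worth trying.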