TEST 01
Server Running
Tests if Ollama is reachable at
http://localhost:11434
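Test 1 can also be reproduced outside the page. A running Ollama server answers a plain GET to its base URL with the text "Ollama is running"; a minimal Python sketch (function name and timeout are illustrative, not from the page's own script):

```python
import urllib.request
import urllib.error

def is_ollama_up(base_url="http://localhost:11434", timeout=3):
    """Return True if the Ollama server answers at base_url.

    A healthy server replies to GET / with HTTP 200 and the
    plain-text body "Ollama is running".
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, Ollama is not listening on port 11434 and none of the later tests can pass.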
TEST 02
List Models
Lists all models loaded from
E:\Models\
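The page's own script isn't shown, but Test 2 corresponds to Ollama's `/api/tags` endpoint, which returns the installed models as JSON. A sketch (helper names are illustrative):

```python
import json
import urllib.request

def extract_model_names(payload):
    """Pull the "name" field out of an /api/tags response dict."""
    return [m["name"] for m in payload.get("models", [])]

def list_model_names(base_url="http://localhost:11434"):
    """Fetch /api/tags and return the installed model names."""
    with urllib.request.urlopen(base_url + "/api/tags", timeout=5) as resp:
        return extract_model_names(json.load(resp))
```

An empty list here usually means OLLAMA_MODELS is not pointing at E:\Models.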
TEST 03
Generate — Simple Message
Sends a short test prompt to the selected model. First load may take 30–90 seconds.
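Test 3 corresponds to Ollama's `/api/generate` endpoint. A non-streaming sketch (the exact prompt and timeout are illustrative):

```python
import json
import urllib.request

def build_generate_payload(model, prompt):
    """Request body for /api/generate; stream=False asks for one
    complete JSON reply instead of a stream of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt, base_url="http://localhost:11434"):
    """Send one test prompt and return the model's text reply."""
    req = urllib.request.Request(
        base_url + "/api/generate",
        data=json.dumps(build_generate_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # A cold model may take 30-90 s to load, so allow a generous timeout.
    with urllib.request.urlopen(req, timeout=180) as resp:
        return json.load(resp)["response"]
```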
[!] CORS / FETCH BLOCK DETECTED — FIXES
OPTION A — ENV VAR
Stop Ollama and restart it with CORS allowed. Open a Command Prompt and run:
set OLLAMA_ORIGINS=*
set OLLAMA_MODELS=E:\Models
E:\ollama\ollama.exe serve
OPTION B — WEBSERVER
Use the Python webserver already on the USB. This avoids all file:// CORS issues.
Run: E:\Utilities\start_webserver.py
Then open: http://localhost:8080
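The contents of start_webserver.py aren't shown here; a minimal stdlib equivalent, serving the current directory on port 8080, might look like:

```python
import http.server
import socketserver

PORT = 8080  # matches the http://localhost:8080 address above

def serve(port=PORT):
    """Serve the current directory over HTTP until interrupted.

    Run it from E:\ so the chat pages load via http:// rather
    than file://, which sidesteps browser CORS restrictions.
    """
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", port), handler) as httpd:
        print(f"Serving at http://localhost:{port}")
        httpd.serve_forever()
```

Call `serve()` to start; stop it with Ctrl+C.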
OPTION C — BAT FILE
The .bat launcher already sets all required environment variables automatically.
Run: E:\Utilities\Utilities_CHECK_MODELS.bat
Then open your chat page.
Result Guide
| Result | Meaning / Fix |
| --- | --- |
| [FAIL] Test 1 | Ollama not running. Run Utilities_CHECK_MODELS.bat |
| [OK] 1, [FAIL] 2 | Running but no models. Check OLLAMA_MODELS=E:\Models |
| [OK] 1 & 2, [FAIL] 3 | Model error. Try phi3:mini or free memory. |
| [OK] All | Ollama is working. Issue is in chat HTML page. |
| Failed to fetch | CORS block. See the CORS fixes above or use webserver. |
USB Drive Paths
LAUNCHER
E:\AI_LAUNCHER.html
CHAT PAGES
E:\Portal Pages\
UTILITIES
E:\Utilities\
OLLAMA EXE
E:\ollama\ollama.exe
MODELS
E:\Models\
DOCUMENTS
E:\Documents\
WEBSERVER
E:\Utilities\start_webserver.py