K1 OH5.0 AI Build and Development Instructions
Revision History
Revision Version | Revision Date | Revision Description
--- | --- | ---
001 | 2025-03-28 | Initial version
002 | 2025-04-12 | Format optimization
1. Prerequisites
Refer to the compilation documentation to complete system compilation and flashing: K1 OH5.0 Download, Compile, and Flash Instructions
1.1. ollama+deepseek Resource Preparation
Download link: Click to Download
deepseek-r1-distill-qwen-1.5b-q4_0.gguf
Modelfile
Ollama
deepseek-r1-distill-qwen-1.5b-q4_0.gguf
A compressed and optimized model file. It uses the GGUF format, which is designed for efficient inference and compression. This format can run efficiently in resource-constrained environments, such as embedded or mobile devices.
Modelfile
Defines how to configure and use the deepseek-r1-distill-qwen-1.5b-q4_0.gguf model file.
ollama
Runs and manages various machine learning models. It supports multiple model versions and formats, including the DeepSeek series. With Ollama, you can easily deploy, run, and manage the DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf model.
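For reference, a minimal Modelfile for this setup might look like the following. This is a sketch only; the Modelfile shipped in the download package is authoritative, and options such as templates and parameters are omitted here.

```
# Minimal Modelfile sketch: point Ollama at the local GGUF model file.
# Assumption: the .gguf file sits in the same directory as this Modelfile.
FROM ./deepseek-r1-distill-qwen-1.5b-q4_0.gguf
```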
1.2. Environment and Tool Preparation
- One set of MUSE Paper and power supply
- Type-C cable (for flashing and hdc connection)
- Windows-side hdc (for transferring files between PC and board)
- IDE (DevEco 4.0)
- K1 OH5.0 build environment
2. Install ollama+deepseek-r1-1.5b
To help developers get started quickly, a one-click installation package is provided.
2.1. Connect Windows and Muse Paper with a Type-C Cable
Ensure that hdc can connect to the MUSE Paper:

```shell
D:\>hdc list targets
0123456789ABCDEF
```
2.2. Download the installation package to your Windows PC and extract it to any location
Download link: Click to Download (already downloaded above; can be skipped). The package includes the installer, the programs needed for secondary development, and the development manuals.
2.2.1. One-click Automatic Installation of deepseek
Double-click the installation script (circled in red in the figure), setup_ohos_ollama_env_v1.0.bat, to install all LLM dependencies and applications on OH.
2.2.2. Run and Debug
After installation, the application can perform LLM Q&A.
- Run ollama; if the following is displayed, ollama is working properly:
- If the list is empty, the model is not installed and needs to be loaded
- Load the large model
- Check the model list again
- Run the large model for conversation in the command line
- Open the OH HAP application
- Test the HAP user interface
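The run-and-debug steps above can be sketched as the following command sequence on the board. The model name `deepseek-r1-1.5b` is an assumption taken from the request log later in this document; adjust it to match your Modelfile.

```shell
# Sketch of the run-and-debug flow; the model name is an assumption.
MODEL=deepseek-r1-1.5b
ollama list                          # empty output means no model is loaded yet
ollama create "$MODEL" -f Modelfile  # load the GGUF model via the Modelfile
ollama list                          # the model should now appear in the list
ollama run "$MODEL"                  # chat with the model on the command line
```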
3. Secondary Development
3.1. Development Environment Preparation
- OH system development: VSCode + Ubuntu Linux server
- HAP development: DevEco 4.0 (deveco-studio-4.1.0.400.exe)
- Required development files: Click to Download (already downloaded above; can be skipped)
  - chatgpt: OH chatgpt lib code + testNapi code
  - deepseek: demo HAP code
3.2. OH System Build
Place the chatgpt folder into the oh5.0\foundation\communication\chatgpt directory and configure the module build settings; you can then compile the corresponding library. This library provides the interface through which the upper-layer HAP accesses ollama.
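The build configuration typically lives in a BUILD.gn in the module directory. A minimal sketch follows; the target name, source paths, and subsystem/part names are assumptions and must be aligned with the module's actual code and bundle.json.

```
# Sketch of a shared-library target for the chatgpt module.
# All names below are assumptions; match them to the real module layout.
import("//build/ohos.gni")

ohos_shared_library("chatgpt_core") {
  sources = [ "src/chatgpt_core.cpp" ]
  include_dirs = [ "include" ]
  subsystem_name = "communication"
  part_name = "chatgpt"
}
```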
3.2.1. Edit Development Code
Modify the code as needed.
3.2.2. Compile the Image
Command:

```shell
./build.sh --product-name musepaper2 --ccache --prebuilt-sdk
```
The two libraries related to this project are:
- libchatgpt_napi.z.so
- libchatgpt_core.z.so
The newly compiled image contains these two .so files; they can be flashed with the image or pushed with the hdc file send command, as follows:

```shell
hdc file send libchatgpt_napi.z.so /lib64/module/
hdc file send libchatgpt_core.z.so /lib64/
```
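If the push fails because the target partition is read-only, remounting it first usually helps. A sketch of the full push sequence, assuming a recent hdc that supports `target mount`:

```shell
# Push the rebuilt libraries to the board; filenames are from the build above.
NAPI_LIB=libchatgpt_napi.z.so
CORE_LIB=libchatgpt_core.z.so
hdc target mount                          # remount system partitions read-write
hdc file send "$NAPI_LIB" /lib64/module/
hdc file send "$CORE_LIB" /lib64/
hdc shell reboot                          # restart so the new libraries load
```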
3.3. HAP Test Project
The OH5.0\foundation\communication\chatgpt\testNapi project is mainly a reference for secondary developers building their own AI large-model applications. Open the project with the DevEco version listed in Section 3.1 and compile it to generate the testNapi HAP needed for testing; this HAP can be used to test and help develop your own LLM applications.
3.4. Development and Debugging
3.4.1. View Logs
- hdc shell hilog | grep Chatgpt
- hdc shell hilog | grep Index
- Set ollama debug:
  - export OLLAMA_DEBUG=1 // enable log output
  - export OLLAMA_HOST='0.0.0.0' // allow external access to ollama
```
02-28 12:35:58.260 4086 4086 I C01650/ChatGPT: ChatGPT instance created
02-28 12:35:58.260 4086 4086 I C01650/ChatGPT: Generating streaming response for input: who are you
02-28 12:35:58.261 4086 7595 I C01650/ChatGPT: Request payload: {"model":"deepseek-r1-1.5b","prompt":"who are you","stream":true}
02-28 12:35:58.262 4086 7595 I C01650/ChatGPT: Making request to Ollama API at http://localhost:11434/api/generate
02-28 12:35:58.266 4086 7595 I C01650/ChatGPT: CURL request completed after 1 attempts
02-28 12:35:58.267 4086 7595 I C01650/ChatGPT: Request completed successfully
```
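The request shown in the log can also be reproduced directly for debugging, from the board or (with OLLAMA_HOST=0.0.0.0 set as above) from another machine. A minimal sketch with curl; the endpoint and payload mirror the log lines, while the host and model name are assumptions for your setup:

```shell
# Send the same generate request that the ChatGPT library logs.
# HOST and MODEL are assumptions; adjust them to your environment.
HOST=localhost
MODEL=deepseek-r1-1.5b
curl -s "http://$HOST:11434/api/generate" \
  -d "{\"model\":\"$MODEL\",\"prompt\":\"who are you\",\"stream\":true}"
```

In streaming mode the endpoint returns one JSON object per line, which is what the library consumes chunk by chunk.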
3.5. Demo HAP Project
Similarly, use DevEco Studio 4.1 Release to open the corresponding code project and compile the demo HAP.