Python wrapper for running a locally-hosted LLM with llama.cpp
inference llama llm local-inference offline-llm
https://github.com/nf-core/modules/tree/master/modules/nf-core/llamacpppython/run