Porting over the https://github.com/ggerganov/llama.cpp mmap support would reduce the minimum RAM requirement for the model.
I know, you know, haha - just putting it here so I can keep track of it (in case I come around to picking it up, to optimize the RWKV-cpp-node bindings).
Move graph building into its own function
f422a3a
step towards RWKV#50 and loading models from memory among other things
Move graph building into its own function (#69)
3ca9c7f
step towards #50 and loading models from memory among other things
Any updates?
@Ar57m Not at the moment. mmap support is in my backlog, but I have no estimates.