From 0dc2cff7759bd9e86a1346499b358f760c35c78e Mon Sep 17 00:00:00 2001
From: fuhui
Date: Wed, 13 Dec 2023 11:21:36 +0800
Subject: [PATCH] add news in readme file

---
 README.md    | 3 ++-
 README_CN.md | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index fcf298c..71740bb 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,8 @@ Codefuse-ModelCache
 - [Acknowledgements](#Acknowledgements)
 - [Contributing](#Contributing)
 ## news
-- [2023.11.20] codefuse-ModelCache has integrated local storage, such as sqlite and faiss, providing users with the convenience of quickly initiating tests.
+- 🔥🔥[2023.12.10] We have integrated LLM embedding frameworks such as 'llmEmb', 'ONNX', 'PaddleNLP', and 'FastText', along with the image embedding framework 'timm', to bolster embedding functionality.
+- 🔥🔥[2023.11.20] codefuse-ModelCache has integrated local storage, such as sqlite and faiss, providing users with the convenience of quickly initiating tests.
 - [2023.08.26] codefuse-ModelCache...
 ### Introduction
 Codefuse-ModelCache is a semantic cache for large language models (LLMs). By caching pre-generated model results, it reduces response time for similar requests and improves user experience. This project aims to optimize services by introducing a caching mechanism. It helps businesses and research institutions reduce the cost of inference deployment, improve model performance and efficiency, and provide scalable services for large models. Through open-source, we aim to share and exchange technologies related to large model semantic cache.
diff --git a/README_CN.md b/README_CN.md
index f897edc..18d3ea5 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -25,7 +25,8 @@ Codefuse-ModelCache
 - [致谢](#致谢)
 - [Contributing](#Contributing)
 ## 新闻
-- [2023.11.20] codefuse-ModelCache增加本地存储能力, 适配了嵌入式数据库sqlite、faiss,方便用户快速启动测试。
+- 🔥🔥[2023.12.10] 增加llmEmb、onnx、paddlenlp、fasttext等LLM embedding框架，并增加timm 图片embedding框架，用于提供更丰富的embedding能力。
+- 🔥🔥[2023.11.20] codefuse-ModelCache增加本地存储能力, 适配了嵌入式数据库sqlite、faiss,方便用户快速启动测试。
 - [2023.10.31] codefuse-ModelCache...
 ## 项目简介
 Codefuse-ModelCache 是一个开源的大模型语义缓存系统，通过缓存已生成的模型结果，降低类似请求的响应时间，提升用户体验。该项目从服务优化角度出发，引入缓存机制，在资源有限和对实时性要求较高的场景下，帮助企业和研究机构降低推理部署成本、提升模型性能和效率、提供规模化大模型服务。我们希望通过开源，分享交流大模型语义Cache的相关技术。
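The README text patched above describes the core mechanism: embed each incoming query, compare it against cached queries, and return a stored answer when similarity is high enough. A minimal, illustrative sketch of that idea follows; the toy bag-of-words `embed` function, the `SemanticCache` class, and the `threshold` parameter are all hypothetical stand-ins, not ModelCache's actual API (ModelCache uses real embedding models such as ONNX or FastText and vector stores such as faiss).

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real cache would call an embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.6):
        self.threshold = threshold   # minimum similarity for a cache hit
        self.entries = []            # list of (query_embedding, answer)

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

    def get(self, query):
        # Return the answer of the most similar cached query, if close enough.
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None  # cache miss: caller falls through to the LLM

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("capital of france what is it"))  # similar wording -> cache hit
print(cache.get("how do i bake bread"))           # unrelated -> None (miss)
```

The design point the README makes is visible here: a hit skips the expensive model call entirely, so latency and inference cost drop for repeated or near-duplicate questions, at the price of choosing a similarity threshold that balances stale or wrong hits against miss rate.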