A no-strings-attached framework for your LLM that allows applying a Chain-of-Thought-like prompt schema to massive textual collections using custom third-party providers.
- ✅ No-strings: free from LLM dependencies, with flexible `venv` customization.
- ✅ Supports schema descriptions for the Chain-of-Thought concept.
- ✅ Provides an iterator over an infinite amount of input contexts.
From PyPI:

```bash
pip install --no-deps bulk-chain
```

or the latest version from here:

```bash
pip install git+https://github.com/nicolay-r/bulk-chain@master
```
To declare a Chain-of-Thought (CoT) schema, we use the JSON format. The `schema` field is a list of CoT instructions for the Large Language Model. Each item of the list is a dictionary with `prompt` and `out` keys, which correspond to the input prompt and the output variable name respectively. All variable names are mentioned in `{}`; the output of an earlier step (e.g. `{intent}`) can be referenced by a later prompt, which is how the chain is formed.
Example:

```json
[
    {"prompt": "Given customer message: {text}, detect the customer's intent?", "out": "intent"},
    {"prompt": "Given customer message: {text}, extract relevant entities?", "out": "entities"},
    {"prompt": "Given intent: {intent} and entities: {entities}, generate a concise response or action recommendation for support agent.", "out": "action"}
]
```
In order to launch the inference, you have to pass:
- `schema`
- LLM model from the third-party providers hosting ↗️
- Data (iterator of dictionaries; see the sketch below)
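
A minimal sketch of such a data iterator, assuming the schema above (each dictionary supplies the `{text}` variable of the first prompt; the messages themselves are made up):

```python
# Hypothetical input data: every dictionary must provide the variables
# referenced by the schema prompts -- here, only "text".
YOUR_DATA_IT = iter([
    {"text": "My order #123 arrived damaged, I want a refund."},
    {"text": "How do I reset my password?"},
])
```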
API: for more details, see the related Wiki page.
```python
from bulk_chain.core.utils import dynamic_init
from bulk_chain.api import iter_content

content_it = iter_content(
    # 1. Your schema.
    schema=[
        {"prompt": "Given customer message: {text}, detect the customer's intent?", "out": "intent"},
        {"prompt": "Given customer message: {text}, extract relevant entities?", "out": "entities"},
        {"prompt": "Given intent: {intent} and entities: {entities}, generate a concise response or action recommendation for support agent.", "out": "action"}
    ],
    # 2. Your third-party model implementation.
    llm=dynamic_init(class_filepath="replicate_104.py")(api_token="<API-KEY>"),
    # 3. Customize your inference and result-providing modes:
    infer_mode="batch_async",
    return_mode="batch",
    # 4. Your iterator of dictionaries.
    input_dicts_it=YOUR_DATA_IT,
)

for content in content_it:
    # Handle your LLM responses here ...
    print(content)
```
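
Judging by the parameter names in the snippet above (an assumption, not a spec), `infer_mode` controls how prompts are dispatched to the provider (e.g. asynchronously in batches), while `return_mode` controls how results are grouped when yielded by the iterator.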
All you have to do is implement the `BaseLM` class, which includes:
- `__init__` -- for setting up batching-mode support and an (optional) model name;
- `ask(prompt)` -- to infer your model with the given `prompt`.
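
For illustration, a minimal provider sketch might look as follows; the import path, constructor arguments, and the `EchoLM` name are assumptions, so check the actual `BaseLM` signature in the repository:

```python
from bulk_chain.core.llm_base import BaseLM  # assumed import path


class EchoLM(BaseLM):
    """Toy provider that echoes prompts back; swap `ask` for a real API call."""

    def __init__(self, model_name="echo", **kwargs):
        # Assumption: BaseLM accepts keyword arguments for the model
        # name and batching-mode setup.
        super().__init__(name=model_name, **kwargs)

    def ask(self, prompt):
        # Replace this stub with a request to your provider of choice.
        return f"[echo] {prompt}"
```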
See examples with models at nlp-thirdgate 🌌.