fix: update embedding extraction to use appropriate async method #2068

Merged
14 changes: 7 additions & 7 deletions .github/workflows/ci.yaml
@@ -31,16 +31,16 @@ jobs:
token: ${{ github.token }}
filters: |
related: &related
- .github/workflows/ci.yml
- .github/workflows/ci.yaml
- codecov.yml
- pyproject.toml
- requirements/test.txt
ragas:
- *related
- "src/ragas/**"
- "tests/**"
- "ragas/src/ragas/**"
- "ragas/tests/**"
ragas_experimental:
- "src/experimental/**"
- "experimental/ragas_experimental/**"
docs:
- *related
- requirements/docs-requirements.txt
@@ -85,7 +85,7 @@ jobs:

- name: Install dependencies
run: |
pip install "."
pip install "./ragas"
pip install -r requirements/test.txt


@@ -97,7 +97,7 @@ jobs:
OPTS=(--dist loadfile -n auto)
fi
# Now run the unit tests
pytest --nbmake tests/unit "${OPTS[@]}"
pytest --nbmake ragas/tests/unit "${OPTS[@]}"
env:
__RAGAS_DEBUG_TRACKING: true
RAGAS_DO_NOT_TRACK: true
@@ -140,7 +140,7 @@ jobs:

- name: Install dependencies
run: |
pip install .
pip install ./ragas
pip install -r requirements/dev.txt

- name: Lint check
2 changes: 1 addition & 1 deletion .gitignore
@@ -167,7 +167,7 @@ cython_debug/
# Ragas specific
experiments/
**/fil-result/
src/ragas/_version.py
ragas/src/ragas/_version.py
experimental/ragas_experimental/_version.py
.vscode
.envrc
6 changes: 5 additions & 1 deletion CLAUDE.md
@@ -187,4 +187,8 @@ console_handler.setFormatter(formatter)

# Add the handler to the logger
analytics_logger.addHandler(console_handler)
```
```

## Memories

- whenever you create such docs, put them in /experiments because that directory is gitignored and can be used as a scratchpad or tmp directory for storing them
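
For context, the `console_handler` / `analytics_logger` lines visible in the CLAUDE.md hunk above are the tail of its debug-logging snippet. A minimal, self-contained sketch of that setup might look like the following (the logger name `ragas._analytics` and the format string are assumptions for illustration, not taken from this diff):

```python
import logging

# Grab the analytics logger (name assumed for illustration)
analytics_logger = logging.getLogger("ragas._analytics")
analytics_logger.setLevel(logging.DEBUG)

# Send records to stderr with a readable format
console_handler = logging.StreamHandler()
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
console_handler.setFormatter(formatter)

# Add the handler to the logger
analytics_logger.addHandler(console_handler)
```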
2 changes: 1 addition & 1 deletion Makefile
@@ -37,7 +37,7 @@ lint-all: lint lint-experimental ## Lint all code in the monorepo

type: ## Running type checker for ragas
@echo "(pyright) Typechecking ragas codebase..."
PYRIGHT_PYTHON_FORCE_VERSION=latest pyright ragas/src/ragas
cd ragas && PYRIGHT_PYTHON_FORCE_VERSION=latest pyright src

type-experimental: ## Running type checker for experimental
@echo "(pyright) Typechecking experimental codebase..."
44 changes: 0 additions & 44 deletions docs/experimental/index.html.md

This file was deleted.

3 changes: 3 additions & 0 deletions docs/experimental/index.md
@@ -0,0 +1,3 @@
# Ragas Experimental

In the works, but stay tuned :)
30 changes: 3 additions & 27 deletions docs/howtos/applications/cost.ipynb
@@ -24,33 +24,9 @@
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"TokenUsage(input_tokens=9, output_tokens=9, model='')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_core.prompt_values import StringPromptValue\n",
"\n",
"gpt4o = ChatOpenAI(model=\"gpt-4o\")\n",
"p = StringPromptValue(text=\"hai there\")\n",
"llm_result = gpt4o.generate_prompt([p])\n",
"\n",
"# lets import a parser for OpenAI\n",
"from ragas.cost import get_token_usage_for_openai\n",
"\n",
"get_token_usage_for_openai(llm_result)"
]
"outputs": [],
"source": "from langchain_openai.chat_models import ChatOpenAI\nfrom langchain_core.prompt_values import StringPromptValue\n# lets import a parser for OpenAI\nfrom ragas.cost import get_token_usage_for_openai\n\ngpt4o = ChatOpenAI(model=\"gpt-4o\")\np = StringPromptValue(text=\"hai there\")\nllm_result = gpt4o.generate_prompt([p])\n\nget_token_usage_for_openai(llm_result)"
},
{
"cell_type": "markdown",
@@ -284,4 +260,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}
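
The updated cell above only parses token usage from a single `generate_prompt` call. Downstream, the same parser is typically passed to `evaluate` so usage is tracked across a whole evaluation run. A minimal sketch, assuming the ragas cost API (`token_usage_parser`, `result.total_tokens()`, `result.total_cost()`) and using a placeholder sample and placeholder prices:

```python
from langchain_openai.chat_models import ChatOpenAI

from ragas import evaluate
from ragas.cost import get_token_usage_for_openai
from ragas.dataset_schema import EvaluationDataset, SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import faithfulness

# A tiny placeholder dataset purely for illustration
eval_dataset = EvaluationDataset(
    samples=[
        SingleTurnSample(
            user_input="When was the first Super Bowl?",
            retrieved_contexts=["The first Super Bowl was held on January 15, 1967."],
            response="The first Super Bowl was held on January 15, 1967.",
        )
    ]
)

# Passing the parser makes evaluate() record token usage for every LLM call
result = evaluate(
    eval_dataset,
    metrics=[faithfulness],
    llm=LangchainLLMWrapper(ChatOpenAI(model="gpt-4o")),
    token_usage_parser=get_token_usage_for_openai,
)

print(result.total_tokens())
# Prices below are placeholders, expressed per token
print(result.total_cost(cost_per_input_token=5 / 1e6, cost_per_output_token=15 / 1e6))
```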
38 changes: 3 additions & 35 deletions docs/howtos/customizations/metrics/cost.ipynb
@@ -35,41 +35,9 @@
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/homebrew/Caskroom/miniforge/base/envs/ragas/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
},
{
"data": {
"text/plain": [
"TokenUsage(input_tokens=9, output_tokens=9, model='')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai.chat_models import ChatOpenAI\n",
"from langchain_core.prompt_values import StringPromptValue\n",
"\n",
"gpt4o = ChatOpenAI(model=\"gpt-4o\")\n",
"p = StringPromptValue(text=\"hai there\")\n",
"llm_result = gpt4o.generate_prompt([p])\n",
"\n",
"# lets import a parser for OpenAI\n",
"from ragas.cost import get_token_usage_for_openai\n",
"\n",
"get_token_usage_for_openai(llm_result)"
]
"outputs": [],
"source": "from langchain_openai.chat_models import ChatOpenAI\nfrom langchain_core.prompt_values import StringPromptValue\n# lets import a parser for OpenAI\nfrom ragas.cost import get_token_usage_for_openai\n\ngpt4o = ChatOpenAI(model=\"gpt-4o\")\np = StringPromptValue(text=\"hai there\")\nllm_result = gpt4o.generate_prompt([p])\n\nget_token_usage_for_openai(llm_result)"
},
{
"cell_type": "markdown",
@@ -212,4 +180,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}
@@ -53,7 +53,7 @@
}
],
"source": [
"from langchain_community.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain_community.document_loaders import DirectoryLoader\n",
"\n",
"\n",
"path = \"Sample_non_english_corpus/\"\n",
@@ -229,7 +229,6 @@
"source": [
"from ragas.testset.synthesizers.single_hop import (\n",
" SingleHopQuerySynthesizer,\n",
" SingleHopScenario,\n",
")\n",
"from dataclasses import dataclass\n",
"from ragas.testset.synthesizers.prompts import (\n",
@@ -40,7 +40,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain_community.document_loaders import DirectoryLoader\n",
"\n",
"path = \"Sample_Docs_Markdown/\"\n",
"loader = DirectoryLoader(path, glob=\"**/*.md\")\n",
@@ -136,7 +136,7 @@
"metadata": {},
"outputs": [],
"source": [
"from ragas.testset.transforms import Parallel, apply_transforms\n",
"from ragas.testset.transforms import apply_transforms\n",
"from ragas.testset.transforms import (\n",
" HeadlinesExtractor,\n",
" HeadlineSplitter,\n",
26 changes: 2 additions & 24 deletions docs/howtos/integrations/helicone.ipynb
@@ -45,31 +45,9 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from datasets import Dataset\n",
"from ragas import evaluate\n",
"from ragas.metrics import faithfulness, answer_relevancy, context_precision\n",
"from ragas.integrations.helicone import helicone_config # import helicone_config\n",
"\n",
"\n",
"# Set up Helicone\n",
"helicone_config.api_key = (\n",
" \"your_helicone_api_key_here\" # Replace with your actual Helicone API key\n",
")\n",
"os.environ[\"OPENAI_API_KEY\"] = (\n",
" \"your_openai_api_key_here\" # Replace with your actual OpenAI API key\n",
")\n",
"\n",
"# Verify Helicone API key is set\n",
"if HELICONE_API_KEY == \"your_helicone_api_key_here\":\n",
" raise ValueError(\n",
" \"Please replace 'your_helicone_api_key_here' with your actual Helicone API key.\"\n",
" )"
]
"source": "import os\nfrom datasets import Dataset\nfrom ragas import evaluate\nfrom ragas.metrics import faithfulness, answer_relevancy, context_precision\nfrom ragas.integrations.helicone import helicone_config # import helicone_config\n\n\n# Set up Helicone\nHELICONE_API_KEY = \"your_helicone_api_key_here\" # Replace with your actual Helicone API key\nhelicone_config.api_key = HELICONE_API_KEY\nos.environ[\"OPENAI_API_KEY\"] = (\n \"your_openai_api_key_here\" # Replace with your actual OpenAI API key\n)\n\n# Verify Helicone API key is set\nif HELICONE_API_KEY == \"your_helicone_api_key_here\":\n raise ValueError(\n \"Please replace 'your_helicone_api_key_here' with your actual Helicone API key.\"\n )"
},
{
"cell_type": "markdown",
@@ -175,4 +153,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
2 changes: 1 addition & 1 deletion docs/howtos/integrations/langfuse.ipynb
@@ -149,7 +149,7 @@
"source": [
"# import metrics\n",
"from ragas.metrics import faithfulness, answer_relevancy, context_precision\n",
"from ragas.metrics.critique import SUPPORTED_ASPECTS, harmfulness\n",
"from ragas.metrics.critique import harmfulness\n",
"\n",
"# metrics you chose\n",
"metrics = [faithfulness, answer_relevancy, context_precision, harmfulness]"
9 changes: 2 additions & 7 deletions docs/howtos/integrations/openlayer.ipynb
@@ -197,15 +197,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "ced5f583-b849-4aae-8397-2bd9006bb69f",
"metadata": {},
"outputs": [],
"source": [
"from openlayer.tasks import TaskType\n",
"\n",
"client = openlayer.OpenlayerClient(\"YOUR_OPENLAYER_API_KEY_HERE\")"
]
"source": "import openlayer\nfrom openlayer.tasks import TaskType\n\nclient = openlayer.OpenlayerClient(\"YOUR_OPENLAYER_API_KEY_HERE\")"
},
{
"cell_type": "code",
@@ -298,4 +293,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
1 change: 0 additions & 1 deletion docs/howtos/integrations/opik.ipynb
@@ -298,7 +298,6 @@
"from datasets import load_dataset\n",
"from ragas.metrics import context_precision, answer_relevancy, faithfulness\n",
"from ragas import evaluate\n",
"from ragas.integrations.opik import OpikTracer\n",
"\n",
"fiqa_eval = load_dataset(\"explodinggradients/fiqa\", \"ragas_eval\")\n",
"\n",
36 changes: 1 addition & 35 deletions mkdocs.yml
@@ -77,41 +77,7 @@ nav:
- Feedback Intelligence:
- concepts/feedback/index.md
- 🧪 Experimental:
- Overview: experimental/index.html.md
- Core:
- Project:
- Core: experimental/project/core.html.md
- Experiments: experimental/project/experiments.html.md
- Naming: experimental/project/naming.html.md
- Dataset: experimental/dataset.html.md
- Experiment: experimental/experiment.html.md
- Utils: experimental/utils.html.md
- Typing: experimental/typing.html.md
- Models:
- Pydantic Models: experimental/model/pydantic_mode.html.md
- Components:
- LLM:
- Base: experimental/llm/llm.html.md
- Embedding:
- Base: experimental/embedding/base.md
- Prompt:
- Base: experimental/prompt/base.md
- Dynamic Few Shot: experimental/prompt/dynamic_few_shot.html.md
- Metric:
- Base: experimental/metric/base.html.md
- Decorator: experimental/metric/decorator.html.md
- Discrete: experimental/metric/discrete.html.md
- Numeric: experimental/metric/numeric.html.md
- Ranking: experimental/metric/ranking.html.md
- Result: experimental/metric/result.html.md
- Backends:
- Factory: experimental/backends/factory.html.md
- Ragas API Client: experimental/backends/ragas_api_client.html.md
- Tracing:
- Langfuse: experimental/tracing/langfuse.html.md
- MLflow: experimental/tracing/mlflow.html.md
- Exceptions: experimental/exceptions.html.md
- Init Module: experimental/init_module.md
- Overview: experimental/index.md
- 🛠️ How-to Guides:
- howtos/index.md
- Customizations: