From cb3caf5b129f8f159310623b85f8babe8a4805f2 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:08:40 +0000 Subject: [PATCH 01/20] Create file --- docs/quick-start-guide.md | 31 +++++++++++++++++++++++++++++++ 1 file changed, 31 insertions(+) create mode 100644 docs/quick-start-guide.md diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md new file mode 100644 index 000000000..2ab95e5b4 --- /dev/null +++ b/docs/quick-start-guide.md @@ -0,0 +1,31 @@ +# Quick Start Guide + +Welcome to Dev-Docs! This guide will help you get up and running quickly. + +## Installation + +1. Install the Dev-Docs VS Code extension from the marketplace +2. Sign up for a Dev-Docs account at https://dev-docs.com + +## Basic Usage + +1. Open a project in VS Code +2. Right-click on a file and select "Generate Documentation" +3. The AI will analyze your code and generate documentation +4. Review and edit the generated docs as needed +5. 
Commit the new documentation files to your repo + +## Key Features + +- AI-powered documentation generation +- Integration with GitHub for version control +- Web editor for refining and publishing docs +- Chrome extension for capturing UI workflows + +## Next Steps + +- Configure custom prompts in dev-docs.json +- Set up automated doc generation on commits +- Integrate Dev-Docs into your development workflow + +For more details, check out the full documentation at https://docs.dev-docs.com \ No newline at end of file From 723737902b7d9838b35d66e137f6a2119689ea8d Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:08:47 +0000 Subject: [PATCH 02/20] Create file --- docs/troubleshooting.md | 68 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 68 insertions(+) create mode 100644 docs/troubleshooting.md diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md new file mode 100644 index 000000000..300e1e3ce --- /dev/null +++ b/docs/troubleshooting.md @@ -0,0 +1,68 @@ +# Troubleshooting Guide + +This guide covers common issues you may encounter when using Dev-Docs and how to resolve them. + +## Installation Issues + +### VS Code Extension Not Installing + +If you're having trouble installing the Dev-Docs VS Code extension: + +1. Check your internet connection +2. Ensure you have the latest version of VS Code installed +3. Try uninstalling and reinstalling the extension +4. Check VS Code logs for any error messages + +## Authentication Problems + +### Unable to Sign In + +If you can't sign in to Dev-Docs: + +1. Verify your username and password are correct +2. Clear your browser cache and cookies +3. Ensure you're using a supported browser (latest Chrome, Firefox, or Safari) +4. Check if Dev-Docs is experiencing any known service disruptions + +## Generation Issues + +### AI Not Generating Documentation + +If the AI isn't generating documentation: + +1. 
Check your API key is valid and has sufficient credits +2. Ensure you've selected the correct files/folders for context +3. Try simplifying your generation prompt +4. Check your internet connection + +### Poor Quality Generation Results + +If you're getting low quality AI-generated content: + +1. Provide more context by selecting relevant files/folders +2. Be more specific in your generation prompt +3. Try breaking down complex tasks into smaller, focused generations + +## GitHub Integration Problems + +### Unable to Connect to GitHub + +If you can't connect Dev-Docs to your GitHub repository: + +1. Verify your GitHub credentials are correct +2. Ensure you've granted the necessary permissions to Dev-Docs +3. Check if there are any GitHub service disruptions +4. Try revoking and re-granting access to Dev-Docs + +### Workflow Not Triggering + +If your GitHub workflow isn't triggering: + +1. Check your dev-docs.json configuration +2. Ensure you've pushed changes to the correct branch +3. Verify the workflow file is in the .github/workflows directory +4. Check GitHub Actions logs for any error messages + +## Still Need Help? + +If you're still experiencing issues after trying these troubleshooting steps, please contact our support team at support@dev-docs.com or open an issue on our GitHub repository. 
\ No newline at end of file From e754dd338f329e550195ca084a598a21b25b9a0b Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:16:57 +0000 Subject: [PATCH 03/20] Create file --- docs/push.md | 46 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 docs/push.md diff --git a/docs/push.md b/docs/push.md new file mode 100644 index 000000000..105075d66 --- /dev/null +++ b/docs/push.md @@ -0,0 +1,46 @@ +# push Documentation + +## Brief Description +The `push` method pushes a model to the Ollama registry, allowing for optional streaming of progress updates. + +## Usage +To use the `push` method, you need an instance of the Ollama class. Here's how to push a model: + +```javascript +import Ollama from 'ollama-js'; + +const ollama = new Ollama(); +await ollama.push({ model: 'mymodel' }); +``` + +## Parameters +- `request` (object, required): An object containing: + - `model` (string, required): The name of the model to push. + - `stream` (boolean, optional): If true, returns progress updates as a stream. + - `insecure` (boolean, optional): If true, allows insecure connections. + +## Return Value +Returns a Promise that resolves to: +- A `ProgressResponse` object if `stream` is false. +- An `AbortableAsyncIterator` if `stream` is true. + +## Examples + +### Basic usage: +```javascript +const response = await ollama.push({ model: 'mymodel' }); +console.log(response.status); +``` + +### Streaming progress: +```javascript +const stream = await ollama.push({ model: 'mymodel', stream: true }); +for await (const update of stream) { + console.log(`Progress: ${update.completed}/${update.total}`); +} +``` + +## Notes or Considerations +- Ensure you have the necessary permissions to push models to the Ollama registry. +- The `insecure` option should be used with caution, as it may compromise security. 
+- Streaming can be useful for providing real-time feedback on long-running push operations. \ No newline at end of file From bf944e0c239beb41390ba1f2681f73b1fabb32b4 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:16:58 +0000 Subject: [PATCH 04/20] Create file --- docs/embeddings.md | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) create mode 100644 docs/embeddings.md diff --git a/docs/embeddings.md b/docs/embeddings.md new file mode 100644 index 000000000..be83e454c --- /dev/null +++ b/docs/embeddings.md @@ -0,0 +1,43 @@ +# embeddings Documentation + +## Brief Description +The `embeddings` method converts text prompts into vector representations using a specified model. + +## Usage +To use the `embeddings` method, you need to create an instance of the Ollama class and call the `embeddings` method with the required parameters. + +## Parameters +- `model` (string, required): The name of the model to use for generating embeddings. +- `prompt` (string, required): The text prompt to be converted into a vector representation. +- `keep_alive` (string | number, optional): Specifies how long to keep the model loaded in memory. +- `options` (Partial, optional): Additional options for fine-tuning the embedding process. + +## Return Value +The method returns a Promise that resolves to an `EmbeddingsResponse` object containing: +- `embedding` (number[]): An array of numbers representing the vector embedding of the input prompt. + +## Examples +```javascript +const ollama = new Ollama(); + +// Basic usage +const response = await ollama.embeddings({ + model: "text-embedding-ada-002", + prompt: "Hello, world!" 
+}); +console.log(response.embedding); + +// With keep_alive option +const response2 = await ollama.embeddings({ + model: "text-embedding-ada-002", + prompt: "Another example", + keep_alive: "5m" +}); +console.log(response2.embedding); +``` + +## Notes or Considerations +- The resulting embedding can be used for various natural language processing tasks, such as semantic search or text classification. +- The dimensionality of the embedding vector depends on the model used. +- Ensure you have the necessary model loaded before calling this method. +- The `keep_alive` option can be useful for performance optimization when making multiple embedding requests. \ No newline at end of file From 747aeed496a9ab89693999e0cde5efda51b7be9f Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:16:58 +0000 Subject: [PATCH 05/20] Create file --- docs/pull.md | 46 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 docs/pull.md diff --git a/docs/pull.md b/docs/pull.md new file mode 100644 index 000000000..289b8e0ec --- /dev/null +++ b/docs/pull.md @@ -0,0 +1,46 @@ +# pull Documentation + +## Brief Description +The `pull` method allows you to download a model from the Ollama registry, with optional streaming support. + +## Usage +To use the `pull` method, you need to create an instance of the Ollama class and then call the `pull` method on it. + +```javascript +import Ollama from 'ollama' + +const ollama = new Ollama() +``` + +## Parameters +- `request` (Required): An object with the following properties: + - `model` (Required): String - The name of the model to pull. + - `stream` (Optional): Boolean - Whether to stream the response. + - `insecure` (Optional): Boolean - Whether to allow insecure connections. 
+ +## Return Value +The `pull` method returns a Promise that resolves to either: +- A `ProgressResponse` object if `stream` is false or not specified. +- An `AbortableAsyncIterator` if `stream` is true. + +## Examples + +### Basic usage: +```javascript +const response = await ollama.pull({ model: 'llama2' }) +console.log(response.status) +``` + +### Streaming the pull process: +```javascript +const stream = await ollama.pull({ model: 'llama2', stream: true }) +for await (const chunk of stream) { + console.log(chunk.status, chunk.completed, '/', chunk.total) +} +``` + +## Notes or Considerations +- The `pull` method is useful for downloading models to your local Ollama instance. +- Streaming the response allows you to track the progress of the download. +- Be cautious when using the `insecure` option, as it may expose you to security risks. +- The method uses the Ollama API endpoint `/api/pull` under the hood. \ No newline at end of file From e34262e65e1b79b4af1ca368109ae1d4dbd86fb4 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:16:59 +0000 Subject: [PATCH 06/20] Create file --- docs/generate.md | 59 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 docs/generate.md diff --git a/docs/generate.md b/docs/generate.md new file mode 100644 index 000000000..a67f1c494 --- /dev/null +++ b/docs/generate.md @@ -0,0 +1,59 @@ +# generate Documentation + +## Brief Description +The `generate` method creates a response from a text prompt using an AI model. + +## Usage +To use `generate`, first import the Ollama class and create an instance. Then call the `generate` method with your request parameters. 
+
+```javascript
+import { Ollama } from 'ollama';
+
+const ollama = new Ollama();
+const response = await ollama.generate({
+  model: 'modelName',
+  prompt: 'Your prompt here'
+});
+```
+
+## Parameters
+- `request` (GenerateRequest): An object containing the following properties:
+  - `model` (string, required): The name of the model to use.
+  - `prompt` (string, required): The text prompt to generate from.
+  - `stream` (boolean, optional): Whether to stream the response.
+  - `images` (Uint8Array[] | string[], optional): Images to include with the prompt.
+  - Other optional parameters like `system`, `template`, `context`, etc.
+
+## Return Value
+- If `stream` is false: `Promise<GenerateResponse>`
+- If `stream` is true: `Promise<AbortableAsyncIterator<GenerateResponse>>`
+
+## Examples
+
+### Basic usage:
+```javascript
+const response = await ollama.generate({
+  model: 'llama2',
+  prompt: 'Write a haiku about coding'
+});
+console.log(response.response);
+```
+
+### Streaming response:
+```javascript
+const stream = await ollama.generate({
+  model: 'llama2',
+  prompt: 'Explain quantum computing',
+  stream: true
+});
+
+for await (const chunk of stream) {
+  process.stdout.write(chunk.response);
+}
+```
+
+## Notes or Considerations
+- Ensure you have the necessary permissions to use the specified model.
+- Large prompts or complex requests may take longer to process.
+- When using images, they will be automatically encoded to base64.
+- The method can handle both streaming and non-streaming responses based on the `stream` parameter. 
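The note above about automatic base64 encoding can be illustrated with a standalone sketch. The real conversion happens inside the library; `encodeImage` here is a hypothetical stand-in, not part of the ollama API:

```javascript
// Hypothetical stand-in for the client's internal image normalization:
// Uint8Array inputs are base64-encoded; strings are assumed to already be base64.
function encodeImage(image) {
  if (typeof image === 'string') {
    return image;
  }
  return Buffer.from(image).toString('base64');
}

const raw = new Uint8Array([72, 105]); // the bytes for "Hi"
console.log(encodeImage(raw));    // SGk=
console.log(encodeImage('SGk=')); // SGk=
```

This is why `images` accepts both `Uint8Array[]` and `string[]`: raw bytes are converted on your behalf, while strings pass through untouched.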
\ No newline at end of file From f6efffd2e80d7350c168079a02736ad03db11e02 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:03 +0000 Subject: [PATCH 07/20] Create file --- docs/chat.md | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 docs/chat.md diff --git a/docs/chat.md b/docs/chat.md new file mode 100644 index 000000000..25c4a2788 --- /dev/null +++ b/docs/chat.md @@ -0,0 +1,80 @@ +# chat Documentation + +## Brief Description +The `chat` method allows users to interact with an AI model through a conversation-like interface, supporting both text and image inputs. + +## Usage +To use the `chat` method, first import and initialize the Ollama client: + +```javascript +import Ollama from 'ollama-js' + +const ollama = new Ollama() +``` + +Then, you can call the `chat` method with your request: + +```javascript +const response = await ollama.chat({ + model: 'your-model-name', + messages: [{ role: 'user', content: 'Hello, how are you?' }] +}) +``` + +## Parameters +- `request` (ChatRequest): An object containing: + - `model` (string): The name of the model to use. + - `messages` (Message[]): An array of message objects representing the conversation. + - `stream` (boolean, optional): Whether to stream the response. + - `format` (string | object, optional): The desired output format. + - `options` (Partial, optional): Additional options for the chat. + +## Return Value +- Promise>: + - If `stream` is false, returns a Promise resolving to a ChatResponse. + - If `stream` is true, returns an AbortableAsyncIterator yielding ChatResponse objects. + +## Examples + +### Basic chat interaction: +```javascript +const response = await ollama.chat({ + model: 'llama2', + messages: [{ role: 'user', content: 'What is the capital of France?' 
}] +}) +console.log(response.message.content) +``` + +### Streaming chat response: +```javascript +const stream = await ollama.chat({ + model: 'llama2', + messages: [{ role: 'user', content: 'Tell me a short story.' }], + stream: true +}) + +for await (const chunk of stream) { + process.stdout.write(chunk.message.content) +} +``` + +### Chat with image input: +```javascript +const response = await ollama.chat({ + model: 'llava', + messages: [ + { + role: 'user', + content: 'What's in this image?', + images: ['data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/...'] + } + ] +}) +console.log(response.message.content) +``` + +## Notes and Considerations +- Ensure you have the correct model loaded before making chat requests. +- Image inputs should be provided as base64-encoded strings or file paths. +- For long conversations, consider managing context efficiently to avoid hitting token limits. +- The streaming option is useful for real-time interactions and handling long responses. \ No newline at end of file From a6217ce23d94594fc5a7fc694fb88f9460e308b3 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:10 +0000 Subject: [PATCH 08/20] Create file --- docs/show.md | 45 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 45 insertions(+) create mode 100644 docs/show.md diff --git a/docs/show.md b/docs/show.md new file mode 100644 index 000000000..366911f40 --- /dev/null +++ b/docs/show.md @@ -0,0 +1,45 @@ +# show Documentation + +## Brief Description +The `show` method retrieves metadata for a specified model in the Ollama system. 
+ +## Usage +To use the `show` method, you need to import the Ollama class and create an instance: + +```javascript +import { Ollama } from 'ollama' + +const ollama = new Ollama() +``` + +## Parameters +- `request` (ShowRequest): An object containing the following properties: + - `model` (string): The name of the model to retrieve metadata for. + - `system` (string, optional): A system prompt to include in the metadata. + - `template` (string, optional): A custom prompt template. + - `options` (Options, optional): Additional options for the model. + +## Return Value +The `show` method returns a Promise that resolves to a `ShowResponse` object containing the model's metadata. + +## Examples + +### Basic usage +```javascript +const metadata = await ollama.show({ model: 'llama2' }) +console.log(metadata) +``` + +### With custom system prompt +```javascript +const metadata = await ollama.show({ + model: 'gpt-4', + system: 'You are a helpful assistant.' +}) +console.log(metadata.system) +``` + +## Notes or Considerations +- Ensure you have the correct permissions to access the model's metadata. +- The availability of certain metadata fields may vary depending on the model. +- This method is useful for inspecting model details before using it in other operations. \ No newline at end of file From c8824b098822927e150e7296db92beab97b713b7 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:11 +0000 Subject: [PATCH 09/20] Create file --- docs/list.md | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 65 insertions(+) create mode 100644 docs/list.md diff --git a/docs/list.md b/docs/list.md new file mode 100644 index 000000000..d5baf5b1c --- /dev/null +++ b/docs/list.md @@ -0,0 +1,65 @@ +# list Documentation + +## Brief Description +The `list` method retrieves information about available models on the Ollama server. 
+
+## Usage
+To use the `list` method, you need to have an instance of the Ollama class. Here's how you can use it:
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const models = await ollama.list()
+```
+
+## Parameters
+This method doesn't take any parameters.
+
+## Return Value
+The `list` method returns a Promise that resolves to a `ListResponse` object. This object contains an array of `ModelResponse` objects, each representing a model available on the server.
+
+## Examples
+
+### Listing all available models
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+
+try {
+  const response = await ollama.list()
+  console.log('Available models:', response.models)
+} catch (error) {
+  console.error('Error listing models:', error)
+}
+```
+
+### Displaying model details
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+
+async function displayModelDetails() {
+  try {
+    const response = await ollama.list()
+    response.models.forEach(model => {
+      console.log(`Model: ${model.name}`)
+      console.log(`Modified: ${model.modified_at}`)
+      console.log(`Size: ${model.size} bytes`)
+      console.log('---')
+    })
+  } catch (error) {
+    console.error('Error fetching model details:', error)
+  }
+}
+
+displayModelDetails()
+```
+
+## Notes or Considerations
+- This method requires an active connection to an Ollama server.
+- The list of available models may change over time as models are added, updated, or removed from the server.
+- Large model files may take some time to download and process, so be patient when working with new or updated models.
+- Make sure you have the necessary permissions to access the Ollama server and list the models. 
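A common use of `list` is checking whether a model is already available locally before pulling it. The sketch below is self-contained: it assumes only the `ListResponse` shape shown above (`models` entries with `name`, `modified_at`, `size`), and `hasModel` is a hypothetical helper, not part of the library:

```javascript
// Hypothetical helper: check a ListResponse-shaped object for a model,
// accounting for the implicit ':latest' tag suffix on local model names.
function hasModel(listResponse, name) {
  return listResponse.models.some(
    (m) => m.name === name || m.name === `${name}:latest`
  )
}

// Mocked response in the ListResponse shape described above
const response = {
  models: [
    { name: 'llama2:latest', modified_at: '2024-01-01T00:00:00Z', size: 3825819519 }
  ]
}

console.log(hasModel(response, 'llama2'))  // true
console.log(hasModel(response, 'mistral')) // false
```

In a real application you would pass the object returned by `await ollama.list()` instead of the mock, and fall back to `ollama.pull()` when the check returns false.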
\ No newline at end of file From 6d6d6c8a2f3773373d413cf4d8e05012e68c7ee5 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:26 +0000 Subject: [PATCH 10/20] Update file --- docs/show.md | 37 ++++++++++++++++++++++--------------- 1 file changed, 22 insertions(+), 15 deletions(-) diff --git a/docs/show.md b/docs/show.md index 366911f40..26149859a 100644 --- a/docs/show.md +++ b/docs/show.md @@ -13,33 +13,40 @@ const ollama = new Ollama() ``` ## Parameters -- `request` (ShowRequest): An object containing the following properties: - - `model` (string): The name of the model to retrieve metadata for. - - `system` (string, optional): A system prompt to include in the metadata. - - `template` (string, optional): A custom prompt template. - - `options` (Options, optional): Additional options for the model. +- `request` (object, required): An object containing the following properties: + - `model` (string, required): The name of the model to show metadata for. + - `system` (string, optional): Custom system prompt to use for the model. + - `template` (string, optional): Custom prompt template to use for the model. + - `options` (object, optional): Additional options for the model. ## Return Value The `show` method returns a Promise that resolves to a `ShowResponse` object containing the model's metadata. ## Examples -### Basic usage +1. Basic usage: ```javascript -const metadata = await ollama.show({ model: 'llama2' }) -console.log(metadata) +const modelInfo = await ollama.show({ model: 'llama2' }) +console.log(modelInfo) ``` -### With custom system prompt +2. With custom system prompt: ```javascript -const metadata = await ollama.show({ - model: 'gpt-4', +const modelInfo = await ollama.show({ + model: 'gpt4', system: 'You are a helpful assistant.' }) -console.log(metadata.system) +console.log(modelInfo.system) +``` + +3. 
Retrieving model details: +```javascript +const modelInfo = await ollama.show({ model: 'codellama' }) +console.log(`Model family: ${modelInfo.details.family}`) +console.log(`Parameter size: ${modelInfo.details.parameter_size}`) ``` ## Notes or Considerations -- Ensure you have the correct permissions to access the model's metadata. -- The availability of certain metadata fields may vary depending on the model. -- This method is useful for inspecting model details before using it in other operations. \ No newline at end of file +- Ensure you have the correct permissions to access the model information. +- The availability of certain metadata fields may vary depending on the model and Ollama version. +- This method is useful for inspecting model configurations and understanding their capabilities before use. \ No newline at end of file From 161802eed454c852d96341bb6c3213dcd62909bf Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:27 +0000 Subject: [PATCH 11/20] Update file --- docs/embeddings.md | 60 ++++++++++++++++++++++++++++------------------ 1 file changed, 37 insertions(+), 23 deletions(-) diff --git a/docs/embeddings.md b/docs/embeddings.md index be83e454c..cd5a31b0e 100644 --- a/docs/embeddings.md +++ b/docs/embeddings.md @@ -1,43 +1,57 @@ # embeddings Documentation ## Brief Description -The `embeddings` method converts text prompts into vector representations using a specified model. +The `embeddings` method embeds a text prompt into a vector representation. ## Usage -To use the `embeddings` method, you need to create an instance of the Ollama class and call the `embeddings` method with the required parameters. +To use the `embeddings` method, you need an instance of the Ollama class. 
Here's how you can call it: 
+
+```javascript
+const ollama = new Ollama();
+const result = await ollama.embeddings(request);
+```
 
 ## Parameters
-- `model` (string, required): The name of the model to use for generating embeddings.
-- `prompt` (string, required): The text prompt to be converted into a vector representation.
-- `keep_alive` (string | number, optional): Specifies how long to keep the model loaded in memory.
-- `options` (Partial, optional): Additional options for fine-tuning the embedding process.
+The `embeddings` method takes a single parameter:
+
+- `request` (EmbeddingsRequest): An object containing the following properties:
+  - `model` (string): The name of the model to use for embedding.
+  - `prompt` (string): The text to be embedded.
+  - `keep_alive` (string | number, optional): Duration to keep the model loaded in memory.
+  - `options` (`Partial<Options>`, optional): Additional options for the embedding process.
 
 ## Return Value
-The method returns a Promise that resolves to an `EmbeddingsResponse` object containing:
-- `embedding` (number[]): An array of numbers representing the vector embedding of the input prompt.
+The method returns a Promise that resolves to an `EmbeddingsResponse` object, which contains:
+
+- `embedding` (number[]): An array of numbers representing the vector embedding of the input text.
 
 ## Examples
+
+### Basic Usage
 ```javascript
 const ollama = new Ollama();
-
-// Basic usage
-const response = await ollama.embeddings({
-  model: "text-embedding-ada-002",
-  prompt: "Hello, world!"
-});
+const request = {
+  model: 'text-embedding-ada-002',
+  prompt: 'Hello, world!' 
+}; +const response = await ollama.embeddings(request); console.log(response.embedding); +``` -// With keep_alive option -const response2 = await ollama.embeddings({ - model: "text-embedding-ada-002", - prompt: "Another example", - keep_alive: "5m" -}); -console.log(response2.embedding); +### With Keep Alive Option +```javascript +const ollama = new Ollama(); +const request = { + model: 'text-embedding-ada-002', + prompt: 'Embed this text', + keep_alive: '5m' +}; +const response = await ollama.embeddings(request); +console.log(response.embedding); ``` ## Notes or Considerations - The resulting embedding can be used for various natural language processing tasks, such as semantic search or text classification. - The dimensionality of the embedding vector depends on the model used. -- Ensure you have the necessary model loaded before calling this method. -- The `keep_alive` option can be useful for performance optimization when making multiple embedding requests. \ No newline at end of file +- Ensure you have the necessary model loaded before making the embeddings request. +- The `keep_alive` option can be useful for performance when making multiple embedding requests. \ No newline at end of file From db5209e263fa4e39b4a92de19cd6abf472c6c030 Mon Sep 17 00:00:00 2001 From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com> Date: Wed, 5 Feb 2025 17:17:29 +0000 Subject: [PATCH 12/20] Update file --- docs/chat.md | 66 ++++++++++++++++++---------------------------------- 1 file changed, 23 insertions(+), 43 deletions(-) diff --git a/docs/chat.md b/docs/chat.md index 25c4a2788..7fe37c26b 100644 --- a/docs/chat.md +++ b/docs/chat.md @@ -1,55 +1,49 @@ # chat Documentation ## Brief Description -The `chat` method allows users to interact with an AI model through a conversation-like interface, supporting both text and image inputs. +The `chat` method allows you to interact with an AI model using a conversational interface. 
It supports streaming responses and can handle messages with text and images.
 
 ## Usage
-To use the `chat` method, first import and initialize the Ollama client:
+To use the `chat` method, you'll need to import the Ollama class and create an instance:
 
 ```javascript
-import Ollama from 'ollama-js'
+import { Ollama } from 'ollama'
 
 const ollama = new Ollama()
 ```
 
-Then, you can call the `chat` method with your request:
-
-```javascript
-const response = await ollama.chat({
-  model: 'your-model-name',
-  messages: [{ role: 'user', content: 'Hello, how are you?' }]
-})
-```
-
 ## Parameters
-- `request` (ChatRequest): An object containing:
-  - `model` (string): The name of the model to use.
-  - `messages` (Message[]): An array of message objects representing the conversation.
+- `request` (ChatRequest): An object containing the following properties:
+  - `model` (string, required): The name of the model to use.
+  - `messages` (Message[], optional): An array of message objects representing the conversation history.
   - `stream` (boolean, optional): Whether to stream the response.
   - `format` (string | object, optional): The desired output format.
-  - `options` (Partial, optional): Additional options for the chat.
+  - `keep_alive` (string | number, optional): How long to keep the model loaded in memory.
+  - `tools` (Tool[], optional): An array of tool objects that the model can use.
+  - `options` (`Partial<Options>`, optional): Additional options for the chat session.
 
 ## Return Value
-- Promise>:
-  - If `stream` is false, returns a Promise resolving to a ChatResponse.
-  - If `stream` is true, returns an AbortableAsyncIterator yielding ChatResponse objects.
+- If `stream` is `false`: `Promise<ChatResponse>`
+- If `stream` is `true`: `Promise<AbortableAsyncIterator<ChatResponse>>`
 
 ## Examples
 
-### Basic chat interaction:
+### Basic chat interaction
 ```javascript
 const response = await ollama.chat({
   model: 'llama2',
-  messages: [{ role: 'user', content: 'What is the capital of France?' 
}]
+  messages: [{ role: 'user', content: 'Hello, how are you?' }]
 })
 console.log(response.message.content)
 ```
 
-### Streaming chat response:
+### Streaming chat with images
 ```javascript
 const stream = await ollama.chat({
   model: 'llama2',
-  messages: [{ role: 'user', content: 'Tell me a short story.' }],
+  messages: [
+    { role: 'user', content: "What's in this image?", images: ['base64_encoded_image_data'] }
+  ],
   stream: true
 })
 
@@ -58,23 +52,9 @@ for await (const chunk of stream) {
 }
 ```
 
-### Chat with image input:
-```javascript
-const response = await ollama.chat({
-  model: 'llava',
-  messages: [
-    {
-      role: 'user',
-      content: 'What's in this image?',
-      images: ['data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/...']
-    }
-  ]
-})
-console.log(response.message.content)
-```
-
-## Notes and Considerations
-- Ensure you have the correct model loaded before making chat requests.
-- Image inputs should be provided as base64-encoded strings or file paths.
-- For long conversations, consider managing context efficiently to avoid hitting token limits.
-- The streaming option is useful for real-time interactions and handling long responses. \ No newline at end of file
+## Notes or Considerations
+- The `chat` method can handle both text and image inputs.
+- When using images, they should be provided as base64 encoded strings or Uint8Arrays.
+- Streaming responses allow for real-time output, which is useful for long responses or interactive applications.
+- The method automatically encodes images before sending the request.
+- You can abort an ongoing streamed request using the `abort()` method on the returned AbortableAsyncIterator. 
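Because the API is stateless, multi-turn conversations work by appending each reply to the `messages` array before the next call. The sketch below is self-contained: `mockChat` is a hypothetical stand-in for `ollama.chat()` so the history-management pattern can be shown without a running server:

```javascript
// Hypothetical stand-in for ollama.chat(); echoes the last user message.
function mockChat({ messages }) {
  const last = messages[messages.length - 1]
  return { message: { role: 'assistant', content: `You said: ${last.content}` } }
}

const messages = [{ role: 'user', content: 'Hello' }]

const reply = mockChat({ messages })
messages.push(reply.message) // keep the assistant turn in the history
messages.push({ role: 'user', content: 'And what did I say before that?' })

console.log(messages.length)     // 3
console.log(messages[1].content) // You said: Hello
```

With the real client, replace `mockChat` with `await ollama.chat({ model, messages })`; the append-before-next-call pattern is the same, and trimming old turns from the front of the array is one simple way to stay under the model's context limit.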
\ No newline at end of file

From ec4cb0cf8e1545e28cd2650ba12221a7f939e58c Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Wed, 5 Feb 2025 17:21:17 +0000
Subject: [PATCH 13/20] md file

---
 ...-docs-web-editor-features-and-use-cases.md | 72 +++++++++++++++++++
 1 file changed, 72 insertions(+)
 create mode 100644 docs/dev-docs-web-editor-features-and-use-cases.md

diff --git a/docs/dev-docs-web-editor-features-and-use-cases.md b/docs/dev-docs-web-editor-features-and-use-cases.md
new file mode 100644
index 000000000..31ecffd79
--- /dev/null
+++ b/docs/dev-docs-web-editor-features-and-use-cases.md
@@ -0,0 +1,72 @@
+
+ # Dev-Docs Web Editor: Core Features and Use Cases
+
+## Overview
+
+The Dev-Docs web editor is a comprehensive documentation tool designed to streamline the process of creating, editing, and managing documentation. It offers a range of features that cater to various developer roles, enhancing collaboration and efficiency in documentation workflows.
+
+## Core Features
+
+1. Rich Text and Markdown Editing
+2. Draft Management
+3. AI-Assisted Content Generation
+4. Image and Table Insertion
+5. Frontmatter Editing
+6. GitHub Integration
+7. Branch Management
+8. Automated Documentation Workflows
+9. Content Auditing
+10. Raw Markdown Viewing
+
+## Use Cases for Different Developer Roles
+
+### Documentation Specialists
+
+- Create and organize comprehensive documentation
+- Utilize AI-assisted content generation for efficiency
+- Manage multiple drafts and versions
+- Audit existing documentation for consistency and completeness
+
+### Software Developers
+
+- Document code changes directly from the codebase
+- Generate technical documentation using AI tools
+- Collaborate on documentation through GitHub integration
+- Maintain up-to-date API documentation
+
+### Project Managers
+
+- Oversee documentation progress across teams
+- Ensure consistency in documentation style and structure
+- Manage documentation versions aligned with project milestones
+- Facilitate collaboration between technical and non-technical team members
+
+### UX/UI Designers
+
+- Add visual elements to documentation (images, diagrams)
+- Collaborate on user-facing documentation
+- Ensure consistency in design-related documentation
+
+### DevOps Engineers
+
+- Document deployment processes and configurations
+- Maintain changelogs through automated workflows
+- Integrate documentation updates with CI/CD pipelines
+
+### Quality Assurance Testers
+
+- Document test cases and procedures
+- Collaborate on bug reports and feature documentation
+- Ensure documentation accuracy for user-facing features
+
+### Technical Writers
+
+- Leverage AI tools for content generation and editing
+- Manage multiple document versions across different branches
+- Collaborate with developers to ensure technical accuracy
+- Conduct regular audits of documentation quality and completeness
+
+By catering to these diverse roles, the Dev-Docs web editor serves as a central hub for documentation efforts, fostering collaboration, maintaining consistency, and improving the overall quality of both internal and user-facing documentation.
+
+
\ No newline at end of file

From 910088fe439bcc811f946cbcb84c5038b24a4f5c Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Wed, 5 Feb 2025 20:08:37 +0000
Subject: [PATCH 14/20] Delete file

---
 docs/troubleshooting.md | 68 -----------------------------------------
 1 file changed, 68 deletions(-)
 delete mode 100644 docs/troubleshooting.md

diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
deleted file mode 100644
index 300e1e3ce..000000000
--- a/docs/troubleshooting.md
+++ /dev/null
@@ -1,68 +0,0 @@
-# Troubleshooting Guide
-
-This guide covers common issues you may encounter when using Dev-Docs and how to resolve them.
-
-## Installation Issues
-
-### VS Code Extension Not Installing
-
-If you're having trouble installing the Dev-Docs VS Code extension:
-
-1. Check your internet connection
-2. Ensure you have the latest version of VS Code installed
-3. Try uninstalling and reinstalling the extension
-4. Check VS Code logs for any error messages
-
-## Authentication Problems
-
-### Unable to Sign In
-
-If you can't sign in to Dev-Docs:
-
-1. Verify your username and password are correct
-2. Clear your browser cache and cookies
-3. Ensure you're using a supported browser (latest Chrome, Firefox, or Safari)
-4. Check if Dev-Docs is experiencing any known service disruptions
-
-## Generation Issues
-
-### AI Not Generating Documentation
-
-If the AI isn't generating documentation:
-
-1. Check your API key is valid and has sufficient credits
-2. Ensure you've selected the correct files/folders for context
-3. Try simplifying your generation prompt
-4. Check your internet connection
-
-### Poor Quality Generation Results
-
-If you're getting low quality AI-generated content:
-
-1. Provide more context by selecting relevant files/folders
-2. Be more specific in your generation prompt
-3. Try breaking down complex tasks into smaller, focused generations
-
-## GitHub Integration Problems
-
-### Unable to Connect to GitHub
-
-If you can't connect Dev-Docs to your GitHub repository:
-
-1. Verify your GitHub credentials are correct
-2. Ensure you've granted the necessary permissions to Dev-Docs
-3. Check if there are any GitHub service disruptions
-4. Try revoking and re-granting access to Dev-Docs
-
-### Workflow Not Triggering
-
-If your GitHub workflow isn't triggering:
-
-1. Check your dev-docs.json configuration
-2. Ensure you've pushed changes to the correct branch
-3. Verify the workflow file is in the .github/workflows directory
-4. Check GitHub Actions logs for any error messages
-
-## Still Need Help?
-
-If you're still experiencing issues after trying these troubleshooting steps, please contact our support team at support@dev-docs.com or open an issue on our GitHub repository.
\ No newline at end of file

From 0f35deb0c1e93faefc8cc97505e7a7044a9d40d6 Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Wed, 5 Feb 2025 20:08:41 +0000
Subject: [PATCH 15/20] Delete file

---
 docs/show.md | 52 ----------------------------------------------------
 1 file changed, 52 deletions(-)
 delete mode 100644 docs/show.md

diff --git a/docs/show.md b/docs/show.md
deleted file mode 100644
index 26149859a..000000000
--- a/docs/show.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# show Documentation
-
-## Brief Description
-The `show` method retrieves metadata for a specified model in the Ollama system.
-
-## Usage
-To use the `show` method, you need to import the Ollama class and create an instance:
-
-```javascript
-import { Ollama } from 'ollama'
-
-const ollama = new Ollama()
-```
-
-## Parameters
-- `request` (object, required): An object containing the following properties:
-  - `model` (string, required): The name of the model to show metadata for.
-  - `system` (string, optional): Custom system prompt to use for the model.
-  - `template` (string, optional): Custom prompt template to use for the model.
-  - `options` (object, optional): Additional options for the model.
-
-## Return Value
-The `show` method returns a Promise that resolves to a `ShowResponse` object containing the model's metadata.
-
-## Examples
-
-1. Basic usage:
-```javascript
-const modelInfo = await ollama.show({ model: 'llama2' })
-console.log(modelInfo)
-```
-
-2. With custom system prompt:
-```javascript
-const modelInfo = await ollama.show({
-  model: 'gpt4',
-  system: 'You are a helpful assistant.'
-})
-console.log(modelInfo.system)
-```
-
-3. Retrieving model details:
-```javascript
-const modelInfo = await ollama.show({ model: 'codellama' })
-console.log(`Model family: ${modelInfo.details.family}`)
-console.log(`Parameter size: ${modelInfo.details.parameter_size}`)
-```
-
-## Notes or Considerations
-- Ensure you have the correct permissions to access the model information.
-- The availability of certain metadata fields may vary depending on the model and Ollama version.
-- This method is useful for inspecting model configurations and understanding their capabilities before use.
\ No newline at end of file

From adfa1b2d0daef3302cf8781cecf25c5c410d4354 Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Sat, 8 Feb 2025 22:28:36 +0000
Subject: [PATCH 16/20] Create file

---
 docs/design/documentation-ux-guidelines.md | 55 ++++++++++++++++++++++
 1 file changed, 55 insertions(+)
 create mode 100644 docs/design/documentation-ux-guidelines.md

diff --git a/docs/design/documentation-ux-guidelines.md b/docs/design/documentation-ux-guidelines.md
new file mode 100644
index 000000000..323ced320
--- /dev/null
+++ b/docs/design/documentation-ux-guidelines.md
@@ -0,0 +1,55 @@
+# Documentation UX Guidelines
+
+## 1. Organization and Navigation
+
+- Use a clear and logical structure for your documentation
+- Implement an intuitive navigation system with search functionality
+- Provide a table of contents and/or sidebar navigation
+- Use consistent headers and subheaders to organize content
+
+## 2. Content Presentation
+
+- Use concise and clear language
+- Break content into easily digestible chunks
+- Use bullet points and numbered lists for clarity
+- Include relevant code examples and screenshots
+
+## 3. Interactivity
+
+- Implement collapsible sections for lengthy content
+- Use tabs to organize related information
+- Include interactive code samples where appropriate
+- Provide a feedback mechanism for users to report issues or suggest improvements
+
+## 4. Responsive Design
+
+- Ensure documentation is readable on mobile devices
+- Use responsive layouts and images
+- Implement a mobile-friendly navigation system
+
+## 5. Accessibility
+
+- Use proper heading structure (H1, H2, etc.)
+- Include alt text for images
+- Ensure sufficient color contrast for text
+- Make sure all interactive elements are keyboard accessible
+
+## 6. Search and Discoverability
+
+- Implement a robust search functionality
+- Use descriptive page titles and meta descriptions
+- Include a sitemap for better indexing
+
+## 7. Version Control
+
+- Clearly indicate the version of the software being documented
+- Provide access to documentation for previous versions
+- Highlight recent changes or updates
+
+## 8. User Feedback and Community
+
+- Include a commenting system or discussion forum
+- Provide links to related resources or community support
+- Offer ways for users to contribute to documentation improvements
+
+By following these guidelines, UX designers can create user-friendly documentation interfaces that enhance the overall user experience and make information more accessible and understandable.
\ No newline at end of file

From df719266243c3711967528a2b9f4b31ed2dbb2a1 Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Sat, 8 Feb 2025 22:29:07 +0000
Subject: [PATCH 17/20] Create file

---
 .../automating-documentation-updates.md       | 35 +++++++++++++++++++
 1 file changed, 35 insertions(+)
 create mode 100644 docs/devops/automating-documentation-updates.md

diff --git a/docs/devops/automating-documentation-updates.md b/docs/devops/automating-documentation-updates.md
new file mode 100644
index 000000000..6e3763dab
--- /dev/null
+++ b/docs/devops/automating-documentation-updates.md
@@ -0,0 +1,35 @@
+# Automating Documentation Updates
+
+Automating documentation processes is crucial for maintaining up-to-date and consistent documentation. Here are some key strategies for DevOps engineers to automate documentation updates:
+
+## 1. Version Control Integration
+
+- Use Git hooks to trigger documentation updates on code commits
+- Implement CI/CD pipelines that automatically rebuild and deploy documentation sites
+
+## 2. API Documentation Generation
+
+- Utilize tools like Swagger or OpenAPI to auto-generate API documentation from code annotations
+- Set up workflows to update API docs on new releases or branch merges
+
+## 3. Automated Testing for Documentation
+
+- Implement doc tests to verify code examples in documentation are correct
+- Use linters to check documentation formatting and structure
+
+## 4. Dynamic Documentation
+
+- Use tools that can pull data directly from your codebase or databases to populate documentation
+- Implement templating systems for consistent doc generation
+
+## 5. Automated Review Process
+
+- Set up bots to automatically assign reviewers for documentation PRs
+- Use AI-powered tools to suggest improvements or catch inconsistencies
+
+## 6. Monitoring and Alerts
+
+- Implement monitoring to detect outdated or broken links in documentation
+- Set up alerts for when documentation falls out of sync with code changes
+
+By implementing these automation strategies, DevOps teams can ensure documentation remains accurate, up-to-date, and valuable for both internal teams and external users.
\ No newline at end of file

From ff277fd53abcaeb392c96fc4613d453938759c9b Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Sat, 8 Feb 2025 22:31:27 +0000
Subject: [PATCH 18/20] Update file

---
 docs/pull.md | 40 ++++++++++++++++++++++------------------
 1 file changed, 22 insertions(+), 18 deletions(-)

diff --git a/docs/pull.md b/docs/pull.md
index 289b8e0ec..aa8c250e0 100644
--- a/docs/pull.md
+++ b/docs/pull.md
@@ -1,46 +1,50 @@
 # pull Documentation
 
 ## Brief Description
-The `pull` method allows you to download a model from the Ollama registry, with optional streaming support.
+The `pull` method downloads a model from the Ollama registry, with optional streaming of progress updates.
 
 ## Usage
 To use the `pull` method, you need to create an instance of the Ollama class and then call the `pull` method on it.
 
 ```javascript
-import Ollama from 'ollama'
+import { Ollama } from 'ollama'
 
 const ollama = new Ollama()
```

## Parameters
-- `request` (Required): An object with the following properties:
-  - `model` (Required): String - The name of the model to pull.
-  - `stream` (Optional): Boolean - Whether to stream the response.
-  - `insecure` (Optional): Boolean - Whether to allow insecure connections.
+The `pull` method accepts a single parameter:
+
+- `request` (object, required): An object with the following properties:
+  - `model` (string, required): The name of the model to pull.
+  - `stream` (boolean, optional): If true, returns a stream of progress updates. Default is false.
+  - `insecure` (boolean, optional): If true, allows insecure connections for pulling the model. Default is false.
 
 ## Return Value
-The `pull` method returns a Promise that resolves to either:
-- A `ProgressResponse` object if `stream` is false or not specified.
-- An `AbortableAsyncIterator` if `stream` is true.
+The `pull` method returns a Promise that resolves to:
+- If `stream` is false: A `ProgressResponse` object with information about the pull operation.
+- If `stream` is true: An `AbortableAsyncIterator` that yields progress updates.
 
 ## Examples
 
-### Basic usage:
+### Basic usage (non-streaming):
 ```javascript
+const ollama = new Ollama()
 const response = await ollama.pull({ model: 'llama2' })
-console.log(response.status)
+console.log(response)
 ```
 
-### Streaming the pull process:
+### Streaming progress updates:
 ```javascript
+const ollama = new Ollama()
 const stream = await ollama.pull({
   model: 'llama2',
   stream: true
 })
 
-for await (const chunk of stream) {
-  console.log(chunk.status, chunk.completed, '/', chunk.total)
+for await (const update of stream) {
+  console.log(update)
 }
 ```
 
 ## Notes or Considerations
-- The `pull` method is useful for downloading models to your local Ollama instance.
-- Streaming the response allows you to track the progress of the download.
-- Be cautious when using the `insecure` option, as it may expose you to security risks.
-- The method uses the Ollama API endpoint `/api/pull` under the hood.
\ No newline at end of file
+- The `pull` method is asynchronous and should be used with `await` or `.then()`.
+- When using the streaming option, make sure to handle the stream properly to avoid memory leaks.
+- If you need to cancel an ongoing pull operation, you can use the `abort()` method on the `AbortableAsyncIterator` returned when streaming.
+- The `insecure` option should be used with caution, as it may pose security risks.
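+The streamed updates carry `status`, `completed`, and `total` fields (as the earlier `chunk.completed`/`chunk.total` example showed). A small helper for rendering them as a percentage — illustrative only, with a hypothetical name, not part of the ollama-js API:
+
+```javascript
+// Illustrative helper (not part of ollama-js): renders a streamed pull
+// update as "status NN%", guarding against a missing or zero total.
+function formatPullProgress(update) {
+  if (!update.total) {
+    return update.status
+  }
+  const percent = Math.round((update.completed / update.total) * 100)
+  return `${update.status} ${percent}%`
+}
+
+// Example with a fake progress update of the documented shape:
+console.log(formatPullProgress({ status: 'downloading', completed: 512, total: 1024 }))
+// → downloading 50%
+```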
\ No newline at end of file

From 3fecee9008acc102af45b2d2046f9ed17a8a8ec2 Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Sat, 8 Feb 2025 22:31:30 +0000
Subject: [PATCH 19/20] Create file

---
 docs/create.md | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 docs/create.md

diff --git a/docs/create.md b/docs/create.md
new file mode 100644
index 000000000..08bb23d0c
--- /dev/null
+++ b/docs/create.md
@@ -0,0 +1,64 @@
+# create Documentation
+
+## Brief Description
+The `create` method creates a new model from a stream of data or a modelfile.
+
+## Usage
+To use the `create` method, you need to import the Ollama class and instantiate it. Then, you can call the `create` method on the instance.
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const response = await ollama.create(createRequest)
+```
+
+## Parameters
+The `create` method accepts a `CreateRequest` object with the following properties:
+
+- `model` (string, required): The name of the model to create.
+- `path` (string, optional): The path to the modelfile.
+- `modelfile` (string, optional): The content of the modelfile.
+- `quantize` (string, optional): The quantization level to apply.
+- `stream` (boolean, optional): Whether to stream the response.
+
+You must provide either `path` or `modelfile`, but not both.
+
+## Return Value
+The `create` method returns a Promise that resolves to either:
+
+- A `ProgressResponse` object if `stream` is false or not specified.
+- An `AbortableAsyncIterator` if `stream` is true.
+
+## Examples
+
+1. Creating a model from a modelfile path:
+
+```javascript
+const response = await ollama.create({
+  model: 'my-new-model',
+  path: '/path/to/modelfile'
+})
+```
+
+2. Creating a model with streaming enabled:
+
+```javascript
+const stream = await ollama.create({
+  model: 'my-streaming-model',
+  modelfile: 'FROM llama2\n...',
+  stream: true
+})
+
+for await (const chunk of stream) {
+  console.log(chunk.status)
+}
+```
+
+## Notes or Considerations
+
+- The `create` method will automatically parse the modelfile and replace the FROM and ADAPTER commands with the corresponding blob hashes.
+- If using a `path`, the method will read the file content and process it.
+- When streaming is enabled, you can iterate over the response to get progress updates.
+- Be cautious when creating large models, as it may consume significant resources.
+- Ensure you have the necessary permissions to read the modelfile and create models on the Ollama server.
\ No newline at end of file

From f2c3c373b90581acccd43a7c109149e8f98e06ea Mon Sep 17 00:00:00 2001
From: "dev-docs-github-app[bot]" <178952281+dev-docs-github-app[bot]@users.noreply.github.com>
Date: Sat, 8 Feb 2025 22:31:32 +0000
Subject: [PATCH 20/20] Update file

---
 docs/chat.md | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/docs/chat.md b/docs/chat.md
index 7fe37c26b..71264a852 100644
--- a/docs/chat.md
+++ b/docs/chat.md
@@ -1,30 +1,31 @@
 # chat Documentation
 
 ## Brief Description
-The `chat` method allows you to interact with an AI model using a conversational interface. It supports streaming responses and can handle messages with text and images.
+The `chat` method allows users to interact with an AI model in a conversational manner, supporting text and image inputs.
 
 ## Usage
-To use the `chat` method, you'll need to import the Ollama class and create an instance:
+To use the `chat` method, you need to create an instance of the Ollama class and then call the `chat` method with the appropriate request object.
 
 ```javascript
 import { Ollama } from 'ollama'
 
 const ollama = new Ollama()
+const response = await ollama.chat(chatRequest)
 ```
 
 ## Parameters
 - `request` (ChatRequest): An object containing the following properties:
-  - `model` (string, required): The name of the model to use.
+  - `model` (string, required): The name of the model to use for the chat.
   - `messages` (Message[], optional): An array of message objects representing the conversation history.
-  - `stream` (boolean, optional): Whether to stream the response.
+  - `stream` (boolean, optional): Whether to stream the response. Default is `false`.
   - `format` (string | object, optional): The desired output format.
-  - `keep_alive` (string | number, optional): How long to keep the model loaded in memory.
-  - `tools` (Tool[], optional): An array of tool objects that the model can use.
-  - `options` (Partial<Options>, optional): Additional options for the chat session.
+  - `keep_alive` (string | number, optional): Duration to keep the model loaded in memory.
+  - `tools` (Tool[], optional): Array of tools available for the model to use.
+  - `options` (Partial<Options>, optional): Additional options for the chat.
 
 ## Return Value
-- If `stream` is `false`: Promise<ChatResponse>
-- If `stream` is `true`: Promise<AbortableAsyncIterator<ChatResponse>>
+- If `stream` is `false`: Returns a Promise that resolves to a `ChatResponse` object.
+- If `stream` is `true`: Returns a Promise that resolves to an `AbortableAsyncIterator`.
 ## Examples
 
@@ -37,12 +38,13 @@ const response = await ollama.chat({
 console.log(response.message.content)
 ```
 
-### Streaming chat with images
+### Streaming chat with image input
 ```javascript
+const imageData = await fs.promises.readFile('image.jpg') // assumes: import fs from 'node:fs'
 const stream = await ollama.chat({
-  model: 'llama2',
+  model: 'llava',
   messages: [
-    { role: 'user', content: "What's in this image?", images: ['base64_encoded_image_data'] }
+    { role: 'user', content: "What's in this image?", images: [imageData] }
   ],
   stream: true
 })
@@ -53,8 +55,7 @@ for await (const chunk of stream) {
 ```
 
 ## Notes or Considerations
-- The `chat` method can handle both text and image inputs.
-- When using images, they should be provided as base64 encoded strings or Uint8Arrays.
-- Streaming responses allow for real-time output, which is useful for long responses or interactive applications.
-- The method automatically encodes images before sending the request.
-- You can abort an ongoing streamed request using the `abort()` method on the returned AbortableAsyncIterator.
\ No newline at end of file
+- The `chat` method supports both text and image inputs. Images can be provided as Uint8Arrays or base64 encoded strings.
+- When using the streaming option, make sure to handle the async iterator properly to receive real-time responses.
+- The method automatically encodes images to base64 if they are provided as Uint8Arrays.
+- Be aware of the model's capabilities when using different input types or tools.
\ No newline at end of file
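The chat notes state that images may be passed as Uint8Arrays or base64 strings, and that Uint8Arrays are base64-encoded before the request is sent. A minimal sketch of that conversion step in Node — our own illustration of the behavior described, not ollama-js source code:

```javascript
// Sketch of the encoding step the chat docs describe: a raw image buffer
// (Uint8Array) becomes the base64 string carried in the request's `images` field.
function encodeImage(image) {
  // Already a base64 string: pass through unchanged.
  if (typeof image === 'string') {
    return image
  }
  return Buffer.from(image).toString('base64')
}

const bytes = new Uint8Array([72, 101, 108, 108, 111]) // the bytes of "Hello"
console.log(encodeImage(bytes)) // → SGVsbG8=
console.log(encodeImage('SGVsbG8=')) // strings pass through unchanged
```

In practice the client performs this conversion internally, so either form can be passed to `chat` directly.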