diff --git a/docs/chat.md b/docs/chat.md
new file mode 100644
index 000000000..71264a852
--- /dev/null
+++ b/docs/chat.md
@@ -0,0 +1,61 @@
+# chat Documentation
+
+## Brief Description
+The `chat` method allows users to interact with an AI model in a conversational manner, supporting text and image inputs.
+
+## Usage
+To use the `chat` method, you need to create an instance of the Ollama class and then call the `chat` method with the appropriate request object.
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const response = await ollama.chat(chatRequest)
+```
+
+## Parameters
+- `request` (ChatRequest): An object containing the following properties:
+  - `model` (string, required): The name of the model to use for the chat.
+  - `messages` (Message[], optional): An array of message objects representing the conversation history.
+  - `stream` (boolean, optional): Whether to stream the response. Default is `false`.
+  - `format` (string | object, optional): The desired output format.
+  - `keep_alive` (string | number, optional): Duration to keep the model loaded in memory.
+  - `tools` (Tool[], optional): Array of tools available for the model to use.
+  - `options` (`Partial<Options>`, optional): Additional options for the chat.
+
+## Return Value
+- If `stream` is `false`: Returns a Promise that resolves to a `ChatResponse` object.
+- If `stream` is `true`: Returns a Promise that resolves to an `AbortableAsyncIterator` that yields `ChatResponse` chunks.
+
+## Examples
+
+### Basic chat interaction
+```javascript
+const response = await ollama.chat({
+  model: 'llama2',
+  messages: [{ role: 'user', content: 'Hello, how are you?' }]
+})
+console.log(response.message.content)
+```
+
+### Streaming chat with image input
+```javascript
+import fs from 'node:fs'
+
+const imageData = await fs.promises.readFile('image.jpg')
+const stream = await ollama.chat({
+  model: 'llava',
+  messages: [
+    { role: 'user', content: "What's in this image?", images: [imageData] }
+  ],
+  stream: true
+})
+
+for await (const chunk of stream) {
+  process.stdout.write(chunk.message.content)
+}
+```
+
+## Notes or Considerations
+- The `chat` method supports both text and image inputs. Images can be provided as Uint8Arrays or base64 encoded strings.
+- When using the streaming option, make sure to handle the async iterator properly to receive real-time responses.
+- The method automatically encodes images to base64 if they are provided as Uint8Arrays.
+- Be aware of the model's capabilities when using different input types or tools.
\ No newline at end of file
diff --git a/docs/create.md b/docs/create.md
new file mode 100644
index 000000000..08bb23d0c
--- /dev/null
+++ b/docs/create.md
@@ -0,0 +1,64 @@
+# create Documentation
+
+## Brief Description
+The `create` method creates a new model on the Ollama server from a modelfile, which can be provided directly or as a file path.
+
+## Usage
+To use the `create` method, you need to import the Ollama class and instantiate it. Then, you can call the `create` method on the instance.
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const response = await ollama.create(createRequest)
+```
+
+## Parameters
+The `create` method accepts a `CreateRequest` object with the following properties:
+
+- `model` (string, required): The name of the model to create.
+- `path` (string, optional): The path to the modelfile.
+- `modelfile` (string, optional): The content of the modelfile.
+- `quantize` (string, optional): The quantization level to apply.
+- `stream` (boolean, optional): Whether to stream the response.
+ +You must provide either `path` or `modelfile`, but not both. + +## Return Value +The `create` method returns a Promise that resolves to either: + +- A `ProgressResponse` object if `stream` is false or not specified. +- An `AbortableAsyncIterator` if `stream` is true. + +## Examples + +1. Creating a model from a modelfile path: + +```javascript +const response = await ollama.create({ + model: 'my-new-model', + path: '/path/to/modelfile' +}) +``` + +2. Creating a model with streaming enabled: + +```javascript +const stream = await ollama.create({ + model: 'my-streaming-model', + modelfile: 'FROM llama2\n...', + stream: true +}) + +for await (const chunk of stream) { + console.log(chunk.status) +} +``` + +## Notes or Considerations + +- The `create` method will automatically parse the modelfile and replace the FROM and ADAPTER commands with the corresponding blob hashes. +- If using a `path`, the method will read the file content and process it. +- When streaming is enabled, you can iterate over the response to get progress updates. +- Be cautious when creating large models, as it may consume significant resources. +- Ensure you have the necessary permissions to read the modelfile and create models on the Ollama server. \ No newline at end of file diff --git a/docs/design/documentation-ux-guidelines.md b/docs/design/documentation-ux-guidelines.md new file mode 100644 index 000000000..323ced320 --- /dev/null +++ b/docs/design/documentation-ux-guidelines.md @@ -0,0 +1,55 @@ +# Documentation UX Guidelines + +## 1. Organization and Navigation + +- Use a clear and logical structure for your documentation +- Implement an intuitive navigation system with search functionality +- Provide a table of contents and/or sidebar navigation +- Use consistent headers and subheaders to organize content + +## 2. Content Presentation + +- Use concise and clear language +- Break content into easily digestible chunks +- Use bullet points and numbered lists for clarity +- Include relevant code examples and screenshots + +## 3. Interactivity + +- Implement collapsible sections for lengthy content +- Use tabs to organize related information +- Include interactive code samples where appropriate +- Provide a feedback mechanism for users to report issues or suggest improvements + +## 4. Responsive Design + +- Ensure documentation is readable on mobile devices +- Use responsive layouts and images +- Implement a mobile-friendly navigation system + +## 5. Accessibility + +- Use proper heading structure (H1, H2, etc.) +- Include alt text for images +- Ensure sufficient color contrast for text +- Make sure all interactive elements are keyboard accessible + +## 6. Search and Discoverability + +- Implement a robust search functionality +- Use descriptive page titles and meta descriptions +- Include a sitemap for better indexing + +## 7. Version Control + +- Clearly indicate the version of the software being documented +- Provide access to documentation for previous versions +- Highlight recent changes or updates + +## 8. User Feedback and Community + +- Include a commenting system or discussion forum +- Provide links to related resources or community support +- Offer ways for users to contribute to documentation improvements + +By following these guidelines, UX designers can create user-friendly documentation interfaces that enhance the overall user experience and make information more accessible and understandable. 
\ No newline at end of file diff --git a/docs/dev-docs-web-editor-features-and-use-cases.md b/docs/dev-docs-web-editor-features-and-use-cases.md new file mode 100644 index 000000000..31ecffd79 --- /dev/null +++ b/docs/dev-docs-web-editor-features-and-use-cases.md @@ -0,0 +1,72 @@ + + + # Dev-Docs Web Editor: Core Features and Use Cases + +## Overview + +The Dev-Docs web editor is a comprehensive documentation tool designed to streamline the process of creating, editing, and managing documentation. It offers a range of features that cater to various developer roles, enhancing collaboration and efficiency in documentation workflows. + +## Core Features + +1. Rich Text and Markdown Editing +2. Draft Management +3. AI-Assisted Content Generation +4. Image and Table Insertion +5. Frontmatter Editing +6. GitHub Integration +7. Branch Management +8. Automated Documentation Workflows +9. Content Auditing +10. Raw Markdown Viewing + +## Use Cases for Different Developer Roles + +### Documentation Specialists + +- Create and organize comprehensive documentation +- Utilize AI-assisted content generation for efficiency +- Manage multiple drafts and versions +- Audit existing documentation for consistency and completeness + +### Software Developers + +- Document code changes directly from the codebase +- Generate technical documentation using AI tools +- Collaborate on documentation through GitHub integration +- Maintain up-to-date API documentation + +### Project Managers + +- Oversee documentation progress across teams +- Ensure consistency in documentation style and structure +- Manage documentation versions aligned with project milestones +- Facilitate collaboration between technical and non-technical team members + +### UX/UI Designers + +- Add visual elements to documentation (images, diagrams) +- Collaborate on user-facing documentation +- Ensure consistency in design-related documentation + +### DevOps Engineers + +- Document deployment processes and configurations +- Maintain changelogs through automated workflows +- Integrate documentation updates with CI/CD pipelines + +### Quality Assurance Testers + +- Document test cases and procedures +- Collaborate on bug reports and feature documentation +- Ensure documentation accuracy for user-facing features + +### Technical Writers + +- Leverage AI tools for content generation and editing +- Manage multiple document versions across different branches +- Collaborate with developers to ensure technical accuracy +- Conduct regular audits of documentation quality and completeness + +By catering to these diverse roles, the Dev-Docs web editor serves as a central hub for documentation efforts, fostering collaboration, maintaining consistency, and improving the overall quality of both internal and user-facing documentation. + + \ No newline at end of file diff --git a/docs/devops/automating-documentation-updates.md b/docs/devops/automating-documentation-updates.md new file mode 100644 index 000000000..6e3763dab --- /dev/null +++ b/docs/devops/automating-documentation-updates.md @@ -0,0 +1,35 @@ +# Automating Documentation Updates + +Automating documentation processes is crucial for maintaining up-to-date and consistent documentation. Here are some key strategies for DevOps engineers to automate documentation updates: + +## 1. Version Control Integration + +- Use Git hooks to trigger documentation updates on code commits +- Implement CI/CD pipelines that automatically rebuild and deploy documentation sites + +## 2. 
API Documentation Generation
+
+- Utilize tools like Swagger or OpenAPI to auto-generate API documentation from code annotations
+- Set up workflows to update API docs on new releases or branch merges
+
+## 3. Automated Testing for Documentation
+
+- Implement doc tests to verify code examples in documentation are correct
+- Use linters to check documentation formatting and structure
+
+## 4. Dynamic Documentation
+
+- Use tools that can pull data directly from your codebase or databases to populate documentation
+- Implement templating systems for consistent doc generation
+
+## 5. Automated Review Process
+
+- Set up bots to automatically assign reviewers for documentation PRs
+- Use AI-powered tools to suggest improvements or catch inconsistencies
+
+## 6. Monitoring and Alerts
+
+- Implement monitoring to detect outdated or broken links in documentation
+- Set up alerts for when documentation falls out of sync with code changes
+
+By implementing these automation strategies, DevOps teams can ensure documentation remains accurate, up-to-date, and valuable for both internal teams and external users.
\ No newline at end of file
diff --git a/docs/embeddings.md b/docs/embeddings.md
new file mode 100644
index 000000000..cd5a31b0e
--- /dev/null
+++ b/docs/embeddings.md
@@ -0,0 +1,57 @@
+# embeddings Documentation
+
+## Brief Description
+The `embeddings` method embeds a text prompt into a vector representation.
+
+## Usage
+To use the `embeddings` method, you need an instance of the Ollama class. Here's how you can call it:
+
+```javascript
+import { Ollama } from 'ollama';
+
+const ollama = new Ollama();
+const result = await ollama.embeddings(request);
+```
+
+## Parameters
+The `embeddings` method takes a single parameter:
+
+- `request` (EmbeddingsRequest): An object containing the following properties:
+  - `model` (string): The name of the model to use for embedding.
+  - `prompt` (string): The text to be embedded.
+  - `keep_alive` (string | number, optional): Duration to keep the model loaded in memory.
+  - `options` (`Partial<Options>`, optional): Additional options for the embedding process.
+
+## Return Value
+The method returns a Promise that resolves to an `EmbeddingsResponse` object, which contains:
+
+- `embedding` (number[]): An array of numbers representing the vector embedding of the input text.
+
+## Examples
+
+### Basic Usage
+```javascript
+const ollama = new Ollama();
+const request = {
+  model: 'nomic-embed-text',
+  prompt: 'Hello, world!'
+};
+const response = await ollama.embeddings(request);
+console.log(response.embedding);
+```
+
+### With Keep Alive Option
+```javascript
+const ollama = new Ollama();
+const request = {
+  model: 'nomic-embed-text',
+  prompt: 'Embed this text',
+  keep_alive: '5m'
+};
+const response = await ollama.embeddings(request);
+console.log(response.embedding);
+```
+
+## Notes or Considerations
+- The resulting embedding can be used for various natural language processing tasks, such as semantic search or text classification (see the similarity sketch below).
+- The dimensionality of the embedding vector depends on the model used.
+- Ensure you have the necessary model loaded before making the embeddings request.
+- The `keep_alive` option can be useful for performance when making multiple embedding requests.
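+
+The semantic-search note above can be made concrete with a small sketch. The `cosineSimilarity` helper below is illustrative (it is not part of the library), and it reuses the `ollama` instance and the `nomic-embed-text` model from the examples above, assuming both embeddings come from the same model:
+
+```javascript
+// Cosine similarity between two embedding vectors of equal length.
+function cosineSimilarity(a, b) {
+  let dot = 0;
+  let normA = 0;
+  let normB = 0;
+  for (let i = 0; i < a.length; i++) {
+    dot += a[i] * b[i];
+    normA += a[i] * a[i];
+    normB += b[i] * b[i];
+  }
+  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
+}
+
+const first = await ollama.embeddings({ model: 'nomic-embed-text', prompt: 'Hello, world!' });
+const second = await ollama.embeddings({ model: 'nomic-embed-text', prompt: 'Hi there, world!' });
+
+// Values closer to 1 indicate more similar texts.
+console.log(cosineSimilarity(first.embedding, second.embedding));
+```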
\ No newline at end of file
diff --git a/docs/generate.md b/docs/generate.md
new file mode 100644
index 000000000..a67f1c494
--- /dev/null
+++ b/docs/generate.md
@@ -0,0 +1,59 @@
+# generate Documentation
+
+## Brief Description
+The `generate` method creates a response from a text prompt using an AI model.
+
+## Usage
+To use `generate`, first import the Ollama class and create an instance. Then call the `generate` method with your request parameters.
+
+```javascript
+import { Ollama } from 'ollama';
+
+const ollama = new Ollama();
+const response = await ollama.generate({
+  model: 'modelName',
+  prompt: 'Your prompt here'
+});
+```
+
+## Parameters
+- `request` (GenerateRequest): An object containing the following properties:
+  - `model` (string, required): The name of the model to use.
+  - `prompt` (string, required): The text prompt to generate from.
+  - `stream` (boolean, optional): Whether to stream the response.
+  - `images` (Uint8Array[] | string[], optional): Images to include with the prompt.
+  - Other optional parameters like `system`, `template`, `context`, etc.
+
+## Return Value
+- If `stream` is false: `Promise<GenerateResponse>`
+- If `stream` is true: `Promise<AbortableAsyncIterator<GenerateResponse>>`
+
+## Examples
+
+### Basic usage:
+```javascript
+const response = await ollama.generate({
+  model: 'llama2',
+  prompt: 'Write a haiku about coding'
+});
+console.log(response.response);
+```
+
+### Streaming response:
+```javascript
+const stream = await ollama.generate({
+  model: 'llama2',
+  prompt: 'Explain quantum computing',
+  stream: true
+});
+
+for await (const chunk of stream) {
+  process.stdout.write(chunk.response);
+}
+```
+
+## Notes or Considerations
+- Ensure you have the necessary permissions to use the specified model.
+- Large prompts or complex requests may take longer to process.
+- When using images, they will be automatically encoded to base64.
+- The method can handle both streaming and non-streaming responses based on the `stream` parameter.
\ No newline at end of file
diff --git a/docs/list.md b/docs/list.md
new file mode 100644
index 000000000..d5baf5b1c
--- /dev/null
+++ b/docs/list.md
@@ -0,0 +1,65 @@
+# list Documentation
+
+## Brief Description
+The `list` method retrieves information about available models on the Ollama server.
+
+## Usage
+To use the `list` method, you need to have an instance of the Ollama class. Here's how you can use it:
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const models = await ollama.list()
+```
+
+## Parameters
+This method doesn't take any parameters.
+
+## Return Value
+The `list` method returns a Promise that resolves to a `ListResponse` object. This object contains an array of `ModelResponse` objects, each representing a model available on the server.
+
+## Examples
+
+### Listing all available models
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+
+try {
+  const response = await ollama.list()
+  console.log('Available models:', response.models)
+} catch (error) {
+  console.error('Error listing models:', error)
+}
+```
+
+### Displaying model details
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+
+async function displayModelDetails() {
+  try {
+    const response = await ollama.list()
+    response.models.forEach(model => {
+      console.log(`Model: ${model.name}`)
+      console.log(`Modified: ${model.modified_at}`)
+      console.log(`Size: ${model.size} bytes`)
+      console.log('---')
+    })
+  } catch (error) {
+    console.error('Error fetching model details:', error)
+  }
+}
+
+displayModelDetails()
+```
+
+## Notes or Considerations
+- This method requires an active connection to an Ollama server.
+- The list of available models may change over time as models are added, updated, or removed from the server.
+- Large model files may take some time to download and process, so be patient when working with new or updated models.
+- Make sure you have the necessary permissions to access the Ollama server and list the models.
\ No newline at end of file
diff --git a/docs/pull.md b/docs/pull.md
new file mode 100644
index 000000000..aa8c250e0
--- /dev/null
+++ b/docs/pull.md
@@ -0,0 +1,50 @@
+# pull Documentation
+
+## Brief Description
+The `pull` method downloads a model from the Ollama registry, with optional streaming of progress updates.
+
+## Usage
+To use the `pull` method, you need to create an instance of the Ollama class and then call the `pull` method on it.
+
+```javascript
+import { Ollama } from 'ollama'
+
+const ollama = new Ollama()
+const response = await ollama.pull(pullRequest)
+```
+
+## Parameters
+The `pull` method accepts a single parameter:
+
+- `request` (object, required): An object with the following properties:
+  - `model` (string, required): The name of the model to pull.
+  - `stream` (boolean, optional): If true, returns a stream of progress updates. Default is false.
+  - `insecure` (boolean, optional): If true, allows insecure connections for pulling the model. Default is false.
+
+## Return Value
+The `pull` method returns a Promise that resolves to:
+- If `stream` is false: A `ProgressResponse` object with information about the pull operation.
+- If `stream` is true: An `AbortableAsyncIterator` that yields progress updates.
+
+## Examples
+
+### Basic usage (non-streaming):
+```javascript
+const ollama = new Ollama()
+const response = await ollama.pull({ model: 'llama2' })
+console.log(response)
+```
+
+### Streaming progress updates:
+```javascript
+const ollama = new Ollama()
+const stream = await ollama.pull({ model: 'llama2', stream: true })
+for await (const update of stream) {
+  console.log(update)
+}
+```
+
+## Notes or Considerations
+- The `pull` method is asynchronous and should be used with `await` or `.then()`.
+- When using the streaming option, make sure to handle the stream properly to avoid memory leaks.
+- If you need to cancel an ongoing pull operation, you can use the `abort()` method on the `AbortableAsyncIterator` returned when streaming (see the sketch below).
+- The `insecure` option should be used with caution, as it may pose security risks.
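+
+As a minimal sketch of the cancellation note above (the 10-second timeout and the `llama2` model are illustrative, and aborting may surface as an error from the iterator, which is caught here):
+
+```javascript
+const ollama = new Ollama()
+const stream = await ollama.pull({ model: 'llama2', stream: true })
+
+// Abort the pull if it has not finished after 10 seconds.
+const timer = setTimeout(() => stream.abort(), 10_000)
+
+try {
+  for await (const update of stream) {
+    console.log(update.status)
+  }
+} catch (error) {
+  // The stream was aborted or the pull failed; handle as needed.
+  console.error('Pull stopped:', error.message)
+} finally {
+  clearTimeout(timer)
+}
+```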
\ No newline at end of file
diff --git a/docs/push.md b/docs/push.md
new file mode 100644
index 000000000..105075d66
--- /dev/null
+++ b/docs/push.md
@@ -0,0 +1,46 @@
+# push Documentation
+
+## Brief Description
+The `push` method pushes a model to the Ollama registry, allowing for optional streaming of progress updates.
+
+## Usage
+To use the `push` method, you need an instance of the Ollama class. Here's how to push a model:
+
+```javascript
+import { Ollama } from 'ollama';
+
+const ollama = new Ollama();
+await ollama.push({ model: 'mymodel' });
+```
+
+## Parameters
+- `request` (object, required): An object containing:
+  - `model` (string, required): The name of the model to push.
+  - `stream` (boolean, optional): If true, returns progress updates as a stream.
+  - `insecure` (boolean, optional): If true, allows insecure connections.
+
+## Return Value
+Returns a Promise that resolves to:
+- A `ProgressResponse` object if `stream` is false.
+- An `AbortableAsyncIterator` if `stream` is true.
+
+## Examples
+
+### Basic usage:
+```javascript
+const response = await ollama.push({ model: 'mymodel' });
+console.log(response.status);
+```
+
+### Streaming progress:
+```javascript
+const stream = await ollama.push({ model: 'mymodel', stream: true });
+for await (const update of stream) {
+  console.log(`Progress: ${update.completed}/${update.total}`);
+}
+```
+
+## Notes or Considerations
+- Ensure you have the necessary permissions to push models to the Ollama registry.
+- The `insecure` option should be used with caution, as it may compromise security.
+- Streaming can be useful for providing real-time feedback on long-running push operations.
\ No newline at end of file
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
new file mode 100644
index 000000000..2ab95e5b4
--- /dev/null
+++ b/docs/quick-start-guide.md
@@ -0,0 +1,31 @@
+# Quick Start Guide
+
+Welcome to Dev-Docs! This guide will help you get up and running quickly.
+
+## Installation
+
+1. Install the Dev-Docs VS Code extension from the marketplace
+2. Sign up for a Dev-Docs account at https://dev-docs.com
+
+## Basic Usage
+
+1. Open a project in VS Code
+2. Right-click on a file and select "Generate Documentation"
+3. The AI will analyze your code and generate documentation
+4. Review and edit the generated docs as needed
+5. Commit the new documentation files to your repo
+
+## Key Features
+
+- AI-powered documentation generation
+- Integration with GitHub for version control
+- Web editor for refining and publishing docs
+- Chrome extension for capturing UI workflows
+
+## Next Steps
+
+- Configure custom prompts in dev-docs.json
+- Set up automated doc generation on commits
+- Integrate Dev-Docs into your development workflow
+
+For more details, check out the full documentation at https://docs.dev-docs.com
\ No newline at end of file