diff --git a/CONTRIBUTOR.md b/CONTRIBUTOR.md
new file mode 100644
index 000000000..c6fe881ff
--- /dev/null
+++ b/CONTRIBUTOR.md
@@ -0,0 +1,231 @@
+# Contributing to confluent-kafka-python
+
+Thank you for your interest in contributing to confluent-kafka-python! This document provides guidelines and best practices for contributing to this project.
+
+## Table of Contents
+
+- [Getting Started](#getting-started)
+- [Development Environment Setup](#development-environment-setup)
+- [Code Style and Standards](#code-style-and-standards)
+- [Testing](#testing)
+- [Submitting Changes](#submitting-changes)
+- [Reporting Issues](#reporting-issues)
+- [Community Guidelines](#community-guidelines)
+
+## Getting Started
+
+### Ways to Contribute
+
+- **Bug Reports**: Report bugs and issues you encounter
+- **Feature Requests**: Suggest new features or improvements
+- **Code Contributions**: Fix bugs, implement features, or improve documentation
+- **Documentation**: Improve existing docs or add new documentation
+- **Testing**: Help improve test coverage and quality
+
+### Before You Start
+
+1. Check existing [issues](../../issues) to see if your bug/feature has already been reported
+2. For major changes, open an issue first to discuss the proposed changes
+3. Fork the repository and create a feature branch for your work
+
+## Development Environment Setup
+
+For complete development environment setup instructions, including prerequisites, virtual environment creation, and dependency installation, see the [Development Environment Setup section in DEVELOPER.md](DEVELOPER.md#development-environment-setup).
+
+## Code Style and Standards
+
+### Python Code Style
+
+- **PEP 8**: Follow [PEP 8](https://pep8.org/) style guidelines as a default, with exceptions captured in the `tox.ini` flake8 rules for modern updates to the recommendations
+- **Type Hints**: Use type hints for new code where applicable
+- **Docstrings**: Use Google-style docstrings for all public functions and classes
+
+### Code Formatting
+
+We use automated tools to maintain consistent code style:
+
+```bash
+# Install formatting tools
+pip install isort flake8 mypy
+
+# Format code
+isort src/ tests/
+
+# Check style
+flake8 src/ tests/
+mypy src/
+```
+
+### Naming Conventions
+
+- **Functions and Variables**: `snake_case`
+- **Classes**: `PascalCase`
+- **Constants**: `UPPER_SNAKE_CASE`
+- **Private Methods**: Prefix with a single underscore: `_private_method`
+
+### Documentation
+
+- All public APIs must have docstrings
+- Include examples in docstrings when helpful
+- Keep docstrings concise but complete
+- Update relevant documentation files when making changes
+
+## Testing
+
+### Running Tests
+
+See [tests/README.md](tests/README.md) for comprehensive testing instructions.
+
+### Test Requirements
+
+- **Unit Tests**: All new functionality must include unit tests
+- **Integration Tests**: Add integration tests for complex features
+- **Test Coverage**: Maintain or improve existing test coverage
+- **Test Naming**: Use descriptive test names that explain what is being tested
+
+### Test Structure
+
+```python
+def test_feature_should_behave_correctly_when_condition():
+    # Arrange
+    setup_data = create_test_data()
+
+    # Act
+    result = function_under_test(setup_data)
+
+    # Assert
+    assert result.expected_property == expected_value
+```
+
+## Submitting Changes
+
+### Pull Request Process
+
+1. **Create Feature Branch**
+   ```bash
+   git checkout -b feature/your-feature-name
+   # or
+   git checkout -b fix/issue-number-description
+   ```
+
+2. **Make Your Changes**
+   - Write clean, well-documented code
+   - Add appropriate tests
+   - Update documentation if needed
+
+3. **Test Your Changes**
+   ```bash
+   # Run tests
+   python -m pytest
+
+   # Check code style
+   flake8 src/ tests/
+
+   # Update sync code if you modified async code
+   python tools/unasync.py --check
+   ```
+
+4. **Commit Your Changes**
+   ```bash
+   git add .
+   git commit -m "Clear, descriptive commit message"
+   ```
+
+   **Commit Message Guidelines:**
+   - Use present tense ("Add feature" not "Added feature")
+   - Keep first line under 50 characters
+   - Reference issue numbers when applicable (#123)
+   - Include breaking change notes if applicable
+
+5. **Push and Create Pull Request**
+   ```bash
+   git push origin feature/your-feature-name
+   ```
+
+   Then create a pull request through GitHub's interface.
+
+### Pull Request Guidelines
+
+- **Title**: Clear and descriptive
+- **Description**: Explain what changes you made and why
+- **Linked Issues**: Reference related issues using "Fixes #123" or "Relates to #123"
+- **Labels**: Review available issue/PR labels and apply relevant ones to help with categorization and triage
+- **Documentation**: Update documentation for user-facing changes
+- **Tests**: Include appropriate tests
+- **Breaking Changes**: Clearly mark any breaking changes
+
+### Code Review Process
+
+- All pull requests require review before merging
+- Address reviewer feedback promptly
+- Keep discussions respectful and constructive
+- Be open to suggestions and alternative approaches
+
+## Reporting Issues
+
+### Using Labels
+
+When creating issues or pull requests, please review the available labels and apply those that are relevant to your submission. This helps maintainers categorize and prioritize work effectively.
+Common label categories include:
+
+- **Type**: bug, enhancement, documentation, question
+- **Priority**: high, medium, low
+- **Component**: producer, consumer, admin, schema-registry
+- **Status**: needs-investigation, help-wanted, good-first-issue
+
+### Bug Reports
+
+When reporting bugs, please include:
+
+- **Clear Title**: Describe the issue concisely
+- **Environment**: Python version, OS, library versions
+- **Steps to Reproduce**: Detailed steps to reproduce the issue
+- **Expected Behavior**: What you expected to happen
+- **Actual Behavior**: What actually happened
+- **Code Sample**: Minimal code that demonstrates the issue
+- **Error Messages**: Full error messages and stack traces
+- **Labels**: Apply relevant labels such as "bug" and component-specific labels
+
+### Feature Requests
+
+For feature requests, please include:
+
+- **Use Case**: Describe the problem you're trying to solve
+- **Proposed Solution**: Your idea for how to address it
+- **Alternatives**: Other solutions you've considered
+- **Additional Context**: Any other relevant information
+- **Labels**: Apply relevant labels such as "enhancement" and component-specific labels
+
+## Community Guidelines
+
+### Code of Conduct
+
+This project follows the [Contributor Covenant Code of Conduct](https://www.contributor-covenant.org/). By participating, you agree to uphold this code.
+
+### Communication
+
+- **Be Respectful**: Treat all community members with respect
+- **Be Constructive**: Provide helpful feedback and suggestions
+- **Be Patient**: Remember that maintainers and contributors volunteer their time
+- **Be Clear**: Communicate clearly and provide sufficient context
+
+### Getting Help
+
+- **Issues**: Use GitHub issues for bug reports and feature requests
+- **Discussions**: Use GitHub Discussions for questions and general discussion
+- **Documentation**: Check existing documentation before asking questions
+
+## Recognition
+
+Contributors are recognized in the following ways:
+
+- Contributors are listed in the project's contributor history
+- Significant contributions may be mentioned in release notes
+- Long-term contributors may be invited to become maintainers
+
+## License
+
+By contributing to this project, you agree that your contributions will be licensed under the same license as the project (see LICENSE file).
+
+---
+
+Thank you for contributing to confluent-kafka-python! Your contributions help make this project better for everyone.
\ No newline at end of file
diff --git a/DEVELOPER.md b/DEVELOPER.md
index e886fa63a..441d46229 100644
--- a/DEVELOPER.md
+++ b/DEVELOPER.md
@@ -2,14 +2,52 @@ This document provides information useful to developers working on confluent-kafka-python.
 
+## Development Environment Setup
+
+### Prerequisites
+
+- Python 3.7 or higher
+- Git
+- librdkafka (for Kafka functionality)
+
+### Setup Steps
+
+1. **Fork and Clone**
+   ```bash
+   git clone https://github.com/your-username/confluent-kafka-python.git
+   cd confluent-kafka-python
+   ```
+
+2. **Create Virtual Environment**
+   ```bash
+   python -m venv venv
+   source venv/bin/activate  # On Windows: venv\Scripts\activate
+   ```
+
+3. **Install Dependencies**
+   ```bash
+   pip install -e .[dev,test,docs]
+   ```
+
+4. **Install librdkafka** (if not already installed)
+   - See the main README.md for platform-specific installation instructions
+
+5. **Verify Setup**
+   ```bash
+   python -c "import confluent_kafka; print('Setup successful!')"
+   ```
 
 ## Build
 
-    $ python -m build
+```bash
+python -m build
+```
 
 If librdkafka is installed in a non-standard location provide the include and library directories with:
 
-    $ C_INCLUDE_PATH=/path/to/include LIBRARY_PATH=/path/to/lib python -m build
+```bash
+C_INCLUDE_PATH=/path/to/include LIBRARY_PATH=/path/to/lib python -m build
+```
 
 **Note**: On Windows the variables for Visual Studio are named INCLUDE and LIB
@@ -17,26 +55,34 @@ If librdkafka is installed in a non-standard location provide the include and li
 
 Install docs dependencies:
 
-    $ pip install .[docs]
+```bash
+pip install .[docs]
+```
 
 Build HTML docs:
 
-    $ make docs
+```bash
+make docs
+```
 
 Documentation will be generated in `docs/_build/`.
 
 or:
 
-    $ python setup.py build_sphinx
+```bash
+python setup.py build_sphinx
+```
 
 Documentation will be generated in `build/sphinx/html`.
 
 ## Unasync -- maintaining sync versions of async code
 
-    $ python tools/unasync.py
+```bash
+python tools/unasync.py
 
-    # Run the script with the --check flag to ensure the sync code is up to date
-    $ python tools/unasync.py --check
+# Run the script with the --check flag to ensure the sync code is up to date
+python tools/unasync.py --check
+```
 
 If you make any changes to the async code (in `src/confluent_kafka/schema_registry/_async` and `tests/integration/schema_registry/_async`), you **must** run this script to generate the sync counterparts (in `src/confluent_kafka/schema_registry/_sync` and `tests/integration/schema_registry/_sync`). Otherwise, this script will be run in CI with the --check flag and fail the build.
diff --git a/INSTALL.md b/INSTALL.md
index 6b642463b..70c8d020f 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -8,7 +8,7 @@ all dependencies included.
 To install, simply do:
 
 ```bash
-python3 -m pip install confluent-kafka
+python -m pip install confluent-kafka
 ```
 
 If you get a build error or require Kerberos/GSSAPI support please read the next section: *Install from source*
@@ -33,7 +33,7 @@ than from prebuilt binary wheels, such as when:
 
 # Install build tools and Kerberos support.
 
-yum install -y python3 python3-pip python3-devel gcc make cyrus-sasl-gssapi krb5-workstation
+yum install -y python python-pip python-devel gcc make cyrus-sasl-gssapi krb5-workstation
 
 # Install the latest version of librdkafka:
@@ -55,12 +55,12 @@ yum install -y librdkafka-devel
 # (e.g., exit the root shell first).
 #
 
-python3 -m pip install --no-binary confluent-kafka confluent-kafka
+python -m pip install --no-binary confluent-kafka confluent-kafka
 
 # Verify that confluent_kafka is installed:
 
-python3 -c 'import confluent_kafka; print(confluent_kafka.version())'
+python -c 'import confluent_kafka; print(confluent_kafka.version())'
 ```
 
 ### Install from source on Debian or Ubuntu
@@ -72,7 +72,7 @@ python3 -c 'import confluent_kafka; print(confluent_kafka.version())'
 
 # Install build tools and Kerberos support.
 
-apt install -y wget software-properties-common lsb-release gcc make python3 python3-pip python3-dev libsasl2-modules-gssapi-mit krb5-user
+apt install -y wget software-properties-common lsb-release gcc make python python-pip python-dev libsasl2-modules-gssapi-mit krb5-user
 
 # Install the latest version of librdkafka:
@@ -91,12 +91,12 @@ apt install -y librdkafka-dev
 # (e.g., exit the root shell first).
 #
 
-python3 -m pip install --no-binary confluent-kafka confluent-kafka
+python -m pip install --no-binary confluent-kafka confluent-kafka
 
 # Verify that confluent_kafka is installed:
 
-python3 -c 'import confluent_kafka; print(confluent_kafka.version())'
+python -c 'import confluent_kafka; print(confluent_kafka.version())'
 ```
 
@@ -111,11 +111,11 @@ brew install librdkafka
 
 # Build and install confluent-kafka-python
 
-python3 -m pip install --no-binary confluent-kafka confluent-kafka
+python -m pip install --no-binary confluent-kafka confluent-kafka
 
 # Verify that confluent_kafka is installed:
 
-python3 -c 'import confluent_kafka; print(confluent_kafka.version())'
+python -c 'import confluent_kafka; print(confluent_kafka.version())'
 ```
 
diff --git a/README.md b/README.md
index 4eb0d0515..afa323ba1 100644
--- a/README.md
+++ b/README.md
@@ -130,7 +130,9 @@ The `Producer`, `Consumer` and `AdminClient` are all thread safe.
 
 **Install self-contained binary wheels**
 
-    $ pip install confluent-kafka
+```bash
+pip install confluent-kafka
+```
 
 **NOTE:** The pre-built Linux wheels do NOT contain SASL Kerberos/GSSAPI support.
 If you need SASL Kerberos/GSSAPI support you must install librdkafka and
@@ -140,19 +142,27 @@ The `Producer`, `Consumer` and `AdminClient` are all thread safe.
 To use Schema Registry with the Avro serializer/deserializer:
 
-    $ pip install "confluent-kafka[avro,schemaregistry]"
+```bash
+pip install "confluent-kafka[avro,schemaregistry]"
+```
 
 To use Schema Registry with the JSON serializer/deserializer:
 
-    $ pip install "confluent-kafka[json,schemaregistry]"
+```bash
+pip install "confluent-kafka[json,schemaregistry]"
+```
 
 To use Schema Registry with the Protobuf serializer/deserializer:
 
-    $ pip install "confluent-kafka[protobuf,schemaregistry]"
+```bash
+pip install "confluent-kafka[protobuf,schemaregistry]"
+```
 
 When using Data Contract rules (including CSFLE) add the `rules` extra, e.g.:
 
-    $ pip install "confluent-kafka[avro,schemaregistry,rules]"
+```bash
+pip install "confluent-kafka[avro,schemaregistry,rules]"
+```
 
 **Install from source**
 
diff --git a/examples/README.md b/examples/README.md
index b68fb8886..1e4e86b1e 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -27,11 +27,11 @@ conflicts between projects.
 
 To setup a venv with the latest release version of confluent-kafka and
 dependencies of all examples installed:
 
-```
-$ python3 -m venv venv_examples
-$ source venv_examples/bin/activate
-$ pip install confluent_kafka
-$ pip install -r requirements/requirements-examples.txt
+```bash
+python -m venv venv_examples
+source venv_examples/bin/activate
+pip install confluent_kafka
+pip install -r requirements/requirements-examples.txt
 ```
 
 To setup a venv that uses the current source tree version of confluent_kafka, you
@@ -39,14 +39,14 @@
 need to have a C compiler and librdkafka installed ([from a package](https://github.com/edenhill/librdkafka#installing-prebuilt-packages), or [from source](https://github.com/edenhill/librdkafka#build-from-source)).
 Then:
-```
-$ python3 -m venv venv_examples
-$ source venv_examples/bin/activate
-$ pip install .[examples]
+```bash
+python -m venv venv_examples
+source venv_examples/bin/activate
+pip install .[examples]
 ```
 
 When you're finished with the venv:
-```
-$ deactivate
+```bash
+deactivate
 ```
diff --git a/examples/docker/README.md b/examples/docker/README.md
index a02ffb062..5d5730464 100644
--- a/examples/docker/README.md
+++ b/examples/docker/README.md
@@ -7,5 +7,7 @@ See the header of each Dockerfile in this directory for what is included.
 
 From the confluent-kafka-python source top directory:
 
-    $ docker build -f examples/docker/Dockerfile.alpine .
+```bash
+docker build -f examples/docker/Dockerfile.alpine .
+```
 
diff --git a/tests/README.md b/tests/README.md
index 7502d2f6b..b95ea6669 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -14,16 +14,20 @@ Test summary:
 
 **Note:** Unless otherwise stated, all command, file and directory references are relative to the *repo's root* directory.
 
-A python3 env suitable for running tests:
+A python env suitable for running tests:
 
-    $ python3 -m venv venv_test
-    $ source venv_test/bin/activate
-    $ python3 -m pip install -r requirements/requirements-tests-install.txt
-    $ python3 -m pip install .
+```bash
+python -m venv venv_test
+source venv_test/bin/activate
+python -m pip install -r requirements/requirements-tests-install.txt
+python -m pip install .
+```
 
 When you're finished with it:
 
-    $ deactivate
+```bash
+deactivate
+```
 
 ## Unit tests
 
@@ -32,17 +36,23 @@ not require an active Kafka cluster.
 
 You can run them selectively like so:
 
-    $ pytest -s -v tests/test_Producer.py
+```bash
+pytest -s -v tests/test_Producer.py
+```
 
 Or run them all with:
 
-    $ pytest -s -v tests/test_*.py
+```bash
+pytest -s -v tests/test_*.py
+```
 
 Note that the -v flag enables verbose output and -s flag disables capture of
 stderr and stdout (so that you see it on the console).
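The unit-test conventions described in CONTRIBUTOR.md (descriptive test names plus an Arrange/Act/Assert layout) can be made concrete with a small self-contained sketch; `create_test_data` and `validate_config` below are hypothetical stand-ins for illustration only, not part of the confluent_kafka API:

```python
# Hypothetical helpers used only to illustrate the test layout;
# neither is part of the confluent_kafka API.
def create_test_data():
    """Return a minimal producer-style configuration dict."""
    return {"bootstrap.servers": "localhost:9092"}


def validate_config(conf):
    """Report whether the mandatory bootstrap.servers key is present."""
    return "bootstrap.servers" in conf


def test_validate_config_should_accept_minimal_config():
    # Arrange
    setup_data = create_test_data()

    # Act
    result = validate_config(setup_data)

    # Assert
    assert result is True
```

A test written this way runs like any other unit test here, e.g. `pytest -s -v path/to/your_test_file.py`.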
 You can also use ./tests/run.sh to run the unit tests:
 
-    $ ./tests/run.sh unit
+```bash
+./tests/run.sh unit
+```
 
 ## Integration tests