From 3c09f3f707e4f3609d370487c975eb08d3958a90 Mon Sep 17 00:00:00 2001 From: Jay Bryant Date: Thu, 12 Apr 2018 15:55:43 -0500 Subject: [PATCH] Full editing pass for Spring Data Redis I edited for spelling, punctuation, grammar, clarity, and cross-references. I also pulled one piece of content that was being reused into its own file, so that I could include it rather than repeat it. --- .../appendix/appendix-command-reference.adoc | 6 +- .../asciidoc/appendix/appendix-schema.adoc | 2 +- src/main/asciidoc/appendix/introduction.adoc | 7 +- src/main/asciidoc/index.adoc | 5 +- .../introduction/getting-started.adoc | 27 ++- .../asciidoc/introduction/introduction.adoc | 5 +- .../asciidoc/introduction/requirements.adoc | 5 +- src/main/asciidoc/introduction/why-sdr.adoc | 5 +- src/main/asciidoc/new-features.adoc | 23 +- src/main/asciidoc/preface.adoc | 3 +- src/main/asciidoc/reference/introduction.adoc | 1 - src/main/asciidoc/reference/pipelining.adoc | 13 +- .../asciidoc/reference/query-by-example.adoc | 16 +- .../asciidoc/reference/reactive-redis.adoc | 45 ++-- .../asciidoc/reference/redis-cluster.adoc | 71 +++--- .../asciidoc/reference/redis-messaging.adoc | 53 +++-- .../reference/redis-repositories.adoc | 189 +++++++-------- .../asciidoc/reference/redis-scripting.adoc | 14 +- .../reference/redis-transactions.adoc | 34 +-- src/main/asciidoc/reference/redis.adoc | 215 +++++++++--------- src/main/asciidoc/reference/version-note.adoc | 1 + 21 files changed, 381 insertions(+), 359 deletions(-) create mode 100644 src/main/asciidoc/reference/version-note.adoc diff --git a/src/main/asciidoc/appendix/appendix-command-reference.adoc b/src/main/asciidoc/appendix/appendix-command-reference.adoc index ddf2c93359..793b38290d 100644 --- a/src/main/asciidoc/appendix/appendix-command-reference.adoc +++ b/src/main/asciidoc/appendix/appendix-command-reference.adoc @@ -2,8 +2,8 @@ [appendix] = Command Reference -== Supported commands -.Redis commands supported by RedisTemplate. 
+== Supported Commands +.Redis commands supported by `RedisTemplate` [width="50%",cols="<2,^1",options="header"] |========================================================= |Command |Template Support @@ -181,4 +181,4 @@ |ZSCAN |X |ZSCORE |X |ZUNINONSTORE |X -|========================================================= \ No newline at end of file +|========================================================= diff --git a/src/main/asciidoc/appendix/appendix-schema.adoc b/src/main/asciidoc/appendix/appendix-schema.adoc index 8b98c4f0a5..c17f5fe786 100644 --- a/src/main/asciidoc/appendix/appendix-schema.adoc +++ b/src/main/asciidoc/appendix/appendix-schema.adoc @@ -3,7 +3,7 @@ = Schema :resourcesDir: ../../resources -== Core schema +== Core Schema [source,xml] ------------------------------------------- diff --git a/src/main/asciidoc/appendix/introduction.adoc b/src/main/asciidoc/appendix/introduction.adoc index 640c5af278..ab08fa296a 100644 --- a/src/main/asciidoc/appendix/introduction.adoc +++ b/src/main/asciidoc/appendix/introduction.adoc @@ -1,7 +1,8 @@ [float] -= Appendix Document structure += Appendix Document Structure -Various appendixes outside the reference documentation. +The appendix contains additional details that complement the information in the rest of the reference documentation: -<> defines the schemas provided by Spring Data Redis. +* "`<>`" defines the schemas provided by Spring Data Redis. +* "`<>`" details which commands are supported by `RedisTemplate`. 
diff --git a/src/main/asciidoc/index.adoc b/src/main/asciidoc/index.adoc index 0d217a1bf9..471381c3f5 100644 --- a/src/main/asciidoc/index.adoc +++ b/src/main/asciidoc/index.adoc @@ -1,9 +1,8 @@ = Spring Data Redis -Costin Leau, Jennifer Hickey, Christoph Strobl, Thomas Darimont, Mark Paluch +Costin Leau, Jennifer Hickey, Christoph Strobl, Thomas Darimont, Mark Paluch, Jay Bryant :revnumber: {version} :revdate: {localdate} -:toc: -:toc-placement!: +:toc: left :spring-data-commons-include: ../../../../spring-data-commons/src/main/asciidoc :spring-data-commons-docs: https://raw.githubusercontent.com/spring-projects/spring-data-commons/master/src/main/asciidoc diff --git a/src/main/asciidoc/introduction/getting-started.adoc b/src/main/asciidoc/introduction/getting-started.adoc index 7ef85cfe1a..aedd753d39 100644 --- a/src/main/asciidoc/introduction/getting-started.adoc +++ b/src/main/asciidoc/introduction/getting-started.adoc @@ -1,37 +1,37 @@ [[get-started]] = Getting Started -Learning a new framework is not always straight forward. In this section, we (the Spring Data team) tried to provide, what we think is, an easy to follow guide for starting with the Spring Data Redis module. Of course, feel free to create your own learning 'path' as you see fit and, if possible, please report back any improvements to the documentation that can help others. +This section provides an easy-to-follow guide for getting started with the Spring Data Redis module. [[get-started:first-steps]] == First Steps -As explained in <>, Spring Data Redis (SDR) provides integration between Spring framework and the Redis key value store. Thus, it is important to become acquainted with both of these frameworks (storages or environments depending on how you want to name them). Throughout the SDR documentation, each section provides links to resources relevant however, it is best to become familiar with these topics beforehand. 
+As explained in <>, Spring Data Redis (SDR) provides integration between the Spring framework and the Redis key-value store. Consequently, you should become acquainted with both of these frameworks. Throughout the SDR documentation, each section provides links to relevant resources. However, you should become familiar with these topics before reading this guide. [[get-started:first-steps:spring]] -=== Knowing Spring +=== Learning Spring -Spring Data uses heavily Spring framework's http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html[core] functionality, such as the http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html[IoC] container, http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html#resources[resource] abstract or http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html#aop[AOP] infrastructure. While it is not important to know the Spring APIs, understanding the concepts behind them is. At a minimum, the idea behind IoC should be familiar. That being said, the more knowledge one has about the Spring, the faster she will pick up Spring Data Redis. Besides the very comprehensive (and sometimes disarming) documentation that explains in detail the Spring Framework, there are a lot of articles, blog entries and books on the matter - take a look at the Spring Guides http://spring.io/guides[home page] for more information. In general, this should be the starting point for developers wanting to try Spring DR. 
+Spring Data uses Spring framework's http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html[core] functionality, such as the http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html[IoC] container, http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html#resources[resource] abstraction, and the http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/core.html#aop[AOP] infrastructure. While it is not important to know the Spring APIs, understanding the concepts behind them is important. At a minimum, the idea behind IoC should be familiar. That being said, the more knowledge you have about Spring, the faster you can pick up Spring Data Redis. In addition to the Spring Framework's comprehensive documentation, there are a lot of articles, blog entries, and books on the matter. The Spring Guides http://spring.io/guides[home page] offers a good place to start. In general, this should be the starting point for developers wanting to try Spring Data Redis. [[get-started:first-steps:nosql]] -=== Knowing NoSQL and Key Value stores +=== Learning NoSQL and Key Value Stores -NoSQL stores have taken the storage world by storm. It is a vast domain with a plethora of solutions, terms and patterns (to make things worse even the term itself has multiple http://www.google.com/search?q=nosoql+acronym[meanings]). While some of the principles are common, it is crucial that the user is familiar to some degree with the stores supported by SDR. The best way to get acquainted with these solutions is to read their documentation and follow their examples - it usually doesn't take more then 5-10 minutes to go through them and if you are coming from an RDMBS-only background many times these exercises can be an eye opener. +NoSQL stores have taken the storage world by storm. 
It is a vast domain with a plethora of solutions, terms, and patterns (to make things worse, even the term itself has multiple http://www.google.com/search?q=nosoql+acronym[meanings]). While some of the principles are common, it is crucial that you be familiar to some degree with the stores supported by SDR. The best way to get acquainted with these solutions is to read their documentation and follow their examples. It usually does not take more than five to ten minutes to go through them and, if you come from an RDBMS-only background, many times these exercises can be eye-openers. [[get-started:first-steps:samples]] -=== Trying Out The Samples +=== Trying out the Samples -One can find various samples for key value stores in the dedicated example repo, at https://github.com/spring-projects/spring-data-keyvalue-examples[http://github.com/spring-projects/spring-data-keyvalue-examples]. For Spring Data Redis, of interest is the `retwisj` sample, a Twitter-clone built on top of Redis which can be run locally or be deployed into the cloud. See its http://static.springsource.org/spring-data/data-keyvalue/examples/retwisj/current/[documentation], the following blog http://blog.springsource.com/2011/04/27/getting-started-redis-spring-cloud-foundry/[entry] or the http://retwisj.cloudfoundry.com/[live instance] for more information. +You can find various samples for key-value stores in the dedicated Spring Data example repo, at https://github.com/spring-projects/spring-data-keyvalue-examples[http://github.com/spring-projects/spring-data-keyvalue-examples]. For Spring Data Redis, you should pay particular attention to the `retwisj` sample, a Twitter-clone built on top of Redis that can be run locally or deployed into the cloud. 
See its http://static.springsource.org/spring-data/data-keyvalue/examples/retwisj/current/[documentation], the following blog http://blog.springsource.com/2011/04/27/getting-started-redis-spring-cloud-foundry/[entry], or the http://retwisj.cloudfoundry.com/[live instance] for more information. [[get-started:help]] == Need Help? -If you encounter issues or you are just looking for advice, feel free to use one of the links below: +If you encounter issues or you are just looking for advice, use one of the following links: [[get-started:help:community]] === Community Support -The Spring Data tag on http://stackoverflow.com/questions/tagged/spring-data[Stackoverflow] is a message board for all Spring Data (not just Redis) users to share information and help each other. Note that registration is needed *only* for posting. +The Spring Data tag on http://stackoverflow.com/questions/tagged/spring-data[Stack Overflow] is a message board for all Spring Data (not just Redis) users to share information and help each other. Note that registration is needed *only* for posting. [[get-started:help:professional]] === Professional Support @@ -41,14 +41,13 @@ Professional, from-the-source support, with guaranteed response time, is availab [[get-started:up-to-date]] == Following Development -For information on the Spring Data source code repository, nightly builds and snapshot artifacts please see the Spring Data home http://spring.io/spring-data[page]. +For information on the Spring Data source code repository, nightly builds, and snapshot artifacts, see the Spring Data home http://spring.io/spring-data[page]. 
-You can help make Spring Data best serve the needs of the Spring community by interacting with developers on Stackoverflow at either +You can help make Spring Data best serve the needs of the Spring community by interacting with developers on Stack Overflow at either http://stackoverflow.com/questions/tagged/spring-data[spring-data] or http://stackoverflow.com/questions/tagged/spring-data-redis[spring-data-redis]. -If you encounter a bug or want to suggest an improvement, please create a ticket on the Spring Data issue https://jira.springsource.org/browse/DATAREDIS[tracker]. +If you encounter a bug or want to suggest an improvement (including to this documentation), please create a ticket on the Spring Data issue https://jira.springsource.org/browse/DATAREDIS[tracker]. To stay up to date with the latest news and announcements in the Spring eco system, subscribe to the Spring Community http://spring.io/[Portal]. Lastly, you can follow the Spring http://spring.io/blog/[blog] or the project team (http://twitter.com/SpringData[@SpringData]) on Twitter. - diff --git a/src/main/asciidoc/introduction/introduction.adoc b/src/main/asciidoc/introduction/introduction.adoc index be42f86ca6..0d3bd7c6ff 100644 --- a/src/main/asciidoc/introduction/introduction.adoc +++ b/src/main/asciidoc/introduction/introduction.adoc @@ -1,4 +1,3 @@ -This document is the reference guide for Spring Data Redis (SDR) Support. It explains Key Value module concepts and semantics and the syntax for various stores namespaces. - -For an introduction to key value stores or Spring, or Spring Data examples, please refer to <> - this documentation refers only to Spring Data Redis Support and assumes the user is familiar with the key value storages and Spring concepts. +This document is the reference guide for Spring Data Redis (SDR) Support. It explains Key-Value module concepts and semantics and the syntax for the various store namespaces. 
+For an introduction to key-value stores, Spring, or Spring Data examples, see <>. This documentation refers only to Spring Data Redis Support and assumes that you are familiar with key-value storage and Spring concepts. diff --git a/src/main/asciidoc/introduction/requirements.adoc b/src/main/asciidoc/introduction/requirements.adoc index 52d23dc5da..22a6f123a4 100644 --- a/src/main/asciidoc/introduction/requirements.adoc +++ b/src/main/asciidoc/introduction/requirements.adoc @@ -1,7 +1,6 @@ [[requirements]] = Requirements -Spring Data Redis 1.x binaries requires JDK level 6.0 and above, and http://projects.spring.io/spring-framework/[Spring Framework] {springVersion} and above. - -In terms of key value stores, http://redis.io[Redis] 2.6.x or higher is required. Spring Data Redis is currently tested against the latest 3.2 release. +Spring Data Redis 1.x binaries require JDK level 6.0 and above and http://projects.spring.io/spring-framework/[Spring Framework] {springVersion} and above. +In terms of key-value stores, http://redis.io[Redis] 2.6.x or higher is required. Spring Data Redis is currently tested against the latest 3.2 release. diff --git a/src/main/asciidoc/introduction/why-sdr.adoc b/src/main/asciidoc/introduction/why-sdr.adoc index bf5e581153..1e1689a58d 100644 --- a/src/main/asciidoc/introduction/why-sdr.adoc +++ b/src/main/asciidoc/introduction/why-sdr.adoc @@ -3,7 +3,6 @@ The Spring Framework is the leading full-stack Java/JEE application framework. It provides a lightweight container and a non-invasive programming model enabled by the use of dependency injection, AOP, and portable service abstractions. -http://en.wikipedia.org/wiki/NoSQL[NoSQL] storages provide an alternative to classical RDBMS for horizontal scalability and speed. In terms of implementation, Key Value stores represent one of the largest (and oldest) members in the NoSQL space. 
- -The Spring Data Redis (or SDR) framework makes it easy to write Spring applications that use the Redis key value store by eliminating the redundant tasks and boiler plate code required for interacting with the store through Spring's excellent infrastructure support. +http://en.wikipedia.org/wiki/NoSQL[NoSQL] storage systems provide an alternative to classical RDBMS for horizontal scalability and speed. In terms of implementation, key-value stores represent one of the largest (and oldest) members in the NoSQL space. +The Spring Data Redis (SDR) framework makes it easy to write Spring applications that use the Redis key-value store by eliminating the redundant tasks and boilerplate code required for interacting with the store through Spring's excellent infrastructure support. diff --git a/src/main/asciidoc/new-features.adoc b/src/main/asciidoc/new-features.adoc index 3f24514373..396b98ded3 100644 --- a/src/main/asciidoc/new-features.adoc +++ b/src/main/asciidoc/new-features.adoc @@ -1,13 +1,13 @@ [[new-features]] = New Features -New and noteworthy in the latest releases. +This section briefly covers items that are new and noteworthy in the latest releases. [[new-in-2.1.0]] == New in Spring Data Redis 2.1 * Unix domain socket connections using <>. -* <> support using Lettuce. +* <> support using Lettuce. * <> integration. * `@TypeAlias` Support for Redis repositories. * Cluster-wide `SCAN` using Lettuce and `SCAN` execution on a selected node supported by both drivers. @@ -20,7 +20,7 @@ New and noteworthy in the latest releases. * Removed support for SRP and JRedis drivers. * <>. * Introduce Redis feature-specific interfaces for `RedisConnection`. -* Improved `RedisConnectionFactory` configuration via `JedisClientConfiguration` and `LettuceClientConfiguration`. +* Improved `RedisConnectionFactory` configuration with `JedisClientConfiguration` and `LettuceClientConfiguration`. * Revised `RedisCache` implementation. * Add `SPOP` with count command for Redis 3.2. 
@@ -31,10 +31,10 @@ New and noteworthy in the latest releases. * Upgrade to `Lettuce` 4.2 (Note: Lettuce 4.2 requires Java 8). * Support for Redis http://redis.io/commands#geo[GEO] commands. * Support for Geospatial Indexes using Spring Data Repository abstractions (see <>). -* `MappingRedisConverter` based `HashMapper` implementation (see <>). -* Support for `PartialUpdate` in repository support (see <>). +* `MappingRedisConverter`-based `HashMapper` implementation (see <>). +* Support for `PartialUpdate` in repositories (see <>). * SSL support for connections to Redis cluster. -* Support for `client name` via `ConnectionFactory` when using Jedis. +* Support for `client name` through `ConnectionFactory` when using Jedis. [[new-in-1.7.0]] == New in Spring Data Redis 1.7 @@ -47,14 +47,13 @@ New and noteworthy in the latest releases. * The `Lettuce` Redis driver switched from https://github.com/wg/lettuce[wg/lettuce] to https://github.com/mp911de/lettuce[mp911de/lettuce]. * Support for `ZRANGEBYLEX`. -* Enhanced range operations for `ZSET` s including `+inf` / `-inf`. -* Performance improvements in `RedisCache` now releasing connections earlier. +* Enhanced range operations for `ZSET`, including `+inf` / `-inf`. +* Performance improvements in `RedisCache`, now releasing connections earlier. * Generic Jackson2 `RedisSerializer` making use of Jackson's polymorphic deserialization. [[new-in-1-5-0]] == New in Spring Data Redis 1.5 -* Add support for Redis HyperLogLog commands `PFADD`, `PFCOUNT` and `PFMERGE`. -* Configurable `JavaType` lookup for Jackson based `RedisSerializers`. -* `PropertySource` based configuration for connecting to Redis Sentinel (see: <>). - +* Add support for Redis HyperLogLog commands: `PFADD`, `PFCOUNT`, and `PFMERGE`. +* Configurable `JavaType` lookup for Jackson-based `RedisSerializers`. +* `PropertySource`-based configuration for connecting to Redis Sentinel (see: <>). 
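The `PropertySource`-based Sentinel configuration mentioned in the last bullet can be sketched as follows. This is an illustrative configuration, not part of the patch: the property keys follow the `RedisSentinelConfiguration` convention, while the master name, node addresses, and the choice of Jedis are assumptions. It is a configuration fragment and needs a running Sentinel setup to do anything useful.

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.PropertySource;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class SentinelConfig {

  @Bean
  public RedisConnectionFactory connectionFactory() {
    // RedisSentinelConfiguration reads the spring.redis.sentinel.master and
    // spring.redis.sentinel.nodes keys from the given PropertySource. In a real
    // application the source would typically come from the environment instead
    // of a hand-built map; the values below are assumptions for illustration.
    Map<String, Object> props = new HashMap<>();
    props.put("spring.redis.sentinel.master", "mymaster");
    props.put("spring.redis.sentinel.nodes", "127.0.0.1:26379,127.0.0.1:26380");
    PropertySource<?> source = new MapPropertySource("sentinel", props);
    return new JedisConnectionFactory(new RedisSentinelConfiguration(source));
  }
}
```

`JedisConnectionFactory` could equally be replaced by `LettuceConnectionFactory`, which accepts the same `RedisSentinelConfiguration`.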
diff --git a/src/main/asciidoc/preface.adoc b/src/main/asciidoc/preface.adoc index 23eb152e2d..ea2c50fe60 100644 --- a/src/main/asciidoc/preface.adoc +++ b/src/main/asciidoc/preface.adoc @@ -1,4 +1,3 @@ = Preface -The Spring Data Redis project applies core Spring concepts to the development of solutions using a key-value style data store. We provide a "template" as a high-level abstraction for sending and receiving messages. You will notice similarities to the JDBC support in the Spring Framework. - +The Spring Data Redis project applies core Spring concepts to the development of solutions by using a key-value style data store. We provide a "`template`" as a high-level abstraction for sending and receiving messages. You may notice similarities to the JDBC support in the Spring Framework. diff --git a/src/main/asciidoc/reference/introduction.adoc b/src/main/asciidoc/reference/introduction.adoc index 66850eb5f6..eeba3b0266 100644 --- a/src/main/asciidoc/reference/introduction.adoc +++ b/src/main/asciidoc/reference/introduction.adoc @@ -4,4 +4,3 @@ This part of the reference documentation explains the core functionality offered by Spring Data Redis. <> introduces the Redis module feature set. - diff --git a/src/main/asciidoc/reference/pipelining.adoc b/src/main/asciidoc/reference/pipelining.adoc index 6d5692749a..f2753bff9b 100644 --- a/src/main/asciidoc/reference/pipelining.adoc +++ b/src/main/asciidoc/reference/pipelining.adoc @@ -3,14 +3,14 @@ Redis provides support for http://redis.io/topics/pipelining[pipelining], which involves sending multiple commands to the server without waiting for the replies and then reading the replies in a single step. Pipelining can improve performance when you need to send several commands in a row, such as adding many elements to the same List. -Spring Data Redis provides several `RedisTemplate` methods for executing commands in a pipeline. 
If you don't care about the results of the pipelined operations, you can use the standard `execute` method, passing `true` for the `pipeline` argument. The `executePipelined` methods will execute the provided `RedisCallback` or `SessionCallback` in a pipeline and return the results. For example: +Spring Data Redis provides several `RedisTemplate` methods for executing commands in a pipeline. If you do not care about the results of the pipelined operations, you can use the standard `execute` method, passing `true` for the `pipeline` argument. The `executePipelined` methods run the provided `RedisCallback` or `SessionCallback` in a pipeline and return the results, as shown in the following example: [source,java] ---- -//pop a specified number of items from a queue +//pop a specified number of items from a queue List results = stringRedisTemplate.executePipelined( - new RedisCallback() { - public Object doInRedis(RedisConnection connection) throws DataAccessException { + new RedisCallback() { + public Object doInRedis(RedisConnection connection) throws DataAccessException { StringRedisConnection stringRedisConn = (StringRedisConnection)connection; for(int i=0; i< batchSize; i++) { stringRedisConn.rPop("myqueue"); @@ -20,9 +20,8 @@ List results = stringRedisTemplate.executePipelined( }); ---- -The example above executes a bulk right pop of items from a queue in a pipeline. The `results` List contains all of the popped items. `RedisTemplate` uses its value, hash key, and hash value serializers to deserialize all results before returning, so the returned items in the above example will be Strings. There are additional `executePipelined` methods that allow you to pass a custom serializer for pipelined results. +The preceding example runs a bulk right pop of items from a queue in a pipeline. The `results` `List` contains all of the popped items. 
`RedisTemplate` uses its value, hash key, and hash value serializers to deserialize all results before returning, so the returned items in the preceding example are Strings. There are additional `executePipelined` methods that let you pass a custom serializer for pipelined results. Note that the value returned from the `RedisCallback` is required to be null, as this value is discarded in favor of returning the results of the pipelined commands. -NOTE: An important change has been made to the `closePipeline` method of `RedisConnection` in version 1.1. Previously this method returned the results of pipelined operations directly from the connectors. This means that the data types often differed from those returned by the methods of `RedisConnection`. For example, `zAdd` returns a boolean indicating that the element has been added to the sorted set. Most connectors return this value as a long and Spring Data Redis performs the conversion. Another common difference is that most connectors return a status reply (usually the String "OK") for operations like `set`. These replies are typically discarded by Spring Data Redis. Prior to 1.1, these conversions were not performed on the results of `closePipeline`. If this change breaks your application, you can set `convertPipelineAndTxResults` to false on your `RedisConnectionFactory` to disable this behavior. 
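The custom-serializer overload of `executePipelined` mentioned above might be used as in the following sketch. The wrapper class, queue name, and batch size are illustrative assumptions; the point is the `executePipelined(RedisCallback, RedisSerializer)` overload, and running it requires a live Redis server.

```java
import java.util.List;

import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;

public class PipelineExample {

  private final RedisTemplate<String, String> redisTemplate;
  private final int batchSize = 10; // illustrative batch size

  public PipelineExample(RedisTemplate<String, String> redisTemplate) {
    this.redisTemplate = redisTemplate;
  }

  public List<Object> popBatch() {
    // Each pipelined result is deserialized with the explicitly supplied
    // serializer rather than the template's default value serializer.
    return redisTemplate.executePipelined((RedisCallback<Object>) connection -> {
      for (int i = 0; i < batchSize; i++) {
        connection.rPop("myqueue".getBytes());
      }
      return null; // must be null; the pipelined results are returned instead
    }, new StringRedisSerializer());
  }
}
```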
- +include::version-note.adoc[] diff --git a/src/main/asciidoc/reference/query-by-example.adoc b/src/main/asciidoc/reference/query-by-example.adoc index 4ed60697b2..5ce394c17d 100644 --- a/src/main/asciidoc/reference/query-by-example.adoc +++ b/src/main/asciidoc/reference/query-by-example.adoc @@ -1,5 +1,7 @@ [[query-by-example.execution]] -== Executing an example +== Executing an Example + +The following example uses Query by Example against a repository: .Query by Example using a Repository ==== @@ -19,21 +21,21 @@ class PersonService { ---- ==== -Redis Repositories support with their secondary indexes a subset of Spring Data's Query by Example features. -In particular, only exact, case-sensitive and non-null values are used to construct a query. +Redis Repositories support, with their secondary indexes, a subset of Spring Data's Query by Example features. +In particular, only exact, case-sensitive, and non-null values are used to construct a query. -Secondary indexes use set-based operations (Set intersection, Set union) to determine matching keys. Adding a property to the query that is not indexed returns no result as no index exists. Query by Example support inspects indexing configuration to only include properties in the query that are covered by an index. This is to prevent accidental inclusion of not indexed properties. +Secondary indexes use set-based operations (Set intersection, Set union) to determine matching keys. Adding a property to the query that is not indexed returns no result, because no index exists. Query by Example support inspects indexing configuration to include only properties in the query that are covered by an index. This is to prevent accidental inclusion of non-indexed properties. -Case-insensitive queries and unsupported ``StringMatcher``s are rejected at runtime. +Case-insensitive queries and unsupported ``StringMatcher`` instances are rejected at runtime. 
-*Supported Query by Example options* +The following list shows the supported Query by Example options: * Case-sensitive, exact matching of simple and nested properties * Any/All match modes * Value transformation of the criteria value * Exclusion of `null` values from the criteria -*Not supported by Query by Example* +The following list shows the features not supported by Query by Example: * Case-insensitive matching * Regex, prefix/contains/suffix String-matching diff --git a/src/main/asciidoc/reference/reactive-redis.adoc b/src/main/asciidoc/reference/reactive-redis.adoc index 7d3913f602..489683439c 100644 --- a/src/main/asciidoc/reference/reactive-redis.adoc +++ b/src/main/asciidoc/reference/reactive-redis.adoc @@ -2,7 +2,7 @@ = Reactive Redis support :referenceDir: . -This section covers reactive Redis support and how to get started. You will find certain overlaps with the <>. +This section covers reactive Redis support and how to get started. Reactive Redis support naturally has certain overlaps with <>. [[redis:reactive:requirements]] == Redis Requirements Spring Data Redis requires Redis 2.6 or above and Java SE 8.0 or above. In terms of language bindings (or connectors), Spring Data Redis currently integrates with http://github.com/lettuce-io/lettuce-core[Lettuce] as the only reactive Java connector. https://projectreactor.io/[Project Reactor] is used as reactive composition library. [[redis:reactive:connectors]] -== Connecting to Redis using a reactive driver +== Connecting to Redis by Using a Reactive Driver -One of the first tasks when using Redis and Spring is to connect to the store through the IoC container. To do that, a Java connector (or binding) is required. 
No matter the library one chooses, there is only one set of Spring Data Redis API that one needs to use that behaves consistently across all connectors, namely the `org.springframework.data.redis.connection` package and its `ReactiveRedisConnection` and `ReactiveRedisConnectionFactory` interfaces for working with and retrieving active `connections` to Redis. +One of the first tasks when using Redis and Spring is to connect to the store through the IoC container. To do that, a Java connector (or binding) is required. No matter the library you choose, you must use the `org.springframework.data.redis.connection` package and its `ReactiveRedisConnection` and `ReactiveRedisConnectionFactory` interfaces to work with and retrieve active `connections` to Redis. [[redis:reactive:connectors:operation-modes]] === Redis Operation Modes -Redis can be run as standalone server, with <> or in <> mode. -http://github.com/lettuce-io/lettuce-core[Lettuce] supports all above mentioned connection types. +Redis can be run as a standalone server, with <>, or in <> mode. +http://github.com/lettuce-io/lettuce-core[Lettuce] supports all of the previously mentioned connection types. [[redis:reactive:connectors:connection]] -=== ReactiveRedisConnection and ReactiveRedisConnectionFactory +=== `ReactiveRedisConnection` and `ReactiveRedisConnectionFactory` -`ReactiveRedisConnection` provides the building block for Redis communication as it handles the communication with the Redis back-end. It also automatically translates the underlying driver exceptions to Spring's consistent DAO exception http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#dao-exceptions[hierarchy] so one can switch the connectors without any code changes as the operation semantics remain the same. +`ReactiveRedisConnection` is the core of Redis communication, as it handles the communication with the Redis back-end. 
It also automatically translates the underlying driver exceptions to Spring's consistent DAO exception http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#dao-exceptions[hierarchy], so you can switch the connectors without any code changes, as the operation semantics remain the same. -Active ``ReactiveRedisConnection``s are created through `ReactiveRedisConnectionFactory`. In addition, the factories act as ``PersistenceExceptionTranslator``s, meaning once declared, they allow one to do transparent exception translation. For example, exception translation through the use of the `@Repository` annotation and AOP. For more information see the dedicated http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#orm-exception-translation[section] in Spring Framework documentation. +`ReactiveRedisConnectionFactory` creates active `ReactiveRedisConnection` instances. In addition, the factories act as `PersistenceExceptionTranslator` instances, meaning that, once declared, they let you do transparent exception translation -- for example, exception translation through the use of the `@Repository` annotation and AOP. For more information, see the dedicated http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#orm-exception-translation[section] in the Spring Framework documentation. NOTE: Depending on the underlying configuration, the factory can return a new connection or an existing connection (in case a pool or shared native connection is used). -The easiest way to work with a `ReactiveRedisConnectionFactory` is to configure the appropriate connector through the IoC container and inject it into the using class. +TIP: The easiest way to work with a `ReactiveRedisConnectionFactory` is to configure the appropriate connector through the IoC container and inject it into the using class. 
[[redis:reactive:connectors:lettuce]]
-=== Configuring Lettuce connector
+=== Configuring a Lettuce Connector

https://github.com/lettuce-io/lettuce-core[Lettuce] is supported by Spring Data Redis through the `org.springframework.data.redis.connection.lettuce` package.

-Setting up `ReactiveRedisConnectionFactory` for Lettuce can be done as follows:
+You can set up `ReactiveRedisConnectionFactory` for Lettuce as follows:

[source,java]
----
@@ -46,7 +46,7 @@ public ReactiveRedisConnectionFactory connectionFactory() {
}
----

-A more sophisticated configuration, including SSL and timeouts, using `LettuceClientConfigurationBuilder` might look like below:
+The following example shows a more sophisticated configuration, including SSL and timeouts, that uses `LettuceClientConfigurationBuilder`:

[source,java]
----
@@ -63,14 +63,14 @@ public ReactiveRedisConnectionFactory lettuceConnectionFactory() {
}
----

-For more detailed client configuration tweaks have a look at `LettuceClientConfiguration`.
+For more detailed client configuration tweaks, see https://docs.spring.io/spring-data/redis/docs/{revnumber}/api/org/springframework/data/redis/connection/lettuce/LettuceClientConfiguration.html[`LettuceClientConfiguration`].

[[redis:reactive:template]]
== Working with Objects through ReactiveRedisTemplate

-Most users are likely to use `ReactiveRedisTemplate` and its corresponding package `org.springframework.data.redis.core` - the template is in fact the central class of the Redis module due to its rich feature set. The template offers a high-level abstraction for Redis interactions. While `ReactiveRedisConnection` offers low level methods that accept and return binary values (`ByteBuffer`), the template takes care of serialization and connection management, freeing the user from dealing with such details.
+Most users are likely to use `ReactiveRedisTemplate` and its corresponding package, `org.springframework.data.redis.core`.
Due to its rich feature set, the template is, in fact, the central class of the Redis module. The template offers a high-level abstraction for Redis interactions. While `ReactiveRedisConnection` offers low-level methods that accept and return binary values (`ByteBuffer`), the template takes care of serialization and connection management, freeing you from dealing with such details.

-Moreover, the template provides operation views (following the grouping from Redis command http://redis.io/commands[reference]) that offer rich, generified interfaces for working against a certain type as described below:
+Moreover, the template provides operation views (following the grouping from Redis command http://redis.io/commands[reference]) that offer rich, generified interfaces for working against a certain type as described in the following table:

.Operational views
[width="80%",cols="<1,<2",options="header"]
@@ -81,13 +81,13 @@ Moreover, the template provides operation views (following the grouping from Red
2+^|_Key Type Operations_

|ReactiveGeoOperations
-|Redis geospatial operations like `GEOADD`, `GEORADIUS`,...)
+|Redis geospatial operations, such as `GEOADD` and `GEORADIUS`

|ReactiveHashOperations
|Redis hash operations

|ReactiveHyperLogLogOperations
-|Redis HyperLogLog operations like (`PFADD`, `PFCOUNT`,...)
+|Redis HyperLogLog operations, such as `PFADD` and `PFCOUNT`

|ReactiveListOperations
|Redis list operations

@@ -104,7 +104,9 @@ Moreover, the template provides operation views (following the grouping from Red
Once configured, the template is thread-safe and can be reused across multiple instances.

-Out of the box, `ReactiveRedisTemplate` uses a Java-based serializer for most of its operations. This means that any object written or read by the template will be serialized/deserialized through `RedisElementWriter` respective `RedisElementReader`.
The serialization context is passed to the template upon construction, and the Redis module offers several implementations available in the `org.springframework.data.redis.serializer` package - see <> for more information. +`ReactiveRedisTemplate` uses a Java-based serializer for most of its operations. This means that any object written or read by the template is serialized or deserialized through `RedisElementWriter` or `RedisElementReader`. The serialization context is passed to the template upon construction, and the Redis module offers several implementations available in the `org.springframework.data.redis.serializer` package. See <> for more information. + +The following example shows a `ReactiveRedisTemplate` being used to return a `Mono`: [source,java] ---- @@ -132,9 +134,9 @@ public class Example { ---- [[redis:reactive:string]] -== String-focused convenience classes +== String-focused Convenience Classes -Since it's quite common for keys and values stored in Redis to be a `java.lang.String`, the Redis module provides a String-based extension to `ReactiveRedisTemplate`: `ReactiveStringRedisTemplate` is a convenient one-stop solution for intensive `String` operations. In addition to being bound to `String` keys, the template uses the String-based `RedisSerializationContext` underneath which means the stored keys and values are human readable (assuming the same encoding is used both in Redis and your code). For example: +Since it is quite common for keys and values stored in Redis to be a `java.lang.String`, the Redis module provides a String-based extension to `ReactiveRedisTemplate`: `ReactiveStringRedisTemplate`. It is a convenient one-stop solution for intensive `String` operations. In addition to being bound to `String` keys, the template uses the String-based `RedisSerializationContext`, which means the stored keys and values are human readable (assuming the same encoding is used in both Redis and your code). 
The following example shows `ReactiveStringRedisTemplate` in use:

[source,java]
----
@@ -183,5 +185,4 @@ public class Example {
}
----

-Please refer to the <> for more details on scripting commands.
-
+See the <> for more details on scripting commands.
diff --git a/src/main/asciidoc/reference/redis-cluster.adoc b/src/main/asciidoc/reference/redis-cluster.adoc
index d747925281..4cf908f07e 100644
--- a/src/main/asciidoc/reference/redis-cluster.adoc
+++ b/src/main/asciidoc/reference/redis-cluster.adoc
@@ -1,12 +1,12 @@
[[cluster]]
= Redis Cluster

-Working with http://redis.io/topics/cluster-spec[Redis Cluster] requires a Redis Server version 3.0+ and provides a very own set of features and capabilities. Please refer to the http://redis.io/topics/cluster-tutorial[Cluster Tutorial] for more information.
+Working with http://redis.io/topics/cluster-spec[Redis Cluster] requires Redis Server version 3.0+. See the http://redis.io/topics/cluster-tutorial[Cluster Tutorial] for more information.

== Enabling Redis Cluster

-Cluster support is based on the very same building blocks as non clustered communication. `RedisClusterConnection` an extension to `RedisConnection` handles the communication with the Redis Cluster and translates errors into the Spring DAO exception hierarchy.
-`RedisClusterConnection` 's are created via the `RedisConnectionFactory` which has to be set up with the according `RedisClusterConfiguration`.
+Cluster support is based on the same building blocks as non-clustered communication. `RedisClusterConnection`, an extension to `RedisConnection`, handles the communication with the Redis Cluster and translates errors into the Spring DAO exception hierarchy.
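Besides programmatic setup, `RedisClusterConfiguration` can be seeded from configuration properties. The following hedged sketch (the in-memory `MapPropertySource` and the class name are illustrative; a real application would typically read the `spring.redis.cluster.*` keys from its environment) shows the `PropertySource`-based variant:

```java
import java.util.HashMap;
import java.util.Map;

import org.springframework.core.env.MapPropertySource;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

class ClusterConfigurationFromProperties {

	RedisConnectionFactory connectionFactory() {
		// Illustrative in-memory source for the documented property keys.
		Map<String, Object> source = new HashMap<>();
		source.put("spring.redis.cluster.nodes", "127.0.0.1:7379,127.0.0.1:7380,127.0.0.1:7381");
		source.put("spring.redis.cluster.max-redirects", "3");

		RedisClusterConfiguration clusterConfiguration =
				new RedisClusterConfiguration(new MapPropertySource("redis-cluster", source));
		return new LettuceConnectionFactory(clusterConfiguration);
	}
}
```

This is a configuration fragment only; creating connections from it requires a running Redis Cluster at the listed nodes.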
+`RedisClusterConnection` instances are created with the `RedisConnectionFactory`, which has to be set up with the associated `RedisClusterConfiguration`, as shown in the following example: .Sample RedisConnectionFactory Configuration for Redis Cluster ==== @@ -56,23 +56,25 @@ public class AppConfig { [TIP] ==== -`RedisClusterConfiguration` can also be defined via `PropertySource`. +`RedisClusterConfiguration` can also be defined through `PropertySource` and has the following properties: .Configuration Properties -- `spring.redis.cluster.nodes`: Comma delimited list of host:port pairs. +- `spring.redis.cluster.nodes`: Comma-delimited list of host:port pairs. - `spring.redis.cluster.max-redirects`: Number of allowed cluster redirections. ==== -NOTE: The initial configuration points driver libraries to an initial set of cluster nodes. Changes resulting from live cluster reconfiguration will only be kept in the native driver and not be written back to the configuration. +NOTE: The initial configuration points driver libraries to an initial set of cluster nodes. Changes resulting from live cluster reconfiguration are kept only in the native driver and are not written back to the configuration. == Working With Redis Cluster Connection -As mentioned above Redis Cluster behaves different from single node Redis or even a Sentinel monitored master slave environment. This is reasoned by the automatic sharding that maps a key to one of 16384 slots which are distributed across the nodes. Therefore commands that involve more than one key must assert that all keys map to the exact same slot in order to avoid cross slot execution errors. -Further on, hence a single cluster node, only serves a dedicated set of keys, commands issued against one particular server only return results for those keys served by the server. As a very simple example take the `KEYS` command. 
When issued to a server in cluster environment it only returns the keys served by the node the request is sent to and not necessarily all keys within the cluster. So to get all keys in cluster environment it is necessary to read the keys from at least all known master nodes.
+As mentioned earlier, Redis Cluster behaves differently from single-node Redis or even a Sentinel-monitored master-slave environment. This is because the automatic sharding maps a key to one of 16384 slots, which are distributed across the nodes. Therefore, commands that involve more than one key must assert that all keys map to the exact same slot to avoid cross-slot execution errors.
+A single cluster node serves only a dedicated set of keys. Commands issued against one particular server return results only for those keys served by that server. As a simple example, consider the `KEYS` command. When issued to a server in a cluster environment, it returns only the keys served by the node the request is sent to and not necessarily all keys within the cluster. So, to get all keys in a cluster environment, you must read the keys from all the known master nodes.
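The slot mapping just described can be sketched in plain Java. The following is an illustrative, framework-independent reimplementation of the Redis key-to-slot rule (CRC16/XMODEM modulo 16384, honoring `{...}` hash tags); it is not code taken from Spring Data Redis or the driver libraries:

```java
public class HashSlots {

    // CRC16/XMODEM (polynomial 0x1021, initial value 0), the checksum used by Redis Cluster.
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = (crc & 0x8000) != 0 ? (crc << 1) ^ 0x1021 : crc << 1;
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Maps a key to one of the 16384 cluster slots. If the key contains a
    // non-empty {...} hash tag, only the tag is hashed, so keys that share
    // a tag land in the same slot.
    static int slotOf(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }
}
```

Because only the hash tag is hashed when one is present, keys such as `{my-prefix}.thing1` and `{my-prefix}.thing2` land in the same slot, which is the pinning technique this chapter recommends for multi-key commands.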
+While redirects for specific keys to the corresponding slot-serving node are handled by the driver libraries, higher-level functions, such as collecting information across nodes or sending commands to all nodes in the cluster, are covered by `RedisClusterConnection`. Picking up the keys example from earlier, this means that the `keys(pattern)` method picks up every master node in the cluster and simultaneously executes the `KEYS` command on every master node while picking up the results and returning the cumulated set of keys. To request the keys of only a single node, `RedisClusterConnection` provides overloads for those methods (for example, `keys(node, pattern)`).

-A `RedisClusterNode` can be obtained from `RedisClusterConnection.clusterGetNodes` or it can be constructed using either host and port or the node Id.
+A `RedisClusterNode` can be obtained from `RedisClusterConnection.clusterGetNodes` or it can be constructed by using either the host and the port or the node Id.
+
+The following example shows a set of commands being run across the cluster:

.Sample of Running Commands Across the Cluster
====
[source,text]
----
@@ -90,8 +92,8 @@ b8b5ee... 127.0.0.1:7382 slave 6b38bb...
0 1449730618304 25 connected <
----
RedisClusterConnection connection = connectionFactory.getClusterConnnection();

-connection.set("foo", value); <5>
-connection.set("bar", value); <6>
+connection.set("thing1", value); <5>
+connection.set("thing2", value); <6>

connection.keys("*"); <7>

@@ -103,19 +105,19 @@ connection.keys(NODE_7382, "*"); <
<1> Master node serving slots 0 to 5460 replicated to slave at 7382
<2> Master node serving slots 5461 to 10922
<3> Master node serving slots 10923 to 16383
-<4> Slave node holding replicates of master at 7379
+<4> Slave node holding replicas of the master at 7379
<5> Request routed to node at 7381 serving slot 12182
<6> Request routed to node at 7379 serving slot 5061
-<7> Request routed to nodes at 7379, 7380, 7381 -> [foo, bar]
-<8> Request routed to node at 7379 -> [bar]
+<7> Request routed to nodes at 7379, 7380, 7381 -> [thing1, thing2]
+<8> Request routed to node at 7379 -> [thing2]
<9> Request routed to node at 7380 -> []
-<10> Request routed to node at 7381 -> [foo]
-<11> Request routed to node at 7382 -> [bar]
+<10> Request routed to node at 7381 -> [thing1]
+<11> Request routed to node at 7382 -> [thing2]
====

-Cross slot requests such as `MGET` are automatically served by the native driver library when all keys map to the same slot. However once this is not the case `RedisClusterConnection` executes multiple parallel `GET` commands against the slot serving nodes and again returns a cumulated result. Obviously this is less performing than the single slot execution and therefore should be used with care. In doubt please consider pinning keys to the same slot by providing a prefix in curly brackets like `{my-prefix}.foo` and `{my-prefix}.bar` which will both map to the same slot number.
+When all keys map to the same slot, the native driver library automatically serves cross-slot requests, such as `MGET`.
However, once this is not the case, `RedisClusterConnection` executes multiple parallel `GET` commands against the slot-serving nodes and again returns an accumulated result. This is less performant than the single-slot execution and, therefore, should be used with care. If in doubt, consider pinning keys to the same slot by providing a prefix in curly brackets, such as `{my-prefix}.thing1` and `{my-prefix}.thing2`, which will both map to the same slot number. The following example shows cross-slot request handling: -.Sample of Cross Slot Request Handling +.Sample of Cross-Slot Request Handling ==== [source,text] ---- @@ -129,33 +131,35 @@ redis-cli@127.0.0.1:7379 > cluster nodes ---- RedisClusterConnection connection = connectionFactory.getClusterConnnection(); -connection.set("foo", value); // slot: 12182 -connection.set("{foo}.bar", value); // slot: 12182 -connection.set("bar", value); // slot: 5461 +connection.set("thing1", value); // slot: 12182 +connection.set("{thing1}.thing2", value); // slot: 12182 +connection.set("thing2", value); // slot: 5461 -connection.mGet("foo", "{foo}.bar"); <2> +connection.mGet("thing1", "{thing1}.thing2"); <2> -connection.mGet("foo", "bar"); <3> +connection.mGet("thing1", "thing2"); <3> ---- <1> Same Configuration as in the sample before. -<2> Keys map to same slot -> 127.0.0.1:7381 MGET foo {foo}.bar +<2> Keys map to same slot -> 127.0.0.1:7381 MGET thing1 {thing1}.thing2 <3> Keys map to different slots and get split up into single slot ones routed to the according nodes + - -> 127.0.0.1:7379 GET bar + - -> 127.0.0.1:7381 GET foo + -> 127.0.0.1:7379 GET thing2 + + -> 127.0.0.1:7381 GET thing1 ==== -TIP: The above provided simple examples to demonstrate the general strategy followed by Spring Data Redis. Be aware that some operations might require loading huge amounts of data into memory in order to compute the desired command. 
Additionally not all cross slot requests can safely be ported to multiple single slot requests and will error if misused (eg. ``PFCOUNT``). +TIP: The preceding examples demonstrate the general strategy followed by Spring Data Redis. Be aware that some operations might require loading huge amounts of data into memory to compute the desired command. Additionally, not all cross-slot requests can safely be ported to multiple single slot requests and error if misused (for example, `PFCOUNT`). + +== Working with RedisTemplate and ClusterOperations -== Working With RedisTemplate and ClusterOperations +See the <> section for information about the general purpose, configuration, and usage of `RedisTemplate`. -Please refer to the section <> to read about general purpose, configuration and usage of `RedisTemplate`. +CAUTION: Be careful when setting up `RedisTemplate#keySerializer` using any of the JSON `RedisSerializers`, as changing JSON structure has immediate influence on hash slot calculation. -WARNING: Please be careful when setting up `RedisTemplate#keySerializer` using any of the JSON `RedisSerializers` as changing json structure has immediate influence on hash slot calculation. +`RedisTemplate` provides access to cluster-specific operations through the `ClusterOperations` interface, which can be obtained from `RedisTemplate.opsForCluster()`. This lets you explicitly run commands on a single node within the cluster while retaining the serialization and deserialization features configured for the template. It also provides administrative commands (such as `CLUSTER MEET`) or more high-level operations (for example, resharding). -`RedisTemplate` provides access to cluster specific operations via the `ClusterOperations` interface that can be obtained via `RedisTemplate.opsForCluster()`. 
This allows to execute commands explicitly on a single node within the cluster while retaining de-/serialization features configured for the template and provides administrative commands such as `CLUSTER MEET` or more high level operations for eg. resharding. +The following example shows how to access `RedisClusterConnection` with `RedisTemplate`: -.Accessing RedisClusterConnection via RedisTemplate +.Accessing `RedisClusterConnection` with `RedisTemplate` ==== [source,text] ---- @@ -164,4 +168,3 @@ clusterOps.shutdown(NODE_7379); <1> ---- <1> Shut down node at 7379 and cross fingers there is a slave in place that can take over. ==== - diff --git a/src/main/asciidoc/reference/redis-messaging.adoc b/src/main/asciidoc/reference/redis-messaging.adoc index 0d93e4db8b..ec13de8df0 100644 --- a/src/main/asciidoc/reference/redis-messaging.adoc +++ b/src/main/asciidoc/reference/redis-messaging.adoc @@ -1,16 +1,21 @@ [[pubsub]] -= Redis Messaging/PubSub += Redis Messaging and Pub/Sub -Spring Data provides dedicated messaging integration for Redis, very similar in functionality and naming to the JMS integration in Spring Framework; in fact, users familiar with the JMS support in Spring should feel right at home. +Spring Data provides dedicated messaging integration for Redis, similar in functionality and naming to the JMS integration in Spring Framework. -Redis messaging can be roughly divided into two areas of functionality, namely the production or publication and consumption or subscription of messages, hence the shortcut pubsub (Publish/Subscribe). The `RedisTemplate` class is used for message production. For asynchronous reception similar to Java EE's message-driven bean style, Spring Data provides a dedicated message listener container that is used to create Message-Driven POJOs (MDPs) and for synchronous reception, the `RedisConnection` contract. 
+Redis messaging can be roughly divided into two areas of functionality: -The package `org.springframework.data.redis.connection` and `org.springframework.data.redis.listener` provide the core functionality for using Redis messaging. +* Publication or production of messages +* Subscription or consumption of messages + +This is an example of the pattern often called Publish/Subscribe (Pub/Sub for short). The `RedisTemplate` class is used for message production. For asynchronous reception similar to Java EE's message-driven bean style, Spring Data provides a dedicated message listener container that is used to create Message-Driven POJOs (MDPs) and, for synchronous reception, the `RedisConnection` contract. + +The `org.springframework.data.redis.connection` and `org.springframework.data.redis.listener` packages provide the core functionality for Redis messaging. [[redis:pubsub:publish]] -== Sending/Publishing messages +== Publishing or Sending Messages -To publish a message, one can use, as with the other operations, either the low-level `RedisConnection` or the high-level `RedisTemplate`. Both entities offer the `publish` method that accepts as an argument the message that needs to be sent as well as the destination channel. While `RedisConnection` requires raw-data (array of bytes), the `RedisTemplate` allow arbitrary objects to be passed in as messages: +To publish a message, you can use, as with the other operations, either the low-level `RedisConnection` or the high-level `RedisTemplate`. Both entities offer the `publish` method, which accepts the message and the destination channel as arguments. 
While `RedisConnection` requires raw data (array of bytes), the `RedisTemplate` lets arbitrary objects be passed in as messages, as shown in the following example:

[source,java]
----
@@ -23,35 +28,36 @@ template.convertAndSend("hello!", "world");
----

[[redis:pubsub:subscribe]]
-== Receiving/Subscribing for messages
+== Subscribing to or Receiving Messages

-On the receiving side, one can subscribe to one or multiple channels either by naming them directly or by using pattern matching. The latter approach is quite useful as it not only allows multiple subscriptions to be created with one command but to also listen on channels not yet created at subscription time (as long as they match the pattern).
+On the receiving side, you can subscribe to one or multiple channels, either by naming them directly or by using pattern matching. The latter approach is quite useful, as it not only lets multiple subscriptions be created with one command but also lets you listen on channels not yet created at subscription time (as long as they match the pattern).

-At the low-level, `RedisConnection` offers `subscribe` and `pSubscribe` methods that map the Redis commands for subscribing by channel respectively by pattern. Note that multiple channels or patterns can be used as arguments. To change the subscription of a connection or simply query whether it is listening or not, `RedisConnection` provides `getSubscription` and `isSubscribed` method.
+At the low level, `RedisConnection` offers the `subscribe` and `pSubscribe` methods that map the Redis commands for subscribing by channel or by pattern, respectively. Note that multiple channels or patterns can be used as arguments. To change the subscription of a connection or query whether it is listening, `RedisConnection` provides the `getSubscription` and `isSubscribed` methods.

-NOTE: Subscription commands in Spring Data Redis are blocking.
That is, calling subscribe on a connection will cause the current thread to block as it will start waiting for messages - the thread will be released only if the subscription is canceled, that is an additional thread invokes `unsubscribe` or `pUnsubscribe` on the *same* connection. See <> below for a solution to this problem.
+NOTE: Subscription commands in Spring Data Redis are blocking. That is, calling subscribe on a connection causes the current thread to block as it starts waiting for messages. The thread is released only if the subscription is canceled, which happens when another thread invokes `unsubscribe` or `pUnsubscribe` on the *same* connection. See "`<>`" (later in this document) for a solution to this problem.

-As mentioned above, once subscribed a connection starts waiting for messages. No other commands can be invoked on it except for adding new subscriptions or modifying/canceling the existing ones. That is, invoking anything other then `subscribe`, `pSubscribe`, `unsubscribe`, or `pUnsubscribe` is illegal and will throw an exception.
+As mentioned earlier, once subscribed, a connection starts waiting for messages. Only commands that add new subscriptions, modify existing subscriptions, or cancel existing subscriptions are allowed. Invoking anything other than `subscribe`, `pSubscribe`, `unsubscribe`, or `pUnsubscribe` throws an exception.

-In order to subscribe for messages, one needs to implement the `MessageListener` callback: each time a new message arrives, the callback gets invoked and the user code executed through `onMessage` method. The interface gives access not only to the actual message but to the channel it has been received through and the pattern (if any) used by the subscription to match the channel. This information allows the callee to differentiate between various messages not just by content but also through data.
+In order to subscribe to messages, you need to implement the `MessageListener` callback.
Each time a new message arrives, the callback gets invoked and the user code gets run by the `onMessage` method. The interface gives access not only to the actual message but also to the channel it has been received through and the pattern (if any) used by the subscription to match the channel. This information lets the callee differentiate between various messages not just by content but also by examining additional details.

[[redis:pubsub:subscribe:containers]]
=== Message Listener Containers

-Due to its blocking nature, low-level subscription is not attractive as it requires connection and thread management for every single listener. To alleviate this problem, Spring Data offers `RedisMessageListenerContainer` which does all the heavy lifting on behalf of the user - users familiar with EJB and JMS should find the concepts familiar as it is designed as close as possible to the support in Spring Framework and its message-driven POJOs (MDPs)
+Due to its blocking nature, low-level subscription is not attractive, as it requires connection and thread management for every single listener. To alleviate this problem, Spring Data offers `RedisMessageListenerContainer`, which does all the heavy lifting. If you are familiar with EJB and JMS, you should find the concepts familiar, as it is designed to be as close as possible to the support in Spring Framework and its message-driven POJOs (MDPs).

-`RedisMessageListenerContainer` acts as a message listener container; it is used to receive messages from a Redis channel and drive the `MessageListener` s that are injected into it. The listener container is responsible for all threading of message reception and dispatches into the listener for processing. A message listener container is the intermediary between an MDP and a messaging provider, and takes care of registering to receive messages, resource acquisition and release, exception conversion and the like.
This allows you as an application developer to write the (possibly complex) business logic associated with receiving a message (and reacting to it), and delegates boilerplate Redis infrastructure concerns to the framework. +`RedisMessageListenerContainer` acts as a message listener container. It is used to receive messages from a Redis channel and drive the `MessageListener` instances that are injected into it. The listener container is responsible for all threading of message reception and dispatches into the listener for processing. A message listener container is the intermediary between an MDP and a messaging provider and takes care of registering to receive messages, resource acquisition and release, exception conversion, and the like. This lets you as an application developer write the (possibly complex) business logic associated with receiving a message (and reacting to it) and delegates boilerplate Redis infrastructure concerns to the framework. -Furthermore, to minimize the application footprint, `RedisMessageListenerContainer` allows one connection and one thread to be shared by multiple listeners even though they do not share a subscription. Thus no matter how many listeners or channels an application tracks, the runtime cost will remain the same through out its lifetime. Moreover, the container allows runtime configuration changes so one can add or remove listeners while an application is running without the need for restart. Additionally, the container uses a lazy subscription approach, using a `RedisConnection` only when needed - if all the listeners are unsubscribed, cleanup is automatically performed and the used thread released. +Furthermore, to minimize the application footprint, `RedisMessageListenerContainer` lets one connection and one thread be shared by multiple listeners even though they do not share a subscription. 
+Thus, no matter how many listeners or channels an application tracks, the runtime cost remains the same throughout its lifetime. Moreover, the container allows runtime configuration changes so that you can add or remove listeners while an application is running without the need for a restart. Additionally, the container uses a lazy subscription approach, using a `RedisConnection` only when needed. If all the listeners are unsubscribed, cleanup is automatically performed, and the thread is released.

-To help with the asynch manner of messages, the container requires a `java.util.concurrent.Executor` ( or Spring's `TaskExecutor`) for dispatching the messages. Depending on the load, the number of listeners or the runtime environment, one should change or tweak the executor to better serve her needs - in particular in managed environments (such as app servers), it is highly recommended to pick a a proper `TaskExecutor` to take advantage of its runtime.
+To help with the asynchronous nature of messages, the container requires a `java.util.concurrent.Executor` (or Spring's `TaskExecutor`) for dispatching the messages. Depending on the load, the number of listeners, or the runtime environment, you should change or tweak the executor to better serve your needs. In particular, in managed environments (such as app servers), it is highly recommended to pick a proper `TaskExecutor` to take advantage of its runtime.
+// TODO How can one know which is "proper"?

[[redis:pubsub:subscribe:adapter]]
=== The MessageListenerAdapter

-The `MessageListenerAdapter` class is the final component in Spring's asynchronous messaging support: in a nutshell, it allows you to expose almost *any* class as a MDP (there are of course some constraints).
+The `MessageListenerAdapter` class is the final component in Spring's asynchronous messaging support. In a nutshell, it lets you expose almost *any* class as an MDP (though there are some constraints).

-Consider the following interface definition.
Notice that although the interface doesn't extend the `MessageListener` interface, it can still be used as a MDP via the use of the `MessageListenerAdapter` class. Notice also how the various message handling methods are strongly typed according to the *contents* of the various `Message` types that they can receive and handle. In addition, the channel or pattern to which a message is sent can be passed in to the method as the second argument of type String:
+Consider the following interface definition:

[source,java]
----
@@ -64,6 +70,8 @@ public interface MessageDelegate {
}
----

+Notice that, although the interface does not extend the `MessageListener` interface, it can still be used as an MDP by using the `MessageListenerAdapter` class. Notice also how the various message handling methods are strongly typed according to the *contents* of the various `Message` types that they can receive and handle. In addition, the channel or pattern to which a message is sent can be passed in to the method as the second argument of type `String`:
+
[source,java]
----
public class DefaultMessageDelegate implements MessageDelegate {
@@ -71,7 +79,7 @@ public class DefaultMessageDelegate implements MessageDelegate {
}
----

-In particular, note how the above implementation of the `MessageDelegate` interface (the above `DefaultMessageDelegate` class) has *no* Redis dependencies at all. It truly is a POJO that we will make into an MDP via the following configuration.
+Notice how the preceding implementation of the `MessageDelegate` interface (the `DefaultMessageDelegate` class) has *no* Redis dependencies at all. It truly is a POJO that we make into an MDP with the following configuration:
`topic="*room"`) +NOTE: The listener topic can be either a channel (for example, `topic="chatroom"`) or a pattern (for example, `topic="*room"`). -The example above uses the Redis namespace to declare the message listener container and automatically register the POJOs as listeners. The full blown, *beans* definition is displayed below: +The preceding example uses the Redis namespace to declare the message listener container and automatically register the POJOs as listeners. The full-blown beans definition follows: [source,xml] ---- @@ -119,5 +127,4 @@ The example above uses the Redis namespace to declare the message listener conta ---- -Each time a message is received, the adapter automatically performs translation (using the configured `RedisSerializer`) between the low-level format and the required object type transparently. Any exception caused by the method invocation is caught and handled by the container (by default, being logged). - +Each time a message is received, the adapter automatically and transparently performs translation (using the configured `RedisSerializer`) between the low-level format and the required object type. Any exception caused by the method invocation is caught and handled by the container (by default, exceptions are logged). diff --git a/src/main/asciidoc/reference/redis-repositories.adoc b/src/main/asciidoc/reference/redis-repositories.adoc index 4bef9ddbee..f8c331861f 100644 --- a/src/main/asciidoc/reference/redis-repositories.adoc +++ b/src/main/asciidoc/reference/redis-repositories.adoc @@ -1,14 +1,14 @@ [[redis.repositories]] = Redis Repositories -Working with Redis Repositories allows to seamlessly convert and store domain objects in Redis Hashes, apply custom mapping strategies and make use of secondary indexes. +Working with Redis Repositories lets you seamlessly convert and store domain objects in Redis Hashes, apply custom mapping strategies, and use secondary indexes.
-WARNING: Redis Repositories requires at least Redis Server version 2.8.0. +IMPORTANT: Redis Repositories requires at least Redis Server version 2.8.0. [[redis.repositories.usage]] == Usage -To access domain entities stored in a Redis you can leverage repository support that eases implementing those quite significantly. +Spring Data Redis lets you easily implement domain entities, as shown in the following example: .Sample Person Entity ==== @@ -25,12 +25,11 @@ public class Person { ---- ==== -We have a pretty simple domain object here. Note that it has a property named `id` annotated with `org.springframework.data.annotation.Id` and a `@RedisHash` annotation on its type. -Those two are responsible for creating the actual key used to persist the hash. +We have a pretty simple domain object here. Note that it has a `@RedisHash` annotation on its type and a property named `id` that is annotated with `org.springframework.data.annotation.Id`. Those two items are responsible for creating the actual key used to persist the hash. NOTE: Properties annotated with `@Id` as well as those named `id` are considered as the identifier properties. Those with the annotation are favored over others. -To now actually have a component responsible for storage and retrieval we need to define a repository interface. +To now actually have a component responsible for storage and retrieval, we need to define a repository interface, as shown in the following example: .Basic Repository Interface To Persist Person Entities ==== @@ -42,7 +41,7 @@ public interface PersonRepository extends CrudRepository { ---- ==== -As our repository extends `CrudRepository` it provides basic CRUD and finder operations. The thing we need in between to glue things together is the according Spring configuration. +As our repository extends `CrudRepository`, it provides basic CRUD and finder operations. 
The thing we need in between to glue things together is the corresponding Spring configuration, shown in the following example: .JavaConfig for Redis Repositories ==== @@ -67,7 +66,7 @@ public class ApplicationConfig { ---- ==== -Given the setup above we can go on and inject `PersonRepository` into our components. +Given the preceding setup, we can inject `PersonRepository` into our components, as shown in the following example: .Access to Person Entities ==== @@ -89,17 +88,17 @@ public void basicCrudOperations() { repo.delete(rand); <4> } ---- -<1> Generates a new id if current value is `null` or reuses an already set id value and stores properties of type `Person` inside the Redis Hash with key with pattern `keyspace:id` in this case eg. `persons:5d67b7e1-8640-4475-beeb-c666fab4c0e5`. -<2> Uses the provided id to retrieve the object stored at `keyspace:id`. -<3> Counts the total number of entities available within the keyspace _persons_ defined by `@RedisHash` on `Person`. +<1> Generates a new `id` if the current value is `null` or reuses an already set `id` value and stores properties of type `Person` inside the Redis Hash with a key that has a pattern of `keyspace:id` -- in this case, it might be `persons:5d67b7e1-8640-4475-beeb-c666fab4c0e5`. +<2> Uses the provided `id` to retrieve the object stored at `keyspace:id`. +<3> Counts the total number of entities available within the keyspace, `persons`, defined by `@RedisHash` on `Person`. <4> Removes the key for the given object from Redis. ==== [[redis.repositories.mapping]] -== Object to Hash Mapping -The Redis Repository support persists Objects in Hashes. This requires an Object to Hash conversion which is done by a `RedisConverter`. +== Object-to-Hash Mapping +The Redis Repository support persists Objects in Hashes. This requires an Object-to-Hash conversion, which is done by a `RedisConverter`.
The default implementation uses `Converter` for mapping property values to and from Redis native `byte[]`. -Given the `Person` type from the previous sections the default mapping looks like the following: +Given the `Person` type from the previous sections, the default mapping looks like the following: ==== [source,text] @@ -111,11 +110,13 @@ lastname = al’thor address.city = emond's field <3> address.country = andor ---- -<1> The `_class` attribute is included on root level as well as on any nested interface or abstract types. +<1> The `_class` attribute is included on the root level as well as on any nested interface or abstract types. <2> Simple property values are mapped by path. <3> Properties of complex types are mapped by their dot path. ==== +The following table describes the default mapping rules: + [cols="1,2,3", options="header"] .Default Mapping Rules |=== @@ -124,13 +125,13 @@ address.country = andor | Mapped Value | Simple Type + -(eg. String) +(for example, String) | String firstname = "rand"; | firstname = "rand" | Complex Type + -(eg. Address) -| Address adress = new Address("emond's field"); +(for example, Address) +| Address address = new Address("emond's field"); | address.city = "emond's field" | List + @@ -158,9 +159,13 @@ of Complex Type addresses.[work].city = "... |=== -WARNING: Due to the flat representation structure Map keys need to be simple types such as ``String``s or ``Number``s. +CAUTION: Due to the flat representation structure, Map keys need to be simple types, such as ``String`` or ``Number``. + +Mapping behavior can be customized by registering the corresponding `Converter` in `RedisCustomConversions`. Those converters can take care of converting from and to a single `byte[]` as well as `Map`. The first one is suitable for converting a complex type to (for example) a binary JSON representation that still uses the default mappings hash structure. The second option offers full control over the resulting hash.
+ +WARNING: Writing objects to a Redis hash deletes the content from the hash and re-creates the whole hash, so data that has not been mapped is lost. -Mapping behavior can be customized by registering the according `Converter` in `RedisCustomConversions`. Those converters can take care of converting from/to a single `byte[]` as well as `Map` whereas the first one is suitable for eg. converting one complex type to eg. a binary JSON representation that still uses the default mappings hash structure. The second option offers full control over the resulting hash. Writing objects to a Redis hash will delete the content from the hash and re-create the whole hash, so not mapped data will be lost. +The following example shows two sample byte array converters: .Sample byte[] Converters ==== @@ -202,7 +207,7 @@ public class BytesToAddressConverter implements Converter { ---- ==== -Using the above byte[] `Converter` produces eg. +Using the preceding byte array `Converter` produces output similar to the following: ==== [source,text] ---- @@ -214,6 +219,7 @@ address = { city : "emond's field", country : "andor" } ---- ==== +The following example shows two examples of `Map` converters: .Sample Map Converters ==== @@ -239,7 +245,7 @@ public class MapToAddressConverter implements Converter> will still be created even for custom converted types. +NOTE: Custom conversions have no effect on index resolution. <> are still created, even for custom converted types. -=== Customizing type mapping +=== Customizing Type Mapping -In case you want to avoid writing the entire Java class name as type information but rather like to use some key you can use the `@TypeAlias` annotation at the entity class being persisted. If you need to customize the mapping even more have a look at the `TypeInformationMapper` interface. An instance of that interface can be configured at the `DefaultRedisTypeMapper` which can be configured on `MappingRedisConverter`. 
+If you want to avoid writing the entire Java class name as type information and would rather use a key, you can use the `@TypeAlias` annotation on the entity class being persisted. If you need to customize the mapping even more, look at the https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/convert/TypeInformationMapper.html[`TypeInformationMapper`] interface. An instance of that interface can be configured on the `DefaultRedisTypeMapper`, which, in turn, can be configured on `MappingRedisConverter`. -.Defining `@TypeAlias` for an Entity +The following example shows how to define a type alias for an entity: + +.Defining `@TypeAlias` for an entity ==== [source,java] ---- @@ -269,11 +277,11 @@ class Person { ---- ==== -Note that the resulting document will contain `pers` as the value in a `_class` field. +The resulting document contains `pers` as the value in a `_class` field. -==== Configuring custom type mapping +==== Configuring Custom Type Mapping -The following example demonstrates how to configure a custom `RedisTypeMapper` in `MappingRedisConverter`. +The following example demonstrates how to configure a custom `RedisTypeMapper` in `MappingRedisConverter`: .Configuring a custom `RedisTypeMapper` via Spring Java Config ==== [source,java] ---- @@ -283,7 +291,6 @@ class CustomRedisTypeMapper extends DefaultRedisTypeMapper { //implement custom type mapping here } ---- -==== [source,java] ---- @@ -308,13 +315,16 @@ class SampleRedisConfiguration { } } ---- +==== [[redis.repositories.keyspaces]] == Keyspaces -Keyspaces define prefixes used to create the actual _key_ for the Redis Hash. -By default the prefix is set to `getClass().getName()`. This default can be altered via `@RedisHash` on aggregate root level or by setting up a programmatic configuration. However, the annotated keyspace supersedes any other configuration. +Keyspaces define prefixes used to create the actual key for the Redis Hash.
+By default, the prefix is set to `getClass().getName()`. You can alter this default by setting `@RedisHash` on the aggregate root level or by setting up a programmatic configuration. However, the annotated keyspace supersedes any other configuration. + +The following example shows how to set the keyspace configuration with the `@EnableRedisRepositories` annotation: -.Keyspace Setup via @EnableRedisRepositories +.Keyspace Setup via `@EnableRedisRepositories` ==== [source,java] ---- @@ -335,6 +345,8 @@ public class ApplicationConfig { ---- ==== +The following example shows how to programmatically set the keyspace: + .Programmatic Keyspace setup ==== [source,java] @@ -370,7 +382,7 @@ http://redis.io/topics/indexes[Secondary indexes] are used to enable lookup oper [[redis.repositories.indexes.simple]] === Simple Property Index -Given the sample `Person` entity we can create an index for _firstname_ by annotating the property with `@Indexed`. +Given the sample `Person` entity shown earlier, we can create an index for `firstname` by annotating the property with `@Indexed`, as shown in the following example: .Annotation driven indexing ==== @@ -387,7 +399,7 @@ public class Person { ---- ==== -Indexes are built up for actual property values. Saving two Persons eg. "rand" and "aviendha" results in setting up indexes like below. +Indexes are built up for actual property values. Saving two Persons (for example, "rand" and "aviendha") results in setting up indexes similar to the following: ==== [source,text] @@ -397,7 +409,7 @@ SADD persons:firstname:aviendha a9d4b3a0-50d3-4538-a2fc-f7fc2581ee56 ---- ==== -It is also possible to have indexes on nested elements. Assume `Address` has a _city_ property that is annotated with `@Indexed`. In that case, once `person.address.city` is not `null`, we have Sets for each city. +It is also possible to have indexes on nested elements. Assume `Address` has a `city` property that is annotated with `@Indexed`. 
In that case, once `person.address.city` is not `null`, we have Sets for each city, as shown in the following example: ==== [source,text] @@ -406,7 +418,7 @@ SADD persons:address.city:tear e2c7dcee-b8cd-4424-883e-736ce564363e ---- ==== -Further more the programmatic setup allows to define indexes on map keys and list properties. +Furthermore, the programmatic setup lets you define indexes on map keys and list properties, as shown in the following example: ==== [source,java] @@ -426,11 +438,11 @@ public class Person { <3> `SADD persons:addresses.city:tear e2c7dcee-b8cd-4424-883e-736ce564363e` ==== -WARNING: Indexes will not be resolved on <>. +CAUTION: Indexes cannot be resolved on <>. -Same as with _keyspaces_ it is possible to configure indexes without the need of annotating the actual domain type. +As with keyspaces, you can configure indexes without needing to annotate the actual domain type, as shown in the following example: -.Index Setup via @EnableRedisRepositories +.Index Setup with @EnableRedisRepositories ==== [source,java] ---- @@ -451,6 +463,8 @@ public class ApplicationConfig { ---- ==== +Again, as with keyspaces, you can programmatically configure indexes, as shown in the following example: + .Programmatic Index setup ==== [source,java] @@ -482,7 +496,7 @@ public class ApplicationConfig { [[redis.repositories.indexes.geospatial]] === Geospatial Index -Assume the `Address` type contains a property `location` of type `Point` that holds the geo coordinates of the particular address. By annotating the property with `@GeoIndexed` those values will be added using Redis `GEO` commands. +Assume the `Address` type contains a `location` property of type `Point` that holds the geo coordinates of the particular address. 
By annotating the property with `@GeoIndexed`, Spring Data Redis adds those values by using Redis `GEO` commands, as shown in the following example: ==== [source,java] ---- @@ -515,25 +529,24 @@ repository.save(rand); <3 repository.findByAddressLocationNear(new Point(15D, 37D), new Distance(200)); <4> ---- -<1> Query method declaration on nested property using Point and Distance. -<2> Query method declaration on nested property using Circle to search within. +<1> Query method declaration on a nested property, using `Point` and `Distance`. +<2> Query method declaration on a nested property, using `Circle` to search within. <3> `GEOADD persons:address:location 13.361389 38.115556 e2c7dcee-b8cd-4424-883e-736ce564363e` <4> `GEORADIUS persons:address:location 15.0 37.0 200.0 km` ==== -In the above example the lon/lat values are stored using `GEOADD` using the objects `id` as the member's name. The finder methods allow usage of `Circle` or `Point, Distance` combinations for querying those values. +In the preceding example, the longitude and latitude values are stored by using `GEOADD`, with the object's `id` as the member's name. The finder methods allow usage of `Circle` or `Point, Distance` combinations for querying those values. -NOTE: It is **not** possible to combine `near`/`within` with other criteria. +NOTE: It is **not** possible to combine `near` and `within` with other criteria. include::../{spring-data-commons-include}/query-by-example.adoc[leveloffset=+1] include::query-by-example.adoc[leveloffset=+1] [[redis.repositories.expirations]] == Time To Live -Objects stored in Redis may only be valid for a certain amount of time. This is especially useful for persisting short lived objects in Redis without having to remove them manually when they reached their end of life. -The expiration time in seconds can be set via `@RedisHash(timeToLive=...)` as well as via `KeyspaceSettings` (see <>). +Objects stored in Redis may be valid only for a certain amount of time.
This is especially useful for persisting short-lived objects in Redis without having to remove them manually when they reach their end of life. The expiration time in seconds can be set with `@RedisHash(timeToLive=...)` as well as by using `KeyspaceSettings` (see <>). -More flexible expiration times can be set by using the `@TimeToLive` annotation on either a numeric property or method. However do not apply `@TimeToLive` on both a method and a property within the same class. +More flexible expiration times can be set by using the `@TimeToLive` annotation on either a numeric property or a method. However, do not apply `@TimeToLive` on both a method and a property within the same class. The following example shows the `@TimeToLive` annotation on a property and on a method: .Expirations ==== @@ -561,30 +574,26 @@ public class TimeToLiveOnMethod { ---- ==== -NOTE: Annotating a property explicitly with `@TimeToLive` will read back the actual `TTL` or `PTTL` value from Redis. -1 indicates that the object has no expire associated. +NOTE: Annotating a property explicitly with `@TimeToLive` reads back the actual `TTL` or `PTTL` value from Redis. -1 indicates that the object has no associated expiration. The repository implementation ensures subscription to http://redis.io/topics/notifications[Redis keyspace notifications] via `RedisMessageListenerContainer`. -When the expiration is set to a positive value the according `EXPIRE` command is executed. -Additionally to persisting the original, a _phantom_ copy is persisted in Redis and set to expire 5 minutes after the original one. This is done to enable the Repository support to publish `RedisKeyExpiredEvent` holding the expired value via Springs `ApplicationEventPublisher` whenever a key expires even though the original values have already been gone. Expiry events -will be received on all connected applications using Spring Data Redis repositories. 
+When the expiration is set to a positive value, the corresponding `EXPIRE` command is executed. In addition to persisting the original, a phantom copy is persisted in Redis and set to expire five minutes after the original one. This is done to enable the Repository support to publish `RedisKeyExpiredEvent`, holding the expired value, through Spring's `ApplicationEventPublisher` whenever a key expires, even though the original values have already been removed. Expiry events are received on all connected applications that use Spring Data Redis repositories. -By default, the key expiry listener is disabled when initializing the application. The startup mode can be adjusted in `@EnableRedisRepositories` or `RedisKeyValueAdapter` to start the listener with the application or upon the first insert of an entity with a TTL. See `EnableKeyspaceEvents` for possible values. +By default, the key expiry listener is disabled when initializing the application. The startup mode can be adjusted in `@EnableRedisRepositories` or `RedisKeyValueAdapter` to start the listener with the application or upon the first insert of an entity with a TTL. See https://docs.spring.io/spring-data/redis/docs/{revnumber}/api/org/springframework/data/redis/core/RedisKeyValueAdapter.EnableKeyspaceEvents.html[`EnableKeyspaceEvents`] for possible values. -The `RedisKeyExpiredEvent` will hold a copy of the actually expired domain object as well as the key. +The `RedisKeyExpiredEvent` holds a copy of the expired domain object as well as the key. -NOTE: Delaying or disabling the expiry event listener startup impacts `RedisKeyExpiredEvent` publishing. A disabled event listener will not publish expiry events. A delayed startup can cause loss of events because the delayed -listener initialization. +NOTE: Delaying or disabling the expiry event listener startup impacts `RedisKeyExpiredEvent` publishing. A disabled event listener does not publish expiry events.
A delayed startup can cause loss of events because of the delayed listener initialization. -NOTE: The keyspace notification message listener will alter `notify-keyspace-events` settings in Redis if those are not already set. Existing settings will not be overridden, so it is left to the user to set those up correctly when not leaving them empty. Please note that `CONFIG` is disabled on AWS ElastiCache and enabling the listener leads to an error. +NOTE: The keyspace notification message listener alters `notify-keyspace-events` settings in Redis, if those are not already set. Existing settings are not overridden, so you must set up those settings correctly (or leave them empty). Note that `CONFIG` is disabled on AWS ElastiCache, and enabling the listener leads to an error. -NOTE: Redis Pub/Sub messages are not persistent. If a key expires while the application is down the expiry event will not be processed which may lead to secondary indexes containing still references to the expired object. +NOTE: Redis Pub/Sub messages are not persistent. If a key expires while the application is down, the expiry event is not processed, which may lead to secondary indexes containing references to the expired object. [[redis.repositories.references]] == Persisting References Marking properties with `@Reference` allows storing a simple key reference instead of copying values into the hash itself. -On loading from Redis, references are resolved automatically and mapped back into the object. +On loading from Redis, references are resolved automatically and mapped back into the object, as shown in the following example: .Sample Property Reference ==== @@ -599,13 +608,13 @@ mother = persons:a9d4b3a0-50d3-4538-a2fc-f7fc2581ee56 <1> <1> Reference stores the whole key (`keyspace:id`) of the referenced object. ==== -WARNING: Referenced Objects are not subject of persisting changes when saving the referencing object.
Please make sure to persist changes on referenced objects separately, since only the reference will be stored. -Indexes set on properties of referenced types will not be resolved. +WARNING: Referenced Objects are not persisted when the referencing object is saved. You must persist changes on referenced objects separately, since only the reference is stored. Indexes set on properties of referenced types are not resolved. [[redis.repositories.partial-updates]] == Persisting Partial Updates -In some cases it is not necessary to load and rewrite the entire entity just to set a new value within it. A session timestamp for last active time might be such a scenario where you just want to alter one property. -`PartialUpdate` allows to define `set` and `delete` actions on existing objects while taking care of updating potential expiration times of the entity itself as well as index structures. + +In some cases, you need not load and rewrite the entire entity just to set a new value within it. A session timestamp for the last active time might be such a scenario where you want to alter one property. +`PartialUpdate` lets you define `set` and `delete` actions on existing objects while taking care of updating potential expiration times of both the entity itself and index structures. The following example shows a partial update: .Sample Partial Update ==== @@ -630,19 +639,20 @@ update = new PartialUpdate("e2c7dcee", Person.class) template.update(update); ---- -<1> Set the simple property _firstname_ to _mat_. -<2> Set the simple property _address.city_ to _emond's field_ without having to pass in the entire object. This does not work when a custom conversion is registered. -<3> Remove the property _age_. -<4> Set complex property _address_. -<5> Set a map/collection of values removes the previously existing map/collection and replaces the values with the given ones. +<1> Set the simple `firstname` property to `mat`. 
+<2> Set the simple `address.city` property to `emond's field` without having to pass in the entire object. This does not work when a custom conversion is registered. +<3> Remove the `age` property. +<4> Set the complex `address` property. +<5> Set a map of values, which removes the previously existing map and replaces the values with the given ones. <6> Automatically update the server expiration time when altering <>. ==== -NOTE: Updating complex objects as well as map/collection structures requires further interaction with Redis to determine existing values which means that it might turn out that rewriting the entire entity might be faster. +NOTE: Updating complex objects as well as map (or other collection) structures requires further interaction with Redis to determine existing values, which means that rewriting the entire entity might be faster. [[redis.repositories.queries]] == Queries and Query Methods - +Query methods allow automatic derivation of simple finder queries from the method name, as shown in the following example: .Sample Repository finder Method ==== @@ -660,7 +670,7 @@ NOTE: Please make sure properties used in finder methods are set up for indexing NOTE: Query methods for Redis repositories support only queries for entities and collections of entities with paging. -Using derived query methods might not always be sufficient to model the queries to execute. `RedisCallback` offers more control over the actual matching of index structures or even custom added ones. All it takes is providing a `RedisCallback` that returns a single or `Iterable` set of _id_ values. +Using derived query methods might not always be sufficient to model the queries to execute. `RedisCallback` offers more control over the actual matching of index structures or even custom indexes.
To do so, provide a `RedisCallback` that returns a single or `Iterable` set of `id` values, as shown in the following example: .Sample finder using RedisCallback ==== [source,java] ---- @@ -677,9 +687,9 @@ List sessionsByUser = template.find(new RedisCallback> ---- ==== -Here's an overview of the keywords supported for Redis and what a method containing that keyword essentially translates to. -==== +The following table provides an overview of the keywords supported for Redis and what a method containing that keyword essentially translates to: +==== .Supported keywords inside method names [options = "header, autowidth"] |=============== @@ -692,14 +702,15 @@ Here's an overview of the keywords supported for Redis and what a method contain ==== [[redis.repositories.cluster]] -== Redis Repositories running on Cluster +== Redis Repositories Running on a Cluster -Using the Redis repository support in a clustered Redis environment is fine. Please see the <> section for `ConnectionFactory` configuration details. -Still some considerations have to be done as the default key distribution will spread entities and secondary indexes through out the whole cluster and its slots. +You can use the Redis repository support in a clustered Redis environment. See the "`<>`" section for `ConnectionFactory` configuration details. Still, some additional configuration must be done, because the default key distribution spreads entities and secondary indexes throughout the whole cluster and its slots.
+ +The following table shows the details of data on a cluster (based on previous examples): [options = "header, autowidth"] |=============== -|key|type|slot|node +|Key|Type|Slot|Node |persons:e2c7dcee-b8cd-4424-883e-736ce564363e|id for hash|15171|127.0.0.1:7381 |persons:a9d4b3a0-50d3-4538-a2fc-f7fc2581ee56|id for hash|7373|127.0.0.1:7380 |persons:firstname:rand|index|1700|127.0.0.1:7379 @@ -707,12 +718,11 @@ Still some considerations have to be done as the default key distribution will s |=============== ==== -Some commands like `SINTER` and `SUNION` can only be processed on the Server side when all involved keys map to the same slot. Otherwise computation has to be done on client side. -Therefore it be useful to pin keyspaces to a single slot which allows to make use of Redis serverside computation right away. +Some commands (such as `SINTER` and `SUNION`) can only be processed on the server side when all involved keys map to the same slot. Otherwise, computation has to be done on the client side. Therefore, it is useful to pin keyspaces to a single slot, which lets you use Redis server-side computation right away. The following table shows what happens when you do (note the change in the slot column and the port value in the node column): [options = "header, autowidth"] |=============== -|key|type|slot|node +|Key|Type|Slot|Node |{persons}:e2c7dcee-b8cd-4424-883e-736ce564363e|id for hash|2399|127.0.0.1:7379 |{persons}:a9d4b3a0-50d3-4538-a2fc-f7fc2581ee56|id for hash|2399|127.0.0.1:7379 |{persons}:firstname:rand|index|2399|127.0.0.1:7379 @@ -720,14 +730,14 @@ Therefore it be useful to pin keyspaces to a single slot which allows to make us |=============== ==== -TIP: Define and pin keyspaces via `@RedisHash("{yourkeyspace}") to specific slots when using Redis cluster. +TIP: Define and pin keyspaces to specific slots by using `@RedisHash("{yourkeyspace}")` when you use Redis Cluster.
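The hash-tag behavior behind this tip can be illustrated with a small, self-contained sketch. The following code is not part of the Spring Data Redis API (the class and key names are illustrative); it computes hash slots the way the Redis Cluster specification describes them -- CRC16 of the key modulo 16384, hashing only the substring between `{` and `}` when a non-empty hash tag is present -- to show why every `{persons}` key lands in the same slot:

```java
import java.nio.charset.StandardCharsets;

public class HashSlots {

    // CRC16-CCITT (XMODEM): the checksum Redis Cluster uses for key-to-slot mapping
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? (crc << 1) ^ 0x1021 : crc << 1;
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // When the key contains a non-empty {hash tag}, only the tag is hashed
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // Both keys share the {persons} tag, so only "persons" is hashed for each
        int entitySlot = slot("{persons}:e2c7dcee-b8cd-4424-883e-736ce564363e");
        int indexSlot = slot("{persons}:firstname:rand");
        System.out.println(entitySlot == indexSlot); // true
    }
}
```

Without the braces, the whole key is hashed, so each entity and index key may land in a different slot, which is the situation shown in the first table.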
[[redis.repositories.cdi-integration]] -== CDI integration +== CDI Integration -Instances of the repository interfaces are usually created by a container, which Spring is the most natural choice when working with Spring Data. There's sophisticated support to easily set up Spring to create bean instances. Spring Data Redis ships with a custom CDI extension that allows using the repository abstraction in CDI environments. The extension is part of the JAR so all you need to do to activate it is dropping the Spring Data Redis JAR into your classpath. +Instances of the repository interfaces are usually created by a container, for which Spring is the most natural choice when working with Spring Data. Spring offers sophisticated support for creating bean instances. Spring Data Redis ships with a custom CDI extension that lets you use the repository abstraction in CDI environments. The extension is part of the JAR, so, to activate it, drop the Spring Data Redis JAR into your classpath. -You can now set up the infrastructure by implementing a CDI Producer for the `RedisConnectionFactory` and `RedisOperations`: +You can then set up the infrastructure by implementing a CDI Producer for the `RedisConnectionFactory` and `RedisOperations`, as shown in the following example: [source, java] ---- @@ -764,9 +774,9 @@ class RedisOperationsProducer { } ---- -The necessary setup can vary depending on the JavaEE environment you run in. +The necessary setup can vary, depending on your JavaEE environment. -The Spring Data Redis CDI extension will pick up all Repositories available as CDI beans and create a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container.
Thus obtaining an instance of a Spring Data repository is a matter of declaring an `@Injected` property:
+The Spring Data Redis CDI extension picks up all available repositories as CDI beans and creates a proxy for a Spring Data repository whenever a bean of a repository type is requested by the container. Thus, obtaining an instance of a Spring Data repository is a matter of declaring an `@Inject`-ed property, as shown in the following example:

[source, java]
----
@@ -781,7 +791,4 @@ class RepositoryClient {

}
----

-A Redis Repository requires `RedisKeyValueAdapter` and `RedisKeyValueTemplate` instances. These beans are created and managed by the Spring Data CDI extension if no provided beans are found. You can however supply your own beans to configure the specific properties of `RedisKeyValueAdapter` and `RedisKeyValueTemplate`.
-
-
-
+A Redis repository requires `RedisKeyValueAdapter` and `RedisKeyValueTemplate` instances. These beans are created and managed by the Spring Data CDI extension if no provided beans are found. You can, however, supply your own beans to configure the specific properties of `RedisKeyValueAdapter` and `RedisKeyValueTemplate`.
diff --git a/src/main/asciidoc/reference/redis-scripting.adoc b/src/main/asciidoc/reference/redis-scripting.adoc
index 6b6670afb1..f17a0a75d8 100644
--- a/src/main/asciidoc/reference/redis-scripting.adoc
+++ b/src/main/asciidoc/reference/redis-scripting.adoc
@@ -1,13 +1,13 @@
[[scripting]]
= Redis Scripting

-Redis versions 2.6 and higher provide support for execution of Lua scripts through the http://redis.io/commands/eval[eval] and http://redis.io/commands/evalsha[evalsha] commands. Spring Data Redis provides a high-level abstraction for script execution that handles serialization and automatically makes use of the Redis script cache.
+Redis versions 2.6 and higher provide support for execution of Lua scripts through the http://redis.io/commands/eval[eval] and http://redis.io/commands/evalsha[evalsha] commands.
Spring Data Redis provides a high-level abstraction for script execution that handles serialization and automatically uses the Redis script cache.

-Scripts can be run through the `execute` methods of `RedisTemplate` and `ReactiveRedisTemplate`. Both use a configurable `ScriptExecutor` / `ReactiveScriptExecutor` to run the provided script. By default, the `ScriptExecutor` takes care of serializing the provided keys and arguments and deserializing the script result. This is done via the key and value serializers of the template. There is an additional overload that allows you to pass custom serializers for the script arguments and result.
+Scripts can be run by calling the `execute` methods of `RedisTemplate` and `ReactiveRedisTemplate`. Both use a configurable `ScriptExecutor` (or `ReactiveScriptExecutor`) to run the provided script. By default, the `ScriptExecutor` (or `ReactiveScriptExecutor`) takes care of serializing the provided keys and arguments and deserializing the script result. This is done through the key and value serializers of the template. There is an additional overload that lets you pass custom serializers for the script arguments and the result.

The default `ScriptExecutor` optimizes performance by retrieving the SHA1 of the script and attempting first to run `evalsha`, falling back to `eval` if the script is not yet present in the Redis script cache.

-Here's an example that executes a common "check-and-set" scenario using a Lua script. This is an ideal use case for a Redis script, as it requires that we execute a set of commands atomically and the behavior of one command is influenced by the result of another.
+The following example runs a common "`check-and-set`" scenario by using a Lua script. This is an ideal use case for a Redis script, as it requires running a set of commands atomically, and the behavior of one command is influenced by the result of another.
[source,java]
----
@@ -43,10 +43,10 @@ end
return false
----

-The code above configures a `RedisScript` pointing to a file called `checkandset.lua`, which is expected to return a boolean value. The script `resultType` should be one of `Long`, `Boolean`, `List`, or deserialized value type. It can also be `null` if the script returns a throw-away status (i.e "OK"). It is ideal to configure a single instance of `DefaultRedisScript` in your application context to avoid re-calculation of the script's SHA1 on every script execution.
+The preceding code configures a `RedisScript` pointing to a file called `checkandset.lua`, which is expected to return a boolean value. The script `resultType` should be one of `Long`, `Boolean`, `List`, or a deserialized value type. It can also be `null` if the script returns a throw-away status (specifically, `OK`).

-The checkAndSet method above then executes th
-Scripts can be executed within a `SessionCallback` as part of a transaction or pipeline. See <> and <> for more information.
+TIP: It is ideal to configure a single instance of `DefaultRedisScript` in your application context to avoid re-calculation of the script's SHA1 on every script execution.

-The scripting support provided by Spring Data Redis also allows you to schedule Redis scripts for periodic execution using the Spring Task and Scheduler abstractions. See the `Spring Framework` documentation for more details.
+The `checkAndSet` method above then runs the script. Scripts can be run within a `SessionCallback` as part of a transaction or pipeline. See "`<>`" and "`<>`" for more information.

+The scripting support provided by Spring Data Redis also lets you schedule Redis scripts for periodic execution by using the Spring Task and Scheduler abstractions. See the http://projects.spring.io/spring-framework/[Spring Framework] documentation for more details.
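The `evalsha`-with-fallback optimization described above starts from the script's SHA1 digest, which can be computed once and cached. The following plain-Java sketch shows only that digest step; the `ScriptDigest` class is illustrative and is not part of the Spring Data Redis API (the real lookup and `NOSCRIPT` fallback live inside the default `ScriptExecutor`):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: the SHA1 hex digest a script executor can cache and
// pass to EVALSHA, falling back to EVAL when Redis replies with NOSCRIPT.
class ScriptDigest {

    static String sha1Hex(String script) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(script.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) {
                hex.append(String.format("%02x", b)); // two lowercase hex digits per byte
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-1 not available", e);
        }
    }
}
```

Computing the digest once per script, rather than on every execution, is also the reason the TIP above recommends a single shared `DefaultRedisScript` instance.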
diff --git a/src/main/asciidoc/reference/redis-transactions.adoc b/src/main/asciidoc/reference/redis-transactions.adoc index d4dc995411..024b7d4af1 100644 --- a/src/main/asciidoc/reference/redis-transactions.adoc +++ b/src/main/asciidoc/reference/redis-transactions.adoc @@ -1,9 +1,9 @@ [[tx]] = Redis Transactions -Redis provides support for http://redis.io/topics/transactions[transactions] through the `multi`, `exec`, and `discard` commands. These operations are available on `RedisTemplate`, however `RedisTemplate` is not guaranteed to execute all operations in the transaction using the same connection. +Redis provides support for http://redis.io/topics/transactions[transactions] through the `multi`, `exec`, and `discard` commands. These operations are available on `RedisTemplate`. However, `RedisTemplate` is not guaranteed to execute all operations in the transaction with the same connection. -Spring Data Redis provides the `SessionCallback` interface for use when multiple operations need to be performed with the same `connection`, as when using Redis transactions. For example: +Spring Data Redis provides the `SessionCallback` interface for use when multiple operations need to be performed with the same `connection`, such as when using Redis transactions. The following example uses the `multi` method: [source,java] ---- @@ -13,21 +13,23 @@ List txResults = redisTemplate.execute(new SessionCallback> operations.multi(); operations.opsForSet().add("key", "value1"); - // This will contain the results of all ops in the transaction + // This will contain the results of all operations in the transaction return operations.exec(); } }); System.out.println("Number of items added to set: " + txResults.get(0)); ---- -`RedisTemplate` will use its value, hash key, and hash value serializers to deserialize all results of `exec` before returning. There is an additional `exec` method that allows you to pass a custom serializer for transaction results. 
+`RedisTemplate` uses its value, hash key, and hash value serializers to deserialize all results of `exec` before returning. There is an additional `exec` method that lets you pass a custom serializer for transaction results.

-NOTE: An important change has been made to the `exec` methods of `RedisConnection` and `RedisTemplate` in version 1.1. Previously these methods returned the results of transactions directly from the connectors. This means that the data types often differed from those returned from the methods of `RedisConnection`. For example, `zAdd` returns a boolean indicating that the element has been added to the sorted set. Most connectors return this value as a long and Spring Data Redis performs the conversion. Another common difference is that most connectors return a status reply (usually the String "OK") for operations like `set`. These replies are typically discarded by Spring Data Redis. Prior to 1.1, these conversions were not performed on the results of `exec`. Also, results were not deserialized in `RedisTemplate`, so they often included raw byte arrays. If this change breaks your application, you can set `convertPipelineAndTxResults` to false on your `RedisConnectionFactory` to disable this behavior.
+include::version-note.adoc[]

[[tx.spring]]
== @Transactional Support

-Transaction Support is disabled by default and has to be explicitly enabled for each `RedisTemplate` in use by setting `setEnableTransactionSupport(true)`. This will force binding the `RedisConnection` in use to the current `Thread` triggering `MULTI`. If the transaction finishes without errors, `EXEC` is called, otherwise `DISCARD`. Once in `MULTI`, `RedisConnection` would queue write operations, all `readonly` operations, such as `KEYS` are piped to a fresh (non thread bound) `RedisConnection`.
+By default, transaction support is disabled and has to be explicitly enabled for each `RedisTemplate` in use by setting `setEnableTransactionSupport(true)`.
Doing so forces binding the current `RedisConnection` to the current `Thread` that is triggering `MULTI`. If the transaction finishes without errors, `EXEC` is called. Otherwise `DISCARD` is called. Once in `MULTI`, `RedisConnection` queues write operations. All `readonly` operations, such as `KEYS`, are piped to a fresh (non-thread-bound) `RedisConnection`. + +The following example shows how to configure transaction management: .Configuration enabling Transaction Management ==== @@ -62,21 +64,23 @@ public class RedisTxContextConfiguration { } ---- <1> Configures a Spring Context to enable http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#transaction-declarative[declarative transaction management]. -<2> Configures `RedisTemplate` to participate in transactions by binding connections to the current Thread. -<3> Transaction management requires a `PlatformTransactionManager`. Spring Data Redis does not ship with a `PlatformTransactionManager` implementation. Assuming your application uses JDBC, we can participate in transactions using existing transaction managers. +<2> Configures `RedisTemplate` to participate in transactions by binding connections to the current thread. +<3> Transaction management requires a `PlatformTransactionManager`. Spring Data Redis does not ship with a `PlatformTransactionManager` implementation. Assuming your application uses JDBC, Spring Data Redis can participate in transactions by using existing transaction managers. 
==== -.Usage Constrainsts +The following examples each demonstrate a usage constraint: + +.Usage Constraints ==== [source,java] ---- -// executed on thread bound connection -template.opsForValue().set("foo", "bar"); +// must be performed on thread-bound connection +template.opsForValue().set("thing1", "thing2"); -// read operation executed on a free (not tx-aware) -connection template.keys("*"); +// read operation must be executed on a free (not transaction-aware) connection +template.keys("*"); -// returns null as values set within transaction are not visible -template.opsForValue().get("foo"); +// returns null as values set within a transaction are not visible +template.opsForValue().get("thing1"); ---- ==== diff --git a/src/main/asciidoc/reference/redis.adoc b/src/main/asciidoc/reference/redis.adoc index 28e15f48a1..c55618280d 100644 --- a/src/main/asciidoc/reference/redis.adoc +++ b/src/main/asciidoc/reference/redis.adoc @@ -2,7 +2,7 @@ = Redis support :referenceDir: . -One of the key value stores supported by Spring Data is http://redis.io[Redis]. To quote the project home page: +One of the key-value stores supported by Spring Data is http://redis.io[Redis]. To quote the Redis project home page: [quote] Redis is an advanced key-value store. It is similar to memcached but the dataset is not volatile, and values can be strings, exactly like in memcached, but also lists, sets, and ordered sets. All this data types can be manipulated with atomic operations to push/pop elements, add/remove elements, perform server side union, intersection, difference between sets, and so forth. Redis supports different kind of sorting abilities. @@ -12,39 +12,37 @@ Spring Data Redis provides easy configuration and access to Redis from Spring ap [[redis:requirements]] == Redis Requirements -Spring Redis requires Redis 2.6 or above and Java SE 8.0 or above. 
In terms of language bindings (or connectors), Spring Redis integrates with http://github.com/xetorthio/jedis[Jedis] and http://github.com/lettuce-io/lettuce-core[Lettuce], two popular open source Java libraries for Redis. +Spring Redis requires Redis 2.6 or above and Java SE 8.0 or above. In terms of language bindings (or connectors), Spring Redis integrates with http://github.com/xetorthio/jedis[Jedis] and http://github.com/lettuce-io/lettuce-core[Lettuce], two popular open-source Java libraries for Redis. [[redis:architecture]] -== Redis Support High Level View +== Redis Support High-level View -The Redis support provides several components (in order of dependencies): - -For most tasks, the high-level abstractions and support services are the best choice. Note that at any point, one can move between layers - for example, it's very easy to get a hold of the low level connection (or even the native library) to communicate directly with Redis. +The Redis support provides several components. For most tasks, the high-level abstractions and support services are the best choice. Note that, at any point, you can move between layers. For example, you can get a low-level connection (or even the native library) to communicate directly with Redis. [[redis:connectors]] == Connecting to Redis -One of the first tasks when using Redis and Spring is to connect to the store through the IoC container. To do that, a Java connector (or binding) is required. No matter the library one chooses, there is only one set of Spring Data Redis API that one needs to use that behaves consistently across all connectors, namely the `org.springframework.data.redis.connection` package and its `RedisConnection` and `RedisConnectionFactory` interfaces for working with and retrieving active `connections` to Redis. +One of the first tasks when using Redis and Spring is to connect to the store through the IoC container. To do that, a Java connector (or binding) is required. 
No matter the library you choose, you need to use only one set of Spring Data Redis APIs (which behaves consistently across all connectors): the `org.springframework.data.redis.connection` package and its `RedisConnection` and `RedisConnectionFactory` interfaces for working with and retrieving active connections to Redis. [[redis:connectors:connection]] === RedisConnection and RedisConnectionFactory -`RedisConnection` provides the building block for Redis communication as it handles the communication with the Redis back-end. It also automatically translates the underlying connecting library exceptions to Spring's consistent DAO exception http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#dao-exceptions[hierarchy] so one can switch the connectors without any code changes as the operation semantics remain the same. +`RedisConnection` provides the core building block for Redis communication, as it handles the communication with the Redis back end. It also automatically translates the underlying connecting library exceptions to Spring's consistent DAO exception http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#dao-exceptions[hierarchy] so that you can switch the connectors without any code changes, as the operation semantics remain the same. -NOTE: For the corner cases where the native library API is required, `RedisConnection` provides a dedicated method `getNativeConnection` which returns the raw, underlying object used for communication. +NOTE: For the corner cases where the native library API is required, `RedisConnection` provides a dedicated method (`getNativeConnection`) that returns the raw, underlying object used for communication. -Active `RedisConnection` s are created through `RedisConnectionFactory`. In addition, the factories act as `PersistenceExceptionTranslator` s, meaning once declared, they allow one to do transparent exception translation. 
For example, exception translation through the use of the `@Repository` annotation and AOP. For more information see the dedicated http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#orm-exception-translation[section] in Spring Framework documentation.
+Active `RedisConnection` objects are created through `RedisConnectionFactory`. In addition, the factories act as `PersistenceExceptionTranslator` objects, meaning that, once declared, they let you do transparent exception translation. For example, you can do exception translation through the use of the `@Repository` annotation and AOP. For more information, see the dedicated http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#orm-exception-translation[section] in the Spring Framework documentation.

-NOTE: Depending on the underlying configuration, the factory can return a new connection or an existing connection (in case a pool or shared native connection is used).
+NOTE: Depending on the underlying configuration, the factory can return a new connection or an existing connection (when a pool or shared native connection is used).

The easiest way to work with a `RedisConnectionFactory` is to configure the appropriate connector through the IoC container and inject it into the using class.

IMPORTANT: Unfortunately, currently, not all connectors support all Redis features. When invoking a method on the Connection API that is unsupported by the underlying library, an `UnsupportedOperationException` is thrown.

[[redis:connectors:lettuce]]
-=== Configuring Lettuce connector
+=== Configuring the Lettuce Connector

-https://github.com/lettuce-io/lettuce-core[Lettuce] is a http://netty.io/[netty]-based open-source connector supported by Spring Data Redis through the `org.springframework.data.redis.connection.lettuce` package.
+https://github.com/lettuce-io/lettuce-core[Lettuce] is a http://netty.io/[Netty]-based open-source connector supported by Spring Data Redis through the `org.springframework.data.redis.connection.lettuce` package. The following example shows how to create a new Lettuce connection factory: [source,java] ---- @@ -59,9 +57,9 @@ class AppConfig { } ---- -There are also a few Lettuce-specific connection parameters that can be tweaked. By default, all `LettuceConnection` s created by the `LettuceConnectionFactory` share the same thread-safe native connection for all non-blocking and non-transactional operations. Set `shareNativeConnection` to false to use a dedicated connection each time. `LettuceConnectionFactory` can also be configured with a `LettucePool` to use for pooling blocking and transactional connections, or all connections if `shareNativeConnection` is set to false. +There are also a few Lettuce-specific connection parameters that can be tweaked. By default, all `LettuceConnection` instances created by the `LettuceConnectionFactory` share the same thread-safe native connection for all non-blocking and non-transactional operations. To use a dedicated connection each time, set `shareNativeConnection` to `false`. `LettuceConnectionFactory` can also be configured to use a `LettucePool` for pooling blocking and transactional connections or all connections if `shareNativeConnection` is set to `false`. -Lettuce integrates with netty's http://netty.io/wiki/native-transports.html[native transports] allowing to use unix domain sockets to communicate with Redis. Make sure to include the appropriate native transport dependencies that match your runtime environment. +Lettuce integrates with Netty's http://netty.io/wiki/native-transports.html[native transports], letting you use Unix domain sockets to communicate with Redis. Make sure to include the appropriate native transport dependencies that match your runtime environment. 
The following example shows how to create a Lettuce connection factory for a Unix domain socket at `/var/run/redis.sock`:

[source,java]
----
@@ -76,10 +74,10 @@ class AppConfig {
}
----

-NOTE: Netty currently supports epoll (Linux) and kqueue (BSD/macOS) interfaces for OS-native transport.
+NOTE: Netty currently supports the epoll (Linux) and kqueue (BSD/macOS) interfaces for OS-native transport.

[[redis:connectors:jedis]]
-=== Configuring Jedis connector
+=== Configuring the Jedis Connector

http://github.com/xetorthio/jedis[Jedis] is a community-driven connector supported by the Spring Data Redis module through the `org.springframework.data.redis.connection.jedis` package. In its simplest form, the Jedis configuration looks as follow:

@@ -95,7 +93,7 @@ class AppConfig {
}
----

-For production use however, one might want to tweak the settings such as the host or password:
+For production use, however, you might want to tweak settings such as the host or password, as shown in the following example:

[source,java]
----
@@ -112,10 +110,9 @@ class RedisConfiguration {
----

[[redis:write-to-master-read-from-slave]]
-=== Write to Master read from Slave
+=== Write to Master, Read from Slave

-Redis Master/Slave setup, without automatic failover (for automatic failover see: <>), not only allows data to be savely stored at more nodes. It also allows, using <>, reading data from slaves while pushing writes to the master.
-Set the read/write strategy to be used via `LettuceClientConfiguration`.
+The Redis Master/Slave setup -- without automatic failover (for automatic failover, see <>) -- not only allows data to be safely stored on more nodes. It also allows, by using <>, reading data from slaves while pushing writes to the master.
You can set the read/write strategy to be used by using `LettuceClientConfiguration`, as shown in the following example: [source,java] ---- @@ -136,12 +133,12 @@ class WriteToMasterReadFromSlaveConfiguration { } ---- -TIP: Use `RedisStaticMasterSlaveConfiguration` instead of `RedisStandaloneConfiguration` for environments reporting non public addresses via the `INFO` command (e.g. when using AWS). +TIP: For environments reporting non-public addresses through the `INFO` command (for example, when using AWS), use `RedisStaticMasterSlaveConfiguration` instead of `RedisStandaloneConfiguration`. [[redis:sentinel]] == Redis Sentinel Support -For dealing with high available Redis there is support for http://redis.io/topics/sentinel[Redis Sentinel] using `RedisSentinelConfiguration`. +For dealing with high-availability Redis, Spring Data Redis has support for http://redis.io/topics/sentinel[Redis Sentinel], using `RedisSentinelConfiguration`, as shown in the following example: [source,java] ---- @@ -172,21 +169,21 @@ public RedisConnectionFactory lettuceConnectionFactory() { [TIP] ==== -`RedisSentinelConfiguration` can also be defined via `PropertySource`. +`RedisSentinelConfiguration` can also be defined with a `PropertySource`, which lets you set the following properties: .Configuration Properties -- `spring.redis.sentinel.master`: name of the master node. -- `spring.redis.sentinel.nodes`: Comma delimited list of host:port pairs. +* `spring.redis.sentinel.master`: name of the master node. +* `spring.redis.sentinel.nodes`: Comma delimited list of host:port pairs. ==== -Sometimes direct interaction with the one of the Sentinels is required. Using `RedisConnectionFactory.getSentinelConnection()` or `RedisConnection.getSentinelCommands()` gives you access to the first active Sentinel configured. +Sometimes, direct interaction with one of the Sentinels is required. 
Using `RedisConnectionFactory.getSentinelConnection()` or `RedisConnection.getSentinelCommands()` gives you access to the first active Sentinel configured. [[redis:template]] == Working with Objects through RedisTemplate -Most users are likely to use `RedisTemplate` and its corresponding package `org.springframework.data.redis.core` - the template is in fact the central class of the Redis module due to its rich feature set. The template offers a high-level abstraction for Redis interactions. While `RedisConnection` offers low level methods that accept and return binary values (`byte` arrays), the template takes care of serialization and connection management, freeing the user from dealing with such details. +Most users are likely to use `RedisTemplate` and its corresponding package, `org.springframework.data.redis.core`. The template is, in fact, the central class of the Redis module, due to its rich feature set. The template offers a high-level abstraction for Redis interactions. While `RedisConnection` offers low-level methods that accept and return binary values (`byte` arrays), the template takes care of serialization and connection management, freeing the user from dealing with such details. 
-Moreover, the template provides operations views (following the grouping from Redis command http://redis.io/commands[reference]) that offer rich, generified interfaces for working against a certain type or certain key (through the `KeyBound` interfaces) as described below: +Moreover, the template provides operations views (following the grouping from the Redis command http://redis.io/commands[reference]) that offer rich, generified interfaces for working against a certain type or certain key (through the `KeyBound` interfaces) as described in the following table: .Operational views [width="80%",cols="<1,<2",options="header"] @@ -196,57 +193,57 @@ Moreover, the template provides operations views (following the grouping from Re 2+^|_Key Type Operations_ -|GeoOperations -|Redis geospatial operations like `GEOADD`, `GEORADIUS`,...) +|`GeoOperations` +|Redis geospatial operations, such as `GEOADD`, `GEORADIUS`,... -|HashOperations +|`HashOperations` |Redis hash operations -|HyperLogLogOperations -|Redis HyperLogLog operations like (`PFADD`, `PFCOUNT`,...) +|`HyperLogLogOperations` +|Redis HyperLogLog operations, such as `PFADD`, `PFCOUNT`,... -|ListOperations +|`ListOperations` |Redis list operations -|SetOperations +|`SetOperations` |Redis set operations -|ValueOperations +|`ValueOperations` |Redis string (or value) operations -|ZSetOperations +|`ZSetOperations` |Redis zset (or sorted set) operations 2+^|_Key Bound Operations_ -|BoundGeoOperations -|Redis key bound geospatial operations. 
+|`BoundGeoOperations` +|Redis key bound geospatial operations -|BoundHashOperations +|`BoundHashOperations` |Redis hash key bound operations -|BoundKeyOperations +|`BoundKeyOperations` |Redis key bound operations -|BoundListOperations +|`BoundListOperations` |Redis list key bound operations -|BoundSetOperations +|`BoundSetOperations` |Redis set key bound operations -|BoundValueOperations +|`BoundValueOperations` |Redis string (or value) key bound operations -|BoundZSetOperations +|`BoundZSetOperations` |Redis zset (or sorted set) key bound operations |==== Once configured, the template is thread-safe and can be reused across multiple instances. -Out of the box, `RedisTemplate` uses a Java-based serializer for most of its operations. This means that any object written or read by the template will be serialized/deserialized through Java. The serialization mechanism can be easily changed on the template, and the Redis module offers several implementations available in the `org.springframework.data.redis.serializer` package - see <> for more information. You can also set any of the serializers to null and use RedisTemplate with raw `byte` arrays by setting the `enableDefaultSerializer` property to false. Note that the template requires all keys to be non-null - values can be null as long as the underlying serializer accepts them; read the javadoc of each serializer for more information. +`RedisTemplate` uses a Java-based serializer for most of its operations. This means that any object written or read by the template is serialized and deserialized through Java. You can change the serialization mechanism on the template, and the Redis module offers several implementations, which are available in the `org.springframework.data.redis.serializer` package. See <> for more information. You can also set any of the serializers to null and use RedisTemplate with raw byte arrays by setting the `enableDefaultSerializer` property to `false`. 
Note that the template requires all keys to be non-null. However, values can be null as long as the underlying serializer accepts them. Read the Javadoc of each serializer for more information.

-For cases where a certain template *view* is needed, declare the view as a dependency and inject the template: the container will automatically perform the conversion eliminating the `opsFor[X]` calls:
+For cases where you need a certain template view, declare the view as a dependency and inject the template. The container automatically performs the conversion, eliminating the `opsFor[X]` calls, as shown in the following example:

[source,xml]
----
@@ -283,9 +280,9 @@ public class Example {
----

[[redis:string]]
-== String-focused convenience classes
+== String-focused Convenience Classes

-Since it's quite common for the keys and values stored in Redis to be `java.lang.String`, the Redis modules provides two extensions to `RedisConnection` and `RedisTemplate`, respectively the `StringRedisConnection` (and its `DefaultStringRedisConnection` implementation) and `StringRedisTemplate` as a convenient one-stop solution for intensive String operations.
+Since it is quite common for the keys and values stored in Redis to be `java.lang.String`, the Redis module provides two extensions to `RedisConnection` and `RedisTemplate`, respectively: the `StringRedisConnection` (and its `DefaultStringRedisConnection` implementation) and `StringRedisTemplate`, as a convenient one-stop solution for intensive `String` operations.
In addition to being bound to `String` keys, the template and the connection use the `StringRedisSerializer` underneath, which means the stored keys and values are human-readable (assuming the same encoding is used both in Redis and your code). The following listings show an example:

[source,xml]
----
@@ -315,7 +312,7 @@ public class Example {
}
----

-As with the other Spring templates, `RedisTemplate` and `StringRedisTemplate` allow the developer to talk directly to Redis through the `RedisCallback` interface. This gives complete control to the developer as it talks directly to the `RedisConnection`. Note that the callback receives an instance of `StringRedisConnection` when a `StringRedisTemplate` is used.
+As with the other Spring templates, `RedisTemplate` and `StringRedisTemplate` let you talk directly to Redis through the `RedisCallback` interface. This feature gives you complete control, as it talks directly to the `RedisConnection`. Note that the callback receives an instance of `StringRedisConnection` when a `StringRedisTemplate` is used. The following example shows how to use the `RedisCallback` interface:
-The conversion between the user (custom) types and raw data (and vice-versa) is handled in Spring Data Redis in the `org.springframework.data.redis.serializer` package.
+In Spring Data Redis, the conversion between the user (custom) types and raw data (and vice versa) is handled in the `org.springframework.data.redis.serializer` package.

-This package contains two types of serializers which as the name implies, takes care of the serialization process:
+This package contains two types of serializers that, as the name implies, take care of the serialization process:

* Two-way serializers based on ``RedisSerializer``.
-* Element readers and writers using `RedisElementReader` and ``RedisElementWriter``.
+* Element readers and writers that use `RedisElementReader` and ``RedisElementWriter``.

The main difference between these variants is that `RedisSerializer` primarily serializes to `byte[]` while readers and writers use `ByteBuffer`.

-Multiple implementations are available out of the box, two of which have been already mentioned before in this documentation:
+Multiple implementations are available (including two that have already been mentioned in this documentation):

-* `JdkSerializationRedisSerializer` which is used by default for `RedisCache` and ``RedisTemplate``.
-* the ``StringRedisSerializer``.
+* `JdkSerializationRedisSerializer`, which is used by default for `RedisCache` and `RedisTemplate`.
+* `StringRedisSerializer`.

-However one can use `OxmSerializer` for Object/XML mapping through Spring http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#oxm[OXM] support or either `Jackson2JsonRedisSerializer` or `GenericJackson2JsonRedisSerializer` for storing data in http://en.wikipedia.org/wiki/JSON[JSON] format.
+However, you can use `OxmSerializer` for Object/XML mapping (through Spring http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/data-access.html#oxm[OXM] support), `Jackson2JsonRedisSerializer`, or `GenericJackson2JsonRedisSerializer` for storing data in http://en.wikipedia.org/wiki/JSON[JSON] format.

-Do note that the storage format is not limited only to values - it can be used for keys, values or hashes without any restrictions.
+Do note that the storage format is not limited only to values. It can be used for keys, values, or hashes without any restrictions.

[WARNING]
====
-`RedisCache` and `RedisTemplate` are configured by default to use Java native serialization. Java native serialization is known for allowing remote code execution caused by payloads that exploit vulnerable libraries and classes injecting unverified bytecode. Manipulated input could lead to unwanted code execution in the application during the deserialization step. As a consequence, do not use serialization in untrusted environments. In general, we strongly recommend any other message format (e.g. JSON) instead.
+By default, `RedisCache` and `RedisTemplate` are configured to use Java native serialization. Java native serialization is known for allowing remote code execution caused by payloads that exploit vulnerable libraries and classes that inject unverified bytecode. Manipulated input could lead to unwanted code execution in the application during the deserialization step. As a consequence, do not use serialization in untrusted environments. In general, we strongly recommend any other message format (such as JSON) instead.
-If you are concerned about security vulnerabilities due to Java serialization, consider the general-purpose serialization filter mechanism at the core JVM level, originally developed for JDK 9 but backported to JDK 8, 7 and 6 in the meantime:
+If you are concerned about security vulnerabilities due to Java serialization, consider the general-purpose serialization filter mechanism at the core JVM level, originally developed for JDK 9 but backported to JDK 8, 7, and 6:

* https://blogs.oracle.com/java-platform-group/entry/incoming_filter_serialization_data_a[Filter Incoming Serialization Data].
* http://openjdk.java.net/jeps/290[JEP 290].
@@ -368,23 +365,23 @@ If you are concerned about security vulnerabilities due to Java serialization, c
[[redis.hashmappers.root]]
== Hash mapping

-Data can be stored using various data structures within Redis. You already learned about `Jackson2JsonRedisSerializer` which can convert objects
-in http://en.wikipedia.org/wiki/JSON[JSON] format. JSON can be ideally stored as value using plain keys. A more sophisticated mapping of structured objects
-can be achieved using Redis Hashes. Spring Data Redis offers various strategies for mapping data to hashes depending on the use case.
+Data can be stored by using various data structures within Redis. `Jackson2JsonRedisSerializer` can convert objects in http://en.wikipedia.org/wiki/JSON[JSON] format. Ideally, JSON can be stored as a value by using plain keys. You can achieve a more sophisticated mapping of structured objects by using Redis hashes. Spring Data Redis offers various strategies for mapping data to hashes (depending on the use case):
+
+* Direct mapping, by using `HashOperations` and a <>
+* Using <>
+* Using `HashMapper` and `HashOperations`

-1. Direct mapping using `HashOperations` and a <>
-2. Using <>
-3. Using `HashMapper` and `HashOperations`

+=== Hash Mappers

-=== Hash mappers

+Hash mappers are converters that map objects to a `Map` and back.
`HashMapper` is intended for use with Redis Hashes.

-Hash mappers are converters to map objects to a `Map` and back. `HashMapper` is intended for using with Redis Hashes.

+Multiple implementations are available:

-Multiple implementations are available out of the box:

+* `BeanUtilsHashMapper` using Spring's http://docs.spring.io/spring/docs/{springVersion}/javadoc-api/org/springframework/beans/BeanUtils.html[BeanUtils].
+* `ObjectHashMapper` using <>.
+* <> using https://github.com/FasterXML/jackson[FasterXML Jackson].

-1. `BeanUtilsHashMapper` using Spring's http://docs.spring.io/spring/docs/{springVersion}/javadoc-api/org/springframework/beans/BeanUtils.html[BeanUtils].
-2. `ObjectHashMapper` using <>.
-3. <> using https://github.com/FasterXML/jackson[FasterXML Jackson].

+The following example shows one way to implement hash mapping:

[source,java]
----
@@ -419,12 +416,14 @@ public class HashMapping {

[[redis.hashmappers.jackson2]]
=== Jackson2HashMapper

-`Jackson2HashMapper` provides Redis Hash mapping for domain objects using https://github.com/FasterXML/jackson[FasterXML Jackson].
-`Jackson2HashMapper` can map data map top-level properties as Hash field names and optionally flatten the structure.
-Simple types map to simple values. Complex types (nested objects, collections, maps) are represented as nested JSON.
+`Jackson2HashMapper` provides Redis Hash mapping for domain objects by using https://github.com/FasterXML/jackson[FasterXML Jackson].
+`Jackson2HashMapper` can map top-level properties as Hash field names and, optionally, flatten the structure.
+Simple types map to simple values. Complex types (nested objects, collections, maps, and so on) are represented as nested JSON.

Flattening creates individual hash entries for all nested properties and resolves complex types into simple types, as far as possible.
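The flattening behavior described above can be imitated over plain maps. The following toy routine (an illustration of the dotted-key idea only; `Jackson2HashMapper` itself works on Jackson's object model, not on raw maps) rewrites nested entries into `parent.child` keys:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of flattening: nested map entries become dotted keys,
// so { "address" : { "city" : "Castle Black" } } yields "address.city".
public class FlattenSketch {

    public static Map<String, Object> flatten(String prefix, Map<String, Object> source) {
        Map<String, Object> result = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            String key = prefix.isEmpty() ? entry.getKey() : prefix + "." + entry.getKey();
            Object value = entry.getValue();
            if (value instanceof Map) {
                // Recurse into nested structures, carrying the dotted prefix along
                @SuppressWarnings("unchecked")
                Map<String, Object> nested = (Map<String, Object>) value;
                result.putAll(flatten(key, nested));
            } else {
                result.put(key, value);
            }
        }
        return result;
    }
}
```

Run against a map mirroring the `Person` example, this produces the same keys as the flat-mapping table below (`firstname`, `address.city`, and so on).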
+Consider the following class and the data structure it contains: + [source,java] ---- public class Person { @@ -439,6 +438,8 @@ public class Address { } ---- +The following table shows how the data in the preceding class would appear in normal mapping: + .Normal Mapping [width="80%",cols="<1,<2",options="header"] |==== @@ -455,6 +456,8 @@ public class Address { |`{ "city" : "Castle Black", "country" : "The North" }` |==== +The following table shows how the data in the preceding class would appear in flat mapping: + .Flat Mapping [width="80%",cols="<1,<2",options="header"] |==== @@ -474,9 +477,7 @@ public class Address { |`The North` |==== -NOTE: Flattening requires all property names to not interfere with the JSON path. Using dots or brackets in map keys -or as property names is not supported using flattening. The resulting hash cannot be mapped back into an Object. - +NOTE: Flattening requires all property names to not interfere with the JSON path. Using dots or brackets in map keys or as property names is not supported when you use flattening. The resulting hash cannot be mapped back into an Object. :leveloffset: 2 include::{referenceDir}/redis-messaging.adoc[] @@ -491,14 +492,9 @@ include::{referenceDir}/redis-scripting.adoc[] [[redis:support]] == Support Classes -Package `org.springframework.data.redis.support` offers various reusable components that rely on Redis as a backing store. Currently the package contains various JDK-based -interface implementations on top of Redis such as http://download.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/package-summary.html[atomic] counters and JDK -http://download.oracle.com/javase/8/docs/api/java/util/Collection.html[Collections]. +Package `org.springframework.data.redis.support` offers various reusable components that rely on Redis as a backing store. 
Currently, the package contains various JDK-based interface implementations on top of Redis, such as http://download.oracle.com/javase/8/docs/api/java/util/concurrent/atomic/package-summary.html[atomic] counters and JDK http://download.oracle.com/javase/8/docs/api/java/util/Collection.html[Collections].

-The atomic counters make it easy to wrap Redis key incrementation while the collections allow easy management of Redis keys with minimal storage exposure or API
-leakage: in particular the `RedisSet` and `RedisZSet` interfaces offer easy access to the *set* operations supported by Redis such as `intersection` and `union`
-while `RedisList` implements the `List`, `Queue` and `Deque` contracts (and their equivalent blocking siblings) on top of Redis, exposing the storage as a
-_FIFO (First-In-First-Out)_, _LIFO (Last-In-First-Out)_ or _capped collection_ with minimal configuration:
+The atomic counters make it easy to wrap Redis key incrementation, while the collections allow easy management of Redis keys with minimal storage exposure or API leakage. In particular, the `RedisSet` and `RedisZSet` interfaces offer easy access to the set operations supported by Redis, such as `intersection` and `union`. `RedisList` implements the `List`, `Queue`, and `Deque` contracts (and their equivalent blocking siblings) on top of Redis, exposing the storage as a FIFO (First-In-First-Out), LIFO (Last-In-First-Out), or capped collection with minimal configuration.
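Because these classes implement the standard JDK contracts, consuming code never needs to name Redis at all. The following sketch (the `TaskQueue` class is a hypothetical consumer invented for this example) depends only on `Deque`; an in-memory `ArrayDeque` is used here where a Redis-backed deque could be injected instead:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical consumer: only the JDK Deque contract appears here, so the
// backing store (Redis-backed or in-memory) can be swapped without changes.
public class TaskQueue {

    private final Deque<String> queue;

    public TaskQueue(Deque<String> queue) {
        this.queue = queue;
    }

    public void submit(String task) {
        queue.addLast(task);      // enqueue at the tail (FIFO)
    }

    public String next() {
        return queue.pollFirst(); // dequeue from the head (FIFO); null when empty
    }
}
```

In production, a Redis-backed `Deque` implementation would be injected; in tests, a plain `ArrayDeque` works unchanged, which is exactly the decoupling and testability benefit this section describes.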
The following example shows the configuration for a bean that uses a `RedisList`: [source,xml] ---- @@ -516,6 +512,8 @@ _FIFO (First-In-First-Out)_, _LIFO (Last-In-First-Out)_ or _capped collection_ w ---- +The following example shows a Java configuration example for a deque: + [source,java] ---- public class AnotherExample { @@ -529,14 +527,14 @@ public class AnotherExample { } ---- -As shown in the example above, the consuming code is decoupled from the actual storage implementation - in fact there is no indication that Redis is used underneath. This makes moving from development to production environments transparent and highly increases testability (the Redis implementation can just as well be replaced with an in-memory one). +As shown in the preceding example, the consuming code is decoupled from the actual storage implementation. In fact, there is no indication that Redis is used underneath. This makes moving from development to production environments transparent and highly increases testability (the Redis implementation can be replaced with an in-memory one). [[redis:support:cache-abstraction]] -=== Support for Spring Cache Abstraction +=== Support for the Spring Cache Abstraction NOTE: Changed in 2.0 -Spring Redis provides an implementation for Spring http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/integration.html#cache[cache abstraction] through the `org.springframework.data.redis.cache` package. To use Redis as a backing implementation, simply add `RedisCacheManager` to your configuration: +Spring Redis provides an implementation for the Spring http://docs.spring.io/spring/docs/{springVersion}/spring-framework-reference/integration.html#cache[cache abstraction] through the `org.springframework.data.redis.cache` package. 
To use Redis as a backing implementation, add `RedisCacheManager` to your configuration, as follows: [source,java] ---- @@ -546,7 +544,7 @@ public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) } ---- -`RedisCacheManager` behavior can be configured via `RedisCacheManagerBuilder` allowing to set the default `RedisCacheConfiguration`, transaction behaviour and predefined caches. +`RedisCacheManager` behavior can be configured with `RedisCacheManagerBuilder`, letting you set the default `RedisCacheConfiguration`, transaction behavior, and predefined caches. [source,java] ---- @@ -557,8 +555,9 @@ RedisCacheManager cm = RedisCacheManager.builder(connectionFactory) .build(); ---- -Behavior of `RedisCache` created via `RedisCacheManager` is defined via `RedisCacheConfiguration`. The configuration allows to set key expiration times, prefixes and ``RedisSerializer``s for converting to and from the binary storage format. -As shown above `RedisCacheManager` allows definition of configurations on a per cache base. +As shown in the preceding example, `RedisCacheManager` allows definition of configurations on a per-cache basis. + +The behavior of `RedisCache` created with `RedisCacheManager` is defined with `RedisCacheConfiguration`. The configuration lets you set key expiration times, prefixes, and ``RedisSerializer`` implementations for converting to and from the binary storage format, as shown in the following example: [source,java] ---- @@ -567,8 +566,7 @@ RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig() .disableCachingNullValues(); ---- -`RedisCacheManager` defaults to a lock-free `RedisCacheWriter` for reading & writing binary values. Lock-free caching improves throughput. The lack of entry locking can lead to overlapping, non atomic commands, for `putIfAbsent` and `clean` methods as those require multiple commands sent to Redis. 
-The locking counterpart prevents command overlap by setting an explicit lock key and checking against presence of this key, which leads to additional requests and potential command wait times.
+`RedisCacheManager` defaults to a lock-free `RedisCacheWriter` for reading and writing binary values. Lock-free caching improves throughput. The lack of entry locking can lead to overlapping, non-atomic commands for the `putIfAbsent` and `clean` methods, as those require multiple commands to be sent to Redis. The locking counterpart prevents command overlap by setting an explicit lock key and checking for the presence of this key, which leads to additional requests and potential command wait times.

It is possible to opt in to the locking behavior as follows:

[source,java]
----
@@ -579,51 +577,59 @@ RedisCacheManager cm = RedisCacheManager.build(RedisCacheWriter.lockingRedisCach
...
----

-By default any `key` for a cache entry gets prefixed with the actual cache name followed by 2 colons.
+By default, any `key` for a cache entry gets prefixed with the actual cache name followed by two colons.
-This behavior can be changed to a static as well as a computed prefix.
+You can change this behavior to use either a static or a computed prefix.
+The following example shows how to set a static prefix:
+
[source,java]
----
// static key prefix
RedisCacheConfiguration.defaultCacheConfig().prefixKeysWith("( ͡° ᴥ ͡°)");
+----
+
+The following example shows how to set a computed prefix:
+
+[source,java]
+----
// computed key prefix
RedisCacheConfiguration.defaultCacheConfig().computePrefixWith(cacheName -> "¯\_(ツ)_/¯" + cacheName);
----

-.RedisCacheManager defaults
+The following table lists the default settings for `RedisCacheManager`:
+
+.`RedisCacheManager` defaults
[width="80%",cols="<1,<2",options="header"]
|====
|Setting
|Value

|Cache Writer
-|non locking
+|Non-locking

|Cache Configuration
|`RedisCacheConfiguration#defaultConfiguration`

|Initial Caches
-|none
+|None

-|Trasaction Aware
-|no
+|Transaction Aware
+|No
|====

+The following table lists the default settings for `RedisCacheConfiguration`:
+
-.RedisCacheConfiguration defaults
+.`RedisCacheConfiguration` defaults
[width="80%",cols="<1,<2",options="header"]
|====
|Key Expiration
-|none
+|None

|Cache `null`
-|yes
+|Yes

|Prefix Keys
-|yes
+|Yes

|Default Prefix
-|the actual cache name
+|The actual cache name

|Key Serializer
|`StringRedisSerializer`
@@ -634,4 +640,3 @@ RedisCacheConfiguration.defaultCacheConfig().computePrefixWith(cacheName -> "¯\
|Conversion Service
|`DefaultFormattingConversionService` with default cache key converters
|====
-
diff --git a/src/main/asciidoc/reference/version-note.adoc b/src/main/asciidoc/reference/version-note.adoc
new file mode 100644
index 0000000000..b0490ba864
--- /dev/null
+++ b/src/main/asciidoc/reference/version-note.adoc
@@ -0,0 +1 @@
+NOTE: As of version 1.1, an important change has been made to the `exec` methods of `RedisConnection` and `RedisTemplate`. Previously, these methods returned the results of transactions directly from the connectors. This meant that the data types often differed from those returned from the methods of `RedisConnection`. For example, `zAdd` returns a boolean indicating whether the element has been added to the sorted set.
Most connectors return this value as a long, and Spring Data Redis performs the conversion. Another common difference is that most connectors return a status reply (usually the string, `OK`) for operations such as `set`. These replies are typically discarded by Spring Data Redis. Prior to 1.1, these conversions were not performed on the results of `exec`. Also, results were not deserialized in `RedisTemplate`, so they often included raw byte arrays. If this change breaks your application, set `convertPipelineAndTxResults` to `false` on your `RedisConnectionFactory` to disable this behavior.
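The conversions this note describes can be sketched as follows. This is an illustration of the described behavior only, with method names invented for the example (it is not Spring Data Redis code): a raw `long` reply of `1` from `zAdd` surfaces as a `true` boolean, and a status reply of `OK` is discarded:

```java
// Illustrative sketch of the reply conversions described above; the class and
// method names here are assumptions for the example, not Spring Data Redis API.
public class ReplyConversionSketch {

    // Connectors report zAdd as a long (1 = element added, 0 = already present);
    // the converted result is exposed as a boolean.
    public static boolean zAddReplyToBoolean(long rawReply) {
        return rawReply == 1L;
    }

    // Status replies such as "OK" carry no data and are discarded (null);
    // anything else is passed through unchanged.
    public static Object convertStatusReply(String status) {
        return "OK".equals(status) ? null : status;
    }
}
```

Setting `convertPipelineAndTxResults` to `false`, as described above, corresponds to skipping these conversions and receiving the raw connector results instead.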