Description
Arun Vijayraghavan Today at 11:20 AM
Hello team, need some info. In elixir, the proposal is that the bucket name will be dynamically generated. What that means for us is that the bucket name could differ per environment: for example, foo could be foo_xyz123 in dev,
foo_abc234 in test, and foo_lol777 in prod.
My question is: what are the areas in Spring Data where a bucket name is used that would require a developer to make code changes if the bucket name was hard-coded?
As an example, SDK bootstrapping requires a bucket name, and that would need to be changed (see the sketch after this message).
We know today that the connection string, username, and password change, but other than that, what would the impact of this proposed change be on developers' code?
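For reference, a minimal sketch of that SDK bootstrapping step (Couchbase Java SDK 3.x; the connection string, credentials, and bucket name are just placeholders taken from the examples in this thread):

```java
import java.time.Duration;

import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;

public class SdkBootstrap {

    public static void main(String[] args) {
        // Connection string, username, and password already vary per environment today.
        Cluster cluster = Cluster.connect("couchbase://localhost", "Administrator", "password");

        // The bucket name is the extra piece that would also vary once names are generated
        // dynamically (foo_xyz123 in dev, foo_abc234 in test, foo_lol777 in prod).
        Bucket bucket = cluster.bucket("foo_xyz123");
        bucket.waitUntilReady(Duration.ofSeconds(10));
    }
}
```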
Michael Reiche 1 hour ago
Are N1QL joins possible across scopes? [edit: yes]. Across buckets? [edit: yes]
Michael Reiche 1 hour ago
The integration tests use a generated bucket name, so we have lots of experience with this.
The obvious one is that developers cannot hard-code the bucket name; for instance, in an @Query they should be using #{#n1ql.bucket} instead of hard-coding the bucket name. They should be using that anyway.
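As an illustration, a minimal sketch with a hypothetical Airline entity and repository (not code from this project):

```java
import java.util.List;

import org.springframework.data.couchbase.repository.CouchbaseRepository;
import org.springframework.data.couchbase.repository.Query;

public interface AirlineRepository extends CouchbaseRepository<Airline, String> {

    // Breaks when the bucket name is generated per environment: the dev bucket is hard-coded.
    @Query("SELECT META(a).id AS __id, META(a).cas AS __cas, a.* FROM `foo_xyz123` a WHERE a.country = $1")
    List<Airline> findByCountryHardcoded(String country);

    // Portable: #{#n1ql.bucket} is resolved at runtime from the configured bucket
    // (or the collection, when scopes/collections are in use), so no code change is needed
    // when the generated bucket name changes between dev, test, and prod.
    @Query("SELECT META(a).id AS __id, META(a).cas AS __cas, a.* FROM #{#n1ql.bucket} a WHERE a.country = $1")
    List<Airline> findByCountry(String country);
}
```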
There is one case to consider when using scopes and collections. In order to be backwards compatible with @Query methods that used #{#n1ql.bucket} and did not use scopes and collections, #{#n1ql.bucket} returns the collection name (which comes from an annotation or an option) and uses a query_context of bucket.scope. That by itself is fine, but if they want to join with a different collection, then that other collection must be hard-coded in the @Query: "SELECT FROM #{#n1ql.bucket}, other_collection WHERE …". I guess this is OK: the first collection will be the one from #{#n1ql.bucket}, and the other collection will be other_collection (in the same bucket). We should probably add a #{#n1ql.collection} to use instead of the overloaded #{#n1ql.bucket}.
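Continuing the hypothetical AirlineRepository above, a sketch of that same-bucket join (the route collection name and join field are made up for illustration):

```java
// The first keyspace comes from #{#n1ql.bucket} (resolved to the repository's collection
// when scopes/collections are in use, with a query_context of bucket.scope); the second
// collection, "route" here, has to be written out in the query string.
@Query("SELECT META(a).id AS __id, META(a).cas AS __cas, a.* "
     + "FROM #{#n1ql.bucket} a JOIN route r ON a.id = r.airlineid "
     + "WHERE a.country = $1")
List<Airline> findByCountryJoinedWithRoutes(String country);
```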
For joins across scopes, the keyspace name in the second scope would need to be fully specified as bucket.scope.collection. Since #{#n1ql.bucket} is overloaded to be the collection name, there is no expression for the bucket name, so this case would be broken. (It's kind of already broken, as the bucket name of the second bucket.scope.collection needs to be hard-coded, but with generated bucket names it would really be broken, since hard-coding is not an option.) The solution would be to stop overloading n1ql.bucket with the collection name and create a new n1ql.collection (and an n1ql.scope would be nice, too). Since this would be a breaking change, it would be good to target it for the spring-data-couchbase 5.x release.
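A sketch of that broken cross-scope case (scope and collection names are hypothetical); note the generated bucket name has nowhere to come from except a hard-coded literal:

```java
// Cross-scope join: the second keyspace must be fully specified as bucket.scope.collection.
// With no SpEL expression for the bucket name alone, the bucket ends up hard-coded here,
// which is exactly what stops working once bucket names are generated per environment.
@Query("SELECT META(a).id AS __id, META(a).cas AS __cas, a.* "
     + "FROM #{#n1ql.bucket} a "
     + "JOIN `foo_xyz123`.`other_scope`.`other_collection` o ON a.id = o.airlineid "
     + "WHERE a.country = $1")
List<Airline> findByCountryJoinedAcrossScopes(String country);
```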
A hack/workaround would be to get the bucket name from template.getCouchbaseClientFactory().getBucket().name(), add a bucketName parameter to the query, and use it in the query as #{#[]}
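A sketch of one way to apply that workaround; rather than guessing at the exact @Query parameter syntax, this variant resolves the generated bucket name from the same client factory and runs the statement through the SDK cluster directly (scope and collection names are again hypothetical):

```java
import java.util.List;

import org.springframework.data.couchbase.core.CouchbaseTemplate;

import com.couchbase.client.java.json.JsonArray;
import com.couchbase.client.java.query.QueryOptions;

public class GeneratedBucketNameQueries {

    private final CouchbaseTemplate template;

    public GeneratedBucketNameQueries(CouchbaseTemplate template) {
        this.template = template;
    }

    public List<Airline> findByCountryInOtherScope(String country) {
        // Resolve whatever bucket name this environment ended up with at runtime.
        String bucketName = template.getCouchbaseClientFactory().getBucket().name();

        // Build the fully-qualified keyspace for the "other" scope/collection without
        // hard-coding the generated bucket name.
        String statement = "SELECT a.* FROM `" + bucketName + "`.`other_scope`.`other_collection` a "
                + "WHERE a.country = $1";

        return template.getCouchbaseClientFactory().getCluster()
                .query(statement, QueryOptions.queryOptions().parameters(JsonArray.from(country)))
                .rowsAs(Airline.class);
    }
}
```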