
The logic for converting values for writing is confusing and I think very wrong. #1136

Open

Description

@schauder

This became obvious when analysing #1127
Part of this problem will hopefully be fixed by the fix for #1089.

We have multiple checks for determining what data type a value should get converted to, and I don't think they add up to something I understand.

BasicRelationalConverter.writeValue(@Nullable Object value, TypeInformation<?> type) converts the value twice when it is a simple type and only once otherwise.
At the very least this should be refactored so I can understand why it is doing that.
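
To make that concrete, here is a heavily simplified paraphrase of the shape of that flow (not the actual source; the helper names are stand-ins):

```java
// Paraphrased sketch, not the actual implementation; helpers are stand-ins.
@Nullable
Object writeValue(@Nullable Object value, TypeInformation<?> type) {

    if (value == null) {
        return null;
    }

    if (isSimpleType(value.getClass())) {

        // first conversion: align the value with the requested type
        if (canConvert(value.getClass(), type.getType())) {
            value = convert(value, type.getType());
        }

        // second conversion: the default "simple write" conversions run on the result again
        return convertSimpleValueForWrite(value);
    }

    // everything else goes through a single conversion only
    return convert(value, type.getType());
}
```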

Concept for a rewrite of the writing conversion logic.

The goal is to have a clean and conceptually consistent process for converting Java values to database values.

The process should look like this:

  1. Identify the conversion target
    The conversion target is a Java type + a java.sql.SQLType (typically a java.sql.JDBCType)

  2. Perform the conversion.
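
A minimal sketch of what such a conversion target could look like (the ConversionTarget type and all its members are hypothetical):

```java
import java.sql.SQLType;

// Hypothetical value type pairing the Java target type with the SQL target type.
final class ConversionTarget {

    private final Class<?> javaType;
    private final SQLType sqlType; // typically a java.sql.JDBCType

    private ConversionTarget(Class<?> javaType, SQLType sqlType) {
        this.javaType = javaType;
        this.sqlType = sqlType;
    }

    static ConversionTarget of(Class<?> javaType, SQLType sqlType) {
        return new ConversionTarget(javaType, sqlType);
    }

    Class<?> getJavaType() {
        return javaType;
    }

    SQLType getSqlType() {
        return sqlType;
    }
}
```

An Enum property written as a string would then resolve to something like ConversionTarget.of(String.class, JDBCType.VARCHAR).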

Identifying the conversion target should follow these rules (a sketch of the resulting resolution order follows after the list):

  • If it is an entity, we don't convert it.

  • If it is collection-like, the target is the same kind of collection. We run the same process to determine the target for its elements.

  • If there is a matching WritingConverter it determines the target.

  • If the incoming type is a simple type, the target type is just that type.

  • Otherwise we apply the standard conversions (e.g. Enums get converted to Strings ... we need to inspect what is currently in the source code).
    I wonder if we can identify the conversion target without actually performing the conversion in this step.

  • There is special handling for arrays ... we need to slot them in somehow. I currently don't think they need special consideration in the conversion process, but I might be wrong about this.

  • Once the target type is determined, the SQLType gets derived from it.
    This is either a lookup from JdbcUtil, or it is already part of the conversion target.
    I guess that means we don't have a SQLType, but a Function<Object, SQLType>.
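
Putting the rules above together, a rough sketch of the resolution order (everything here is hypothetical; the abstract helpers stand in for the real mapping and conversion infrastructure):

```java
import java.util.Optional;

// Hypothetical sketch of the resolution order described above.
abstract class TargetResolutionSketch {

    ConversionTarget resolveTarget(Class<?> sourceType) {

        if (isEntity(sourceType)) {
            return null; // entities don't get converted at all
        }

        if (isCollectionLike(sourceType)) {
            // the target is the same kind of collection; recurse for the element type
            return collectionOf(sourceType, resolveTarget(elementType(sourceType)));
        }

        Optional<Class<?>> custom = customWriteTarget(sourceType); // matching WritingConverter
        if (custom.isPresent()) {
            return ConversionTarget.of(custom.get(), sqlTypeFor(custom.get()));
        }

        if (isSimpleType(sourceType)) {
            return ConversionTarget.of(sourceType, sqlTypeFor(sourceType));
        }

        // standard conversions, e.g. Enum -> String
        Class<?> standard = standardConversionTarget(sourceType);
        return ConversionTarget.of(standard, sqlTypeFor(standard));
    }

    abstract boolean isEntity(Class<?> type);
    abstract boolean isCollectionLike(Class<?> type);
    abstract Class<?> elementType(Class<?> type);
    abstract ConversionTarget collectionOf(Class<?> collectionType, ConversionTarget elementTarget);
    abstract Optional<Class<?>> customWriteTarget(Class<?> type);
    abstract boolean isSimpleType(Class<?> type);
    abstract Class<?> standardConversionTarget(Class<?> type);
    abstract java.sql.SQLType sqlTypeFor(Class<?> type); // JdbcUtil-style lookup
}
```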

Part of the challenge, and the reason the current implementation is so convoluted, is that we have different sets of information going into this process:

  1. for save operations we do have the property.
  2. for derived queries we do have the property.
  3. for annotated query methods we do have the method parameter.
  4. I'm not even sure what we do have for SpEL expressions.

The class structure should lean heavily on Spring Data Cassandra's, but with a combination of Java type + SQLType instead of just a Java type as the target.

https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/ColumnType.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/ColumnTypeResolver.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/DefaultColumnTypeResolver.java
https://github.com/spring-projects/spring-data-cassandra/blob/main/spring-data-cassandra/src/main/java/org/springframework/data/cassandra/core/convert/MappingCassandraConverter.java
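
Translated to this module, the resolver could offer one overload per kind of available information, along the lines of Cassandra's ColumnTypeResolver (the interface name and methods below are hypothetical):

```java
import org.springframework.core.MethodParameter;
import org.springframework.data.relational.core.mapping.RelationalPersistentProperty;
import org.springframework.data.util.TypeInformation;

// Hypothetical JDBC analogue of Cassandra's ColumnTypeResolver, returning a
// Java type + SQLType instead of just a Java type.
interface JdbcColumnTypeResolver {

    ConversionTarget resolve(RelationalPersistentProperty property); // save operations, derived queries

    ConversionTarget resolve(MethodParameter parameter); // annotated query methods

    ConversionTarget resolve(TypeInformation<?> type); // e.g. results of SpEL expressions

    ConversionTarget resolve(Object value); // last resort: only the value itself is known
}
```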

Related issues

The following issues might either get fixed as a side effect of this issue, or become much easier to fix once it is resolved.

#787
#1023
#725
#629


Labels

in: mapping (Mapping and conversion infrastructure)
