Description
This issue was previously reported in a comment on #1252; I'm re-reporting it as requested by @smarter. (The original issue was fixed, but the issue reported in the comment was not.)
Given the code

```scala
trait A { type L[Y <: Int] >: Int }
trait B { type L[Y <: String] >: String }
trait C extends A with B { type L[Y <: Any] >: Any }
```
the compiler reports

```
-- Error: ... -------
3 |trait C extends A with B { type L[Y <: Any] >: Any }
  |                                ^
  |error overriding type L in trait B with bounds >: [Y <: String] => String <: [Y <: String] => Any;
  |  type L with bounds >: [Y] => Any <: [Y] => Any has incompatible type
```
The problem is that, when the compiler checks that the bounds of `L` declared in `C` conform to those declared in `B`, it compares the bound annotations of the type lambdas that form those bounds. But when checking whether two lambdas are subtypes, their bounds need not be compared, as long as both lambdas have a common kind.
Concretely, here we want all the lambdas to have the (wider) kind `(Y <: String) -> *`, which is the declared kind of `L` in the parent trait `B`:

```
[Y <: String] => String   has kind   (Y <: String) -> *
[Y <: String] => Any      has kind   (Y <: String) -> *
[Y] => Any                has kind   (Y <: Any) -> *,   a sub-kind of (Y <: String) -> *
[Y] => Any                has kind   (Y <: Any) -> *,   a sub-kind of (Y <: String) -> *
```
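The kind assignment and sub-kinding used above can be sketched as a small toy checker (a hypothetical illustration, not dotty code; `SUPERS`, `kind_of`, and `sub_kind` are made-up names, and the ground-type lattice is a stand-in for real subtyping):

```python
# Toy ground-type lattice: Nothing <: Int <: Any and Nothing <: String <: Any.
SUPERS = {
    "Nothing": {"Nothing", "Int", "String", "Any"},
    "Int": {"Int", "Any"},
    "String": {"String", "Any"},
    "Any": {"Any"},
}

def sub(a, b):
    return b in SUPERS[a]

def kind_of(lo, hi):
    # The kind of [Y >: lo <: hi] => body is (Y >: lo <: hi) -> *,
    # represented here as the pair of bounds.
    return (lo, hi)

def sub_kind(k1, k2):
    # (Y >: lo1 <: hi1) -> * is a sub-kind of (Y >: lo2 <: hi2) -> *
    # when it accepts every argument the super-kind demands, i.e. the
    # interval [lo2, hi2] is contained in [lo1, hi1]: contravariance
    # in the bounds.
    (lo1, hi1), (lo2, hi2) = k1, k2
    return sub(lo1, lo2) and sub(hi2, hi1)

k_B = kind_of("Nothing", "String")                # declared kind of L in B
assert sub_kind(kind_of("Nothing", "Any"), k_B)   # (Y <: Any) -> *  is a sub-kind of  (Y <: String) -> *
assert not sub_kind(kind_of("Nothing", "Int"), k_B)  # (Y <: Int) -> * is not: it rejects String arguments
```

So a lambda with a wider parameter bound can always be used at the narrower kind declared in the parent.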
so all the bounds are well-kinded and their kinds conform to the signature in the parent class. Then we need to check that the definitions of the lambdas agree in this kind, which they do:
```
/* Lower bounds are checked contravariantly: */
String <: Any forall Y <: String,  so  [Y <: String] => String <: [Y <: Any   ] => Any
/* Upper bounds are checked covariantly: */
Any <: Any forall Y <: String,     so  [Y <: Any   ] => Any    <: [Y <: String] => Any
```
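As a sanity check, the two comparisons above can be replayed on a toy subtype lattice (again a hypothetical sketch, not compiler code; since none of the lambda bodies mention `Y`, each "forall Y" check reduces to a single ground comparison):

```python
# Toy ground-type lattice: Nothing <: Int <: Any and Nothing <: String <: Any.
SUPERS = {
    "Nothing": {"Nothing", "Int", "String", "Any"},
    "Int": {"Int", "Any"},
    "String": {"String", "Any"},
    "Any": {"Any"},
}

def subtype(a, b):
    return b in SUPERS[a]

# Lower bounds, contravariantly: B's lower-bound body must be below C's.
assert subtype("String", "Any")   # so [Y <: String] => String <: [Y] => Any
# Upper bounds, covariantly: C's upper-bound body must be below B's.
assert subtype("Any", "Any")      # so [Y] => Any <: [Y <: String] => Any
```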
With @smarter and @Blaisorblade, we have discussed two possible fixes:
- Make subtype checking kind-driven: when comparing two lambdas, use the expected bounds from the kind when comparing the bodies. This can be summed up in the following kinded subtyping rule.

  ```
  G, X >: L <: U |- T1 <: T2 :: K
  G |- [X >: L1 <: U1] => T1 :: (X >: L <: U) -> K
  G |- [X >: L2 <: U2] => T2 :: (X >: L <: U) -> K
  -----------------------------------------------------------------------------
  G |- [X >: L1 <: U1] => T1 <: [X >: L2 <: U2] => T2 :: (X >: L <: U) -> K
  ```

  The advantage of this solution is that it's known to be sound (it has been verified, at least for a subset of the type system, in my thesis). The drawback is that subtype checking would have to be rewritten to take an additional input: the expected kind of the types being compared. That would be a major change.
- Find a compatible, common bound by computing the intersection/union of the bounds in the individual lambdas. As a subtyping rule, this would look as follows.

  ```
  G, X >: (L1 | L2) <: (U1 & U2) |- T1 <: T2
  -----------------------------------------------------
  G |- [X >: L1 <: U1] => T1 <: [X >: L2 <: U2] => T2
  ```

  The advantage of this solution is that subtype checking does not have to be redesigned completely, while the drawback is that it requires computing the intersections/unions of higher-kinded types. That can probably be done point-wise (i.e. by pushing intersections/unions into the lambdas), but I have not worked out the details, nor have I got any sort of soundness proof.
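For illustration, the first (kind-driven) fix can be sketched as a tiny checker over a toy type language. Everything here is hypothetical (`Lam`, `Arrow`, `subtype_at_kind`, and the ground-type lattice are made up for the sketch), and bodies are kept ground so the context induced by the kind's bounds never needs to be consulted:

```python
from dataclasses import dataclass

# Toy ground-type lattice: Nothing <: Int <: Any and Nothing <: String <: Any.
SUPERS = {
    "Nothing": {"Nothing", "Int", "String", "Any"},
    "Int": {"Int", "Any"},
    "String": {"String", "Any"},
    "Any": {"Any"},
}

def ground_sub(a, b):
    return b in SUPERS[a]

@dataclass(frozen=True)
class Lam:    # [Y >: lo <: hi] => body (bodies kept ground for simplicity)
    lo: str
    hi: str
    body: str

@dataclass(frozen=True)
class Arrow:  # expected kind (Y >: lo <: hi) -> *
    lo: str
    hi: str

def subtype_at_kind(t1, t2, kind):
    # Kind-driven comparison: at an arrow kind, the lambdas' own bound
    # annotations are never compared; only the bodies are, at the result
    # kind.  In a full checker the kind's bounds (kind.lo, kind.hi) would
    # be pushed into the context for bodies that mention Y.
    if isinstance(kind, Arrow):
        return ground_sub(t1.body, t2.body)
    return ground_sub(t1, t2)

k = Arrow("Nothing", "String")                # declared kind of L in B
lower_B = Lam("Nothing", "String", "String")  # [Y <: String] => String
lower_C = Lam("Nothing", "Any", "Any")        # [Y] => Any
# Succeeds even though the annotations differ: they are not compared.
assert subtype_at_kind(lower_B, lower_C, k)
```

In the real compiler this would amount to threading an expected kind through the subtype checker, which is exactly the major change noted above.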
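The second fix can likewise be sketched as a toy checker; here the merged bounds actually matter when a body mentions the bound variable. Again, the encoding (`("lam", lo, hi, body)` tuples, `join`/`meet` for `|`/`&` on a small ground-type lattice) is a hypothetical illustration, not dotty code:

```python
# Toy ground-type lattice: Nothing <: Int <: Any and Nothing <: String <: Any.
SUPERS = {
    "Nothing": {"Nothing", "Int", "String", "Any"},
    "Int": {"Int", "Any"},
    "String": {"String", "Any"},
    "Any": {"Any"},
}

def ground_sub(a, b):
    return b in SUPERS[a]

def join(a, b):  # L1 | L2 on this small lattice
    if ground_sub(a, b): return b
    if ground_sub(b, a): return a
    return "Any"

def meet(a, b):  # U1 & U2 on this small lattice
    if ground_sub(a, b): return a
    if ground_sub(b, a): return b
    return "Nothing"

# A type is a ground name, the bound variable "Y", or a lambda
# encoded as ("lam", lo, hi, body).
def subtype(t1, t2, env=None):
    env = dict(env or {})
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        # The proposed rule: compare bodies under Y >: (L1 | L2) <: (U1 & U2).
        _, lo1, hi1, body1 = t1
        _, lo2, hi2, body2 = t2
        env["Y"] = (join(lo1, lo2), meet(hi1, hi2))
        return subtype(body1, body2, env)
    if t1 == "Y":  # widen the variable to its merged upper bound
        return subtype(env["Y"][1], t2, env)
    if t2 == "Y":  # narrow the variable to its merged lower bound
        return subtype(t1, env["Y"][0], env)
    return ground_sub(t1, t2)

# [Y] => Y <: [Y <: Int] => Int holds: under Y <: (Any & Int) = Int,
# the body Y widens to Int, and Int <: Int.
assert subtype(("lam", "Nothing", "Any", "Y"), ("lam", "Nothing", "Int", "Int"))
# [Y] => Any <: [Y] => Y fails: Any is not below the merged lower bound Nothing.
assert not subtype(("lam", "Nothing", "Any", "Any"), ("lam", "Nothing", "Any", "Y"))
```

The sketch only merges bounds point-wise for ground bounds; whether the same point-wise strategy is sound for higher-kinded bounds is exactly the open question mentioned above.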