diff --git a/docs/docs/reference/contextual-delegate/context-bounds.md b/docs/docs/reference/contextual-delegate/context-bounds.md deleted file mode 100644 index ed54a4ba1411..000000000000 --- a/docs/docs/reference/contextual-delegate/context-bounds.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: doc-page -title: "Context Bounds" ---- - -## Context Bounds - -A context bound is a shorthand for expressing a common pattern of an implicit parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this: -```scala -def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) -``` -A bound like `: Ord` on a type parameter `T` of a method or class is equivalent to a given clause `given Ord[T]`. The implicit parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., -```scala -def f[T: C1 : C2, U: C3](x: T) given (y: U, z: V): R -``` -would expand to -```scala -def f[T, U](x: T) given (y: U, z: V) given C1[T], C2[T], C3[U]: R -``` -Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g. -```scala -def g[T <: B : C](x: T): R = ... -``` - -## Syntax - -``` -TypeParamBounds ::= [SubtypeBounds] {ContextBound} -ContextBound ::= ‘:’ Type -``` diff --git a/docs/docs/reference/contextual-delegate/conversions.md b/docs/docs/reference/contextual-delegate/conversions.md deleted file mode 100644 index 802c27569a61..000000000000 --- a/docs/docs/reference/contextual-delegate/conversions.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: doc-page -title: "Implicit Conversions" ---- - -Implicit conversions are defined by delegates for the `scala.Conversion` class. -This class is defined in package `scala` as follows: -```scala -abstract class Conversion[-T, +U] extends (T => U) -``` -For example, here is an implicit conversion from `String` to `Token`: -```scala -delegate for Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) -} -``` -Using an alias delegate this can be expressed more concisely as: -```scala -delegate for Conversion[String, Token] = new KeyWord(_) -``` -An implicit conversion is applied automatically by the compiler in three situations: - -1. If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`. -2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`. -3. In an application `e.m(args)` with `e` of type `T`, if `T` does define - some member(s) named `m`, but none of these members can be applied to the arguments `args`. - -In the first case, the compiler looks for a delegate for -`scala.Conversion` that maps an argument of type `T` to type `S`. In the second and third -case, it looks for a delegate for `scala.Conversion` that maps an argument of type `T` -to a type that defines a member `m` which can be applied to `args` if present. -If such a delegate `C` is found, the expression `e` is replaced by `C.apply(e)`. - -## Examples - -1. The `Predef` package contains "auto-boxing" conversions that map -primitive number types to subclasses of `java.lang.Number`. For instance, the -conversion from `Int` to `java.lang.Integer` can be defined as follows: -```scala -delegate int2Integer for Conversion[Int, java.lang.Integer] = - java.lang.Integer.valueOf(_) -``` - -2. The "magnet" pattern is sometimes used to express many variants of a method. 
Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g.:
-```scala
-object Completions {
-
-  // The argument "magnet" type
-  enum CompletionArg {
-    case Error(s: String)
-    case Response(f: Future[HttpResponse])
-    case Status(code: Future[StatusCode])
-  }
-  object CompletionArg {
-
-    // The conversions defining the possible arguments to pass to `complete`.
-    // Living in the companion object, they always come with CompletionArg.
-    // They can also be invoked explicitly, e.g.
-    //
-    //   CompletionArg.fromStatusCode(statusCode)
-
-    delegate fromString for Conversion[String, CompletionArg] = Error(_)
-    delegate fromFuture for Conversion[Future[HttpResponse], CompletionArg] = Response(_)
-    delegate fromStatusCode for Conversion[Future[StatusCode], CompletionArg] = Status(_)
-  }
-  import CompletionArg._
-
-  def complete[T](arg: CompletionArg) = arg match {
-    case Error(s) => ...
-    case Response(f) => ...
-    case Status(code) => ...
-  }
-}
-```
-This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants.
diff --git a/docs/docs/reference/contextual-delegate/derivation.md b/docs/docs/reference/contextual-delegate/derivation.md
deleted file mode 100644
index eb1a089d62af..000000000000
--- a/docs/docs/reference/contextual-delegate/derivation.md
+++ /dev/null
@@ -1,383 +0,0 @@
----
-layout: doc-page
-title: Typeclass Derivation
----
-
-Typeclass derivation is a way to generate delegates for certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are `Eql`, `Ordering`, `Show`, or `Pickling`. Example:
-```scala
-enum Tree[T] derives Eql, Ordering, Pickling {
-  case Branch(left: Tree[T], right: Tree[T])
-  case Leaf(elem: T)
-}
-```
-The `derives` clause generates delegates for the `Eql`, `Ordering`, and `Pickling` traits in the companion object `Tree`:
-```scala
-delegate [T: Eql] for Eql[Tree[T]] = Eql.derived
-delegate [T: Ordering] for Ordering[Tree[T]] = Ordering.derived
-delegate [T: Pickling] for Pickling[Tree[T]] = Pickling.derived
-```
-
-### Deriving Types
-
-Besides enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are:
-
- - individual case classes or case objects
- - sealed classes or traits that have only case classes and case objects as children.
-
-Examples:
-
-```scala
-case class Labelled[T](x: T, label: String) derives Eql, Show
-
-sealed trait Option[T] derives Eql
-case class Some[T](x: T) extends Option[T]
-case object None extends Option[Nothing]
-```
-
-The generated typeclass delegates are placed in the companion objects `Labelled` and `Option`, respectively.
-
-### Derivable Types
-
-A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The type and implementation of a `derived` method are arbitrary, but typically it has a definition like this:
-```scala
-  def derived[T] given Generic[T] = ...
-```
-That is, the `derived` method takes an implicit parameter of type `Generic` that determines the _shape_ of the deriving type `T`, and it computes the typeclass implementation according to that shape. A `Generic` delegate is generated automatically for any type that derives a typeclass with a `derived` method that refers to `Generic`. One can also derive `Generic` alone, which means a `Generic` delegate is generated without any other type class delegates. E.g.:
-```scala
-sealed trait ParseResult[T] derives Generic
-```
-This is all a user of typeclass derivation has to know. The rest of this page contains information needed to be able to write a typeclass that can appear in a `derives` clause. In particular, it details the means provided for the implementation of data-generic `derived` methods.
-
-### The Shape Type
-
-For every class with a `derives` clause, the compiler computes the shape of that class as a type. For example, here is the shape type for the `Tree[T]` enum:
-```scala
-Cases[(
-  Case[Branch[T], (Tree[T], Tree[T])],
-  Case[Leaf[T], T *: Unit]
-)]
-```
-Informally, this states that
-
-> The shape of a `Tree[T]` is one of two cases: either a `Branch[T]` with two
-  elements of type `Tree[T]`, or a `Leaf[T]` with a single element of type `T`.
-
-The type constructors `Cases` and `Case` come from the companion object of a class
-`scala.compiletime.Shape`, which is defined in the standard library as follows:
-```scala
-sealed abstract class Shape
-
-object Shape {
-
-  /** A sum with alternative types `Alts` */
-  case class Cases[Alts <: Tuple] extends Shape
-
-  /** A product type `T` with element types `Elems` */
-  case class Case[T, Elems <: Tuple] extends Shape
-}
-```
-
-Here is the shape type for `Labelled[T]`:
-```scala
-Case[Labelled[T], (T, String)]
-```
-And here is the one for `Option[T]`:
-```scala
-Cases[(
-  Case[Some[T], T *: Unit],
-  Case[None.type, Unit]
-)]
-```
-Note that an empty element tuple is represented as type `Unit`. A single-element tuple
-is represented as `T *: Unit` since there is no direct syntax for such tuples: `(T)` is just `T` in parentheses, not a tuple.
-
-### The Generic Typeclass
-
-For every class `C[T_1,...,T_n]` with a `derives` clause, the compiler generates in the companion object of `C` a delegate for `Generic[C[T_1,...,T_n]]` that follows
-the outline below:
-```scala
-delegate [T_1, ..., T_n] for Generic[C[T_1,...,T_n]] {
-  type Shape = ...
-  ...
-}
-```
-where the right hand side of `Shape` is the shape type of `C[T_1,...,T_n]`.
-For instance, the definition
-```scala
-enum Result[+T, +E] derives Logging {
-  case class Ok[T](result: T)
-  case class Err[E](err: E)
-}
-```
-would produce:
-```scala
-object Result {
-  import scala.compiletime.Shape._
-
-  delegate [T, E] for Generic[Result[T, E]] {
-    type Shape = Cases[(
-      Case[Ok[T], T *: Unit],
-      Case[Err[E], E *: Unit]
-    )]
-    ...
-  }
-}
-```
-The `Generic` class is defined in package `scala.reflect`.
-
-```scala
-abstract class Generic[T] {
-  type Shape <: scala.compiletime.Shape
-
-  /** The mirror corresponding to ADT instance `x` */
-  def reflect(x: T): Mirror
-
-  /** The ADT instance corresponding to given `mirror` */
-  def reify(mirror: Mirror): T
-
-  /** The companion object of the ADT */
-  def common: GenericClass
-}
-```
-It defines the `Shape` type for the ADT `T`, as well as two methods that map between a
-type `T` and a generic representation of `T`, which we call a `Mirror`:
-The `reflect` method maps an instance of the ADT `T` to its mirror whereas
-the `reify` method goes the other way. There's also a `common` method that returns
-a value of type `GenericClass` which contains information that is the same for all
-instances of a class (right now, this consists of the runtime `Class` value and
-the names of the cases and their parameters).
-
-### Mirrors
-
-A mirror is a generic representation of an instance of an ADT. `Mirror` objects have three components:
-
- - `adtClass: GenericClass`: The representation of the ADT class
- - `ordinal: Int`: The ordinal number of the case among all cases of the ADT, starting from 0
- - `elems: Product`: The elements of the instance, represented as a `Product`.
-
-The `Mirror` class is defined in package `scala.reflect` as follows:
-
-```scala
-class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) {
-
-  /** The `n`'th element of this generic case */
-  def apply(n: Int): Any = elems.productElement(n)
-
-  /** The name of the constructor of the case reflected by this mirror */
-  def caseLabel: String = adtClass.label(ordinal)(0)
-
-  /** The label of the `n`'th element of the case reflected by this mirror */
-  def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1)
-}
-```
-
-### GenericClass
-
-Here's the API of `scala.reflect.GenericClass`:
-
-```scala
-class GenericClass(val runtimeClass: Class[_], labelsStr: String) {
-
-  /** A mirror of the case with ordinal number `ordinal` and elements as given by `product` */
-  def mirror(ordinal: Int, product: Product): Mirror =
-    new Mirror(this, ordinal, product)
-
-  /** A mirror with elements given as an array */
-  def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror =
-    mirror(ordinal, new ArrayProduct(elems))
-
-  /** A mirror with an initial empty array of `numElems` elements, to be filled in */
-  def mirror(ordinal: Int, numElems: Int): Mirror =
-    mirror(ordinal, new Array[AnyRef](numElems))
-
-  /** A mirror of a case with no elements */
-  def mirror(ordinal: Int): Mirror =
-    mirror(ordinal, EmptyProduct)
-
-  /** Case and element labels as a two-dimensional array.
-   *  Each row of the array contains a case label, followed by the labels of the elements of that case.
-   */
-  val label: Array[Array[String]] = ...
-}
-```
-
-The class provides four overloaded methods to create mirrors. The first of these is invoked by the `reflect` method that maps an ADT instance to its mirror. It simply passes the
-instance itself (which is a `Product`) as the second argument of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of `mirror` are typically invoked by typeclass methods that create instances from mirrors. An example would be an `unpickle` method that first creates an array of elements, then creates
-a mirror over that array, and finally uses the `reify` method in `Generic` to create the ADT instance. The fourth version of `mirror` is used to create mirrors of instances that do not have any elements.
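-
-To illustrate how these pieces fit together, here is a hedged sketch (`describe` is not part of the library) of a method that renders an arbitrary ADT instance as a string, using only the `Generic` and `Mirror` operations shown above. Element values are rendered with their `toString` for simplicity.
-
-```scala
-def describe[T](x: T) given (ev: Generic[T]): String = {
-  val m = ev.reflect(x)  // map the instance to its mirror
-  val elems =
-    (0 until m.elems.productArity).map { i =>
-      s"${m.elementLabel(i)} = ${m(i)}"  // label and value of the i'th element
-    }
-  s"${m.caseLabel}(${elems.mkString(", ")})"  // e.g. "Leaf(elem = 3)" for a Tree
-}
-```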
-
-### How to Write Generic Typeclasses
-
-Based on the machinery developed so far it becomes possible to define type classes generically. This means that the `derived` method will compute a type class delegate for any ADT that has a `Generic` delegate, recursively.
-The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and implicit matches. As an example, here is one possible implementation of a generic `Eql` type class, with explanations. Let's assume `Eql` is defined by the following trait:
-```scala
-trait Eql[T] {
-  def eql(x: T, y: T): Boolean
-}
-```
-We need to implement a method `Eql.derived` that produces a delegate for `Eql[T]` provided
-there exists evidence of type `Generic[T]`. Here's a possible solution:
-```scala
-  inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] {
-    def eql(x: T, y: T): Boolean = {
-      val mx = ev.reflect(x)          // (1)
-      val my = ev.reflect(y)          // (2)
-      inline erasedValue[ev.Shape] match {
-        case _: Cases[alts] =>
-          mx.ordinal == my.ordinal && // (3)
-          eqlCases[alts](mx, my, 0)   // [4]
-        case _: Case[_, elems] =>
-          eqlElems[elems](mx, my, 0)  // [5]
-      }
-    }
-  }
-```
-The implementation of the inline method `derived` creates a delegate for `Eql[T]` and implements its `eql` method. The right-hand side of `eql` mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e.
-`(1)`, `(2)`, `(3)`. Compile-time calls that expand to runtime code are marked with a number in brackets, i.e. `[4]`, `[5]`. The implementation of `eql` consists of the following steps.
-
- 1. Map the compared values `x` and `y` to their mirrors using the `reflect` method of the implicitly passed `Generic` `(1)`, `(2)`.
- 2. Match at compile-time against the shape of the ADT given in `ev.Shape`. Dotty does not have a construct for matching types directly, but we can emulate it using an `inline` match over an `erasedValue`. Depending on the actual type `ev.Shape`, the match will reduce at compile time to one of its two alternatives.
- 3. If `ev.Shape` is of the form `Cases[alts]` for some tuple `alts` of alternative types, the equality test consists of comparing the ordinal values of the two mirrors `(3)` and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the `eqlCases` call `[4]`.
- 4. If `ev.Shape` is of the form `Case[_, elems]` for some tuple `elems` of element types, the elements of the case are compared by code that results from the compile-time expansion of the `eqlElems` call `[5]`.
-
-Here is a possible implementation of `eqlCases`:
-```scala
-  inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean =
-    inline erasedValue[Alts] match {
-      case _: (Shape.Case[_, elems] *: alts1) =>
-        if (mx.ordinal == n)             // (6)
-          eqlElems[elems](mx, my, 0)     // [7]
-        else
-          eqlCases[alts1](mx, my, n + 1) // [8]
-      case _: Unit =>
-        throw new MatchError(mx.ordinal) // (9)
-    }
-```
-The inline method `eqlCases` takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances `x` and `y` to be compared and an integer `n` that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values.
- -If the list of alternatives `Alts` consists of a case of type `Case[_, elems]`, possibly followed by further cases in `alts1`, we generate the following code: - - 1. Compare the `ordinal` value of `mx` (a runtime value) with the case number `n` (a compile-time value translated to a constant in the generated code) in an if-then-else `(6)`. - 2. In the then-branch of the conditional we have that the `ordinal` value of both mirrors - matches the number of the case with elements `elems`. Proceed by comparing the elements - of the case in code expanded from the `eqlElems` call `[7]`. - 3. In the else-branch of the conditional we have that the present case does not match - the ordinal value of both mirrors. Proceed by trying the remaining cases in `alts1` using - code expanded from the `eqlCases` call `[8]`. - - If the list of alternatives `Alts` is the empty tuple, there are no further cases to check. - This place in the code should not be reachable at runtime. Therefore an appropriate - implementation is by throwing a `MatchError` or some other runtime exception `(9)`. - -The `eqlElems` method compares the elements of two mirrors that are known to have the same -ordinal number, which means they represent the same case of the ADT. Here is a possible -implementation: -```scala - inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = - inline erasedValue[Elems] match { - case _: (elem *: elems1) => - tryEql[elem]( // [12] - xs(n).asInstanceOf[elem], // (10) - ys(n).asInstanceOf[elem]) && // (11) - eqlElems[elems1](xs, ys, n + 1) // [13] - case _: Unit => - true // (14) - } -``` -`eqlElems` takes as arguments the two mirrors of the elements to compare and a compile-time index `n`, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type `Elems` of all element types that remain to be tested. If that type is -non-empty, say of form `elem *: elems1`, the following code is produced: - - 1. Access the `n`'th elements of both mirrors and cast them to the current element type `elem` - `(10)`, `(11)`. Note that because of the way runtime reflection mirrors compile-time `Shape` types, the casts are guaranteed to succeed. - 2. Compare the element values using code expanded by the `tryEql` call `[12]`. - 3. "And" the result with code that compares the remaining elements using a recursive call - to `eqlElems` `[13]`. - - If type `Elems` is empty, there are no more elements to be compared, so the comparison's result is `true`. `(14)` - - Since `eqlElems` is an inline method, its recursive calls are unrolled. The end result is a conjunction `test_1 && ... && test_n && true` of test expressions produced by the `tryEql` calls. - -The last, and in a sense most interesting part of the derivation is the comparison of a pair of element values in `tryEql`. Here is the definition of this method: -```scala - inline def tryEql[T](x: T, y: T) = implicit match { - case ev: Eql[T] => - ev.eql(x, y) // (15) - case _ => - error("No `Eql` delegate was found for $T") - } -``` -`tryEql` is an inline method that takes an element type `T` and two element values of that type as arguments. It is defined using an `implicit match` that tries to find a delegate for `Eql[T]`. If a delegate `ev` is found, it proceeds by comparing the arguments using `ev.eql`. 
On the other hand, if no delegate is found,
-this signals a compilation error: the user tried a generic derivation of `Eql` for a class with an element type that does not have an `Eql` delegate itself. The error is signaled by
-calling the `error` method defined in `scala.compiletime`.
-
-**Note:** At the moment our error diagnostics for metaprogramming do not yet support interpolated string arguments for the `scala.compiletime.error` method that is called in the second case above. As an alternative, one can simply leave off the second case; a missing typeclass then results in a "failure to reduce match" error.
-
-**Example:** Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived `Eql` delegate for class `Tree`.
-
-```scala
-delegate [T] for Eql[Tree[T]] given (elemEq: Eql[T]) {
-  def eql(x: Tree[T], y: Tree[T]): Boolean = {
-    val ev = the[Generic[Tree[T]]]
-    val mx = ev.reflect(x)
-    val my = ev.reflect(y)
-    mx.ordinal == my.ordinal && {
-      if (mx.ordinal == 0) {
-        this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) &&
-        this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]])
-      }
-      else if (mx.ordinal == 1) {
-        elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T])
-      }
-      else throw new MatchError(mx.ordinal)
-    }
-  }
-}
-```
-
-One important difference between this approach and Scala 2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass delegates for elements recursively using the generic derivation framework. There must be a delegate for `Eql[T]` (which can of course be produced in turn using `Eql.derived`), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable.
-
-### Deriving Delegates Elsewhere
-
-Sometimes one would like to derive a typeclass delegate for an ADT after the ADT is defined, without being able to change the code of the ADT itself.
-To do this, simply define a delegate with the `derived` method of the typeclass as its right-hand side. E.g., to implement `Ordering` for `Option`, define:
-```scala
-delegate [T: Ordering] for Ordering[Option[T]] = Ordering.derived
-```
-Usually, the `Ordering.derived` method has an implicit parameter of type
-`Generic[Option[T]]`. Since the `Option` trait has a `derives` clause,
-the necessary delegate is already present in the companion object of `Option`.
-If the ADT in question does not have a `derives` clause, a `Generic` delegate
-would still be synthesized by the compiler at the point where `derived` is called.
-This is similar to the situation with type tags or class tags: if no delegate
-is found, the compiler will synthesize one.
-
-### Syntax
-
-```
-Template          ::=  InheritClauses [TemplateBody]
-EnumDef           ::=  id ClassConstr InheritClauses EnumBody
-InheritClauses    ::=  [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}]
-ConstrApps        ::=  ConstrApp {‘with’ ConstrApp}
-                    |  ConstrApp {‘,’ ConstrApp}
-```
-
-### Discussion
-
-The typeclass derivation framework is quite small and low-level. There are essentially
-two pieces of infrastructure in the compiler-generated `Generic` delegates:
-
- - a type representing the shape of an ADT,
- - a way to map between ADT instances and generic mirrors.
- -Generic mirrors make use of the already existing `Product` infrastructure for case -classes, which means they are efficient and their generation requires not much code. - -Generic mirrors can be so simple because, just like `Product`s, they are weakly -typed. On the other hand, this means that code for generic typeclasses has to -ensure that type exploration and value selection proceed in lockstep and it -has to assert this conformance in some places using casts. If generic typeclasses -are correctly written these casts will never fail. - -It could make sense to explore a higher-level framework that encapsulates all casts -in the framework. This could give more guidance to the typeclass implementer. -It also seems quite possible to put such a framework on top of the lower-level -mechanisms presented here. diff --git a/docs/docs/reference/contextual-delegate/extension-methods.md b/docs/docs/reference/contextual-delegate/extension-methods.md deleted file mode 100644 index c53fe54117d9..000000000000 --- a/docs/docs/reference/contextual-delegate/extension-methods.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -layout: doc-page -title: "Extension Methods" ---- - -Extension methods allow one to add methods to a type after the type is defined. Example: - -```scala -case class Circle(x: Double, y: Double, radius: Double) - -def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` - -Like regular methods, extension methods can be invoked with infix `.`: - -```scala - val circle = Circle(0, 0, 1) - circle.circumference -``` - -### Translation of Extension Methods - -Extension methods are methods that have a parameter clause in front of the defined -identifier. They translate to methods where the leading parameter section is moved -to after the defined identifier. So, the definition of `circumference` above translates -to the plain method, and can also be invoked as such: -```scala -def circumference(c: Circle): Double = c.radius * math.Pi * 2 - -assert(circle.circumference == circumference(circle)) -``` - -### Translation of Calls to Extension Methods - -When is an extension method applicable? There are two possibilities. - - - An extension method is applicable if it is visible under a simple name, by being defined - or inherited or imported in a scope enclosing the application. - - An extension method is applicable if it is a member of some delegate that's eligible at the point of the application. - -As an example, consider an extension method `longestStrings` on `String` defined in a trait `StringSeqOps`. - -```scala -trait StringSeqOps { - def (xs: Seq[String]) longestStrings = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} -``` -We can make the extension method available by defining a delegate for `StringSeqOps`, like this: -```scala -delegate ops1 for StringSeqOps -``` -Then -```scala -List("here", "is", "a", "list").longestStrings -``` -is legal everywhere `ops1` is available as a delegate. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. - -```scala -object ops2 extends StringSeqOps -import ops2.longestStrings -List("here", "is", "a", "list").longestStrings -``` -The precise rules for resolving a selection to an extension method are as follows. - -Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional, -and where `T` is the expected type. 
The following two rewritings are tried in order:
-
- 1. The selection is rewritten to `m[Ts](e)`.
- 2. If the first rewriting does not typecheck with expected type `T`, and there is a delegate `d`
-    in either the current scope or in the implicit scope of `T`, and `d` defines an extension
-    method named `m`, then the selection is expanded to `d.m[Ts](e)`.
-    This second rewriting is attempted at the time when the compiler also tries an implicit conversion
-    from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results.
-
-So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided
-`circle` has type `Circle` and the `circumference` extension method is defined in an eligible delegate `CircleOps` (i.e. one that is visible at the point of call or defined in the companion object of `Circle`).
-
-### Delegates for Extension Methods
-
-Delegates that define extension methods can also be defined without a `for` clause. E.g.,
-
-```scala
-delegate StringOps {
-  def (xs: Seq[String]) longestStrings: Seq[String] = {
-    val maxLength = xs.map(_.length).max
-    xs.filter(_.length == maxLength)
-  }
-}
-
-delegate {
-  def (xs: List[T]) second[T] = xs.tail.head
-}
-```
-If such delegates are anonymous (as in the second clause), their name is synthesized from the name
-of the first defined extension method.
-
-### Operators
-
-The extension method syntax also applies to the definitions of operators.
-In each case the definition syntax mirrors the way the operator is applied.
-Examples:
-```scala
-  def (x: String) < (y: String) = ...
-  def (x: Elem) +: (xs: Seq[Elem]) = ...
-
-  "ab" < "c"
-  1 +: List(2, 3)
-```
-The two definitions above translate to
-```scala
-  def < (x: String)(y: String) = ...
-  def +: (xs: Seq[Elem])(x: Elem) = ...
-```
-Note the swap of the two parameters `x` and `xs` when translating
-the right-binding operator `+:` to an extension method. This is analogous
-to the implementation of right-binding operators as normal methods.
-
-### Generic Extensions
-
-The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples:
-
-```scala
-def (xs: List[T]) second [T] =
-  xs.tail.head
-
-def (xs: List[List[T]]) flattened [T] =
-  xs.foldLeft[List[T]](Nil)(_ ++ _)
-
-def (x: T) + [T : Numeric](y: T): T =
-  the[Numeric[T]].plus(x, y)
-```
-
-As usual, type parameters of the extension method follow the defined method name. Nevertheless, such type parameters can already be used in the preceding parameter clause.
-
-### Syntax
-
-The required syntax extension just adds one clause for extension methods relative
-to the [current syntax](https://github.com/lampepfl/dotty/blob/master/docs/docs/internals/syntax.md).
-```
-DefSig            ::=  ...
-                    |  ‘(’ DefParam ‘)’ [nl] id [DefTypeParamClause] DefParamClauses
-```
-
diff --git a/docs/docs/reference/contextual-delegate/import-implied.md b/docs/docs/reference/contextual-delegate/import-implied.md
deleted file mode 100644
index 5a5f937ea163..000000000000
--- a/docs/docs/reference/contextual-delegate/import-implied.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-layout: doc-page
-title: "Delegate Imports"
----
-
-A special form of import is used to import delegates. Example:
-```scala
-object A {
-  class TC
-  delegate tc for TC
-  def f given TC = ???
-}
-object B {
-  import A._
-  import delegate A._
-}
-```
-In the code above, the `import A._` clause of object `B` will import all members
-of `A` _except_ the delegate `tc`. Conversely, the second import `import delegate A._` will
-import _only_ that delegate.
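-
-For instance, building on the example above, a call to `f` inside `B` can leave out the `TC` argument only because of the delegate import (a sketch):
-```scala
-object B {
-  import A._           // brings f (and class TC) into scope, but not the delegate tc
-  import delegate A._  // brings the delegate tc into scope
-  f                    // ok: tc is synthesized as the argument for f's given clause
-}
-```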
-
-Generally, a normal import clause brings all definitions except delegates into scope whereas a delegate import brings only delegates into scope.
-
-There are two main benefits arising from these rules:
-
- - It is made clearer where delegates in scope are coming from.
-   In particular, it is not possible to hide imported delegates in a long list of regular imports.
- - It enables importing all delegates
-   without importing anything else. This is particularly important since delegates
-   can be anonymous, so the usual recourse of using named imports is not
-   practical.
-
-### Relationship with Old-Style Implicits
-
-The rules for delegates above have the consequence that a library
-would have to migrate in lockstep with all its users from old-style implicits and
-normal imports to delegate clauses and delegate imports.
-
-The following modifications avoid this hurdle to migration.
-
- 1. A delegate import also brings old-style implicits into scope. So, in Scala 3.0
-    an old-style implicit definition can be brought into scope either by a normal import or by an `import delegate`.
-
- 2. In Scala 3.1, old-style implicits accessed through a normal import
-    will give a deprecation warning.
-
- 3. In some version after 3.1, old-style implicits accessed through a normal import
-    will give a compiler error.
-
-These rules mean that library users can use `import delegate` to access old-style implicits in Scala 3.0,
-and will be gently nudged and then forced to do so in later versions. Libraries can then switch to
-delegate clauses once their user base has migrated.
diff --git a/docs/docs/reference/contextual-delegate/inferable-by-name-parameters.md b/docs/docs/reference/contextual-delegate/inferable-by-name-parameters.md
deleted file mode 100644
index 302ed5f3e7ec..000000000000
--- a/docs/docs/reference/contextual-delegate/inferable-by-name-parameters.md
+++ /dev/null
@@ -1,66 +0,0 @@
----
-layout: doc-page
-title: "Implicit By-Name Parameters"
----
-
-Implicit by-name parameters can be used to avoid a divergent inferred expansion. Example:
-
-```scala
-trait Codec[T] {
-  def write(x: T): Unit
-}
-
-delegate intCodec for Codec[Int] = ???
-
-delegate optionCodec[T] for Codec[Option[T]] given (ev: => Codec[T]) {
-  def write(xo: Option[T]) = xo match {
-    case Some(x) => ev.write(x)
-    case None =>
-  }
-}
-
-val s = the[Codec[Option[Int]]]
-
-s.write(Some(33))
-s.write(None)
-```
-As is the case for a normal by-name parameter, the argument for the implicit parameter `ev`
-is evaluated on demand. In the example above, if the option value `xo` is `None`, the
-argument for `ev` is not evaluated at all.
-
-The synthesized argument for an implicit parameter is backed by a local val
-if this is necessary to prevent an otherwise diverging expansion.
-
-The precise steps for synthesizing an argument for a by-name parameter of type `=> T` are as follows.
-
- 1. Create a new delegate for type `T`:
-
-    ```scala
-    delegate lv for T = ???
-    ```
-    where `lv` is an arbitrary fresh name.
-
- 1. This delegate is not immediately available as a candidate for argument inference (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an argument to an implicit by-name parameter.
-
- 1. 
If this search succeeds with expression `E`, and `E` contains references to the delegate `lv`, replace `E` by
-
-    ```scala
-    { delegate lv for T = E; lv }
-    ```
-
-    Otherwise, return `E` unchanged.
-
-In the example above, the definition of `s` would be expanded as follows.
-
-```scala
-val s = the[Codec[Option[Int]]](
-  optionCodec[Int](intCodec))
-```
-
-No local delegate was generated because the synthesized argument is not recursive.
-
-### Reference
-
-For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998)
-and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html).
diff --git a/docs/docs/reference/contextual-delegate/inferable-params.md b/docs/docs/reference/contextual-delegate/inferable-params.md
deleted file mode 100644
index 7293a1ac7b89..000000000000
--- a/docs/docs/reference/contextual-delegate/inferable-params.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-layout: doc-page
-title: "Given Clauses"
----
-
-Functional programming tends to express most dependencies as simple function parameterization.
-This is clean and powerful, but it sometimes leads to functions that take many parameters and
-call trees where the same value is passed over and over again in long call chains to many
-functions. Given clauses can help here since they enable the compiler to synthesize
-repetitive arguments instead of the programmer having to write them explicitly.
-
-For example, given the [delegates](./instance-defs.md) defined previously,
-a maximum function that works for any arguments for which an ordering exists can be defined as follows:
-```scala
-def max[T](x: T, y: T) given (ord: Ord[T]): T =
-  if (ord.compare(x, y) < 1) y else x
-```
-Here, `ord` is an _implicit parameter_ introduced with a `given` clause.
-The `max` method can be applied as follows:
-```scala
-max(2, 3).given(IntOrd)
-```
-The `.given(IntOrd)` part passes `IntOrd` as an argument for the `ord` parameter. But the point of
-implicit parameters is that this argument can also be left out (and it usually is). So the following
-applications are equally valid:
-```scala
-max(2, 3)
-max(List(1, 2, 3), Nil)
-```
-
-## Anonymous Implicit Parameters
-
-In many situations, the name of an implicit parameter of a method need not be
-mentioned explicitly at all, since it is only used in synthesized arguments for
-other implicit parameters. In that case one can avoid defining a parameter name
-and just provide its type. Example:
-```scala
-def maximum[T](xs: List[T]) given Ord[T]: T =
-  xs.reduceLeft(max)
-```
-`maximum` takes an implicit parameter of type `Ord` only to pass it on as a
-synthesized argument to `max`. The name of the parameter is left out.
-
-Generally, implicit parameters may be given either as a parameter list `(p_1: T_1, ..., p_n: T_n)`
-or as a sequence of types, separated by commas.
-
-## Inferring Complex Arguments
-
-Here are two other methods that have an implicit parameter of type `Ord[T]`:
-```scala
-def descending[T] given (asc: Ord[T]): Ord[T] = new Ord[T] {
-  def compare(x: T, y: T) = asc.compare(y, x)
-}
-
-def minimum[T](xs: List[T]) given Ord[T] =
-  maximum(xs).given(descending)
-```
-The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`.
-With this setup, the following calls are all well-formed, and they all normalize to the last one:
-```scala
-minimum(xs)
-maximum(xs).given(descending)
-maximum(xs).given(descending.given(ListOrd))
-maximum(xs).given(descending.given(ListOrd.given(IntOrd)))
-```
-
-## Mixing Given Clauses And Normal Parameters
-
-Given clauses can be freely mixed with normal parameters.
-A given clause may be followed by a normal parameter and _vice versa_.
-There can be several given clauses in a definition. Example:
-```scala
-def f given (u: Universe) (x: u.T) given Context = ...
-
-delegate global for Universe { type T = String ... }
-delegate ctx for Context { ... }
-```
-Then the following calls are all valid (and normalize to the last one):
-```scala
-f("abc")
-f.given(global)("abc")
-f("abc").given(ctx)
-f.given(global)("abc").given(ctx)
-```
-
-## Summoning Delegates
-
-A method `the` in `Predef` returns the delegate for a given type. For example,
-the delegate for `Ord[List[Int]]` is produced by
-```scala
-the[Ord[List[Int]]]  // reduces to ListOrd given IntOrd
-```
-The `the` method is simply defined as the (non-widening) identity function over an implicit parameter:
-```scala
-def the[T] given (x: T): x.type = x
-```
-
-## Syntax
-
-Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html).
-```
-ClsParamClause    ::=  ...
-                    |  ‘given’ (‘(’ [ClsParams] ‘)’ | GivenTypes)
-DefParamClause    ::=  ...
-                    |  GivenParamClause
-GivenParamClause  ::=  ‘given’ (‘(’ DefParams ‘)’ | GivenTypes)
-GivenTypes        ::=  AnnotType {‘,’ AnnotType}
-
-InfixExpr         ::=  ...
-                    |  InfixExpr ‘given’ (InfixExpr | ParArgumentExprs)
-```
diff --git a/docs/docs/reference/contextual-delegate/instance-defs.md b/docs/docs/reference/contextual-delegate/instance-defs.md
deleted file mode 100644
index 518e971635d8..000000000000
--- a/docs/docs/reference/contextual-delegate/instance-defs.md
+++ /dev/null
@@ -1,84 +0,0 @@
----
-layout: doc-page
-title: "Delegates"
----
-
-Delegates define "canonical" values of given types
-that serve for synthesizing arguments to [given clauses](./inferable-params.html). Example:
-
-```scala
-trait Ord[T] {
-  def compare(x: T, y: T): Int
-  def (x: T) < (y: T) = compare(x, y) < 0
-  def (x: T) > (y: T) = compare(x, y) > 0
-}
-
-delegate IntOrd for Ord[Int] {
-  def compare(x: Int, y: Int) =
-    if (x < y) -1 else if (x > y) +1 else 0
-}
-
-delegate ListOrd[T] for Ord[List[T]] given (ord: Ord[T]) {
-  def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match {
-    case (Nil, Nil) => 0
-    case (Nil, _) => -1
-    case (_, Nil) => +1
-    case (x :: xs1, y :: ys1) =>
-      val fst = ord.compare(x, y)
-      if (fst != 0) fst else compare(xs1, ys1)
-  }
-}
-```
-This code defines a trait `Ord` and two delegate clauses. `IntOrd` defines
-a delegate for the type `Ord[Int]` whereas `ListOrd[T]` defines delegates
-for `Ord[List[T]]` for all types `T` that come with a delegate for `Ord[T]` themselves.
-The `given` clause in `ListOrd` defines an implicit parameter.
-Given clauses are further explained in the [next section](./inferable-params.html).
-
-## Anonymous Delegates
-
-The name of a delegate can be left out. So the delegate definitions
-of the last section can also be expressed like this:
-```scala
-delegate for Ord[Int] { ... }
-delegate [T] for Ord[List[T]] given (ord: Ord[T]) { ... }
-```
-If the name of a delegate is missing, the compiler will synthesize a name from
-the type(s) in the `for` clause.
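-
-Since an anonymous delegate cannot be referred to by name, it is typically summoned by type instead. A sketch, using the `the` method described under [Given Clauses](./inferable-params.html):
-```scala
-val intOrd: Ord[Int] = the[Ord[Int]]               // finds the anonymous delegate for Ord[Int]
-val listOrd: Ord[List[Int]] = the[Ord[List[Int]]]  // composes the two anonymous delegates
-```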
-
-## Alias Delegates
-
-An alias can be used to define a delegate that is equal to some expression. E.g.:
-```scala
-delegate global for ExecutionContext = new ForkJoinPool()
-```
-This creates a delegate `global` of type `ExecutionContext` that resolves to the right hand side `new ForkJoinPool()`.
-The first time `global` is accessed, a new `ForkJoinPool` is created, which is then
-returned for this and all subsequent accesses to `global`.
-
-Alias delegates can be anonymous, e.g.
-```scala
-delegate for Position = enclosingTree.position
-```
-An alias delegate can have type and context parameters just like any other delegate, but it can only implement a single type.
-
-## Delegate Creation
-
-A delegate without type parameters or given clause is created on-demand, the first time it is accessed. Its creation is not required to ensure safe publication, which means that different threads might create different delegates for the same `delegate` clause. If a `delegate` clause has type parameters or a given clause, a fresh delegate is created for each reference.
-
-## Syntax
-
-Here is the new syntax of delegate clauses, seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html).
-```
-TmplDef          ::=  ...
-                  |  ‘delegate’ DelegateDef
-DelegateDef      ::=  [id] [DefTypeParamClause] DelegateBody
-DelegateBody     ::=  [‘for’ ConstrApp {‘,’ ConstrApp }] {GivenParamClause} [TemplateBody]
-                  |  ‘for’ Type {GivenParamClause} ‘=’ Expr
-ConstrApp        ::=  AnnotType {ArgumentExprs}
-                  |  ‘(’ ConstrApp {‘given’ (InfixExpr | ParArgumentExprs)} ‘)’
-GivenParamClause ::=  ‘given’ (‘(’ [DefParams] ‘)’ | GivenTypes)
-GivenTypes       ::=  AnnotType {‘,’ AnnotType}
-```
-The identifier `id` can be omitted only if either the `for` part or the template body is present.
-If the `for` part is missing, the template body must define at least one extension method.
diff --git a/docs/docs/reference/contextual-delegate/motivation.md b/docs/docs/reference/contextual-delegate/motivation.md
deleted file mode 100644
index 6f39e3614439..000000000000
--- a/docs/docs/reference/contextual-delegate/motivation.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-layout: doc-page
-title: "Overview"
----
-
-### Critique of the Status Quo
-
-Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them.
-
-Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit, e.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164)
-or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda.
-
-Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. Given a type, the compiler synthesizes a "canonical" term that has that type.
Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass-based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly.
-
-Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses.
-
-Particular criticisms are:
-
-1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions
-
-    ```scala
-    implicit def i1(implicit x: T): C[T] = ...
-    implicit def i2(x: T): C[T] = ...
-    ```
-
-   the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. But the problem remains that syntactically, conversions and values just look too similar for comfort.
-
- 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports.
-
- 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above.
-
- 4. The syntax of implicit parameters also has shortcomings. It starts with the position of `implicit` as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This represents an irregular case with respect to the rest of Scala's syntax. Furthermore, while implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call.
For instance, in
-    ```scala
-    def currentMap(implicit ctx: Context): Map[String, Int]
-    ```
-   one cannot write `currentMap("abc")` since the string "abc" is taken as an explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced.
-
- 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE, but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain.
-
-None of these shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be.
-
-Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006 and picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones.
-
-Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design.
-
-### The New Design
-
-The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes:
-
- 1. [Delegates](./instance-defs.html) are a new way to define basic terms that can be synthesized. They replace implicit definitions. The core principle of the proposal is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types.
-
- 2. [Given Clauses](./inferable-params.html) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts. It also allows us to have several implicit parameter sections, and to have implicit parameters followed by normal ones.
-
- 3. [Delegate Imports](./import-implied.html) are a new form of import that specifically imports delegates and nothing else. Delegates _must be_ imported with `import delegate`; a plain import will no longer bring them into scope.
-
- 4. [Implicit Conversions](./conversions.html) are now expressed as delegates for a standard `Conversion` class.
All other forms of implicit conversions will be phased out.
-
-This section also contains pages describing other language features that are related to context abstraction. These are:
-
- - [Context Bounds](./context-bounds.html), which carry over unchanged.
- - [Extension Methods](./extension-methods.html) replace implicit classes in a way that integrates better with typeclasses.
- - [Implementing Typeclasses](./typeclasses.html) demonstrates how some common typeclasses can be implemented using the new constructs.
- - [Typeclass Derivation](./derivation.html) introduces constructs to automatically derive typeclass delegates for ADTs.
- - [Multiversal Equality](./multiversal-equality.html) introduces a special typeclass
-   to support type safe equality.
- - [Implicit Function Types](./query-types.html) introduce a way to abstract over implicit parameterization.
- - [Implicit By-Name Parameters](./inferable-by-name-parameters.html) are an essential tool to define recursive implicits without looping.
- - [Relationship with Scala 2 Implicits](./relationship-implicits.html) discusses the relationship between old-style implicits and
-   new-style delegates and given clauses and how to migrate from one to the other.
-
-Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define delegates instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit with normal arguments. There is a separate way to import delegates that does not allow them to hide in a sea of normal imports. And there is a single way to define an implicit conversion, which is clearly marked as such and does not require special syntax.
-
-This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly.
-
-Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I believe now that this is impossible.
-
- - First, some of the problems are clearly syntactic and require different syntax to solve them.
- - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution.
- - Third, even if we were somehow to succeed with migration, we would still have the problem
-   of how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away; and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have
-   to add modifications and qualifications to all existing literature and courseware, likely causing more confusion with beginners instead of less.
By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated.
-
diff --git a/docs/docs/reference/contextual-delegate/multiversal-equality.md b/docs/docs/reference/contextual-delegate/multiversal-equality.md
deleted file mode 100644
index e8c0d73fd5cb..000000000000
--- a/docs/docs/reference/contextual-delegate/multiversal-equality.md
+++ /dev/null
@@ -1,218 +0,0 @@
----
-layout: doc-page
-title: "Multiversal Equality"
----
-
-Previously, Scala had universal equality: Two values of any types
-could be compared with each other with `==` and `!=`. This came from
-the fact that `==` and `!=` are implemented in terms of Java's
-`equals` method, which can also compare values of any two reference
-types.
-
-Universal equality is convenient. But it is also dangerous since it
-undermines type safety. For instance, let's assume one is left after some refactoring
-with an erroneous program where a value `y` has type `S` instead of the correct type `T`.
-
-```scala
-val x = ... // of type T
-val y = ... // of type S, but should be T
-x == y      // typechecks, will always yield false
-```
-
-If `y` gets compared to other values of type `T`,
-the program will still typecheck, since values of all types can be compared with each other.
-But it will probably give unexpected results and fail at runtime.
-
-Multiversal equality is an opt-in way to make universal equality
-safer. It uses a binary typeclass `Eql` to indicate that values of
-two given types can be compared with each other.
-The example above would report a type error if `S` or `T` were a class
-that derives `Eql`, e.g.
-```scala
-class T derives Eql
-```
-Alternatively, one can also provide an `Eql` delegate directly, like this:
-```scala
-delegate for Eql[T, T] = Eql.derived
-```
-This definition effectively says that values of type `T` can (only) be
-compared to other values of type `T` when using `==` or `!=`. The definition
-affects type checking but it has no significance for runtime
-behavior, since `==` always maps to `equals` and `!=` always maps to
-the negation of `equals`. The right hand side `Eql.derived` of the definition
-is a value that has any `Eql` instance as its type. Here is the definition of class
-`Eql` and its companion object:
-```scala
-package scala
-import annotation.implicitNotFound
-
-@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=")
-sealed trait Eql[-L, -R]
-
-object Eql {
-  object derived extends Eql[Any, Any]
-}
-```
-
-One can have several `Eql` delegates for a type. For example, the four
-definitions below make values of type `A` and type `B` comparable with
-each other, but not comparable to anything else:
-
-```scala
-delegate for Eql[A, A] = Eql.derived
-delegate for Eql[B, B] = Eql.derived
-delegate for Eql[A, B] = Eql.derived
-delegate for Eql[B, A] = Eql.derived
-```
-The `scala.Eql` object defines a number of `Eql` delegates that together
-define a rule book for what standard types can be compared (more details below).
-
-There's also a "fallback" instance named `eqlAny` that allows comparisons
-over all types that do not themselves have an `Eql` delegate.
`eqlAny` is -defined as follows: - -```scala -def eqlAny[L, R]: Eql[L, R] = Eql.derived -``` - -Even though `eqlAny` is not declared a delegate, the compiler will still -construct an `eqlAny` instance as answer to an implicit search for the -type `Eql[L, R]`, unless `L` or `R` have `Eql` delegates -defined on them, or the language feature `strictEquality` is enabled. - -The primary motivation for having `eqlAny` is backwards compatibility. -If this is of no concern, one can disable `eqlAny` by enabling the language -feature `strictEquality`. As with all language features, this can be done either -with an import - -```scala -import scala.language.strictEquality -``` -or with a command line option `-language:strictEquality`. - -## Deriving Eql Delegates - -Instead of defining `Eql` delegates directly, it is often more convenient to derive them. Example: -```scala -class Box[T](x: T) derives Eql -``` -By the usual rules of [typeclass derivation](./derivation.html), -this generates the following `Eql` delegate in the companion object of `Box`: -```scala -delegate [T, U] for Eql[Box[T], Box[U]] given Eql[T, U] = Eql.derived -``` -That is, two boxes are comparable with `==` or `!=` if their elements are. Examples: -```scala -new Box(1) == new Box(1L) // ok since there is a delegate for `Eql[Int, Long]` -new Box(1) == new Box("a") // error: can't compare -new Box(1) == 1 // error: can't compare -``` - -## Precise Rules for Equality Checking - -The precise rules for equality checking are as follows. - -If the `strictEquality` feature is enabled then -a comparison using `x == y` or `x != y` between values `x: T` and `y: U` -is legal if - - 1. there is a delegate for `Eql[T, U]`, or - 2. one of `T`, `U` is `Null`. - -In the default case where the `strictEquality` feature is not enabled the comparison is -also legal if - - 1. `T` and `U` are the same, or - 2. one of `T` and `U` is a subtype of the _lifted_ version of the other type, or - 3. neither `T` nor `U` have a _reflexive `Eql` delegate_. - -Explanations: - - - _lifting_ a type `S` means replacing all references to abstract types - in covariant positions of `S` by their upper bound, and replacing - all refinement types in covariant positions of `S` by their parent. - - a type `T` has a _reflexive `Eql` delegate_ if the implicit search for `Eql[T, T]` - succeeds. - -## Predefined Eql Delegates - -The `Eql` object defines delegates for comparing - - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`, - - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`, - - `scala.collection.Seq`, and `scala.collection.Set`. - -Delegates are defined so that every one of these types has a reflexive `Eql` delegate, and the following holds: - - - Primitive numeric types can be compared with each other. - - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_). - - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_). - - `Char` can be compared with `java.lang.Character` (and _vice versa_). - - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared - with each other if their element types can be compared. The two sequence types - need not be the same. - - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared - with each other if their element types can be compared. The two set types - need not be the same. - - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_).
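For illustration, here is a small sketch of comparisons the rule book above permits or rejects. The types are standard library types; the commented-out line is what fails once `strictEquality` is enabled:

```scala
import scala.language.strictEquality

1 == 1L                         // ok: two primitive numeric types
true == java.lang.Boolean.TRUE  // ok: Boolean vs java.lang.Boolean
List(1, 2) == Vector(1, 2)      // ok: two Seq subtypes with comparable elements
"abc" == null                   // ok: a subtype of AnyRef vs Null
// 1 == "abc"                   // error: no Eql delegate for Int and String
```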
- -## Why Two Type Parameters? - -One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional -implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, -we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. - -Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: -```scala -class List[+T] { - ... - def contains(x: Any): Boolean -} -``` -That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition -```scala - def contains(x: T): Boolean -``` -does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: -```scala - def contains[U >: T](x: U): Boolean -``` -This generic version of `contains` is the one used in the current (Scala 2.12) version of `List`. -It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. -However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter: -```scala - def contains[U >: T](x: U) given Eql[T, U]: Boolean // (1) -``` -This version of `contains` is equality-safe! More precisely, given -`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if -`x == y` is type-correct. - -Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: -```scala - def contains[U >: T](x: U) given Eql1[U]: Boolean // (2) -``` -This version could be applied just as widely as the original `contains(x: Any)` method, -since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` have a reflexive `Eql` delegate. That rule simply cannot be expressed if there is a single type parameter for `Eql`. - -The situation is different under `-language:strictEquality`. In that case, -the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the -single and two-parameter versions would indeed coincide for most practical purposes. - -But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be unsurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out non-sensical applications while still allowing all sensible ones. -So it can be done almost at any time, modulo binary compatibility concerns. 
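To illustrate, here is a sketch of how the `Eql`-parameterized `contains` of (1) would behave, pretending `List` used that signature (the error comment follows the rules above):

```scala
val xs: List[Int] = List(1, 2, 3)

xs.contains(2)        // ok: the reflexive delegate Eql[Int, Int] applies
// xs.contains("two") // error: U is inferred as Any, and the eqlAny fallback
//                    // is excluded since Int has a reflexive Eql delegate
```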
-On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` -unusable for all types that have not yet declared an `Eql1` delegate, including all -types coming from Java. This is clearly unacceptable. It would lead to a situation where, -rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. - -For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality. - -In applications where `-language:strictEquality` is the default one could also introduce a one-parameter type alias such as -```scala -type Eq[-T] = Eql[T, T] -``` -Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only -work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere. - - -More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html) -and a [Github issue](https://github.com/lampepfl/dotty/issues/1247). diff --git a/docs/docs/reference/contextual-delegate/query-types-spec.md b/docs/docs/reference/contextual-delegate/query-types-spec.md deleted file mode 100644 index 0e4dae6cb66a..000000000000 --- a/docs/docs/reference/contextual-delegate/query-types-spec.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: doc-page -title: "Implicit Function Types - More Details" ---- - -## Syntax - - Type ::= ... - | `given' FunArgTypes `=>' Type - Expr ::= ... - | `given' FunParams `=>' Expr - -Implicit function types associate to the right, e.g. -`given S => given T => U` is the same as `given S => (given T => U)`. - -## Implementation - -Implicit function types are shorthands for class types that define `apply` -methods with implicit parameters. Specifically, the `N`-ary implicit function type -`given (T1, ..., TN) => R` is a shorthand for the class type -`ImplicitFunctionN[T1, ..., TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`: -```scala -package scala -trait ImplicitFunctionN[-T1 , ... , -TN, +R] { - def apply given (x1: T1 , ... , xN: TN): R -} -``` -Implicit function types erase to normal function types, so these classes are -generated on the fly for typechecking, but not realized in actual code. - -Implicit function literals `given (x1: T1, ..., xn: Tn) => e` map -implicit parameters `xi` of types `Ti` to a result given by expression `e`. -The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names. - -If the expected type of the implicit function literal is of the form -`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and -the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti = Si` is assumed. If the expected type of the implicit function literal is -some other type, all implicit parameter types must be explicitly given, and the expected type of `e` is undefined. The type of the implicit function literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened -type of `e`.
`T` must be equivalent to a type which does not refer to any of -the implicit parameters `xi`. - -The implicit function literal is evaluated as the instance creation -expression: -```scala -new scala.ImplicitFunctionN[T1, ..., Tn, T] { - def apply given (x1: T1, ..., xn: Tn): T = e -} -``` -In the case of a single untyped parameter, `given (x) => e` can be -abbreviated to `given x => e`. - -An implicit parameter may also be a wildcard represented by an underscore `_`. In -that case, a fresh name for the parameter is chosen arbitrarily. - -Note: The closing paragraph of the -[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions) -of Scala 2.12 is subsumed by implicit function types and should be removed. - -Implicit function literals `given (x1: T1, ..., xn: Tn) => e` are -automatically created for any expression `e` whose expected type is -`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is -itself an implicit function literal. This is analogous to the automatic -insertion of `scala.Function0` around expressions in by-name argument position. - -Implicit function types generalize to `N > 22` in the same way that function types do, see [the corresponding -documentation](https://dotty.epfl.ch/docs/reference/dropped-features/limit22.html). - -## Examples - -See the section on Expressiveness from [Simplicitly: foundations and -applications of implicit function -types](https://dl.acm.org/citation.cfm?id=3158130). I've extracted it in [this -Gist](https://gist.github.com/OlivierBlanvillain/234d3927fe9e9c6fba074b53a7bd9592), it might be easier to access than the pdf. - -### Type Checking - -After desugaring, no additional typing rules are required for implicit function types. diff --git a/docs/docs/reference/contextual-delegate/query-types.md b/docs/docs/reference/contextual-delegate/query-types.md deleted file mode 100644 index 3ef79aed15f9..000000000000 --- a/docs/docs/reference/contextual-delegate/query-types.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -layout: doc-page -title: "Implicit Function Types" ---- - -_Implicit functions_ are functions with (only) implicit parameters. -Their types are _implicit function types_. Here is an example of an implicit function type: -```scala -type Contextual[T] = given Context => T -``` -A value of an implicit function type is applied to inferred arguments, in -the same way a method with a given clause is applied. For instance: -```scala - delegate ctx for Context = ... - - def f(x: Int): Contextual[Int] = ... - - f(2).given(ctx) // explicit argument - f(2) // argument is inferred -``` -Conversely, if the expected type of an expression `E` is an implicit function type -`given (T_1, ..., T_n) => U` and `E` is not already an -implicit function literal, `E` is converted to an implicit function literal by rewriting to -```scala - given (x_1: T1, ..., x_n: Tn) => E -``` -where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed -before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` -are available as delegates in `E`. - -Like their types, implicit function literals are written with a `given` prefix. They differ from normal function literals in two ways: - - 1. Their parameters are implicit. - 2. Their types are implicit function types. - -For example, continuing with the previous definitions, -```scala - def g(arg: Contextual[Int]) = ...
- - g(22) // is expanded to g(given ctx => 22) - - g(f(2)) // is expanded to g(given ctx => f(2).given(ctx)) - - g(given ctx => f(22).given(ctx)) // is left as it is -``` -### Example: Builder Pattern - -Implicit function types have considerable expressive power. For -instance, here is how they can support the "builder pattern", where -the aim is to construct tables like this: -```scala - table { - row { - cell("top left") - cell("top right") - } - row { - cell("bottom left") - cell("bottom right") - } - } -``` -The idea is to define classes for `Table` and `Row` that allow -addition of elements via `add`: -```scala - class Table { - val rows = new ArrayBuffer[Row] - def add(r: Row): Unit = rows += r - override def toString = rows.mkString("Table(", ", ", ")") - } - - class Row { - val cells = new ArrayBuffer[Cell] - def add(c: Cell): Unit = cells += c - override def toString = cells.mkString("Row(", ", ", ")") - } - - case class Cell(elem: String) -``` -Then, the `table`, `row` and `cell` constructor methods can be defined -in terms of implicit function types to avoid the plumbing boilerplate -that would otherwise be necessary. -```scala - def table(init: given Table => Unit) = { - delegate t for Table - init - t - } - - def row(init: given Row => Unit) given (t: Table) = { - delegate r for Row - init - t.add(r) - } - - def cell(str: String) given (r: Row) = - r.add(new Cell(str)) -``` -With that setup, the table construction code above compiles and expands to: -```scala - table { given ($t: Table) => - row { given ($r: Row) => - cell("top left").given($r) - cell("top right").given($r) - }.given($t) - row { given ($r: Row) => - cell("bottom left").given($r) - cell("bottom right").given($r) - }.given($t) - } -``` -### Example: Postconditions - -As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, implicit function types, and extension methods to provide a zero-overhead abstraction. - -```scala -object PostConditions { - opaque type WrappedResult[T] = T - - private object WrappedResult { - def wrap[T](x: T): WrappedResult[T] = x - def unwrap[T](x: WrappedResult[T]): T = x - } - - def result[T] given (r: WrappedResult[T]): T = WrappedResult.unwrap(r) - - def (x: T) ensuring [T](condition: given WrappedResult[T] => Boolean): T = { - delegate for WrappedResult[T] = WrappedResult.wrap(x) - assert(condition) - x - } -} - -object Test { - import PostConditions.{ensuring, result} - val s = List(1, 2, 3).sum.ensuring(result == 6) -} -``` -**Explanations**: We use an implicit function type `given WrappedResult[T] => Boolean` -as the type of the condition of `ensuring`. An argument to `ensuring` such as -`(result == 6)` will therefore have a delegate for type `WrappedResult[T]` in -scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure -that we do not get unwanted delegates in scope (this is good practice in all cases -where implicit parameters are involved). Since `WrappedResult` is an opaque type alias, its -values need not be boxed, and since `ensuring` is added as an extension method, its argument -does not need boxing either.
Hence, the implementation of `ensuring` is about as efficient -as the best possible code one could write by hand: - - { val result = List(1, 2, 3).sum - assert(result == 6) - result - } - -### Reference - -For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html) -(which uses a different syntax that has been superseded). - -[More details](./query-types-spec.html) diff --git a/docs/docs/reference/contextual-delegate/relationship-implicits.md b/docs/docs/reference/contextual-delegate/relationship-implicits.md deleted file mode 100644 index 215ff7545bbf..000000000000 --- a/docs/docs/reference/contextual-delegate/relationship-implicits.md +++ /dev/null @@ -1,162 +0,0 @@ ---- -layout: doc-page -title: Relationship with Scala 2 Implicits ---- - -Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features. - -## Simulating Contextual Abstraction with Implicits - -### Delegates - -Delegate clauses can be mapped to combinations of implicit objects, classes and implicit methods. - - 1. Delegates without parameters are mapped to implicit objects. E.g., - ```scala - delegate IntOrd for Ord[Int] { ... } - ``` - maps to - ```scala - implicit object IntOrd extends Ord[Int] { ... } - ``` - 2. Parameterized delegates are mapped to combinations of classes and implicit methods. E.g., - ```scala - delegate ListOrd[T] for Ord[List[T]] given (ord: Ord[T]) { ... } - ``` - maps to - ```scala - class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... } - final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T] - ``` - 3. Alias delegates map to implicit methods. If the delegate has neither type parameters nor a given clause, the result of creating an instance is cached in a variable. If in addition the right hand side is pure and cheap to compute, a simple `val` can be used instead. E.g., - ```scala - delegate global for ExecutionContext = new ForkJoinContext() - delegate config for Config = default.config - ``` - map to - ```scala - private[this] var global$cache: ExecutionContext | Null = null - final implicit def global: ExecutionContext = { - if (global$cache == null) global$cache = new ForkJoinContext() - global$cache - } - - final implicit val config: Config = default.config - ``` - -### Anonymous Delegates - -Anonymous delegates get compiler synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `IntOrd` and `ListOrd` delegates above were left out, the following names would be synthesized instead: -```scala - delegate Ord_Int_repr for Ord[Int] { ... } - delegate Ord_List_repr[T] for Ord[List[T]] { ... } -``` -The synthesized type names are formed from - - - the simple name(s) of the implemented type(s), leaving out any prefixes, - - the simple name(s) of the toplevel argument type constructors to these types - - the suffix `_repr`. - -Anonymous delegates that define extension methods without also implementing a type -get their name from the name of the first extension method and the toplevel type -constructor of its first parameter. For example, the delegate -```scala - delegate { - def (xs: List[T]) second[T] = ... - } -``` -gets the synthesized name `second_of_List_T_repr`. - -### Implicit Parameters - -The new implicit parameter syntax with `given` corresponds largely to Scala-2's implicit parameters. E.g.
-```scala - def max[T](x: T, y: T) given (ord: Ord[T]): T -``` -would be written -```scala - def max[T](x: T, y: T)(implicit ord: Ord[T]): T -``` -in Scala 2. The main difference concerns applications of such parameters. -Explicit arguments to parameters of given clauses _must_ be written using `given`, -mirroring the definition syntax. E.g., `max(2, 3).given(IntOrd)`. -Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern. - -The `the` method corresponds to `implicitly` in Scala 2. -It is precisely the same as the `the` method in Shapeless. -The difference between `the` (in both versions) and `implicitly` is -that `the` can return a more precise type than the type that was -asked for. - -### Context Bounds - -Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters. - -**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or -with a normal application. Once old-style implicits are deprecated, context bounds -will map to given clauses instead. - -### Extension Methods - -Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method -```scala - def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` -could be simulated to some degree by -```scala - implicit class CircleDeco(val c: Circle) extends AnyVal { - def circumference: Double = c.radius * math.Pi * 2 - } -``` -Extension methods in delegates have no direct counterpart in Scala-2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases. - -### Typeclass Derivation - -Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving. - -### Implicit Function Types - -Implicit function types have no analogue in Scala 2. - -### Implicit By-Name Parameters - -Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless. - -## Simulating Scala 2 Implicits in Dotty - -### Implicit Conversions - -Implicit conversion methods in Scala 2 can be expressed as delegates -of the `scala.Conversion` class in Dotty. E.g. instead of -```scala - implicit def stringToToken(str: String): Token = new Keyword(str) -``` -one can write -```scala - delegate stringToToken for Conversion[String, Token] { - def apply(str: String): Token = new Keyword(str) - } -``` - -### Implicit Classes - -Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and a conversion delegate. - -### Abstract Implicits - -An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias delegate.
E.g., Scala 2's -```scala - implicit def symDeco: SymDeco -``` -can be expressed in Dotty as -```scala - def symDeco: SymDeco - delegate for SymDeco = symDeco -``` - -## Implementation Status and Timeline - -The Dotty implementation implements both Scala-2's implicits and the new abstractions. In fact, support for Scala-2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty. -Migration to the new abstractions will be supported by making automatic rewritings available. - -Depending on adoption patterns, old style implicits might start to be deprecated in a version following Scala 3.0. diff --git a/docs/docs/reference/contextual-delegate/typeclasses.md b/docs/docs/reference/contextual-delegate/typeclasses.md deleted file mode 100644 index 0275017345f4..000000000000 --- a/docs/docs/reference/contextual-delegate/typeclasses.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -layout: doc-page -title: "Implementing Typeclasses" ---- - -Delegates, extension methods and context bounds -allow a concise and natural expression of _typeclasses_. Typeclasses are just traits -with canonical implementations defined by delegates. Here are some examples of standard typeclasses: - -### Semigroups and monoids: - -```scala -trait SemiGroup[T] { - def (x: T) combine (y: T): T -} -trait Monoid[T] extends SemiGroup[T] { - def unit: T -} -object Monoid { - def apply[T] given Monoid[T] = the[Monoid[T]] -} - -delegate for Monoid[String] { - def (x: String) combine (y: String): String = x.concat(y) - def unit: String = "" -} - -delegate for Monoid[Int] { - def (x: Int) combine (y: Int): Int = x + y - def unit: Int = 0 -} - -def sum[T: Monoid](xs: List[T]): T = - xs.foldLeft(Monoid[T].unit)(_.combine(_)) -``` - -### Functors and monads: - -```scala -trait Functor[F[_]] { - def (x: F[A]) map [A, B] (f: A => B): F[B] -} - -trait Monad[F[_]] extends Functor[F] { - def (x: F[A]) flatMap [A, B] (f: A => F[B]): F[B] - def (x: F[A]) map [A, B] (f: A => B) = x.flatMap(f `andThen` pure) - - def pure[A](x: A): F[A] -} - -delegate ListMonad for Monad[List] { - def (xs: List[A]) flatMap [A, B] (f: A => List[B]): List[B] = - xs.flatMap(f) - def pure[A](x: A): List[A] = - List(x) -} - -delegate ReaderMonad[Ctx] for Monad[[X] => Ctx => X] { - def (r: Ctx => A) flatMap [A, B] (f: A => Ctx => B): Ctx => B = - ctx => f(r(ctx))(ctx) - def pure[A](x: A): Ctx => A = - ctx => x -} -``` diff --git a/docs/docs/reference/contextual-evidence/context-bounds.md b/docs/docs/reference/contextual-evidence/context-bounds.md deleted file mode 100644 index 3458c5cf6cd1..000000000000 --- a/docs/docs/reference/contextual-evidence/context-bounds.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: doc-page -title: "Context Bounds" ---- - -## Context Bounds - -A context bound is a shorthand for expressing a common pattern of an inferable parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this: -```scala -def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) -``` -A bound like `: Ord` on a type parameter `T` of a method or class indicates an inferable parameter `given Ord[T]`. The inferable parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., -```scala -def f[T: C1 : C2, U: C3](x: T) given (y: U, z: V): R -``` -would expand to -```scala -def f[T, U](x: T) given (y: U, z: V) given C1[T], C2[T], C3[U]: R -``` -Context bounds can be combined with subtype bounds. 
If both are present, subtype bounds come first, e.g. -```scala -def g[T <: B : C](x: T): R = ... -``` - -## Syntax - -``` -TypeParamBounds ::= [SubtypeBounds] {ContextBound} -ContextBound ::= ‘:’ Type -``` diff --git a/docs/docs/reference/contextual-evidence/conversions.md b/docs/docs/reference/contextual-evidence/conversions.md deleted file mode 100644 index 74aec78294b5..000000000000 --- a/docs/docs/reference/contextual-evidence/conversions.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: doc-page -title: "Implicit Conversions" ---- - -Implicit conversions are defined by evidence for the `scala.Conversion` class. -This class is defined in package `scala` as follows: -```scala -abstract class Conversion[-T, +U] extends (T => U) -``` -For example, here is an implicit conversion from `String` to `Token`: -```scala -evidence for Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) -} -``` -Using an evidence alias this can be expressed more concisely as: -```scala -evidence for Conversion[String, Token] = new KeyWord(_) -``` -An implicit conversion is applied automatically by the compiler in three situations: - -1. If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`. -2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`. -3. In an application `e.m(args)` with `e` of type `T`, if `T` does define - some member(s) named `m`, but none of these members can be applied to the arguments `args`. - -In the first case, the compiler looks for an evidence value of class -`scala.Conversion` that maps an argument of type `T` to type `S`. In the second and third -case, it looks for an evidence value of class `scala.Conversion` that maps an argument of type `T` -to a type that defines a member `m` which can be applied to `args` if present. -If such an instance `C` is found, the expression `e` is replaced by `C.apply(e)`. - -## Examples - -1. The `Predef` package contains "auto-boxing" conversions that map -primitive number types to subclasses of `java.lang.Number`. For instance, the -conversion from `Int` to `java.lang.Integer` can be defined as follows: -```scala -evidence int2Integer for Conversion[Int, java.lang.Integer] = - java.lang.Integer.valueOf(_) -``` - -2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g. -```scala -object Completions { - - // The argument "magnet" type - enum CompletionArg { - case Error(s: String) - case Response(f: Future[HttpResponse]) - case Status(code: Future[StatusCode]) - } - object CompletionArg { - - // conversions defining the possible arguments to pass to `complete` - // these always come with CompletionArg - // They can be invoked explicitly, e.g. - // - // CompletionArg.fromStatusCode(statusCode) - - evidence fromString for Conversion[String, CompletionArg] = Error(_) - evidence fromFuture for Conversion[Future[HttpResponse], CompletionArg] = Response(_) - evidence fromStatusCode for Conversion[Future[StatusCode], CompletionArg] = Status(_) - } - import CompletionArg._ - - def complete[T](arg: CompletionArg) = arg match { - case Error(s) => ... - case Response(f) => ... - case Status(code) => ...
- } -} -``` -This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants. diff --git a/docs/docs/reference/contextual-evidence/derivation.md b/docs/docs/reference/contextual-evidence/derivation.md deleted file mode 100644 index 33a7516ad8a3..000000000000 --- a/docs/docs/reference/contextual-evidence/derivation.md +++ /dev/null @@ -1,382 +0,0 @@ ---- -layout: doc-page -title: Typeclass Derivation ---- - -Typeclass derivation is a way to generate instances of certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are `Eql`, `Ordering`, `Show`, or `Pickling`. Example: -```scala -enum Tree[T] derives Eql, Ordering, Pickling { - case Branch(left: Tree[T], right: Tree[T]) - case Leaf(elem: T) -} -``` -The `derives` clause generates evidence for the `Eql`, `Ordering`, and `Pickling` traits in the companion object `Tree`: -```scala -evidence [T: Eql] for Eql[Tree[T]] = Eql.derived -evidence [T: Ordering] for Ordering[Tree[T]] = Ordering.derived -evidence [T: Pickling] for Pickling[Tree[T]] = Pickling.derived -``` - -### Deriving Types - -Besides enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are: - - - individual case classes or case objects - - sealed classes or traits that have only case classes and case objects as children. - - Examples: - - ```scala -case class Labelled[T](x: T, label: String) derives Eql, Show - -sealed trait Option[T] derives Eql -case class Some[T](x: T) extends Option[T] -case object None extends Option[Nothing] -``` - -The generated typeclass instances are placed in the companion objects `Labelled` and `Option`, respectively. - -### Derivable Types - -A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The type and implementation of a `derived` method are arbitrary, but typically it has a definition like this: -```scala - def derived[T] given Generic[T] = ... -``` -That is, the `derived` method takes an inferable parameter of type `Generic` that determines the _shape_ of the deriving type `T` and it computes the typeclass implementation according to that shape. Evidence for `Generic` is generated automatically for any type that derives a typeclass with a `derived` -method that refers to `Generic`. One can also derive `Generic` alone, which means a `Generic` instance is generated without any other type class instances. E.g.: -```scala -sealed trait ParseResult[T] derives Generic -``` -This is all a user of typeclass derivation has to know. The rest of this page contains information needed to be able to write a typeclass that can appear in a `derives` clause. In particular, it details the means provided for the implementation of data-generic `derived` methods. - -### The Shape Type - -For every class with a `derives` clause, the compiler computes the shape of that class as a type.
For example, here is the shape type for the `Tree[T]` enum: -```scala -Cases[( - Case[Branch[T], (Tree[T], Tree[T])], - Case[Leaf[T], T *: Unit] -)] -``` -Informally, this states that - -> The shape of a `Tree[T]` is one of two cases: Either a `Branch[T]` with two - elements of type `Tree[T]`, or a `Leaf[T]` with a single element of type `T`. - -The type constructors `Cases` and `Case` come from the companion object of a class -`scala.compiletime.Shape`, which is defined in the standard library as follows: -```scala -sealed abstract class Shape - -object Shape { - - /** A sum with alternative types `Alts` */ - case class Cases[Alts <: Tuple] extends Shape - - /** A product type `T` with element types `Elems` */ - case class Case[T, Elems <: Tuple] extends Shape -} -``` - -Here is the shape type for `Labelled[T]`: -```scala -Case[Labelled[T], (T, String)] -``` -And here is the one for `Option[T]`: -```scala -Cases[( - Case[Some[T], T *: Unit], - Case[None.type, Unit] -)] -``` -Note that an empty element tuple is represented as type `Unit`. A single-element tuple -is represented as `T *: Unit` since there is no direct syntax for such tuples: `(T)` is just `T` in parentheses, not a tuple. - -### The Generic Typeclass - -For every class `C[T_1,...,T_n]` with a `derives` clause, the compiler generates in the companion object of `C` evidence for `Generic[C[T_1,...,T_n]]` that follows the outline below: -```scala -evidence [T_1, ..., T_n] for Generic[C[T_1,...,T_n]] { - type Shape = ... - ... -} -``` -where the right hand side of `Shape` is the shape type of `C[T_1,...,T_n]`. -For instance, the definition -```scala -enum Result[+T, +E] derives Logging { - case class Ok[T](result: T) - case class Err[E](err: E) -} -``` -would produce: -```scala -object Result { - import scala.compiletime.Shape._ - - evidence [T, E] for Generic[Result[T, E]] { - type Shape = Cases[( - Case[Ok[T], T *: Unit], - Case[Err[E], E *: Unit] - )] - ... - } -} -``` -The `Generic` class is defined in package `scala.reflect`. - -```scala -abstract class Generic[T] { - type Shape <: scala.compiletime.Shape - - /** The mirror corresponding to ADT instance `x` */ - def reflect(x: T): Mirror - - /** The ADT instance corresponding to given `mirror` */ - def reify(mirror: Mirror): T - - /** The companion object of the ADT */ - def common: GenericClass -} -``` -It defines the `Shape` type for the ADT `T`, as well as two methods that map between a -type `T` and a generic representation of `T`, which we call a `Mirror`: -The `reflect` method maps an instance value of the ADT `T` to its mirror whereas -the `reify` method goes the other way. There's also a `common` method that returns -a value of type `GenericClass` which contains information that is the same for all -instances of a class (right now, this consists of the runtime `Class` value and -the names of the cases and their parameters). - -### Mirrors - -A mirror is a generic representation of an instance value of an ADT. `Mirror` objects have three components: - - - `adtClass: GenericClass`: The representation of the ADT class - - `ordinal: Int`: The ordinal number of the case among all cases of the ADT, starting from 0 - - `elems: Product`: The elements of the instance, represented as a `Product`. 
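For instance, here is a sketch of what reflecting a `Labelled` value could look like, assuming the `Generic` evidence described above is in scope and using the `Mirror` API shown below:

```scala
val gen = the[Generic[Labelled[Int]]]
val m = gen.reflect(Labelled(42, "answer"))

m.ordinal   // 0: Labelled consists of a single case
m(0)        // 42: the first element of the case
m(1)        // "answer": the second element of the case
```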
- - The `Mirror` class is defined in package `scala.reflect` as follows: - -```scala -class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) { - - /** The `n`'th element of this generic case */ - def apply(n: Int): Any = elems.productElement(n) - - /** The name of the constructor of the case reflected by this mirror */ - def caseLabel: String = adtClass.label(ordinal)(0) - - /** The label of the `n`'th element of the case reflected by this mirror */ - def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1) -} -``` - -### GenericClass - -Here's the API of `scala.reflect.GenericClass`: - -```scala -class GenericClass(val runtimeClass: Class[_], labelsStr: String) { - - /** A mirror of case with ordinal number `ordinal` and elements as given by `Product` */ - def mirror(ordinal: Int, product: Product): Mirror = - new Mirror(this, ordinal, product) - - /** A mirror with elements given as an array */ - def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror = - mirror(ordinal, new ArrayProduct(elems)) - - /** A mirror with an initial empty array of `numElems` elements, to be filled in. */ - def mirror(ordinal: Int, numElems: Int): Mirror = - mirror(ordinal, new Array[AnyRef](numElems)) - - /** A mirror of a case with no elements */ - def mirror(ordinal: Int): Mirror = - mirror(ordinal, EmptyProduct) - - /** Case and element labels as a two-dimensional array. - * Each row of the array contains a case label, followed by the labels of the elements of that case. - */ - val label: Array[Array[String]] = ... -} -``` - -The class provides four overloaded methods to create mirrors. The first of these is invoked by the `reflect` method that maps an ADT instance to its mirror. It simply passes the -instance itself (which is a `Product`) as the second argument of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of `mirror` are typically invoked by typeclass methods that create instances from mirrors. An example would be an `unpickle` method that first creates an array of elements, then creates -a mirror over that array, and finally uses the `reify` method in `Generic` to create the ADT instance. The fourth version of `mirror` is used to create mirrors of instances that do not have any elements. - -### How to Write Generic Typeclasses - -Based on the machinery developed so far it becomes possible to define type classes generically. This means that the `derived` method will compute a type class instance for any ADT that has a `Generic` instance, recursively. -The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and implicit matches. As an example, here is one possible implementation of a generic `Eql` type class, with explanations. Let's assume `Eql` is defined by the following trait: -```scala -trait Eql[T] { - def eql(x: T, y: T): Boolean -} -``` -We need to implement a method `Eql.derived` that produces an instance of `Eql[T]` provided -there exists evidence of type `Generic[T]`.
Here's a possible solution: -```scala - inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] { - def eql(x: T, y: T): Boolean = { - val mx = ev.reflect(x) // (1) - val my = ev.reflect(y) // (2) - inline erasedValue[ev.Shape] match { - case _: Cases[alts] => - mx.ordinal == my.ordinal && // (3) - eqlCases[alts](mx, my, 0) // [4] - case _: Case[_, elems] => - eqlElems[elems](mx, my, 0) // [5] - } - } - } -``` -The implementation of the inline method `derived` creates an instance of `Eql[T]` and implements its `eql` method. The right-hand side of `eql` mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e. `(1)`, `(2)`, `(3)`. Compile-time calls that expand to runtime code are marked with a number in brackets, i.e. `[4]`, `[5]`. The implementation of `eql` consists of the following steps. - - 1. Map the compared values `x` and `y` to their mirrors using the `reflect` method of the implicitly passed `Generic` evidence `(1)`, `(2)`. - 2. Match at compile-time against the shape of the ADT given in `ev.Shape`. Dotty does not have a construct for matching types directly, but we can emulate it using an `inline` match over an `erasedValue`. Depending on the actual type `ev.Shape`, the match will reduce at compile time to one of its two alternatives. - 3. If `ev.Shape` is of the form `Cases[alts]` for some tuple `alts` of alternative types, the equality test consists of comparing the ordinal values of the two mirrors `(3)` and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the `eqlCases` call `[4]`. - 4. If `ev.Shape` is of the form `Case[_, elems]` for some tuple `elems` of element types, the elements of the case are compared by code that results from the compile-time expansion of the `eqlElems` call `[5]`. - -Here is a possible implementation of `eqlCases`: -```scala - inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean = - inline erasedValue[Alts] match { - case _: (Shape.Case[_, elems] *: alts1) => - if (mx.ordinal == n) // (6) - eqlElems[elems](mx, my, 0) // [7] - else - eqlCases[alts1](mx, my, n + 1) // [8] - case _: Unit => - throw new MatchError(mx.ordinal) // (9) - } -``` -The inline method `eqlCases` takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances `x` and `y` to be compared and an integer `n` that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values. - -If the list of alternatives `Alts` consists of a case of type `Case[_, elems]`, possibly followed by further cases in `alts1`, we generate the following code: - - 1. Compare the `ordinal` value of `mx` (a runtime value) with the case number `n` (a compile-time value translated to a constant in the generated code) in an if-then-else `(6)`. - 2. In the then-branch of the conditional we have that the `ordinal` value of both mirrors - matches the number of the case with elements `elems`. Proceed by comparing the elements - of the case in code expanded from the `eqlElems` call `[7]`. - 3. In the else-branch of the conditional we have that the present case does not match - the ordinal value of both mirrors. Proceed by trying the remaining cases in `alts1` using - code expanded from the `eqlCases` call `[8]`.
- - If the list of alternatives `Alts` is the empty tuple, there are no further cases to check. - This place in the code should not be reachable at runtime. Therefore an appropriate - implementation is to throw a `MatchError` or some other runtime exception `(9)`. - -The `eqlElems` method compares the elements of two mirrors that are known to have the same -ordinal number, which means they represent the same case of the ADT. Here is a possible -implementation: -```scala - inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = - inline erasedValue[Elems] match { - case _: (elem *: elems1) => - tryEql[elem]( // [12] - xs(n).asInstanceOf[elem], // (10) - ys(n).asInstanceOf[elem]) && // (11) - eqlElems[elems1](xs, ys, n + 1) // [13] - case _: Unit => - true // (14) - } -``` -`eqlElems` takes as arguments the two mirrors of the elements to compare and a compile-time index `n`, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type `Elems` of all element types that remain to be tested. If that type is -non-empty, say of form `elem *: elems1`, the following code is produced: - - 1. Access the `n`'th elements of both mirrors and cast them to the current element type `elem` - `(10)`, `(11)`. Note that because of the way runtime reflection mirrors compile-time `Shape` types, the casts are guaranteed to succeed. - 2. Compare the element values using code expanded by the `tryEql` call `[12]`. - 3. "And" the result with code that compares the remaining elements using a recursive call - to `eqlElems` `[13]`. - - If type `Elems` is empty, there are no more elements to be compared, so the comparison's result is `true`. `(14)` - - Since `eqlElems` is an inline method, its recursive calls are unrolled. The end result is a conjunction `test_1 && ... && test_n && true` of test expressions produced by the `tryEql` calls. - -The last, and in a sense most interesting, part of the derivation is the comparison of a pair of element values in `tryEql`. Here is the definition of this method: -```scala - inline def tryEql[T](x: T, y: T) = implicit match { - case ev: Eql[T] => - ev.eql(x, y) // (15) - case _ => - error("No `Eql` instance was found for $T") - } -``` -`tryEql` is an inline method that takes an element type `T` and two element values of that type as arguments. It is defined using an `implicit match` that tries to find evidence for `Eql[T]`. If an instance `ev` is found, it proceeds by comparing the arguments using `ev.eql`. On the other hand, if no instance is found, this signals a compilation error: the user tried a generic derivation of `Eql` for a class with an element type that does not support an `Eql` instance itself. The error is signaled by -calling the `error` method defined in `scala.compiletime`. - -**Note:** At the moment our error diagnostics for metaprogramming does not yet support interpolated string arguments for the `scala.compiletime.error` method that is called in the second case above. As an alternative, one can simply leave off the second case; then a missing typeclass would result in a "failure to reduce match" error. - -**Example:** Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived `Eql` instance of class `Tree`.
- -```scala -evidence [T] given (elemEq: Eql[T]) for Eql[Tree[T]] { - def eql(x: Tree[T], y: Tree[T]): Boolean = { - val ev = the[Generic[Tree[T]]] - val mx = ev.reflect(x) - val my = ev.reflect(y) - mx.ordinal == my.ordinal && { - if (mx.ordinal == 0) { - this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) && - this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]]) - } - else if (mx.ordinal == 1) { - elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T]) - } - else throw new MatchError(mx.ordinal) - } - } -} -``` - -One important difference between this approach and Scala-2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass instances of elements recursively using the generic derivation framework. There must be an evidence value of type `Eql[T]` (which can of course be produced in turn using `Eql.derived`), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable. - -### Derived Instances Elsewhere - -Sometimes one would like to derive a typeclass instance for an ADT after the ADT is defined, without being able to change the code of the ADT itself. -To do this, simply define an instance with the `derived` method of the typeclass as right-hand side. E.g., to implement `Ordering` for `Option`, define: -```scala -evidence [T: Ordering] for Ordering[Option[T]] = Ordering.derived -``` -Usually, the `Ordering.derived` method has an inferable parameter of type -`Generic[Option[T]]`. Since the `Option` trait has a `derives` clause, -the necessary evidence is already present in the companion object of `Option`. -If the ADT in question does not have a `derives` clause, evidence for `Generic` -would still be synthesized by the compiler at the point where `derived` is called. -This is similar to the situation with type tags or class tags: If no evidence is found, the compiler will synthesize it. - -### Syntax - -``` -Template ::= InheritClauses [TemplateBody] -EnumDef ::= id ClassConstr InheritClauses EnumBody -InheritClauses ::= [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}] -ConstrApps ::= ConstrApp {‘with’ ConstrApp} - | ConstrApp {‘,’ ConstrApp} -``` - -### Discussion - -The typeclass derivation framework is quite small and low-level. There are essentially -two pieces of infrastructure in the compiler-generated `Generic` instances: - - - a type representing the shape of an ADT, - - a way to map between ADT instances and generic mirrors. - -Generic mirrors make use of the already existing `Product` infrastructure for case -classes, which means they are efficient and their generation does not require much code. - -Generic mirrors can be so simple because, just like `Product`s, they are weakly -typed. On the other hand, this means that code for generic typeclasses has to -ensure that type exploration and value selection proceed in lockstep and it -has to assert this conformance in some places using casts. If generic typeclasses -are correctly written these casts will never fail. - -It could make sense to explore a higher-level framework that encapsulates all casts -in the framework. This could give more guidance to the typeclass implementer. -It also seems quite possible to put such a framework on top of the lower-level -mechanisms presented here.
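As a usage sketch, the derived evidence behaves as one would expect when applied to `Tree` values (assuming the `Tree` enum and the generated evidence from above are in scope):

```scala
val t1: Tree[Int] = Tree.Branch(Tree.Leaf(1), Tree.Leaf(2))
val t2: Tree[Int] = Tree.Branch(Tree.Leaf(1), Tree.Leaf(2))
val t3: Tree[Int] = Tree.Leaf(3)

the[Eql[Tree[Int]]].eql(t1, t2)  // true: equal ordinals, equal elements
the[Eql[Tree[Int]]].eql(t1, t3)  // false: Branch and Leaf have different ordinals
```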
diff --git a/docs/docs/reference/contextual-evidence/extension-methods.md b/docs/docs/reference/contextual-evidence/extension-methods.md deleted file mode 100644 index a13bede393dc..000000000000 --- a/docs/docs/reference/contextual-evidence/extension-methods.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -layout: doc-page -title: "Extension Methods" ---- - -Extension methods allow one to add methods to a type after the type is defined. Example: - -```scala -case class Circle(x: Double, y: Double, radius: Double) - -def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` - -Like regular methods, extension methods can be invoked with infix `.`: - -```scala - val circle = Circle(0, 0, 1) - circle.circumference -``` - -### Translation of Extension Methods - -Extension methods are methods that have a parameter clause in front of the defined -identifier. They translate to methods where the leading parameter section is moved -to after the defined identifier. So, the definition of `circumference` above translates -to the following plain method, and can also be invoked as such: -```scala -def circumference(c: Circle): Double = c.radius * math.Pi * 2 - -assert(circle.circumference == circumference(circle)) -``` - -### Translation of Calls to Extension Methods - -When is an extension method applicable? There are two possibilities. - - - An extension method is applicable if it is visible under a simple name, by being defined - or inherited or imported in a scope enclosing the application. - - An extension method is applicable if it is a member of some evidence value at the point of the application. - -As an example, consider an extension method `longestStrings` on `Seq[String]` defined in a trait `StringSeqOps`. - -```scala -trait StringSeqOps { - def (xs: Seq[String]) longestStrings = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} -``` -We can make the extension method available by defining evidence for `StringSeqOps`, like this: -```scala -evidence ops1 for StringSeqOps -``` -Then -```scala -List("here", "is", "a", "list").longestStrings -``` -is legal everywhere `ops1` is available as evidence. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. - -```scala -object ops2 extends StringSeqOps -import ops2.longestStrings -List("here", "is", "a", "list").longestStrings -``` -The precise rules for resolving a selection to an extension method are as follows. - -Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional, -and where `T` is the expected type. The following two rewritings are tried in order: - - 1. The selection is rewritten to `m[Ts](e)`. - 2. If the first rewriting does not typecheck with expected type `T`, and there is evidence `i` - in either the current scope or in the evidence scope of `T`, and `i` defines an extension - method named `m`, then the selection is expanded to `i.m[Ts](e)`. - This second rewriting is attempted at the time where the compiler also tries an implicit conversion - from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results. - -So, if the `circumference` method were defined in an evidence value `CircleOps`, `circle.circumference` would translate to `CircleOps.circumference(circle)`, provided -`circle` has type `Circle` and `CircleOps` is eligible (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`).
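Spelled out on the running examples, the two rewritings produce the following expansions (a sketch of what the rules above generate):

```scala
// Rule 1: circumference is visible under a simple name, so
circle.circumference
// is rewritten to
circumference(circle)

// Rule 2: longestStrings is a member of the evidence value ops1, so
List("here", "is", "a", "list").longestStrings
// is rewritten to
ops1.longestStrings(List("here", "is", "a", "list"))
```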
- -### Evidence for Extension Methods - -Evidence that defines extension methods can also be defined without a `for` clause. E.g., - -```scala -evidence StringOps { - def (xs: Seq[String]) longestStrings: Seq[String] = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} - -evidence { - def (xs: List[T]) second[T] = xs.tail.head -} -``` -If such evidence is anonymous (as in the second example above), its name is synthesized from the name -of the first defined extension method. - -### Operators - -The extension method syntax also applies to the definition of operators. -In each case the definition syntax mirrors the way the operator is applied. -Examples: -```scala - def (x: String) < (y: String) = ... - def (x: Elem) +: (xs: Seq[Elem]) = ... - - "ab" < "c" - 1 +: List(2, 3) -``` -The two definitions above translate to -```scala - def < (x: String)(y: String) = ... - def +: (xs: Seq[Elem])(x: Elem) = ... -``` -Note the swap of the two parameters `x` and `xs` when translating -the right-binding operator `+:` to an extension method. This is analogous -to the implementation of right-binding operators as normal methods. - -### Generic Extensions - -The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: - -```scala -def (xs: List[T]) second [T] = - xs.tail.head - -def (xs: List[List[T]]) flattened [T] = - xs.foldLeft[List[T]](Nil)(_ ++ _) - -def (x: T) + [T : Numeric](y: T): T = - the[Numeric[T]].plus(x, y) -``` - -As usual, type parameters of the extension method follow the defined method name. Nevertheless, such type parameters can already be used in the preceding parameter clause. - - -### Syntax - -The required syntax extension just adds one clause for extension methods relative -to the [current syntax](https://github.com/lampepfl/dotty/blob/master/docs/docs/internals/syntax.md). -``` -DefSig ::= ... - | ‘(’ DefParam ‘)’ [nl] id [DefTypeParamClause] DefParamClauses -``` - - - - diff --git a/docs/docs/reference/contextual-evidence/import-implied.md b/docs/docs/reference/contextual-evidence/import-implied.md deleted file mode 100644 index ff044cee7b67..000000000000 --- a/docs/docs/reference/contextual-evidence/import-implied.md +++ /dev/null @@ -1,51 +0,0 @@ ---- -layout: doc-page -title: "Evidence Imports" ---- - -A special form of import is used to import evidence values. Example: -```scala -object A { - class TC - evidence tc for TC - def f given TC = ??? -} -object B { - import A._ - import evidence A._ -} -``` -In the code above, the `import A._` clause of object `B` will import all members -of `A` _except_ the evidence `tc`. Conversely, the second import `import evidence A._` will import _only_ that evidence. - -Generally, a normal import clause brings all members except evidence values into scope whereas an `import evidence` clause brings only evidence values into scope. - -There are two main benefits arising from these rules: - - - It is made clearer where evidence values in scope are coming from. In particular, it is not possible to hide imported evidence values in a long list of regular imports. - - It enables importing all evidence values - without importing anything else. This is particularly important since evidence - values can be anonymous, so the usual recourse of using named imports is not - practical.
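To sketch why anonymity matters here, consider evidence definitions without names; a normal named import cannot refer to them, but an evidence import brings them into scope (`Ord` is assumed from the earlier sections):

```scala
object Instances {
  evidence for Ord[Int] { ... }   // anonymous: there is no stable name to import
  evidence listOrd[T] for Ord[List[T]] given Ord[T] { ... }
}

// import Instances.???           // no name by which to import the anonymous evidence
import evidence Instances._       // brings both evidence values into scope
```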
### Relationship with Old-Style Implicits

The rules of evidence imports above have the consequence that a library
would have to migrate in lockstep with all its users from old style implicit definitions and
normal imports to evidence definitions and evidence imports.

The following modifications avoid this hurdle to migration.

 1. An evidence import also brings old style implicits into scope. So, in Scala 3.0
    an old-style implicit definition can be brought into scope either by a normal or
    by an evidence import.

 2. In Scala 3.1, an old-style implicit accessed implicitly through a normal import
    will give a deprecation warning.

 3. In some version after 3.1, an old-style implicit accessed implicitly through a normal import
    will give a compiler error.

These rules mean that library users can use `import evidence` to access old-style implicits in Scala 3.0,
and will be gently nudged and then forced to do so in later versions. Libraries can then switch to
evidence definitions once their user base has migrated.

diff --git a/docs/docs/reference/contextual-evidence/inferable-by-name-parameters.md b/docs/docs/reference/contextual-evidence/inferable-by-name-parameters.md
deleted file mode 100644
index 040a1d92de01..000000000000
--- a/docs/docs/reference/contextual-evidence/inferable-by-name-parameters.md
+++ /dev/null
@@ -1,66 +0,0 @@
---
layout: doc-page
title: "Inferable By-Name Parameters"
---

Inferable by-name parameters can be used to avoid a divergent inferred expansion. Example:

```scala
trait Codec[T] {
  def write(x: T): Unit
}

evidence intCodec for Codec[Int] = ???

evidence optionCodec[T] given (ev: => Codec[T]) for Codec[Option[T]] {
  def write(xo: Option[T]) = xo match {
    case Some(x) => ev.write(x)
    case None =>
  }
}

val s = the[Codec[Option[Int]]]

s.write(Some(33))
s.write(None)
```
As is the case for a normal by-name parameter, the argument for the inferable parameter `ev`
is evaluated on demand. In the example above, if the option value `xo` is `None`, it is
not evaluated at all.

The synthesized argument for an inferable parameter is backed by a local val
if this is necessary to prevent an otherwise diverging expansion.

The precise steps for constructing an inferable argument for a by-name parameter of type `=> T` are as follows.

 1. Create a new evidence value of type `T`:

    ```scala
    evidence lv for T = ???
    ```
    where `lv` is an arbitrary fresh name.

 1. This instance is not immediately available as a candidate for argument inference (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an inferred argument to a by-name parameter.

 1. If this search succeeds with expression `E`, and `E` contains references to the evidence `lv`, replace `E` by

    ```scala
    { evidence lv for T = E; lv }
    ```

    Otherwise, return `E` unchanged.

In the example above, the definition of `s` would be expanded as follows.

```scala
val s = the[Test.Codec[Option[Int]]](
  optionCodec[Int](intCodec))
```

No local instance was generated because the synthesized argument is not recursive.

### Reference

For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998)
and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html).
diff --git a/docs/docs/reference/contextual-evidence/inferable-params.md b/docs/docs/reference/contextual-evidence/inferable-params.md
deleted file mode 100644
index 2e71d5412bc5..000000000000
--- a/docs/docs/reference/contextual-evidence/inferable-params.md
+++ /dev/null
@@ -1,105 +0,0 @@
---
layout: doc-page
title: "Given Clauses"
---

Functional programming tends to express most dependencies as simple function parameterization.
This is clean and powerful, but it sometimes leads to functions that take many parameters and
call trees where the same value is passed over and over again in long call chains to many
functions. Given clauses can help here since they enable the compiler to synthesize
repetitive arguments instead of the programmer having to write them explicitly.

For example, given the [evidence definitions](./instance-defs.md) of the previous section,
a maximum function that works for any arguments for which an ordering exists can be defined as follows:
```scala
def max[T](x: T, y: T) given (ord: Ord[T]): T =
  if (ord.compare(x, y) < 1) y else x
```
Here, the part following `given` introduces a constraint that `T` is ordered, or, otherwise put, that evidence for `Ord[T]` exists. The evidence is passed as an _implicit parameter_ to the method. Inside the method, the evidence value can be accessed under the name `ord`.

The `max` method can be applied as follows:
```scala
max(2, 3) given IntOrd
```
The `given IntOrd` part establishes `IntOrd` as the evidence to satisfy the constraint `Ord[Int]`. It does this by providing the `IntOrd` value as an argument for the implicit `ord` parameter. But the point of implicit parameters is that this argument can also be left out (and it usually is). So the following applications are equally valid:
```scala
max(2, 3)
max(List(1, 2, 3), Nil)
```

## Anonymous Inferable Parameters

In many situations, the name of an evidence parameter of a method need not be mentioned explicitly at all, since it is only used as synthesized evidence for other constraints. In that case one can avoid defining a parameter name and just provide its type. Example:
```scala
def maximum[T](xs: List[T]) given Ord[T]: T =
  xs.reduceLeft(max)
```
`maximum` takes an evidence parameter of type `Ord` only to pass it on as an implicit argument to `max`. The name of the parameter is left out.

Generally, evidence parameters may be given either as a parameter list `(p_1: T_1, ..., p_n: T_n)` or as a sequence of types, separated by commas.

## Inferring Complex Arguments

Here are two other methods that require evidence of type `Ord[T]`:
```scala
def descending[T] given (asc: Ord[T]): Ord[T] = new Ord[T] {
  def compare(x: T, y: T) = asc.compare(y, x)
}

def minimum[T](xs: List[T]) given Ord[T] =
  maximum(xs) given descending
```
The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`.
With this setup, the following calls are all well-formed, and they all normalize to the last one:
```scala
minimum(xs)
maximum(xs) given descending
maximum(xs) given (descending given ListOrd)
maximum(xs) given (descending given (ListOrd given IntOrd))
```

## Mixing Inferable And Normal Parameters

Inferable parameters can be freely mixed with normal parameters.
An inferable parameter may be followed by a normal parameter and _vice versa_.
There can be several inferable parameter lists in a definition. Example:
```scala
def f given (u: Universe) (x: u.T) given Context = ...
evidence global for Universe { type T = String ... }
evidence ctx for Context { ... }
```
Then the following calls are all valid (and normalize to the last one):
```scala
f("abc")
(f given global)("abc")
f("abc") given ctx
(f given global)("abc") given ctx
```

## Summoning the Evidence

A method `the` in `Predef` summons the evidence for a given type. For example, the evidence for `Ord[List[Int]]` is generated by
```scala
the[Ord[List[Int]]]  // reduces to ListOrd given IntOrd
```
The `the` method is simply defined as the (non-widening) identity function over an evidence parameter.
```scala
def the[T] given (x: T): x.type = x
```
Functions like `the` that have only evidence parameters are also called _context queries_.

## Syntax

Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html).
```
ClsParamClause    ::=  ...
                    |  ‘given’ (‘(’ [ClsParams] ‘)’ | GivenTypes)
DefParamClause    ::=  ...
                    |  GivenParamClause
GivenParamClause  ::=  ‘given’ (‘(’ DefParams ‘)’ | GivenTypes)
GivenTypes        ::=  AnnotType {‘,’ AnnotType}

InfixExpr         ::=  ...
                    |  InfixExpr ‘given’ (InfixExpr | ParArgumentExprs)
```

diff --git a/docs/docs/reference/contextual-evidence/instance-defs.md b/docs/docs/reference/contextual-evidence/instance-defs.md
deleted file mode 100644
index 869332c76e46..000000000000
--- a/docs/docs/reference/contextual-evidence/instance-defs.md
+++ /dev/null
@@ -1,77 +0,0 @@
---
layout: doc-page
title: "Evidence Definitions"
---

Evidence definitions define "canonical" values of given types
that can be synthesized by the compiler. Typically, such values are
used as evidence for constraints in [given clauses](./inferable-params.html). Example:

```scala
trait Ord[T] {
  def compare(x: T, y: T): Int
  def (x: T) < (y: T) = compare(x, y) < 0
  def (x: T) > (y: T) = compare(x, y) > 0
}

evidence IntOrd for Ord[Int] {
  def compare(x: Int, y: Int) =
    if (x < y) -1 else if (x > y) +1 else 0
}

evidence ListOrd[T] given (ord: Ord[T]) for Ord[List[T]] {
  def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match {
    case (Nil, Nil) => 0
    case (Nil, _) => -1
    case (_, Nil) => +1
    case (x :: xs1, y :: ys1) =>
      val fst = ord.compare(x, y)
      if (fst != 0) fst else compare(xs1, ys1)
  }
}
```
This code defines a trait `Ord` and two evidence definitions. `IntOrd` defines
evidence for the type `Ord[Int]` whereas `ListOrd[T]` defines evidence for `Ord[List[T]]`
for all types `T` that come with evidence for `Ord[T]` themselves.
The `given` clause in `ListOrd` defines an [evidence parameter](./inferable-params.html).
Given clauses are further explained in the next section.

## Anonymous Evidence Definitions

The name of a defined evidence can be left out. So the evidence definitions
of the last section can also be expressed like this:
```scala
evidence for Ord[Int] { ... }
evidence [T] given (ord: Ord[T]) for Ord[List[T]] { ... }
```
If a name is not given, the compiler will synthesize one from the type(s) in the `for` clause.

## Evidence Aliases

An evidence alias defines an evidence value that is equal to some expression. E.g., assuming a global method `currentThreadPool` returning a value with a member `context`, one could define:
```scala
evidence ctx for ExecutionContext = currentThreadPool().context
```
This creates an evidence `ctx` of type `ExecutionContext` that resolves to the right hand side `currentThreadPool().context`.
Each time an evidence for `ExecutionContext` is demanded, the result of evaluating the right-hand side expression is returned.

Alias instances may be anonymous, e.g.
```scala
evidence for Position = enclosingTree.position
```
An evidence alias can have type and context parameters just like any other evidence definition, but it can only implement a single type.

## Syntax

Here is the new syntax of evidence definitions, seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html).
```
TmplDef           ::=  ...
                    |  ‘evidence’ EvidenceDef
EvidenceDef       ::=  [id] EvidenceParams EvidenceBody
EvidenceParams    ::=  [DefTypeParamClause] {GivenParamClause}
GivenParamClause  ::=  ‘given’ (‘(’ [DefParams] ‘)’ | GivenTypes)
EvidenceBody      ::=  [‘for’ ConstrApp {‘,’ ConstrApp }] [TemplateBody]
                    |  ‘for’ Type ‘=’ Expr
GivenTypes        ::=  AnnotType {‘,’ AnnotType}
```
The identifier `id` can be omitted only if either the `for` part or the template body is present.
If the `for` part is missing, the template body must define at least one extension method.

diff --git a/docs/docs/reference/contextual-evidence/motivation.md b/docs/docs/reference/contextual-evidence/motivation.md
deleted file mode 100644
index 1888532b7fae..000000000000
--- a/docs/docs/reference/contextual-evidence/motivation.md
+++ /dev/null
@@ -1,81 +0,0 @@
---
layout: doc-page
title: "Overview"
---

### Critique of the Status Quo

Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them.

Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit. E.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164)
or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda.

Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly.

Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses.

Particular criticisms are:
 1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions

    ```scala
    implicit def i1(implicit x: T): C[T] = ...
    implicit def i2(x: T): C[T] = ...
    ```

    the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. But the problem remains that syntactically, conversions and values just look too similar for comfort.

 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports.

 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above.

 4. The syntax of implicit parameters also has shortcomings. It starts with the position of `implicit` as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This represents an irregular case with respect to the rest of Scala's syntax. Furthermore, while implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in
    ```scala
    def currentMap(implicit ctx: Context): Map[String, Int]
    ```
    one cannot write `currentMap("abc")` since the string "abc" is taken as explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced.

 5. Implicits pose challenges for tooling.
    The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain.

None of these shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be.

Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006 and picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones.

Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design.

### The New Design

The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes:

 1. [Evidence Definitions](./instance-defs.html) are a new way to define inferable terms. They replace implicit definitions. The core principle of the proposal is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types.

 2. [Given Clauses](./inferable-params.html) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts.

 3. [Evidence Imports](./import-implied.html) are a new form of import that specifically imports implicit definitions and nothing else. New-style evidence definitions _must be_ imported with `import evidence`, a plain import will no longer bring them into scope. Old-style implicit definitions can be imported with either form.

 4. [Implicit Conversions](./conversions.html) are now expressed as evidence values of a standard `Conversion` class. All other forms of implicit conversions will be phased out.

This section also contains pages describing other language features that are related to context abstraction. These are:

 - [Context Bounds](./context-bounds.html), which carry over unchanged.
 - [Extension Methods](./extension-methods.html) replace implicit classes in a way that integrates better with typeclasses.
 - [Implementing Typeclasses](./typeclasses.html) demonstrates how some common typeclasses can be implemented using the new constructs.
 - [Typeclass Derivation](./derivation.html) introduces constructs to automatically derive typeclasses for ADTs.
 - [Multiversal Equality](./multiversal-equality.html) introduces a special typeclass
   to support type safe equality.
 - [Context Queries](./query-types.html) _aka_ implicit function types introduce a way to abstract over implicit parameterization.
 - [Inferable By-Name Parameters](./inferable-by-name-parameters.html) are an essential tool to define recursive implicits without looping.
 - [Relationship with Scala 2 Implicits](./relationship-implicits.html) discusses the relationship between old-style and new-style implicits and how to migrate from one to the other.

Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define evidence instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit with normal arguments. There is a separate way to import evidence that does not allow it to be hidden in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax.

This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly.

Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I believe now that this is impossible.

 - First, some of the problems are clearly syntactic and require different syntax to solve them.
 - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution.
 - Third, even if we could somehow succeed with migration, we still have the problem of how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away, and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have
   to add modifications and qualifications to all existing literature and courseware, likely causing more confusion with beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated.

diff --git a/docs/docs/reference/contextual-evidence/multiversal-equality.md b/docs/docs/reference/contextual-evidence/multiversal-equality.md
deleted file mode 100644
index 0e9da020b762..000000000000
--- a/docs/docs/reference/contextual-evidence/multiversal-equality.md
+++ /dev/null
@@ -1,217 +0,0 @@
---
layout: doc-page
title: "Multiversal Equality"
---

Previously, Scala had universal equality: Two values of any types
could be compared with each other with `==` and `!=`. This came from
the fact that `==` and `!=` are implemented in terms of Java's
`equals` method, which can also compare values of any two reference
types.

Universal equality is convenient.
But it is also dangerous since it
undermines type safety. For instance, let's assume one is left after some refactoring
with an erroneous program where a value `y` has type `S` instead of the correct type `T`.

```scala
val x = ...   // of type T
val y = ...   // of type S, but should be T
x == y        // typechecks, will always yield false
```

If all the program does with `y` is compare it to other values of type `T`, the program will still typecheck, since values of all types can be compared with each other.
But it will probably give unexpected results and fail at runtime.

Multiversal equality is an opt-in way to make universal equality
safer. It uses a binary typeclass `Eql` to indicate that values of
two given types can be compared with each other.
The example above would not typecheck if `S` or `T` were a class
that derives `Eql`, e.g.
```scala
class T derives Eql
```
Alternatively, one can also provide the derived evidence directly, like this:
```scala
evidence for Eql[T, T] = Eql.derived
```
This definition effectively says that values of type `T` can (only) be
compared to other values of type `T` when using `==` or `!=`. The definition
affects type checking but it has no significance for runtime
behavior, since `==` always maps to `equals` and `!=` always maps to
the negation of `equals`. The right hand side `Eql.derived` of the definition
is a value that has any `Eql` instance as its type. Here is the definition of class
`Eql` and its companion object:
```scala
package scala
import annotation.implicitNotFound

@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=")
sealed trait Eql[-L, -R]

object Eql {
  object derived extends Eql[Any, Any]
}
```

One can have several `Eql` instances for a type. For example, the four
definitions below make values of type `A` and type `B` comparable with
each other, but not comparable to anything else:

```scala
evidence for Eql[A, A] = Eql.derived
evidence for Eql[B, B] = Eql.derived
evidence for Eql[A, B] = Eql.derived
evidence for Eql[B, A] = Eql.derived
```
The `scala.Eql` object defines a number of `Eql` instances that together
define a rule book for what standard types can be compared (more details below).

There's also a "fallback" instance named `eqlAny` that allows comparisons
over all types that do not themselves have an `Eql` instance. `eqlAny` is
defined as follows:

```scala
def eqlAny[L, R]: Eql[L, R] = Eql.derived
```

Even though `eqlAny` is not declared as `evidence`, the compiler will still
construct an `eqlAny` instance as answer to an implicit search for the
type `Eql[L, R]`, unless `L` or `R` have `Eql` instances
defined on them, or the language feature `strictEquality` is enabled.

The primary motivation for having `eqlAny` is backwards compatibility. If this is of no concern, one can disable `eqlAny` by enabling the language
feature `strictEquality`. As with all language features, this can be done either with an import

```scala
import scala.language.strictEquality
```
or with a command line option `-language:strictEquality`.

## Deriving Eql Instances

Instead of defining `Eql` instances directly, it is often more convenient to derive them.
Example:
```scala
class Box[T](x: T) derives Eql
```
By the usual rules of [typeclass derivation](./derivation.html),
this generates the following `Eql` instance in the companion object of `Box`:
```scala
evidence [T, U] given Eql[T, U] for Eql[Box[T], Box[U]] = Eql.derived
```
That is, two boxes are comparable with `==` or `!=` if their elements are. Examples:
```scala
new Box(1) == new Box(1L)   // ok since there is evidence for `Eql[Int, Long]`
new Box(1) == new Box("a")  // error: can't compare
new Box(1) == 1             // error: can't compare
```

## Precise Rules for Equality Checking

The precise rules for equality checking are as follows.

If the `strictEquality` feature is enabled then
a comparison using `x == y` or `x != y` between values `x: T` and `y: U`
is legal if

 1. there is an evidence for `Eql[T, U]`, or
 2. one of `T`, `U` is `Null`.

In the default case where the `strictEquality` feature is not enabled the comparison is
also legal if

 1. `T` and `U` are the same, or
 2. one of `T` and `U` is a subtype of the _lifted_ version of the other type, or
 3. neither `T` nor `U` have a _reflexive `Eql` instance_.

Explanations:

 - _lifting_ a type `S` means replacing all references to abstract types
   in covariant positions of `S` by their upper bound, and replacing
   all refinement types in covariant positions of `S` by their parent.
 - a type `T` has a _reflexive `Eql` instance_ if the implicit search for `Eql[T, T]`
   succeeds.

## Predefined Eql Instances

The `Eql` object defines evidence for comparing
 - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`,
 - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`,
 - `scala.collection.Seq`, and `scala.collection.Set`.

Evidence is defined so that every one of these types has reflexive `Eql` evidence, and the following holds:

 - Primitive numeric types can be compared with each other.
 - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_).
 - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_).
 - `Char` can be compared with `java.lang.Character` (and _vice versa_).
 - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared
   with each other if their element types can be compared. The two sequence types
   need not be the same.
 - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared
   with each other if their element types can be compared. The two set types
   need not be the same.
 - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_).

## Why Two Type Parameters?

One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional
implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before,
we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example.

Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was:
```scala
class List[+T] {
  ...
  def contains(x: Any): Boolean
}
```
That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition
```scala
  def contains(x: T): Boolean
```
does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound:
```scala
  def contains[U >: T](x: U): Boolean
```
This generic version of `contains` is the one used in the current (Scala 2.12) version of `List`.
It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with.
However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter:
```scala
  def contains[U >: T](x: U) given Eql[T, U]: Boolean // (1)
```
This version of `contains` is equality-safe! More precisely, given
`x: T`, `xs: List[T]` and `y: U`, the call `xs.contains(y)` is type-correct if and only if
`x == y` is type-correct.

Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class:
```scala
  def contains[U >: T](x: U) given Eql1[U]: Boolean // (2)
```
This version could be applied just as widely as the original `contains(x: Any)` method,
since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` has a reflexive `Eql` instance. That rule simply cannot be expressed if there is a single type parameter for `Eql`.

The situation is different under `-language:strictEquality`. In that case,
the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the
single and two-parameter versions would indeed coincide for most practical purposes.

But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be insurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out nonsensical applications while still allowing all sensible ones.
So it can be done almost at any time, modulo binary compatibility concerns.
On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains`
unusable for all types that have not yet declared an `Eql1` instance, including all
types coming from Java. This is clearly unacceptable. It would lead to a situation where,
rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease.

For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality.
In applications where `-language:strictEquality` is the default, one could also introduce a one-parameter type alias such as
```scala
type Eq[-T] = Eql[T, T]
```
Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only
work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere.

More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html)
and a [GitHub issue](https://github.com/lampepfl/dotty/issues/1247).

diff --git a/docs/docs/reference/contextual-evidence/query-types-spec.md b/docs/docs/reference/contextual-evidence/query-types-spec.md
deleted file mode 100644
index 67c627ce79f4..000000000000
--- a/docs/docs/reference/contextual-evidence/query-types-spec.md
+++ /dev/null
@@ -1,79 +0,0 @@
---
layout: doc-page
title: "Context Query Types - More Details"
---

## Syntax

    Type ::= ...
           | `given' FunArgTypes `=>' Type
    Expr ::= ...
           | `given' FunParams `=>' Expr

Context query types associate to the right, e.g.
`given S => given T => U` is the same as `given S => (given T => U)`.

## Implementation

Context query types are shorthands for class types that define `apply`
methods with inferable parameters. Specifically, the `N`-ary context query type
`given (T1, ..., TN) => R` is a shorthand for the class type
`ImplicitFunctionN[T1, ..., TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`:
```scala
package scala
trait ImplicitFunctionN[-T1, ..., -TN, +R] {
  def apply given (x1: T1, ..., xN: TN): R
}
```
Context query types erase to normal function types, so these classes are
generated on the fly for typechecking, but not realized in actual code.

Context query literals `given (x1: T1, ..., xn: Tn) => e` map
inferable parameters `xi` of types `Ti` to a result given by expression `e`.
The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names.

If the expected type of the query literal is of the form
`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and
the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti = Si` is assumed. If the expected type of the query literal is
some other type, all inferable parameter types must be explicitly given, and the expected type of `e` is undefined. The type of the query literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened
type of `e`. `T` must be equivalent to a type which does not refer to any of
the inferable parameters `xi`.

The query literal is evaluated as the instance creation
expression:
```scala
new scala.ImplicitFunctionN[T1, ..., Tn, T] {
  def apply given (x1: T1, ..., xn: Tn): T = e
}
```
In the case of a single untyped parameter, `given (x) => e` can be
abbreviated to `given x => e`.

An inferable parameter may also be a wildcard represented by an underscore `_`. In
that case, a fresh name for the parameter is chosen arbitrarily.

Note: The closing paragraph of the
[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions)
of Scala 2.12 is subsumed by query types and should be removed.
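As a concrete illustration of the typing and evaluation rules above, here is a hedged sketch; the class `Ctx` and the value `greeting` are hypothetical names, not part of the specification:

```scala
class Ctx(val name: String)

// The context query type `given Ctx => String` is typechecked as
// `scala.ImplicitFunction1[Ctx, String]`.
val greeting: given Ctx => String =
  given (c: Ctx) => "hello, " + c.name

// The literal above is evaluated as if it were the instance creation expression:
//
//   new scala.ImplicitFunction1[Ctx, String] {
//     def apply given (c: Ctx): String = "hello, " + c.name
//   }
```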
Query literals `given (x1: T1, ..., xn: Tn) => e` are
automatically created for any expression `e` whose expected type is
`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is
itself a query literal. This is analogous to the automatic
insertion of `scala.Function0` around expressions in by-name argument position.

Context query types generalize to `N > 22` in the same way that function types do, see [the corresponding
documentation](https://dotty.epfl.ch/docs/reference/dropped-features/limit22.html).

## Examples

See the section on Expressiveness from [Simplicitly: foundations and
applications of implicit function
types](https://dl.acm.org/citation.cfm?id=3158130). I've extracted it in [this
Gist](https://gist.github.com/OlivierBlanvillain/234d3927fe9e9c6fba074b53a7bd9592), it might be easier to access than the pdf.

### Type Checking

After desugaring no additional typing rules are required for context query types.

diff --git a/docs/docs/reference/contextual-evidence/query-types.md b/docs/docs/reference/contextual-evidence/query-types.md
deleted file mode 100644
index ab820b714af8..000000000000
--- a/docs/docs/reference/contextual-evidence/query-types.md
+++ /dev/null
@@ -1,160 +0,0 @@
---
layout: doc-page
title: "Context Queries"
---

_Context queries_ are functions with (only) inferable parameters.
_Context query types_ are the types of first-class context queries.
Here is an example of a context query type:
```scala
type Contextual[T] = given Context => T
```
A value of a context query type is applied to inferred arguments, in
the same way a method with inferable parameters is applied. For instance:
```scala
  evidence ctx for Context = ...

  def f(x: Int): Contextual[Int] = ...

  f(2) given ctx   // explicit argument
  f(2)             // argument is inferred
```
Conversely, if the expected type of an expression `E` is a context query
type `given (T_1, ..., T_n) => U` and `E` is not already a
context query literal, `E` is converted to a context query literal by rewriting to
```scala
  given (x_1: T1, ..., x_n: Tn) => E
```
where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed
before the expression `E` is typechecked, which means that `x_1`, ..., `x_n`
are available as evidence in `E`.

Like query types, query literals are written with a `given` prefix. They differ from normal function literals in two ways:

 1. Their parameters are inferable.
 2. Their types are context query types.

For example, continuing with the previous definitions,
```scala
  def g(arg: Contextual[Int]) = ...

  g(22)                             // is expanded to g(given ctx => 22)

  g(f(2))                           // is expanded to g(given ctx => f(2) given ctx)

  g(given ctx => f(22) given ctx)   // is left as it is
```
### Example: Builder Pattern

Context query types have considerable expressive power.
For
instance, here is how they can support the "builder pattern", where
the aim is to construct tables like this:
```scala
  table {
    row {
      cell("top left")
      cell("top right")
    }
    row {
      cell("bottom left")
      cell("bottom right")
    }
  }
```
The idea is to define classes for `Table` and `Row` that allow
addition of elements via `add`:
```scala
  class Table {
    val rows = new ArrayBuffer[Row]
    def add(r: Row): Unit = rows += r
    override def toString = rows.mkString("Table(", ", ", ")")
  }

  class Row {
    val cells = new ArrayBuffer[Cell]
    def add(c: Cell): Unit = cells += c
    override def toString = cells.mkString("Row(", ", ", ")")
  }

  case class Cell(elem: String)
```
Then, the `table`, `row` and `cell` constructor methods can be defined
in terms of query types to avoid the plumbing boilerplate
that would otherwise be necessary.
```scala
  def table(init: given Table => Unit) = {
    evidence t for Table
    init
    t
  }

  def row(init: given Row => Unit) given (t: Table) = {
    evidence r for Row
    init
    t.add(r)
  }

  def cell(str: String) given (r: Row) =
    r.add(new Cell(str))
```
With that setup, the table construction code above compiles and expands to:
```scala
  table { given $t: Table =>
    row { given $r: Row =>
      cell("top left") given $r
      cell("top right") given $r
    } given $t
    row { given $r: Row =>
      cell("bottom left") given $r
      cell("bottom right") given $r
    } given $t
  }
```
### Example: Postconditions

As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, context query types, and extension methods to provide a zero-overhead abstraction.

```scala
object PostConditions {
  opaque type WrappedResult[T] = T

  private object WrappedResult {
    def wrap[T](x: T): WrappedResult[T] = x
    def unwrap[T](x: WrappedResult[T]): T = x
  }

  def result[T] given (r: WrappedResult[T]): T = WrappedResult.unwrap(r)

  def (x: T) ensuring [T](condition: given WrappedResult[T] => Boolean): T = {
    evidence for WrappedResult[T] = WrappedResult.wrap(x)
    assert(condition)
    x
  }
}

object Test {
  import PostConditions.{ensuring, result}
  val s = List(1, 2, 3).sum.ensuring(result == 6)
}
```
**Explanations**: We use a context query type `given WrappedResult[T] => Boolean`
as the type of the condition of `ensuring`. An argument to `ensuring` such as
`(result == 6)` will therefore have evidence of type `WrappedResult[T]` in
scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure
that we do not get unwanted evidence types in scope (this is good practice in all cases
where given clauses are involved). Since `WrappedResult` is an opaque type alias, its
values need not be boxed, and since `ensuring` is added as an extension method, its argument
does not need boxing either. Hence, the implementation of `ensuring` is about as efficient
as the best possible code one could write by hand:

    { val result = List(1, 2, 3).sum
      assert(result == 6)
      result
    }

### Reference

For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html)
(which uses a different syntax that has been superseded).
[More details](./query-types-spec.html)

diff --git a/docs/docs/reference/contextual-evidence/relationship-implicits.md b/docs/docs/reference/contextual-evidence/relationship-implicits.md
deleted file mode 100644
index 79c481915fe5..000000000000
--- a/docs/docs/reference/contextual-evidence/relationship-implicits.md
+++ /dev/null
@@ -1,168 +0,0 @@
---
layout: doc-page
title: Relationship with Scala 2 Implicits
---

Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features.

## Simulating Contextual Abstraction with Implicits

### Evidence Definitions

Evidence definitions can be mapped to combinations of implicit objects, classes and implicit methods.

 1. Evidence definitions without parameters are mapped to implicit objects. E.g.,
    ```scala
    evidence IntOrd for Ord[Int] { ... }
    ```
    maps to
    ```scala
    implicit object IntOrd extends Ord[Int] { ... }
    ```
 2. Parameterized evidence definitions are mapped to combinations of classes and implicit methods. E.g.,
    ```scala
    evidence ListOrd[T] given (ord: Ord[T]) for Ord[List[T]] { ... }
    ```
    maps to
    ```scala
    class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... }
    final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T]
    ```
 3. Evidence aliases map to implicit methods. E.g.,
    ```scala
    evidence ctx for ExecutionContext = ...
    ```
    maps to
    ```scala
    final implicit def ctx: ExecutionContext = ...
    ```

### Anonymous Evidence Definitions

Anonymous evidence values get compiler-synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `IntOrd` and `ListOrd` evidence above were left out, the following names would be synthesized instead:
```scala
  evidence Ord_Int_ev for Ord[Int] { ... }
  evidence Ord_List_ev[T] for Ord[List[T]] { ... }
```
The synthesized names are formed from

 - the simple name(s) of the implemented type(s), leaving out any prefixes,
 - the simple name(s) of the toplevel argument type constructors to these types,
 - the suffix `_ev`.

Anonymous evidence values that define extension methods without also implementing a type
get their name from the name of the first extension method and the toplevel type
constructor of its first parameter. For example, the evidence
```scala
  evidence {
    def (xs: List[T]) second[T] = ...
  }
```
gets the synthesized name `second_of_List_T_ev`.

### Inferable Parameters

The new inferable parameter syntax with `given` corresponds largely to Scala-2's implicit parameters. E.g.
```scala
  def max[T](x: T, y: T) given (ord: Ord[T]): T
```
would be written
```scala
  def max[T](x: T, y: T)(implicit ord: Ord[T]): T
```
in Scala 2. The main difference concerns applications of such parameters.
Explicit arguments to inferable parameters _must_ be written using `given`,
mirroring the definition syntax. E.g., `max(2, 3) given IntOrd`.
Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern.

The `the` method corresponds to `implicitly` in Scala 2.
It is precisely the same as the `the` method in Shapeless.
The difference between `the` (in both versions) and `implicitly` is
that `the` can return a more precise type than the type that was
asked for.

### Context Bounds

Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters.

**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or
with a normal application. Once old-style implicits are deprecated, context bounds
will map to inferable parameters instead.

### Extension Methods

Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method
```scala
  def (c: Circle) circumference: Double = c.radius * math.Pi * 2
```
could be simulated to some degree by
```scala
  implicit class CircleDeco(c: Circle) extends AnyVal {
    def circumference: Double = c.radius * math.Pi * 2
  }
```
Extension methods in evidence definitions have no direct counterpart in Scala-2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases.

### Typeclass Derivation

Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving.

### Context Query Types

Context query types have no analogue in Scala 2.

### Implicit By-Name Parameters

Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless.

## Simulating Scala 2 Implicits in Dotty

### Implicit Conversions

Implicit conversion methods in Scala 2 can be expressed as evidence for
`scala.Conversion` in Dotty. E.g. instead of
```scala
  implicit def stringToToken(str: String): Token = new KeyWord(str)
```
one can write
```scala
  evidence stringToToken for Conversion[String, Token] {
    def apply(str: String): Token = new KeyWord(str)
  }
```

### Implicit Classes

Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and a `Conversion` evidence definition.


### Implicit Values

Implicit `val` definitions in Scala 2 can be expressed in Dotty using a regular `val` definition and an evidence alias. E.g., Scala 2's
```scala
  lazy implicit val pos: Position = tree.sourcePos
```
can be expressed in Dotty as
```scala
  lazy val pos: Position = tree.sourcePos
  evidence for Position = pos
```

### Abstract Implicits

An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an evidence alias. E.g., Scala 2's
```scala
  implicit def symDeco: SymDeco
```
can be expressed in Dotty as
```scala
  def symDeco: SymDeco
  evidence for SymDeco = symDeco
```

## Implementation Status and Timeline

The Dotty compiler implements both Scala-2's implicits and the new abstractions. In fact, support for Scala-2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty.
Migration to the new abstractions will be supported by making automatic rewritings available.

Depending on adoption patterns, old style implicits might start to be deprecated in a version following Scala 3.0.
diff --git a/docs/docs/reference/contextual-evidence/typeclasses.md b/docs/docs/reference/contextual-evidence/typeclasses.md
deleted file mode 100644
index 3f5b49e84962..000000000000
--- a/docs/docs/reference/contextual-evidence/typeclasses.md
+++ /dev/null
@@ -1,64 +0,0 @@
---
layout: doc-page
title: "Implementing Typeclasses"
---

Evidence definitions, extension methods and context bounds
allow a concise and natural expression of _typeclasses_. Typeclasses are just traits
with canonical implementations defined by evidence definitions. Here are some examples of standard typeclasses:

### Semigroups and monoids:

```scala
trait SemiGroup[T] {
  def (x: T) combine (y: T): T
}
trait Monoid[T] extends SemiGroup[T] {
  def unit: T
}
object Monoid {
  def apply[T] given Monoid[T] = the[Monoid[T]]
}

evidence for Monoid[String] {
  def (x: String) combine (y: String): String = x.concat(y)
  def unit: String = ""
}

evidence for Monoid[Int] {
  def (x: Int) combine (y: Int): Int = x + y
  def unit: Int = 0
}

def sum[T: Monoid](xs: List[T]): T =
  xs.foldLeft(Monoid[T].unit)(_.combine(_))
```

### Functors and monads:

```scala
trait Functor[F[_]] {
  def (x: F[A]) map [A, B] (f: A => B): F[B]
}

trait Monad[F[_]] extends Functor[F] {
  def (x: F[A]) flatMap [A, B] (f: A => F[B]): F[B]
  def (x: F[A]) map [A, B] (f: A => B) = x.flatMap(f `andThen` pure)

  def pure[A](x: A): F[A]
}

evidence ListMonad for Monad[List] {
  def (xs: List[A]) flatMap [A, B] (f: A => List[B]): List[B] =
    xs.flatMap(f)
  def pure[A](x: A): List[A] =
    List(x)
}

evidence ReaderMonad[Ctx] for Monad[[X] => Ctx => X] {
  def (r: Ctx => A) flatMap [A, B] (f: A => Ctx => B): Ctx => B =
    ctx => f(r(ctx))(ctx)
  def pure[A](x: A): Ctx => A =
    ctx => x
}
```

diff --git a/docs/docs/reference/contextual-implicit/context-bounds.md b/docs/docs/reference/contextual-implicit/context-bounds.md
deleted file mode 100644
index ed54a4ba1411..000000000000
--- a/docs/docs/reference/contextual-implicit/context-bounds.md
+++ /dev/null
@@ -1,30 +0,0 @@
---
layout: doc-page
title: "Context Bounds"
---

## Context Bounds

A context bound is a shorthand for expressing a common pattern of an implicit parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this:
```scala
def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max)
```
A bound like `: Ord` on a type parameter `T` of a method or class is equivalent to a given clause `given Ord[T]`. The implicit parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g.,
```scala
def f[T: C1 : C2, U: C3](x: T) given (y: U, z: V): R
```
would expand to
```scala
def f[T, U](x: T) given (y: U, z: V) given C1[T], C2[T], C3[U]: R
```
Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g.
```scala
def g[T <: B : C](x: T): R = ...
```

## Syntax

```
TypeParamBounds   ::=  [SubtypeBounds] {ContextBound}
ContextBound      ::=  ‘:’ Type
```

diff --git a/docs/docs/reference/contextual-implicit/conversions.md b/docs/docs/reference/contextual-implicit/conversions.md
deleted file mode 100644
index ca828a82cf62..000000000000
--- a/docs/docs/reference/contextual-implicit/conversions.md
+++ /dev/null
@@ -1,75 +0,0 @@
---
layout: doc-page
title: "Implicit Conversions"
---

Implicit conversions are defined by implicit instances of the `scala.Conversion` class.
This class is defined in package `scala` as follows:
```scala
abstract class Conversion[-T, +U] extends (T => U)
```
For example, here is an implicit conversion from `String` to `Token`:
```scala
implicit for Conversion[String, Token] {
  def apply(str: String): Token = new KeyWord(str)
}
```
Using an alias implicit, this can be expressed more concisely as:
```scala
implicit for Conversion[String, Token] = new KeyWord(_)
```
An implicit conversion is applied automatically by the compiler in three situations:

1. If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`.
2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`.
3. In an application `e.m(args)` with `e` of type `T`, if `T` does define
   some member(s) named `m`, but none of these members can be applied to the arguments `args`.

In the first case, the compiler looks for an implicit value of class
`scala.Conversion` that maps an argument of type `T` to type `S`. In the second and third
case, it looks for an implicit value of class `scala.Conversion` that maps an argument of type `T`
to a type that defines a member `m` which can be applied to `args` if present.
If such a value `C` is found, the expression `e` is replaced by `C.apply(e)`.

## Examples

1. The `Predef` package contains "auto-boxing" conversions that map
primitive number types to subclasses of `java.lang.Number`. For instance, the
conversion from `Int` to `java.lang.Integer` can be defined as follows:
```scala
implicit int2Integer for Conversion[Int, java.lang.Integer] =
  java.lang.Integer.valueOf(_)
```

2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g.
```scala
object Completions {

  // The argument "magnet" type
  enum CompletionArg {
    case Error(s: String)
    case Response(f: Future[HttpResponse])
    case Status(code: Future[StatusCode])
  }
  object CompletionArg {

    // conversions defining the possible arguments to pass to `complete`
    // these always come with CompletionArg
    // They can be invoked explicitly, e.g.
    //
    //   CompletionArg.fromStatusCode(statusCode)

    implicit fromString for Conversion[String, CompletionArg] = Error(_)
    implicit fromFuture for Conversion[Future[HttpResponse], CompletionArg] = Response(_)
    implicit fromStatusCode for Conversion[Future[StatusCode], CompletionArg] = Status(_)
  }
  import CompletionArg._

  def complete[T](arg: CompletionArg) = arg match {
    case Error(s) => ...
    case Response(f) => ...
    case Status(code) => ...
  }
}
```
This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants.
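For illustration, here is a hedged sketch of what call sites could look like under this setup; the values `response` and `status` are hypothetical, and the conversions above are assumed to be in scope:

```scala
import scala.concurrent.Future
import Completions._

val response: Future[HttpResponse] = ???
val status: Future[StatusCode] = ???

complete("file not found")  // converted via fromString
complete(response)          // converted via fromFuture
complete(status)            // converted via fromStatusCode
```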
diff --git a/docs/docs/reference/contextual-implicit/derivation.md b/docs/docs/reference/contextual-implicit/derivation.md deleted file mode 100644 index 6f33b7235762..000000000000 --- a/docs/docs/reference/contextual-implicit/derivation.md +++ /dev/null @@ -1,382 +0,0 @@ ---- -layout: doc-page -title: Typeclass Derivation ---- - -Typeclass derivation is a way to generate implicit instances for certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are `Eql`, `Ordering`, `Show`, or `Pickling`. Example: -```scala -enum Tree[T] derives Eql, Ordering, Pickling { - case Branch(left: Tree[T], right: Tree[T]) - case Leaf(elem: T) -} -``` -The `derives` clause generates implicit instances for the `Eql`, `Ordering`, and `Pickling` traits in the companion object `Tree`: -```scala -implicit [T: Eql] for Eql[Tree[T]] = Eql.derived -implicit [T: Ordering] for Ordering[Tree[T]] = Ordering.derived -implicit [T: Pickling] for Pickling[Tree[T]] = Pickling.derived -``` - -### Deriving Types - -Besides enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are: - - - individual case classes or case objects - - sealed classes or traits that have only case classes and case objects as children. - - Examples: - - ```scala -case class Labelled[T](x: T, label: String) derives Eql, Show - -sealed trait Option[T] derives Eql -case class Some[T](x: T) extends Option[T] -case object None extends Option[Nothing] -``` - -The generated typeclass instances are placed in the companion objects `Labelled` and `Option`, respectively. - -### Derivable Types - -A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The type and implementation of a `derived` method are arbitrary, but typically it has a definition like this: -```scala - def derived[T] given Generic[T] = ... -``` -That is, the `derived` method takes an implicit parameter of type `Generic` that determines the _shape_ of the deriving type `T` and it computes the typeclass implementation according to that shape. An implicit `Generic` instance is generated automatically for any type that derives a typeclass with a `derived` method that refers to `Generic`. One can also derive `Generic` alone, which means a `Generic` instance is generated without any other type class instances. E.g.: -```scala -sealed trait ParseResult[T] derives Generic -``` -This is all a user of typeclass derivation has to know. The rest of this page contains information needed to be able to write a typeclass that can appear in a `derives` clause. In particular, it details the means provided for the implementation of datatype-generic `derived` methods. - -### The Shape Type - -For every class with a `derives` clause, the compiler computes the shape of that class as a type. For example, here is the shape type for the `Tree[T]` enum: -```scala -Cases[( - Case[Branch[T], (Tree[T], Tree[T])], - Case[Leaf[T], T *: Unit] -)] -``` -Informally, this states that - -> The shape of a `Tree[T]` is one of two cases: either a `Branch[T]` with two - elements of type `Tree[T]`, or a `Leaf[T]` with a single element of type `T`.
- -The type constructors `Cases` and `Case` come from the companion object of a class -`scala.compiletime.Shape`, which is defined in the standard library as follows: -```scala -sealed abstract class Shape - -object Shape { - - /** A sum with alternative types `Alts` */ - case class Cases[Alts <: Tuple] extends Shape - - /** A product type `T` with element types `Elems` */ - case class Case[T, Elems <: Tuple] extends Shape -} -``` - -Here is the shape type for `Labelled[T]`: -```scala -Case[Labelled[T], (T, String)] -``` -And here is the one for `Option[T]`: -```scala -Cases[( - Case[Some[T], T *: Unit], - Case[None.type, Unit] -)] -``` -Note that an empty element tuple is represented as type `Unit`. A single-element tuple -is represented as `T *: Unit` since there is no direct syntax for such tuples: `(T)` is just `T` in parentheses, not a tuple. - -### The Generic Typeclass - -For every class `C[T_1,...,T_n]` with a `derives` clause, the compiler generates in the companion object of `C` an implicit instance for `Generic[C[T_1,...,T_n]]` that follows the outline below: -```scala -implicit [T_1, ..., T_n] for Generic[C[T_1,...,T_n]] { - type Shape = ... - ... -} -``` -where the right hand side of `Shape` is the shape type of `C[T_1,...,T_n]`. -For instance, the definition -```scala -enum Result[+T, +E] derives Logging { - case Ok[T](result: T) - case Err[E](err: E) -} -``` -would produce: -```scala -object Result { - import scala.compiletime.Shape._ - - implicit [T, E] for Generic[Result[T, E]] { - type Shape = Cases[( - Case[Ok[T], T *: Unit], - Case[Err[E], E *: Unit] - )] - ... - } -} -``` -The `Generic` class is defined in package `scala.reflect`. - -```scala -abstract class Generic[T] { - type Shape <: scala.compiletime.Shape - - /** The mirror corresponding to ADT instance `x` */ - def reflect(x: T): Mirror - - /** The ADT instance corresponding to given `mirror` */ - def reify(mirror: Mirror): T - - /** The companion object of the ADT */ - def common: GenericClass -} -``` -It defines the `Shape` type for the ADT `T`, as well as two methods that map between a -type `T` and a generic representation of `T`, which we call a `Mirror`. -The `reflect` method maps an instance of the ADT `T` to its mirror, whereas -the `reify` method goes the other way. There's also a `common` method that returns -a value of type `GenericClass` which contains information that is the same for all -instances of a class (right now, this consists of the runtime `Class` value and -the names of the cases and their parameters). - -### Mirrors - -A mirror is a generic representation of an instance of an ADT. `Mirror` objects have three components: - - - `adtClass: GenericClass`: The representation of the ADT class - - `ordinal: Int`: The ordinal number of the case among all cases of the ADT, starting from 0 - - `elems: Product`: The elements of the instance, represented as a `Product`.
- - The `Mirror` class is defined in package `scala.reflect` as follows: - -```scala -class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) { - - /** The `n`'th element of this generic case */ - def apply(n: Int): Any = elems.productElement(n) - - /** The name of the constructor of the case reflected by this mirror */ - def caseLabel: String = adtClass.label(ordinal)(0) - - /** The label of the `n`'th element of the case reflected by this mirror */ - def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1) -} -``` - -### GenericClass - -Here's the API of `scala.reflect.GenericClass`: - -```scala -class GenericClass(val runtimeClass: Class[_], labelsStr: String) { - - /** A mirror of the case with ordinal number `ordinal` and elements as given by `product` */ - def mirror(ordinal: Int, product: Product): Mirror = - new Mirror(this, ordinal, product) - - /** A mirror with elements given as an array */ - def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror = - mirror(ordinal, new ArrayProduct(elems)) - - /** A mirror with an initial empty array of `numElems` elements, to be filled in. */ - def mirror(ordinal: Int, numElems: Int): Mirror = - mirror(ordinal, new Array[AnyRef](numElems)) - - /** A mirror of a case with no elements */ - def mirror(ordinal: Int): Mirror = - mirror(ordinal, EmptyProduct) - - /** Case and element labels as a two-dimensional array. - * Each row of the array contains a case label, followed by the labels of the elements of that case. - */ - val label: Array[Array[String]] = ... -} -``` - -The class provides four overloaded methods to create mirrors. The first of these is invoked by the `reflect` method that maps an ADT instance to its mirror. It simply passes the -instance itself (which is a `Product`) as the second parameter of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of `mirror` are typically invoked by typeclass methods that create instances from mirrors. An example would be an `unpickle` method that first creates an array of elements, then creates -a mirror over that array, and finally uses the `reify` method in `Generic` to create the ADT instance. The fourth version of `mirror` is used to create mirrors of instances that do not have any elements. - -### How to Write Generic Typeclasses - -Based on the machinery developed so far, it becomes possible to define type classes generically. This means that the `derived` method will compute a type class instance for any ADT that has a `Generic` instance, recursively. -The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and implicit matches. As an example, here is one possible implementation of a generic `Eql` type class, with explanations. Let's assume `Eql` is defined by the following trait: -```scala -trait Eql[T] { - def eql(x: T, y: T): Boolean -} -``` -We need to implement a method `Eql.derived` that produces an implicit instance of type `Eql[T]` provided -there exists an implicit instance of type `Generic[T]`.
Here's a possible solution: -```scala - inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] { - def eql(x: T, y: T): Boolean = { - val mx = ev.reflect(x) // (1) - val my = ev.reflect(y) // (2) - inline erasedValue[ev.Shape] match { - case _: Cases[alts] => - mx.ordinal == my.ordinal && // (3) - eqlCases[alts](mx, my, 0) // [4] - case _: Case[_, elems] => - eqlElems[elems](mx, my, 0) // [5] - } - } - } -``` -The implementation of the inline method `derived` creates an instance of `Eql[T]` and implements its `eql` method. The right-hand side of `eql` mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e. -`(1)`, `(2)`, `(3)`. Compile-time calls that expand to runtime code are marked with a number in brackets, i.e. `[4]`, `[5]`. The implementation of `eql` consists of the following steps. - - 1. Map the compared values `x` and `y` to their mirrors using the `reflect` method of the implicitly passed `Generic` `(1)`, `(2)`. - 2. Match at compile-time against the shape of the ADT given in `ev.Shape`. Dotty does not have a construct for matching types directly, but we can emulate it using an `inline` match over an `erasedValue`. Depending on the actual type `ev.Shape`, the match will reduce at compile time to one of its two alternatives. - 3. If `ev.Shape` is of the form `Cases[alts]` for some tuple `alts` of alternative types, the equality test consists of comparing the ordinal values of the two mirrors `(3)` and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the `eqlCases` call `[4]`. - 4. If `ev.Shape` is of the form `Case[_, elems]` for some tuple `elems` of element types, the elements of the case are compared by code that results from the compile-time expansion of the `eqlElems` call `[5]`. - -Here is a possible implementation of `eqlCases`: -```scala - inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean = - inline erasedValue[Alts] match { - case _: (Shape.Case[_, elems] *: alts1) => - if (mx.ordinal == n) // (6) - eqlElems[elems](mx, my, 0) // [7] - else - eqlCases[alts1](mx, my, n + 1) // [8] - case _: Unit => - throw new MatchError(mx.ordinal) // (9) - } -``` -The inline method `eqlCases` takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances `x` and `y` to be compared and an integer `n` that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values. - -If the list of alternatives `Alts` consists of a case of type `Case[_, elems]`, possibly followed by further cases in `alts1`, we generate the following code: - - 1. Compare the `ordinal` value of `mx` (a runtime value) with the case number `n` (a compile-time value translated to a constant in the generated code) in an if-then-else `(6)`. - 2. In the then-branch of the conditional we have that the `ordinal` value of both mirrors - matches the number of the case with elements `elems`. Proceed by comparing the elements - of the case in code expanded from the `eqlElems` call `[7]`. - 3. In the else-branch of the conditional we have that the present case does not match - the ordinal value of both mirrors. Proceed by trying the remaining cases in `alts1` using - code expanded from the `eqlCases` call `[8]`.
- - If the list of alternatives `Alts` is the empty tuple, there are no further cases to check. - This place in the code should not be reachable at runtime. Therefore an appropriate - implementation is to throw a `MatchError` or some other runtime exception `(9)`. - -The `eqlElems` method compares the elements of two mirrors that are known to have the same -ordinal number, which means they represent the same case of the ADT. Here is a possible -implementation: -```scala - inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = - inline erasedValue[Elems] match { - case _: (elem *: elems1) => - tryEql[elem]( // [12] - xs(n).asInstanceOf[elem], // (10) - ys(n).asInstanceOf[elem]) && // (11) - eqlElems[elems1](xs, ys, n + 1) // [13] - case _: Unit => - true // (14) - } -``` -`eqlElems` takes as arguments the two mirrors of the elements to compare and a compile-time index `n`, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type `Elems` of all element types that remain to be tested. If that type is -non-empty, say of the form `elem *: elems1`, the following code is produced: - - 1. Access the `n`'th elements of both mirrors and cast them to the current element type `elem` - `(10)`, `(11)`. Note that because of the way runtime reflection mirrors compile-time `Shape` types, the casts are guaranteed to succeed. - 2. Compare the element values using code expanded by the `tryEql` call `[12]`. - 3. "And" the result with code that compares the remaining elements using a recursive call - to `eqlElems` `[13]`. - - If type `Elems` is empty, there are no more elements to be compared, so the comparison's result is `true`. `(14)` - - Since `eqlElems` is an inline method, its recursive calls are unrolled. The end result is a conjunction `test_1 && ... && test_n && true` of test expressions produced by the `tryEql` calls. - -The last, and in a sense most interesting, part of the derivation is the comparison of a pair of element values in `tryEql`. Here is the definition of this method: -```scala - inline def tryEql[T](x: T, y: T) = implicit match { - case ev: Eql[T] => - ev.eql(x, y) // (15) - case _ => - error("No `Eql` instance was found for $T") - } -``` -`tryEql` is an inline method that takes an element type `T` and two element values of that type as arguments. It is defined using an `implicit match` that tries to find an implicit instance for `Eql[T]`. If an instance `ev` is found, it proceeds by comparing the arguments using `ev.eql`. On the other hand, if no instance is found, -this signals a compilation error: the user tried a generic derivation of `Eql` for a class with an element type that does not support an `Eql` instance itself. The error is signaled by -calling the `error` method defined in `scala.compiletime`. - -**Note:** At the moment our error diagnostics for metaprogramming do not yet support interpolated string arguments for the `scala.compiletime.error` method that is called in the second case above. As an alternative, one can simply leave off the second case; then a missing typeclass would result in a "failure to reduce match" error. - -**Example:** Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived `Eql` instance of class `Tree`.
- -```scala -implicit [T] for Eql[Tree[T]] given (elemEq: Eql[T]) { - def eql(x: Tree[T], y: Tree[T]): Boolean = { - val ev = the[Generic[Tree[T]]] - val mx = ev.reflect(x) - val my = ev.reflect(y) - mx.ordinal == my.ordinal && { - if (mx.ordinal == 0) { - this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) && - this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]]) - } - else if (mx.ordinal == 1) { - elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T]) - } - else throw new MatchError(mx.ordinal) - } - } -} -``` - -One important difference between this approach and Scala-2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass instances of elements recursively using the generic derivation framework. There must be an implicit instance of type `Eql[T]` (which can of course be produced in turn using `Eql.derived`), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable. - -### Deriving Instances Elsewhere - -Sometimes one would like to derive a typeclass instance for an ADT after the ADT is defined, without being able to change the code of the ADT itself. -To do this, simply define an implicit instance with the `derived` method of the typeclass as right-hand side. E.g., to implement `Ordering` for `Option`, define: -```scala -implicit [T: Ordering] for Ordering[Option[T]] = Ordering.derived -``` -Usually, the `Ordering.derived` method has an implicit parameter of type -`Generic[Option[T]]`. Since the `Option` trait has a `derives` clause, -the necessary implicit instance is already present in the companion object of `Option`. -If the ADT in question does not have a `derives` clause, an implicit instance for `Generic` -would still be synthesized by the compiler at the point where `derived` is called. -This is similar to the situation with type tags or class tags: If no implicit instance is found, -the compiler will synthesize one. - -### Syntax - -``` -Template ::= InheritClauses [TemplateBody] -EnumDef ::= id ClassConstr InheritClauses EnumBody -InheritClauses ::= [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}] -ConstrApps ::= ConstrApp {‘with’ ConstrApp} - | ConstrApp {‘,’ ConstrApp} -``` - -### Discussion - -The typeclass derivation framework is quite small and low-level. There are essentially -two pieces of infrastructure in the compiler-generated `Generic` instances: - - - a type representing the shape of an ADT, - - a way to map between ADT instances and generic mirrors. - -Generic mirrors make use of the already existing `Product` infrastructure for case -classes, which means they are efficient and their generation requires little code. - -Generic mirrors can be so simple because, just like `Product`s, they are weakly -typed. On the other hand, this means that code for generic typeclasses has to -ensure that type exploration and value selection proceed in lockstep and it -has to assert this conformance in some places using casts. If generic typeclasses -are correctly written, these casts will never fail. - -It could make sense to explore a higher-level framework that encapsulates all casts -in the framework. This could give more guidance to the typeclass implementer. -It also seems quite possible to put such a framework on top of the lower-level -mechanisms presented here.
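As a closing usage sketch (the tree values are hypothetical, and an `Eql[Int]` instance is assumed to be in scope), the derived instance from the start of this page is summoned and used like any other implicit:

```scala
import Tree._

val t1: Tree[Int] = Branch(Leaf(1), Leaf(2))
val t2: Tree[Int] = Branch(Leaf(1), Leaf(2))

// `the` summons the derived Eql[Tree[Int]]; the calls reduce to the
// ordinal and element comparisons shown in the expanded code above.
the[Eql[Tree[Int]]].eql(t1, t2)      // true: same ordinals, equal elements
the[Eql[Tree[Int]]].eql(t1, Leaf(3)) // false: ordinals differ
```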
diff --git a/docs/docs/reference/contextual-implicit/extension-methods.md b/docs/docs/reference/contextual-implicit/extension-methods.md deleted file mode 100644 index 37efad933a54..000000000000 --- a/docs/docs/reference/contextual-implicit/extension-methods.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -layout: doc-page -title: "Extension Methods" ---- - -Extension methods allow one to add methods to a type after the type is defined. Example: - -```scala -case class Circle(x: Double, y: Double, radius: Double) - -def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` - -Like regular methods, extension methods can be invoked using the familiar `.` notation: - -```scala - val circle = Circle(0, 0, 1) - circle.circumference -``` - -### Translation of Extension Methods - -Extension methods are methods that have a parameter clause in front of the defined -identifier. They translate to methods where the leading parameter section is moved -to after the defined identifier. So, the definition of `circumference` above translates -to the following plain method, and can also be invoked as such: -```scala -def circumference(c: Circle): Double = c.radius * math.Pi * 2 - -assert(circle.circumference == circumference(circle)) -``` - -### Translation of Calls to Extension Methods - -When is an extension method applicable? There are two possibilities. - - - An extension method is applicable if it is visible under a simple name, by being defined - or inherited or imported in a scope enclosing the application. - - An extension method is applicable if it is a member of some implicit instance at the point of the application. - -As an example, consider an extension method `longestStrings` on `Seq[String]` defined in a trait `StringSeqOps`. - -```scala -trait StringSeqOps { - def (xs: Seq[String]) longestStrings = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} -``` -We can make the extension method available by defining an implicit instance for `StringSeqOps`, like this: -```scala -implicit ops1 for StringSeqOps -``` -Then -```scala -List("here", "is", "a", "list").longestStrings -``` -is legal everywhere `ops1` is available as an implicit. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. - -```scala -object ops2 extends StringSeqOps -import ops2.longestStrings -List("here", "is", "a", "list").longestStrings -``` -The precise rules for resolving a selection to an extension method are as follows. - -Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional, -and where `T` is the expected type. The following two rewritings are tried in order: - - 1. The selection is rewritten to `m[Ts](e)`. - 2. If the first rewriting does not typecheck with expected type `T`, and there is an implicit `i` - in either the current scope or in the implicit scope of `T`, and `i` defines an extension - method named `m`, then the selection is expanded to `i.m[Ts](e)`. - This second rewriting is attempted at the time when the compiler also tries an implicit conversion - from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results. - -So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided -`circle` has type `Circle` and `CircleOps` is an eligible implicit that defines `circumference` (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`).
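For instance, here is a sketch of the companion-object route (hypothetical, not part of the example above), which makes the extension method available without any import:

```scala
case class Circle(x: Double, y: Double, radius: Double)

object Circle {
  // As a member of the companion object, CircleOps is in the
  // implicit scope of Circle, so the second rewriting always finds it.
  implicit CircleOps {
    def (c: Circle) circumference: Double = c.radius * math.Pi * 2
  }
}

Circle(0, 0, 1).circumference  // expands to CircleOps.circumference(...)
```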
- -### Implicits for Extension Methods - -An implicit instance that defines extension methods can also be defined without a `for` clause. E.g., - -```scala -implicit StringOps { - def (xs: Seq[String]) longestStrings: Seq[String] = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} - -implicit { - def (xs: List[T]) second[T] = xs.tail.head -} -``` -If such an implicit is anonymous (as in the second example above), its name is synthesized from the name -of the first defined extension method. - -### Operators - -The extension method syntax also applies to the definition of operators. -In each case the definition syntax mirrors the way the operator is applied. -Examples: -```scala - def (x: String) < (y: String) = ... - def (x: Elem) +: (xs: Seq[Elem]) = ... - - "ab" < "c" - 1 +: List(2, 3) -``` -The two definitions above translate to -```scala - def < (x: String)(y: String) = ... - def +: (xs: Seq[Elem])(x: Elem) = ... -``` -Note the swap of the two parameters `x` and `xs` when translating -the right-binding operator `+:` to an extension method. This is analogous -to the implementation of right-binding operators as normal methods. - -### Generic Extensions - -The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: - -```scala -def (xs: List[T]) second [T] = - xs.tail.head - -def (xs: List[List[T]]) flattened [T] = - xs.foldLeft[List[T]](Nil)(_ ++ _) - -def (x: T) + [T : Numeric](y: T): T = - the[Numeric[T]].plus(x, y) -``` - -As usual, type parameters of the extension method follow the defined method name. Nevertheless, such type parameters can already be used in the preceding parameter clause. - - -### Syntax - -The required syntax extension just adds one clause for extension methods relative -to the [current syntax](https://github.com/lampepfl/dotty/blob/master/docs/docs/internals/syntax.md). -``` -DefSig ::= ... - | ‘(’ DefParam ‘)’ [nl] id [DefTypeParamClause] DefParamClauses -``` diff --git a/docs/docs/reference/contextual-implicit/import-implied.md b/docs/docs/reference/contextual-implicit/import-implied.md deleted file mode 100644 index 84a3a70e1891..000000000000 --- a/docs/docs/reference/contextual-implicit/import-implied.md +++ /dev/null @@ -1,48 +0,0 @@ ---- -layout: doc-page -title: "Import Implicit" ---- - -A special form of import is used to import implicit instances. Example: -```scala -object A { - class TC - implicit tc for TC - def f given TC = ??? -} -object B { - import A._ - import implicit A._ -} -``` -In the code above, the `import A._` clause of object `B` will import all members -of `A` _except_ the implicit `tc`. Conversely, the second import `import implicit A._` will import _only_ that implicit instance. - -Generally, a normal import clause brings all members except implicit instances into scope whereas an `import implicit` clause brings only implicit instances into scope. - -There are two main benefits arising from these rules: - - - It is made clearer where implicit instances in scope are coming from. In particular, it is not possible to hide imported implicit instances in a long list of regular imports. - - It enables importing all implicit instances - without importing anything else. This is particularly important since implicit - instances can be anonymous, so the usual recourse of using named imports is not - practical.
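The last point is worth a sketch (the object and instance below are hypothetical): an anonymous instance has no stable, user-visible name, so `import implicit` is the only way to bring it into scope.

```scala
object Execution {
  // anonymous alias implicit: there is no name a named import could refer to
  implicit for ExecutionContext = new ForkJoinPool()
}

import implicit Execution._  // nevertheless brings the anonymous instance into scope
```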
- -### Migration - -The rules as stated above would break all existing code that imports implicits, which is of course unacceptable. -To make gradual migration possible, we adopt the following scheme. - - 1. In Scala 3.0, a normal import will also import implicits written in the old "implicit-as-a-modifier" style. - So these implicits can be brought into scope using either a normal import or an `import implicit`. - - 2. In Scala 3.1, an old-style implicit accessed implicitly through a normal import will give a deprecation warning. - - 3. In some version after 3.1, an old-style implicit accessed implicitly through a normal import - will give a compiler error. - -New-style implicit instance definitions always need to be imported with `import implicit`. - -These rules mean that library users can use `import implicit` to access old-style implicits in Scala 3.0, -and will be gently nudged and then forced to do so in later versions. Libraries can then switch to -new-style implicit definitions once their user base has migrated. diff --git a/docs/docs/reference/contextual-implicit/inferable-by-name-parameters.md b/docs/docs/reference/contextual-implicit/inferable-by-name-parameters.md deleted file mode 100644 index 7c8280d6d988..000000000000 --- a/docs/docs/reference/contextual-implicit/inferable-by-name-parameters.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -layout: doc-page -title: "Implicit By-Name Parameters" ---- - -Implicit by-name parameters can be used to avoid a divergent inferred expansion. Example: - -```scala -trait Codec[T] { - def write(x: T): Unit -} - -implicit intCodec for Codec[Int] = ??? - -implicit optionCodec[T] for Codec[Option[T]] given (ev: => Codec[T]) { - def write(xo: Option[T]) = xo match { - case Some(x) => ev.write(x) - case None => - } -} - -val s = the[Codec[Option[Int]]] - -s.write(Some(33)) -s.write(None) -``` -As is the case for a normal by-name parameter, the argument for the implicit parameter `ev` -is evaluated on demand. In the example above, if the option value `xo` is `None`, it is -not evaluated at all. - -The synthesized argument for an implicit parameter is backed by a local val -if this is necessary to prevent an otherwise diverging expansion. - -The precise steps for synthesizing an argument for a by-name parameter of type `=> T` are as follows. - - 1. Create a new implicit instance for type `T`: - - ```scala - implicit lv for T = ??? - ``` - where `lv` is an arbitrary fresh name. - - 1. This instance is not immediately available as a candidate implicit (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an implicit argument to a by-name parameter. - - 1. If this search succeeds with expression `E`, and `E` contains references to the implicit `lv`, replace `E` by - - - ```scala - { implicit lv for T = E; lv } - ``` - - Otherwise, return `E` unchanged. - -In the example above, the definition of `s` would be expanded as follows. - -```scala -val s = the[Codec[Option[Int]]]( - optionCodec[Int](intCodec)) -``` - -No local implicit was generated because the synthesized argument is not recursive. - -### Reference - -For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998) -and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html).
diff --git a/docs/docs/reference/contextual-implicit/inferable-params.md b/docs/docs/reference/contextual-implicit/inferable-params.md deleted file mode 100644 index fd726a0e4838..000000000000 --- a/docs/docs/reference/contextual-implicit/inferable-params.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -layout: doc-page -title: "Given Clauses" ---- - -Functional programming tends to express most dependencies as simple function parameterization. -This is clean and powerful, but it sometimes leads to functions that take many parameters and -call trees where the same value is passed over and over again in long call chains to many -functions. Given clauses can help here since they enable the compiler to synthesize -repetitive arguments instead of the programmer having to write them explicitly. - -For example, given the [implicit instances](./instance-defs.html) defined previously, -a maximum function that works for any arguments for which an ordering exists can be defined as follows: -```scala -def max[T](x: T, y: T) given (ord: Ord[T]): T = - if (ord.compare(x, y) < 1) y else x -``` -Here, `ord` is an _implicit parameter_ introduced with a `given` clause. -The `max` method can be applied as follows: -```scala -max(2, 3).given(IntOrd) -``` -The `.given(IntOrd)` part passes `IntOrd` as an argument for the `ord` parameter. But the point of -implicit parameters is that this argument can also be left out (and it usually is). So the following -applications are equally valid: -```scala -max(2, 3) -max(List(1, 2, 3), Nil) -``` - -## Anonymous Implicit Parameters - -In many situations, the name of an implicit parameter of a method need not be -mentioned explicitly at all, since it is only used in synthesized arguments for -other implicit parameters. In that case one can avoid defining a parameter name -and just provide its type. Example: -```scala -def maximum[T](xs: List[T]) given Ord[T]: T = - xs.reduceLeft(max) -``` -`maximum` takes an implicit parameter of type `Ord` only to pass it on as a -synthesized argument to `max`. The name of the parameter is left out. - -Generally, implicit parameters may be given either as a parameter list `(p_1: T_1, ..., p_n: T_n)` -or as a sequence of types, separated by commas. - -## Inferring Complex Arguments - -Here are two other methods that have an implicit parameter of type `Ord[T]`: -```scala -def descending[T] given (asc: Ord[T]): Ord[T] = new Ord[T] { - def compare(x: T, y: T) = asc.compare(y, x) -} - -def minimum[T](xs: List[T]) given Ord[T] = - maximum(xs).given(descending) -``` -The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`. -With this setup, the following calls are all well-formed, and they all normalize to the last one: -```scala -minimum(xs) -maximum(xs).given(descending) -maximum(xs).given(descending.given(ListOrd)) -maximum(xs).given(descending.given(ListOrd.given(IntOrd))) -``` - -## Mixing Given Clauses And Normal Parameters - -Given clauses can be freely mixed with normal parameters. -A given clause may be followed by a normal parameter clause and _vice versa_. -There can be several given clauses in a definition. Example: -```scala -def f given (u: Universe) (x: u.T) given Context = ... - -implicit global for Universe { type T = String ... } -implicit ctx for Context { ... 
} -``` -Then the following calls are all valid (and normalize to the last one): -```scala -f("abc") -f.given(global)("abc") -f("abc").given(ctx) -f.given(global)("abc").given(ctx) -``` - -## Summoning Implicit Instances - -A method `the` in `Predef` returns an implicit instance for a given type. For example, the implicit instance for `Ord[List[Int]]` is produced by -```scala -the[Ord[List[Int]]] // reduces to ListOrd given IntOrd -``` -The `the` method is simply defined as the (non-widening) identity function over an implicit parameter. -```scala -def the[T] given (x: T): x.type = x -``` - -## Syntax - -Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -ClsParamClause ::= ... - | ‘given’ (‘(’ [ClsParams] ‘)’ | GivenTypes) -DefParamClause ::= ... - | GivenParamClause -GivenParamClause ::= ‘given’ (‘(’ DefParams ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} - -InfixExpr ::= ... - | InfixExpr ‘given’ (InfixExpr | ParArgumentExprs) -``` diff --git a/docs/docs/reference/contextual-implicit/instance-defs.md b/docs/docs/reference/contextual-implicit/instance-defs.md deleted file mode 100644 index e8c40d5e617e..000000000000 --- a/docs/docs/reference/contextual-implicit/instance-defs.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -layout: doc-page -title: "Implicit Instances" ---- - -Implicit instances define "canonical" values of given types -that can be synthesized by the compiler as arguments for -[given clauses](./inferable-params.html). Example: -```scala -trait Ord[T] { - def compare(x: T, y: T): Int - def (x: T) < (y: T) = compare(x, y) < 0 - def (x: T) > (y: T) = compare(x, y) > 0 -} - -implicit IntOrd for Ord[Int] { - def compare(x: Int, y: Int) = - if (x < y) -1 else if (x > y) +1 else 0 -} - -implicit ListOrd[T] for Ord[List[T]] given (ord: Ord[T]) { - def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match { - case (Nil, Nil) => 0 - case (Nil, _) => -1 - case (_, Nil) => +1 - case (x :: xs1, y :: ys1) => - val fst = ord.compare(x, y) - if (fst != 0) fst else compare(xs1, ys1) - } -} -``` -This code defines a trait `Ord` and two implicit definitions. `IntOrd` defines -an implicit instance of the type `Ord[Int]` whereas `ListOrd[T]` defines implicit instances of type `Ord[List[T]]` -for all types `T` that come with an implicit instance for `Ord[T]` themselves. -The `given` clause in `ListOrd` defines an implicit parameter. -Given clauses are further explained in the [next section](./inferable-params.html). - -## Anonymous Implicit Instances - -The name of an implicit instance can be left out. So the implicit instance definitions -of the last section can also be expressed like this: -```scala -implicit for Ord[Int] { ... } -implicit [T] for Ord[List[T]] given (ord: Ord[T]) { ... } -``` -If the name of an implicit instance is missing, the compiler will synthesize a name from -the type(s) in the `for` clause. - -## Alias Implicits - -An alias can be used to define an implicit instance that is equal to some expression. E.g.: -```scala -implicit global for ExecutionContext = new ForkJoinPool() -``` -This creates an implicit `global` of type `ExecutionContext` that resolves to the right hand side `new ForkJoinPool()`. -The first time `global` is accessed, a new `ForkJoinPool` is created, which is then -returned for this and all subsequent accesses to `global`. - -Alias implicits may be anonymous, e.g.
-```scala -implicit for Position = enclosingTree.position -``` -An alias implicit can have type parameters and given clauses just like any other implicit definition, but it can only implement a single type. - -## Implicit Instance Creation - -An implicit instance without type parameters or given clause is created on-demand, the first time it is accessed. It is not required to ensure safe publication, which means that different threads might create different instances for the same `implicit` definition. If an `implicit` definition has type parameters or a given clause, a fresh instance is created for each reference. - -## Syntax - -Here is the new syntax of implicit instances, seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -TmplDef ::= ... - | ‘implicit’ InstanceDef -InstanceDef ::= [id] [DefTypeParamClause] InstanceBody -InstanceBody ::= [‘for’ ConstrApp {‘,’ ConstrApp }] {GivenParamClause} [TemplateBody] - | ‘for’ Type {GivenParamClause} ‘=’ Expr -ConstrApp ::= SimpleConstrApp - | ‘(’ SimpleConstrApp {‘given’ (PrefixExpr | ParArgumentExprs)} ‘)’ -SimpleConstrApp ::= AnnotType {ArgumentExprs} -GivenParamClause ::= ‘given’ (‘(’ [DefParams] ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} -``` -The identifier `id` can be omitted only if either the `for` part or the template body is present. -If the `for` part is missing, the template body must define at least one extension method. diff --git a/docs/docs/reference/contextual-implicit/motivation.md b/docs/docs/reference/contextual-implicit/motivation.md deleted file mode 100644 index a2609a110a18..000000000000 --- a/docs/docs/reference/contextual-implicit/motivation.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: doc-page -title: "Overview" ---- - -### Critique of the Status Quo - -Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them. - -Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit, e.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164) -or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda. - -Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass-based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly.
- -Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses. - -Particular criticisms are: - -1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions - - ```scala - implicit def i1(implicit x: T): C[T] = ... - implicit def i2(x: T): C[T] = ... - ``` - - the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. But the problem remains that syntactically, conversions and values just look too similar for comfort. - - 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports. - - 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above. - - 4. The syntax of implicit parameters also has shortcomings. It starts with the position of `implicit` as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This is an irregular case with respect to the rest of Scala's syntax. Furthermore, while implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in - ```scala - def currentMap(implicit ctx: Context): Map[String, Int] - ``` - one cannot write `currentMap("abc")` since the string "abc" is taken as an explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular.
For the same reason, a method definition can only have one implicit parameter section and it must always come last. This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced. - - 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain. - -None of these shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be. - -Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006 and picked a similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones. - -Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design. - -### The New Design - -The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes: - - 1. [Implicit Instance Definitions](./instance-defs.html) are a new way to define basic terms that can be synthesized. They replace the old-style implicit-as-a-modifier form. The core principle is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types. - - 2. [Given Clauses](./inferable-params.html) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts. It also allows us to have several implicit parameter sections, and to have implicit parameters followed by normal ones. - - 3. [Import Implicit](./import-implied.html) is a new form of import that specifically imports implicit definitions and nothing else. New-style implicit instances _must be_ imported with `import implicit`; a plain import will no longer bring them into scope. Old-style implicit definitions can be imported with either form. - - 4. [Implicit Conversions](./conversions.html) are now expressed as implicit instances of a standard `Conversion` class. All other forms of implicit conversions will be phased out. - -This section also contains pages describing other language features that are related to context abstraction.
These are: - - - [Context Bounds](./context-bounds.html), which carry over unchanged. - - [Extension Methods](./extension-methods.html) replace implicit classes in a way that integrates better with typeclasses. - - [Implementing Typeclasses](./typeclasses.html) demonstrates how some common typeclasses can be implemented using the new constructs. - - [Typeclass Derivation](./derivation.html) introduces constructs to automatically derive implicit typeclass instances for ADTs. - - [Multiversal Equality](./multiversal-equality.html) introduces a special typeclass - to support type-safe equality. - - [Implicit Function Types](./query-types.html) introduce a way to abstract over implicit parameterization. - - [Implicit By-Name Parameters](./inferable-by-name-parameters.html) are an essential tool to define recursive implicits without looping. - - [Relationship with Scala 2 Implicits](./relationship-implicits.html) discusses the relationship between old-style and - new-style implicits and how to migrate from one to the other. - -Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define implicit instances instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit with normal arguments. There is a separate way to import implicits that does not allow them to hide in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax. - -This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly. - -Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I believe now that this is impossible. - - - First, some of the problems are clearly syntactic and require different syntax to solve them. - - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution. - - Third, even if we somehow succeeded with migration, we would still have the problem of how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away; and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have - to add modifications and qualifications to all existing literature and courseware, likely causing more confusion for beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated.
diff --git a/docs/docs/reference/contextual-implicit/multiversal-equality.md b/docs/docs/reference/contextual-implicit/multiversal-equality.md deleted file mode 100644 index 4f7ee6cf8306..000000000000 --- a/docs/docs/reference/contextual-implicit/multiversal-equality.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -layout: doc-page -title: "Multiversal Equality" ---- - -Previously, Scala had universal equality: Two values of any types -could be compared with each other with `==` and `!=`. This came from -the fact that `==` and `!=` are implemented in terms of Java's -`equals` method, which can also compare values of any two reference -types. - -Universal equality is convenient. But it is also dangerous since it -undermines type safety. For instance, let's assume one is left after some refactoring -with an erroneous program where a value `y` has type `S` instead of the correct type `T`. - -```scala -val x = ... // of type T -val y = ... // of type S, but should be T -x == y // typechecks, will always yield false -``` - -If `y` gets compared to other values of type `T`, -the program will still typecheck, since values of all types can be compared with each other. -But it will probably give unexpected results and fail at runtime. - -Multiversal equality is an opt-in way to make universal equality -safer. It uses a binary typeclass `Eql` to indicate that values of -two given types can be compared with each other. -The example above would report a type error if `S` or `T` were a class -that derives `Eql`, e.g. -```scala -class T derives Eql -``` -Alternatively, one can also define an `Eql` instance directly, like this: -```scala -implicit for Eql[T, T] = Eql.derived -``` -This definition effectively says that values of type `T` can (only) be -compared to other values of type `T` when using `==` or `!=`. The definition -affects type checking but it has no significance for runtime -behavior, since `==` always maps to `equals` and `!=` always maps to -the negation of `equals`. The right hand side `Eql.derived` of the definition -is a value that conforms to any `Eql[L, R]` type. Here is the definition of class -`Eql` and its companion object: -```scala -package scala -import annotation.implicitNotFound - -@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=") -sealed trait Eql[-L, -R] - -object Eql { - object derived extends Eql[Any, Any] -} -``` - -One can have several `Eql` instances for a type. For example, the four -definitions below make values of type `A` and type `B` comparable with -each other, but not comparable to anything else: - -```scala -implicit for Eql[A, A] = Eql.derived -implicit for Eql[B, B] = Eql.derived -implicit for Eql[A, B] = Eql.derived -implicit for Eql[B, A] = Eql.derived -``` -The `scala.Eql` object defines a number of `Eql` instances that together -define a rule book for what standard types can be compared (more details below). - -There's also a "fallback" instance named `eqlAny` that allows comparisons -over all types that do not themselves have an `Eql` instance.
`eqlAny` is -defined as follows: - -```scala -def eqlAny[L, R]: Eql[L, R] = Eql.derived -``` - -Even though `eqlAny` is not declared as an `implicit`, the compiler will still -construct an `eqlAny` instance as the answer to an implicit search for the -type `Eql[L, R]`, unless `L` or `R` have implicit `Eql` instances -defined on them, or the language feature `strictEquality` is enabled. - -The primary motivation for having `eqlAny` is backwards compatibility. -If this is of no concern, one can disable `eqlAny` by enabling the language -feature `strictEquality`. As for all language features, this can be done either -with an import - -```scala -import scala.language.strictEquality -``` -or with a command line option `-language:strictEquality`. - -## Deriving Eql Instances - -Instead of defining implicit `Eql` instances directly, it is often more convenient to derive them. Example: -```scala -class Box[T](x: T) derives Eql -``` -By the usual rules of [typeclass derivation](./derivation.html), -this generates the following `Eql` instance in the companion object of `Box`: -```scala -implicit [T, U] for Eql[Box[T], Box[U]] given Eql[T, U] = Eql.derived -``` -That is, two boxes are comparable with `==` or `!=` if their elements are. Examples: -```scala -new Box(1) == new Box(1L) // ok since there is an implicit for `Eql[Int, Long]` -new Box(1) == new Box("a") // error: can't compare -new Box(1) == 1 // error: can't compare -``` - -## Precise Rules for Equality Checking - -The precise rules for equality checking are as follows. - -If the `strictEquality` feature is enabled then -a comparison using `x == y` or `x != y` between values `x: T` and `y: U` -is legal if - - 1. there is an implicit for `Eql[T, U]`, or - 2. one of `T`, `U` is `Null`. - -In the default case where the `strictEquality` feature is not enabled the comparison is -also legal if - - 1. `T` and `U` are the same, or - 2. one of `T` and `U` is a subtype of the _lifted_ version of the other type, or - 3. neither `T` nor `U` have a _reflexive `Eql` instance_. - -Explanations: - - - _lifting_ a type `S` means replacing all references to abstract types - in covariant positions of `S` by their upper bound, and replacing - all refinement types in covariant positions of `S` by their parent. - - a type `T` has a _reflexive `Eql` instance_ if the implicit search for `Eql[T, T]` - succeeds. - -## Predefined Eql Instances - -The `Eql` object defines implicit instances for comparing - - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`, - - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`, - - `scala.collection.Seq`, and `scala.collection.Set`. - -Implicit instances are defined so that every one of these types has a reflexive `Eql` instance, and the following holds: - - - Primitive numeric types can be compared with each other. - - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_). - - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_). - - `Char` can be compared with `java.lang.Character` (and _vice versa_). - - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared - with each other if their element types can be compared. The two sequence types - need not be the same. - - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared - with each other if their element types can be compared. The two set types - need not be the same.
- - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_). - -## Why Two Type Parameters? - -One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional -implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, -we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. - -Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: -```scala -class List[+T] { - ... - def contains(x: Any): Boolean -} -``` -That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition -```scala - def contains(x: T): Boolean -``` -does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: -```scala - def contains[U >: T](x: U): Boolean -``` -This generic version of `contains` is the one used in the current (Scala 2.12) version of `List`. -It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. -However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter: -```scala - def contains[U >: T](x: U) given Eql[T, U]: Boolean // (1) -``` -This version of `contains` is equality-safe! More precisely, given -`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if -`x == y` is type-correct. - -Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: -```scala - def contains[U >: T](x: U) given Eql1[U]: Boolean // (2) -``` -This version could be applied just as widely as the original `contains(x: Any)` method, -since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that the fallback `Eql[A, B]` instance is available only if neither `A` nor `B` have a reflexive `Eql` instance. That rule simply cannot be expressed if there is a single type parameter for `Eql`. - -The situation is different under `-language:strictEquality`. In that case, -the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the -single and two-parameter versions would indeed coincide for most practical purposes. - -But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be insurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out nonsensical applications while still allowing all sensible ones. -So it can be done almost at any time, modulo binary compatibility concerns.
-On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` -unusable for all types that have not yet declared an `Eql1` instance, including all -types coming from Java. This is clearly unacceptable. It would lead to a situation where, -rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. - -For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality. - -In applications where `-language:strictEquality` is the default one could also introduce a one-parameter type alias such as -```scala -type Eq[-T] = Eql[T, T] -``` -Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only -work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere. - - -More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html) -and a [GitHub issue](https://github.com/lampepfl/dotty/issues/1247). diff --git a/docs/docs/reference/contextual-implicit/query-types-spec.md b/docs/docs/reference/contextual-implicit/query-types-spec.md deleted file mode 100644 index 0e4dae6cb66a..000000000000 --- a/docs/docs/reference/contextual-implicit/query-types-spec.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: doc-page -title: "Implicit Function Types - More Details" ---- - -## Syntax - -    Type              ::=  ... -                        |  `given' FunArgTypes `=>' Type -    Expr              ::=  ... -                        |  `given' FunParams `=>' Expr - -Implicit function types associate to the right, e.g. -`given S => given T => U` is the same as `given S => (given T => U)`. - -## Implementation - -Implicit function types are shorthands for class types that define `apply` -methods with implicit parameters. Specifically, the `N`-ary implicit function type -`given (T1, ..., TN) => R` is a shorthand for the class type -`ImplicitFunctionN[T1 , ... , TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`: -```scala -package scala -trait ImplicitFunctionN[-T1 , ... , -TN, +R] { - def apply given (x1: T1 , ... , xN: TN): R -} -``` -Implicit function types erase to normal function types, so these classes are -generated on the fly for typechecking, but not realized in actual code. - -Implicit function literals `given (x1: T1, ..., xn: Tn) => e` map -implicit parameters `xi` of types `Ti` to a result given by expression `e`. -The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names. - -If the expected type of the implicit function literal is of the form -`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and -the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti -= Si` is assumed. If the expected type of the implicit function literal is -some other type, all implicit parameter types must be explicitly given, and the expected type of `e` is undefined. The type of the implicit function literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened -type of `e`.
`T` must be equivalent to a type which does not refer to any of -the implicit parameters `xi`. - -The implicit function literal is evaluated as the instance creation -expression: -```scala -new scala.ImplicitFunctionN[T1, ..., Tn, T] { - def apply given (x1: T1, ..., xn: Tn): T = e -} -``` -In the case of a single untyped parameter, `given (x) => e` can be -abbreviated to `given x => e`. - -An implicit parameter may also be a wildcard represented by an underscore `_`. In -that case, a fresh name for the parameter is chosen arbitrarily. - -Note: The closing paragraph of the -[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions) -of Scala 2.12 is subsumed by implicit function types and should be removed. - -Implicit function literals `given (x1: T1, ..., xn: Tn) => e` are -automatically created for any expression `e` whose expected type is -`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is -itself an implicit function literal. This is analogous to the automatic -insertion of `scala.Function0` around expressions in by-name argument position. - -Implicit function types generalize to `N > 22` in the same way that function types do; see [the corresponding -documentation](https://dotty.epfl.ch/docs/reference/dropped-features/limit22.html). - -## Examples - -See the section on Expressiveness from [Simplicitly: foundations and -applications of implicit function -types](https://dl.acm.org/citation.cfm?id=3158130). I've extracted it in [this -Gist](https://gist.github.com/OlivierBlanvillain/234d3927fe9e9c6fba074b53a7bd9592), which might be easier to access than the PDF. - -### Type Checking - -After desugaring no additional typing rules are required for implicit function types. diff --git a/docs/docs/reference/contextual-implicit/query-types.md b/docs/docs/reference/contextual-implicit/query-types.md deleted file mode 100644 index 89a3b975b928..000000000000 --- a/docs/docs/reference/contextual-implicit/query-types.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -layout: doc-page -title: "Implicit Function Types" ---- - -_Implicit functions_ are functions with (only) implicit parameters. -Their types are _implicit function types_. Here is an example of an implicit function type: -```scala -type Contextual[T] = given Context => T -``` -A value of an implicit function type is applied to inferred arguments, in -the same way a method with a given clause is applied. For instance: -```scala - implicit ctx for Context = ... - - def f(x: Int): Contextual[Int] = ... - - f(2).given(ctx)   // explicit argument - f(2)              // argument is inferred -``` -Conversely, if the expected type of an expression `E` is an implicit function type -`given (T_1, ..., T_n) => U` and `E` is not already an -implicit function literal, `E` is converted to an implicit function literal by rewriting to -```scala - given (x_1: T_1, ..., x_n: T_n) => E -``` -where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed -before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` -are available as implicits in `E`. - -Like their types, implicit function literals are written with a `given` prefix. They differ from normal function literals in two ways: - - 1. Their parameters are implicit. - 2. Their types are implicit function types. - -For example, continuing with the previous definitions, -```scala - def g(arg: Contextual[Int]) = ...
- - g(22)      // is expanded to g(given ctx => 22) - - g(f(2))    // is expanded to g(given ctx => f(2).given(ctx)) - - g(given ctx => f(22).given(ctx)) // is left as it is -``` -### Example: Builder Pattern - -Implicit function types have considerable expressive power. For -instance, here is how they can support the "builder pattern", where -the aim is to construct tables like this: -```scala - table { - row { - cell("top left") - cell("top right") - } - row { - cell("bottom left") - cell("bottom right") - } - } -``` -The idea is to define classes for `Table` and `Row` that allow -addition of elements via `add`: -```scala - class Table { - val rows = new ArrayBuffer[Row] - def add(r: Row): Unit = rows += r - override def toString = rows.mkString("Table(", ", ", ")") - } - - class Row { - val cells = new ArrayBuffer[Cell] - def add(c: Cell): Unit = cells += c - override def toString = cells.mkString("Row(", ", ", ")") - } - - case class Cell(elem: String) -``` -Then, the `table`, `row` and `cell` constructor methods can be defined -in terms of implicit function types to avoid the plumbing boilerplate -that would otherwise be necessary. -```scala - def table(init: given Table => Unit) = { - implicit t for Table - init - t - } - - def row(init: given Row => Unit) given (t: Table) = { - implicit r for Row - init - t.add(r) - } - - def cell(str: String) given (r: Row) = - r.add(new Cell(str)) -``` -With that setup, the table construction code above compiles and expands to: -```scala - table { given ($t: Table) => - row { given ($r: Row) => - cell("top left").given($r) - cell("top right").given($r) - }.given($t) - row { given ($r: Row) => - cell("bottom left").given($r) - cell("bottom right").given($r) - }.given($t) - } -``` -### Example: Postconditions - -As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, implicit function types, and extension methods to provide a zero-overhead abstraction. - -```scala -object PostConditions { - opaque type WrappedResult[T] = T - - private object WrappedResult { - def wrap[T](x: T): WrappedResult[T] = x - def unwrap[T](x: WrappedResult[T]): T = x - } - - def result[T] given (r: WrappedResult[T]): T = WrappedResult.unwrap(r) - - def (x: T) ensuring [T](condition: given WrappedResult[T] => Boolean): T = { - implicit for WrappedResult[T] = WrappedResult.wrap(x) - assert(condition) - x - } -} - -object Test { - import PostConditions.{ensuring, result} - val s = List(1, 2, 3).sum.ensuring(result == 6) -} -``` -**Explanations**: We use an implicit function type `given WrappedResult[T] => Boolean` -as the type of the condition of `ensuring`. An argument to `ensuring` such as -`(result == 6)` will therefore have an implicit for `WrappedResult[T]` in -scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure -that we do not get unwanted implicits in scope (this is good practice in all cases -where implicit parameters are involved). Since `WrappedResult` is an opaque type alias, its -values need not be boxed, and since `ensuring` is added as an extension method, its argument -does not need boxing either.
Hence, the implementation of `ensuring` is about as efficient -as the best possible code one could write by hand: - -    { val result = List(1, 2, 3).sum -      assert(result == 6) -      result -    } - -### Reference - -For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html) -(which uses a different syntax that has been superseded). - -[More details](./query-types-spec.html) diff --git a/docs/docs/reference/contextual-implicit/relationship-implicits.md b/docs/docs/reference/contextual-implicit/relationship-implicits.md deleted file mode 100644 index 7cd67abbd81a..000000000000 --- a/docs/docs/reference/contextual-implicit/relationship-implicits.md +++ /dev/null @@ -1,176 +0,0 @@ ---- -layout: doc-page -title: Relationship with Scala 2 Implicits ---- - -Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features. - -## Simulating Contextual Abstraction with Implicits - -### Implicit Instances - -Implicit instances can be mapped to combinations of implicit objects and implicit methods together with normal classes. - - 1. Implicit instances without parameters are mapped to implicit objects. E.g., -    ```scala -    implicit IntOrd for Ord[Int] { ... } -    ``` -    maps to -    ```scala -    implicit object IntOrd extends Ord[Int] { ... } -    ``` - 2. Parameterized implicit instances are mapped to combinations of classes and implicit methods. E.g., -    ```scala -    implicit ListOrd[T] for Ord[List[T]] given (ord: Ord[T]) { ... } -    ``` -    maps to -    ```scala -    class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... } -    final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T] -    ``` - 3. Alias implicits map to implicit methods. If the implicit has neither type parameters nor a given clause, the result of creating an instance is cached in a variable. There are two cases that can be optimized: - -    - If the right hand side is a simple reference, we can -      use a forwarder to that reference without caching it. -    - If the right hand side is more complex, but still known to be a pure path, we can -      create a `val` that computes it ahead of time. - -    Examples: - -    ```scala -    implicit global for ExecutionContext = new ForkJoinContext() -    implicit config for Config = default.config - -    val ctx: Context -    implicit for Context = ctx -    ``` -    would map to -    ```scala -    private[this] var global$_cache: ExecutionContext | Null = null -    final implicit def global: ExecutionContext = { -      if (global$_cache == null) global$_cache = new ForkJoinContext() -      global$_cache -    } - -    final implicit val config: Config = default.config - -    val ctx: Context -    final implicit def Context_ev = ctx -    ``` - -### Anonymous Implicit Instances - -Anonymous implicit instances get compiler synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `IntOrd` and `ListOrd` implicits above were left out, the following names would be synthesized instead: -```scala - implicit Ord_Int_ev for Ord[Int] { ... } - implicit Ord_List_ev[T] for Ord[List[T]] { ... } -``` -The synthesized type names are formed from - - - the simple name(s) of the implemented type(s), leaving out any prefixes, - - the simple name(s) of the toplevel argument type constructors to these types, - - the suffix `_ev`.
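To make these rules concrete, here is how they would play out for an anonymous conversion instance. The mangled name shown in the comments is an illustration derived from the stated rules, not verified compiler output:

```scala
// Leaving out the name in an instance definition such as
implicit for Conversion[String, Token] { ... }
// would, by the rules above, synthesize a name of the form
// `Conversion_String_Token_ev`: the implemented type `Conversion`,
// its toplevel argument type constructors `String` and `Token`,
// and the suffix `_ev`.
```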
- -Anonymous implicit instances that define extension methods without also implementing a type -get their name from the name of the first extension method and the toplevel type -constructor of its first parameter. For example, the implicit -```scala - implicit { - def (xs: List[T]) second[T] = ... - } -``` -gets the synthesized name `second_of_List_T_ev`. - -### Implicit Parameters - -The new implicit parameter syntax with `given` corresponds largely to Scala 2's implicit parameters. E.g. -```scala - def max[T](x: T, y: T) given (ord: Ord[T]): T -``` -would be written -```scala - def max[T](x: T, y: T)(implicit ord: Ord[T]): T -``` -in Scala 2. The main difference concerns applications of such parameters. -Explicit arguments to parameters of given clauses _must_ be written using `given`, -mirroring the definition syntax. E.g., `max(2, 3).given(IntOrd)`. -Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern. - -The `the` method corresponds to `implicitly` in Scala 2. -It is precisely the same as the `the` method in Shapeless. -The difference between `the` (in both versions) and `implicitly` is -that `the` can return a more precise type than the type that was -asked for. - -### Context Bounds - -Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters. - -**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or -with a normal application. Once old-style implicits are deprecated, context bounds -will map to given clauses instead. - -### Extension Methods - -Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method -```scala - def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` -could be simulated to some degree by -```scala - implicit class CircleDeco(c: Circle) extends AnyVal { - def circumference: Double = c.radius * math.Pi * 2 - } -``` -Extension methods in implicit instances have no direct counterpart in Scala 2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases. - -### Typeclass Derivation - -Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving. - -### Implicit Function Types - -Implicit function types have no analogue in Scala 2. - -### Implicit By-Name Parameters - -Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless. - -## Simulating Scala 2 Implicits in Dotty - -### Implicit Conversions - -Implicit conversion methods in Scala 2 can be expressed as implicit instances of the -`scala.Conversion` class in Dotty. E.g.
instead of -```scala - implicit def stringToToken(str: String): Token = new KeyWord(str) -``` -one can write -```scala - implicit stringToToken for Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) - } -``` - -### Implicit Classes - -Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and an implicit `Conversion` instance. - -### Abstract Implicits - -An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias implicit. E.g., Scala 2's -```scala - implicit def symDeco: SymDeco -``` -can be expressed in Dotty as -```scala - def symDeco: SymDeco - implicit for SymDeco = symDeco -``` - -## Implementation Status and Timeline - -The Dotty implementation implements both Scala 2's implicits and the new abstractions. In fact, support for Scala 2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty. -Migration to the new abstractions will be supported by making automatic rewritings available. - -Depending on adoption patterns, old-style implicits might start to be deprecated in a version following Scala 3.0. diff --git a/docs/docs/reference/contextual-implicit/typeclasses.md b/docs/docs/reference/contextual-implicit/typeclasses.md deleted file mode 100644 index e1fdc9bc57a7..000000000000 --- a/docs/docs/reference/contextual-implicit/typeclasses.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -layout: doc-page -title: "Implementing Typeclasses" ---- - -Implicit instances, extension methods and context bounds -allow a concise and natural expression of _typeclasses_. Typeclasses are just traits -with canonical implementations defined by implicit instances.
Here are some examples of standard typeclasses: - -### Semigroups and monoids: - -```scala -trait SemiGroup[T] { - def (x: T) combine (y: T): T -} -trait Monoid[T] extends SemiGroup[T] { - def unit: T -} -object Monoid { - def apply[T] given Monoid[T] = the[Monoid[T]] -} - -implicit for Monoid[String] { - def (x: String) combine (y: String): String = x.concat(y) - def unit: String = "" -} - -implicit for Monoid[Int] { - def (x: Int) combine (y: Int): Int = x + y - def unit: Int = 0 -} - -def sum[T: Monoid](xs: List[T]): T = - xs.foldLeft(Monoid[T].unit)(_.combine(_)) -``` - -### Functors and monads: - -```scala -trait Functor[F[_]] { - def (x: F[A]) map [A, B] (f: A => B): F[B] -} - -trait Monad[F[_]] extends Functor[F] { - def (x: F[A]) flatMap [A, B] (f: A => F[B]): F[B] - def (x: F[A]) map [A, B] (f: A => B) = x.flatMap(f `andThen` pure) - - def pure[A](x: A): F[A] -} - -implicit ListMonad for Monad[List] { - def (xs: List[A]) flatMap [A, B] (f: A => List[B]): List[B] = - xs.flatMap(f) - def pure[A](x: A): List[A] = - List(x) -} - -implicit ReaderMonad[Ctx] for Monad[[X] => Ctx => X] { - def (r: Ctx => A) flatMap [A, B] (f: A => Ctx => B): Ctx => B = - ctx => f(r(ctx))(ctx) - def pure[A](x: A): Ctx => A = - ctx => x -} -``` diff --git a/docs/docs/reference/contextual-instance/context-bounds.md b/docs/docs/reference/contextual-instance/context-bounds.md deleted file mode 100644 index 3458c5cf6cd1..000000000000 --- a/docs/docs/reference/contextual-instance/context-bounds.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: doc-page -title: "Context Bounds" ---- - -## Context Bounds - -A context bound is a shorthand for expressing a common pattern of an inferable parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this: -```scala -def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) -``` -A bound like `: Ord` on a type parameter `T` of a method or class indicates an inferable parameter `given Ord[T]`. The inferable parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., -```scala -def f[T: C1 : C2, U: C3](x: T) given (y: U, z: V): R -``` -would expand to -```scala -def f[T, U](x: T) given (y: U, z: V) given C1[T], C2[T], C3[U]: R -``` -Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g. -```scala -def g[T <: B : C](x: T): R = ... -``` - -## Syntax - -``` -TypeParamBounds ::= [SubtypeBounds] {ContextBound} -ContextBound ::= ‘:’ Type -``` diff --git a/docs/docs/reference/contextual-instance/conversions.md b/docs/docs/reference/contextual-instance/conversions.md deleted file mode 100644 index 725fa566fd0d..000000000000 --- a/docs/docs/reference/contextual-instance/conversions.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: doc-page -title: "Implicit Conversions" ---- - -Implicit conversions are defined by implicit instances of the `scala.Conversion` class. -This class is defined in package `scala` as follows: -```scala -abstract class Conversion[-T, +U] extends (T => U) -``` -For example, here is an implicit conversion from `String` to `Token`: -```scala -instance of Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) -} -``` -Using an alias instance, this can be expressed more concisely as: -```scala -instance of Conversion[String, Token] = new KeyWord(_) -``` -An implicit conversion is applied automatically by the compiler in three situations: - -1. 
If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`. -2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`. -3. In an application `e.m(args)` with `e` of type `T`, if `T` does define - some member(s) named `m`, but none of these members can be applied to the arguments `args`. - -In the first case, the compiler looks for an implicit instance of class -`scala.Conversion` that maps an argument of type `T` to type `S`. In the second and third -case, it looks for an implicit instance of class `scala.Conversion` that maps an argument of type `T` -to a type that defines a member `m` which can be applied to `args` if present. -If such an instance `C` is found, the expression `e` is replaced by `C.apply(e)`. - -## Examples - -1. The `Predef` package contains "auto-boxing" conversions that map -primitive number types to subclasses of `java.lang.Number`. For instance, the -conversion from `Int` to `java.lang.Integer` can be defined as follows: -```scala -instance int2Integer of Conversion[Int, java.lang.Integer] = - java.lang.Integer.valueOf(_) -``` - -2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g. -```scala -object Completions { - - // The argument "magnet" type - enum CompletionArg { - case Error(s: String) - case Response(f: Future[HttpResponse]) - case Status(code: Future[StatusCode]) - } - object CompletionArg { - - // conversions defining the possible arguments to pass to `complete` - // these always come with CompletionArg - // They can be invoked explicitly, e.g. - // - // CompletionArg.fromStatusCode(statusCode) - - instance fromString of Conversion[String, CompletionArg] = Error(_) - instance fromFuture of Conversion[Future[HttpResponse], CompletionArg] = Response(_) - instance fromStatusCode of Conversion[Future[StatusCode], CompletionArg] = Status(_) - } - import CompletionArg._ - - def complete[T](arg: CompletionArg) = arg match { - case Error(s) => ... - case Response(f) => ... - case Status(code) => ... - } -} -``` -This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants. diff --git a/docs/docs/reference/contextual-instance/derivation.md b/docs/docs/reference/contextual-instance/derivation.md deleted file mode 100644 index d093893aecbb..000000000000 --- a/docs/docs/reference/contextual-instance/derivation.md +++ /dev/null @@ -1,383 +0,0 @@ ---- -layout: doc-page -title: Typeclass Derivation ---- - -Typeclass derivation is a way to generate instances of certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are `Eql`, `Ordering`, `Show`, or `Pickling`. 
Example: -```scala -enum Tree[T] derives Eql, Ordering, Pickling { - case Branch(left: Tree[T], right: Tree[T]) - case Leaf(elem: T) -} -``` -The `derives` clause generates implicit instances of the `Eql`, `Ordering`, and `Pickling` traits in the companion object `Tree`: -```scala -instance [T: Eql] of Eql[Tree[T]] = Eql.derived -instance [T: Ordering] of Ordering[Tree[T]] = Ordering.derived -instance [T: Pickling] of Pickling[Tree[T]] = Pickling.derived -``` - -### Deriving Types - -Besides enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are: - - - individual case classes or case objects - - sealed classes or traits that have only case classes and case objects as children. - - Examples: - - ```scala -case class Labelled[T](x: T, label: String) derives Eql, Show - -sealed trait Option[T] derives Eql -case class Some[T](x: T) extends Option[T] -case object None extends Option[Nothing] -``` - -The generated typeclass instances are placed in the companion objects `Labelled` and `Option`, respectively. - -### Derivable Types - -A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The type and implementation of a `derived` method are arbitrary, but typically it has a definition like this: -```scala - def derived[T] given Generic[T] = ... -``` -That is, the `derived` method takes an inferable parameter of type `Generic` that determines the _shape_ of the deriving type `T` and it computes the typeclass implementation according to that shape. An implicit instance of `Generic` is generated automatically for any type that derives a typeclass with a `derived` -method that refers to `Generic`. One can also derive `Generic` alone, which means a `Generic` instance is generated without any other type class instances. E.g.: -```scala -sealed trait ParseResult[T] derives Generic -``` -This is all a user of typeclass derivation has to know. The rest of this page contains information needed to be able to write a typeclass that can appear in a `derives` clause. In particular, it details the means provided for the implementation of data-generic `derived` methods. - -### The Shape Type - -For every class with a `derives` clause, the compiler computes the shape of that class as a type. For example, here is the shape type for the `Tree[T]` enum: -```scala -Cases[( - Case[Branch[T], (Tree[T], Tree[T])], - Case[Leaf[T], T *: Unit] -)] -``` -Informally, this states that - -> The shape of a `Tree[T]` is one of two cases: Either a `Branch[T]` with two -  elements of type `Tree[T]`, or a `Leaf[T]` with a single element of type `T`. - -The type constructors `Cases` and `Case` come from the companion object of a class -`scala.compiletime.Shape`, which is defined in the standard library as follows: -```scala -sealed abstract class Shape - -object Shape { - -  /** A sum with alternative types `Alts` */ -  case class Cases[Alts <: Tuple] extends Shape - -  /** A product type `T` with element types `Elems` */ -  case class Case[T, Elems <: Tuple] extends Shape -} -``` - -Here is the shape type for `Labelled[T]`: -```scala -Case[Labelled[T], (T, String)] -``` -And here is the one for `Option[T]`: -```scala -Cases[( - Case[Some[T], T *: Unit], - Case[None.type, Unit] -)] -``` -Note that an empty element tuple is represented as type `Unit`. A single-element tuple -is represented as `T *: Unit` since there is no direct syntax for such tuples: `(T)` is just `T` in parentheses, not a tuple.
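To make the scheme concrete for a product with more than one element, here is the shape a hypothetical two-field case class would get (the class `Pair` is made up for this illustration):

```scala
// A hypothetical product type with two elements:
case class Pair[A, B](fst: A, snd: B) derives Eql

// Following the scheme above, its shape type would be the single case
//
//   Case[Pair[A, B], (A, B)]
//
// where the element tuple (A, B) is shorthand for A *: B *: Unit.
```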
- -### The Generic Typeclass - -For every class `C[T_1,...,T_n]` with a `derives` clause, the compiler generates in the companion object of `C` an implicit instance of `Generic[C[T_1,...,T_n]]` that follows the outline below: -```scala -instance [T_1, ..., T_n] of Generic[C[T_1,...,T_n]] { - type Shape = ... - ... -} -``` -where the right hand side of `Shape` is the shape type of `C[T_1,...,T_n]`. -For instance, the definition -```scala -enum Result[+T, +E] derives Logging { - case class Ok[T](result: T) - case class Err[E](err: E) -} -``` -would produce: -```scala -object Result { - import scala.compiletime.Shape._ - - instance [T, E] of Generic[Result[T, E]] { - type Shape = Cases[( - Case[Ok[T], T *: Unit], - Case[Err[E], E *: Unit] - )] - ... - } -} -``` -The `Generic` class is defined in package `scala.reflect`. - -```scala -abstract class Generic[T] { - type Shape <: scala.compiletime.Shape - - /** The mirror corresponding to ADT instance `x` */ - def reflect(x: T): Mirror - - /** The ADT instance corresponding to given `mirror` */ - def reify(mirror: Mirror): T - - /** The companion object of the ADT */ - def common: GenericClass -} -``` -It defines the `Shape` type for the ADT `T`, as well as two methods that map between a -type `T` and a generic representation of `T`, which we call a `Mirror`: -The `reflect` method maps an instance value of the ADT `T` to its mirror whereas -the `reify` method goes the other way. There's also a `common` method that returns -a value of type `GenericClass` which contains information that is the same for all -instances of a class (right now, this consists of the runtime `Class` value and -the names of the cases and their parameters). - -### Mirrors - -A mirror is a generic representation of an instance value of an ADT. `Mirror` objects have three components: - - - `adtClass: GenericClass`: The representation of the ADT class - - `ordinal: Int`: The ordinal number of the case among all cases of the ADT, starting from 0 - - `elems: Product`: The elements of the instance, represented as a `Product`. - - The `Mirror` class is defined in package `scala.reflect` as follows: - -```scala -class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) { - -  /** The `n`'th element of this generic case */ -  def apply(n: Int): Any = elems.productElement(n) - -  /** The name of the constructor of the case reflected by this mirror */ -  def caseLabel: String = adtClass.label(ordinal)(0) - -  /** The label of the `n`'th element of the case reflected by this mirror */ -  def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1) -} -``` - -### GenericClass - -Here's the API of `scala.reflect.GenericClass`: - -```scala -class GenericClass(val runtimeClass: Class[_], labelsStr: String) { - -  /** A mirror of a case with ordinal number `ordinal` and elements as given by `product` */ -  def mirror(ordinal: Int, product: Product): Mirror = -    new Mirror(this, ordinal, product) - -  /** A mirror with elements given as an array */ -  def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror = -    mirror(ordinal, new ArrayProduct(elems)) - -  /** A mirror with an initial empty array of `numElems` elements, to be filled in. */ -  def mirror(ordinal: Int, numElems: Int): Mirror = -    mirror(ordinal, new Array[AnyRef](numElems)) - -  /** A mirror of a case with no elements */ -  def mirror(ordinal: Int): Mirror = -    mirror(ordinal, EmptyProduct) - -  /** Case and element labels as a two-dimensional array.
- * Each row of the array contains a case label, followed by the labels of the elements of that case. -   */ -  val label: Array[Array[String]] = ... -} -``` - -The class provides four overloaded methods to create mirrors. The first of these is invoked by the `reflect` method that maps an ADT instance to its mirror. It simply passes the -instance itself (which is a `Product`) to the second parameter of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of `mirror` are typically invoked by typeclass methods that create instances from mirrors. An example would be an `unpickle` method that first creates an array of elements, then creates -a mirror over that array, and finally uses the `reify` method in `Generic` to create the ADT instance. The fourth version of `mirror` is used to create mirrors of instances that do not have any elements. - -### How to Write Generic Typeclasses - -Based on the machinery developed so far it becomes possible to define type classes generically. This means that the `derived` method will compute a type class instance for any ADT that has a `Generic` instance, recursively. -The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and instance matches. As an example, here is one possible implementation of a generic `Eql` type class, with explanations. Let's assume `Eql` is defined by the following trait: -```scala -trait Eql[T] { - def eql(x: T, y: T): Boolean -} -``` -We need to implement a method `Eql.derived` that produces an implicit instance of `Eql[T]` provided -there exists an implicit instance of `Generic[T]`. Here's a possible solution: -```scala -  inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] { -    def eql(x: T, y: T): Boolean = { -      val mx = ev.reflect(x)                    // (1) -      val my = ev.reflect(y)                    // (2) -      inline erasedValue[ev.Shape] match { -        case _: Cases[alts] => -          mx.ordinal == my.ordinal &&           // (3) -          eqlCases[alts](mx, my, 0)             // [4] -        case _: Case[_, elems] => -          eqlElems[elems](mx, my, 0)            // [5] -      } -    } -  } -``` -The implementation of the inline method `derived` creates an instance of `Eql[T]` and implements its `eql` method. The right-hand side of `eql` mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e. -`(1)`, `(2)`, `(3)`. Compile-time calls that expand to runtime code are marked with a number in brackets, i.e. `[4]`, `[5]`. The implementation of `eql` consists of the following steps. - - 1. Map the compared values `x` and `y` to their mirrors using the `reflect` method of the implicitly passed `Generic` instance `(1)`, `(2)`. - 2. Match at compile-time against the shape of the ADT given in `ev.Shape`. Dotty does not have a construct for matching types directly, but we can emulate it using an `inline` match over an `erasedValue`. Depending on the actual type `ev.Shape`, the match will reduce at compile time to one of its two alternatives. - 3. If `ev.Shape` is of the form `Cases[alts]` for some tuple `alts` of alternative types, the equality test consists of comparing the ordinal values of the two mirrors `(3)` and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the `eqlCases` call `[4]`. - 4.
If `ev.Shape` is of the form `Case[_, elems]` for some tuple `elems` of element types, the elements of the case are compared by code that results from the compile-time expansion of the `eqlElems` call `[5]`. - -Here is a possible implementation of `eqlCases`: -```scala -  inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean = -    inline erasedValue[Alts] match { -      case _: (Shape.Case[_, elems] *: alts1) => -        if (mx.ordinal == n)                    // (6) -          eqlElems[elems](mx, my, 0)            // [7] -        else -          eqlCases[alts1](mx, my, n + 1)        // [8] -      case _: Unit => -        throw new MatchError(mx.ordinal)        // (9) -    } -``` -The inline method `eqlCases` takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances `x` and `y` to be compared and an integer `n` that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values. - -If the list of alternatives `Alts` consists of a case of type `Case[_, elems]`, possibly followed by further cases in `alts1`, we generate the following code: - - 1. Compare the `ordinal` value of `mx` (a runtime value) with the case number `n` (a compile-time value translated to a constant in the generated code) in an if-then-else `(6)`. - 2. In the then-branch of the conditional we have that the `ordinal` value of both mirrors -    matches the number of the case with elements `elems`. Proceed by comparing the elements -    of the case in code expanded from the `eqlElems` call `[7]`. - 3. In the else-branch of the conditional we have that the present case does not match -    the ordinal value of both mirrors. Proceed by trying the remaining cases in `alts1` using -    code expanded from the `eqlCases` call `[8]`. - - If the list of alternatives `Alts` is the empty tuple, there are no further cases to check. - This place in the code should not be reachable at runtime. Therefore an appropriate - implementation is to throw a `MatchError` or some other runtime exception `(9)`. - -The `eqlElems` method compares the elements of two mirrors that are known to have the same -ordinal number, which means they represent the same case of the ADT. Here is a possible -implementation: -```scala -  inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = -    inline erasedValue[Elems] match { -      case _: (elem *: elems1) => -        tryEql[elem](                           // [12] -          xs(n).asInstanceOf[elem],             // (10) -          ys(n).asInstanceOf[elem]) &&          // (11) -        eqlElems[elems1](xs, ys, n + 1)         // [13] -      case _: Unit => -        true                                    // (14) -    } -``` -`eqlElems` takes as arguments the two mirrors of the elements to compare and a compile-time index `n`, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type `Elems` of all element types that remain to be tested. If that type is -non-empty, say of form `elem *: elems1`, the following code is produced: - - 1. Access the `n`'th elements of both mirrors and cast them to the current element type `elem` -    `(10)`, `(11)`. Note that because of the way runtime reflection mirrors compile-time `Shape` types, the casts are guaranteed to succeed. - 2. Compare the element values using code expanded by the `tryEql` call `[12]`. - 3. "And" the result with code that compares the remaining elements using a recursive call -    to `eqlElems` `[13]`. - - If type `Elems` is empty, there are no more elements to be compared, so the comparison's result is `true`.
`(14)` - - Since `eqlElems` is an inline method, its recursive calls are unrolled. The end result is a conjunction `test_1 && ... && test_n && true` of test expressions produced by the `tryEql` calls. - -The last, and in a sense most interesting, part of the derivation is the comparison of a pair of element values in `tryEql`. Here is the definition of this method: -```scala -  inline def tryEql[T](x: T, y: T) = instance match { -    case ev: Eql[T] => -      ev.eql(x, y)                              // (15) -    case _ => -      error("No `Eql` instance was found for $T") -  } -``` -`tryEql` is an inline method that takes an element type `T` and two element values of that type as arguments. It is defined using an `instance match` that tries to find an implicit instance of `Eql[T]`. If an instance `ev` is found, it proceeds by comparing the arguments using `ev.eql`. On the other hand, if no instance is found -this signals a compilation error: the user tried a generic derivation of `Eql` for a class with an element type that does not support an `Eql` instance itself. The error is signaled by -calling the `error` method defined in `scala.compiletime`. - -**Note:** At the moment our error diagnostics for metaprogramming do not yet support interpolated string arguments for the `scala.compiletime.error` method that is called in the second case above. As an alternative, one can simply leave off the second case; then a missing typeclass would result in a "failure to reduce match" error. - -**Example:** Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived `Eql` instance of class `Tree`. - -```scala -instance [T] of Eql[Tree[T]] given (elemEq: Eql[T]) { - def eql(x: Tree[T], y: Tree[T]): Boolean = { - val ev = the[Generic[Tree[T]]] - val mx = ev.reflect(x) - val my = ev.reflect(y) - mx.ordinal == my.ordinal && { - if (mx.ordinal == 0) { - this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) && - this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]]) - } - else if (mx.ordinal == 1) { - elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T]) - } - else throw new MatchError(mx.ordinal) - } - } -} -``` - -One important difference between this approach and Scala 2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass instances of elements recursively using the generic derivation framework. There must be an implicit instance of type `Eql[T]` (which can of course be produced in turn using `Eql.derived`), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable. - -### Derived Instances Elsewhere - -Sometimes one would like to derive a typeclass instance for an ADT after the ADT is defined, without being able to change the code of the ADT itself. -To do this, simply define an instance with the `derived` method of the typeclass as right-hand side. E.g., to implement `Ordering` for `Option`, define: -```scala -instance [T: Ordering] of Ordering[Option[T]] = Ordering.derived -``` -Usually, the `Ordering.derived` method has an inferable parameter of type -`Generic[Option[T]]`. Since the `Option` trait has a `derives` clause, -the necessary implicit instance is already present in the companion object of `Option`.
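With that instance in scope, the derived ordering can then be summoned and used like any other (a sketch, assuming the `Option` ADT shown earlier and an `Ordering` typeclass that exposes a `compare` method):

```scala
// Summon the derived instance and compare two options (sketch;
// the `compare` method on `Ordering` is an assumption here):
val ord = the[Ordering[Option[Int]]]
ord.compare(Some(1), Some(2))
```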
-If the ADT in question does not have a `derives` clause, an implicit instance of `Generic` -would still be synthesized by the compiler at the point where `derived` is called. -This is similar to the situation with type tags or class tags: If no instance is found, -the compiler will synthesize one. - -### Syntax - -``` -Template          ::=  InheritClauses [TemplateBody] -EnumDef           ::=  id ClassConstr InheritClauses EnumBody -InheritClauses    ::=  [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}] -ConstrApps        ::=  ConstrApp {‘with’ ConstrApp} -                    |  ConstrApp {‘,’ ConstrApp} -``` - -### Discussion - -The typeclass derivation framework is quite small and low-level. There are essentially -two pieces of infrastructure in the compiler-generated `Generic` instances: - - - a type representing the shape of an ADT, - - a way to map between ADT instances and generic mirrors. - -Generic mirrors make use of the already existing `Product` infrastructure for case -classes, which means they are efficient and their generation requires little code. - -Generic mirrors can be so simple because, just like `Product`s, they are weakly -typed. On the other hand, this means that code for generic typeclasses has to -ensure that type exploration and value selection proceed in lockstep and it -has to assert this conformance in some places using casts. If generic typeclasses -are correctly written these casts will never fail. - -It could make sense to explore a higher-level framework that encapsulates all casts -in the framework. This could give more guidance to the typeclass implementer. -It also seems quite possible to put such a framework on top of the lower-level -mechanisms presented here. diff --git a/docs/docs/reference/contextual-instance/extension-methods.md b/docs/docs/reference/contextual-instance/extension-methods.md deleted file mode 100644 index 29a0420501c5..000000000000 --- a/docs/docs/reference/contextual-instance/extension-methods.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -layout: doc-page -title: "Extension Methods" ---- - -Extension methods allow one to add methods to a type after the type is defined. Example: - -```scala -case class Circle(x: Double, y: Double, radius: Double) - -def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` - -Like regular methods, extension methods can be invoked with infix `.`: - -```scala - val circle = Circle(0, 0, 1) - circle.circumference -``` - -### Translation of Extension Methods - -Extension methods are methods that have a parameter clause in front of the defined -identifier. They translate to methods where the leading parameter section is moved -to after the defined identifier. So, the definition of `circumference` above translates -to the following plain method, and can also be invoked as such: -```scala -def circumference(c: Circle): Double = c.radius * math.Pi * 2 - -assert(circle.circumference == circumference(circle)) -``` - -### Translation of Calls to Extension Methods - -When is an extension method applicable? There are two possibilities. - - - An extension method is applicable if it is visible under a simple name, by being defined -   or inherited or imported in a scope enclosing the application. - - An extension method is applicable if it is a member of some implicit instance at the point of the application. - -As an example, consider an extension method `longestStrings` on `Seq[String]` defined in a trait `StringSeqOps`.
- -```scala -trait StringSeqOps { - def (xs: Seq[String]) longestStrings = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} -``` -We can make the extension method available by defining an implicit instance of `StringSeqOps`, like this: -```scala -instance ops1 of StringSeqOps -``` -Then -```scala -List("here", "is", "a", "list").longestStrings -``` -is legal everywhere `ops1` is available as an implicit. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. - -```scala -object ops2 extends StringSeqOps -import ops2.longestStrings -List("here", "is", "a", "list").longestStrings -``` -The precise rules for resolving a selection to an extension method are as follows. - -Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional, -and where `T` is the expected type. The following two rewritings are tried in order: - - 1. The selection is rewritten to `m[Ts](e)`. - 2. If the first rewriting does not typecheck with expected type `T`, and there is an implicit `i` -    in either the current scope or in the implicit scope of `T`, and `i` defines an extension -    method named `m`, then the selection is expanded to `i.m[Ts](e)`. -    This second rewriting is attempted at the time where the compiler also tries an implicit conversion -    from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results. - -So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided -`circle` has type `Circle` and `CircleOps` is an eligible implicit (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`). - -### Implicit Instances for Extension Methods - -Implicits that wrap extension methods can also be defined without an `of` clause. E.g., - -```scala -instance StringOps { - def (xs: Seq[String]) longestStrings: Seq[String] = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} - -instance { - def (xs: List[T]) second[T] = xs.tail.head -} -``` -If such an instance is anonymous (as in the second example above), its name is synthesized from the name -of the first defined extension method. - -### Operators - -The extension method syntax also applies to the definition of operators. -In each case the definition syntax mirrors the way the operator is applied. -Examples: -```scala - def (x: String) < (y: String) = ... - def (x: Elem) +: (xs: Seq[Elem]) = ... - - "ab" < "c" - 1 +: List(2, 3) -``` -The two definitions above translate to -```scala - def < (x: String)(y: String) = ... - def +: (xs: Seq[Elem])(x: Elem) = ... -``` -Note the swap of the two parameters `x` and `xs` when translating -the right-binding operator `+:` to an extension method. This is analogous -to the implementation of right binding operators as normal methods. - -### Generic Extensions - -The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: - -```scala -def (xs: List[T]) second [T] = - xs.tail.head - -def (xs: List[List[T]]) flattened [T] = - xs.foldLeft[List[T]](Nil)(_ ++ _) - -def (x: T) + [T : Numeric](y: T): T = - the[Numeric[T]].plus(x, y) -``` - -As usual, type parameters of the extension method follow the defined method name. Nevertheless, such type parameters can already be used in the preceding parameter clause.
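For illustration, here is how the first two of these generic extension methods would be used once they are in scope (a sketch; the expected results are shown in comments):

```scala
List(1, 2, 3).second                  // 2 (the head of the tail)
List(List(1, 2), List(3)).flattened   // List(1, 2, 3)
```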
- - -### Syntax - -The required syntax extension just adds one clause for extension methods relative -to the [current syntax](https://github.com/lampepfl/dotty/blob/master/docs/docs/internals/syntax.md). -``` -DefSig            ::=  ... -                    |  ‘(’ DefParam ‘)’ [nl] id [DefTypeParamClause] DefParamClauses -``` - - - - diff --git a/docs/docs/reference/contextual-instance/import-implied.md b/docs/docs/reference/contextual-instance/import-implied.md deleted file mode 100644 index afe32da41c3c..000000000000 --- a/docs/docs/reference/contextual-instance/import-implied.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -layout: doc-page -title: "Instance Imports" ---- - -A special form of import is used to import implicit instances. Example: -```scala -object A { - class TC - instance tc of TC - def f given TC = ??? -} -object B { - import A._ - import instance A._ -} -``` -In the code above, the `import A._` clause of object `B` will import all members -of `A` _except_ the instance `tc`. Conversely, the second import `import instance A._` will import _only_ that instance. - -Generally, a normal import clause brings all members except implicit instances into scope whereas an `import instance` clause brings only implicit instances into scope. - -There are two main benefits arising from these rules: - - - It is made clearer where instance values in scope are coming from. -   In particular, it is not possible to hide imported instance values -   in a long list of regular imports. - - It enables importing all instance values -   without importing anything else. This is particularly important since implicit -   instances can be anonymous, so the usual recourse of using named imports is not -   practical. - -### Relationship with Old-Style Implicits - -The rules of instance imports above have the consequence that a library -would have to migrate in lockstep with all its users from old-style implicit definitions and -normal imports to instance definitions and instance imports. - -The following modifications avoid this hurdle to migration. - - 1. An instance import also brings old-style implicits into scope. So, in Scala 3.0 -    an old-style implicit definition can be brought into scope either by a normal or -    by an instance import. - - 2. In Scala 3.1, old-style implicits accessed implicitly through a normal import -    will give a deprecation warning. - - 3. In some version after 3.1, old-style implicits accessed implicitly through a normal import -    will give a compiler error. - -These rules mean that library users can use `import instance` to access old-style implicits in Scala 3.0, -and will be gently nudged and then forced to do so in later versions. Libraries can then switch to -instance definitions once their user base has migrated. diff --git a/docs/docs/reference/contextual-instance/inferable-by-name-parameters.md b/docs/docs/reference/contextual-instance/inferable-by-name-parameters.md deleted file mode 100644 index 52bb75d3af9e..000000000000 --- a/docs/docs/reference/contextual-instance/inferable-by-name-parameters.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -layout: doc-page -title: "Inferable By-Name Parameters" ---- - -Inferable by-name parameters can be used to avoid a divergent inferred expansion. Example: -```scala -trait Codec[T] { - def write(x: T): Unit -} - -instance intCodec of Codec[Int] = ???
- -instance optionCodec[T] of Codec[Option[T]] given (ev: => Codec[T]) { - def write(xo: Option[T]) = xo match { - case Some(x) => ev.write(x) - case None => - } -} - -val s = the[Codec[Option[Int]]] - -s.write(Some(33)) -s.write(None) -``` -As is the case for a normal by-name parameter, the argument for the inferable parameter `ev` -is evaluated on demand. In the example above, if the option value `xo` is `None`, the argument for `ev` is -not evaluated at all. - -The synthesized argument for an inferable parameter is backed by a local val -if this is necessary to prevent an otherwise diverging expansion. - -The precise steps for constructing an inferable argument for a by-name parameter of type `=> T` are as follows. - - 1. Create a new implicit instance of type `T`: - -    ```scala -    instance lv of T = ??? -    ``` -    where `lv` is an arbitrary fresh name. - - 1. This instance is not immediately available as a candidate for argument inference (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an inferred argument to a by-name parameter. - - 1. If this search succeeds with expression `E`, and `E` contains references to `lv`, replace `E` by - - -    ```scala -    { instance lv of T = E; lv } -    ``` - -    Otherwise, return `E` unchanged. - -In the example above, the definition of `s` would be expanded as follows. - -```scala -val s = the[Test.Codec[Option[Int]]]( -          optionCodec[Int](intCodec)) -``` - -No local instance was generated because the synthesized argument is not recursive. - -### Reference - -For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998) -and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html). diff --git a/docs/docs/reference/contextual-instance/inferable-params.md b/docs/docs/reference/contextual-instance/inferable-params.md deleted file mode 100644 index 92b79c6823ce..000000000000 --- a/docs/docs/reference/contextual-instance/inferable-params.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -layout: doc-page -title: "Given Clauses" ---- - -Functional programming tends to express most dependencies as simple function parameterization. -This is clean and powerful, but it sometimes leads to functions that take many parameters and -call trees where the same value is passed over and over again in long call chains to many -functions. Given clauses can help here since they enable the compiler to synthesize -repetitive arguments instead of the programmer having to write them explicitly. - -For example, given the [instance definitions](./instance-defs.md) of the previous section, -a maximum function that works for any arguments for which an ordering exists can be defined as follows: -```scala -def max[T](x: T, y: T) given (ord: Ord[T]): T = - if (ord.compare(x, y) < 1) y else x -``` -Here, the part following `given` introduces a constraint that `T` is ordered, or, otherwise put, that an implicit instance for `Ord[T]` exists. -That instance is passed as an _implicit parameter_ to the method. Inside the method, the implicit instance can be accessed under the name `ord`. - -The `max` method can be applied as follows: -```scala -max(2, 3) given IntOrd -``` -The `given IntOrd` part establishes `IntOrd` as the instance to satisfy the constraint `Ord[Int]`. -It does this by providing the `IntOrd` value as an argument for the implicit `ord` parameter. -But the point of implicit parameters is that this argument can also be left out (and it usually is).
-So the following applications are equally valid: -```scala -max(2, 3) -max(List(1, 2, 3), Nil) -``` - -## Anonymous Inferable Parameters - -In many situations, the name of an implicit parameter of a method need not be mentioned explicitly at all, -since it is only used as a synthesized instance for other constraints. In that case one can avoid defining -a parameter name and just provide its type. Example: -```scala -def maximum[T](xs: List[T]) given Ord[T]: T = - xs.reduceLeft(max) -``` -`maximum` takes an implicit parameter of type `Ord` only to pass it on as an implicit argument to `max`. The name of the parameter is left out. - -Generally, implicit parameters may be given either as a parameter list `(p_1: T_1, ..., p_n: T_n)` or as a sequence of types, separated by commas. - -## Inferring Complex Arguments - -Here are two other methods that require implicits of type `Ord[T]`: -```scala -def descending[T] given (asc: Ord[T]): Ord[T] = new Ord[T] { - def compare(x: T, y: T) = asc.compare(y, x) -} - -def minimum[T](xs: List[T]) given Ord[T] = - maximum(xs) given descending -``` -The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`. -With this setup, the following calls are all well-formed, and they all normalize to the last one: -```scala -minimum(xs) -maximum(xs) given descending -maximum(xs) given (descending given ListOrd) -maximum(xs) given (descending given (ListOrd given IntOrd)) -``` - -## Mixing Inferable And Normal Parameters - -Inferable parameters can be freely mixed with normal parameters. -An inferable parameter may be followed by a normal parameter and _vice versa_. -There can be several inferable parameter lists in a definition. Example: -```scala -def f given (u: Universe) (x: u.T) given Context = ... - -instance global of Universe { type T = String ... } -instance ctx of Context { ... } -``` -Then the following calls are all valid (and normalize to the last one) -```scala -f("abc") -(f given global)("abc") -f("abc") given ctx -(f given global)("abc") given ctx -``` - -## Summoning Instances - -A method `the` in `Predef` summons the implicit instance for a given type. For example, the instance for `Ord[List[Int]]` is generated by -```scala -the[Ord[List[Int]]] // reduces to ListOrd given IntOrd -``` -The `the` method is simply defined as the (non-widening) identity function over an implicit parameter. -```scala -def the[T] given (x: T): x.type = x -``` -Functions like `the` that have only implicit parameters are also called _context queries_. - -## Syntax - -Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -ClsParamClause ::= ... - | ‘given’ (‘(’ [ClsParams] ‘)’ | GivenTypes) -DefParamClause ::= ... - | GivenParamClause -GivenParamClause ::= ‘given’ (‘(’ DefParams ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} - -InfixExpr ::= ... - | InfixExpr ‘given’ (InfixExpr | ParArgumentExprs) -``` diff --git a/docs/docs/reference/contextual-instance/instance-defs.md b/docs/docs/reference/contextual-instance/instance-defs.md deleted file mode 100644 index 044570e412d6..000000000000 --- a/docs/docs/reference/contextual-instance/instance-defs.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: doc-page -title: "Instance Definitions" ---- - -Instance definitions define "canonical" values of given types -that can be synthesized by the compiler. 
Typically, such values are -used as implicit arguments for constraints in [given clauses](./inferable-params.html). Example: - -```scala -trait Ord[T] { - def compare(x: T, y: T): Int - def (x: T) < (y: T) = compare(x, y) < 0 - def (x: T) > (y: T) = compare(x, y) > 0 -} - -instance IntOrd of Ord[Int] { - def compare(x: Int, y: Int) = - if (x < y) -1 else if (x > y) +1 else 0 -} - -instance ListOrd[T] of Ord[List[T]] given (ord: Ord[T]) { - def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match { - case (Nil, Nil) => 0 - case (Nil, _) => -1 - case (_, Nil) => +1 - case (x :: xs1, y :: ys1) => - val fst = ord.compare(x, y) - if (fst != 0) fst else compare(xs1, ys1) - } -} -``` -This code defines a trait `Ord` and two instance definitions. `IntOrd` defines -an implicit instance of type `Ord[Int]` whereas `ListOrd[T]` defines implicit instances of type `Ord[List[T]]` -for all types `T` that come with an instance for `Ord[T]` themselves. -The `given` clause in `ListOrd` defines an [implicit parameter](./inferable-params.html). -Given clauses are further explained in the next section. - -## Anonymous Instance Definitions - -The name of an implicit instance can be left out. So the instance definitions -of the last section can also be expressed like this: -```scala -instance of Ord[Int] { ... } -instance [T] of Ord[List[T]] given Ord[T] { ... } -``` -If a name is not given, the compiler will synthesize one from the type(s) in the `of` clause. - -## Alias Instances - -An alias instance defines an implicit instance that is equal to some expression. E.g., assuming a global method `currentThreadPool` returning a value with a member `context`, one could define: -```scala -instance ctx of ExecutionContext = currentThreadPool().context -``` -This creates an implicit instance `ctx` of type `ExecutionContext` that resolves to the right-hand side `currentThreadPool().context`. -Each time an instance for `ExecutionContext` is demanded, the result of evaluating the right-hand side expression is returned. - -Alias instances may be anonymous, e.g. -```scala -instance of Position = enclosingTree.position -``` -An alias instance can have type and context parameters just like any other instance definition, but it can only implement a single type. - -## Syntax - -Here is the new syntax of instance definitions, seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -TmplDef ::= ... - | ‘instance’ InstanceDef -InstanceDef ::= [id] [DefTypeParamClause] InstanceBody -InstanceBody ::= [‘of’ ConstrApp {‘,’ ConstrApp }] {GivenParamClause} [TemplateBody] - | ‘of’ Type {GivenParamClause} ‘=’ Expr -ConstrApp ::= AnnotType {ArgumentExprs} - | ‘(’ ConstrApp {‘given’ (InfixExpr | ParArgumentExprs)} ‘)’ -GivenParamClause ::= ‘given’ (‘(’ [DefParams] ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} -``` -The identifier `id` can be omitted only if either the `of` part or the template body is present. -If the `of` part is missing, the template body must define at least one extension method. diff --git a/docs/docs/reference/contextual-instance/motivation.md b/docs/docs/reference/contextual-instance/motivation.md deleted file mode 100644 index 366e78130c64..000000000000 --- a/docs/docs/reference/contextual-instance/motivation.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: doc-page -title: "Overview" ---- - -### Critique of the Status Quo - -Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context.
They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them. - -Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit. E.g Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164) -or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda. - -Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly. - -Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses. - -Particular criticisms are: - -1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions - - ```scala - implicit def i1(implicit x: T): C[T] = ... - implicit def i2(x: T): C[T] = ... - ``` - - the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. But the problem remains that syntactically, conversions and values just look too similar for comfort. - - 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports. - - 3. The syntax of implicit definitions is too minimal. 
It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above. - - 4. The syntax of implicit parameters also has shortcomings. It starts with the position of `implicit` as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This represents an irregular case with respect to the rest of Scala's syntax. Furthermore, while implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in - ```scala - def currentMap(implicit ctx: Context): Map[String, Int] - ``` - one cannot write `currentMap("abc")` since the string "abc" is taken as an explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced. - - 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain. - -None of these shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be. - -Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006 and picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones. - -Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design.
- -### The New Design - -The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes: - - 1. [Instance Definitions](./instance-defs.html) are a new way to define inferable terms. They replace implicit definitions. The core principle of the proposal is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types. - - 2. [Given Clauses](./inferable-params.html) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts. - - 3. [Instance Imports](./import-implied.html) are a new form of import that specifically imports implicit definitions and nothing else. New-style instance definitions _must be_ imported with `import instance`; a plain import will no longer bring them into scope. Old-style implicit definitions can be imported with either form. - - 4. [Implicit Conversions](./conversions.html) are now expressed as implicit instances of a standard `Conversion` class. All other forms of implicit conversions will be phased out. - -This section also contains pages describing other language features that are related to context abstraction. These are: - - - [Context Bounds](./context-bounds.html), which carry over unchanged. - - [Extension Methods](./extension-methods.html) replace implicit classes in a way that integrates better with typeclasses. - - [Implementing Typeclasses](./typeclasses.html) demonstrates how some common typeclasses can be implemented using the new constructs. - - [Typeclass Derivation](./derivation.html) introduces constructs to automatically derive typeclasses for ADTs. - - [Multiversal Equality](./multiversal-equality.html) introduces a special typeclass - to support type-safe equality. - - [Context Queries](./query-types.html) _aka_ implicit function types introduce a way to abstract over implicit parameterization. - - [Inferable By-Name Parameters](./inferable-by-name-parameters.html) are an essential tool to define recursive implicits without looping. - - [Relationship with Scala 2 Implicits](./relationship-implicits.html) discusses the relationship between old-style and new-style implicits and how to migrate from one to the other. - -Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define implicit instances instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit and normal arguments. There is a separate way to import implicit instances that does not allow hiding them in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax. - -This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly. - -Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I now believe that this is impossible.
- - First, some of the problems are clearly syntactic and require different syntax to solve them. - - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution. - - Third, even if we somehow succeeded with migration, we still have the problem of - how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away, and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have - to add modifications and qualifications to all existing literature and courseware, likely causing more confusion for beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated. - diff --git a/docs/docs/reference/contextual-instance/multiversal-equality.md b/docs/docs/reference/contextual-instance/multiversal-equality.md deleted file mode 100644 index 9e8526b9a1aa..000000000000 --- a/docs/docs/reference/contextual-instance/multiversal-equality.md +++ /dev/null @@ -1,217 +0,0 @@ ---- -layout: doc-page -title: "Multiversal Equality" ---- - -Previously, Scala had universal equality: Two values of any types -could be compared with each other with `==` and `!=`. This came from -the fact that `==` and `!=` are implemented in terms of Java's -`equals` method, which can also compare values of any two reference -types. - -Universal equality is convenient. But it is also dangerous since it -undermines type safety. For instance, let's assume one is left after some refactoring -with an erroneous program where a value `y` has type `S` instead of the correct type `T`. - -```scala -val x = ... // of type T -val y = ... // of type S, but should be T -x == y // typechecks, will always yield false -``` - -If all the program does with `y` is compare it to other values of type `T`, the program will still typecheck, since values of all types can be compared with each other. -But it will probably give unexpected results and fail at runtime. - -Multiversal equality is an opt-in way to make universal equality -safer. It uses a binary typeclass `Eql` to indicate that values of -two given types can be compared with each other. -The example above would not typecheck if `S` or `T` were a class -that derives `Eql`, e.g. -```scala -class T derives Eql -``` -Alternatively, one can also provide the derived instance directly, like this: -```scala -instance of Eql[T, T] = Eql.derived -``` -This definition effectively says that values of type `T` can (only) be -compared to other values of type `T` when using `==` or `!=`. The definition -affects type checking but it has no significance for runtime -behavior, since `==` always maps to `equals` and `!=` always maps to -the negation of `equals`. The right-hand side `Eql.derived` of the definition -is a value that has any `Eql` instance as its type.
Here is the definition of class -`Eql` and its companion object: -```scala -package scala -import annotation.implicitNotFound - -@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=") -sealed trait Eql[-L, -R] - -object Eql { - object derived extends Eql[Any, Any] -} -``` - -One can have several `Eql` instances for a type. For example, the four -definitions below make values of type `A` and type `B` comparable with -each other, but not comparable to anything else: - -```scala -instance of Eql[A, A] = Eql.derived -instance of Eql[B, B] = Eql.derived -instance of Eql[A, B] = Eql.derived -instance of Eql[B, A] = Eql.derived -``` -The `scala.Eql` object defines a number of `Eql` instances that together -define a rule book for what standard types can be compared (more details below). - -There's also a "fallback" instance named `eqlAny` that allows comparisons -over all types that do not themselves have an `Eql` instance. `eqlAny` is -defined as follows: - -```scala -def eqlAny[L, R]: Eql[L, R] = Eql.derived -``` - -Even though `eqlAny` is not declared as `instance`, the compiler will still -construct an `eqlAny` instance as the answer to an implicit search for the -type `Eql[L, R]`, unless `L` or `R` have `Eql` instances -defined on them, or the language feature `strictEquality` is enabled. - -The primary motivation for having `eqlAny` is backwards compatibility. -If this is of no concern, one can disable `eqlAny` by enabling the language -feature `strictEquality`. As with all language features, this can be done either -with an import - -```scala -import scala.language.strictEquality -``` -or with a command line option `-language:strictEquality`. - -## Deriving Eql Instances - -Instead of defining `Eql` instances directly, it is often more convenient to derive them. Example: -```scala -class Box[T](x: T) derives Eql -``` -By the usual rules of [typeclass derivation](./derivation.html), -this generates the following `Eql` instance in the companion object of `Box`: -```scala -instance [T, U] of Eql[Box[T], Box[U]] given Eql[T, U] = Eql.derived -``` -That is, two boxes are comparable with `==` or `!=` if their elements are. Examples: -```scala -new Box(1) == new Box(1L) // ok since there is an instance of `Eql[Int, Long]` -new Box(1) == new Box("a") // error: can't compare -new Box(1) == 1 // error: can't compare -``` - -## Precise Rules for Equality Checking - -The precise rules for equality checking are as follows. - -If the `strictEquality` feature is enabled, then -a comparison using `x == y` or `x != y` between values `x: T` and `y: U` -is legal if - - 1. there is an instance of `Eql[T, U]`, or - 2. one of `T`, `U` is `Null`. - -In the default case where the `strictEquality` feature is not enabled, the comparison is -also legal if - - 1. `T` and `U` are the same, or - 2. one of `T` and `U` is a subtype of the _lifted_ version of the other type, or - 3. neither `T` nor `U` have a _reflexive `Eql` instance_. - -Explanations: - -  - _lifting_ a type `S` means replacing all references to abstract types -    in covariant positions of `S` by their upper bound, and replacing -    all refinement types in covariant positions of `S` by their parent. -  - a type `T` has a _reflexive `Eql` instance_ if the implicit search for `Eql[T, T]` -    succeeds.
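-
-As an illustrative sketch of how these rules play out (the classes `P` and `Q` below are made up for this example, not taken from the library), consider:
-```scala
-class P derives Eql // generates an `Eql[P, P]` instance in object `P`
-class Q             // no `Eql` instance
-
-val p1, p2 = new P
-val q1, q2 = new Q
-
-p1 == p2   // ok in both modes: there is an instance of `Eql[P, P]`
-q1 == q2   // ok by default, since both sides have the same type; an error
-           // under `strictEquality`, since there is no `Eql[Q, Q]` instance
-p1 == q1   // error in both modes: the types differ and `P` has a reflexive
-           // `Eql` instance, so the `eqlAny` fallback does not apply
-p1 == null // ok: one of the compared types is `Null`
-```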
- -## Predefined Eql Instances - -The `Eql` object defines implicit instances for - - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`, - - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`, - - `scala.collection.Seq`, and `scala.collection.Set`. - -Instances are defined so that every one of these types has a reflexive `Eql` instance, and the following holds: - - - Primitive numeric types can be compared with each other. - - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_). - - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_). - - `Char` can be compared with `java.lang.Character` (and _vice versa_). - - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared - with each other if their element types can be compared. The two sequence types - need not be the same. - - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared - with each other if their element types can be compared. The two set types - need not be the same. - - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_). - -## Why Two Type Parameters? - -One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional -implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, -we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. - -Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: -```scala -class List[+T] { - ... - def contains(x: Any): Boolean -} -``` -That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition -```scala - def contains(x: T): Boolean -``` -does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: -```scala - def contains[U >: T](x: U): Boolean -``` -This generic version of `contains` is the one used in the current (Scala 2.12) version of `List`. -It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. -However, we can make it more useful (i.e. more restrictive) by adding an `Eql` parameter: -```scala - def contains[U >: T](x: U) given Eql[T, U]: Boolean // (1) -``` -This version of `contains` is equality-safe! More precisely, given -`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if -`x == y` is type-correct. - -Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter.
Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: -```scala - def contains[U >: T](x: U) given Eql1[U]: Boolean // (2) -``` -This version could be applied just as widely as the original `contains(x: Any)` method, -since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` have a reflexive `Eql` instance. That rule simply cannot be expressed if there is a single type parameter for `Eql`. - -The situation is different under `-language:strictEquality`. In that case, -the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the -single and two-parameter versions would indeed coincide for most practical purposes. - -But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be unsurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out non-sensical applications while still allowing all sensible ones. -So it can be done almost at any time, modulo binary compatibility concerns. -On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` -unusable for all types that have not yet declared an `Eql1` instance, including all -types coming from Java. This is clearly unacceptable. It would lead to a situation where, -rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. - -For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality. - -In applications where `-language:strictEquality` is the default one could also introduce a one-parameter type alias such as -```scala -type Eq[-T] = Eql[T, T] -``` -Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only -work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere. - - -More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html) -and a [Github issue](https://github.com/lampepfl/dotty/issues/1247). diff --git a/docs/docs/reference/contextual-instance/query-types-spec.md b/docs/docs/reference/contextual-instance/query-types-spec.md deleted file mode 100644 index 67c627ce79f4..000000000000 --- a/docs/docs/reference/contextual-instance/query-types-spec.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: doc-page -title: "Context Query Types - More Details" ---- - -## Syntax - - Type ::= ... - | `given' FunArgTypes `=>' Type - Expr ::= ... - | `given' FunParams `=>' Expr - -Context query types associate to the right, e.g. -`given S => given T => U` is the same as `given S => (given T => U)`. - -## Implementation - -Context query types are shorthands for class types that define `apply` -methods with inferable parameters. Specifically, the `N`-ary function type -`T1, ..., TN => R` is a shorthand for the class type -`ImplicitFunctionN[T1 , ... 
, TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`: -```scala -package scala -trait ImplicitFunctionN[-T1 , ... , -TN, +R] { - def apply given (x1: T1 , ... , xN: TN): R -} -``` -Context query types erase to normal function types, so these classes are -generated on the fly for typechecking, but not realized in actual code. - -Context query literals `given (x1: T1, ..., xn: Tn) => e` map -inferable parameters `xi` of types `Ti` to a result given by expression `e`. -The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names. - -If the expected type of the query literal is of the form -`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and -the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti -= Si` is assumed. If the expected type of the query literal is -some other type, all inferable parameter types must be explicitly given, and the expected type of `e` is undefined. The type of the query literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened -type of `e`. `T` must be equivalent to a type which does not refer to any of -the inferable parameters `xi`. - -The query literal is evaluated as the instance creation -expression: -```scala -new scala.ImplicitFunctionN[T1, ..., Tn, T] { - def apply given (x1: T1, ..., xn: Tn): T = e -} -``` -In the case of a single untyped parameter, `given (x) => e` can be -abbreviated to `given x => e`. - -An inferable parameter may also be a wildcard represented by an underscore `_`. In -that case, a fresh name for the parameter is chosen arbitrarily. - -Note: The closing paragraph of the -[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions) -of Scala 2.12 is subsumed by query types and should be removed. - -Query literals `given (x1: T1, ..., xn: Tn) => e` are -automatically created for any expression `e` whose expected type is -`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is -itself a query literal. This is analogous to the automatic -insertion of `scala.Function0` around expressions in by-name argument position. - -Context query types generalize to `N > 22` in the same way that function types do, see [the corresponding -documentation](https://dotty.epfl.ch/docs/reference/dropped-features/limit22.html). - -## Examples - -See the section on Expressiveness from [Simplicitly: foundations and -applications of implicit function -types](https://dl.acm.org/citation.cfm?id=3158130). I've extracted it into [this -Gist](https://gist.github.com/OlivierBlanvillain/234d3927fe9e9c6fba074b53a7bd9592); it might be easier to access than the PDF. - -### Type Checking - -After desugaring, no additional typing rules are required for context query types. diff --git a/docs/docs/reference/contextual-instance/query-types.md b/docs/docs/reference/contextual-instance/query-types.md deleted file mode 100644 index 27175cb7b11e..000000000000 --- a/docs/docs/reference/contextual-instance/query-types.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -layout: doc-page -title: "Context Queries" ---- - -_Context queries_ are functions with (only) inferable parameters. -_Context query types_ are the types of first-class context queries. -Here is an example of a context query type: -```scala -type Contextual[T] = given Context => T -``` -A value of context query type is applied to inferred arguments, in -the same way a method with inferable parameters is applied.
For instance: -```scala - instance ctx of Context = ... - - def f(x: Int): Contextual[Int] = ... - - f(2) given ctx // explicit argument - f(2) // argument is inferred -``` -Conversely, if the expected type of an expression `E` is a context query -type `given (T_1, ..., T_n) => U` and `E` is not already a -context query literal, `E` is converted to a context query literal by rewriting to -```scala - given (x_1: T1, ..., x_n: Tn) => E -``` -where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed -before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` -are available as implicits in `E`. - -Like query types, query literals are written with a `given` prefix. They differ from normal function literals in two ways: - - 1. Their parameters are inferable. - 2. Their types are context query types. - -For example, continuing with the previous definitions, -```scala - def g(arg: Contextual[Int]) = ... - - g(22) // is expanded to g(given ctx => 22) - - g(f(2)) // is expanded to g(given ctx => f(2) given ctx) - - g(given ctx => f(22) given ctx) // is left as it is -``` -### Example: Builder Pattern - -Context query types have considerable expressive power. For -instance, here is how they can support the "builder pattern", where -the aim is to construct tables like this: -```scala - table { - row { - cell("top left") - cell("top right") - } - row { - cell("bottom left") - cell("bottom right") - } - } -``` -The idea is to define classes for `Table` and `Row` that allow -addition of elements via `add`: -```scala - class Table { - val rows = new ArrayBuffer[Row] - def add(r: Row): Unit = rows += r - override def toString = rows.mkString("Table(", ", ", ")") - } - - class Row { - val cells = new ArrayBuffer[Cell] - def add(c: Cell): Unit = cells += c - override def toString = cells.mkString("Row(", ", ", ")") - } - - case class Cell(elem: String) -``` -Then, the `table`, `row` and `cell` constructor methods can be defined -in terms of query types to avoid the plumbing boilerplate -that would otherwise be necessary. -```scala - def table(init: given Table => Unit) = { - instance t of Table - init - t - } - - def row(init: given Row => Unit) given (t: Table) = { - instance r of Row - init - t.add(r) - } - - def cell(str: String) given (r: Row) = - r.add(new Cell(str)) -``` -With that setup, the table construction code above compiles and expands to: -```scala - table { given $t: Table => - row { given $r: Row => - cell("top left") given $r - cell("top right") given $r - } given $t - row { given $r: Row => - cell("bottom left") given $r - cell("bottom right") given $r - } given $t - } -``` -### Example: Postconditions - -As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, context query types, and extension methods to provide a zero-overhead abstraction.
- -```scala -object PostConditions { - opaque type WrappedResult[T] = T - - private object WrappedResult { - def wrap[T](x: T): WrappedResult[T] = x - def unwrap[T](x: WrappedResult[T]): T = x - } - - def result[T] given (r: WrappedResult[T]): T = WrappedResult.unwrap(r) - - def (x: T) ensuring [T](condition: given WrappedResult[T] => Boolean): T = { - instance of WrappedResult[T] = WrappedResult.wrap(x) - assert(condition) - x - } -} - -object Test { - import PostConditions.{ensuring, result} - val s = List(1, 2, 3).sum.ensuring(result == 6) -} -``` -**Explanations**: We use a context query type `given WrappedResult[T] => Boolean` -as the type of the condition of `ensuring`. An argument to `ensuring` such as -`(result == 6)` will therefore have an implicit instance of type `WrappedResult[T]` in -scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure -that we do not get unwanted implicit instances in scope (this is good practice in all cases -where given clauses are involved). Since `WrappedResult` is an opaque type alias, its -values need not be boxed, and since `ensuring` is added as an extension method, its argument -does not need boxing either. Hence, the implementation of `ensuring` is about as efficient -as the best possible code one could write by hand: - - { val result = List(1, 2, 3).sum - assert(result == 6) - result - } - -### Reference - -For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html) -(which uses a different syntax that has been superseded). - -[More details](./query-types-spec.html) diff --git a/docs/docs/reference/contextual-instance/relationship-implicits.md b/docs/docs/reference/contextual-instance/relationship-implicits.md deleted file mode 100644 index 945647672a67..000000000000 --- a/docs/docs/reference/contextual-instance/relationship-implicits.md +++ /dev/null @@ -1,169 +0,0 @@ ---- -layout: doc-page -title: Relationship with Scala 2 Implicits ---- - -Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features. - -## Simulating Contextual Abstraction with Implicits - -### Instance Definitions - -Instance definitions can be mapped to combinations of implicit objects, classes and implicit methods. - - 1. Instance definitions without parameters are mapped to implicit objects. E.g., - ```scala - instance IntOrd of Ord[Int] { ... } - ``` - maps to - ```scala - implicit object IntOrd extends Ord[Int] { ... } - ``` - 2. Parameterized instance definitions are mapped to combinations of classes and implicit methods. E.g., - ```scala - instance ListOrd[T] of Ord[List[T]] given (ord: Ord[T]) { ... } - ``` - maps to - ```scala - class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... } - final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T] - ``` - 3. Instance aliases map to implicit methods. E.g., - ```scala - instance ctx of ExecutionContext = ... - ``` - maps to - ```scala - final implicit def ctx: ExecutionContext = ... - ``` - -### Anonymous Instance Definitions - -Anonymous instance values get compiler-synthesized names, which are generated in a reproducible way from the implemented type(s). For -example, if the names of the `IntOrd` and `ListOrd` instances above were left out, the following names would be synthesized instead: -```scala - instance Ord_Int_ev of Ord[Int] { ...
} - instance Ord_List_ev[T] of Ord[List[T]] { ... } -``` -The synthesized type names are formed from - - - the simple name(s) of the implemented type(s), leaving out any prefixes, - - the simple name(s) of the toplevel argument type constructors to these types - - the suffix `_ev`. - -Anonymous implicit instances that define extension methods without also implementing a type -get their name from the name of the first extension method and the toplevel type -constructor of its first parameter. For example, the instance -```scala - instance { - def (xs: List[T]) second[T] = ... - } -``` -gets the synthesized name `second_of_List_T_ev`. - -### Inferable Parameters - -The new inferable parameter syntax with `given` corresponds largely to Scala-2's implicit parameters. E.g. -```scala - def max[T](x: T, y: T) given (ord: Ord[T]): T -``` -would be written -```scala - def max[T](x: T, y: T)(implicit ord: Ord[T]): T -``` -in Scala 2. The main difference concerns applications of such parameters. -Explicit arguments to inferable parameters _must_ be written using `given`, -mirroring the definition syntax. E.g, `max(2, 3) given IntOrd`. -Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern. - -The `the` method corresponds to `implicitly` in Scala 2. -It is precisely the same as the `the` method in Shapeless. -The difference between `the` (in both versions) and `implicitly` is -that `the` can return a more precise type than the type that was -asked for. - -### Context Bounds - -Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters. - -**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or -with a normal application. Once old-style implicits are deprecated, context bounds -will map to inferable parameters instead. - -### Extension Methods - -Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method -```scala - def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` -could be simulated to some degree by -```scala - implicit class CircleDeco(c: Circle) extends AnyVal { - def circumference: Double = c.radius * math.Pi * 2 - } -``` -Extension methods in instance definitions have no direct counterpart in Scala-2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases. - -### Typeclass Derivation - -Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving. - -### Context Query types - -Context Query types have no analogue in Scala 2. - -### Implicit By-Name Parameters - -Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless. - -## Simulating Scala 2 Implicits in Dotty - -### Implicit Conversions - -Implicit conversion methods in Scala 2 can be expressed as implicit instances of class -`scala.Conversion` in Dotty. E.g. 
instead of -```scala - implicit def stringToToken(str: String): Token = new KeyWord(str) -``` -one can write -```scala - instance stringToToken of Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) - } -``` - -### Implicit Classes - -Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and a `Conversion` instance definition. - - -### Implicit Values - -Implicit `val` definitions in Scala 2 can be expressed in Dotty using a regular `val` definition and an alias instance. E.g., Scala 2's -```scala - lazy implicit val pos: Position = tree.sourcePos -``` -can be expressed in Dotty as -```scala - lazy val pos: Position = tree.sourcePos - instance of Position = pos -``` - -### Abstract Implicits - -An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias instance. E.g., Scala 2's -```scala - implicit def symDeco: SymDeco -``` -can be expressed in Dotty as -```scala - def symDeco: SymDeco - instance of SymDeco = symDeco -``` - -## Implementation Status and Timeline - -The Dotty implementation supports both Scala-2's implicits and the new abstractions. In fact, support for Scala-2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty. -Migration to the new abstractions will be supported by making automatic rewritings available. - -Depending on adoption patterns, old-style implicits might start to be deprecated in a version following Scala 3.0. diff --git a/docs/docs/reference/contextual-instance/typeclasses.md b/docs/docs/reference/contextual-instance/typeclasses.md deleted file mode 100644 index f14fc0178504..000000000000 --- a/docs/docs/reference/contextual-instance/typeclasses.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -layout: doc-page -title: "Implementing Typeclasses" ---- - -Instance definitions, extension methods and context bounds -allow a concise and natural expression of _typeclasses_. Typeclasses are just traits -with canonical implementations defined by instance definitions.
Here are some examples of standard typeclasses: - -### Semigroups and monoids: - -```scala -trait SemiGroup[T] { - def (x: T) combine (y: T): T -} -trait Monoid[T] extends SemiGroup[T] { - def unit: T -} -object Monoid { - def apply[T] given Monoid[T] = the[Monoid[T]] -} - -instance of Monoid[String] { - def (x: String) combine (y: String): String = x.concat(y) - def unit: String = "" -} - -instance of Monoid[Int] { - def (x: Int) combine (y: Int): Int = x + y - def unit: Int = 0 -} - -def sum[T: Monoid](xs: List[T]): T = - xs.foldLeft(Monoid[T].unit)(_.combine(_)) -``` - -### Functors and monads: - -```scala -trait Functor[F[_]] { - def (x: F[A]) map [A, B] (f: A => B): F[B] -} - -trait Monad[F[_]] extends Functor[F] { - def (x: F[A]) flatMap [A, B] (f: A => F[B]): F[B] - def (x: F[A]) map [A, B] (f: A => B) = x.flatMap(f `andThen` pure) - - def pure[A](x: A): F[A] -} - -instance ListMonad of Monad[List] { - def (xs: List[A]) flatMap [A, B] (f: A => List[B]): List[B] = - xs.flatMap(f) - def pure[A](x: A): List[A] = - List(x) -} - -instance ReaderMonad[Ctx] of Monad[[X] => Ctx => X] { - def (r: Ctx => A) flatMap [A, B] (f: A => Ctx => B): Ctx => B = - ctx => f(r(ctx))(ctx) - def pure[A](x: A): Ctx => A = - ctx => x -} -``` diff --git a/docs/docs/reference/contextual-repr/context-bounds.md b/docs/docs/reference/contextual-repr/context-bounds.md deleted file mode 100644 index ed54a4ba1411..000000000000 --- a/docs/docs/reference/contextual-repr/context-bounds.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: doc-page -title: "Context Bounds" ---- - -## Context Bounds - -A context bound is a shorthand for expressing a common pattern of an implicit parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this: -```scala -def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) -``` -A bound like `: Ord` on a type parameter `T` of a method or class is equivalent to a given clause `given Ord[T]`. The implicit parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., -```scala -def f[T: C1 : C2, U: C3](x: T) given (y: U, z: V): R -``` -would expand to -```scala -def f[T, U](x: T) given (y: U, z: V) given C1[T], C2[T], C3[U]: R -``` -Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g. -```scala -def g[T <: B : C](x: T): R = ... -``` - -## Syntax - -``` -TypeParamBounds ::= [SubtypeBounds] {ContextBound} -ContextBound ::= ‘:’ Type -``` diff --git a/docs/docs/reference/contextual-repr/conversions.md b/docs/docs/reference/contextual-repr/conversions.md deleted file mode 100644 index 48e0b64dde44..000000000000 --- a/docs/docs/reference/contextual-repr/conversions.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: doc-page -title: "Implicit Conversions" ---- - -Implicit conversions are defined by representatives of the `scala.Conversion` class. -This class is defined in package `scala` as follows: -```scala -abstract class Conversion[-T, +U] extends (T => U) -``` -For example, here is an implicit conversion from `String` to `Token`: -```scala -repr of Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) -} -``` -Using an alias representative this can be expressed more concisely as: -```scala -repr of Conversion[String, Token] = new KeyWord(_) -``` -An implicit conversion is applied automatically by the compiler in three situations: - -1. 
If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`. -2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`. -3. In an application `e.m(args)` with `e` of type `T`, if `T` does define - some member(s) named `m`, but none of these members can be applied to the arguments `args`. - -In the first case, the compiler looks for a representative of -`scala.Conversion` that maps an argument of type `T` to type `S`. In the second and third -case, it looks for a representative of `scala.Conversion` that maps an argument of type `T` -to a type that defines a member `m` which can be applied to `args` if present. -If such a representative `C` is found, the expression `e` is replaced by `C.apply(e)`. - -## Examples - -1. The `Predef` package contains "auto-boxing" conversions that map -primitive number types to subclasses of `java.lang.Number`. For instance, the -conversion from `Int` to `java.lang.Integer` can be defined as follows: -```scala -repr int2Integer of Conversion[Int, java.lang.Integer] = - java.lang.Integer.valueOf(_) -``` - -2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g. -```scala -object Completions { - - // The argument "magnet" type - enum CompletionArg { - case Error(s: String) - case Response(f: Future[HttpResponse]) - case Status(code: Future[StatusCode]) - } - object CompletionArg { - - // conversions defining the possible arguments to pass to `complete` - // these always come with CompletionArg - // They can be invoked explicitly, e.g. - // - // CompletionArg.fromStatusCode(statusCode) - - repr fromString of Conversion[String, CompletionArg] = Error(_) - repr fromFuture of Conversion[Future[HttpResponse], CompletionArg] = Response(_) - repr fromStatusCode of Conversion[Future[StatusCode], CompletionArg] = Status(_) - } - import CompletionArg._ - - def complete[T](arg: CompletionArg) = arg match { - case Error(s) => ... - case Response(f) => ... - case Status(code) => ... - } -} -``` -This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants. diff --git a/docs/docs/reference/contextual-repr/derivation.md b/docs/docs/reference/contextual-repr/derivation.md deleted file mode 100644 index f4167863a9a2..000000000000 --- a/docs/docs/reference/contextual-repr/derivation.md +++ /dev/null @@ -1,382 +0,0 @@ ---- -layout: doc-page -title: Typeclass Derivation ---- - -Typeclass derivation is a way to generate representatives of certain type classes automatically or with minimal code hints. A type class in this sense is any trait or class with a type parameter that describes the type being operated on. Commonly used examples are `Eql`, `Ordering`, `Show`, or `Pickling`. 
Example: -```scala -enum Tree[T] derives Eql, Ordering, Pickling { - case Branch(left: Tree[T], right: Tree[T]) - case Leaf(elem: T) -} -``` -The `derives` clause generates representatives of the `Eql`, `Ordering`, and `Pickling` traits in the companion object `Tree`: -```scala -repr [T: Eql] of Eql[Tree[T]] = Eql.derived -repr [T: Ordering] of Ordering[Tree[T]] = Ordering.derived -repr [T: Pickling] of Pickling[Tree[T]] = Pickling.derived -``` - -### Deriving Types - -Besides enums, typeclasses can also be derived for other sets of classes and objects that form an algebraic data type. These are: - - - individual case classes or case objects - - sealed classes or traits that have only case classes and case objects as children. - - Examples: - - ```scala -case class Labelled[T](x: T, label: String) derives Eql, Show - -sealed trait Option[T] derives Eql -case class Some[T](x: T) extends Option[T] -case object None extends Option[Nothing] -``` - -The generated typeclass representatives are placed in the companion objects `Labelled` and `Option`, respectively. - -### Derivable Types - -A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The type and implementation of a `derived` method are arbitrary, but typically it has a definition like this: -```scala - def derived[T] given Generic[T] = ... -``` -That is, the `derived` method takes an implicit parameter of type `Generic` that determines the _shape_ of the deriving type `T` and it computes the typeclass implementation according to that shape. A `Generic` representative is generated automatically for any type that derives a typeclass with a `derived` method that refers to `Generic`. One can also derive `Generic` alone, which means a `Generic` representative is generated without any other type class representatives. E.g.: -```scala -sealed trait ParseResult[T] derives Generic -``` -This is all a user of typeclass derivation has to know. The rest of this page contains information needed to write a typeclass that can appear in a `derives` clause. In particular, it details the means provided for the implementation of data-generic `derived` methods. - -### The Shape Type - -For every class with a `derives` clause, the compiler computes the shape of that class as a type. For example, here is the shape type for the `Tree[T]` enum: -```scala -Cases[( - Case[Branch[T], (Tree[T], Tree[T])], - Case[Leaf[T], T *: Unit] -)] -``` -Informally, this states that - -> The shape of a `Tree[T]` is one of two cases: Either a `Branch[T]` with two - elements of type `Tree[T]`, or a `Leaf[T]` with a single element of type `T`. - -The type constructors `Cases` and `Case` come from the companion object of a class -`scala.compiletime.Shape`, which is defined in the standard library as follows: -```scala -sealed abstract class Shape - -object Shape { - - /** A sum with alternative types `Alts` */ - case class Cases[Alts <: Tuple] extends Shape - - /** A product type `T` with element types `Elems` */ - case class Case[T, Elems <: Tuple] extends Shape -} -``` - -Here is the shape type for `Labelled[T]`: -```scala -Case[Labelled[T], (T, String)] -``` -And here is the one for `Option[T]`: -```scala -Cases[( - Case[Some[T], T *: Unit], - Case[None.type, Unit] -)] -``` -Note that an empty element tuple is represented as type `Unit`. A single-element tuple -is represented as `T *: Unit` since there is no direct syntax for such tuples: `(T)` is just `T` in parentheses, not a tuple.
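-
-To relate this element-tuple encoding to ordinary tuple syntax, here is a small sketch (the type aliases are illustrative only, not part of the library; `*:` is the standard tuple cons type constructor):
-```scala
-type TwoElems[T] = T *: String *: Unit // the same as (T, String), as in the shape of Labelled[T]
-type OneElem[T]  = T *: Unit           // a single-element tuple; (T) would be just T
-type NoElems     = Unit                // the empty element tuple
-```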
- -### The Generic Typeclass - -For every class `C[T_1,...,T_n]` with a `derives` clause, the compiler generates in the companion object of `C` a representative of `Generic[C[T_1,...,T_n]]` that follows the outline below: -```scala -repr [T_1, ..., T_n] of Generic[C[T_1,...,T_n]] { - type Shape = ... - ... -} -``` -where the right hand side of `Shape` is the shape type of `C[T_1,...,T_n]`. -For instance, the definition -```scala -enum Result[+T, +E] derives Logging { - case Ok(result: T) - case Err(err: E) -} -``` -would produce: -```scala -object Result { - import scala.compiletime.Shape._ - - repr [T, E] of Generic[Result[T, E]] { - type Shape = Cases[( - Case[Ok[T], T *: Unit], - Case[Err[E], E *: Unit] - )] - ... - } -} -``` -The `Generic` class is defined in package `scala.reflect`. - -```scala -abstract class Generic[T] { - type Shape <: scala.compiletime.Shape - - /** The mirror corresponding to ADT instance `x` */ - def reflect(x: T): Mirror - - /** The ADT instance corresponding to given `mirror` */ - def reify(mirror: Mirror): T - - /** The companion object of the ADT */ - def common: GenericClass -} -``` -It defines the `Shape` type for the ADT `T`, as well as two methods that map between a -type `T` and a generic representation of `T`, which we call a `Mirror`: -The `reflect` method maps an instance of the ADT `T` to its mirror whereas -the `reify` method goes the other way. There's also a `common` method that returns -a value of type `GenericClass` which contains information that is the same for all -instances of a class (right now, this consists of the runtime `Class` value and -the names of the cases and their parameters). - -### Mirrors - -A mirror is a generic representation of an instance of an ADT. `Mirror` objects have three components: - - - `adtClass: GenericClass`: The representation of the ADT class - - `ordinal: Int`: The ordinal number of the case among all cases of the ADT, starting from 0 - - `elems: Product`: The elements of the instance, represented as a `Product`. - - The `Mirror` class is defined in package `scala.reflect` as follows: - -```scala -class Mirror(val adtClass: GenericClass, val ordinal: Int, val elems: Product) { - - /** The `n`'th element of this generic case */ - def apply(n: Int): Any = elems.productElement(n) - - /** The name of the constructor of the case reflected by this mirror */ - def caseLabel: String = adtClass.label(ordinal)(0) - - /** The label of the `n`'th element of the case reflected by this mirror */ - def elementLabel(n: Int): String = adtClass.label(ordinal)(n + 1) -} -``` - -### GenericClass - -Here's the API of `scala.reflect.GenericClass`: - -```scala -class GenericClass(val runtimeClass: Class[_], labelsStr: String) { - - /** A mirror of a case with ordinal number `ordinal` and elements as given by `Product` */ - def mirror(ordinal: Int, product: Product): Mirror = - new Mirror(this, ordinal, product) - - /** A mirror with elements given as an array */ - def mirror(ordinal: Int, elems: Array[AnyRef]): Mirror = - mirror(ordinal, new ArrayProduct(elems)) - - /** A mirror with an initial empty array of `numElems` elements, to be filled in. */ - def mirror(ordinal: Int, numElems: Int): Mirror = - mirror(ordinal, new Array[AnyRef](numElems)) - - /** A mirror of a case with no elements */ - def mirror(ordinal: Int): Mirror = - mirror(ordinal, EmptyProduct) - - /** Case and element labels as a two-dimensional array. 
- * Each row of the array contains a case label, followed by the labels of the elements of that case. - */ - val label: Array[Array[String]] = ... -} -``` - -The class provides four overloaded methods to create mirrors. The first of these is invoked by the `reflect` method that maps an ADT instance to its mirror. It simply passes the -instance itself (which is a `Product`) to the second parameter of the mirror. That operation does not involve any copying and is thus quite efficient. The second and third versions of `mirror` are typically invoked by typeclass methods that create instances from mirrors. An example would be an `unpickle` method that first creates an array of elements, then creates -a mirror over that array, and finally uses the `reify` method in `Generic` to create the ADT instance. The fourth version of `mirror` is used to create mirrors of instances that do not have any elements. - -### How to Write Generic Typeclasses - -Based on the machinery developed so far it becomes possible to define type classes generically. This means that the `derived` method will compute a type class representative for any ADT that has a `Generic` representative, recursively. -The implementation of these methods typically uses three new type-level constructs in Dotty: inline methods, inline matches, and implicit matches. As an example, here is one possible implementation of a generic `Eql` type class, with explanations. Let's assume `Eql` is defined by the following trait: -```scala -trait Eql[T] { - def eql(x: T, y: T): Boolean -} -``` -We need to implement a method `Eql.derived` that produces a representative of `Eql[T]` provided -there exists a representative of type `Generic[T]`. Here's a possible solution: -```scala - inline def derived[T] given (ev: Generic[T]): Eql[T] = new Eql[T] { - def eql(x: T, y: T): Boolean = { - val mx = ev.reflect(x) // (1) - val my = ev.reflect(y) // (2) - inline erasedValue[ev.Shape] match { - case _: Cases[alts] => - mx.ordinal == my.ordinal && // (3) - eqlCases[alts](mx, my, 0) // [4] - case _: Case[_, elems] => - eqlElems[elems](mx, my, 0) // [5] - } - } - } -``` -The implementation of the inline method `derived` creates a representative of `Eql[T]` and implements its `eql` method. The right-hand side of `eql` mixes compile-time and runtime elements. In the code above, runtime elements are marked with a number in parentheses, i.e. -`(1)`, `(2)`, `(3)`. Compile-time calls that expand to runtime code are marked with a number in brackets, i.e. `[4]`, `[5]`. The implementation of `eql` consists of the following steps. - - 1. Map the compared values `x` and `y` to their mirrors using the `reflect` method of the implicitly passed `Generic` `(1)`, `(2)`. - 2. Match at compile-time against the shape of the ADT given in `ev.Shape`. Dotty does not have a construct for matching types directly, but we can emulate it using an `inline` match over an `erasedValue`. Depending on the actual type `ev.Shape`, the match will reduce at compile time to one of its two alternatives. - 3. If `ev.Shape` is of the form `Cases[alts]` for some tuple `alts` of alternative types, the equality test consists of comparing the ordinal values of the two mirrors `(3)` and, if they are equal, comparing the elements of the case indicated by that ordinal value. That second step is performed by code that results from the compile-time expansion of the `eqlCases` call `[4]`. - 4. 
If `ev.Shape` is of the form `Case[_, elems]` for some tuple `elems` of element types, the elements of the case are compared by code that results from the compile-time expansion of the `eqlElems` call `[5]`. - -Here is a possible implementation of `eqlCases`: -```scala - inline def eqlCases[Alts <: Tuple](mx: Mirror, my: Mirror, n: Int): Boolean = - inline erasedValue[Alts] match { - case _: (Shape.Case[_, elems] *: alts1) => - if (mx.ordinal == n) // (6) - eqlElems[elems](mx, my, 0) // [7] - else - eqlCases[alts1](mx, my, n + 1) // [8] - case _: Unit => - throw new MatchError(mx.ordinal) // (9) - } -``` -The inline method `eqlCases` takes as type arguments the alternatives of the ADT that remain to be tested. It takes as value arguments mirrors of the two instances `x` and `y` to be compared and an integer `n` that indicates the ordinal number of the case that is tested next. It produces an expression that compares these two values. - -If the list of alternatives `Alts` consists of a case of type `Case[_, elems]`, possibly followed by further cases in `alts1`, we generate the following code: - - 1. Compare the `ordinal` value of `mx` (a runtime value) with the case number `n` (a compile-time value translated to a constant in the generated code) in an if-then-else `(6)`. - 2. In the then-branch of the conditional we have that the `ordinal` value of both mirrors - matches the number of the case with elements `elems`. Proceed by comparing the elements - of the case in code expanded from the `eqlElems` call `[7]`. - 3. In the else-branch of the conditional we have that the present case does not match - the ordinal value of both mirrors. Proceed by trying the remaining cases in `alts1` using - code expanded from the `eqlCases` call `[8]`. - - If the list of alternatives `Alts` is the empty tuple, there are no further cases to check. - This place in the code should not be reachable at runtime. Therefore an appropriate - implementation is by throwing a `MatchError` or some other runtime exception `(9)`. - -The `eqlElems` method compares the elements of two mirrors that are known to have the same -ordinal number, which means they represent the same case of the ADT. Here is a possible -implementation: -```scala - inline def eqlElems[Elems <: Tuple](xs: Mirror, ys: Mirror, n: Int): Boolean = - inline erasedValue[Elems] match { - case _: (elem *: elems1) => - tryEql[elem]( // [12] - xs(n).asInstanceOf[elem], // (10) - ys(n).asInstanceOf[elem]) && // (11) - eqlElems[elems1](xs, ys, n + 1) // [13] - case _: Unit => - true // (14) - } -``` -`eqlElems` takes as arguments the two mirrors of the elements to compare and a compile-time index `n`, indicating the index of the next element to test. It is defined in terms of another compile-time match, this time over the tuple type `Elems` of all element types that remain to be tested. If that type is -non-empty, say of form `elem *: elems1`, the following code is produced: - - 1. Access the `n`'th elements of both mirrors and cast them to the current element type `elem` - `(10)`, `(11)`. Note that because of the way runtime reflection mirrors compile-time `Shape` types, the casts are guaranteed to succeed. - 2. Compare the element values using code expanded by the `tryEql` call `[12]`. - 3. "And" the result with code that compares the remaining elements using a recursive call - to `eqlElems` `[13]`. - - If type `Elems` is empty, there are no more elements to be compared, so the comparison's result is `true`
`(14)`. - - Since `eqlElems` is an inline method, its recursive calls are unrolled. The end result is a conjunction `test_1 && ... && test_n && true` of test expressions produced by the `tryEql` calls. - -The last, and in a sense most interesting part of the derivation is the comparison of a pair of element values in `tryEql`. Here is the definition of this method: -```scala - inline def tryEql[T](x: T, y: T) = implicit match { - case ev: Eql[T] => - ev.eql(x, y) // (15) - case _ => - error("No `Eql` instance was found for $T") - } -``` -`tryEql` is an inline method that takes an element type `T` and two element values of that type as arguments. It is defined using an `implicit match` that tries to find a representative of `Eql[T]`. If a representative `ev` is found, it proceeds by comparing the arguments using `ev.eql`. On the other hand, if no representative is found, -this signals a compilation error: the user tried a generic derivation of `Eql` for a class with an element type that does not have an `Eql` representative itself. The error is signaled by -calling the `error` method defined in `scala.compiletime`. - -**Note:** At the moment, our error diagnostics for metaprogramming do not yet support interpolated string arguments for the `scala.compiletime.error` method that is called in the second case above. As an alternative, one can simply leave off the second case; a missing typeclass would then result in a "failure to reduce match" error. - -**Example:** Here is a slightly polished and compacted version of the code that's generated by inline expansion for the derived `Eql` representative of class `Tree`. - -```scala -repr [T] of Eql[Tree[T]] given (elemEq: Eql[T]) { - def eql(x: Tree[T], y: Tree[T]): Boolean = { - val ev = the[Generic[Tree[T]]] - val mx = ev.reflect(x) - val my = ev.reflect(y) - mx.ordinal == my.ordinal && { - if (mx.ordinal == 0) { - this.eql(mx(0).asInstanceOf[Tree[T]], my(0).asInstanceOf[Tree[T]]) && - this.eql(mx(1).asInstanceOf[Tree[T]], my(1).asInstanceOf[Tree[T]]) - } - else if (mx.ordinal == 1) { - elemEq.eql(mx(0).asInstanceOf[T], my(0).asInstanceOf[T]) - } - else throw new MatchError(mx.ordinal) - } - } -} -``` - -One important difference between this approach and Scala-2 typeclass derivation frameworks such as Shapeless or Magnolia is that no automatic attempt is made to generate typeclass representatives of elements recursively using the generic derivation framework. There must be a representative of `Eql[T]` (which can of course be produced in turn using `Eql.derived`), or the compilation will fail. The advantage of this more restrictive approach to typeclass derivation is that it avoids uncontrolled transitive typeclass derivation by design. This keeps code sizes smaller, compile times lower, and is generally more predictable. - -### Deriving Representatives Elsewhere - -Sometimes one would like to derive a typeclass representative for an ADT after the ADT is defined, without being able to change the code of the ADT itself. -To do this, simply define a representative with the `derived` method of the typeclass as right-hand side. E.g., to implement `Ordering` for `Option`, define: -```scala -repr [T: Ordering] of Ordering[Option[T]] = Ordering.derived -``` -Usually, the `Ordering.derived` method has an implicit parameter of type -`Generic[Option[T]]`. Since the `Option` trait has a `derives` clause, -the necessary representative is already present in the companion object of `Option`. 
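For instance, with this representative in scope, summoning and using the derived ordering works as one would expect. A sketch, assuming a representative of `Ordering[Int]` is available and that `Ordering` exposes a `compare` method:
```scala
val optOrd = the[Ordering[Option[Int]]]  // resolved via Ordering.derived
optOrd.compare(Some(1), None)            // compares through the derived representative
```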
-If the ADT in question does not have a `derives` clause, a `Generic` representative -would still be synthesized by the compiler at the point where `derived` is called. -This is similar to the situation with type tags or class tags: If no representative -is found, the compiler will synthesize one. - -### Syntax - -``` -Template ::= InheritClauses [TemplateBody] -EnumDef ::= id ClassConstr InheritClauses EnumBody -InheritClauses ::= [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}] -ConstrApps ::= ConstrApp {‘with’ ConstrApp} - | ConstrApp {‘,’ ConstrApp} -``` - -### Discussion - -The typeclass derivation framework is quite small and low-level. There are essentially -two pieces of infrastructure in the compiler-generated `Generic` representatives: - - - a type representing the shape of an ADT, - - a way to map between ADT instances and generic mirrors. - -Generic mirrors make use of the already existing `Product` infrastructure for case -classes, which means they are efficient and their generation does not require much code. - -Generic mirrors can be so simple because, just like `Product`s, they are weakly -typed. On the other hand, this means that code for generic typeclasses has to -ensure that type exploration and value selection proceed in lockstep and it -has to assert this conformance in some places using casts. If generic typeclasses -are correctly written these casts will never fail. - -It could make sense to explore a higher-level framework that encapsulates all casts -in the framework. This could give more guidance to the typeclass implementer. -It also seems quite possible to put such a framework on top of the lower-level -mechanisms presented here. diff --git a/docs/docs/reference/contextual-repr/extension-methods.md b/docs/docs/reference/contextual-repr/extension-methods.md deleted file mode 100644 index fdb20d9e24c9..000000000000 --- a/docs/docs/reference/contextual-repr/extension-methods.md +++ /dev/null @@ -1,150 +0,0 @@ ---- -layout: doc-page -title: "Extension Methods" ---- - -Extension methods allow one to add methods to a type after the type is defined. Example: - -```scala -case class Circle(x: Double, y: Double, radius: Double) - -def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` - -Like regular methods, extension methods can be invoked with infix `.`: - -```scala - val circle = Circle(0, 0, 1) - circle.circumference -``` - -### Translation of Extension Methods - -Extension methods are methods that have a parameter clause in front of the defined -identifier. They translate to methods where the leading parameter section is moved -to after the defined identifier. So, the definition of `circumference` above translates -to the following plain method, and can also be invoked as such: -```scala -def circumference(c: Circle): Double = c.radius * math.Pi * 2 - -assert(circle.circumference == circumference(circle)) -``` - -### Translation of Calls to Extension Methods - -When is an extension method applicable? There are two possibilities. - - - An extension method is applicable if it is visible under a simple name, by being defined - or inherited or imported in a scope enclosing the application. - - An extension method is applicable if it is a member of some representative that's eligible at the point of the application. - -As an example, consider an extension method `longestStrings` on `Seq[String]` defined in a trait `StringSeqOps`. 
- -```scala -trait StringSeqOps { - def (xs: Seq[String]) longestStrings = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} -``` -We can make the extension method available by defining a representative of `StringSeqOps`, like this: -```scala -repr ops1 of StringSeqOps -``` -Then -```scala -List("here", "is", "a", "list").longestStrings -``` -is legal everywhere `ops1` is eligible. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. - -```scala -object ops2 extends StringSeqOps -import ops2.longestStrings -List("here", "is", "a", "list").longestStrings -``` -The precise rules for resolving a selection to an extension method are as follows. - -Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional, -and where `T` is the expected type. The following two rewritings are tried in order: - - 1. The selection is rewritten to `m[Ts](e)`. - 2. If the first rewriting does not typecheck with expected type `T`, and there is a representative `r` - in either the current scope or in the implicit scope of `T`, and `r` defines an extension - method named `m`, then the selection is expanded to `r.m[Ts](e)`. - This second rewriting is attempted at the time where the compiler also tries an implicit conversion - from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results. - -So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided -`circle` has type `Circle` and `CircleOps` is an eligible representative (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`). - -### Representatives for Extension Methods - -Representatives that define extension methods can also be defined without an `of` clause. E.g., - -```scala -repr StringOps { - def (xs: Seq[String]) longestStrings: Seq[String] = { - val maxLength = xs.map(_.length).max - xs.filter(_.length == maxLength) - } -} - -repr { - def (xs: List[T]) second[T] = xs.tail.head -} -``` -If such a representative is anonymous (as in the second clause), its name is synthesized from the name -of the first defined extension method. - -### Operators - -The extension method syntax also applies to the definition of operators. -In each case the definition syntax mirrors the way the operator is applied. -Examples: -```scala - def (x: String) < (y: String) = ... - def (x: Elem) +: (xs: Seq[Elem]) = ... - - "ab" < "c" - 1 +: List(2, 3) -``` -The two definitions above translate to -```scala - def < (x: String)(y: String) = ... - def +: (xs: Seq[Elem])(x: Elem) = ... -``` -Note the swap of the two parameters `x` and `xs` when translating -the right-binding operator `+:` to an extension method. This is analogous -to the implementation of right-binding operators as normal methods. - -### Generic Extensions - -The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: - -```scala -def (xs: List[T]) second [T] = - xs.tail.head - -def (xs: List[List[T]]) flattened [T] = - xs.foldLeft[List[T]](Nil)(_ ++ _) - -def (x: T) + [T : Numeric](y: T): T = - the[Numeric[T]].plus(x, y) -``` - -As usual, type parameters of the extension method follow the defined method name. Nevertheless, such type parameters can already be used in the preceding parameter clause. 
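For instance, with the definitions above in scope, the following calls would be expected to typecheck (a sketch; the results in the comments assume the usual list semantics):
```scala
List(1, 2, 3).second                 // == 2, with T inferred as Int
List(List(1), List(2, 3)).flattened  // == List(1, 2, 3), with T inferred as Int
```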
- - -### Syntax - -The required syntax extension just adds one clause for extension methods relative -to the [current syntax](https://github.com/lampepfl/dotty/blob/master/docs/docs/internals/syntax.md). -``` -DefSig ::= ... - | ‘(’ DefParam ‘)’ [nl] id [DefTypeParamClause] DefParamClauses -``` - - - - diff --git a/docs/docs/reference/contextual-repr/import-implied.md b/docs/docs/reference/contextual-repr/import-implied.md deleted file mode 100644 index 33041ebf1225..000000000000 --- a/docs/docs/reference/contextual-repr/import-implied.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -layout: doc-page -title: "Imports of Representatives" ---- - -A special form of import is used to import representatives. Example: -```scala -object A { - class TC - repr tc of TC - def f given TC = ??? -} -object B { - import A._ - import repr A._ -} -``` -In the code above, the `import A._` clause of object `B` will import all members -of `A` _except_ the representative `tc`. Conversely, the second import `import repr A._` will import _only_ that representative. - -Generally, a normal import clause brings all members except representatives into scope whereas an `import repr` clause brings only representatives into scope. - -There are two main benefits arising from these rules: - - - It is made clearer where representatives in scope are coming from. - In particular, it is not possible to hide imported representatives in a long list of regular imports. - - It enables importing all representatives - without importing anything else. This is particularly important since representatives - can be anonymous, so the usual recourse of using named imports is not - practical. - -### Migration - -The rules of representatives above have the consequence that a library -would have to migrate in lockstep with all its users from old style implicits and -normal imports to representatives and `import repr` clauses. - -The following modifications avoid this hurdle to migration. - - 1. An `import repr` also brings old style implicits into scope. So, in Scala 3.0 - an old-style implicit definition can be brought into scope either by a normal import or - by an `import repr`. - - 2. In Scala 3.1, old-style implicits accessed through a normal import - will give a deprecation warning. - - 3. In some version after 3.1, old-style implicits accessed through a normal import - will give a compiler error. - -These rules mean that library users can use `import repr` to access old-style implicits in Scala 3.0, -and will be gently nudged and then forced to do so in later versions. Libraries can then switch to -`repr` clauses once their user base has migrated. diff --git a/docs/docs/reference/contextual-repr/inferable-by-name-parameters.md b/docs/docs/reference/contextual-repr/inferable-by-name-parameters.md deleted file mode 100644 index 9c06db8848b3..000000000000 --- a/docs/docs/reference/contextual-repr/inferable-by-name-parameters.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -layout: doc-page -title: "Implicit By-Name Parameters" ---- - -Implicit by-name parameters can be used to avoid a divergent inferred expansion. Example: - -```scala -trait Codec[T] { - def write(x: T): Unit -} - -repr intCodec of Codec[Int] = ??? 
- -repr optionCodec[T] of Codec[Option[T]] given (ev: => Codec[T]) { - def write(xo: Option[T]) = xo match { - case Some(x) => ev.write(x) - case None => - } -} - -val s = the[Codec[Option[Int]]] - -s.write(Some(33)) -s.write(None) -``` -As is the case for a normal by-name parameter, the argument for the implicit parameter `ev` -is evaluated on demand. In the example above, if the option value `xo` is `None`, the argument for `ev` is -not evaluated at all. - -The synthesized argument for an implicit parameter is backed by a local val -if this is necessary to prevent an otherwise diverging expansion. - -The precise steps for synthesizing an argument for a by-name parameter of type `=> T` are as follows. - - 1. Create a new representative of type `T`: - - ```scala - repr lv of T = ??? - ``` - where `lv` is an arbitrary fresh name. - - 1. This representative is not immediately eligible as a candidate for argument inference (making it immediately eligible could result in a loop in the synthesized computation). But it becomes eligible in all nested contexts that look again for an implicit argument to a by-name parameter. - - 1. If this search succeeds with expression `E`, and `E` contains references to the representative `lv`, replace `E` by - - - ```scala - { repr lv of T = E; lv } - ``` - - Otherwise, return `E` unchanged. - -In the example above, the definition of `s` would be expanded as follows. - -```scala -val s = the[Codec[Option[Int]]]( - optionCodec[Int](intCodec)) -``` - -No local representative was generated because the synthesized argument is not recursive. - -### Reference - -For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998) -and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html). diff --git a/docs/docs/reference/contextual-repr/inferable-params.md b/docs/docs/reference/contextual-repr/inferable-params.md deleted file mode 100644 index fdc758b44f6b..000000000000 --- a/docs/docs/reference/contextual-repr/inferable-params.md +++ /dev/null @@ -1,111 +0,0 @@ ---- -layout: doc-page -title: "Given Clauses" ---- - -Functional programming tends to express most dependencies as simple function parameterization. -This is clean and powerful, but it sometimes leads to functions that take many parameters and -call trees where the same value is passed over and over again in long call chains to many -functions. Given clauses can help here since they enable the compiler to synthesize -repetitive arguments instead of the programmer having to write them explicitly. - -For example, given the [representatives](./instance-defs.md) defined previously, -a maximum function that works for any arguments for which an ordering exists can be defined as follows: -```scala -def max[T](x: T, y: T) given (ord: Ord[T]): T = - if (ord.compare(x, y) < 1) y else x -``` -Here, `ord` is an _implicit parameter_ introduced with a `given` clause. -The `max` method can be applied as follows: -```scala -max(2, 3).given(IntOrd) -``` -The `.given(IntOrd)` part passes `IntOrd` as an argument for the `ord` parameter. But the point of -implicit parameters is that this argument can also be left out (and it usually is). So the following -applications are equally valid: -```scala -max(2, 3) -max(List(1, 2, 3), Nil) -``` - -## Anonymous Implicit Parameters - -In many situations, the name of an implicit parameter of a method need not be -mentioned explicitly at all, since it is only used in synthesized arguments for -other implicit parameters. 
In that case one can avoid defining a parameter name -and just provide its type. Example: -```scala -def maximum[T](xs: List[T]) given Ord[T]: T = - xs.reduceLeft(max) -``` -`maximum` takes an implicit parameter of type `Ord` only to pass it on as a -synthesized argument to `max`. The name of the parameter is left out. - -Generally, implicit parameters may be given either as a parameter list `(p_1: T_1, ..., p_n: T_n)` -or as a sequence of types, separated by commas. - -## Inferring Complex Arguments - -Here are two other methods that have an implicit parameter of type `Ord[T]`: -```scala -def descending[T] given (asc: Ord[T]): Ord[T] = new Ord[T] { - def compare(x: T, y: T) = asc.compare(y, x) -} - -def minimum[T](xs: List[T]) given Ord[T] = - maximum(xs).given(descending) -``` -The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`. -With this setup, the following calls are all well-formed, and they all normalize to the last one: -```scala -minimum(xs) -maximum(xs).given(descending) -maximum(xs).given(descending.given(ListOrd)) -maximum(xs).given(descending.given(ListOrd.given(IntOrd))) -``` - -## Mixing Given Clauses And Normal Parameters - -Given clauses can be freely mixed with normal parameters. -A given clause may be followed by a normal parameter and _vice versa_. -There can be several given clauses in a definition. Example: -```scala -def f given (u: Universe) (x: u.T) given Context = ... - -repr global of Universe { type T = String ... } -repr ctx of Context { ... } -``` -Then the following calls are all valid (and normalize to the last one) -```scala -f("abc") -f.given(global)("abc") -f("abc").given(ctx) -f.given(global)("abc").given(ctx) -``` - -## Summoning Representatives - -A method `the` in `Predef` returns the representative of a given type. For example, -the representative of `Ord[List[Int]]` is produced by -```scala -the[Ord[List[Int]]] // reduces to ListOrd given IntOrd -``` -The `the` method is simply defined as the (non-widening) identity function over an implicit parameter. -```scala -def the[T] given (x: T): x.type = x -``` - -## Syntax - -Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -ClsParamClause ::= ... - | ‘given’ (‘(’ [ClsParams] ‘)’ | GivenTypes) -DefParamClause ::= ... - | GivenParamClause -GivenParamClause ::= ‘given’ (‘(’ DefParams ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} - -InfixExpr ::= ... - | InfixExpr ‘given’ (InfixExpr | ParArgumentExprs) -``` diff --git a/docs/docs/reference/contextual-repr/instance-defs.md b/docs/docs/reference/contextual-repr/instance-defs.md deleted file mode 100644 index 0df31b30825e..000000000000 --- a/docs/docs/reference/contextual-repr/instance-defs.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -layout: doc-page -title: "Representatives" ---- - -Representatives define "canonical" values of given types -that can be synthesized by the compiler as arguments for -[given clauses](./inferable-params.html). 
Example: -```scala -trait Ord[T] { - def compare(x: T, y: T): Int - def (x: T) < (y: T) = compare(x, y) < 0 - def (x: T) > (y: T) = compare(x, y) > 0 -} - -repr IntOrd of Ord[Int] { - def compare(x: Int, y: Int) = - if (x < y) -1 else if (x > y) +1 else 0 -} - -repr ListOrd[T] of Ord[List[T]] given (ord: Ord[T]) { - def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match { - case (Nil, Nil) => 0 - case (Nil, _) => -1 - case (_, Nil) => +1 - case (x :: xs1, y :: ys1) => - val fst = ord.compare(x, y) - if (fst != 0) fst else compare(xs1, ys1) - } -} -``` -This code defines a trait `Ord` and two representative clauses. `IntOrd` defines -a representative of the type `Ord[Int]` whereas `ListOrd[T]` defines representatives -of `Ord[List[T]]` for all types `T` that come with a representative of `Ord[T]` themselves. -The `given` clause in `ListOrd` defines an implicit parameter. -Given clauses are further explained in the [next section](./inferable-params.html). - -## Anonymous Representatives - -The name of a representative can be left out. So the representatives -of the last section can also be expressed like this: -```scala -repr of Ord[Int] { ... } -repr [T] of Ord[List[T]] given (ord: Ord[T]) { ... } -``` -If the name of a representative is missing, the compiler will synthesize a name from -the type(s) in the `of` clause. - -## Alias Representatives - -An alias can be used to define a representative that is equal to some expression. E.g.: -```scala -repr global of ExecutionContext = new ForkJoinPool() -``` -This creates a representative `global` of type `ExecutionContext` that resolves to the right hand side `new ForkJoinPool()`. -The first time `global` is accessed, a new `ForkJoinPool` is created, which is then -returned for this and all subsequent accesses to `global`. - -Alias representatives can be anonymous, e.g. -```scala -repr of Position = enclosingTree.position -``` -An alias representative can have type and context parameters just like any other representative, but it can only implement a single type. - -## Creating Representatives - -A representative without type parameters or given clause is created on-demand, the first time it is accessed. It is not required to ensure safe publication, which means that different threads might create different representatives for the same `repr` clause. If a `repr` clause has type parameters or a given clause, a fresh representative is created for each reference. - -## Syntax - -Here is the new syntax of representative clauses, seen as a delta from the [standard context free syntax of Scala 3](http://dotty.epfl.ch/docs/internals/syntax.html). -``` -TmplDef ::= ... - | ‘repr’ ReprDef -ReprDef ::= [id] [DefTypeParamClause] ReprBody -ReprBody ::= [‘of’ ConstrApp {‘,’ ConstrApp }] {GivenParamClause} [TemplateBody] - | ‘of’ Type {GivenParamClause} ‘=’ Expr -ConstrApp ::= AnnotType {ArgumentExprs} - | ‘(’ ConstrApp {‘given’ (InfixExpr | ParArgumentExprs)} ‘)’ -GivenParamClause ::= ‘given’ (‘(’ [DefParams] ‘)’ | GivenTypes) -GivenTypes ::= AnnotType {‘,’ AnnotType} -``` -The identifier `id` can be omitted only if either the `of` part or the template body is present. -If the `of` part is missing, the template body must define at least one extension method. 
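As an illustration of how such representatives are resolved, the following sketch uses the `the` method from `Predef` (described on the page on given clauses) to summon the representatives defined at the start of this page:
```scala
the[Ord[Int]]        // resolves to IntOrd
the[Ord[List[Int]]]  // resolves to ListOrd[Int], with IntOrd supplied for its given clause
```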
diff --git a/docs/docs/reference/contextual-repr/motivation.md b/docs/docs/reference/contextual-repr/motivation.md deleted file mode 100644 index 460ded31b6d4..000000000000 --- a/docs/docs/reference/contextual-repr/motivation.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: doc-page -title: "Overview" ---- - -### Critique of the Status Quo - -Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them. - -Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit. E.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164) -or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda. - -Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly. - -Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses. - -Particular criticisms are: - -1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions - - ```scala - implicit def i1(implicit x: T): C[T] = ... - implicit def i2(x: T): C[T] = ... - ``` - - the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. 
But the problem remains that syntactically, conversions and values just look too similar for comfort. - - 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports. - - 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above. - - 4. The syntax of implicit parameters also has shortcomings. It starts with the position of `implicit` as a pseudo-modifier that applies to a whole parameter section instead of a single parameter. This represents an irregular case with respect to the rest of Scala's syntax. Furthermore, while implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in - ```scala - def currentMap(implicit ctx: Context): Map[String, Int] - ``` - one cannot write `currentMap("abc")` since the string "abc" is taken as explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced. - - 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain. - -None of the shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be. - -Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. 
Implicit parameters and instance definitions came later in 2006 and picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones. - -Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design. - -### The New Design - -The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes: - - 1. [Representatives](./instance-defs.html) are a new way to define basic terms that can be synthesized. They replace implicit definitions. The core principle is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types. - - 2. [Given Clauses](./inferable-params.html) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts. It also allows us to have several implicit parameter sections, and to have implicit parameters followed by normal ones. - - 3. [Import Repr](./import-implied.html) is a new form of import that specifically imports representatives and nothing else. Representatives _must be_ imported with `import repr`; a plain import will no longer bring them into scope. - - 4. [Implicit Conversions](./conversions.html) are now expressed as representatives of a standard `Conversion` class. All other forms of implicit conversions will be phased out. - -This section also contains pages describing other language features that are related to context abstraction. These are: - - - [Context Bounds](./context-bounds.html), which carry over unchanged. - - [Extension Methods](./extension-methods.html) replace implicit classes in a way that integrates better with typeclasses. - - [Implementing Typeclasses](./typeclasses.html) demonstrates how some common typeclasses can be implemented using the new constructs. - - [Typeclass Derivation](./derivation.html) introduces constructs to automatically derive typeclass representatives for ADTs. - - [Multiversal Equality](./multiversal-equality.html) introduces a special typeclass - to support type safe equality. - - [Implicit Function Types](./query-types.html) introduce a way to abstract over implicit parameterization. - - [Implicit By-Name Parameters](./inferable-by-name-parameters.html) are an essential tool to define recursive implicits without looping. - - [Relationship with Scala 2 Implicits](./relationship-implicits.html) discusses the relationship between old-style implicits and - new-style representatives and given clauses and how to migrate from one to the other. - -Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define representatives instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit with normal arguments. There is a separate way to import representatives that does not allow them to hide in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax. 
- -This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly. - -Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I believe now that this is impossible. - - - First, some of the problems are clearly syntactic and require different syntax to solve them. - - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution. - - Third, even if we somehow succeeded with migration, we would still have the problem - of how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away, and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have - to add modifications and qualifications to all existing literature and courseware, likely causing more confusion with beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated. diff --git a/docs/docs/reference/contextual-repr/multiversal-equality.md b/docs/docs/reference/contextual-repr/multiversal-equality.md deleted file mode 100644 index 516e3c7347fc..000000000000 --- a/docs/docs/reference/contextual-repr/multiversal-equality.md +++ /dev/null @@ -1,218 +0,0 @@ ---- -layout: doc-page -title: "Multiversal Equality" ---- - -Previously, Scala had universal equality: Two values of any types -could be compared with each other with `==` and `!=`. This came from -the fact that `==` and `!=` are implemented in terms of Java's -`equals` method, which can also compare values of any two reference -types. - -Universal equality is convenient. But it is also dangerous since it -undermines type safety. For instance, let's assume one is left after some refactoring -with an erroneous program where a value `y` has type `S` instead of the correct type `T`. - -```scala -val x = ... // of type T -val y = ... // of type S, but should be T -x == y // typechecks, will always yield false -``` - -If `y` gets compared to other values of type `T`, -the program will still typecheck, since values of all types can be compared with each other. -But it will probably give unexpected results and fail at runtime. - -Multiversal equality is an opt-in way to make universal equality -safer. It uses a binary typeclass `Eql` to indicate that values of -two given types can be compared with each other. -The example above would report a type error if `S` or `T` were a class -that derives `Eql`, e.g. 
-```scala -class T derives Eql -``` -Alternatively, one can also define an `Eql` representative directly, like this: -```scala -repr of Eql[T, T] = Eql.derived -``` -This definition effectively says that values of type `T` can (only) be -compared to other values of type `T` when using `==` or `!=`. The definition -affects type checking but it has no significance for runtime -behavior, since `==` always maps to `equals` and `!=` always maps to -the negation of `equals`. The right hand side `Eql.derived` of the definition -is a value that has any `Eql` instance as its type. Here is the definition of class -`Eql` and its companion object: -```scala -package scala -import annotation.implicitNotFound - -@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=") -sealed trait Eql[-L, -R] - -object Eql { - object derived extends Eql[Any, Any] -} -``` - -One can have several `Eql` representatives for a type. For example, the four -definitions below make values of type `A` and type `B` comparable with -each other, but not comparable to anything else: - -```scala -repr of Eql[A, A] = Eql.derived -repr of Eql[B, B] = Eql.derived -repr of Eql[A, B] = Eql.derived -repr of Eql[B, A] = Eql.derived -``` -The `scala.Eql` object defines a number of `Eql` representatives that together -define a rule book for what standard types can be compared (more details below). - -There's also a "fallback" instance named `eqlAny` that allows comparisons -over all types that do not themselves have an `Eql` representative. `eqlAny` is -defined as follows: - -```scala -def eqlAny[L, R]: Eql[L, R] = Eql.derived -``` - -Even though `eqlAny` is not declared as a representative, the compiler will still -construct an `eqlAny` instance as answer to an implicit search for the -type `Eql[L, R]`, unless `L` or `R` have `Eql` representatives -defined on them, or the language feature `strictEquality` is enabled. - -The primary motivation for having `eqlAny` is backwards compatibility. -If this is of no concern, one can disable `eqlAny` by enabling the language -feature `strictEquality`. As for all language features, this can be done either -with an import - -```scala -import scala.language.strictEquality -``` -or with a command line option `-language:strictEquality`. - -## Deriving Eql Representatives - -Instead of defining `Eql` representatives directly, it is often more convenient to derive them. Example: -```scala -class Box[T](x: T) derives Eql -``` -By the usual rules of [typeclass derivation](./derivation.html), -this generates the following `Eql` representative in the companion object of `Box`: -```scala -repr [T, U] of Eql[Box[T], Box[U]] given Eql[T, U] = Eql.derived -``` -That is, two boxes are comparable with `==` or `!=` if their elements are. Examples: -```scala -new Box(1) == new Box(1L) // ok since there is a representative of `Eql[Int, Long]` -new Box(1) == new Box("a") // error: can't compare -new Box(1) == 1 // error: can't compare -``` - -## Precise Rules for Equality Checking - -The precise rules for equality checking are as follows. - -If the `strictEquality` feature is enabled then -a comparison using `x == y` or `x != y` between values `x: T` and `y: U` -is legal if - - 1. there is a representative of type `Eql[T, U]`, or - 2. one of `T`, `U` is `Null`. - -In the default case where the `strictEquality` feature is not enabled the comparison is -also legal if - - 1. `T` and `U` are the same, or - 2. one of `T` and `U` is a subtype of the _lifted_ version of the other type, or - 3. 
neither `T` nor `U` has a _reflexive `Eql` representative_. - -Explanations: - - - _lifting_ a type `S` means replacing all references to abstract types - in covariant positions of `S` by their upper bound, and replacing - all refinement types in covariant positions of `S` by their parent. - - a type `T` has a _reflexive `Eql` representative_ if the implicit search for `Eql[T, T]` - succeeds. - -## Predefined Eql Representatives - -The `Eql` object defines representatives for - - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`, - - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`, - - `scala.collection.Seq`, and `scala.collection.Set`. - -Representatives are defined so that every one of these types has a reflexive `Eql` representative, and the following holds: - - - Primitive numeric types can be compared with each other. - - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_). - - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_). - - `Char` can be compared with `java.lang.Character` (and _vice versa_). - - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared - with each other if their element types can be compared. The two sequence types - need not be the same. - - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared - with each other if their element types can be compared. The two set types - need not be the same. - - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_). - -## Why Two Type Parameters? - -One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional -implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, -we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. - -Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: -```scala -class List[+T] { - ... - def contains(x: Any): Boolean -} -``` -That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition -```scala - def contains(x: T): Boolean -``` -does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: -```scala - def contains[U >: T](x: U): Boolean -``` -This generic version of `contains` is the one used in the current (Scala 2.12) version of `List`. -It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. -However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter: -```scala - def contains[U >: T](x: U) given Eql[T, U]: Boolean // (1) -``` -This version of `contains` is equality-safe! More precisely, given -`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if -`x == y` is type-correct. 
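For example, under signature (1) we would expect the following (a sketch, assuming the predefined `Eql` representatives listed earlier):
```scala
val xs: List[Int] = List(1, 2, 3)
xs.contains(2)       // ok: Int has a reflexive Eql representative
xs.contains(2L)      // ok: primitive numeric types can be compared with each other
// xs.contains("a")  // error: no Eql[Int, String] representative exists
```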
- -Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: -```scala - def contains[U >: T](x: U) given Eql1[U]: Boolean // (2) -``` -This version could be applied just as widely as the original `contains(x: Any)` method, -since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` have a reflexive `Eql` representative. That rule simply cannot be expressed if there is a single type parameter for `Eql`. - -The situation is different under `-language:strictEquality`. In that case, -the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the -single and two-parameter versions would indeed coincide for most practical purposes. - -But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be unsurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out non-sensical applications while still allowing all sensible ones. -So it can be done almost at any time, modulo binary compatibility concerns. -On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` -unusable for all types that have not yet declared an `Eql1` representative, including all -types coming from Java. This is clearly unacceptable. It would lead to a situation where, -rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. - -For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality. - -In applications where `-language:strictEquality` is the default one could also introduce a one-parameter type alias such as -```scala -type Eq[-T] = Eql[T, T] -``` -Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only -work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere. - - -More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html) -and a [Github issue](https://github.com/lampepfl/dotty/issues/1247). diff --git a/docs/docs/reference/contextual-repr/query-types-spec.md b/docs/docs/reference/contextual-repr/query-types-spec.md deleted file mode 100644 index 0e4dae6cb66a..000000000000 --- a/docs/docs/reference/contextual-repr/query-types-spec.md +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: doc-page -title: "Implicit Function Types - More Details" ---- - -## Syntax - - Type ::= ... - | `given' FunArgTypes `=>' Type - Expr ::= ... - | `given' FunParams `=>' Expr - -Implicit function types associate to the right, e.g. -`given S => given T => U` is the same as `given S => (given T => U)`. 
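To illustrate the associativity rule with concrete types (`Config` and `Logger` are placeholder traits, not part of the proposal):
```scala
trait Config
trait Logger

// The following two aliases denote the same type:
type T1 = given Config => given Logger => Unit
type T2 = given Config => (given Logger => Unit)
```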
## Implementation

Implicit function types are shorthands for class types that define `apply` methods with implicit parameters. Specifically, the `N`-ary implicit function type `given (T1, ..., TN) => R` is a shorthand for the class type `ImplicitFunctionN[T1, ..., TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`:
```scala
package scala
trait ImplicitFunctionN[-T1, ..., -TN, +R] {
  def apply given (x1: T1, ..., xN: TN): R
}
```
Implicit function types erase to normal function types, so these classes are generated on the fly for typechecking, but not realized in actual code.

Implicit function literals `given (x1: T1, ..., xn: Tn) => e` map implicit parameters `xi` of types `Ti` to a result given by expression `e`. The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names.

If the expected type of the implicit function literal is of the form `scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti = Si` is assumed. If the expected type of the implicit function literal is some other type, all implicit parameter types must be explicitly given, and the expected type of `e` is undefined. The type of the implicit function literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened type of `e`. `T` must be equivalent to a type which does not refer to any of the implicit parameters `xi`.

The implicit function literal is evaluated as the instance creation expression:
```scala
new scala.ImplicitFunctionN[T1, ..., Tn, T] {
  def apply given (x1: T1, ..., xn: Tn): T = e
}
```
In the case of a single untyped parameter, `given (x) => e` can be abbreviated to `given x => e`.

An implicit parameter may also be a wildcard represented by an underscore `_`. In that case, a fresh name for the parameter is chosen arbitrarily.

Note: The closing paragraph of the [Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions) of Scala 2.12 is subsumed by implicit function types and should be removed.

Implicit function literals `given (x1: T1, ..., xn: Tn) => e` are automatically created for any expression `e` whose expected type is `scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is itself an implicit function literal. This is analogous to the automatic insertion of `scala.Function0` around expressions in by-name argument position.

Implicit function types generalize to `N > 22` in the same way that function types do, see [the corresponding documentation](https://dotty.epfl.ch/docs/reference/dropped-features/limit22.html).

## Examples

See the section on Expressiveness from [Simplicitly: foundations and applications of implicit function types](https://dl.acm.org/citation.cfm?id=3158130). I've extracted it in [this Gist](https://gist.github.com/OlivierBlanvillain/234d3927fe9e9c6fba074b53a7bd9592); it might be easier to access than the PDF.

### Type Checking

After desugaring, no additional typing rules are required for implicit function types.
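As a sketch of why this is so, a literal such as `given (ctx: Ctx) => 42` (with `Ctx` a made-up trait) is checked exactly like the instance creation expression it expands to. The expansion is shown for illustration only, since the `ImplicitFunctionN` classes are synthesized by the compiler rather than written by hand:
```scala
trait Ctx

val f: given Ctx => Int = given (ctx: Ctx) => 42

// is typechecked as if it were written:
val g = new ImplicitFunction1[Ctx, Int] {
  def apply given (ctx: Ctx): Int = 42
}
```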
diff --git a/docs/docs/reference/contextual-repr/query-types.md b/docs/docs/reference/contextual-repr/query-types.md deleted file mode 100644 index aa8535b15104..000000000000 --- a/docs/docs/reference/contextual-repr/query-types.md +++ /dev/null @@ -1,159 +0,0 @@
---
layout: doc-page
title: "Implicit Function Types"
---

_Implicit functions_ are functions with (only) implicit parameters. Their types are _implicit function types_. Here is an example of an implicit function type:
```scala
type Contextual[T] = given Context => T
```
A value of an implicit function type is applied to inferred arguments, in the same way a method with a given clause is applied. For instance:
```scala
  repr ctx of Context = ...

  def f(x: Int): Contextual[Int] = ...

  f(2).given(ctx) // explicit argument
  f(2)            // argument is inferred
```
Conversely, if the expected type of an expression `E` is an implicit function type `given (T_1, ..., T_n) => U` and `E` is not already an implicit function literal, `E` is converted to an implicit function literal by rewriting to
```scala
  given (x_1: T_1, ..., x_n: T_n) => E
```
where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` are available as representatives in `E`.

Like their types, implicit function literals are written with a `given` prefix. They differ from normal function literals in two ways:

 1. Their parameters are implicit.
 2. Their types are implicit function types.

For example, continuing with the previous definitions,
```scala
  def g(arg: Contextual[Int]) = ...

  g(22) // is expanded to g(given ctx => 22)

  g(f(2)) // is expanded to g(given ctx => f(2).given(ctx))

  g(given ctx => f(22).given(ctx)) // is left as it is
```
### Example: Builder Pattern

Implicit function types have considerable expressive power. For instance, here is how they can support the "builder pattern", where the aim is to construct tables like this:
```scala
  table {
    row {
      cell("top left")
      cell("top right")
    }
    row {
      cell("bottom left")
      cell("bottom right")
    }
  }
```
The idea is to define classes for `Table` and `Row` that allow addition of elements via `add`:
```scala
  class Table {
    val rows = new ArrayBuffer[Row]
    def add(r: Row): Unit = rows += r
    override def toString = rows.mkString("Table(", ", ", ")")
  }

  class Row {
    val cells = new ArrayBuffer[Cell]
    def add(c: Cell): Unit = cells += c
    override def toString = cells.mkString("Row(", ", ", ")")
  }

  case class Cell(elem: String)
```
Then, the `table`, `row` and `cell` constructor methods can be defined in terms of implicit function types to avoid the plumbing boilerplate that would otherwise be necessary.
```scala
  def table(init: given Table => Unit) = {
    repr t of Table
    init
    t
  }

  def row(init: given Row => Unit) given (t: Table) = {
    repr r of Row
    init
    t.add(r)
  }

  def cell(str: String) given (r: Row) =
    r.add(new Cell(str))
```
With that setup, the table construction code above compiles and expands to:
```scala
  table { given ($t: Table) =>
    row { given ($r: Row) =>
      cell("top left").given($r)
      cell("top right").given($r)
    }.given($t)
    row { given ($r: Row) =>
      cell("bottom left").given($r)
      cell("bottom right").given($r)
    }.given($t)
  }
```
### Example: Postconditions

As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, implicit function types, and extension methods to provide a zero-overhead abstraction.

```scala
object PostConditions {
  opaque type WrappedResult[T] = T

  private object WrappedResult {
    def wrap[T](x: T): WrappedResult[T] = x
    def unwrap[T](x: WrappedResult[T]): T = x
  }

  def result[T] given (r: WrappedResult[T]): T = WrappedResult.unwrap(r)

  def (x: T) ensuring [T](condition: given WrappedResult[T] => Boolean): T = {
    repr of WrappedResult[T] = WrappedResult.wrap(x)
    assert(condition)
    x
  }
}

object Test {
  import PostConditions.{ensuring, result}
  val s = List(1, 2, 3).sum.ensuring(result == 6)
}
```
**Explanations**: We use an implicit function type `given WrappedResult[T] => Boolean` as the type of the condition of `ensuring`. An argument to `ensuring` such as `(result == 6)` will therefore have a representative of type `WrappedResult[T]` in scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure that we do not get unwanted representatives in scope (this is good practice in all cases where implicit parameters are involved). Since `WrappedResult` is an opaque type alias, its values need not be boxed, and since `ensuring` is added as an extension method, its argument does not need boxing either. Hence, the implementation of `ensuring` is about as efficient as the best possible code one could write by hand:

    { val result = List(1, 2, 3).sum
      assert(result == 6)
      result
    }

### Reference

For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html) (which uses a different syntax that has since been superseded).

[More details](./query-types-spec.html)
diff --git a/docs/docs/reference/contextual-repr/relationship-implicits.md b/docs/docs/reference/contextual-repr/relationship-implicits.md deleted file mode 100644 index 6261c7f3653d..000000000000 --- a/docs/docs/reference/contextual-repr/relationship-implicits.md +++ /dev/null @@ -1,175 +0,0 @@
---
layout: doc-page
title: Relationship with Scala 2 Implicits
---

Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features.

## Simulating Contextual Abstraction with Implicits

### Representatives

Representative clauses can be mapped to combinations of implicit objects and implicit methods together with normal classes.

 1. Representatives without parameters are mapped to implicit objects. E.g.,
    ```scala
    repr IntOrd of Ord[Int] { ... }
    ```
    maps to
    ```scala
    implicit object IntOrd extends Ord[Int] { ... }
    ```
 2.
Parameterized representatives are mapped to combinations of classes and implicit methods. E.g., - ```scala - repr ListOrd[T] of Ord[List[T]] given (ord: Ord[T]) { ... } - ``` - maps to - ```scala - class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... } - final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T] - ``` - 3. Alias representatives map to implicit methods. If the representative has neither type parameters nor a given clause, the result of creating an instance is cached in a variable. There are two cases that can be optimized: - - - If the right hand side is a simple reference, we can - use a forwarder to that reference without caching it. - - If the right hand side is more complex, but still known to be pure, we can - create a `val` that computes it ahead of time. - - Examples: - - ```scala - repr global of ExecutionContext = new ForkJoinContext() - repr config of Config = default.config - - def ctx: Context - repr of Context = ctx - ``` - would map to - ```scala - private[this] var global$cache: ExecutionContext | Null = null - final implicit def global: ExecutionContext = { - if (global$cache == null) global$cache = new ForkJoinContext() - global$cache - } - - final implicit val config: Config = default.config - - final implicit def Context_repr = ctx - ``` - -### Anonymous Representatives - -Anonymous representatives get compiler synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `IntOrd` and `ListOrd` representatives above were left out, the following names would be synthesized instead: -```scala - repr Ord_Int_repr of Ord[Int] { ... } - repr Ord_List_repr[T] of Ord[List[T]] { ... } -``` -The synthesized type names are formed from - - - the simple name(s) of the implemented type(s), leaving out any prefixes, - - the simple name(s) of the toplevel argument type constructors to these types - - the suffix `_repr`. - -Anonymous representatives that define extension methods without also implementing a type -get their name from the name of the first extension method and the toplevel type -constructor of its first parameter. For example, the representative -```scala - repr { - def (xs: List[T]) second[T] = ... - } -``` -gets the synthesized name `second_of_List_T_repr`. - -### Implicit Parameters - -The new implicit parameter syntax with `given` corresponds largely to Scala-2's implicit parameters. E.g. -```scala - def max[T](x: T, y: T) given (ord: Ord[T]): T -``` -would be written -```scala - def max[T](x: T, y: T)(implicit ord: Ord[T]): T -``` -in Scala 2. The main difference concerns applications of such parameters. -Explicit arguments to parameters of given clauses _must_ be written using `given`, -mirroring the definition syntax. E.g, `max(2, 3).given(IntOrd)`. -Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern. - -The `the` method corresponds to `implicitly` in Scala 2. -It is precisely the same as the `the` method in Shapeless. -The difference between `the` (in both versions) and `implicitly` is -that `the` can return a more precise type than the type that was -asked for. - -### Context Bounds - -Context bounds are the same in both language versions. 
They expand to the respective forms of implicit parameters. - -**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or -with a normal application. Once old-style implicits are deprecated, context bounds -will map to given clauses instead. - -### Extension Methods - -Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method -```scala - def (c: Circle) circumference: Double = c.radius * math.Pi * 2 -``` -could be simulated to some degree by -```scala - implicit class CircleDeco(c: Circle) extends AnyVal { - def circumference: Double = c.radius * math.Pi * 2 - } -``` -Extension methods in representatives have no direct counterpart in Scala-2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases. - -### Typeclass Derivation - -Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving. - -### Implicit Function Types - -Implicit function types have no analogue in Scala 2. - -### Implicit By-Name Parameters - -Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless. - -## Simulating Scala 2 Implicits in Dotty - -### Implicit Conversions - -Implicit conversion methods in Scala 2 can be expressed as representatives -of the `scala.Conversion` class in Dotty. E.g. instead of -```scala - implicit def stringToToken(str: String): Token = new Keyword(str) -``` -one can write -```scala - repr stringToToken of Conversion[String, Token] { - def apply(str: String): Token = new KeyWord(str) - } -``` - -### Implicit Classes - -Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and a conversion representative. - -### Abstract Implicits - -An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias representative. E.g., Scala 2's -```scala - implicit def symDeco: SymDeco -``` -can be expressed in Dotty as -```scala - def symDeco: SymDeco - repr of SymDeco = symDeco -``` - -## Implementation Status and Timeline - -The Dotty implementation implements both Scala-2's implicits and the new abstractions. In fact, support for Scala-2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty. -Migration to the new abstractions will be supported by making automatic rewritings available. - -Depending on adoption patterns, old style implicits might start to be deprecated in a version following Scala 3.0. diff --git a/docs/docs/reference/contextual-repr/typeclasses.md b/docs/docs/reference/contextual-repr/typeclasses.md deleted file mode 100644 index fae1cc9a5bd3..000000000000 --- a/docs/docs/reference/contextual-repr/typeclasses.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -layout: doc-page -title: "Implementing Typeclasses" ---- - -Representatives, extension methods and context bounds -allow a concise and natural expression of _typeclasses_. Typeclasses are just traits -with canonical implementations defined by representatives. 
Here are some examples of standard typeclasses: - -### Semigroups and monoids: - -```scala -trait SemiGroup[T] { - def (x: T) combine (y: T): T -} -trait Monoid[T] extends SemiGroup[T] { - def unit: T -} -object Monoid { - def apply[T] given Monoid[T] = the[Monoid[T]] -} - -repr of Monoid[String] { - def (x: String) combine (y: String): String = x.concat(y) - def unit: String = "" -} - -repr of Monoid[Int] { - def (x: Int) combine (y: Int): Int = x + y - def unit: Int = 0 -} - -def sum[T: Monoid](xs: List[T]): T = - xs.foldLeft(Monoid[T].unit)(_.combine(_)) -``` - -### Functors and monads: - -```scala -trait Functor[F[_]] { - def (x: F[A]) map [A, B] (f: A => B): F[B] -} - -trait Monad[F[_]] extends Functor[F] { - def (x: F[A]) flatMap [A, B] (f: A => F[B]): F[B] - def (x: F[A]) map [A, B] (f: A => B) = x.flatMap(f `andThen` pure) - - def pure[A](x: A): F[A] -} - -repr ListMonad of Monad[List] { - def (xs: List[A]) flatMap [A, B] (f: A => List[B]): List[B] = - xs.flatMap(f) - def pure[A](x: A): List[A] = - List(x) -} - -repr ReaderMonad[Ctx] of Monad[[X] => Ctx => X] { - def (r: Ctx => A) flatMap [A, B] (f: A => Ctx => B): Ctx => B = - ctx => f(r(ctx))(ctx) - def pure[A](x: A): Ctx => A = - ctx => x -} -``` diff --git a/docs/docs/reference/features-classification.md b/docs/docs/reference/features-classification.md index dcfab2d4984c..8f5342d7e49c 100644 --- a/docs/docs/reference/features-classification.md +++ b/docs/docs/reference/features-classification.md @@ -170,9 +170,9 @@ It's worth noting that macros were never included in the Scala 2 language specif To enable porting most uses of macros, we are experimenting with the advanced language constructs listed below. These designs are more provisional than the rest of the proposed language constructs for Scala 3.0. There might still be some changes until the final release. Stabilizing the feature set needed for meta programming is our first priority. - [Match Types](https://dotty.epfl.ch/docs/reference/new-types/match-types.html) allow computation on types. -- [Inline](https://dotty.epfl.ch/docs/reference/other-new-features/inline.html) provides +- [Inline](https://dotty.epfl.ch/docs/reference/metaprogramming/inline.html) provides by itself a straightforward implementation of some simple macros and is at the same time an essential building block for the implementation of complex macros. -- [Quotes and Splices](https://dotty.epfl.ch/docs/reference/other-new-features/principled-meta-programming.html) provide a principled way to express macros and staging with a unified set of abstractions. +- [Quotes and Splices](https://dotty.epfl.ch/docs/reference/metaprogramming/macros.html) provide a principled way to express macros and staging with a unified set of abstractions. - [Typeclass derivation](https://dotty.epfl.ch/docs/reference/contextual/derivation.html) provides an in-language implementation of the `Gen` macro in Shapeless and other foundational libraries. The new implementation is more robust, efficient and easier to use than the macro. - [Implicit by-name parameters](https://dotty.epfl.ch/docs/reference/contextual/inferable-by-name-parameters.html) provide a more robust in-language implementation of the `Lazy` macro in Shapeless. - [Erased Terms](https://dotty.epfl.ch/docs/reference/other-new-features/erased-terms.html) provide a general mechanism for compile-time-only computations. 
diff --git a/docs/docs/reference/metaprogramming/inline.md b/docs/docs/reference/metaprogramming/inline.md new file mode 100644 index 000000000000..91a2833de614 --- /dev/null +++ b/docs/docs/reference/metaprogramming/inline.md @@ -0,0 +1,475 @@
---
layout: doc-page
title: Inline
---

## Inline (blackbox/whitebox)

`inline` is a new [soft modifier](../soft-modifier.html) that guarantees that a definition will be inlined at the point of use. Example:

```scala
object Config {
  inline val logging = false
}

object Logger {

  private var indent = 0

  inline def log[T](msg: => String)(op: => T): T =
    if (Config.logging) {
      println(s"${" " * indent}start $msg")
      indent += 1
      val result = op
      indent -= 1
      println(s"${" " * indent}$msg = $result")
      result
    }
    else op
}
```

The `Config` object contains a definition of the **inline value** `logging`. This means that `logging` is treated as a _constant value_, equivalent to its right-hand side `false`. The right-hand side of such an `inline val` must itself be a [constant expression](https://scala-lang.org/files/archive/spec/2.12/06-expressions.html#constant-expressions). Used in this way, `inline` is equivalent to Java and Scala 2's `final`. `final` meaning _inlined constant_ is still supported in Dotty, but will be phased out.

The `Logger` object contains a definition of the **inline method** `log`. This method will always be inlined at the point of call.

In the inlined code, an if-then-else with a constant condition will be rewritten to its then- or else-part. Consequently, in the `log` method above, `if (Config.logging)` with `Config.logging == true` will be rewritten to its then-part.

Here's an example:

```scala
def factorial(n: BigInt): BigInt =
  log(s"factorial($n)") {
    if (n == 0) 1
    else n * factorial(n - 1)
  }
```

If `Config.logging == false`, this will be rewritten (simplified) to

```scala
def factorial(n: BigInt): BigInt = {
  /* parameters of log passed by-name (1) */
  def msg = s"factorial($n)"
  def op =
    if (n == 0) 1
    else n * factorial(n - 1)

  /* inlined body of log (2) */
  op
}
```

and if `true` it will be rewritten to the code below:

```scala
def factorial(n: BigInt): BigInt = {
  /* parameters of log passed by-name (1) */
  def msg = s"factorial($n)"
  def op =
    if (n == 0) 1
    else n * factorial(n - 1)

  /* inlined body of log (2) */
  println(s"${" " * indent}start $msg")
  indent += 1
  val result = op
  indent -= 1
  println(s"${" " * indent}$msg = $result")
  result
}
```

Note (1) that the arguments corresponding to the parameters `msg` and `op` of the inline method `log` are defined before the inlined body (which is in this case simply `op` (2)). By-name parameters of the inline method correspond to `def` bindings whereas by-value parameters correspond to `val` bindings. So if `log` was defined like this:

```scala
inline def log[T](msg: String)(op: => T): T = ...
```

we'd get

```scala
val msg = s"factorial($n)"
```

instead. This behavior is designed so that calling an inline method is semantically the same as calling a normal method: by-value arguments are evaluated before the call whereas by-name arguments are evaluated each time they are referenced. As a consequence, it is often preferable to make arguments of inline methods by-name in order to avoid unnecessary evaluations. Additionally, in the code above, our goal is to print the result after the evaluation of `op`.
Imagine, for instance, that we wanted to measure the duration of the evaluation between the two prints: this is only possible because `op` is evaluated at its use site, between them.

For instance, here is how we can define a zero-overhead `foreach` method that translates into a straightforward while loop without any indirection or overhead:

```scala
// `from` and `end` are assumed to be fields of the enclosing range-like class
inline def foreach(op: => Int => Unit): Unit = {
  var i = from
  while (i < end) {
    op(i)
    i += 1
  }
}
```

By contrast, if `op` were a call-by-value parameter, it would be evaluated separately as a closure.

Inline methods can be recursive. For instance, when called with a constant exponent `n`, the following method for `power` will be implemented by straight inline code without any loop or recursion.

```scala
inline def power(x: Double, n: Int): Double =
  if (n == 0) 1.0
  else if (n == 1) x
  else {
    val y = power(x, n / 2)
    if (n % 2 == 0) y * y else y * y * x
  }

power(expr, 10)
// translates to
//
//    val x = expr
//    val y1 = x * x   // ^2
//    val y2 = y1 * y1 // ^4
//    val y3 = y2 * x  // ^5
//    y3 * y3          // ^10
```

Parameters of inline methods can be marked `inline`. This means that actual arguments to these parameters must be constant expressions. For example:

```scala
inline def power(x: Double, inline n: Int): Double
```

### Relationship to @inline

Scala also defines a `@inline` annotation which is used as a hint for the backend to inline. The `inline` modifier is a more powerful option: expansion is guaranteed instead of best effort, it happens in the frontend instead of in the backend, and it also applies to recursive methods.

To cross-compile between Dotty and Scalac, we introduce a new `@forceInline` annotation which is equivalent to the new `inline` modifier. Note that Scala 2 ignores the `@forceInline` annotation, so one must use both annotations to guarantee inlining for Dotty and at the same time hint inlining for Scala 2 (i.e. `@forceInline @inline`).

### Evaluation Rules

As you may have noticed in the examples above, a lambda of the form

`((x_1, ..., x_n) => B)(E_1, ..., E_n)` is rewritten to:

```
{ val/def x_1 = E_1
  ...
  val/def x_n = E_n
  B
}
```

where `val`s are used for by-value parameter bindings and `def`s are used for by-name parameter bindings. If an argument `E_i` is a simple variable reference `y`, the corresponding binding is omitted and `y` is used instead of `x_i` in `B`.

If an `inline` modifier is given for parameters, corresponding arguments must be pure expressions of constant type.

#### The definition of constant expression

Right-hand sides of inline values and of arguments for inline parameters must be constant expressions in the sense defined by the [SLS § 6.24](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#constant-expressions), including _platform-specific_ extensions such as constant folding of pure numeric computations.

### Specializing Inline (Whitebox)

Inline methods support the `<: T` return type syntax. This means that the return type of the inline method is going to be specialized to a more precise type upon expansion.

Consider the example below where the inline method `choose` can return an object of either of the two dynamic types. The subtype relationship is `B <: A`. Since we use the specializing inline syntax, the static types of the `val`s are inferred accordingly. Consequently, calling `meth` on `obj2` is not a compile-time error as `obj2` will be of type `B`.
```scala
class A
class B extends A {
  def meth() = true
}

inline def choose(b: Boolean) <: A = {
  if (b) new A()
  else new B()
}

val obj1 = choose(true)  // static type is A
val obj2 = choose(false) // static type is B

obj1.meth() // compile-time error
obj2.meth() // OK
```

In the following example, we see how the return type of `zero` is specialized to the singleton type `0`, permitting the addition to be ascribed with the correct type `1`.

```scala
inline def zero() <: Int = 0

final val one: 1 = zero() + 1
```

#### Inline Match

A `match` expression in the body of an `inline` method definition may be prefixed by the `inline` modifier. If there is enough static information to unambiguously take a branch, the expression is reduced to that branch and the type of the result is taken. The example below defines an inline method with a single inline match expression that picks a case based on its static type:

```scala
inline def g(x: Any) <: Any = inline x match {
  case x: String => (x, x) // Tuple2[String, String](x, x)
  case x: Double => x
}

g(1.0d)   // Has type 1.0d which is a subtype of Double
g("test") // Has type (String, String)
```

The scrutinee `x` is examined statically and the inline match is reduced accordingly, returning the corresponding value (with the type specialized due to the `<:` in the return type). This example performs a simple type test over the scrutinee. The type can have a richer structure like the simple ADT below. `toInt` matches the structure of a number in Church-encoding and _computes_ the corresponding integer.

```scala
trait Nat
case object Zero extends Nat
case class Succ[N <: Nat](n: N) extends Nat

inline def toInt(n: Nat) <: Int = inline n match {
  case Zero => 0
  case Succ(n1) => toInt(n1) + 1
}

final val natTwo = toInt(Succ(Succ(Zero)))
val intTwo: 2 = natTwo
```

`natTwo` is inferred to have the singleton type 2.

#### scala.compiletime._

This package contains helper definitions providing support for compile-time operations over values.

##### Const Value & Const Value Opt

`constValue` is a function that produces the constant value represented by a type.

```scala
import scala.compiletime.{constValue, S}

inline def toIntC[N] <: Int =
  inline constValue[N] match {
    case 0 => 0
    case _: S[n1] => 1 + toIntC[n1]
  }

final val ctwo = toIntC[2]
```

`constValueOpt` is the same as `constValue`, but it returns an `Option[T]`, enabling us to handle situations where a value is not present. Note that `S` is the type of the successor of some singleton type. For example the type `S[1]` is the singleton type `2`.

##### Erased Value

We have seen so far inline methods that take terms (tuples and integers) as parameters. What if we want to base case distinctions on types instead? For instance, one would like to be able to write a function `defaultValue`, that, given a type `T`, returns optionally the default value of `T`, if it exists. In fact, we can already express this using inline match expressions and a simple helper function, `scala.compiletime.erasedValue`, which is defined as follows:

```scala
erased def erasedValue[T]: T = ???
```

The `erasedValue` function _pretends_ to return a value of its type argument `T`. In fact, it would always raise a `NotImplementedError` exception when called. But the function can in fact never be called, since it is declared `erased`, so it can only be used at compile time during type checking.
Using `erasedValue`, we can then define `defaultValue` as follows:

```scala
inline def defaultValue[T] = inline erasedValue[T] match {
  case _: Byte => Some(0: Byte)
  case _: Char => Some(0: Char)
  case _: Short => Some(0: Short)
  case _: Int => Some(0)
  case _: Long => Some(0L)
  case _: Float => Some(0.0f)
  case _: Double => Some(0.0d)
  case _: Boolean => Some(false)
  case _: Unit => Some(())
  case _: t >: Null => Some(null)
  case _ => None
}
```

Then:
```scala
defaultValue[Int] = Some(0)
defaultValue[Boolean] = Some(false)
defaultValue[String | Null] = Some(null)
defaultValue[AnyVal] = None
```

As another example, consider the type-level version of `toInt` above: given a _type_ representing a Peano number, return the integer _value_ corresponding to it. Here's how this can be defined:

```scala
inline def toIntT[N <: Nat] <: Int = inline scala.compiletime.erasedValue[N] match {
  case _: Zero.type => 0
  case _: Succ[n] => toIntT[n] + 1
}

final val two = toIntT[Succ[Succ[Zero.type]]]
```

`erasedValue` is an `erased` method, so it cannot be called and has no runtime behavior. Since `toIntT` performs static checks over the static type of `N`, we can safely use it to scrutinize its type argument (`Succ[Succ[Zero.type]]` in this case).

##### Error

This package provides a compile-time `error` definition with the following signature:

```scala
inline def error(inline msg: String, objs: Any*): Nothing
```

The purpose of this is to expand, at the point of use, an error message (a constant string), appending the compile-time values passed in the `objs` parameter, separated by commas.

#### Implicit Match

It is foreseen that many areas of typelevel programming can be done with inline methods instead of implicits. But sometimes implicits are unavoidable. The problem so far was that the Prolog-like programming style of implicit search becomes viral: once some construct depends on implicit search it has to be written as a logic program itself. Consider for instance the problem of creating a `TreeSet[T]` or a `HashSet[T]` depending on whether `T` has an `Ordering` or not. We can create a set of implicit definitions like this:

```scala
trait SetFor[T, S <: Set[T]]
class LowPriority {
  implicit def hashSetFor[T]: SetFor[T, HashSet[T]] = ...
}
object SetsFor extends LowPriority {
  implicit def treeSetFor[T: Ordering]: SetFor[T, TreeSet[T]] = ...
}
```

Clearly, this is not pretty. Besides all the usual indirection of implicit search, we face the problem of rule prioritization where we have to ensure that `treeSetFor` takes priority over `hashSetFor` if the element type has an ordering. This is solved (clumsily) by putting `hashSetFor` in a superclass `LowPriority` of the object `SetsFor` where `treeSetFor` is defined. Maybe the boilerplate would still be acceptable if the crufty code could be contained. However, this is not the case. Every user of the abstraction has to be parameterized itself with a `SetFor` implicit. Considering the simple task _"I want a `TreeSet[T]` if `T` has an ordering and a `HashSet[T]` otherwise"_, this seems like a lot of ceremony.

There are some proposals to improve the situation in specific areas, for instance by allowing more elaborate schemes to specify priorities. But they all keep the viral nature of implicit search programs based on logic programming.

By contrast, the new `implicit match` construct makes implicit search available in a functional context.
To solve the problem of creating the right set, one would use it as follows:
```scala
inline def setFor[T]: Set[T] = implicit match {
  case ord: Ordering[T] => new TreeSet[T]
  case _ => new HashSet[T]
}
```
An implicit match uses the `implicit` keyword in the place of the scrutinee. Its patterns are type ascriptions of the form `identifier : Type`.

Patterns are tried in sequence. The first case with a pattern `x: T` such that an implicit value of type `T` can be summoned is chosen. The variable `x` is then bound to the implicit value for the remainder of the case. It can in turn be used as an implicit in the right-hand side of the case. It is an error if one of the tested patterns gives rise to an ambiguous implicit search.

An implicit match is considered to be a special kind of inline match. This means it can only occur in the body of an inline method, and it must be reduced at compile time.

Consequently, if we summon an `Ordering[String]`, the code above will return a new instance of `TreeSet[String]`.

```scala
the[Ordering[String]]

println(setFor[String].getClass) // prints class scala.collection.immutable.TreeSet
```

**Note** that implicit matches can raise ambiguity errors. Consider the following code with two implicit values of type `A` in scope. The single case of the implicit match, with a type ascription of `A`, raises the ambiguity error.

```scala
class A
implicit val a1: A = new A
implicit val a2: A = new A

inline def f: Any = implicit match {
  case _: A => ??? // error: ambiguous implicits
}
```

### Reference

For more info, see [PR #4768](https://github.com/lampepfl/dotty/pull/4768), which explains how inline methods can be used for typelevel programming and code specialization.
diff --git a/docs/docs/reference/metaprogramming/macros-spec.md b/docs/docs/reference/metaprogramming/macros-spec.md new file mode 100644 index 000000000000..ba8ab2b191db --- /dev/null +++ b/docs/docs/reference/metaprogramming/macros-spec.md @@ -0,0 +1,242 @@
---
layout: doc-page
title: "Macros Spec"
---

## Implementation

### Syntax

Compared to the [Dotty reference grammar](../../internals/syntax.md) there are the following syntax changes:

    SimpleExpr ::= ...
                |  ‘'’ ‘{’ Block ‘}’
                |  ‘'’ ‘[’ Type ‘]’
                |  ‘$’ ‘{’ Block ‘}’
    SimpleType ::= ...
                |  ‘$’ ‘{’ Block ‘}’

In addition, an identifier `$x` starting with a `$` that appears inside a quoted expression or type is treated as a splice `${x}`, and a quoted identifier `'x` that appears inside a splice is treated as a quote `'{x}`.

### Implementation in `dotc`

Quotes and splices are primitive forms in the generated abstract syntax trees. Top-level splices are eliminated during macro expansion while typing. On the other hand, top-level quotes are eliminated in the `ReifyQuotes` expansion phase (after typing and pickling). PCP checking occurs while preparing the RHS of an inline method for top-level splices and in the `Staging` phase (after typing and before pickling).

Macro-expansion works outside-in. If the outermost scope is a splice, the spliced AST will be evaluated in an interpreter. A call to a previously compiled method can be implemented as a reflective call to that method. With the restrictions on splices that are currently in place that's all that's needed. We might allow more interpretation in splices in the future, which would allow us to loosen the restriction.
Quotes in spliced, interpreted code are kept as they are, after splices nested in the quotes are expanded.

If the outermost scope is a quote, we need to generate code that constructs the quoted tree at run-time. We implement this by serializing the tree as a Tasty structure, which is stored in a string literal. At runtime, an unpickler method is called to deserialize the string into a tree.

Splices inside quoted code insert the spliced tree as is, after expanding any quotes in the spliced code recursively.

## Formalization

The phase consistency principle can be formalized in a calculus that extends simply-typed lambda calculus with quotes and splices.

### Syntax

The syntax of terms, values, and types is given as follows:

    Terms         t  ::=  x                 variable
                          (x: T) => t       lambda
                          t t               application
                          't                quote
                          $t                splice

    Values        v  ::=  (x: T) => t       lambda
                          'u                quote

    Simple terms  u  ::=  x  |  (x: T) => u  |  u u  |  't

    Types         T  ::=  A                 base type
                          T -> T            function type
                          expr T            quoted

Typing rules are formulated using a stack of environments `Es`. Individual environments `E` consist as usual of variable bindings `x: T`. Environments can be combined using the two combinators `'` and `$`.

    Environment   E  ::=  ()                empty
                          E, x: T

    Env. stack    Es ::=  ()                empty
                          E                 simple
                          Es * Es           combined

    Separator     *  ::=  '
                          $

The two environment combinators are both associative with left and right identity `()`.

### Operational semantics

We define a small-step reduction relation `-->` with the following rules:

                ((x: T) => t) v  -->  [x := v]t

                          ${'u}  -->  u

                       t1 --> t2
                    ---------------
                    e[t1] --> e[t2]

The first rule is standard call-by-value beta-reduction. The second rule says that splices and quotes cancel each other out. The third rule is a context rule; it says that reduction is allowed in the hole `[ ]` position of an evaluation context. Evaluation contexts `e` and splice evaluation contexts `e_s` are defined syntactically as follows:

    Eval context    e    ::=  [ ]  |  e t  |  v e  |  'e_s[${e}]
    Splice context  e_s  ::=  [ ]  |  (x: T) => e_s  |  e_s t  |  u e_s

### Typing rules

Typing judgments are of the form `Es |- t: T`. There are two substructural rules which express the fact that quotes and splices cancel each other out:

                      Es1 * Es2 |- t: T
                 ---------------------------
                 Es1 $ E1 ' E2 * Es2 |- t: T


                      Es1 * Es2 |- t: T
                 ---------------------------
                 Es1 ' E1 $ E2 * Es2 |- t: T

The lambda calculus fragment of the rules is standard, except that we use a stack of environments. The rules only interact with the topmost environment of the stack.

                          x: T in E
                        --------------
                        Es * E |- x: T


                     Es * E, x: T1 |- t: T2
                -------------------------------
                Es * E |- (x: T1) => t: T1 -> T2


              Es |- t1: T2 -> T    Es |- t2: T2
              ---------------------------------
                        Es |- t1 t2: T

The rules for quotes and splices map between `expr T` and `T` by trading `'` and `$` between environments and terms.

                     Es $ () |- t: expr T
                     --------------------
                         Es |- $t: T


                       Es ' () |- t: T
                       ----------------
                       Es |- 't: expr T

The meta theory of a slightly simplified 2-stage variant of this calculus is studied [separately](../simple-smp.md).

## Going Further

The meta-programming framework as presented and currently implemented is quite restrictive in that it does not allow for the inspection of quoted expressions and types. It's possible to work around this by providing all necessary information as normal, unquoted inline parameters.
But we would gain +more flexibility by allowing for the inspection of quoted code with +pattern matching. This opens new possibilities. For instance, here is a +version of `power` that generates the multiplications directly if the +exponent is statically known and falls back to the dynamic +implementation of power otherwise. +```scala + inline def power(n: Int, x: Double): Double = ${ + 'n match { + case Constant(n1) => powerCode(n1, 'x) + case _ => '{ dynamicPower(n, x) } + } + } + + private def dynamicPower(n: Int, x: Double): Double = + if (n == 0) 1.0 + else if (n % 2 == 0) dynamicPower(n / 2, x * x) + else x * dynamicPower(n - 1, x) +``` +This assumes a `Constant` extractor that maps tree nodes representing +constants to their values. + +With the right extractors, the "AsFunction" conversion +that maps expressions over functions to functions over expressions can +be implemented in user code: +```scala + implied AsFunction1[T, U] for Conversion[Expr[T => U], Expr[T] => Expr[U]] { + def apply(f: Expr[T => U]): Expr[T] => Expr[U] = + (x: Expr[T]) => f match { + case Lambda(g) => g(x) + case _ => '{ ($f)($x) } + } + } +``` +This assumes an extractor +```scala + object Lambda { + def unapply[T, U](x: Expr[T => U]): Option[Expr[T] => Expr[U]] + } +``` +Once we allow inspection of code via extractors, it’s tempting to also +add constructors that create typed trees directly without going +through quotes. Most likely, those constructors would work over `Expr` +types which lack a known type argument. For instance, an `Apply` +constructor could be typed as follows: +```scala + def Apply(fn: Expr[_], args: List[Expr[_]]): Expr[_] +``` +This would allow constructing applications from lists of arguments +without having to match the arguments one-by-one with the +corresponding formal parameter types of the function. We then need "at +the end" a method to convert an `Expr[_]` to an `Expr[T]` where `T` is +given from the outside. E.g. if `code` yields a `Expr[_]`, then +`code.atType[T]` yields an `Expr[T]`. The `atType` method has to be +implemented as a primitive; it would check that the computed type +structure of `Expr` is a subtype of the type structure representing +`T`. + +Before going down that route, we should evaluate in detail the tradeoffs it +presents. Constructing trees that are only verified _a posteriori_ +to be type correct loses a lot of guidance for constructing the right +trees. So we should wait with this addition until we have more +use-cases that help us decide whether the loss in type-safety is worth +the gain in flexibility. In this context, it seems that deconstructing types is +less error-prone than deconstructing terms, so one might also +envisage a solution that allows the former but not the latter. + +## Conclusion + +Meta-programming has a reputation of being difficult and confusing. +But with explicit `Expr/Type` types and quotes and splices it can become +downright pleasant. A simple strategy first defines the underlying quoted or unquoted +values using `Expr` and `Type` and then inserts quotes and splices to make the types +line up. Phase consistency is at the same time a great guideline +where to insert a splice or a quote and a vital sanity check that +the result makes sense. 
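As a tiny illustration of that strategy (the `square` example is made up, not taken from the text above): start from the unstaged definition, then add quotes and splices until the `Expr` types line up.
```scala
  import scala.quoted._

  // 1. Unstaged version:
  def square(x: Double): Double = x * x

  // 2. Staged version: quote the body, splice the Expr-typed argument
  def squareCode(x: Expr[Double]): Expr[Double] = '{ $x * $x }
```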
\ No newline at end of file
diff --git a/docs/docs/reference/metaprogramming/macros.md b/docs/docs/reference/metaprogramming/macros.md new file mode 100644 index 000000000000..3f0a025798c4 --- /dev/null +++ b/docs/docs/reference/metaprogramming/macros.md @@ -0,0 +1,599 @@
---
layout: doc-page
title: "Macros"
---

### Macros: Quotes and Splices

Macros are built on two well-known fundamental operations: quotation and splicing. Quotation is expressed as `'{...}` for expressions and as `'[...]` for types. Splicing is expressed as `${ ... }`. Additionally, within a quote or a splice we can quote or splice identifiers directly (i.e. `'e` and `$e`). Readers may notice the resemblance of the two aforementioned syntactic schemes to the familiar string interpolation syntax.

```scala
println(s"Hello, $name, here is the result of 1 + 1 = ${1 + 1}")
```

In string interpolation we _quoted_ a string and then _spliced_ two other expressions into it. The first, `name`, is a reference to a value of type `String`, and the second is an arithmetic expression that will be _evaluated_, followed by the splicing of its string representation.

Quotes and splices in this section allow us to treat code in a similar way, effectively supporting macros. The entry point for macros is an inline method with a top-level splice. We call it top-level because it is the only occasion where we encounter a splice outside a quote (consider the compilation unit at the call site as a quote). For example, the code below presents an `inline` method `assert` which calls at compile-time a method `assertImpl` with a boolean expression tree as argument. `assertImpl` evaluates the expression and prints it again in an error message if it evaluates to `false`.

```scala
  import scala.quoted._

  inline def assert(expr: => Boolean): Unit =
    ${ assertImpl('{ expr }) }

  def assertImpl(expr: Expr[Boolean]) = '{
    if !(${ expr }) then
      throw new AssertionError(s"failed assertion: ${${ showExpr(expr) }}")
  }

  def showExpr(expr: Expr[Boolean]): Expr[String] =
    '{ "" } // Better implementation later in this document
```

If `e` is an expression, then `'{e}` represents the typed abstract syntax tree representing `e`. If `T` is a type, then `'[T]` represents the type structure representing `T`. The precise definitions of "typed abstract syntax tree" or "type structure" do not matter for now, the terms are used only to give some intuition. Conversely, `${e}` evaluates the expression `e`, which must yield a typed abstract syntax tree or type structure, and embeds the result as an expression (respectively, type) in the enclosing program.

Quotations can have spliced parts in them; in this case the embedded splices are evaluated and embedded as part of the formation of the quotation.

Quotes and splices can also be applied directly to identifiers. An identifier `$x` starting with a `$` that appears inside a quoted expression or type is treated as a splice `${x}`. Analogously, a quoted identifier `'x` that appears inside a splice is treated as a quote `'{x}`. See the Syntax section below for details.

Quotes and splices are duals of each other.
For arbitrary expressions `e` and types `T` we have:

    ${'{e}} = e
    '{${e}} = e
    ${'[T]} = T
    '[${T}] = T

### Types for Quotations

The type signatures of quotes and splices can be described using two fundamental types:

  - `Expr[T]`: abstract syntax trees representing expressions of type `T`
  - `Type[T]`: type structures representing type `T`.

Quoting takes expressions of type `T` to expressions of type `Expr[T]` and it takes types `T` to expressions of type `Type[T]`. Splicing takes expressions of type `Expr[T]` to expressions of type `T` and it takes expressions of type `Type[T]` to types `T`.

The two types can be defined in package `scala.quoted` as follows:
```scala
  package scala.quoted

  sealed abstract class Expr[T]
  sealed abstract class Type[T]
```
Both `Expr` and `Type` are abstract and sealed, so all constructors for these types are provided by the system. One way to construct values of these types is by quoting, the other is by type-specific lifting operations that will be discussed later on.

### The Phase Consistency Principle

A fundamental *phase consistency principle* (PCP) regulates accesses to free variables in quoted and spliced code:

 - _For any free variable reference `x`, the number of quoted scopes and the number of spliced scopes between the reference to `x` and the definition of `x` must be equal_.

Here, `this`-references count as free variables. On the other hand, we assume that all imports are fully expanded and that `_root_` is not a free variable. So references to global definitions are allowed everywhere.

The phase consistency principle can be motivated as follows: First, suppose the result of a program `P` is some quoted text `'{ ... x ... }` that refers to a free variable `x` in `P`. This can be represented only by referring to the original variable `x`. Hence, the result of the program will need to persist the program state itself as one of its parts. We don't want to do this, hence this situation should be made illegal. Dually, suppose a top-level part of a program is a spliced text `${ ... x ... }` that refers to a free variable `x` in `P`. This would mean that we refer during _construction_ of `P` to a value that is available only during _execution_ of `P`. This is of course impossible and therefore needs to be ruled out. Now, the small-step evaluation of a program will reduce quotes and splices in equal measure using the cancellation rules above. But it will neither create nor remove quotes or splices individually. So the PCP ensures that program elaboration will lead to neither of the two unwanted situations described above.

In what concerns the range of features it covers, this form of macros introduces a principled meta-programming framework that is quite close to the MetaML family of languages. One difference is that MetaML does not have an equivalent of the PCP: quoted code in MetaML _can_ access variables in its immediately enclosing environment, with some restrictions and caveats since such accesses involve serialization. However, this does not constitute a fundamental gain in expressiveness.

### From `Expr`s to Functions and Back

The `Expr` companion object contains an implicit `AsFunctionN` (for 0 <= N < 23) conversion that turns a tree describing a function into a function mapping trees to trees.
```scala
  object Expr {
    ...
    implied AsFunction1[T, U] for Conversion[Expr[T => U], Expr[T] => Expr[U]] ...
  }
```
This decorator gives `Expr` the `apply` operation of an applicative functor, where `Expr`s over function types can be applied to `Expr` arguments. The definition of `AsFunction1(f).apply(x)` is assumed to be functionally the same as `'{($f)($x)}`; however, it should optimize this call by returning the result of beta-reducing `f(x)` if `f` is a known lambda expression.

The `AsFunction1` decorator distributes applications of `Expr` over function arrows:
```scala
  AsFunction1(_).apply: Expr[S => T] => (Expr[S] => Expr[T])
```
Its dual, let's call it `reflect`, can be defined as follows:
```scala
  def reflect[T, U](f: Expr[T] => Expr[U]): Expr[T => U] = '{
    (x: T) => ${ f('x) }
  }
```
Note how the fundamental phase consistency principle works in two different directions here for `f` and `x`. The reference to `f` is legal because it is quoted, then spliced, whereas the reference to `x` is legal because it is spliced, then quoted.

### Types and the PCP

In principle, the phase consistency principle applies to types as well as to expressions. This might seem too restrictive. Indeed, the definition of `reflect` above is not phase correct since there is a quote but no splice between the parameter binding of `T` and its usage. But the code can be made phase correct by adding a binding of a `Type[T]` tag:
```scala
  def reflect[T, U](f: Expr[T] => Expr[U]) given (t: Type[T]): Expr[T => U] =
    '{ (x: $t) => ${ f('x) } }
```
In this version of `reflect`, the type of `x` is now the result of splicing the `Type` value `t`. This operation _is_ splice correct -- there is one quote and one splice between the use of `t` and its definition.

To avoid clutter, the Scala implementation tries to convert any phase-incorrect reference to a type `T` to a type-splice, by rewriting `T` to `${ the[Type[T]] }`. For instance, the user-level definition of `reflect`:

```scala
  def reflect[T: Type, U: Type](f: Expr[T] => Expr[U]): Expr[T => U] =
    '{ (x: T) => ${ f('x) } }
```
would be rewritten to
```scala
  def reflect[T: Type, U: Type](f: Expr[T] => Expr[U]): Expr[T => U] =
    '{ (x: ${ the[Type[T]] }) => ${ f('x) } }
```
The `the` query succeeds because there is an implied value of type `Type[T]` available (namely the given parameter corresponding to the context bound `: Type`), and the reference to that value is phase-correct. If that was not the case, the phase inconsistency for `T` would be reported as an error.

### Lifting Expressions

Consider the following implementation of a staged interpreter that implements a compiler through staging.
```scala
  import scala.quoted._

  enum Exp {
    case Num(n: Int)
    case Plus(e1: Exp, e2: Exp)
    case Var(x: String)
    case Let(x: String, e: Exp, in: Exp)
  }
```
The interpreted language consists of numbers `Num`, addition `Plus`, and variables `Var` which are bound by `Let`. Here are two sample expressions in the language:
```scala
  val exp = Plus(Plus(Num(2), Var("x")), Num(4))
  val letExp = Let("x", Num(3), exp)
```
Here's a compiler that maps an expression given in the interpreted language to quoted Scala code of type `Expr[Int]`. The compiler takes an environment that maps variable names to Scala `Expr`s.
```scala
  import implied scala.quoted._

  def compile(e: Exp, env: Map[String, Expr[Int]]): Expr[Int] = e match {
    case Num(n) =>
      n.toExpr
    case Plus(e1, e2) =>
      '{ ${ compile(e1, env) } + ${ compile(e2, env) } }
    case Var(x) =>
      env(x)
    case Let(x, e, body) =>
      '{ val y = ${ compile(e, env) }; ${ compile(body, env + (x -> 'y)) } }
  }
```
Running `compile(letExp, Map())` would yield the following Scala code:
```scala
  '{ val y = 3; (2 + y) + 4 }
```
The body of the first clause, `case Num(n) => n.toExpr`, looks suspicious. `n` is declared as an `Int`, yet it is converted to an `Expr[Int]` with `toExpr`. Shouldn't `n` be quoted? In fact this would not work since replacing `n` by `'n` in the clause would not be phase correct.

The `toExpr` extension method is defined in package `quoted`:
```scala
  package quoted

  implied LiftingOps {
    def (x: T) toExpr[T] given (ev: Liftable[T]): Expr[T] = ev.toExpr(x)
  }
```
The extension says that values of types implementing the `Liftable` type class can be converted ("lifted") to `Expr` values using `toExpr`, provided an implied import of `scala.quoted._` is in scope.

Dotty comes with implied instance definitions of `Liftable` for several types including `Boolean`, `String`, and all primitive number types. For example, `Int` values can be converted to `Expr[Int]` values by wrapping the value in a `Literal` tree node. This makes use of the underlying tree representation in the compiler for efficiency. But the `Liftable` instances are nevertheless not _magic_ in the sense that they could all be defined in a user program without knowing anything about the representation of `Expr` trees. For instance, here is a possible instance of `Liftable[Boolean]`:
```scala
  implied for Liftable[Boolean] {
    def toExpr(b: Boolean) = if (b) '{ true } else '{ false }
  }
```
Once we can lift bits, we can work our way up. For instance, here is a possible implementation of `Liftable[Int]` that does not use the underlying tree machinery:
```scala
  implied for Liftable[Int] {
    def toExpr(n: Int): Expr[Int] = n match {
      case Int.MinValue    => '{ Int.MinValue }
      case _ if n < 0      => '{ - ${ toExpr(-n) } }
      case 0               => '{ 0 }
      case _ if n % 2 == 0 => '{ ${ toExpr(n / 2) } * 2 }
      case _               => '{ ${ toExpr(n / 2) } * 2 + 1 }
    }
  }
```
Since `Liftable` is a type class, its instances can be conditional. For example, a `List` is liftable if its element type is:
```scala
  implied [T: Liftable] for Liftable[List[T]] {
    def toExpr(xs: List[T]): Expr[List[T]] = xs match {
      case head :: tail => '{ ${ toExpr(head) } :: ${ toExpr(tail) } }
      case Nil => '{ Nil: List[T] }
    }
  }
```
In the end, `Liftable` resembles very much a serialization framework. Like the latter it can be derived systematically for all collections, case classes and enums. Note also that the synthesis of _type-tag_ values of type `Type[T]` is essentially the type-level analogue of lifting.

Using lifting, we can now give the missing definition of `showExpr` in the introductory example:
```scala
  def showExpr[T](expr: Expr[T]): Expr[String] = {
    val code: String = expr.show
    code.toExpr
  }
```
That is, the `showExpr` method converts its `Expr` argument to a string (`code`), and lifts the result back to an `Expr[String]` using the `toExpr` method.

**Note**: the `toExpr` extension method can be omitted by importing an implicit conversion with `import scala.quoted.autolift._`.
+In the end, `Liftable` very much resembles a serialization
+framework. Like the latter, it can be derived systematically for all
+collections, case classes and enums. Note also that the synthesis
+of _type-tag_ values of type `Type[T]` is essentially the type-level
+analogue of lifting.
+
+Using lifting, we can now give the missing definition of `showExpr` in the introductory example:
+```scala
+  def showExpr[T](expr: Expr[T]): Expr[String] = {
+    val code: String = expr.show
+    code.toExpr
+  }
+```
+That is, the `showExpr` method converts its `Expr` argument to a string (`code`), and lifts
+the result back to an `Expr[String]` using the `toExpr` method.
+
+**Note**: the `toExpr` extension method can be omitted by importing an implicit
+conversion with `import scala.quoted.autolift._`. This declutters the code
+slightly, at the cost of a less readable _phase distinction_ between stages.
+
+### Lifting Types
+
+The previous section has shown that the metaprogramming framework has
+to be able to take a type `T` and convert it to a type tree of type
+`Type[T]` that can be reified. This means that all free variables of
+the type tree refer to types and values defined in the current stage.
+
+For a reference to a global class, this is easy: just issue the fully
+qualified name of the class. Members of reifiable types are handled by
+just reifying the containing type together with the member name. But
+what to do for references to type parameters or local type definitions
+that are not defined in the current stage? Here, we cannot construct
+the `Type[T]` tree directly, so we need to get it from a recursive
+implicit search. For instance, to implement
+```scala
+  the[Type[List[T]]]
+```
+where `T` is not defined in the current stage, we construct the type constructor
+of `List` applied to the splice of the result of searching for an implied instance for `Type[T]`:
+```scala
+  '[ List[ ${ the[Type[T]] } ] ]
+```
+This is exactly the algorithm that Scala 2 uses to search for type tags.
+In fact Scala 2's type tag feature can be understood as a more ad-hoc version of
+`quoted.Type`. As was the case for type tags, the implicit search for a `quoted.Type`
+is handled by the compiler, using the algorithm sketched above.
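+
+To see this rewriting at work, here is a sketch (our own example) of a method
+that builds a quoted empty list for an arbitrary type `T`:
+```scala
+  def emptyList[T: Type]: Expr[List[T]] = '{ List.empty[T] }
+  // The phase-incorrect reference to `T` inside the quote is rewritten to
+  // '{ List.empty[${ the[Type[T]] }] }, using the given of the context bound.
+```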
+### Relationship with Inline
+
+Seen by itself, principled metaprogramming looks more like a framework for
+runtime metaprogramming than one for compile-time metaprogramming with macros.
+But combined with Dotty’s `inline` feature it can be turned into a compile-time
+system. The idea is that macro elaboration can be understood as a combination of
+a macro library and a quoted program. For instance, here’s the `assert` macro
+again together with a program that calls `assert`.
+
+```scala
+  object Macros {
+
+    inline def assert(expr: => Boolean): Unit =
+      ${ assertImpl('expr) }
+
+    def assertImpl(expr: Expr[Boolean]) =
+      '{ if !($expr) then throw new AssertionError(s"failed assertion: ${$expr}") }
+  }
+
+  object App {
+    val program = {
+      val x = 1
+      Macros.assert(x != 0)
+    }
+  }
+```
+Inlining the `assert` function would give the following program:
+```scala
+  val program = {
+    val x = 1
+    ${ Macros.assertImpl('{ x != 0 }) }
+  }
+```
+The example is only phase correct because `Macros` is a global value and
+as such not subject to phase consistency checking. Conceptually that’s
+a bit unsatisfactory. If the PCP is so fundamental, it should be
+applicable without the global value exception. But in the example as
+given this does not hold since both `assert` and `program` call
+`assertImpl` with a splice but no quote.
+
+However, one could argue that the example is really missing
+an important aspect: The macro library has to be compiled in a phase
+prior to the program using it, but in the code above, macro
+and program are defined together. A more accurate view of
+macros would be to have the user program be in a phase after the macro
+definitions, reflecting the fact that macros have to be defined and
+compiled before they are used. Hence, conceptually the program part
+should be treated by the compiler as if it were quoted:
+```scala
+  val program = '{
+    val x = 1
+    ${ Macros.assertImpl('{ x != 0 }) }
+  }
+```
+If `program` is treated as a quoted expression, the call to
+`Macros.assertImpl` becomes phase correct even if macro library and
+program are conceptualized as local definitions.
+
+But what about the call from `assert` to `assertImpl`? Here, we need a
+tweak of the typing rules. An inline function such as `assert` that
+contains a splice operation outside an enclosing quote is called a
+_macro_. Macros are supposed to be expanded in a subsequent phase,
+i.e. in a quoted context. Therefore, they are also type checked as if
+they were in a quoted context. For instance, the definition of
+`assert` is typechecked as if it appeared inside quotes. This makes
+the call from `assert` to `assertImpl` phase-correct, even if we
+assume that both definitions are local.
+
+The `inline` modifier is used to declare a `val` that is
+either a constant or is a parameter that will be a constant when instantiated. This
+aspect is also important for macro expansion. To illustrate this,
+consider an implementation of the `power` function that makes use of a
+statically known exponent:
+```scala
+  inline def power(inline n: Int, x: Double) = ${ powerCode(n, 'x) }
+
+  private def powerCode(n: Int, x: Expr[Double]): Expr[Double] =
+    if (n == 0) '{ 1.0 }
+    else if (n == 1) x
+    else if (n % 2 == 0) '{ val y = $x * $x; ${ powerCode(n / 2, 'y) } }
+    else '{ $x * ${ powerCode(n - 1, x) } }
+```
+The reference to `n` as an argument in `${ powerCode(n, 'x) }` is not
+phase-consistent, since `n` appears in a splice without an enclosing
+quote. Normally that would be a problem because it means that we need
+the _value_ of `n` at compile time, which is not available for general
+parameters. But since `n` is an inline parameter of a macro, we know
+that at the macro’s expansion point `n` will be instantiated to a
+constant, so the value of `n` will in fact be known at this
+point. To reflect this, we loosen the phase consistency requirements
+as follows:
+
+ - If `x` is an inline value (or an inline parameter of an inline
+   function) of type `Boolean`, `Byte`, `Short`, `Int`, `Long`, `Float`,
+   `Double`, `Char` or `String`, it can be accessed in all contexts where
+   the number of splices minus the number of quotes between use and
+   definition is either 0 or 1.
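+
+With this loosened rule, a call such as `power(4, x)` is phase-correct and
+expands at compile time into straight-line code. Indicatively (a sketch of the
+expansion, modulo fresh names):
+```scala
+  power(4, x)
+  // ~> '{ val y1 = x * x; val y2 = y1 * y1; y2 }
+```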
+
+### Scope Extrusion
+
+Quotes and splices are duals as far as the PCP is concerned. But there is an
+additional restriction that needs to be imposed on splices to guarantee
+soundness: code in splices must be free of side effects. The restriction
+prevents code like this:
+
+```scala
+  var x: Expr[T] = ...
+  '{ (y: T) => ${ x = 'y; 1 } }
+```
+
+This code, if it were accepted, would _extrude_ a reference to a quoted variable
+`y` from its scope. This would subsequently allow access to a variable outside the
+scope where it is defined, which is likely problematic. The code is clearly
+phase consistent, so we cannot use PCP to rule it out. Instead we postulate a
+future effect system that can guarantee that splices are pure. In the absence of
+such a system we simply demand that spliced expressions are pure by convention,
+and allow for undefined compiler behavior if they are not. This is analogous to
+the status of pattern guards in Scala, which are also required, but not
+verified, to be pure.
+
+[Multi-Stage Programming](./staging.html) introduces one additional method,
+`run`, with which you can expand code at runtime. There is also a problem with
+the invocation of `run` in splices. Consider the following expression:
+
+```scala
+  '{ (x: Int) => ${ ('x).run; 1 } }
+```
+This is again phase correct, but will lead us into trouble. Indeed, evaluating
+the splice will reduce the expression `('x).run` to `x`. But then the result
+
+```scala
+  '{ (x: Int) => ${ x; 1 } }
+```
+
+is no longer phase correct. To prevent this soundness hole it seems easiest to
+classify `run` as a side-effecting operation. It would thus be prevented from
+appearing in splices. In a base language with side effects we'd have to do this
+anyway: since `run` runs arbitrary code it can always produce a side effect if
+the code it runs produces one.
+
+### Example Expansion
+
+Assume we have two methods: `map`, which takes an `Expr[Array[T]]` and a
+function `f`, and `sum`, which performs a sum by delegating to `map`.
+
+```scala
+object Macros {
+  def map[T](arr: Expr[Array[T]], f: Expr[T] => Expr[Unit])(implicit t: Type[T]): Expr[Unit] = '{
+    var i: Int = 0
+    while (i < ($arr).length) {
+      val element: $t = ($arr)(i)
+      ${f('element)}
+      i += 1
+    }
+  }
+
+  def sum(arr: Expr[Array[Int]]): Expr[Int] = '{
+    var sum = 0
+    ${ map(arr, x => '{sum += $x}) }
+    sum
+  }
+
+  inline def sum_m(arr: Array[Int]): Int = ${sum('arr)}
+}
+```
+
+A call to `sum_m(Array(1,2,3))` will first inline `sum_m`:
+
+```scala
+val arr: Array[Int] = Array.apply(1, [2,3 : Int]:Int*)
+${_root_.Macros.sum('arr)}
+```
+
+then it will splice `sum`:
+
+```scala
+val arr: Array[Int] = Array.apply(1, [2,3 : Int]:Int*)
+
+var sum = 0
+${ map(arr, x => '{sum += $x}) }
+sum
+```
+
+then it will inline `map`:
+
+```scala
+val arr: Array[Int] = Array.apply(1, [2,3 : Int]:Int*)
+
+var sum = 0
+val f = x => '{sum += $x}
+${ _root_.Macros.map('arr, 'f)('[Int]) }
+sum
+```
+
+then it will expand and splice `map` inside the quotes:
+
+```scala
+val arr: Array[Int] = Array.apply(1, [2,3 : Int]:Int*)
+
+var sum = 0
+val f = x => '{sum += $x}
+var i: Int = 0
+while (i < (arr).length) {
+  val element: Int = (arr)(i)
+  sum += element
+  i += 1
+}
+sum
+```
+
+Finally, after cleanups and dead-code elimination:
+```scala
+val arr: Array[Int] = Array.apply(1, [2,3 : Int]:Int*)
+var sum = 0
+var i: Int = 0
+while (i < arr.length) {
+  val element: Int = arr(i)
+  sum += element
+  i += 1
+}
+sum
+```
+
+### Relationship with Whitebox Inline
+
+[Inline](./inline.html) documents inlining. The code below introduces a whitebox
+inline method that can compute either a value of type `Int` or a value of type
+`String`, depending on its argument.
+
+```scala
+inline def defaultOf(inline str: String) <: Any = ${ defaultOfImpl(str) }
+
+def defaultOfImpl(str: String): Expr[Any] = str match {
+  case "int" => '{1}
+  case "string" => '{"a"}
+}
+
+// in a separate file
+val a: Int = defaultOf("int")
+val b: String = defaultOf("string")
+```
+
+### Let
+
+`scala.tasty.reflect.utils.TreeUtils` offers a method `let` that allows us to
+bind the `rhs` to a `val` and use it in `body`. Its definition is shown below:
+
+```scala
+def let(rhs: Term)(body: Ident => Term): Term
+```
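+
+As a sketch of its use (our own example; it assumes the `unseal`/`seal`/`cast`
+operations described in [TASTy Reflect](./tasty-reflect.html) and a `Reflection`
+context in scope), `let` can bind an argument once and reuse it in the
+generated code:
+
+```scala
+def twiceImpl(x: Expr[Int])(implicit reflect: Reflection): Expr[Int] = {
+  import reflect._
+  import reflect.util._
+  // Bind the argument to a fresh `val` and refer to it twice in the body.
+  let(x.unseal) { ref =>
+    '{ ${ ref.seal.cast[Int] } + ${ ref.seal.cast[Int] } }.unseal
+  }.seal.cast[Int]
+}
+```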
+
+[More details](./macros-spec.html)
\ No newline at end of file
diff --git a/docs/docs/reference/metaprogramming/relationship-typelevel.md b/docs/docs/reference/metaprogramming/relationship-typelevel.md
new file mode 100644
index 000000000000..9a10a75f2bcc
--- /dev/null
+++ b/docs/docs/reference/metaprogramming/relationship-typelevel.md
@@ -0,0 +1,3 @@
+## Relationship to Typelevel Programming
+
+https://github.com/lampepfl/dotty/blob/master/docs/docs/typelevel.md
\ No newline at end of file
diff --git a/docs/docs/reference/simple-smp.md b/docs/docs/reference/metaprogramming/simple-smp.md
similarity index 100%
rename from docs/docs/reference/simple-smp.md
rename to docs/docs/reference/metaprogramming/simple-smp.md
diff --git a/docs/docs/reference/metaprogramming/staging.md b/docs/docs/reference/metaprogramming/staging.md
new file mode 100644
index 000000000000..4c3681e200a7
--- /dev/null
+++ b/docs/docs/reference/metaprogramming/staging.md
@@ -0,0 +1,95 @@
+---
+layout: doc-page
+title: "Multi-Stage Programming"
+---
+
+The framework expresses compile-time metaprogramming and multi-stage
+programming at the same time. We can think of compile-time metaprogramming as a
+two-stage compilation process: the code we write in top-level splices is used
+for code generation (macros) and performs all necessary evaluations at
+compile-time, while the resulting object program is run as usual. What if we
+could synthesize code at runtime and offer one extra stage to the programmer?
+Then we can have a value of type `Expr[T]` at runtime that we can essentially
+treat as a typed syntax tree, which we can either _show_ as a string
+(pretty-print) or compile and run. If the number of quotes exceeds the number
+of splices by more than one (so that we effectively handle run-time values of
+type `Expr[Expr[T]]`, `Expr[Expr[Expr[T]]]`, ...), we talk about Multi-Stage
+Programming.
+
+The motivation behind this _paradigm_ is to let runtime information affect or
+guide code generation.
+
+Intuition: The phase in which code is run is determined by the difference
+between the number of splice scopes and quote scopes in which it is embedded.
+
+ - If there are more splices than quotes, the code is run at "compile-time" i.e.
+   as a macro. In the general case, this means running an interpreter that
+   evaluates the code, which is represented as a typed abstract syntax tree. The
+   interpreter can fall back to reflective calls when evaluating an application
+   of a previously compiled method. If the splice excess is more than one, it
+   would mean that a macro’s implementation code (as opposed to the code it
+   expands to) invokes other macros. If macros are realized by interpretation,
+   this would lead to towers of interpreters, where the first interpreter would
+   itself interpret an interpreter code that possibly interprets another
+   interpreter and so on.
+
+ - If the number of splices equals the number of quotes, the code is compiled
+   and run as usual.
+
+ - If the number of quotes exceeds the number of splices, the code is staged.
+   That is, it produces a typed abstract syntax tree or type structure at
+   run-time. A quote excess of more than one corresponds to multi-staged
+   programming.
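+
+As a small, informal illustration of this counting (a sketch; `macImpl` stands
+for some hypothetical macro implementation):
+
+```scala
+def plain(): Int = 1 + 2                 // quotes - splices = 0: compiled and run as usual
+inline def mac(): Int = ${ macImpl() }   // body of `macImpl` runs with splice excess 1: a macro
+def staged(): Expr[Int] = '{ 1 + 2 }     // quote excess 1: staged code, produced at run-time
+```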
+Providing an interpreter for the full language is quite difficult, and it is
+even more difficult to make that interpreter run efficiently. So we currently
+impose the following restrictions on the use of splices.
+
+ 1. A top-level splice must appear in an inline method (turning that method
+    into a macro).
+
+ 2. The splice must call a previously compiled
+    method passing quoted arguments, constant arguments or inline arguments.
+
+ 3. Splices inside splices (but no intervening quotes) are not allowed.
+
+
+## API
+
+The framework as discussed so far allows code to be staged, i.e. to be prepared
+to be executed at a later stage. To run that code, there is another method
+in class `Expr` called `run`. Note that `$` and `run` both map from `Expr[T]`
+to `T` but only `$` is subject to the PCP, whereas `run` is just a normal method.
+
+```scala
+sealed abstract class Expr[T] {
+  def run given (toolbox: Toolbox): T      // run staged code
+  def show given (toolbox: Toolbox): String // show staged code
+}
+```
+
+## Example
+
+Now take exactly the same example as in [Macros](./macros.html). Assume that
+instead of passing an array statically, we want to generate code at run-time
+and pass the value, also at run-time. Note how we make a future-stage function
+of type `Expr[Array[Int] => Int]` in line 4 below. Invoking `.show` or `.run`,
+we can show the code or run it, respectively.
+
+```scala
+// make available the necessary toolbox for runtime code generation
+implicit val toolbox: scala.quoted.Toolbox = scala.quoted.Toolbox.make(getClass.getClassLoader)
+
+val stagedSum: Expr[Array[Int] => Int] = '{ (arr: Array[Int]) => ${sum('arr)} }
+
+println(stagedSum.show)
+
+stagedSum.run.apply(Array(1, 2, 3)) // Returns 6
+```
+
+Note that if we need to run the `main` method (in an object called `Test`) after
+compilation, we need to make the compiler available to the runtime:
+
+```shell
+sbt:dotty> dotr -classpath out -with-compiler Test
+```
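+
+Finally, note that the generated code can depend on runtime data. As a sketch
+(reusing `powerCode` from [Macros](./macros.html) and assuming it is made
+accessible), we can specialize the `power` function to an exponent that is
+only known at run-time:
+
+```scala
+// assumes an implicit Toolbox in scope, as above
+def specializedPower(n: Int): Double => Double =
+  '{ (x: Double) => ${ powerCode(n, 'x) } }.run
+
+val cube = specializedPower(3)
+cube(2.0) // returns 8.0
+```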
diff --git a/docs/docs/reference/metaprogramming/tasty-inspect.md b/docs/docs/reference/metaprogramming/tasty-inspect.md
new file mode 100644
index 000000000000..ca6503f3525a
--- /dev/null
+++ b/docs/docs/reference/metaprogramming/tasty-inspect.md
@@ -0,0 +1,36 @@
+---
+layout: doc-page
+title: "TASTy Inspection"
+---
+
+TASTy files contain the full typed tree of a class including source positions
+and documentation. This is ideal for tools that analyze or extract semantic
+information from the code. To avoid the hassle of working directly with the
+TASTy file we provide the `TastyConsumer`, which loads the contents and exposes
+them through the TASTy Reflect API.
+
+
+## Inspecting TASTy files
+
+To inspect the TASTy Reflect trees of a TASTy file, a consumer can be defined
+in the following way.
+
+```scala
+class Consumer extends TastyConsumer {
+  final def apply(reflect: Reflection)(root: reflect.Tree): Unit = {
+    import reflect._
+    // Do something with the tree
+  }
+}
+```
+
+Then the consumer can be instantiated with the following code to get the tree
+of the class `foo.Bar`, provided the class is on the classpath.
+
+```scala
+object Test {
+  def main(args: Array[String]): Unit = {
+    ConsumeTasty("", List("foo.Bar"), new Consumer)
+  }
+}
+```
\ No newline at end of file
diff --git a/docs/docs/reference/other-new-features/tasty-reflect.md b/docs/docs/reference/metaprogramming/tasty-reflect.md
similarity index 50%
rename from docs/docs/reference/other-new-features/tasty-reflect.md
rename to docs/docs/reference/metaprogramming/tasty-reflect.md
index e796831adf7d..866b783fc7a9 100644
--- a/docs/docs/reference/other-new-features/tasty-reflect.md
+++ b/docs/docs/reference/metaprogramming/tasty-reflect.md
@@ -3,22 +3,25 @@ layout: doc-page
 title: "TASTy Reflect"
 ---
 
-TASTy Reflect enables inspection and construction of Typed Abstract Syntax Trees (TAST).
-It may be used on quoted expressions (`quoted.Expr`) and quoted types (`quoted.Type`) from [Principled Meta-programming](./principled-meta-programming.html)
-or on full TASTy files.
+TASTy Reflect enables inspection and construction of Typed Abstract Syntax Trees
+(Typed-AST). It may be used on quoted expressions (`quoted.Expr`) and quoted
+types (`quoted.Type`) from [Macros](./macros.html) or on full TASTy files.
 
-If you are writing macros, please first read [Principled Meta-programming](./principled-meta-programming.html).
+If you are writing macros, please first read [Macros](./macros.html).
 You may find all you need without using TASTy Reflect.
 
-## From quotes and splices to TASTs Reflect trees and back
+## API: From quotes and splices to TASTy Reflect trees and back
 
-`quoted.Expr` and `quoted.Type` are only meant for generative meta-programming, generation of code without inspecting the ASTs.
-[Principled Meta-programming](./principled-meta-programming.html) provides the guarantee that the generation of code will be type-correct.
-Using TASTy Reflect will break these guarantees and may fail at macro expansion time, hence additional explicit check must be done.
+With `quoted.Expr` and `quoted.Type` we can not only compute code but also
+analyze code by inspecting the ASTs. [Macros](./macros.html) provides the
+guarantee that the generation of code will be type-correct. Using TASTy Reflect
+will break these guarantees and may fail at macro expansion time, hence
+additional explicit checks must be done.
 
-
-To provide reflection capabilities in macros we need to add an implicit parameter of type `scala.tasty.Reflection` and import it in the scope where it is used.
+To provide reflection capabilities in macros we need to add an implicit
+parameter of type `scala.tasty.Reflection` and import it in the scope where it
+is used.
 
 ```scala
 import scala.quoted._
@@ -32,8 +35,13 @@ def natConstImpl(x: Expr[Int])(implicit reflection: Reflection): Expr[Int] = {
 }
 ```
 
-`import reflection._` will provide an `unseal` extension method on `quoted.Expr` and `quoted.Type` which returns a `reflection.Term` and `reflection.TypeTree` respectively.
-It will also import all extractors and methods on TASTy Reflect trees. For example the `Term.Literal(_)` extractor used below.
+### Sealing and Unsealing
+
+`import reflection._` will provide an `unseal` extension method on `quoted.Expr`
+and `quoted.Type` which returns a `reflection.Term` that represents the tree of
+the expression and a `reflection.TypeTree` that represents the tree of the type,
+respectively. It will also import all extractors and methods on TASTy Reflect
+trees. For example, the `Literal(_)` extractor used below.
 
 ```scala
 def natConstImpl(x: Expr[Int])(implicit reflection: Reflection): Expr[Int] = {
@@ -50,36 +58,87 @@ def natConstImpl(x: Expr[Int])(implicit reflection: Reflection): Expr[Int] = {
 }
 ```
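+
+For quick experimentation it can be handy to print the unsealed tree of an
+argument; `show` (described below) returns its extractor representation. A
+sketch (our own helper, not part of the API):
+
+```scala
+def debugImpl[T](x: Expr[T])(implicit reflection: Reflection): Expr[T] = {
+  import reflection._
+  println(x.unseal.show)  // print the argument's tree in extractor form
+  x
+}
+```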
-To easily know which extractors are needed, the `reflection.Term.show` method returns the string representation of the extractors.
-
-The method `reflection.Term.reify[T]` provides a way to go back to a `quoted.Expr`.
-Note that the type must be set explicitly and that if it does not conform to it an exception will be thrown.
-In the code above we could have replaced `n.toExpr` by `xTree.reify[Int]`.
+To easily know which extractors are needed, the `reflection.Term.show` method
+returns the string representation of the extractors.
+The method `reflection.Term.seal` provides a way to go back to a
+`quoted.Expr[Any]`. Note that the type is `Expr[Any]`. Consequently, the type
+must be set explicitly with a checked `cast` call. If the type does not conform
+to it an exception will be thrown. In the code above, we could have replaced
+`n.toExpr` by `xTree.seal.cast[Int]`.
 
-## Inspect a TASTy file
+### Obtaining the underlying argument
 
-To inspect the TASTy Reflect trees of a TASTy file a consumer can be defined in the following way.
+A macro can access the tree of the actual argument passed at the call site. The
+`underlyingArgument` method on a `Term` object will give access to the tree
+defining the expression passed. For example, the code below matches the method
+call passed as the argument of `macro` in the example call at the bottom.
 
 ```scala
-class Consumer extends TastyConsumer {
-  final def apply(reflect: Reflection)(root: reflect.Tree): Unit = {
-    import reflect._
-    // Do somthing with the tree
+inline def macro(param: => Boolean): Unit = ${ macroImpl('param) }
+
+def macroImpl(param: Expr[Boolean])(implicit refl: Reflection): Expr[Unit] = {
+  import refl._
+  import util._
+
+  param.unseal.underlyingArgument match {
+    case t @ Apply(Select(lhs, op), rhs :: Nil) => ...
   }
 }
+
+// example
+macro(this.checkCondition())
+```
+
+### Positions
+
+The TASTy context provides a `rootPosition` value. For macros it corresponds to
+the expansion site. Macro authors can obtain various information about that
+expansion site. The example below shows how we can obtain position information
+such as the start line, the end line or even the source code at the expansion
+point.
+
+```scala
+def macroImpl()(implicit reflect: Reflection): Expr[Unit] = {
+  import reflect.{Position => _, _}
+  val pos = rootPosition
+
+  val path = pos.sourceFile.jpath.toString
+  val start = pos.start
+  val end = pos.end
+  val startLine = pos.startLine
+  val endLine = pos.endLine
+  val startColumn = pos.startColumn
+  val endColumn = pos.endColumn
+  val sourceCode = pos.sourceCode
+  ...
 ```
 
-Then the consumer can be instantiated with the following code to get the tree of the class `foo.Bar` for a foo in the classpath.
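+
+For instance, a macro could lift the line number of its expansion site into the
+generated code. A sketch (our own example; `startLine` is zero-based):
+
+```scala
+def currentLineImpl(implicit reflect: Reflection): Expr[Int] = {
+  import reflect._
+  (rootPosition.startLine + 1).toExpr
+}
+```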
+### Tree Utilities
+
+`scala.tasty.reflect.TreeUtils` contains three facilities for tree traversal
+and transformation.
+
+`TreeAccumulator` ties the knot of a traversal. By calling `foldOver(x, tree)`
+we can dive into the `tree` node and start accumulating values of type `X`
+(e.g., of type `List[Symbol]` if we want to collect symbols). The code below,
+for example, collects the pattern variables of a tree.
 
 ```scala
-object Test {
-  def main(args: Array[String]): Unit = {
-    ConsumeTasty("", List("foo.Bar"), new Consumer)
+def collectPatternVariables(tree: Tree)(implicit ctx: Context): List[Symbol] = {
+  val acc = new TreeAccumulator[List[Symbol]] {
+    def apply(syms: List[Symbol], tree: Tree)(implicit ctx: Context) = tree match {
+      case Bind(_, body) => apply(tree.symbol :: syms, body)
+      case _ => foldOver(syms, tree)
+    }
   }
+  acc(Nil, tree)
 }
 ```
 
+A `TreeTraverser` extends a `TreeAccumulator` and performs the same traversal
+but without returning any value. Finally, a `TreeMap` performs a transformation.
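+
+For example, a traverser that prints the name of every pattern variable it
+encounters could be sketched as follows (the traversal method names used here
+are indicative and may differ in the actual API):
+
+```scala
+val printBinds = new TreeTraverser {
+  override def traverseTree(tree: Tree)(implicit ctx: Context): Unit = tree match {
+    case Bind(name, body) => println(name); traverseTree(body)
+    case _ => traverseTreeChildren(tree)
+  }
+}
+```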
 
 ## TASTy Reflect API
 
 TASTy Reflect provides the following types:
 
@@ -93,8 +152,9 @@ TASTy Reflect provides the following types:
 |               |             +- DefDef
 |               |             +- ValDef
 |               |
-|               +- Term --------+- Ident
-|                               +- Select
+|               +- Term --------+- Ref -+- Ident
+|               |                       +- Select
+|               |
 |                               +- Literal
 |                               +- This
 |                               +- New
@@ -116,6 +176,7 @@ TASTy Reflect provides the following types:
 |                               +- SelectOuter
 |                               +- While
 |
+|               +- TypeTree ----+- Inferred
 |                               +- TypeIdent
 |                               +- TypeSelect
@@ -127,21 +188,21 @@ TASTy Reflect provides the following types:
 |                               +- MatchTypeTree
 |                               +- ByName
 |                               +- LambdaTypeTree
-|                               +- Bind
+|                               +- TypeBind
+|                               +- TypeBlock
 |
 |               +- TypeBoundsTree
-|               +- SyntheticBounds
++- WildcardTypeTree
 
 +- CaseDef
 +- TypeCaseDef
 
 +- Pattern --+- Value
              +- Bind
              +- Unapply
-             +- Alternative
+             +- Alternatives
              +- TypeTest
              +- WildcardPattern
 
 +- NoPrefix
 +- TypeOrBounds -+- TypeBounds
                  |
@@ -164,29 +225,26 @@ TASTy Reflect provides the following types:
                  +- LambdaType[ParamInfo <: TypeOrBounds] -+- MethodType
                                                            +- PolyType
                                                            +- TypeLambda
 
 +- ImportSelector -+- SimpleSelector
                    +- RenameSelector
                    +- OmitSelector
 
 +- Id
 
 +- Signature
 
 +- Position
 
-+- Comment
-
 +- Constant
 
 +- Symbol --+- PackageDefSymbol
-            +- ClassDefSymbol
-            +- TypeDefSymbol
-            +- TypeBindSymbol
-            +- DefDefSymbol
-            +- ValDefSymbol
-            +- BindSymbol
+            |
+            +- TypeSymbol -+- ClassDefSymbol
+            |              +- TypeDefSymbol
+            |              +- TypeBindSymbol
+            |
+            +- TermSymbol -+- DefDefSymbol
+            |              +- ValDefSymbol
+            |              +- BindSymbol
+            |
+            +- NoSymbol
 
++- Flags
 ```
diff --git a/docs/docs/reference/metaprogramming/toc.md b/docs/docs/reference/metaprogramming/toc.md
new file mode 100644
index 000000000000..f48429814532
--- /dev/null
+++ b/docs/docs/reference/metaprogramming/toc.md
@@ -0,0 +1,50 @@
+---
+layout: doc-page
+title: "Overview"
+---
+
+The following pages introduce the redesign of metaprogramming in Scala. They
+introduce the following fundamental facilities:
+
+1. [Inline](./inline.html) `inline` is a new soft modifier that guarantees that
+   a definition will be inlined at the point of use. The primary motivation
+   behind inline is to reduce the overhead of function calls and accesses to
+   values. The expansion will be performed by the Scala compiler during the
+   `Typer` compiler phase. However, as opposed to inline in other ecosystems,
+   inlining in Scala is not merely a request to the compiler but a _command_.
+   The reason is that inlining in Scala can drive other compile-time
+   operations too, like inline pattern matching (enabling type-level
+   programming), macros (enabling compile-time, generative metaprogramming) and
+   runtime code generation (multi-stage programming). In this section we
+   describe the basic constructs, up to and including inline pattern matching.
+
+2. [Macros](./macros.html) Macros are built on two well-known fundamental
+   operations: quotation and splicing. Quotation is expressed as `'{...}` for
+   expressions and as `'[...]` for types. Splicing is expressed as `${ ... }`.
+   Whereas inlining is driven completely by language-level features of Scala
+   (pattern matching, inline vals and definitions), macros enable us to
+   synthesize and compute code at will, treating code values as first-class
+   citizens and splicing them together independently. Here, we move towards
+   _domain-specific_ metaprogramming.
+
+3. [Staging](./staging.html) Macros can be seen as a distinct phase in
+   programming: you write your regular code, which is compiled according to the
+   semantics of the language, and the macro code, which is "compiled" or
+   "generated" according to the intended purpose of the programmer. Staging (or
+   Multi-Stage Programming) takes this exact concept one step further and makes
+   code generation depend not only on static data but also on data available at
+   _runtime_. This splits the evaluation of the program into several phases, or
+   stages, hence the name of the Multi-Stage Programming paradigm.
+
+4. [TASTy Reflection](./tasty-reflect.html) With TASTy reflection we can
+   `unseal` fragments of code and analyze them with reflection over the TASTy
+   format of the code.
+
+5. [TASTy Inspection](./tasty-inspect.html) Up until now we described how we can
+   expand, calculate at compile-time, or generate programs. The Scala compiler
+   offers guarantees at the level of types that the generated programs cannot go
+   wrong. With TASTy inspection we can load compiled files and analyze their
+   typed AST structure according to the TASTy format.
+
+
diff --git a/docs/docs/reference/other-new-features/inline.md b/docs/docs/reference/other-new-features/inline.md
deleted file mode 100644
index 47f0bfa99ae8..000000000000
--- a/docs/docs/reference/other-new-features/inline.md
+++ /dev/null
@@ -1,140 +0,0 @@
----
-layout: doc-page
-title: Inline
----
-
-`inline` is a new [soft modifier](../soft-modifier.html) that guarantees that a definition will be inline at the point of use. Example:
-
-    object Config {
-      inline val logging = false
-    }
-
-    object Logger {
-
-      private var indent = 0
-
-      inline def log[T](msg: => String)(op: => T): T =
-        if (Config.logging) {
-          println(s"${" " * indent}start $msg")
-          indent += 1
-          val result = op
-          indent -= 1
-          println(s"${" " * indent}$msg = $result")
-          result
-        }
-        else op
-    }
-
-The `Config` object contains a definition of a `inline` value
-`logging`. This means that `logging` is treated as a constant value,
-equivalent to its right-hand side `false`. The right-hand side of such
-a inline val must itself be a [constant
-expression](#the-definition-of-constant-expression). Used in this way,
-`inline` is equivalent to Java and Scala 2's `final`. `final` meaning
-"constant" is still supported in Dotty, but will be phased out.
-
-The `Logger` object contains a definition of an `inline` method `log`.
-This method will always be inlined at the point of call.
-
-In the inlined code, an if-then-else with a constant condition will be
-rewritten to its then- or else-part. Here's an example:
-
-    def factorial(n: BigInt): BigInt =
-      log(s"factorial($n)") {
-        if (n == 0) 1
-        else n * factorial(n - 1)
-      }
-
-If `Config.logging == false`, this will be rewritten to
-
-    def factorial(n: BigInt): BigInt = {
-      def msg = s"factorial($n)"
-      def op =
-        if (n == 0) 1
-        else n * factorial(n - 1)
-      op
-    }
-
-Note that the arguments corresponding to the parameters `msg` and `op`
-of the inline method `log` are defined before the inlined body (which
-is in this case simply `op`).
By-name parameters of the inline method -correspond to `def` bindings whereas by-value parameters correspond to -`val` bindings. So if `log` was defined like this: - - inline def log[T](msg: String)(op: => T): T = ... - -we'd get - - val msg = s"factorial($n)" - -instead. This behavior is designed so that calling an inline method is -semantically the same as calling a normal method: By-value arguments -are evaluated before the call whereas by-name arguments are evaluated -each time they are referenced. As a consequence, it is often -preferable to make arguments of inline methods by-name in order to -avoid unnecessary evaluations. - -For instance, here is how we can define a zero-overhead `foreach` method -that translates into a straightforward while loop without any indirection or -overhead: - - inline def foreach(op: => Int => Unit): Unit = { - var i = from - while (i < end) { - op(i) - i += 1 - } - } - -By contrast, if `op` is a call-by-value parameter, it would be evaluated separately as a closure. - -Inline methods can be recursive. For instance, when called with a constant -exponent `n`, the following method for `power` will be implemented by -straight inline code without any loop or recursion. - - inline def power(x: Double, n: Int): Double = - if (n == 0) 1.0 - else if (n == 1) x - else { - val y = power(x, n / 2) - if (n % 2 == 0) y * y else y * y * x - } - - power(expr, 10) - // translates to - // - // val x = expr - // val y1 = x * x // ^2 - // val y2 = y1 * y1 // ^4 - // val y3 = y2 * x // ^5 - // y3 * y3 // ^10 - -Parameters of inline methods can be marked `inline`. This means -that actual arguments to these parameters must be constant expressions. - -### Relationship to `@inline`. - -Scala also defines a `@inline` annotation which is used as a hint -for the backend to inline. The `inline` modifier is a more powerful -option: Expansion is guaranteed instead of best effort, -it happens in the frontend instead of in the backend, and it also applies -to recursive methods. - -To cross compile between both Dotty and Scalac, we introduce a new `@forceInline` -annotation which is equivalent to the new `inline` modifier. Note that -Scala 2 ignores the `@forceInline` annotation, so one must use both -annotations to guarantee inlining for Dotty and at the same time hint inlining -for Scala 2 (i.e. `@forceInline @inline`). - -### The definition of constant expression - -Right-hand sides of inline values and of arguments for inline parameters -must be constant expressions in the sense defined by the [SLS § -6.24](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#constant-expressions), -including "platform-specific" extensions such as constant folding of -pure numeric computations. - -### Reference - -For more info, see [PR #4927](https://github.com/lampepfl/dotty/pull/4768), which explains how -inline methods can be used for typelevel programming and code specialization. diff --git a/docs/docs/reference/other-new-features/principled-meta-programming.md b/docs/docs/reference/other-new-features/principled-meta-programming.md deleted file mode 100644 index 8c9b2899da5a..000000000000 --- a/docs/docs/reference/other-new-features/principled-meta-programming.md +++ /dev/null @@ -1,842 +0,0 @@ ---- -layout: doc-page -title: "Principled Meta Programming" ---- - -Principled meta programming is a new framework for staging and for some -forms of macros. It is expressed as strongly and statically typed -code using two fundamental operations: quotations and splicing. 
A -novel aspect of the approach is that these two operations are -regulated by a phase consistency principle that treats quotes and -splices in exactly the same way. - -## Overview - -### Quotes and Splices - -Principled meta programming is built on two well-known fundamental -operations: quotation and splicing. Quotation is expressed as -`'{...}` for expressions (both forms are equivalent) and -as `'[...]` for types. Splicing is expressed as `${ ... }`. - -For example, the code below presents an inline function `assert` -which calls at compile-time a method `assertImpl` with a boolean -expression tree as argument. `assertImpl` evaluates the expression and -prints it again in an error message if it evaluates to `false`. -```scala - import scala.quoted._ - - inline def assert(expr: => Boolean): Unit = - ${ assertImpl('{ expr }) } - - def assertImpl(expr: Expr[Boolean]) = '{ - if !(${ expr }) then - throw new AssertionError(s"failed assertion: ${${ showExpr(expr) }}") - } - - def showExpr(expr: Expr[Boolean]): Expr[String] = - '{ "" } // Better implementation later in this document -``` -If `e` is an expression, then `'{e}` represent the typed -abstract syntax tree representing `e`. If `T` is a type, then `'[T]` -represents the type structure representing `T`. The precise -definitions of "typed abstract syntax tree" or "type-structure" do not -matter for now, the terms are used only to give some -intuition. Conversely, `${e}` evaluates the expression `e`, which must -yield a typed abstract syntax tree or type structure, and embeds the -result as an expression (respectively, type) in the enclosing program. - -Quotations can have spliced parts in them; in this case the embedded -splices are evaluated and embedded as part of the formation of the -quotation. - -Quotes and splices can also be applied directly to identifiers. An identifier -`$x` starting with a `$` that appears inside a quoted expression or type is treated as a -splice `${x}`. Analogously, an quoted identifier 'x that appears inside a splice -is treated as a quote `'{x}`. See the Syntax section below for details. - -Quotes and splices are duals of each other. For arbitrary -expressions `e` and types `T` we have: - - ${'{e}} = e - '{${e}} = e - ${'[T]} = T - '[${T}] = T - -### Types for Quotations - -The type signatures of quotes and splices can be described using -two fundamental types: - - - `Expr[T]`: abstract syntax trees representing expressions of type `T` - - `Type[T]`: type structures representing type `T`. - -Quoting takes expressions of type `T` to expressions of type `Expr[T]` -and it takes types `T` to expressions of type `Type[T]`. Splicing -takes expressions of type `Expr[T]` to expressions of type `T` and it -takes expressions of type `Type[T]` to types `T`. - -The two types can be defined in package `scala.quoted` as follows: -```scala - package scala.quoted - - sealed abstract class Expr[T] - sealed abstract class Type[T] -``` -Both `Expr` and `Type` are abstract and sealed, so all constructors for -these types are provided by the system. One way to construct values of -these types is by quoting, the other is by type-specific lifting -operations that will be discussed later on. - -### The Phase Consistency Principle - -A fundamental *phase consistency principle* (PCP) regulates accesses -to free variables in quoted and spliced code: - - - _For any free variable reference `x`, the number of quoted scopes and the number of spliced scopes between the reference to `x` and the definition of `x` must be equal_. 
- -Here, `this`-references count as free variables. On the other -hand, we assume that all imports are fully expanded and that `_root_` is -not a free variable. So references to global definitions are -allowed everywhere. - -The phase consistency principle can be motivated as follows: First, -suppose the result of a program `P` is some quoted text `'{ ... x -... }` that refers to a free variable `x` in `P` This can be -represented only by referring to the original variable `x`. Hence, the -result of the program will need to persist the program state itself as -one of its parts. We don’t want to do this, hence this situation -should be made illegal. Dually, suppose a top-level part of a program -is a spliced text `${ ... x ... }` that refers to a free variable `x` -in `P`. This would mean that we refer during _construction_ of `P` to -a value that is available only during _execution_ of `P`. This is of -course impossible and therefore needs to be ruled out. Now, the -small-step evaluation of a program will reduce quotes and splices in -equal measure using the cancellation rules above. But it will neither -create nor remove quotes or splices individually. So the PCP ensures -that program elaboration will lead to neither of the two unwanted -situations described above. - -In what concerns the range of features it covers, principled meta programming is -quite close to the MetaML family of languages. One difference is that MetaML does -not have an equivalent of the PCP - quoted code in MetaML _can_ access -variables in its immediately enclosing environment, with some -restrictions and caveats since such accesses involve serialization. -However, this does not constitute a fundamental gain in -expressiveness. Principled meta programming allows to define a `Liftable` -type-class which can implement such accesses within the confines of the -PCP. This is explained further in a later section. - -## Details - -### From `Expr`s to Functions and Back - -The `Expr` companion object contains an implicit `AsFunctionN` (for 0 <= N < 23) conversion that turns a tree -describing a function into a function mapping trees to trees. -```scala - object Expr { - ... - implied AsFunction1[T, U] for Conversion[Expr[T => U], Expr[T] => Expr[U]] ... - } -``` -This decorator gives `Expr` the `apply` operation of an applicative functor, where `Expr`s -over function types can be applied to `Expr` arguments. The definition -of `AsFunction1(f).apply(x)` is assumed to be functionally the same as -`'{($f)($x)}`, however it should optimize this call by returning the -result of beta-reducing `f(x)` if `f` is a known lambda expression. - -The `AsFunction1` decorator distributes applications of `Expr` over function -arrows: -```scala - AsFunction1(_).apply: Expr[S => T] => (Expr[S] => Expr[T]) -``` -Its dual, let’s call it `reflect`, can be defined as follows: -```scala - def reflect[T, U](f: Expr[T] => Expr[U]): Expr[T => U] = '{ - (x: T) => ${ f('x) } - } -``` -Note how the fundamental phase consistency principle works in two -different directions here for `f` and `x`. The reference to `f` is -legal because it is quoted, then spliced, whereas the reference to `x` -is legal because it is spliced, then quoted. - -### Types and the PCP - -In principle, The phase consistency principle applies to types as well -as for expressions. This might seem too restrictive. Indeed, the -definition of `reflect` above is not phase correct since there is a -quote but no splice between the parameter binding of `T` and its -usage. 
But the code can be made phase correct by adding a binding -of a `Type[T]` tag: -```scala - def reflect[T, U](f: Expr[T] => Expr[U]) given (t: Type[T]): Expr[T => U] = - '{ (x: $t) => ${ f('x) } } -``` -In this version of `reflect`, the type of `x` is now the result of -splicing the `Type` value `t`. This operation _is_ splice correct -- there -is one quote and one splice between the use of `t` and its definition. - -To avoid clutter, the Scala implementation tries to convert any phase-incorrect -reference to a type `T` to a type-splice, by rewriting `T` to `${ the[Type[T]] }`. -For instance, the user-level definition of `reflect`: -```scala - def reflect[T: Type, U](f: Expr[T] => Expr[U]): Expr[T => U] = - '{ (x: T) => ${ f('x) } } -``` -would be rewritten to -```scala - def reflect[T: Type, U](f: Expr[T] => Expr[U]): Expr[T => U] = - '{ (x: ${ the[Type[T]] }) => ${ f('x) } } -``` -The `the` query succeeds because there is an implied value of -type `Type[T]` available (namely the given parameter corresponding -to the context bound `: Type`), and the reference to that value is -phase-correct. If that was not the case, the phase inconsistency for -`T` would be reported as an error. - -### Lifting Types - -The previous section has shown that the metaprogramming framework has -to be able to take a type `T` and convert it to a type tree of type -`Type[T]` that can be reified. This means that all free variables of -the type tree refer to types and values defined in the current stage. - -For a reference to a global class, this is easy: Just issue the fully -qualified name of the class. Members of reifiable types are handled by -just reifying the containing type together with the member name. But -what to do for references to type parameters or local type definitions -that are not defined in the current stage? Here, we cannot construct -the `Type[T]` tree directly, so we need to get it from a recursive -implicit search. For instance, to implement -```scala - the[Type[List[T]]] -``` -where `T` is not defined in the current stage, we construct the type constructor -of `List` applied to the splice of the result of searching for an implied instance for `Type[T]`: -```scala - '[ List[ ${ the[Type[T]] } ] ] -``` -This is exactly the algorithm that Scala 2 uses to search for type tags. -In fact Scala 2's type tag feature can be understood as a more ad-hoc version of -`quoted.Type`. As was the case for type tags, the implicit search for a `quoted.Type` -is handled by the compiler, using the algorithm sketched above. - -### Example Expansion - -Assume an `Array` class with an inline `map` method that forwards to a macro implementation. 
-```scala - class Array[T] { - inline def map[U](f: T => U): Array[U] = ${ Macros.mapImpl[T, U]('[U], 'this, 'f) } - } -``` -Here’s the definition of the `mapImpl` macro, which takes quoted types and expressions to a quoted expression: -```scala - object Macros { - - def mapImpl[T, U](u: Type[U], arr: Expr[Array[T]], op: Expr[T => U]): Expr[Array[U]] = '{ - var i = 0 - val xs = $arr - var len = xs.length - val ys = new Array[$u](len) - while (i < len) { - ys(i) = ${ op('{ xs(i) }) } - i += 1 - } - ys - } - } -``` -Here’s an application of `map` and how it rewrites to optimized code: -```scala - genSeq[Int]().map(x => x + 1) -``` -==> (inline) -```scala - val _this: Seq[Int] = genSeq[Int]() - val f: Int => Int = x => x + 1 - ${ _root_.Macros.mapImpl[Int, Int]('[Int], '_this, 'f) } -``` -==> (splice) -```scala - val _this: Seq[Int] = genSeq[Int]() - val f: Int => Int = x => x + 1 - - { - var i = 0 - val xs = ${ '_this } - var len = xs.length - val ys = new Array[${ '[Int] }](len) - while (i < len) { - ys(i) = ${ ('f)('{ xs(i) }) } - i += 1 - } - ys - } -``` -==> (expand and splice inside quotes) -```scala - val _this: Seq[Int] = genSeq[Int]() - val f: Int => Int = x => x + 1 - - { - var i = 0 - val xs = _this - var len = xs.length - val ys = new Array[Int](len) - while (i < len) { - ys(i) = xs(i) + 1 - i += 1 - } - ys - } -``` -==> (elim dead code) -```scala - val _this: Seq[Int] = genSeq[Int]() - - { - var i = 0 - val xs = _this - var len = xs.length - val ys = new Array[Int](len) - while (i < len) { - ys(i) = xs(i) + 1 - i += 1 - } - ys - } -``` -### Relationship with Inline and Macros - -Seen by itself, principled meta-programming looks more like a -framework for staging than one for compile-time meta programming with -macros. But combined with Dotty’s `inline` feature it can be turned into a -compile-time system. The idea is that macro elaboration can be -understood as a combination of a macro library and a quoted -program. For instance, here’s the `assert` macro again together with a -program that calls `assert`. -```scala - object Macros { - - inline def assert(expr: => Boolean): Unit = - ${ assertImpl('expr) } - - def assertImpl(expr: Expr[Boolean]) = - '{ if !($expr) then throw new AssertionError(s"failed assertion: ${$expr}") } - } - - object App { - val program = { - val x = 1 - Macros.assert(x != 0) - } - } -``` -Inlining the `assert` function would give the following program: -```scala - val program = { - val x = 1 - ${ Macros.assertImpl('{ x != 0) } } - } -``` -The example is only phase correct because Macros is a global value and -as such not subject to phase consistency checking. Conceptually that’s -a bit unsatisfactory. If the PCP is so fundamental, it should be -applicable without the global value exception. But in the example as -given this does not hold since both `assert` and `program` call -`assertImpl` with a splice but no quote. - -However, one could argue that the example is really missing -an important aspect: The macro library has to be compiled in a phase -prior to the program using it, but in the code above, macro -and program are defined together. A more accurate view of -macros would be to have the user program be in a phase after the macro -definitions, reflecting the fact that macros have to be defined and -compiled before they are used. 
Hence, conceptually the program part -should be treated by the compiler as if it was quoted: -```scala - val program = '{ - val x = 1 - ${ Macros.assertImpl('{ x != 0 }) } - } -``` -If `program` is treated as a quoted expression, the call to -`Macro.assertImpl` becomes phase correct even if macro library and -program are conceptualized as local definitions. - -But what about the call from `assert` to `assertImpl`? Here, we need a -tweak of the typing rules. An inline function such as `assert` that -contains a splice operation outside an enclosing quote is called a -_macro_. Macros are supposed to be expanded in a subsequent phase, -i.e. in a quoted context. Therefore, they are also type checked as if -they were in a quoted context. For instance, the definition of -`assert` is typechecked as if it appeared inside quotes. This makes -the call from `assert` to `assertImpl` phase-correct, even if we -assume that both definitions are local. - -The `inline` modifier is used to declare a `val` that is -either a constant or is a parameter that will be a constant when instantiated. This -aspect is also important for macro expansion. To illustrate this, -consider an implementation of the `power` function that makes use of a -statically known exponent: -```scala - inline def power(inline n: Int, x: Double) = ${ powerCode(n, 'x) } - - private def powerCode(n: Int, x: Expr[Double]): Expr[Double] = - if (n == 0) '{ 1.0 } - else if (n == 1) x - else if (n % 2 == 0) '{ val y = $x * $x; ${ powerCode(n / 2, 'y) } } - else '{ $x * ${ powerCode(n - 1, x) } } -``` -The reference to `n` as an argument in `${ powerCode(n, 'x) }` is not -phase-consistent, since `n` appears in a splice without an enclosing -quote. Normally that would be a problem because it means that we need -the _value_ of `n` at compile time, which is not available for general -parameters. But since `n` is an inline parameter of a macro, we know -that at the macro’s expansion point `n` will be instantiated to a -constant, so the value of `n` will in fact be known at this -point. To reflect this, we loosen the phase consistency requirements -as follows: - - - If `x` is a inline value (or a inline parameter of an inline - function) of type Boolean, Byte, Short, Int, Long, Float, Double, - Char or String, it can be accessed in all contexts where the number - of splices minus the number of quotes between use and definition - is either 0 or 1. - -### Relationship with Staging - -The framework expresses at the same time compile-time meta-programming -and staging. The phase in which code is run is determined by the -difference between the number of splice scopes and quote scopes in -which it is embedded. - - - If there are more splices than quotes, the code is run at - "compile-time" i.e. as a macro. In the general case, this means - running an interpreter that evaluates the code, which is - represented as a typed abstract syntax tree. The interpreter can - fall back to reflective calls when evaluating an application of a - previously compiled method. If the splice excess is more than one, - it would mean that a macro’s implementation code (as opposed to the - code it expands to) invokes other macros. If macros are realized by - interpretation, this would lead to towers of interpreters, where - the first interpreter would itself interpret an interpreter code - that possibly interprets another interpreter and so on. - - - If the number of splices equals the number of quotes, the code is - compiled and run as usual. 
- - - If the number of quotes exceeds the number of splices, the code is - staged. That is, it produces a typed abstract syntax tree or type - structure at run-time. A quote excess of more than one corresponds - to multi-staged programming. - -Providing an interpreter for the full language is quite difficult, and -it is even more difficult to make that interpreter run efficiently. So -we currently impose the following restrictions on the use of splices. - - 1. A top-level splice must appear in an inline method (turning that method - into a macro) - - 2. The splice must call a previously compiled - method passing quoted arguments, constant arguments or inline arguments. - - 3. Splices inside splices (but no intervening quotes) are not allowed. - - 4. A macro method is effectively final and it may override no other method. - -The framework as discussed so far allows code to be staged, i.e. be prepared -to be executed at a later stage. To run that code, there is another method -in class `Expr` called `run`. Note that `$` and `run` both map from `Expr[T]` -to `T` but only `$` is subject to the PCP, whereas `run` is just a normal method. -```scala - sealed abstract class Expr[T] { - def run given (toolbox: Toolbox): T // run staged code - def show given (toolbox: Toolbox): String // show staged code - } -``` - -### Limitations to Splicing - -Quotes and splices are duals as far as the PCP is concerned. But there is an additional -restriction that needs to be imposed on splices to guarantee soundness: -code in splices must be free of side effects. The restriction prevents code like this: -```scala - var x: Expr[T] - '{ (y: T) => ${ x = 'y; 1 } } -``` -This code, if it was accepted, would "extrude" a reference to a quoted variable `y` from its scope. -This means we an subsequently access a variable outside the scope where it is defined, which is -likely problematic. The code is clearly phase consistent, so we cannot use PCP to -rule it out. Instead we postulate a future effect system that can guarantee that splices -are pure. In the absence of such a system we simply demand that spliced expressions are -pure by convention, and allow for undefined compiler behavior if they are not. This is analogous -to the status of pattern guards in Scala, which are also required, but not verified, to be pure. - -There is also a problem with `run` in splices. Consider the following expression: -```scala - '{ (x: Int) => ${ ('x).run; 1 } } -``` -This is again phase correct, but will lead us into trouble. Indeed, evaluating the splice will reduce the -expression `('x).run` to `x`. But then the result -```scala - '{ (x: Int) => ${ x; 1 } } -``` -is no longer phase correct. To prevent this soundness hole it seems easiest to classify `run` as a side-effecting -operation. It would thus be prevented from appearing in splices. In a base language with side-effects we'd have to -do this anyway: Since `run` runs arbitrary code it can always produce a side effect if the code it runs produces one. - -### The `Liftable` type-class - -Consider the following implementation of a staged interpreter that implements -a compiler through staging. -```scala - import scala.quoted._ - - enum Exp { - case Num(n: Int) - case Plus(e1: Exp, e2: Exp) - case Var(x: String) - case Let(x: String, e: Exp, in: Exp) - } -``` -The interpreted language consists of numbers `Num`, addition `Plus`, and variables -`Var` which are bound by `Let`. 
Here are two sample expressions in the language: -```scala - val exp = Plus(Plus(Num(2), Var("x")), Num(4)) - val letExp = Let("x", Num(3), exp) -``` -Here’s a compiler that maps an expression given in the interpreted -language to quoted Scala code of type `Expr[Int]`. -The compiler takes an environment that maps variable names to Scala `Expr`s. -```scala - import implied scala.quoted._ - - def compile(e: Exp, env: Map[String, Expr[Int]]): Expr[Int] = e match { - case Num(n) => - n.toExpr - case Plus(e1, e2) => - '{ ${ compile(e1, env) } + ${ compile(e2, env) } } - case Var(x) => - env(x) - case Let(x, e, body) => - '{ val y = ${ compile(e, env) }; ${ compile(body, env + (x -> 'y)) } } - } -``` -Running `compile(letExp, Map())` would yield the following Scala code: -```scala - '{ val y = 3; (2 + y) + 4 } -``` -The body of the first clause, `case Num(n) => n.toExpr`, looks suspicious. `n` -is declared as an `Int`, yet it is converted to an `Expr[Int]` with `toExpr`. -Shouldn’t `n` be quoted? In fact this would not -work since replacing `n` by `'n` in the clause would not be phase -correct. - -The `toExpr` extension method is defined in package `quoted`: -```scala - package quoted - - implied LiftingOps { - def (x: T) toExpr[T] given (ev: Liftable[T]): Expr[T] = ev.toExpr(x) - } -``` -The extension says that values of types implementing the `Liftable` type class can be -converted ("lifted") to `Expr` values using `toExpr`, provided an implied import -of `scala.quoted._` is in scope. - -Dotty comes with implied instance definitions of `Liftable` for -several types including `Boolean`, `String`, and all primitive number -types. For example, `Int` values can be converted to `Expr[Int]` -values by wrapping the value in a `Literal` tree node. This makes use -of the underlying tree representation in the compiler for -efficiency. But the `Liftable` instances are nevertheless not "magic" -in the sense that they could all be defined in a user program without -knowing anything about the representation of `Expr` trees. For -instance, here is a possible instance of `Liftable[Boolean]`: -```scala - implied for Liftable[Boolean] { - def toExpr(b: Boolean) = if (b) '{ true } else '{ false } - } -``` -Once we can lift bits, we can work our way up. For instance, here is a -possible implementation of `Liftable[Int]` that does not use the underlying -tree machinery: -```scala - implied for Liftable[Int] { - def toExpr(n: Int): Expr[Int] = n match { - case Int.MinValue => '{ Int.MinValue } - case _ if n < 0 => '{ - ${ toExpr(n) } } - case 0 => '{ 0 } - case _ if n % 2 == 0 => '{ ${ toExpr(n / 2) } * 2 } - case _ => '{ ${ toExpr(n / 2) } * 2 + 1 } - } - } -``` -Since `Liftable` is a type class, its instances can be conditional. For example, -a `List` is liftable if its element type is: -```scala - implied [T: Liftable] for Liftable[List[T]] { - def toExpr(xs: List[T]): Expr[List[T]] = xs match { - case x :: xs1 => '{ ${ toExpr(x) } :: ${ toExpr(xs1) } } - case Nil => '{ Nil: List[T] } - } - } -``` -In the end, `Liftable` resembles very much a serialization -framework. Like the latter it can be derived systematically for all -collections, case classes and enums. Note also that the synthesis -of "type-tag" values of type `Type[T]` is essentially the type-level -analogue of lifting. 
-
-Using lifting, we can now give the missing definition of `showExpr` in the introductory example:
-```scala
-  def showExpr[T](expr: Expr[T]): Expr[String] = {
-    val code = expr.show
-    code.toExpr
-  }
-```
-That is, the `showExpr` method converts its `Expr` argument to a string (`code`), and lifts
-the result back to an `Expr[String]` using the `toExpr` wrapper.
-
-**Note**: the `toExpr` extension method can be omitted by importing an implicit
-conversion with `import scala.quoted.autolift._`. This lets the programmer
-declutter the code slightly, at the cost of a readable _phase distinction_
-between stages.
-
-
-## Implementation
-
-### Syntax
-
-Compared to the [Dotty reference grammar](../../internals/syntax.md)
-there are the following syntax changes:
-
-    SimpleExpr      ::=  ...
-                      |  ‘'’ ‘{’ Block ‘}’
-                      |  ‘'’ ‘[’ Type ‘]’
-                      |  ‘$’ ‘{’ Block ‘}’
-    SimpleType      ::=  ...
-                      |  ‘$’ ‘{’ Block ‘}’
-
-In addition, an identifier `$x` starting with a `$` that appears inside
-a quoted expression or type is treated as a splice `${x}`, and a quoted
-identifier `'x` that appears inside a splice is treated as a quote `'{x}`.
-
-### Implementation in `dotc`
-
-Quotes and splices are primitive forms in the generated abstract
-syntax trees. They are eliminated in an expansion phase
-`Staging`. This phase runs after typing and pickling.
-
-Macro-expansion works outside-in. If the outermost scope is a splice,
-the spliced AST will be evaluated in an interpreter. A call to a
-previously compiled method can be implemented as a reflective call to
-that method. With the restrictions on splices that are currently in
-place, that’s all that’s needed. We might allow more interpretation in
-splices in the future, which would allow us to loosen the
-restriction. Quotes in spliced, interpreted code are kept as they
-are, after splices nested in the quotes are expanded.
-
-If the outermost scope is a quote, we need to generate code that
-constructs the quoted tree at run-time. We implement this by
-serializing the tree as a Tasty structure, which is stored
-in a string literal. At run-time, an unpickler method is called to
-deserialize the string into a tree.
-
-Splices inside quoted code insert the spliced tree as is, after
-expanding any quotes in the spliced code recursively.
-
-## Formalization
-
-The phase consistency principle can be formalized in a calculus that
-extends simply-typed lambda calculus with quotes and splices.
-
-### Syntax
-
-The syntax of terms, values, and types is given as follows:
-
-    Terms         t  ::=  x                 variable
-                          (x: T) => t       lambda
-                          t t               application
-                          't                quote
-                          $t                splice
-
-    Values        v  ::=  (x: T) => t       lambda
-                          'u                quote
-
-    Simple terms  u  ::=  x  |  (x: T) => u  |  u u  |  't
-
-    Types         T  ::=  A                 base type
-                          T -> T            function type
-                          expr T            quoted
-
-Typing rules are formulated using a stack of environments
-`Es`. Individual environments `E` consist as usual of variable
-bindings `x: T`. Environments can be combined using the two
-combinators `'` and `$`.
-
-    Environment   E  ::=  ()                empty
-                          E, x: T
-
-    Env. stack    Es ::=  ()                empty
-                          E                 simple
-                          Es * Es           combined
-
-    Separator     *  ::=  '
-                          $
-
-The two environment combinators are both associative with left and
-right identity `()`.
-
-### Operational semantics
-
-We define a small-step reduction relation `-->` with the following rules:
-
-    ((x: T) => t) v  -->  [x := v]t
-
-    ${'u}  -->  u
-
-       t1  -->  t2
-    -----------------
-    e[t1]  -->  e[t2]
-
-The first rule is standard call-by-value beta-reduction. The second
-rule says that splice and quotes cancel each other out.
-The third rule
-is a context rule; it says that reduction is allowed in the hole `[ ]`
-position of an evaluation context. Evaluation contexts `e` and
-splice evaluation contexts `e_s` are defined syntactically as follows:
-
-    Eval context    e    ::=  [ ]  |  e t  |  v e  |  'e_s[${e}]
-    Splice context  e_s  ::=  [ ]  |  (x: T) => e_s  |  e_s t  |  u e_s
-
-### Typing rules
-
-Typing judgments are of the form `Es |- t: T`. There are two
-substructural rules which express the fact that quotes and splices
-cancel each other out:
-
-      Es1 * Es2 |- t: T
-    ---------------------------
-    Es1 $ E1 ' E2 * Es2 |- t: T
-
-
-      Es1 * Es2 |- t: T
-    ---------------------------
-    Es1 ' E1 $ E2 * Es2 |- t: T
-
-The lambda calculus fragment of the rules is standard, except that we
-use a stack of environments. The rules only interact with the topmost
-environment of the stack.
-
-     x: T in E
-    --------------
-    Es * E |- x: T
-
-
-        Es * E, x: T1 |- t: T2
-    --------------------------------
-    Es * E |- (x: T1) => t: T1 -> T2
-
-
-    Es |- t1: T2 -> T   Es |- t2: T2
-    --------------------------------
-    Es |- t1 t2: T
-
-The rules for quotes and splices map between `expr T` and `T` by trading `'` and `$` between
-environments and terms.
-
-    Es $ () |- t: expr T
-    --------------------
-    Es |- $t: T
-
-
-    Es ' () |- t: T
-    ----------------
-    Es |- 't: expr T
-
-The meta theory of a slightly simplified 2-stage variant of this calculus
-is studied [separately](../simple-smp.md).
-
-## Going Further
-
-The meta-programming framework as presented and currently implemented is quite restrictive
-in that it does not allow for the inspection of quoted expressions and
-types. It’s possible to work around this by providing all necessary
-information as normal, unquoted inline parameters. But we would gain
-more flexibility by allowing for the inspection of quoted code with
-pattern matching. This opens new possibilities. For instance, here is a
-version of `power` that generates the multiplications directly if the
-exponent is statically known and falls back to the dynamic
-implementation of power otherwise.
-```scala
-  inline def power(n: Int, x: Double): Double = ${
-    'n match {
-      case Constant(n1) => powerCode(n1, 'x)
-      case _ => '{ dynamicPower(n, x) }
-    }
-  }
-
-  private def dynamicPower(n: Int, x: Double): Double =
-    if (n == 0) 1.0
-    else if (n % 2 == 0) dynamicPower(n / 2, x * x)
-    else x * dynamicPower(n - 1, x)
-```
-This assumes a `Constant` extractor that maps tree nodes representing
-constants to their values.
-
-With the right extractors, the "AsFunction" conversion
-that maps expressions over functions to functions over expressions can
-be implemented in user code:
-```scala
-  implied AsFunction1[T, U] for Conversion[Expr[T => U], Expr[T] => Expr[U]] {
-    def apply(f: Expr[T => U]): Expr[T] => Expr[U] =
-      (x: Expr[T]) => f match {
-        case Lambda(g) => g(x)
-        case _ => '{ ($f)($x) }
-      }
-  }
-```
-This assumes an extractor
-```scala
-  object Lambda {
-    def unapply[T, U](x: Expr[T => U]): Option[Expr[T] => Expr[U]]
-  }
-```
-Once we allow inspection of code via extractors, it’s tempting to also
-add constructors that create typed trees directly without going
-through quotes. Most likely, those constructors would work over `Expr`
-types which lack a known type argument.
-For instance, an `Apply`
-constructor could be typed as follows:
-```scala
-  def Apply(fn: Expr[_], args: List[Expr[_]]): Expr[_]
-```
-This would allow constructing applications from lists of arguments
-without having to match the arguments one-by-one with the
-corresponding formal parameter types of the function. We then need,
-at the end, a method to convert an `Expr[_]` to an `Expr[T]` where `T` is
-given from the outside. E.g. if `code` yields an `Expr[_]`, then
-`code.atType[T]` yields an `Expr[T]`. The `atType` method has to be
-implemented as a primitive; it would check that the computed type
-structure of `Expr` is a subtype of the type structure representing
-`T`.
-
-Before going down that route, we should evaluate in detail the tradeoffs it
-presents. Constructing trees that are only verified _a posteriori_
-to be type correct loses a lot of guidance for constructing the right
-trees. So we should hold off on this addition until we have more
-use cases that help us decide whether the loss in type safety is worth
-the gain in flexibility. In this context, it seems that deconstructing types is
-less error-prone than deconstructing terms, so one might also
-envisage a solution that allows the former but not the latter.
-
-## Conclusion
-
-Meta-programming has a reputation for being difficult and confusing.
-But with explicit `Expr/Type` types and quotes and splices it can become
-downright pleasant. A simple strategy first defines the underlying quoted or unquoted
-values using `Expr` and `Type` and then inserts quotes and splices to make the types
-line up. Phase consistency is at the same time a great guideline for
-where to insert a splice or a quote and a vital sanity check that
-the result makes sense.
diff --git a/docs/docs/reference/overview-old.md b/docs/docs/reference/overview-old.md
deleted file mode 100644
index d2e159075cd4..000000000000
--- a/docs/docs/reference/overview-old.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-layout: doc-page
-title: "Overview"
----
-
-This section gives an overview of the most important language additions in Dotty.
-It classifies features into eight groups: (1) essential foundations, (2) simplifications,
-(3) restrictions, (4) dropped features, (5) changed features, (6) new features,
-(7) features oriented towards meta-programming with the aim of replacing existing macros,
-and (8) changes to type checking and inference.
-
-The new features address four major concerns:
-
- - [Consistency](http://dotty.epfl.ch/docs/reference/overview.html#consistency) - improve orthogonality and eliminate restrictions.
 - [Safety](http://dotty.epfl.ch/docs/reference/overview.html#safety) - enable precise domain modeling and safe refactoring.
 - [Ergonomics](http://dotty.epfl.ch/docs/reference/overview.html#ergonomics) - support readable and concise code.
 - [Performance](http://dotty.epfl.ch/docs/reference/overview.html#performance) - remove performance penalties for high-level code.
-
-Scala 3 also drops a number of features that were used rarely, or where experience showed
-that they tended to cause problems. These are listed separately in the [Dropped Features](http://dotty.epfl.ch/docs) section.
-
-Another important set of changes concerns meta programming and generative programming.
-So far these have relied on a [macro system](https://docs.scala-lang.org/overviews/macros/overview.html) that had experimental status.
-This macro system will be replaced with a different solution that extends [principled meta programming](http://dotty.epfl.ch/docs/reference/other-new-features/principled-meta-programming.html) and [inline](http://dotty.epfl.ch/docs/reference/other-new-features/inline.html) definitions with some reflective capabilities. The current state of the full design and its ramifications for generative programming will be described elsewhere.
-
-
-## Consistency
-
-The primary goal of the language constructs in this section is to make the language more consistent, both internally, and in relationship to its [foundations](http://www.scala-lang.org/blog/2016/02/03/essence-of-scala.html).
-
- - [Intersection types](http://dotty.epfl.ch/docs/reference/new-types/intersection-types.html) `A & B`
-
-   They replace compound types `A with B` (the old syntax is kept for the moment but will
-   be deprecated in the future). Intersection types are one of the core features of DOT. They
-   are commutative: `A & B` and `B & A` represent the same type.
-
- - [Context query types](http://dotty.epfl.ch/docs/reference/contextual/query-types.html) `given A => B`.
-
-   Methods and lambdas can have implicit parameters, so it's natural to extend the
-   same property to function types. Context query types help ergonomics and performance
-   as well. They can replace many uses of monads, offering better composability and an order of magnitude improvement in runtime speed.
-
- - [Dependent function types](http://dotty.epfl.ch/docs/reference/new-types/dependent-function-types.html) `(x: T) => x.S`.
-
-   The result type of a method can refer to its parameters. We now extend the same capability
-   to the result type of a function.
-
- - [Trait parameters](http://dotty.epfl.ch/docs/reference/other-new-features/trait-parameters.html) `trait T(x: S)`
-
-   Traits can now have value parameters, just like classes do. This replaces the more complex [early initializer](http://dotty.epfl.ch/docs/reference/dropped-features/early-initializers.html) syntax.
-
- - Generic tuples
-
-   ([Pending](https://github.com/lampepfl/dotty/pull/2199)) Tuples with arbitrary numbers of elements are treated as sequences of nested pairs. E.g. `(a, b, c)` is shorthand for `(a, (b, (c, ())))`. This lets us drop the current limit of 22 for maximal tuple length, and it allows generic programs over tuples analogous to what is currently done for `HList`.
-
-
-## Safety
-
-Listed in this section are new language constructs that help precise, typechecked domain modeling and that improve the reliability of refactorings.
-
- - [Union types](http://dotty.epfl.ch/docs/reference/new-types/union-types.html) `A | B`
-
-   Union types give fine-grained control over the possible values of a type.
-   A union type `A | B` states that a value can be an `A` or a `B` without having
-   to widen to a common supertype of `A` and `B`. Union types thus enable more
-   precise domain modeling. They are also very useful for interoperating with
-   JavaScript libraries and JSON protocols.
-
- - [Multiversal Equality](http://dotty.epfl.ch/docs/reference/contextual/multiversal-equality.html)
-
-   Multiversal equality is an opt-in way to check that comparisons using `==` and
-   `!=` only apply to compatible types. It thus removes the biggest remaining hurdle
-   to type-based refactoring. Normally, one would wish to be able to change the type
-   of some value or operation in a large code base, fix all type errors, and obtain
-   at the end a working program. But universal equality `==` works for all types.
-   So what should conceptually be a type error would not be reported, and
-   runtime behavior might change instead. Multiversal equality closes that loophole.
-
- - Restrict Implicit Conversions
-
-   ([Pending](https://github.com/lampepfl/dotty/pull/4229))
-   Implicit conversions are very easily mis-used, which makes them the cause of much surprising behavior.
-   We now require a language feature import not only when an implicit conversion is defined
-   but also when it is applied. This protects users of libraries that define implicit conversions
-   from being bitten by unanticipated feature interactions.
-
- - Null safety
-
-   (Planned) Adding a `null` value to every type has been called a "Billion Dollar Mistake"
-   by its inventor, Tony Hoare. With the introduction of union types, we can now do better.
-   A type like `String` will not carry the `null` value. To express that a value can
-   be `null`, one will use the union type `String | Null` instead. For backwards compatibility and Java interoperability, selecting on a value that's possibly `null` will still be permitted, but will have the declared effect that a `NullPointerException` can be thrown (see next section).
-
- - Effect Capabilities
-
-   (Planned) Scala so far is an impure functional programming language in that side effects
-   are not tracked. We want to put in the hooks to allow changing this over time. The idea
-   is to treat effects as capabilities represented as implicit parameters. Some effect types
-   will be defined by the language, others can be added by libraries. Initially, the language
-   will likely only cover exceptions as effect capabilities, but this can be extended later
-   to mutations and other effects. To ensure backwards compatibility, all effect
-   capabilities are initially available in `Predef`. Un-importing effect capabilities from
-   `Predef` will enable stricter effect checking, and provide stronger guarantees of purity.
-
-
-## Ergonomics
-
-The primary goal of the language constructs in this section is to make common programming patterns more concise and readable.
-
- - [Enums](http://dotty.epfl.ch/docs/reference/enums/enums.html) `enum Color { case Red, Green, Blue }`
-
-   Enums give a simple way to express a type with a finite set of named values. They
-   are found in most languages. The previous encodings of enums as library-defined types
-   were not fully satisfactory and consequently were not widely adopted. The new native `enum` construct in Scala is quite flexible; among other things, it gives a more concise way to express [algebraic data types](http://dotty.epfl.ch/docs/reference/enums/adts.html).
-   Scala enums will interoperate with the host platform. They support multiversal equality
-   out of the box, i.e. an enum can only be compared to values of the same enum type.
-
- - [Type lambdas](http://dotty.epfl.ch/docs/reference/new-types/type-lambdas.html) `[X] => C[X]`
-
-   Type lambdas were previously encoded in a roundabout way, exploiting
-   loopholes in Scala's type system which made it Turing complete. With
-   the removal of [unrestricted type projection](dropped-features/type-projection.html), the loopholes are eliminated, so the
-   previous encodings are no longer expressible. Type lambdas in the language provide
-   a safe and more ergonomic alternative.
-
- - Extension clauses `extension StringOps for String { ... }`
-
-   ([Pending](https://github.com/lampepfl/dotty/pull/4114)) Extension clauses make it
-   possible to define extension methods and late implementations
-   of traits via instance declarations.
-   They are more readable and convey intent better
-   than the previous encodings of these features through implicit classes and value classes.
-   Extensions will replace implicit classes. Extensions and opaque types together can
-   replace almost all usages of value classes. Value classes are kept around for the
-   time being, since there might be a good new use case for them in the future if the host platform supports "structs" or some other way to express multi-field value classes.
-
-
-## Performance
-
-The primary goal of the language constructs in this section is to enable high-level, safe code without having to pay a performance penalty.
-
- - [Opaque Type Aliases](https://dotty.epfl.ch/docs/reference/other-new-features/opaques.html) `opaque type A = T`
-
-   An opaque alias defines a new type `A` in terms of an existing type `T`. Unlike the previous modeling using value classes, opaque types never box. Opaque types are described in detail in [SIP 35](https://docs.scala-lang.org/sips/opaque-types.html).
-
- - [Erased parameters](http://dotty.epfl.ch/docs/reference/other-new-features/erased-terms.html)
-
-   Parameters of methods and functions can be declared `erased`. This means that
-   the corresponding arguments are only used for type checking purposes and no code
-   will be generated for them. Typical candidates for erased parameters are type
-   constraints such as `=:=` and `<:<` that are expressed through implicits.
-   Erased parameters improve both run times (since no argument has to be constructed) and compile times (since potentially large arguments can be eliminated early).
-
-See also: [A classification of proposed language features](./features-classification.html)
diff --git a/docs/docs/reference/overview.md b/docs/docs/reference/overview.md
index f0e5b6ae1e3c..24a0c7db357e 100644
--- a/docs/docs/reference/overview.md
+++ b/docs/docs/reference/overview.md
@@ -121,9 +121,9 @@ It's worth noting that macros were never included in the Scala 2 language specif
 To enable porting most uses of macros, we are experimenting with the advanced language constructs listed below. These designs are more provisional than the rest of the proposed language constructs for Scala 3.0. There might still be some changes until the final release. Stabilizing the feature set needed for meta programming is our first priority.
 
 - [Match Types](https://dotty.epfl.ch/docs/reference/new-types/match-types.html) allow computation on types.
-- [Inline](https://dotty.epfl.ch/docs/reference/other-new-features/inline.html) provides
+- [Inline](https://dotty.epfl.ch/docs/reference/metaprogramming/inline.html) provides
 by itself a straightforward implementation of some simple macros and is at the same time an essential building block for the implementation of complex macros.
-- [Quotes and Splices](https://dotty.epfl.ch/docs/reference/other-new-features/principled-meta-programming.html) provide a principled way to express macros and staging with a unified set of abstractions.
+- [Quotes and Splices](https://dotty.epfl.ch/docs/reference/metaprogramming/macros.html) provide a principled way to express macros and staging with a unified set of abstractions.
 - [Typeclass derivation](https://dotty.epfl.ch/docs/reference/contextual/derivation.html) provides an in-language implementation of the `Gen` macro in Shapeless and other foundational libraries. The new implementation is more robust, efficient and easier to use than the macro.
 - [Implicit by-name parameters](https://dotty.epfl.ch/docs/reference/contextual/inferable-by-name-parameters.html) provide a more robust in-language implementation of the `Lazy` macro in Shapeless.
 - [Erased Terms](https://dotty.epfl.ch/docs/reference/other-new-features/erased-terms.html) provide a general mechanism for compile-time-only computations.
diff --git a/docs/sidebar.yml b/docs/sidebar.yml
index a25063ae4746..10dc041ceef5 100644
--- a/docs/sidebar.yml
+++ b/docs/sidebar.yml
@@ -67,6 +67,20 @@ sidebar:
         url: docs/reference/contextual/inferable-by-name-parameters.html
       - title: Relationship with Scala 2 Implicits
         url: docs/reference/contextual/relationship-implicits.html
+  - title: Metaprogramming
+    subsection:
+      - title: Overview
+        url: docs/reference/metaprogramming/toc.html
+      - title: Inline
+        url: docs/reference/metaprogramming/inline.html
+      - title: Macros
+        url: docs/reference/metaprogramming/macros.html
+      - title: Staging
+        url: docs/reference/metaprogramming/staging.html
+      - title: TASTy Reflection
+        url: docs/reference/metaprogramming/tasty-reflect.html
+      - title: TASTy Inspection
+        url: docs/reference/metaprogramming/tasty-inspect.html
   - title: Other New Features
     subsection:
       - title: Trait Parameters
@@ -75,12 +89,6 @@
         url: docs/reference/other-new-features/trait-parameters.html
       - title: Creator Applications
         url: docs/reference/other-new-features/creator-applications.html
      - title: Export Clauses
        url: docs/reference/other-new-features/export.html
-      - title: Inlining by Rewriting
-        url: docs/reference/other-new-features/inline.html
-      - title: Meta Programming
-        url: docs/reference/other-new-features/principled-meta-programming.html
-      - title: TASTy Reflect
-        url: docs/reference/other-new-features/tasty-reflect.html
       - title: Opaque Type Aliases
         url: docs/reference/other-new-features/opaques.html
       - title: Parameter Untupling
diff --git a/library/src/scala/tasty/reflect/Core.scala b/library/src/scala/tasty/reflect/Core.scala
index 42035a3d9e01..911d9647c551 100644
--- a/library/src/scala/tasty/reflect/Core.scala
+++ b/library/src/scala/tasty/reflect/Core.scala
@@ -212,7 +212,7 @@ trait Core {
   /** Tree representing a pattern match `implicit match { ... }` in the source code */
   type ImplicitMatch = kernel.ImplicitMatch
 
-  /** Tree representing a tyr catch `try x catch { ... } finally { ... }` in the source code */
+  /** Tree representing a try catch `try x catch { ... } finally { ... }` in the source code */
   type Try = kernel.Try
 
   /** Tree representing a `return` in the source code */