From 30c357131f881dedaa0a2ee3b1184eec57e28083 Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Mon, 30 Nov 2020 11:00:15 +0100 Subject: [PATCH 1/7] Write blog post about new tuples in Scala 3 --- ...1-30-flexible-and-safe-tuples-in-scala3.md | 383 ++++++++++++++++++ 1 file changed, 383 insertions(+) create mode 100644 _posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md diff --git a/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md b/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md new file mode 100644 index 000000000..6f19c2dc6 --- /dev/null +++ b/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md @@ -0,0 +1,383 @@ +--- +layout: blog-detail +post-type: blog +by: Vincenzo Bazzucchi, Scala Center +title: Flexible and safe tuples in Scala 3 +--- + +# Flexible and safe tuples in Scala 3 + +Tuples are revisited and completely rethought in Scala 3. +They are more **flexible**, more dynamic and support a **wider range of operations**. +This is enabled by new and powerful language features. + +In this post we will explore the new capabilities of tuples before +looking under the hood to learn how the improvements in the Scala 3 type system, +in particular *dependent types* and *match types*, enable implementing type safe +operations on tuples. + +# The basics: what are tuples? + +In the Python programming language, tuples are a simple concept: +they are immutable collections of objects. As such, they are opposed +to lists, which are mutable. + +In Scala both `List`s and tuples are immutable, so why do we care +about tuples? + +Scala being a statically typed programming language, the difference between +list and tuples is in the type. Lists are *homogeneous* collections while +tuples are *heterogeneous*. In simpler terms, a tuple collects items maintaining +the type of each element, while a list collects objects retaining a common type +for all the elements. + +This is better explained with an example: +```scala +scala> List(1, "2", 3.0, List(4)) +val res0: List[Any] = List(1, 2, 3.0, List(4)) +``` +We see that the compiler tries to infer a common supertype for the elements of the list, +in this case `Any`. + +If we do the same with tuples, the elements maintain their individual and specific type: +```scala +scala> (1, "2", 3.0, List(4)) +val res0: (Int, String, Double, List[Int]) = (1, 2, 3.0, List(4)) +``` +This behavior is desirable in many cases, for example when +we want a function to return two or more values having different types. + +# How are tuples better in Scala 3? + +## Size limit + +Probably the most well known limitation of tuples in Scala 2 was the +restriction to 22 for the number of elements. + +```scala +scala> (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23) + error: tuples may not have more than 22 elements, but 23 given +``` + +In Scala 3 the previous tuple is perfectly legal. + +## Element accessor + +The only way to retrieve an element of a tuple in Scala 2 was to +use the (1-based) `._i` attribute. For example: + +```scala +("First", "Second")._2 // "Second" +``` + +In Scala 3, we can use the `apply` method with a 0-based argument: + +```scala +("First", "Second")(1) // "Second" +``` + +As most of indexes are 0 based in Scala, this brings more consistency +to codebases. It also provides more flexibility. 
We can, for example, +*iterate* over any tuple to print each element on a line: + +```scala +val someStuff = (1, "2", 3.0, List(4)) +for (i <- 0 until someStuff.size) + println(someStuff(i)) +``` + +The argument provided to `apply` is checked at compile time. This means that +**`someStuff(-1)` or `someStuff(4)` will result in a compilation error**. + +This was possible in Scala 2 with the `productIterator` although this +produced a value of type `Iterator[Any]` which means that we had to pattern +match or eventually cast the type of the elements. + +This brings us to the conceptual change that we will explore in the +next change: tuples become a collection of data that we can manipulate +and program against. + +## New operations + +A lot of operations are now available on tuples out of the box! + +Many of these were possible only using third-party libraries such +as Shapeless in Scala 2, which was a complicated task for new +Scala developers. + +These operations are now available in the standard library, they +are safe and preserve the individual types of each element. + +The first one was already introduced: `.size` retrieves the number +of elements in the tuple. + +### Adding elements to a tuple + +We can add an element to a tuple using the `*:` operator, +which is very similar to the `::` operator available on `List`. + +```scala +val fourElements = (1, "2", 3.0, List(4)) +val evenWeirder = 1 *: "2" *: 3.0 *: List(4) *: Tuple() + +val thisIsTrue = fourWeirdElements == evenWeirder // true + +val fiveWeirdElements = Set(0) *: evenWeirder // (Set(0),1,2,3.0,List(4)) +``` + +When we use a tuple as argument of `*:`, it is prepended as a single element: +```scala +val notGood: ((Int, Int), Int, Int) = (1, 2) *: (3,4) // ((1, 2), 3, 4) +``` +So how can we concatenate two tuples? +The `++` is there exactly for this purpose: +```scala +val better: (Int, Int, Int, Int) = (1, 2) ++ (3, 4) // (1, 2, 3, 4) +``` + +### Removing elements from a tuple + +Similarly to operators available on lists, we can retrieve a subset of +a tuple. Here is a quick overview: + + - `drop` allows to ignore the first *n* elements of the tuple, returning + an empty tuple when the number of elements is smaller than *n*: +```scala +(1, "2", 3.0, List(4)).drop(2) // (3.0, List(4)) +(1, "2", 3.0, List(4)).drop(10) // () +``` + - `take` retrieves the first *n* elements of the tuple, returning the original + tuple when the number of elements is smaller than *n* +```scala +(1, "2", 3.0, List(4)).take(2) // (1, "2") +(1, "2", 3.0, List(4)).take(10) // (1, "2", 3.0, List(4)) +``` + - `splitAt` creates two tuples, the first of which contains the first *n* elements + of the original tuple and the second contains the remaining elements +```scala +(Set(0), 1, "2", 3.0, List(4)).splitAt(3) // ((Set(0), 1, "2", 3.0), (3.0, List(4))) +``` + +### Transforming tuples + +Again, similarly to conversion methods on collections, it is possible to +transform a tuple into a collection. + +We have to pay attention to the type of the resulting collection. +Let's start with the simple case: as its name might hint, +`toArray` produces an array. The type of its elements will always be +`AnyRef`. This makes it easy to reason about this method although it +forgets the type of the elements. +It is also possible to use `.toIArray` which has exactly the same behavior +but produces an `IArray` where the `I` stands for immutable. 
+```scala +scala> (1, "2").toArray +val res0: Array[AnyRef] = Array(1, 2) +``` + +I believe however that the most interesting conversion is `toList` +which produces a `List[U]` where `U` is the [union type](https://dotty.epfl.ch/docs/reference/new-types/union-types.html) +of the types of the elements of the tuple. +That is: + +```scala +val ls: List[Int | String | Double] = (1, "2", 3.0).toList +``` +This is interesting because the type information is somehow maintained. +We can iterate over `list` and use pattern matching to apply the +correct transformation, knowing exactly how many and what cases to +treat: + +```scala +// The compiler tells it cannot help with checking: +// Non-exhaustive match +(1, "2").toArray.map { + case i: Int => (i * 2).toString + case j: String => j +} + +// The code compiles without errors or warning +// the compile verified that we handled all possible cases +(1, "2").toList.map { + case i: Int => (i + 2).toString + case j: String => j +} +``` + +We can also transform a tuple by applying a function to each element. +The method, similarly to what we are used to with collections, is called +`map`. The difference from collections (or functors) is however +that they expect a `f: A => B` where `A` is the type of the elements +of the collection. +With tuples each element has a different type! +How can we generalize the concept of a function whose argument type is +not fixed ? +We can use a **`PolyFunction`**. This is a more advanced syntax: + +```scala +val options: (Option[Int], Option[Char], Option[String], Option[Double]) = + (1, 'a', "dog", 3.0).map[[X] =>> Option[X]]([T] => (t: T) => Some(t)) +``` +You can read more about `PolyFunction`s [here]() + +## Zipping tuples + +The last operation allows to pair the elements of two tuples. +You might have guessed, it is called `zip`. If the two tuples have +different lengths, the extra elements of the longest will be +ignored: + +```scala +val numbers = (1, 2, 3, 4, 5) +val letters = ('a', 'b', 'c') + +numbers.zip(letters) // ((1, 'a'), (2, 'b'), (3, 'c')) +``` + +# Under the hood: new type operators of Scala 3 + +I believe that the core new features that allows such a flexible +implementation of tuples are **match types**. +I invite you to read more about them [here](http://dotty.epfl.ch/docs/reference/new-types/match-types.html). + +Let's see how we can implement the `++` operator using this powerful +construct. We will naively call our version `concat` + +DISCLAIMER: This section is a bit more advanced ! + +## Defining tuples + +First let's define our own tuple: + +```scala +enum Tup: + case EmpT + case TCons[H, T <: Tup](head: H, tail: T) +``` + +That is a tuple is either empty, or an element `head` which precedes +another tuple. Using this recursive definition we can create +a tuple as follow: + +```scala +import Tup._ + +val myTup = TCons(1, TCons(2, EmpT)) +``` +It is not very pretty, but it can be easily adapted to provide +the same ease of use as the previous examples. 
+To do so we can use another Scala 3 feature: [extension methods](http://dotty.epfl.ch/docs/reference/contextual/extension-methods.html) + +```scala +import Tup._ + +extension [A, T <: Tup] (a: A) def *: (t: T): TCons[A, T] = + TCons(a, t) +``` +So that we can write: + +```scala +1 *: "2" *: EmpT +``` + +## Concatenating tuples + +Now let's focus on `concat`, which could look like this: +```scala +import Tup._ + +def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = + left match + case EmpT => right + case TCons(head, tail) => TCons(head, concat(tail, right)) +``` + +Let's analyze the algorithm line by line: +`L` and `R` are the type of the left and right tuple. We require +them to be a subtype of `Tup` because we want to concatenate tuples. +Why not using `Tup` directly? Because in this way we receive more specific +information about the two arguments. +Then we proceed recursively by case: if the left tuple is empty, +the result of the concatenation is just the right tuple. +Otherwise the result is the current head followed by the result of +concatenating the tail with the other tuple. + +If we test the function, it seems to work: +```scala +val left = 1 *: 2 *: EmpT +val right = 3 *: 4 *: EmpT + +concat(left, right) // TCons(1,TCons(2,TCons(3, TCons(4,EmpT)))) +``` + +So everything seems good. However we can have more safety. +For instance the following code is perfectly fine: +```scala +def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = left +``` +Because the returned type is just a tuple, we do not check anything else. +This means that the function can return an arbitrary tuple, +the compiler cannot check that returned value consists of the concatenation +of the two tuples. In other words, we need a type to indicate that +the return of this function is all the types of `left` followed +by all the types of the elements of `right`. + +Can we make it so that the compiler verifies that we are indeed +returning a tuple consisting of the correct elements ? + +In Scala 3 it is now possible, without requiring external libraries! + +## A new type for the result of `concat` + +We know that we need to focus on the return type. We can define this the return +type exactly as we have just described it. +Let's call this type `Concat` to mirror the name of the function. + +```scala +type Concat[L <: Tup, R <: Tup] <: Tup = L match + case EmpT.type => R + case TCons[h, t] => TCons[h, Concat[t, R]] +``` + +You can see that the implementation closely follows the one +above for the method. +To use it we need to massage a bit the method implementation and +to change its return type: + +```scala +def concat[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = + left match + case _: EmpT.type => right + case cons: TCons[head, tail] => TCons(cons.head, concat(cons.tail, right)) +``` + +We use here a combination of match types and a form of dependent types called +*dependent match types*. There are some quirks to it as you might have noticed: +using lower case types means using type variables and we cannot use pattern matching +on the object. I think however that this implementation is extremely concise and readable. + +Now the compiler will prevent us from doing mistakes: + +```scala +def malicious[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = left +// This does not compile! +``` + +We can use an extension method to allow users to write `(1, 2) ++ (3, 4)` instead +of `concat((1, 2), (3, 4))`, I believe that you now know how to do this too. 
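
For reference, here is one possible sketch of that extension method, reusing the
`Tup`, `concat` and `Concat` definitions above (the name `++` simply mirrors the
operator offered by the standard library):

```scala
import Tup._

extension [L <: Tup, R <: Tup] (left: L) def ++ (right: R): Concat[L, R] =
  concat(left, right)
```

With this in scope, the earlier test can be written as `left ++ right` instead of
`concat(left, right)`.
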
+ +We can use the same approach for other functions on tuples, I invite you to have +a look at the source code of the standard library to see how the other operators are +implemented. + +# Conclusion + +We had a look at the new operations that are available on tuples in Scala 3 and at +how a more flexible type system provides the fundamental tools to implement safer +and more readable code. + +This shows how advanced type combinators in Scala 3 allow to create +APIs that benefit developers no matter their level of proficiency in the language: +an expert-oriented feature such as dependent match types allow to build a safe +and simple operation such as tuple concatenation. + From 4b6a6b909b99a887652cdf45f1b175c85b5e039e Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Fri, 15 Jan 2021 10:48:00 +0100 Subject: [PATCH 2/7] Rewrite post to show power of tuples with an application --- ...1-30-flexible-and-safe-tuples-in-scala3.md | 383 ------------------ ...es-bring-generic-programming-to-scala-3.md | 326 +++++++++++++++ 2 files changed, 326 insertions(+), 383 deletions(-) delete mode 100644 _posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md create mode 100644 _posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md diff --git a/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md b/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md deleted file mode 100644 index 6f19c2dc6..000000000 --- a/_posts/2020-11-30-flexible-and-safe-tuples-in-scala3.md +++ /dev/null @@ -1,383 +0,0 @@ ---- -layout: blog-detail -post-type: blog -by: Vincenzo Bazzucchi, Scala Center -title: Flexible and safe tuples in Scala 3 ---- - -# Flexible and safe tuples in Scala 3 - -Tuples are revisited and completely rethought in Scala 3. -They are more **flexible**, more dynamic and support a **wider range of operations**. -This is enabled by new and powerful language features. - -In this post we will explore the new capabilities of tuples before -looking under the hood to learn how the improvements in the Scala 3 type system, -in particular *dependent types* and *match types*, enable implementing type safe -operations on tuples. - -# The basics: what are tuples? - -In the Python programming language, tuples are a simple concept: -they are immutable collections of objects. As such, they are opposed -to lists, which are mutable. - -In Scala both `List`s and tuples are immutable, so why do we care -about tuples? - -Scala being a statically typed programming language, the difference between -list and tuples is in the type. Lists are *homogeneous* collections while -tuples are *heterogeneous*. In simpler terms, a tuple collects items maintaining -the type of each element, while a list collects objects retaining a common type -for all the elements. - -This is better explained with an example: -```scala -scala> List(1, "2", 3.0, List(4)) -val res0: List[Any] = List(1, 2, 3.0, List(4)) -``` -We see that the compiler tries to infer a common supertype for the elements of the list, -in this case `Any`. - -If we do the same with tuples, the elements maintain their individual and specific type: -```scala -scala> (1, "2", 3.0, List(4)) -val res0: (Int, String, Double, List[Int]) = (1, 2, 3.0, List(4)) -``` -This behavior is desirable in many cases, for example when -we want a function to return two or more values having different types. - -# How are tuples better in Scala 3? - -## Size limit - -Probably the most well known limitation of tuples in Scala 2 was the -restriction to 22 for the number of elements. 
- -```scala -scala> (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23) - error: tuples may not have more than 22 elements, but 23 given -``` - -In Scala 3 the previous tuple is perfectly legal. - -## Element accessor - -The only way to retrieve an element of a tuple in Scala 2 was to -use the (1-based) `._i` attribute. For example: - -```scala -("First", "Second")._2 // "Second" -``` - -In Scala 3, we can use the `apply` method with a 0-based argument: - -```scala -("First", "Second")(1) // "Second" -``` - -As most of indexes are 0 based in Scala, this brings more consistency -to codebases. It also provides more flexibility. We can, for example, -*iterate* over any tuple to print each element on a line: - -```scala -val someStuff = (1, "2", 3.0, List(4)) -for (i <- 0 until someStuff.size) - println(someStuff(i)) -``` - -The argument provided to `apply` is checked at compile time. This means that -**`someStuff(-1)` or `someStuff(4)` will result in a compilation error**. - -This was possible in Scala 2 with the `productIterator` although this -produced a value of type `Iterator[Any]` which means that we had to pattern -match or eventually cast the type of the elements. - -This brings us to the conceptual change that we will explore in the -next change: tuples become a collection of data that we can manipulate -and program against. - -## New operations - -A lot of operations are now available on tuples out of the box! - -Many of these were possible only using third-party libraries such -as Shapeless in Scala 2, which was a complicated task for new -Scala developers. - -These operations are now available in the standard library, they -are safe and preserve the individual types of each element. - -The first one was already introduced: `.size` retrieves the number -of elements in the tuple. - -### Adding elements to a tuple - -We can add an element to a tuple using the `*:` operator, -which is very similar to the `::` operator available on `List`. - -```scala -val fourElements = (1, "2", 3.0, List(4)) -val evenWeirder = 1 *: "2" *: 3.0 *: List(4) *: Tuple() - -val thisIsTrue = fourWeirdElements == evenWeirder // true - -val fiveWeirdElements = Set(0) *: evenWeirder // (Set(0),1,2,3.0,List(4)) -``` - -When we use a tuple as argument of `*:`, it is prepended as a single element: -```scala -val notGood: ((Int, Int), Int, Int) = (1, 2) *: (3,4) // ((1, 2), 3, 4) -``` -So how can we concatenate two tuples? -The `++` is there exactly for this purpose: -```scala -val better: (Int, Int, Int, Int) = (1, 2) ++ (3, 4) // (1, 2, 3, 4) -``` - -### Removing elements from a tuple - -Similarly to operators available on lists, we can retrieve a subset of -a tuple. 
Here is a quick overview: - - - `drop` allows to ignore the first *n* elements of the tuple, returning - an empty tuple when the number of elements is smaller than *n*: -```scala -(1, "2", 3.0, List(4)).drop(2) // (3.0, List(4)) -(1, "2", 3.0, List(4)).drop(10) // () -``` - - `take` retrieves the first *n* elements of the tuple, returning the original - tuple when the number of elements is smaller than *n* -```scala -(1, "2", 3.0, List(4)).take(2) // (1, "2") -(1, "2", 3.0, List(4)).take(10) // (1, "2", 3.0, List(4)) -``` - - `splitAt` creates two tuples, the first of which contains the first *n* elements - of the original tuple and the second contains the remaining elements -```scala -(Set(0), 1, "2", 3.0, List(4)).splitAt(3) // ((Set(0), 1, "2", 3.0), (3.0, List(4))) -``` - -### Transforming tuples - -Again, similarly to conversion methods on collections, it is possible to -transform a tuple into a collection. - -We have to pay attention to the type of the resulting collection. -Let's start with the simple case: as its name might hint, -`toArray` produces an array. The type of its elements will always be -`AnyRef`. This makes it easy to reason about this method although it -forgets the type of the elements. -It is also possible to use `.toIArray` which has exactly the same behavior -but produces an `IArray` where the `I` stands for immutable. -```scala -scala> (1, "2").toArray -val res0: Array[AnyRef] = Array(1, 2) -``` - -I believe however that the most interesting conversion is `toList` -which produces a `List[U]` where `U` is the [union type](https://dotty.epfl.ch/docs/reference/new-types/union-types.html) -of the types of the elements of the tuple. -That is: - -```scala -val ls: List[Int | String | Double] = (1, "2", 3.0).toList -``` -This is interesting because the type information is somehow maintained. -We can iterate over `list` and use pattern matching to apply the -correct transformation, knowing exactly how many and what cases to -treat: - -```scala -// The compiler tells it cannot help with checking: -// Non-exhaustive match -(1, "2").toArray.map { - case i: Int => (i * 2).toString - case j: String => j -} - -// The code compiles without errors or warning -// the compile verified that we handled all possible cases -(1, "2").toList.map { - case i: Int => (i + 2).toString - case j: String => j -} -``` - -We can also transform a tuple by applying a function to each element. -The method, similarly to what we are used to with collections, is called -`map`. The difference from collections (or functors) is however -that they expect a `f: A => B` where `A` is the type of the elements -of the collection. -With tuples each element has a different type! -How can we generalize the concept of a function whose argument type is -not fixed ? -We can use a **`PolyFunction`**. This is a more advanced syntax: - -```scala -val options: (Option[Int], Option[Char], Option[String], Option[Double]) = - (1, 'a', "dog", 3.0).map[[X] =>> Option[X]]([T] => (t: T) => Some(t)) -``` -You can read more about `PolyFunction`s [here]() - -## Zipping tuples - -The last operation allows to pair the elements of two tuples. -You might have guessed, it is called `zip`. 
If the two tuples have -different lengths, the extra elements of the longest will be -ignored: - -```scala -val numbers = (1, 2, 3, 4, 5) -val letters = ('a', 'b', 'c') - -numbers.zip(letters) // ((1, 'a'), (2, 'b'), (3, 'c')) -``` - -# Under the hood: new type operators of Scala 3 - -I believe that the core new features that allows such a flexible -implementation of tuples are **match types**. -I invite you to read more about them [here](http://dotty.epfl.ch/docs/reference/new-types/match-types.html). - -Let's see how we can implement the `++` operator using this powerful -construct. We will naively call our version `concat` - -DISCLAIMER: This section is a bit more advanced ! - -## Defining tuples - -First let's define our own tuple: - -```scala -enum Tup: - case EmpT - case TCons[H, T <: Tup](head: H, tail: T) -``` - -That is a tuple is either empty, or an element `head` which precedes -another tuple. Using this recursive definition we can create -a tuple as follow: - -```scala -import Tup._ - -val myTup = TCons(1, TCons(2, EmpT)) -``` -It is not very pretty, but it can be easily adapted to provide -the same ease of use as the previous examples. -To do so we can use another Scala 3 feature: [extension methods](http://dotty.epfl.ch/docs/reference/contextual/extension-methods.html) - -```scala -import Tup._ - -extension [A, T <: Tup] (a: A) def *: (t: T): TCons[A, T] = - TCons(a, t) -``` -So that we can write: - -```scala -1 *: "2" *: EmpT -``` - -## Concatenating tuples - -Now let's focus on `concat`, which could look like this: -```scala -import Tup._ - -def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = - left match - case EmpT => right - case TCons(head, tail) => TCons(head, concat(tail, right)) -``` - -Let's analyze the algorithm line by line: -`L` and `R` are the type of the left and right tuple. We require -them to be a subtype of `Tup` because we want to concatenate tuples. -Why not using `Tup` directly? Because in this way we receive more specific -information about the two arguments. -Then we proceed recursively by case: if the left tuple is empty, -the result of the concatenation is just the right tuple. -Otherwise the result is the current head followed by the result of -concatenating the tail with the other tuple. - -If we test the function, it seems to work: -```scala -val left = 1 *: 2 *: EmpT -val right = 3 *: 4 *: EmpT - -concat(left, right) // TCons(1,TCons(2,TCons(3, TCons(4,EmpT)))) -``` - -So everything seems good. However we can have more safety. -For instance the following code is perfectly fine: -```scala -def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = left -``` -Because the returned type is just a tuple, we do not check anything else. -This means that the function can return an arbitrary tuple, -the compiler cannot check that returned value consists of the concatenation -of the two tuples. In other words, we need a type to indicate that -the return of this function is all the types of `left` followed -by all the types of the elements of `right`. - -Can we make it so that the compiler verifies that we are indeed -returning a tuple consisting of the correct elements ? - -In Scala 3 it is now possible, without requiring external libraries! - -## A new type for the result of `concat` - -We know that we need to focus on the return type. We can define this the return -type exactly as we have just described it. -Let's call this type `Concat` to mirror the name of the function. 
- -```scala -type Concat[L <: Tup, R <: Tup] <: Tup = L match - case EmpT.type => R - case TCons[h, t] => TCons[h, Concat[t, R]] -``` - -You can see that the implementation closely follows the one -above for the method. -To use it we need to massage a bit the method implementation and -to change its return type: - -```scala -def concat[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = - left match - case _: EmpT.type => right - case cons: TCons[head, tail] => TCons(cons.head, concat(cons.tail, right)) -``` - -We use here a combination of match types and a form of dependent types called -*dependent match types*. There are some quirks to it as you might have noticed: -using lower case types means using type variables and we cannot use pattern matching -on the object. I think however that this implementation is extremely concise and readable. - -Now the compiler will prevent us from doing mistakes: - -```scala -def malicious[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = left -// This does not compile! -``` - -We can use an extension method to allow users to write `(1, 2) ++ (3, 4)` instead -of `concat((1, 2), (3, 4))`, I believe that you now know how to do this too. - -We can use the same approach for other functions on tuples, I invite you to have -a look at the source code of the standard library to see how the other operators are -implemented. - -# Conclusion - -We had a look at the new operations that are available on tuples in Scala 3 and at -how a more flexible type system provides the fundamental tools to implement safer -and more readable code. - -This shows how advanced type combinators in Scala 3 allow to create -APIs that benefit developers no matter their level of proficiency in the language: -an expert-oriented feature such as dependent match types allow to build a safe -and simple operation such as tuple concatenation. - diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md new file mode 100644 index 000000000..2c3572ff8 --- /dev/null +++ b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md @@ -0,0 +1,326 @@ +--- +layout: blog-detail +post-type: blog +by: Vincenzo Bazzucchi, Scala Center +title: Tuples bring generic programming to Scala 3 +--- + +Tuples allow developers to create new types by associating existing types. In +doing so, they are very similar to case classes but unlike them they retain +only the structure of the types (e.g., which type is in which order) rather +than giving each element a name. + +In Scala 3, tuples gain power thanks to new operations, additional type safety +and fewer restrictions, pointing in the direction of a construct called +**Heterogeneous Lists** (HLists), one of the core data structures in generic +programming. + +This post focuses on how tuples in Scala 3 allow to address generic programming +challenges, without external libraries or macros. It will also provide a short +cheat sheet for new tuple operations as well as showing how a new language +feature, dependent match types, allows the implementation of these operations. + +# Why generic programming ? + +When considering type-safety, HList offer the same guarantees as case classes, +without having to declare class or field names. This makes them more +convenient in some scenarios, for example in return types. 
If we consider +`List`, you can see that `def splitAt(n: Int)` produces a `(List[A], List[A])` +and not a `case class SplitResult(left: List[A], right: List[A])` because of +the cognitive cost of introducing a new name `SplitResult`. + +Moreover, there are infinitely many case classes which share a common +structure, which means that they have the same number and type of fields. We +might want to apply the same transformations to them, so that such +transformations can be defined only once. [The Type Astronaut's Guide to +Shapeless](https://underscore.io/books/shapeless-guide/) proposes the following +simple example: + +```scala +case class Employee(name: String, number: Int, manager: Boolean) case +class IceCream(name: String, numCherries: Int, inCone: Boolean) +``` + +If you are implementing an operation such as serializing instances of these +types to CSV or JSON, you will realize that the logic is exactly the same and +you will want to implement it only once. This is equivalent on defining the +serialization algorithm for the `(String, Int, Boolean)` HList, assuming that +you can reduce both case classes to it. + +# A simple CSV encoder + +Let's consider a simple CSV encoder for our Employee and IceCream case classes. +Each record, or line, of a CSV file is a sequence of values separated by a +delimiter, usually a comma or a semicolon. In Scala we can represent each value +as text, using the `String` type, and thus each record can be a list of values, +with type `List[String]`. Therefore, in order to encode case classes to CSV, we +need to extract each field of the case class and to turn it into a `String`, +before collecting all the fields in a list. In this setting, `Employee` and +`IceCream` could be treated in the same way, because they can be simply be seen +as a `(String, Int, Boolean)` which need to be transformed into a +`List[String]`. We will first see how to handle this simple scenario before +briefly looking at how to obtain a tuple from a case class. + +Assuming that we know how to transform each element of a tuple into a +`List[String]`, can we transform any tuple into a `List[String]` ? + +The answer is yes, and this is possible because Scala 3 introduces types `*:`, +`EmptyTuple` and `NonEmptyTuple` but also methods `head` and `tail` which allow +us to define recursive operations on tuples. + +## Set up + +Let's define the `Enc[A]` type-class, which describes the capability of values +of type `A` to be converted into `List[String]`: + +```scala +trait Enc[A]: + def toCsv(a: A): List[String] +``` + +We can then add some instances for our base types: + +```scala +object BaseEnc: + given Enc[Int] with + def toCsv(x: Int) = List(x.toString) + + given Enc[Boolean] with + def toCsv(x: Boolean) = List(if x then "true" else "false") + + given Enc[String] with + def toCsv(x: String) = List(x) +``` + +## Recursion! + +Now that all these tools are in place, let's focus on the hard part: +implementing the actual transformation. Similarly to how you may be used to +recurse on lists, on tuples we need to manage two scenarios: the base case +(`EmptyTuple`) and the inductive case (`NonEmptyTuple`). + +In the following snippet, I prefer to use the [context bound +syntax](https://dotty.epfl.ch/docs/reference/contextual/context-bounds.html) +even if I need a handle for the instances because it concentrates all the +constraints in the type parameter list (and I do not need to come up with any +name). 
After this personal preference disclaimer, let's see the two cases: + +```scala +object TupleEnc: + // Base case + given [T: Enc]: Enc[T *: EmptyTuple] with + def toCsv(oneElement: T *: EmptyTuple) = + summon[Enc[T]].toCsv(oneElement.head) + + // Inductive case + given [H: Enc, T <: NonEmptyTuple: Enc]: Enc[H *: T] with + def toCsv(tuple: H *: T) = + summon[Enc[H]].toCsv(tuple.head) ++ summon[Enc[T]].toCsv(tuple.tail) +``` +When recursion hits the last element of the tuple, we use its encoder, +otherwise we invoke the encoder for the first element and for the tail of the +tuple and combine the the two lists using the concatenation operator. + +We can create an entrypoint function and test this implementation: +```scala +def tupleToCsv[X <: Tuple: Enc](tuple: X): List[String] = + summon[Enc[X]].toCsv(tuple) + +tupleToCsv(("Bob", 42, false)) // List("Bob", 42, false) +``` + +## How to obtain a tuple from a case class ? + +Scala 3 introduces the +[`Mirror`](https://dotty.epfl.ch/docs/reference/contextual/derivation.html) +type-class which provides type-level information about the components and +labels of types. [A paragraph from that +documentation](https://dotty.epfl.ch/docs/reference/contextual/derivation.html#types-supporting-derives-clauses) +is particularly interesting for our use case: + +> The compiler automatically generates instances of `Mirror` for `enum`s and +> their cases, **case classes** and case objects, sealed classes or traits +> having only case classes and case objects as children. + +That's why we can obtain a tuple from a case class using: +```scala +val bob: Employee = Employee("Bob", 42, false) +val bobTuple: (String, Int, Boolean) = Tuple.fromProductTyped(bob) +``` +But that is also why we can revert the operation: +```scala +val bobAgain: Employee = summon[Mirror.Of[Employee]].fromProduct(bobTuple) +``` + +# New tuples operations +In the previous example, we saw that we can use `.head` and `.tail` on tuples, +but Scala 3 introduces many other operations, here is a quick overview: + +| Operation | Example | Result | +|------------|-------------------------------------------------------------|------------------------------------------------------| +| `size` | `(1, 2, 3).size` | `3` | +| `head` | `(3 *: 4 *: 5 *: EmptyTuple).head` | `3` | +| `tail` | `(3 *: 4 *: 5 *: EmptyTuple).tail` | `(4, 5)` | +| `*:` | `3 *: 4 *: 5 *: 6 *: EmptyTuple` | `(3, 4, 5, 6)` | +| `++` | `(1, 2, 3) ++ (4, 5, 6)` | `(1, 2, 3, 4, 5, 6)` | +| `drop` | `(1, 2, 3).drop(2)` | `(3)` | +| `take` | `(1, 2, 3).take(2)` | `(1, 2)` | +| `apply` | `(1, 2, 3)(2)` | `3` | +| `splitAt` | `(1, 2, 3, 4, 5).splitAt(2)` | `((1, 2), (3, 4, 5))` | +| `zip` | `(1, 2, 3).zip(('a', 'b'))` | `((1 'a'), (2, 'b'))` | +| `toList` | `(1, 'a', 2).toList` | `List(1, 'a', 2) : List[Int | Char]` | +| `toArray` | `(1, 'a', 2).toArray` | `Array(1, '1', 2) : Array[AnyRef]` | +| `toIArray` | `(1, 'a', 2).toIArray` | `IArray(1, '1', 2) : IArray[AnyRef]` | +| `map` | `(1, 'a').map[[X] =>> Option[X]]([T] => (t: T) => Some(t))` | `(Some(1), Some('a')) : (Option[Int], Option[Char])` | + + +# Under the hood: Scala 3 introduces match types + +All the operations in the above table use very precise types. For example, the +compiler ensures that `3 *: (4, 5, 6)` is a `(Int, Int, Int, Int)` or that the +index provided to `apply` is strictly inferior to the size of the tuple. + +How is this possible? + +The core new feature that allows such a flexible implementation of tuples are +**match types**. 
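
As a first taste, here is a minimal, self-contained illustration: a match type
computes a type by pattern matching on another type (the standard library's
`Tuple.Head` follows the same idea).

```scala
type Head[T <: NonEmptyTuple] = T match
  case h *: t => h

val i: Head[(Int, String)] = 42      // Head[(Int, String)] reduces to Int
val s: Head[(String, Int)] = "hello" // Head[(String, Int)] reduces to String
```
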
I invite you to read more about them +[here](http://dotty.epfl.ch/docs/reference/new-types/match-types.html). + +Let's see how we can implement the `++` operator using this powerful construct. +We will call our naive version `concat`. + +## Defining tuples + +First let's define our own tuple: + +```scala +enum Tup: + case EmpT + case TCons[H, T <: Tup](head: H, tail: T) +``` + +That is a tuple is either empty, or an element `head` which precedes another +tuple. Using this recursive definition we can create a tuple in the following +way: + +```scala +import Tup._ + +val myTup = TCons(1, TCons(2, EmpT)) +``` +It is not very pretty, but it can be easily adapted to provide the same ease of +use as the previous examples. To do so we can use another Scala 3 feature: +[extension +methods](http://dotty.epfl.ch/docs/reference/contextual/extension-methods.html) + +```scala +import Tup._ + +extension [A, T <: Tup] (a: A) def *: (t: T): TCons[A, T] = + TCons(a, t) +``` +So that we can write: + +```scala +1 *: "2" *: EmpT +``` + +## Concatenating tuples + +Now let's focus on `concat`, which could look like this: +```scala +import Tup._ + +def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = + left match + case EmpT => right + case TCons(head, tail) => TCons(head, concat(tail, right)) +``` + +Let's analyze the algorithm line by line: `L` and `R` are the type of the left +and right tuple. We require them to be a subtype of `Tup` because we want to +concatenate tuples. Then we proceed recursively by case: if the left tuple is +empty, the result of the concatenation is just the right tuple. Otherwise the +result is the current head followed by the result of concatenating the tail +with the other tuple. + +If we test the function, it seems to work: +```scala +val left = 1 *: 2 *: EmpT +val right = 3 *: 4 *: EmpT + +concat(left, right) // TCons(1,TCons(2,TCons(3, TCons(4,EmpT)))) +``` + +So everything seems good. However we can ask the compiler to verify that the +function behaves as expected. For instance the following code type-checks: + +```scala +def concat[L <: Tup, R <: Tup](left: L, right: R): Tup = left +``` + +More problematic is the fact that this signature prevents us from using a more +specific type for our variables or methods: +```scala +// This does not compile +val res: TCons[Int, TCons[Int, TCons[Int, TCons[Int, EmpT.type]]]] = concat(left, right) +``` + +Because the returned type is just a tuple, we do not check anything else. This +means that the function can return an arbitrary tuple, the compiler cannot +check that returned value consists of the concatenation of the two tuples. In +other words, we need a type to indicate that the return of this function is all +the types of `left` followed by all the types of the elements of `right`. + +Can we make it so that the compiler verifies that we are indeed returning a +tuple consisting of the correct elements ? + +In Scala 3 it is now possible, without requiring external libraries! + +## A new type for the result of `concat` + +We know that we need to focus on the return type. We can define it exactly as +we have just described it. Let's call this type `Concat` to mirror the name of +the function. + +```scala +type Concat[L <: Tup, R <: Tup] <: Tup = L match + case EmpT.type => R + case TCons[headType, tailType] => TCons[headType, Concat[tailType, R]] +``` + +You can see that the implementation closely follows the one above for the +method. 
The syntax can be read in the following way: the `Concat` type is a +subtype of `Tup` and is obtained by combining types `L` and `R` which are both +subtypes of `Tup`. To use it we need to massage a bit the method +implementation and to change its return type: + +```scala +def concat[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = + left match + case _: EmpT.type => right + case cons: TCons[_, _] => TCons(cons.head, concat(cons.tail, right)) +``` + +We use here a combination of match types and a form of dependent types called +*dependent match types* (docs +[here](http://dotty.epfl.ch/docs/reference/new-types/match-types.html) and +[here](http://dotty.epfl.ch/docs/reference/new-types/dependent-function-types.html)). +There are some quirks to it as you might have noticed: using lower case types +means using type variables and we cannot use pattern matching on the object. I +think however that this implementation is extremely concise and readable. + +Now the compiler will prevent us from making the above mistake: + +```scala +def wrong[L <: Tup, R <: Tup](left: L, right: R): Concat[L, R] = left +// This does not compile! +``` + +We can use an extension method to allow users to write `(1, 2) ++ (3, 4)` +instead of `concat((1, 2), (3, 4))`, similarly to how we implemented `*:`. + +We can use the same approach for other functions on tuples, I invite you to +have a look at the [source code of the standard +library](https://github.com/lampepfl/dotty/blob/87102a0b182849c71f61a6febe631f767bcc72c3/library/src-bootstrapped/scala/Tuple.scala) +to see how the other operators are implemented. From eacb003e4645afaee11047697cde675939ae7024 Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Wed, 20 Jan 2021 09:24:55 +0100 Subject: [PATCH 3/7] Address spelling and formatting remarks --- ...es-bring-generic-programming-to-scala-3.md | 21 ++++++++++--------- 1 file changed, 11 insertions(+), 10 deletions(-) diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md index 2c3572ff8..feaa8abe1 100644 --- a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md +++ b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md @@ -37,27 +37,27 @@ Shapeless](https://underscore.io/books/shapeless-guide/) proposes the following simple example: ```scala -case class Employee(name: String, number: Int, manager: Boolean) case -class IceCream(name: String, numCherries: Int, inCone: Boolean) +case class Employee(name: String, number: Int, manager: Boolean) +case class IceCream(name: String, numCherries: Int, inCone: Boolean) ``` If you are implementing an operation such as serializing instances of these types to CSV or JSON, you will realize that the logic is exactly the same and you will want to implement it only once. This is equivalent on defining the serialization algorithm for the `(String, Int, Boolean)` HList, assuming that -you can reduce both case classes to it. +you can map both case classes to it. # A simple CSV encoder -Let's consider a simple CSV encoder for our Employee and IceCream case classes. +Let's consider a simple CSV encoder for our `Employee` and `IceCream` case classes. Each record, or line, of a CSV file is a sequence of values separated by a delimiter, usually a comma or a semicolon. In Scala we can represent each value as text, using the `String` type, and thus each record can be a list of values, with type `List[String]`. 
Therefore, in order to encode case classes to CSV, we need to extract each field of the case class and to turn it into a `String`, -before collecting all the fields in a list. In this setting, `Employee` and +and then collect all the fields in a list. In this setting, `Employee` and `IceCream` could be treated in the same way, because they can be simply be seen -as a `(String, Int, Boolean)` which need to be transformed into a +as a `(String, Int, Boolean)` which needs to be transformed into a `List[String]`. We will first see how to handle this simple scenario before briefly looking at how to obtain a tuple from a case class. @@ -95,9 +95,10 @@ object BaseEnc: ## Recursion! Now that all these tools are in place, let's focus on the hard part: -implementing the actual transformation. Similarly to how you may be used to -recurse on lists, on tuples we need to manage two scenarios: the base case -(`EmptyTuple`) and the inductive case (`NonEmptyTuple`). +implementing the transformation of a tuple with an arbitrary number of elements +into a `List[String]`. Similarly to how you may be used to recurse on lists, on +tuples we need to manage two scenarios: the base case (`EmptyTuple`) and the +inductive case (`NonEmptyTuple`). In the following snippet, I prefer to use the [context bound syntax](https://dotty.epfl.ch/docs/reference/contextual/context-bounds.html) @@ -119,7 +120,7 @@ object TupleEnc: ``` When recursion hits the last element of the tuple, we use its encoder, otherwise we invoke the encoder for the first element and for the tail of the -tuple and combine the the two lists using the concatenation operator. +tuple and combine the two lists using the concatenation operator. We can create an entrypoint function and test this implementation: ```scala From 8963629261f4419f91f3f0c478333112264c19e5 Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Wed, 20 Jan 2021 10:08:28 +0100 Subject: [PATCH 4/7] Separate row and field encoders and handle emptytuple --- ...es-bring-generic-programming-to-scala-3.md | 60 +++++++++++-------- 1 file changed, 35 insertions(+), 25 deletions(-) diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md index feaa8abe1..6b82cc2d2 100644 --- a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md +++ b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md @@ -70,33 +70,41 @@ us to define recursive operations on tuples. ## Set up -Let's define the `Enc[A]` type-class, which describes the capability of values -of type `A` to be converted into `List[String]`: +Let's define the `RowEncoder[A]` type-class, which describes the capability of +values of type `A` to be converted into `Row`. To encode a type to `Row`, we +first need to convert each field of the type into a `String`: this capability +is defined by the `FieldEncoder` type-class. 
```scala -trait Enc[A]: - def toCsv(a: A): List[String] +trait FieldEncoder[A]: + def encodeField(a: A): String + +type Row = List[String] + +trait RowEncoder[A]: + def encodeRow(a: A): Row ``` We can then add some instances for our base types: ```scala -object BaseEnc: - given Enc[Int] with - def toCsv(x: Int) = List(x.toString) +object BaseEncoders: + given FieldEncoder[Int] with + def encodeField(x: Int) = x.toString - given Enc[Boolean] with - def toCsv(x: Boolean) = List(if x then "true" else "false") + given FieldEncoder[Boolean] with + def encodeField(x: Boolean) = if x then "true" else "false" - given Enc[String] with - def toCsv(x: String) = List(x) + given FieldEncoder[String] with + def encodeField(x: String) = x +end BaseEncoders ``` ## Recursion! Now that all these tools are in place, let's focus on the hard part: implementing the transformation of a tuple with an arbitrary number of elements -into a `List[String]`. Similarly to how you may be used to recurse on lists, on +into a `Row`. Similarly to how you may be used to recurse on lists, on tuples we need to manage two scenarios: the base case (`EmptyTuple`) and the inductive case (`NonEmptyTuple`). @@ -104,28 +112,30 @@ In the following snippet, I prefer to use the [context bound syntax](https://dotty.epfl.ch/docs/reference/contextual/context-bounds.html) even if I need a handle for the instances because it concentrates all the constraints in the type parameter list (and I do not need to come up with any -name). After this personal preference disclaimer, let's see the two cases: +name). After this personal preference disclaimer, let's see the two cases: ```scala -object TupleEnc: +object TupleEncoders: // Base case - given [T: Enc]: Enc[T *: EmptyTuple] with - def toCsv(oneElement: T *: EmptyTuple) = - summon[Enc[T]].toCsv(oneElement.head) + given RowEncoder[EmptyTuple] with + def encodeRow(empty: EmptyTuple) = + List.empty // Inductive case - given [H: Enc, T <: NonEmptyTuple: Enc]: Enc[H *: T] with - def toCsv(tuple: H *: T) = - summon[Enc[H]].toCsv(tuple.head) ++ summon[Enc[T]].toCsv(tuple.tail) + given [H: FieldEncoder, T <: Tuple: RowEncoder]: RowEncoder[H *: T] with + def encodeRow(tuple: H *: T) = + summon[FieldEncoder[H]].encodeField(tuple.head) :: summon[RowEncoder[T]].encodeRow(tuple.tail) +end TupleEncoders ``` -When recursion hits the last element of the tuple, we use its encoder, -otherwise we invoke the encoder for the first element and for the tail of the -tuple and combine the two lists using the concatenation operator. + +If the tuple is empty, we produce an empty list. To encode a non-empty tuple we +invoke the encoder for the first element and we prepend the result to the `Row` +created by the encoder of the tail of the tuple. 
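
To see how the two instances cooperate, it can help to unfold the resolution by
hand for one concrete shape. The explicit `summon` and the imports below are only
for illustration, assuming the instances are kept in the `BaseEncoders` and
`TupleEncoders` objects defined above:

```scala
import BaseEncoders.given, TupleEncoders.given

// (Int, Boolean) is just sugar for Int *: Boolean *: EmptyTuple, so the inductive
// instance fires twice (for Int and for Boolean) and the EmptyTuple instance
// terminates the recursion.
summon[RowEncoder[Int *: Boolean *: EmptyTuple]]
  .encodeRow((42, true)) // List("42", "true")
```
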
We can create an entrypoint function and test this implementation: ```scala -def tupleToCsv[X <: Tuple: Enc](tuple: X): List[String] = - summon[Enc[X]].toCsv(tuple) +def tupleToCsv[X <: Tuple : RowEncoder](tuple: X): List[String] = + summon[RowEncoder[X]].encodeRow(tuple) tupleToCsv(("Bob", 42, false)) // List("Bob", 42, false) ``` From 86d9cba42e986b907fdee9b5d3bb816a9eeadc38 Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Wed, 20 Jan 2021 14:44:50 +0100 Subject: [PATCH 5/7] Correct two sentences --- .../2021-01-15-tuples-bring-generic-programming-to-scala-3.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md index 6b82cc2d2..26fabfbce 100644 --- a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md +++ b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md @@ -43,7 +43,7 @@ case class IceCream(name: String, numCherries: Int, inCone: Boolean) If you are implementing an operation such as serializing instances of these types to CSV or JSON, you will realize that the logic is exactly the same and -you will want to implement it only once. This is equivalent on defining the +you will want to implement it only once. This is equivalent to defining the serialization algorithm for the `(String, Int, Boolean)` HList, assuming that you can map both case classes to it. @@ -96,7 +96,7 @@ object BaseEncoders: def encodeField(x: Boolean) = if x then "true" else "false" given FieldEncoder[String] with - def encodeField(x: String) = x + def encodeField(x: String) = x // Ideally, we should also escape commas and double quotes end BaseEncoders ``` From 3265b5c5aeaca96a4e0fd1849474054eb49b93bd Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Mon, 25 Jan 2021 10:11:17 +0100 Subject: [PATCH 6/7] Clarify intro --- ...es-bring-generic-programming-to-scala-3.md | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md index 26fabfbce..e04634284 100644 --- a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md +++ b/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md @@ -8,7 +8,11 @@ title: Tuples bring generic programming to Scala 3 Tuples allow developers to create new types by associating existing types. In doing so, they are very similar to case classes but unlike them they retain only the structure of the types (e.g., which type is in which order) rather -than giving each element a name. +than giving each element a name. A tuple can also be seen as a *sequence* and +therefore a collection of objects, however, whereas *homogeneous* collections +such as `List[A]` or `Set[A]` accumulate elements retaining only one type +(`A`), tuples are capable of storing data of different types while preserving +the type of each entry. In Scala 3, tuples gain power thanks to new operations, additional type safety and fewer restrictions, pointing in the direction of a construct called @@ -22,12 +26,13 @@ feature, dependent match types, allows the implementation of these operations. # Why generic programming ? -When considering type-safety, HList offer the same guarantees as case classes, -without having to declare class or field names. This makes them more -convenient in some scenarios, for example in return types. 
If we consider -`List`, you can see that `def splitAt(n: Int)` produces a `(List[A], List[A])` -and not a `case class SplitResult(left: List[A], right: List[A])` because of -the cognitive cost of introducing a new name `SplitResult`. +HLists and case classes can both be used to define products of types. However +HLists do not require the developer to declare class or field names. This +makes them more convenient in some scenarios, for example in return types. If +we consider `List`, you can see that `def splitAt(n: Int)` produces a +`(List[A], List[A])` and not a `case class SplitResult(left: List[A], right: +List[A])` because of the cognitive cost of introducing new names +(`SplitResult`, `left` and `right`). Moreover, there are infinitely many case classes which share a common structure, which means that they have the same number and type of fields. We From f643fa95bbf7508722d929ca694ee230cd4358de Mon Sep 17 00:00:00 2001 From: vincenzobaz Date: Wed, 10 Feb 2021 16:55:28 +0100 Subject: [PATCH 7/7] Adapt the intro to show the iterative and teaching approach of the blogpost --- ...-02-10-tuples-bring-generic-programming-to-scala-3.md} | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) rename _posts/{2021-01-15-tuples-bring-generic-programming-to-scala-3.md => 2021-02-10-tuples-bring-generic-programming-to-scala-3.md} (97%) diff --git a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md b/_posts/2021-02-10-tuples-bring-generic-programming-to-scala-3.md similarity index 97% rename from _posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md rename to _posts/2021-02-10-tuples-bring-generic-programming-to-scala-3.md index e04634284..abac0b85d 100644 --- a/_posts/2021-01-15-tuples-bring-generic-programming-to-scala-3.md +++ b/_posts/2021-02-10-tuples-bring-generic-programming-to-scala-3.md @@ -19,10 +19,10 @@ and fewer restrictions, pointing in the direction of a construct called **Heterogeneous Lists** (HLists), one of the core data structures in generic programming. -This post focuses on how tuples in Scala 3 allow to address generic programming -challenges, without external libraries or macros. It will also provide a short -cheat sheet for new tuple operations as well as showing how a new language -feature, dependent match types, allows the implementation of these operations. +In this post I will take you on a tour of the new Tuple API before looking at +how a new language feature, dependent match types, allows to implement such +API. I hope that through the two proposed examples, you will develop an +intuition about the usage and power of a few new exciting features of Scala 3. # Why generic programming ?