diff --git a/_contribute_resources/2-bug-fixes.md b/_contribute_resources/2-bug-fixes.md index 7026e7e15b..7f3e5c2a78 100644 --- a/_contribute_resources/2-bug-fixes.md +++ b/_contribute_resources/2-bug-fixes.md @@ -3,6 +3,6 @@ title: Bug fixes link: /contribute/guide.html icon: fa fa-bug --- -Issues with the tools, core libraries and compiler. Also you can help us by [reporting bugs][bug-reporting-guide]. +Issues with the tools, core libraries and compiler. Also, you can help us by [reporting bugs][bug-reporting-guide]. [bug-reporting-guide]: {% link _overviews/contribute/bug-reporting-guide.md %} diff --git a/_getting-started/intellij-track/getting-started-with-scala-in-intellij.md b/_getting-started/intellij-track/getting-started-with-scala-in-intellij.md index 9b99acabd3..cc7eb73add 100644 --- a/_getting-started/intellij-track/getting-started-with-scala-in-intellij.md +++ b/_getting-started/intellij-track/getting-started-with-scala-in-intellij.md @@ -34,7 +34,7 @@ you'll need to install a Scala SDK. To the right of the Scala SDK field, click the **Create** button. 1. Select the highest version number (e.g. {{ site.scala-version }}) and click **Download**. This might take a few minutes but subsequent projects can use the same SDK. -1. Once the SDK is created and you're back to the "New Project" window click **Finish**. +1. Once the SDK is created, and you're back to the "New Project" window, click **Finish**. ## Writing code diff --git a/_getting-started/intellij-track/testing-scala-in-intellij-with-scalatest.md b/_getting-started/intellij-track/testing-scala-in-intellij-with-scalatest.md index bfc23717f8..1b765824b6 100644 --- a/_getting-started/intellij-track/testing-scala-in-intellij-with-scalatest.md +++ b/_getting-started/intellij-track/testing-scala-in-intellij-with-scalatest.md @@ -28,7 +28,7 @@ This assumes you know [how to build a project in IntelliJ](building-a-scala-proj unrecognized. 1. On the project pane on the left, expand `src` => `main`. 1. 
Right-click on `scala` and select **New** => **Scala class**. -1. Call it `CubeCalculator`, change the **Kind** to `object`, and hit enter or double click on `object`. +1. Call it `CubeCalculator`, change the **Kind** to `object`, and hit enter or double-click on `object`. 1. Replace the code with the following: ``` object CubeCalculator extends App { @@ -41,7 +41,7 @@ This assumes you know [how to build a project in IntelliJ](building-a-scala-proj ## Creating a test 1. On the project pane on the left, expand `src` => `test`. 1. Right-click on `scala` and select **New** => **Scala class**. -1. Name the class `CubeCalculatorTest` and hit enter or double click on `class`. +1. Name the class `CubeCalculatorTest` and hit enter or double-click on `class`. 1. Replace the code with the following: ``` import org.scalatest.funsuite.AnyFunSuite diff --git a/_glossary/index.md b/_glossary/index.md index 3eba952f88..75b5c98081 100644 --- a/_glossary/index.md +++ b/_glossary/index.md @@ -44,7 +44,7 @@ You can assign an object to a variable. Afterwards, the variable will refer to t Extra constructors defined inside the curly braces of the class definition, which look like method definitions named `this`, but with no result type. * #### block -One or more expressions and declarations surrounded by curly braces. When the block evaluates, all of its expressions and declarations are processed in order, and then the block returns the value of the last expression as its own value. Blocks are commonly used as the bodies of functions, [for expressions](#for-expression), `while` loops, and any other place where you want to group a number of statements together. More formally, a block is an encapsulation construct for which you can only see side effects and a result value. The curly braces in which you define a class or object do not, therefore, form a block, because fields and methods (which are defined inside those curly braces) are visible from the out- side. 
Such curly braces form a template. +One or more expressions and declarations surrounded by curly braces. When the block evaluates, all of its expressions and declarations are processed in order, and then the block returns the value of the last expression as its own value. Blocks are commonly used as the bodies of functions, [for expressions](#for-expression), `while` loops, and any other place where you want to group a number of statements together. More formally, a block is an encapsulation construct for which you can only see side effects and a result value. The curly braces in which you define a class or object do not, therefore, form a block, because fields and methods (which are defined inside those curly braces) are visible from the outside. Such curly braces form a template. * #### bound variable A bound variable of an expression is a variable that’s both used and defined inside the expression. For instance, in the function literal expression `(x: Int) => (x, y)`, both variables `x` and `y` are used, but only `x` is bound, because it is defined in the expression as an `Int` and the sole argument to the function described by the expression. @@ -299,7 +299,7 @@ A _self type_ of a trait is the assumed type of `this`, the receiver, to be used XML data is semi-structured. It is more structured than a flat binary file or text file, but it does not have the full structure of a programming language’s data structures. * #### serialization -You can _serialize_ an object into a byte stream which can then be saved to files or transmitted over the network. You can later _deserialize_ the byte stream, even on different computer, and obtain an object that is the same as the original serialized object. +You can _serialize_ an object into a byte stream which can then be saved to a file or transmitted over the network. You can later _deserialize_ the byte stream, even on a different computer, and obtain an object that is the same as the original serialized object. 
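The serialize/deserialize round trip described in that glossary entry can be sketched in a few lines. This is a minimal illustration using Java serialization to an in-memory byte array; the `Point` class and the helper names are made up for the example:

```scala
import java.io._

// Hypothetical example class; Scala case classes are Serializable by default.
@SerialVersionUID(1L)
case class Point(x: Int, y: Int)

object SerializationDemo {
  // Serialize an object into a byte stream (here an in-memory byte array;
  // the same stream could be written to a file or a socket instead).
  def serialize(obj: AnyRef): Array[Byte] = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    try out.writeObject(obj) finally out.close()
    buffer.toByteArray
  }

  // Deserialize the byte stream back into an object.
  def deserialize[T](bytes: Array[Byte]): T = {
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes))
    try in.readObject().asInstanceOf[T] finally in.close()
  }
}
```

Deserializing `serialize(Point(1, 2))` yields a `Point` that is `==` to the original but is a distinct instance, which is exactly the "same as the original" notion the entry describes.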
* #### shadow A new declaration of a local variable _shadows_ one of the same name in an enclosing scope. diff --git a/_overviews/FAQ/index.md b/_overviews/FAQ/index.md index 89595bf4c5..44fb12f398 100644 --- a/_overviews/FAQ/index.md +++ b/_overviews/FAQ/index.md @@ -82,7 +82,7 @@ fatal. opinionated sbt plugin that sets many options automatically, depending on Scala version; you can see [here](https://github.com/DavidGregory084/sbt-tpolecat/blob/master/src/main/scala/io/github/davidgregory084/TpolecatPlugin.scala) -what it sets. Some of the choices it makes are oriented towards +what it sets. Some choices it makes are oriented towards pure-functional programmers. ### How do I find what some symbol means or does? @@ -205,7 +205,7 @@ So for example, a `List[Int]` in Scala code will appear to Java as a appear as type parameters, but couldn't they appear as their boxed equivalents, such as `List[java.lang.Integer]`? -One would hope so, but doing it that way was tried and it proved impossible. +One would hope so, but doing it that way was tried, and it proved impossible. [This SO question](https://stackoverflow.com/questions/11167430/why-are-primitive-types-such-as-int-erased-to-object-in-scala) sadly lacks a concise explanation, but it does link to past discussions. diff --git a/_overviews/FAQ/initialization-order.md b/_overviews/FAQ/initialization-order.md index ece62c6b9f..787281f0db 100644 --- a/_overviews/FAQ/initialization-order.md +++ b/_overviews/FAQ/initialization-order.md @@ -94,7 +94,7 @@ Usually the best answer. Unfortunately you cannot declare an abstract lazy val. 2. Declare an abstract def, and hope subclasses will implement it as a lazy val. If they do not, it will be re-evaluated on every access. 3. Declare a concrete lazy val which throws an exception, and hope subclasses override it. If they do not, it will... throw an exception. 
-An exception during initialization of a lazy val will cause the right hand side to be re-evaluated on the next access: see SLS 5.2. +An exception during initialization of a lazy val will cause the right-hand side to be re-evaluated on the next access: see SLS 5.2. Note that using multiple lazy vals creates a new risk: cycles among lazy vals can result in a stack overflow on first access. diff --git a/_overviews/collections-2.13/arrays.md b/_overviews/collections-2.13/arrays.md index 64d96a95db..b06b4c9361 100644 --- a/_overviews/collections-2.13/arrays.md +++ b/_overviews/collections-2.13/arrays.md @@ -59,9 +59,9 @@ The `ArrayOps` object gets inserted automatically by the implicit conversion. So scala> intArrayOps(a1).reverse res5: Array[Int] = Array(3, 2, 1) -where `intArrayOps` is the implicit conversion that was inserted previously. This raises the question how the compiler picked `intArrayOps` over the other implicit conversion to `ArraySeq` in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. The answer to that question is that the two implicit conversions are prioritized. The `ArrayOps` conversion has a higher priority than the `ArraySeq` conversion. The first is defined in the `Predef` object whereas the second is defined in a class `scala.LowPriorityImplicits`, which is inherited by `Predef`. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in `Predef` is chosen. A very similar scheme works for strings. +where `intArrayOps` is the implicit conversion that was inserted previously. This raises the question of how the compiler picked `intArrayOps` over the other implicit conversion to `ArraySeq` in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. 
The answer to that question is that the two implicit conversions are prioritized. The `ArrayOps` conversion has a higher priority than the `ArraySeq` conversion. The first is defined in the `Predef` object whereas the second is defined in a class `scala.LowPriorityImplicits`, which is inherited by `Predef`. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in `Predef` is chosen. A very similar scheme works for strings. -So now you know how arrays can be compatible with sequences and how they can support all sequence operations. What about genericity? In Java you cannot write a `T[]` where `T` is a type parameter. How then is Scala's `Array[T]` represented? In fact a generic array like `Array[T]` could be at run-time any of Java's eight primitive array types `byte[]`, `short[]`, `char[]`, `int[]`, `long[]`, `float[]`, `double[]`, `boolean[]`, or it could be an array of objects. The only common run-time type encompassing all of these types is `AnyRef` (or, equivalently `java.lang.Object`), so that's the type to which the Scala compiler maps `Array[T]`. At run-time, when an element of an array of type `Array[T]` is accessed or updated there is a sequence of type tests that determine the actual array type, followed by the correct array operation on the Java array. These type tests slow down array operations somewhat. You can expect accesses to generic arrays to be three to four times slower than accesses to primitive or object arrays. This means that if you need maximal performance, you should prefer concrete over generic arrays. Representing the generic array type is not enough, however, there must also be a way to create generic arrays. This is an even harder problem, which requires a little bit of help from you. To illustrate the problem, consider the following attempt to write a generic method that creates an array. 
+So now you know how arrays can be compatible with sequences and how they can support all sequence operations. What about genericity? In Java, you cannot write a `T[]` where `T` is a type parameter. How then is Scala's `Array[T]` represented? In fact a generic array like `Array[T]` could be at run-time any of Java's eight primitive array types `byte[]`, `short[]`, `char[]`, `int[]`, `long[]`, `float[]`, `double[]`, `boolean[]`, or it could be an array of objects. The only common run-time type encompassing all of these types is `AnyRef` (or, equivalently `java.lang.Object`), so that's the type to which the Scala compiler maps `Array[T]`. At run-time, when an element of an array of type `Array[T]` is accessed or updated there is a sequence of type tests that determine the actual array type, followed by the correct array operation on the Java array. These type tests slow down array operations somewhat. You can expect accesses to generic arrays to be three to four times slower than accesses to primitive or object arrays. This means that if you need maximal performance, you should prefer concrete to generic arrays. Representing the generic array type is not enough, however, there must also be a way to create generic arrays. This is an even harder problem, which requires a little help from you. To illustrate the issue, consider the following attempt to write a generic method that creates an array. // this is wrong! def evenElems[T](xs: Vector[T]): Array[T] = { @@ -71,7 +71,7 @@ So now you know how arrays can be compatible with sequences and how they can sup arr } -The `evenElems` method returns a new array that consist of all elements of the argument vector `xs` which are at even positions in the vector. The first line of the body of `evenElems` creates the result array, which has the same element type as the argument. 
So depending on the actual type parameter for `T`, this could be an `Array[Int]`, or an `Array[Boolean]`, or an array of some of the other primitive types in Java, or an array of some reference type. But these types have all different runtime representations, so how is the Scala runtime going to pick the correct one? In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter `T` is erased at runtime. That's why you will get the following error message if you compile the code above: +The `evenElems` method returns a new array that consists of all elements of the argument vector `xs` which are at even positions in the vector. The first line of the body of `evenElems` creates the result array, which has the same element type as the argument. So depending on the actual type parameter for `T`, this could be an `Array[Int]`, or an `Array[Boolean]`, or an array of some other primitive types in Java, or an array of some reference type. But these types all have different runtime representations, so how is the Scala runtime going to pick the correct one? In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter `T` is erased at runtime. That's why you will get the following error message if you compile the code above: error: cannot find class manifest for element type T val arr = new Array[T]((arr.length + 1) / 2) diff --git a/_overviews/collections-2.13/concrete-immutable-collection-classes.md b/_overviews/collections-2.13/concrete-immutable-collection-classes.md index 152605d760..24f1b70648 100644 --- a/_overviews/collections-2.13/concrete-immutable-collection-classes.md +++ b/_overviews/collections-2.13/concrete-immutable-collection-classes.md @@ -84,7 +84,7 @@ the original array’s elements. 
## Vectors We have seen in the previous sections that `List` and `ArraySeq` are efficient data structures in some specific -use cases but they are also inefficient in other use cases: for instance, prepending an element is constant for `List`, +use cases, but they are also inefficient in other use cases: for instance, prepending an element is constant for `List`, but linear for `ArraySeq`, and, conversely, indexed access is constant for `ArraySeq` but linear for `List`. [Vector](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/immutable/Vector.html) is a collection type that provides good performance for all its operations. Vectors allow accessing any element of the sequence in "effectively" constant time. It's a larger constant than for access to the head of a List or for reading an element of an ArraySeq, but it's a constant nonetheless. As a result, algorithms using vectors do not have to be careful about accessing just the head of the sequence. They can access and modify elements at arbitrary locations, and thus they can be much more convenient to write. diff --git a/_overviews/collections-2.13/concrete-mutable-collection-classes.md b/_overviews/collections-2.13/concrete-mutable-collection-classes.md index ddc5daf609..e0d9c5fb19 100644 --- a/_overviews/collections-2.13/concrete-mutable-collection-classes.md +++ b/_overviews/collections-2.13/concrete-mutable-collection-classes.md @@ -107,7 +107,7 @@ It is supported by class [mutable.Stack](https://www.scala-lang.org/api/{{ site. Array sequences are mutable sequences of fixed size which store their elements internally in an `Array[Object]`. They are implemented in Scala by class [ArraySeq](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/mutable/ArraySeq.html). 
-You would typically use an `ArraySeq` if you want an array for its performance characteristics, but you also want to create generic instances of the sequence where you do not know the type of the elements and you do not have a `ClassTag` to provide it at run-time. These issues are explained in the section on [arrays]({% link _overviews/collections-2.13/arrays.md %}). +You would typically use an `ArraySeq` if you want an array for its performance characteristics, but you also want to create generic instances of the sequence where you do not know the type of the elements, and you do not have a `ClassTag` to provide it at run-time. These issues are explained in the section on [arrays]({% link _overviews/collections-2.13/arrays.md %}). ## Hash Tables diff --git a/_overviews/collections-2.13/creating-collections-from-scratch.md b/_overviews/collections-2.13/creating-collections-from-scratch.md index 729b3008f9..660830dd99 100644 --- a/_overviews/collections-2.13/creating-collections-from-scratch.md +++ b/_overviews/collections-2.13/creating-collections-from-scratch.md @@ -41,7 +41,7 @@ Besides `apply`, every collection companion object also defines a member `empty` The operations provided by collection companion objects are summarized in the following table. In short, there's * `concat`, which concatenates an arbitrary number of collections together, -* `fill` and `tabulate`, which generate single or multi-dimensional collections of given dimensions initialized by some expression or tabulating function, +* `fill` and `tabulate`, which generate single or multidimensional collections of given dimensions initialized by some expression or tabulating function, * `range`, which generates integer collections with some constant step length, and * `iterate` and `unfold`, which generates the collection resulting from repeated application of a function to a start element or state. 
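The companion-object factory methods summarized in that list can be sketched concretely. This assumes Scala 2.13, since `unfold` only exists from that version; the value names are illustrative:

```scala
// Illustrative uses of the collection companion-object factory methods.
object CollectionFactories {
  // concat: concatenate an arbitrary number of collections
  val joined = List.concat(List(1, 2), List(3), List(4, 5))   // List(1, 2, 3, 4, 5)

  // fill: evaluate an expression once per element
  val cheers = List.fill(3)("ho")                             // List(ho, ho, ho)

  // tabulate: compute each element from its index
  val squares = Vector.tabulate(5)(i => i * i)                // Vector(0, 1, 4, 9, 16)

  // range: integers with a constant step length
  val stepped = List.range(1, 10, 3)                          // List(1, 4, 7)

  // iterate: repeatedly apply a function to a start element
  val doubled = List.iterate(1, 4)(_ * 2)                     // List(1, 2, 4, 8)

  // unfold: generate elements from successive states until the function returns None
  val countdown = List.unfold(3)(s => if (s > 0) Some((s, s - 1)) else None) // List(3, 2, 1)
}
```

Every collection companion object (`List`, `Vector`, `Set`, ...) provides the same methods, so swapping the receiver changes only the result type.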
diff --git a/_overviews/collections-2.13/equality.md b/_overviews/collections-2.13/equality.md index 3ce85f4815..3f6d249f83 100644 --- a/_overviews/collections-2.13/equality.md +++ b/_overviews/collections-2.13/equality.md @@ -14,7 +14,7 @@ permalink: /overviews/collections-2.13/:title.html The collection libraries have a uniform approach to equality and hashing. The idea is, first, to divide collections into sets, maps, and sequences. Collections in different categories are always unequal. For instance, `Set(1, 2, 3)` is unequal to `List(1, 2, 3)` even though they contain the same elements. On the other hand, within the same category, collections are equal if and only if they have the same elements (for sequences: the same elements in the same order). For example, `List(1, 2, 3) == Vector(1, 2, 3)`, and `HashSet(1, 2) == TreeSet(2, 1)`. -It does not matter for the equality check whether a collection is mutable or immutable. For a mutable collection one simply considers its current elements at the time the equality test is performed. This means that a mutable collection might be equal to different collections at different times, depending what elements are added or removed. This is a potential trap when using a mutable collection as a key in a hashmap. Example: +It does not matter for the equality check whether a collection is mutable or immutable. For a mutable collection one simply considers its current elements at the time the equality test is performed. This means that a mutable collection might be equal to different collections at different times, depending on what elements are added or removed. This is a potential trap when using a mutable collection as a key in a hashmap. 
Example: scala> import collection.mutable.{HashMap, ArrayBuffer} import collection.mutable.{HashMap, ArrayBuffer} diff --git a/_overviews/collections-2.13/introduction.md b/_overviews/collections-2.13/introduction.md index 477f8ffb10..7341e31ff5 100644 --- a/_overviews/collections-2.13/introduction.md +++ b/_overviews/collections-2.13/introduction.md @@ -48,7 +48,7 @@ lines run at first try. **Fast:** Collection operations are tuned and optimized in the libraries. As a result, using collections is typically quite -efficient. You might be able to do a little bit better with carefully +efficient. You might be able to do a little better with carefully hand-tuned data structures and operations, but you might also do a lot worse by making some suboptimal implementation decisions along the way. diff --git a/_overviews/collections-2.13/iterators.md b/_overviews/collections-2.13/iterators.md index 0f88475625..7af6bc9d0c 100644 --- a/_overviews/collections-2.13/iterators.md +++ b/_overviews/collections-2.13/iterators.md @@ -170,7 +170,7 @@ A lazy operation does not immediately compute all of its results. Instead, it co So the expression `(1 to 10).iterator.map(println)` would not print anything to the screen. The `map` method in this case doesn't apply its argument function to the values in the range, it returns a new `Iterator` that will do this as each one is requested. Adding `.toList` to the end of that expression will actually print the elements. -A consequence of this is that a method like `map` or `filter` won't necessarily apply its argument function to all of the input elements. The expression `(1 to 10).iterator.map(println).take(5).toList` would only print the values `1` to `5`, for instance, since those are only ones that will be requested from the `Iterator` returned by `map`. +A consequence of this is that a method like `map` or `filter` won't necessarily apply its argument function to all the input elements. 
The expression `(1 to 10).iterator.map(println).take(5).toList` would only print the values `1` to `5`, for instance, since those are the only ones that will be requested from the `Iterator` returned by `map`. This is one of the reasons why it's important to only use pure functions as arguments to `map`, `filter`, `fold` and similar methods. Remember, a pure function has no side-effects, so one would not normally use `println` in a `map`. `println` is used to demonstrate laziness as it's not normally visible with pure functions. diff --git a/_overviews/collections-2.13/trait-iterable.md b/_overviews/collections-2.13/trait-iterable.md index bd219ec746..4e67903189 100644 --- a/_overviews/collections-2.13/trait-iterable.md +++ b/_overviews/collections-2.13/trait-iterable.md @@ -144,6 +144,6 @@ Two more methods exist in `Iterable` that return iterators: `grouped` and `slidi In the inheritance hierarchy below `Iterable` you find three traits: [Seq](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/Seq.html), [Set](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/Set.html), and [Map](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/Map.html). `Seq` and `Map` implement the [PartialFunction](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/PartialFunction.html) trait with its `apply` and `isDefinedAt` methods, each implemented differently. `Set` gets its `apply` method from [SetOps](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/collection/SetOps.html). -For sequences, `apply` is positional indexing, where elements are always numbered from `0`. That is, `Seq(1, 2, 3)(1)` gives `2`. For sets, `apply` is a membership test. For instance, `Set('a', 'b', 'c')('b')` gives `true` whereas `Set()('a')` gives `false`. Finally for maps, `apply` is a selection. For instance, `Map('a' -> 1, 'b' -> 10, 'c' -> 100)('b')` gives `10`. 
+For sequences, `apply` is positional indexing, where elements are always numbered from `0`. That is, `Seq(1, 2, 3)(1)` gives `2`. For sets, `apply` is a membership test. For instance, `Set('a', 'b', 'c')('b')` gives `true` whereas `Set()('a')` gives `false`. Finally, for maps, `apply` is a selection. For instance, `Map('a' -> 1, 'b' -> 10, 'c' -> 100)('b')` gives `10`. In the following, we will explain each of the three kinds of collections in more detail. diff --git a/_overviews/collections-2.13/views.md b/_overviews/collections-2.13/views.md index 0b0e3f2c1e..dbaad18128 100644 --- a/_overviews/collections-2.13/views.md +++ b/_overviews/collections-2.13/views.md @@ -87,7 +87,7 @@ The main reason for using views is performance. You have seen that by switching def isPalindrome(x: String) = x == x.reverse def findPalindrome(s: Seq[String]) = s find isPalindrome -Now, assume you have a very long sequence words and you want to find a palindrome in the first million words of that sequence. Can you re-use the definition of `findPalindrome`? Of course, you could write: +Now, assume you have a very long sequence words, and you want to find a palindrome in the first million words of that sequence. Can you re-use the definition of `findPalindrome`? Of course, you could write: findPalindrome(words take 1000000) diff --git a/_overviews/collections/arrays.md b/_overviews/collections/arrays.md index 019ac91248..637806b014 100644 --- a/_overviews/collections/arrays.md +++ b/_overviews/collections/arrays.md @@ -24,7 +24,7 @@ permalink: /overviews/collections/:title.html Given that Scala arrays are represented just like Java arrays, how can these additional features be supported in Scala? In fact, the answer to this question differs between Scala 2.8 and earlier versions. Previously, the Scala compiler somewhat "magically" wrapped and unwrapped arrays to and from `Seq` objects when required in a process called boxing and unboxing. 
The details of this were quite complicated, in particular when one created a new array of generic type `Array[T]`. There were some puzzling corner cases and the performance of array operations was not all that predictable. -The Scala 2.8 design is much simpler. Almost all compiler magic is gone. Instead the Scala 2.8 array implementation makes systematic use of implicit conversions. In Scala 2.8 an array does not pretend to _be_ a sequence. It can't really be that because the data type representation of a native array is not a subtype of `Seq`. Instead there is an implicit "wrapping" conversion between arrays and instances of class `scala.collection.mutable.WrappedArray`, which is a subclass of `Seq`. Here you see it in action: +The Scala 2.8 design is much simpler. Almost all compiler magic is gone. Instead, the Scala 2.8 array implementation makes systematic use of implicit conversions. In Scala 2.8 an array does not pretend to _be_ a sequence. It can't really be that because the data type representation of a native array is not a subtype of `Seq`. Instead, there is an implicit "wrapping" conversion between arrays and instances of class `scala.collection.mutable.WrappedArray`, which is a subclass of `Seq`. Here you see it in action: scala> val seq: Seq[Int] = a1 seq: Seq[Int] = WrappedArray(1, 2, 3) @@ -60,9 +60,9 @@ The `ArrayOps` object gets inserted automatically by the implicit conversion. So scala> intArrayOps(a1).reverse res5: Array[Int] = Array(3, 2, 1) -where `intArrayOps` is the implicit conversion that was inserted previously. This raises the question how the compiler picked `intArrayOps` over the other implicit conversion to `WrappedArray` in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. The answer to that question is that the two implicit conversions are prioritized. The `ArrayOps` conversion has a higher priority than the `WrappedArray` conversion. 
The first is defined in the `Predef` object whereas the second is defined in a class `scala.LowPriorityImplicits`, which is inherited by `Predef`. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in `Predef` is chosen. A very similar scheme works for strings. +where `intArrayOps` is the implicit conversion that was inserted previously. This raises the question of how the compiler picked `intArrayOps` over the other implicit conversion to `WrappedArray` in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. The answer to that question is that the two implicit conversions are prioritized. The `ArrayOps` conversion has a higher priority than the `WrappedArray` conversion. The first is defined in the `Predef` object whereas the second is defined in a class `scala.LowPriorityImplicits`, which is inherited by `Predef`. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in `Predef` is chosen. A very similar scheme works for strings. -So now you know how arrays can be compatible with sequences and how they can support all sequence operations. What about genericity? In Java you cannot write a `T[]` where `T` is a type parameter. How then is Scala's `Array[T]` represented? In fact a generic array like `Array[T]` could be at run-time any of Java's eight primitive array types `byte[]`, `short[]`, `char[]`, `int[]`, `long[]`, `float[]`, `double[]`, `boolean[]`, or it could be an array of objects. The only common run-time type encompassing all of these types is `AnyRef` (or, equivalently `java.lang.Object`), so that's the type to which the Scala compiler maps `Array[T]`. 
At run-time, when an element of an array of type `Array[T]` is accessed or updated there is a sequence of type tests that determine the actual array type, followed by the correct array operation on the Java array. These type tests slow down array operations somewhat. You can expect accesses to generic arrays to be three to four times slower than accesses to primitive or object arrays. This means that if you need maximal performance, you should prefer concrete over generic arrays. Representing the generic array type is not enough, however, there must also be a way to create generic arrays. This is an even harder problem, which requires a little bit of help from you. To illustrate the problem, consider the following attempt to write a generic method that creates an array. +So now you know how arrays can be compatible with sequences and how they can support all sequence operations. What about genericity? In Java, you cannot write a `T[]` where `T` is a type parameter. How then is Scala's `Array[T]` represented? In fact a generic array like `Array[T]` could be at run-time any of Java's eight primitive array types `byte[]`, `short[]`, `char[]`, `int[]`, `long[]`, `float[]`, `double[]`, `boolean[]`, or it could be an array of objects. The only common run-time type encompassing all of these types is `AnyRef` (or, equivalently `java.lang.Object`), so that's the type to which the Scala compiler maps `Array[T]`. At run-time, when an element of an array of type `Array[T]` is accessed or updated there is a sequence of type tests that determine the actual array type, followed by the correct array operation on the Java array. These type tests slow down array operations somewhat. You can expect accesses to generic arrays to be three to four times slower than accesses to primitive or object arrays. This means that if you need maximal performance, you should prefer concrete to generic arrays. 
Representing the generic array type is not enough, however, there must also be a way to create generic arrays. This is an even harder problem, which requires a little help from you. To illustrate the issue, consider the following attempt to write a generic method that creates an array. // this is wrong! def evenElems[T](xs: Vector[T]): Array[T] = { @@ -72,7 +72,7 @@ So now you know how arrays can be compatible with sequences and how they can sup arr } -The `evenElems` method returns a new array that consist of all elements of the argument vector `xs` which are at even positions in the vector. The first line of the body of `evenElems` creates the result array, which has the same element type as the argument. So depending on the actual type parameter for `T`, this could be an `Array[Int]`, or an `Array[Boolean]`, or an array of some of the other primitive types in Java, or an array of some reference type. But these types have all different runtime representations, so how is the Scala runtime going to pick the correct one? In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter `T` is erased at runtime. That's why you will get the following error message if you compile the code above: +The `evenElems` method returns a new array that consists of all elements of the argument vector `xs` which are at even positions in the vector. The first line of the body of `evenElems` creates the result array, which has the same element type as the argument. So depending on the actual type parameter for `T`, this could be an `Array[Int]`, or an `Array[Boolean]`, or an array of some other primitive types in Java, or an array of some reference type. But these types all have different runtime representations, so how is the Scala runtime going to pick the correct one?
In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter `T` is erased at runtime. That's why you will get the following error message if you compile the code above: error: cannot find class manifest for element type T val arr = new Array[T]((arr.length + 1) / 2) diff --git a/_overviews/collections/concrete-immutable-collection-classes.md b/_overviews/collections/concrete-immutable-collection-classes.md index 95a76570d1..6324128e48 100644 --- a/_overviews/collections/concrete-immutable-collection-classes.md +++ b/_overviews/collections/concrete-immutable-collection-classes.md @@ -19,7 +19,7 @@ A [List](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/colle Lists have always been the workhorse for Scala programming, so not much needs to be said about them here. The major change in 2.8 is that the `List` class together with its subclass `::` and its subobject `Nil` is now defined in package `scala.collection.immutable`, where it logically belongs. There are still aliases for `List`, `Nil`, and `::` in the `scala` package, so from a user perspective, lists can be accessed as before. -Another change is that lists now integrate more closely into the collections framework, and are less of a special case than before. For instance all of the numerous methods that originally lived in the `List` companion object have been deprecated. They are replaced by the [uniform creation methods]({{ site.baseurl }}/overviews/collections/creating-collections-from-scratch.html) inherited by every collection. +Another change is that lists now integrate more closely into the collections framework, and are less of a special case than before. For instance all the numerous methods that originally lived in the `List` companion object have been deprecated. 
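The "cannot find class manifest" error shown above has a conventional fix: demand runtime type evidence for `T`. A sketch using a `ClassTag` context bound (the modern equivalent of a class manifest):

```scala
import scala.reflect.ClassTag

// With a ClassTag in scope, `new Array[T]` can pick the right
// runtime representation (int[], boolean[], Object[], ...).
def evenElems[T: ClassTag](xs: Vector[T]): Array[T] = {
  val arr = new Array[T]((xs.length + 1) / 2)
  for (i <- 0 until xs.length by 2)
    arr(i / 2) = xs(i)
  arr
}

assert(evenElems(Vector(1, 2, 3, 4, 5)).toList == List(1, 3, 5))
assert(evenElems(Vector("a", "b", "c")).toList == List("a", "c"))
```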
They are replaced by the [uniform creation methods]({{ site.baseurl }}/overviews/collections/creating-collections-from-scratch.html) inherited by every collection. ## Streams diff --git a/_overviews/collections/concrete-mutable-collection-classes.md b/_overviews/collections/concrete-mutable-collection-classes.md index bc7bf02567..108b531c9a 100644 --- a/_overviews/collections/concrete-mutable-collection-classes.md +++ b/_overviews/collections/concrete-mutable-collection-classes.md @@ -54,7 +54,7 @@ Just like an array buffer is useful for building arrays, and a list buffer is us ## Linked Lists -Linked lists are mutable sequences that consist of nodes which are linked with next pointers. They are supported by class [LinkedList](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/mutable/LinkedList.html). In most languages `null` would be picked as the empty linked list. That does not work for Scala collections, because even empty sequences must support all sequence methods. In particular `LinkedList.empty.isEmpty` should return `true` and not throw a `NullPointerException`. Empty linked lists are encoded instead in a special way: Their `next` field points back to the node itself. Like their immutable cousins, linked lists are best traversed sequentially. In addition linked lists make it easy to insert an element or linked list into another linked list. +Linked lists are mutable sequences that consist of nodes which are linked with next pointers. They are supported by class [LinkedList](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/mutable/LinkedList.html). In most languages `null` would be picked as the empty linked list. That does not work for Scala collections, because even empty sequences must support all sequence methods. In particular `LinkedList.empty.isEmpty` should return `true` and not throw a `NullPointerException`. 
Empty linked lists are encoded instead in a special way: Their `next` field points back to the node itself. Like their immutable cousins, linked lists are best traversed sequentially. In addition, linked lists make it easy to insert an element or linked list into another linked list. ## Double Linked Lists @@ -85,7 +85,7 @@ Scala provides mutable queues in addition to immutable ones. You use a `mQueue` Array sequences are mutable sequences of fixed size which store their elements internally in an `Array[Object]`. They are implemented in Scala by class [ArraySeq](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/mutable/ArraySeq.html). -You would typically use an `ArraySeq` if you want an array for its performance characteristics, but you also want to create generic instances of the sequence where you do not know the type of the elements and you do not have a `ClassTag` to provide it at run-time. These issues are explained in the section on [arrays]({{ site.baseurl }}/overviews/collections/arrays.html). +You would typically use an `ArraySeq` if you want an array for its performance characteristics, but you also want to create generic instances of the sequence where you do not know the type of the elements, and you do not have a `ClassTag` to provide it at run-time. These issues are explained in the section on [arrays]({{ site.baseurl }}/overviews/collections/arrays.html). ## Stacks diff --git a/_overviews/collections/creating-collections-from-scratch.md b/_overviews/collections/creating-collections-from-scratch.md index a7c1a7ff5b..2468bf9e27 100644 --- a/_overviews/collections/creating-collections-from-scratch.md +++ b/_overviews/collections/creating-collections-from-scratch.md @@ -40,7 +40,7 @@ Besides `apply`, every collection companion object also defines a member `empty` Descendants of `Seq` classes provide also other factory operations in their companion objects. These are summarized in the following table. 
In short, there's * `concat`, which concatenates an arbitrary number of traversables together, -* `fill` and `tabulate`, which generate single or multi-dimensional sequences of given dimensions initialized by some expression or tabulating function, +* `fill` and `tabulate`, which generate single or multidimensional sequences of given dimensions initialized by some expression or tabulating function, * `range`, which generates integer sequences with some constant step length, and * `iterate`, which generates the sequence resulting from repeated application of a function to a start element. diff --git a/_overviews/collections/equality.md b/_overviews/collections/equality.md index c949d7aac5..bb9abc6f06 100644 --- a/_overviews/collections/equality.md +++ b/_overviews/collections/equality.md @@ -13,7 +13,7 @@ permalink: /overviews/collections/:title.html The collection libraries have a uniform approach to equality and hashing. The idea is, first, to divide collections into sets, maps, and sequences. Collections in different categories are always unequal. For instance, `Set(1, 2, 3)` is unequal to `List(1, 2, 3)` even though they contain the same elements. On the other hand, within the same category, collections are equal if and only if they have the same elements (for sequences: the same elements in the same order). For example, `List(1, 2, 3) == Vector(1, 2, 3)`, and `HashSet(1, 2) == TreeSet(2, 1)`. -It does not matter for the equality check whether a collection is mutable or immutable. For a mutable collection one simply considers its current elements at the time the equality test is performed. This means that a mutable collection might be equal to different collections at different times, depending what elements are added or removed. This is a potential trap when using a mutable collection as a key in a hashmap. Example: +It does not matter for the equality check whether a collection is mutable or immutable. 
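The category-based equality rules described above can be checked directly; a small sketch:

```scala
import scala.collection.immutable.{HashSet, TreeSet}

// Same category (sequences), same elements in the same order:
assert(List(1, 2, 3) == Vector(1, 2, 3))
// Same category (sets); element order is irrelevant:
assert(HashSet(1, 2) == TreeSet(2, 1))
// Different categories are never equal, even with the same elements:
assert(Set(1, 2, 3) != List(1, 2, 3))
```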
For a mutable collection one simply considers its current elements at the time the equality test is performed. This means that a mutable collection might be equal to different collections at different times, depending on what elements are added or removed. This is a potential trap when using a mutable collection as a key in a hashmap. Example: scala> import collection.mutable.{HashMap, ArrayBuffer} import collection.mutable.{HashMap, ArrayBuffer} diff --git a/_overviews/collections/introduction.md b/_overviews/collections/introduction.md index d61806d127..5fc2e3f301 100644 --- a/_overviews/collections/introduction.md +++ b/_overviews/collections/introduction.md @@ -55,7 +55,7 @@ lines run at first try. **Fast:** Collection operations are tuned and optimized in the libraries. As a result, using collections is typically quite -efficient. You might be able to do a little bit better with carefully +efficient. You might be able to do a little better with carefully hand-tuned data structures and operations, but you might also do a lot worse by making some suboptimal implementation decisions along the way. What's more, collections have been recently adapted to parallel diff --git a/_overviews/collections/iterators.md b/_overviews/collections/iterators.md index f08e65d5a3..78dfcc69f0 100644 --- a/_overviews/collections/iterators.md +++ b/_overviews/collections/iterators.md @@ -26,7 +26,7 @@ As always, for-expressions can be used as an alternate syntax for expressions in for (elem <- it) println(elem) -There's an important difference between the foreach method on iterators and the same method on traversable collections: When called on an iterator, `foreach` will leave the iterator at its end when it is done. So calling `next` again on the same iterator will fail with a `NoSuchElementException`. 
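Traversing an iterator, as in the for-expression above, consumes it; a sketch of that behavior:

```scala
val it = Iterator(1, 2, 3)
it.foreach(_ => ())      // traverses the iterator to its end

// The iterator is now exhausted; another `next` would throw
// a NoSuchElementException.
assert(!it.hasNext)
```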
By contrast, when called on a collection, `foreach` leaves the number of elements in the collection unchanged (unless the passed function adds to removes elements, but this is discouraged, because it may lead to surprising results). +There's an important difference between the foreach method on iterators and the same method on traversable collections: When called on an iterator, `foreach` will leave the iterator at its end when it is done. So calling `next` again on the same iterator will fail with a `NoSuchElementException`. By contrast, when called on a collection, `foreach` leaves the number of elements in the collection unchanged (unless the passed function adds or removes elements, but this is discouraged, because it may lead to surprising results). The other operations that Iterator has in common with `Traversable` have the same property. For instance, iterators provide a `map` method, which returns a new iterator: @@ -166,7 +166,7 @@ A lazy operation does not immediately compute all of its results. Instead, it co So the expression `(1 to 10).iterator.map(println)` would not print anything to the screen. The `map` method in this case doesn't apply its argument function to the values in the range, it returns a new `Iterator` that will do this as each one is requested. Adding `.toList` to the end of that expression will actually print the elements. -A consequence of this is that a method like `map` or `filter` won't necessarily apply its argument function to all of the input elements. The expression `(1 to 10).iterator.map(println).take(5).toList` would only print the values `1` to `5`, for instance, since those are only ones that will be requested from the `Iterator` returned by `map`. +A consequence of this is that a method like `map` or `filter` won't necessarily apply its argument function to all the input elements. 
The expression `(1 to 10).iterator.map(println).take(5).toList` would only print the values `1` to `5`, for instance, since those are the only ones that will be requested from the `Iterator` returned by `map`. This is one of the reasons why it's important to only use pure functions as arguments to `map`, `filter`, `fold` and similar methods. Remember, a pure function has no side-effects, so one would not normally use `println` in a `map`. `println` is used to demonstrate laziness as it's not normally visible with pure functions. diff --git a/_overviews/collections/migrating-from-scala-27.md b/_overviews/collections/migrating-from-scala-27.md index d621c78899..5e1efc7822 100644 --- a/_overviews/collections/migrating-from-scala-27.md +++ b/_overviews/collections/migrating-from-scala-27.md @@ -12,7 +12,7 @@ permalink: /overviews/collections/:title.html Porting your existing Scala applications to use the new collections should be almost automatic. There are only a couple of possible issues to take care of. -Generally, the old functionality of Scala 2.7 collections has been left in place. Some features have been deprecated, which means they will removed in some future release. You will get a _deprecation warning_ when you compile code that makes use of these features in Scala 2.8. In a few places deprecation was unfeasible, because the operation in question was retained in 2.8, but changed in meaning or performance characteristics. These cases will be flagged with _migration warnings_ when compiled under 2.8. To get full deprecation and migration warnings with suggestions how to change your code, pass the `-deprecation` and `-Xmigration` flags to `scalac` (note that `-Xmigration` is an extended option, so it starts with an `X`). You can also pass the same options to the `scala` REPL to get the warnings in an interactive session. Example: +Generally, the old functionality of Scala 2.7 collections has been left in place.
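The laziness discussed above can be made visible by counting side effects instead of printing; a small sketch:

```scala
// Counting how many elements `map` actually touches shows that only
// the requested elements are ever computed.
var touched = 0
val result = (1 to 10).iterator.map { i => touched += 1; i }.take(5).toList

assert(result == List(1, 2, 3, 4, 5))
assert(touched == 5)   // the remaining five elements were never mapped
```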
Some features have been deprecated, which means they will be removed in some future release. You will get a _deprecation warning_ when you compile code that makes use of these features in Scala 2.8. In a few places deprecation was unfeasible, because the operation in question was retained in 2.8, but changed in meaning or performance characteristics. These cases will be flagged with _migration warnings_ when compiled under 2.8. To get full deprecation and migration warnings with suggestions on how to change your code, pass the `-deprecation` and `-Xmigration` flags to `scalac` (note that `-Xmigration` is an extended option, so it starts with an `X`). You can also pass the same options to the `scala` REPL to get the warnings in an interactive session. Example: >scala -deprecation -Xmigration Welcome to Scala version 2.8.0.final @@ -38,7 +38,7 @@ Generally, the old functionality of Scala 2.7 collections has been left in place There are two parts of the old libraries which have been replaced wholesale, and for which deprecation warnings were not feasible. -1. The previous `scala.collection.jcl` package is gone. This package tried to mimick some of the Java collection library design in Scala, but in doing so broke many symmetries. Most people who wanted Java collections bypassed `jcl` and used `java.util` directly. Scala 2.8 offers automatic conversion mechanisms between both collection libraries in the [JavaConversions]({{ site.baseurl }}/overviews/collections/conversions-between-java-and-scala-collections.html) object which replaces the `jcl` package. +1. The previous `scala.collection.jcl` package is gone. This package tried to mimic aspects of the Java collection library design in Scala, but in doing so broke many symmetries. Most people who wanted Java collections bypassed `jcl` and used `java.util` directly.
Scala 2.8 offers automatic conversion mechanisms between both collection libraries in the [JavaConversions]({{ site.baseurl }}/overviews/collections/conversions-between-java-and-scala-collections.html) object which replaces the `jcl` package. 2. Projections have been generalized and cleaned up and are now available as views. It seems that projections were used rarely, so not much code should be affected by this change. So, if your code uses either `jcl` or projections there might be some minor rewriting to do. diff --git a/_overviews/collections/trait-iterable.md b/_overviews/collections/trait-iterable.md index abc8051703..ac72783f41 100644 --- a/_overviews/collections/trait-iterable.md +++ b/_overviews/collections/trait-iterable.md @@ -62,6 +62,6 @@ Trait `Iterable` also adds some other methods to `Traversable` that can be imple In the inheritance hierarchy below Iterable you find three traits: [Seq](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/Seq.html), [Set](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/Set.html), and [Map](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/Map.html). `Seq` and `Map` implement the [PartialFunction](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/PartialFunction.html) trait with its `apply` and `isDefinedAt` methods, each implemented differently. `Set` gets its `apply` method from [GenSetLike](https://www.scala-lang.org/api/{{ site.scala-212-version }}/scala/collection/GenSetLike.html). -For sequences, `apply` is positional indexing, where elements are always numbered from `0`. That is, `Seq(1, 2, 3)(1)` gives `2`. For sets, `apply` is a membership test. For instance, `Set('a', 'b', 'c')('b')` gives `true` whereas `Set()('a')` gives `false`. Finally for maps, `apply` is a selection. For instance, `Map('a' -> 1, 'b' -> 10, 'c' -> 100)('b')` gives `10`. 
+For sequences, `apply` is positional indexing, where elements are always numbered from `0`. That is, `Seq(1, 2, 3)(1)` gives `2`. For sets, `apply` is a membership test. For instance, `Set('a', 'b', 'c')('b')` gives `true` whereas `Set()('a')` gives `false`. Finally, for maps, `apply` is a selection. For instance, `Map('a' -> 1, 'b' -> 10, 'c' -> 100)('b')` gives `10`. In the following, we will explain each of the three kinds of collections in more detail. diff --git a/_overviews/collections/trait-traversable.md b/_overviews/collections/trait-traversable.md index 11aaa6b349..d2173cb789 100644 --- a/_overviews/collections/trait-traversable.md +++ b/_overviews/collections/trait-traversable.md @@ -25,7 +25,7 @@ The `foreach` method is meant to traverse all elements of the collection, and ap * **Conversions** `toArray`, `toList`, `toIterable`, `toSeq`, `toIndexedSeq`, `toStream`, `toSet`, `toMap`, which turn a `Traversable` collection into something more specific. All these conversions return their receiver argument unchanged if the run-time type of the collection already matches the demanded collection type. For instance, applying `toList` to a list will yield the list itself. * **Copying operations** `copyToBuffer` and `copyToArray`. As their names imply, these copy collection elements to a buffer or array, respectively. * **Size info** operations `isEmpty`, `nonEmpty`, `size`, and `hasDefiniteSize`: Traversable collections can be finite or infinite. An example of an infinite traversable collection is the stream of natural numbers `Stream.from(0)`. The method `hasDefiniteSize` indicates whether a collection is possibly infinite. If `hasDefiniteSize` returns true, the collection is certainly finite. If it returns false, the collection has not been fully elaborated yet, so it might be infinite or finite. -* **Element retrieval** operations `head`, `last`, `headOption`, `lastOption`, and `find`. 
These select the first or last element of a collection, or else the first element matching a condition. Note, however, that not all collections have a well-defined meaning of what "first" and "last" means. For instance, a hash set might store elements according to their hash keys, which might change from run to run. In that case, the "first" element of a hash set could also be different for every run of a program. A collection is _ordered_ if it always yields its elements in the same order. Most collections are ordered, but some (_e.g._ hash sets) are not-- dropping the ordering gives a little bit of extra efficiency. Ordering is often essential to give reproducible tests and to help in debugging. That's why Scala collections give ordered alternatives for all collection types. For instance, the ordered alternative for `HashSet` is `LinkedHashSet`. +* **Element retrieval** operations `head`, `last`, `headOption`, `lastOption`, and `find`. These select the first or last element of a collection, or else the first element matching a condition. Note, however, that not all collections have a well-defined meaning of what "first" and "last" means. For instance, a hash set might store elements according to their hash keys, which might change from run to run. In that case, the "first" element of a hash set could also be different for every run of a program. A collection is _ordered_ if it always yields its elements in the same order. Most collections are ordered, but some (_e.g._ hash sets) are not-- dropping the ordering gives a little extra efficiency. Ordering is often essential to give reproducible tests and to help in debugging. That's why Scala collections give ordered alternatives for all collection types. For instance, the ordered alternative for `HashSet` is `LinkedHashSet`. * **Sub-collection retrieval operations** `tail`, `init`, `slice`, `take`, `drop`, `takeWhile`, `dropWhile`, `filter`, `filterNot`, `withFilter`. 
These all return some sub-collection identified by an index range or some predicate. * **Subdivision operations** `splitAt`, `span`, `partition`, `groupBy`, which split the elements of this collection into several sub-collections. * **Element tests** `exists`, `forall`, `count` which test collection elements with a given predicate. diff --git a/_overviews/collections/views.md b/_overviews/collections/views.md index dd3c128657..1798d77cf4 100644 --- a/_overviews/collections/views.md +++ b/_overviews/collections/views.md @@ -73,7 +73,7 @@ There are two reasons why you might want to consider using views. The first is p def isPalindrome(x: String) = x == x.reverse def findPalindrome(s: Seq[String]) = s find isPalindrome -Now, assume you have a very long sequence words and you want to find a palindrome in the first million words of that sequence. Can you re-use the definition of `findPalindrome`? Of course, you could write: +Now, assume you have a very long sequence of words, and you want to find a palindrome in the first million words of that sequence. Can you re-use the definition of `findPalindrome`? Of course, you could write: findPalindrome(words take 1000000) diff --git a/_overviews/contribute/add-guides.md b/_overviews/contribute/add-guides.md index 5d4f6622e2..f52764736e 100644 --- a/_overviews/contribute/add-guides.md +++ b/_overviews/contribute/add-guides.md @@ -12,7 +12,7 @@ involved with complex tools like the compiler. ## Architecture -This documentation website is backed by an open-source [github repository](https://github.com/scala/docs.scala-lang), +This documentation website is backed by an open-source [GitHub repository](https://github.com/scala/docs.scala-lang), and is always contribution-ready. ### Content @@ -34,10 +34,10 @@ The website is statically generated from [Markdown](https://en.wikipedia.org/wik This workflow was chosen to help contributors to focus on writing helpful content, rather than on configuration and boilerplate. 
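The sub-collection retrieval and subdivision operations listed earlier can be sketched in a few lines:

```scala
val xs = List(1, 2, 3, 4, 5)

// Sub-collection retrieval: index ranges and predicates.
assert(xs.take(2) == List(1, 2))
assert(xs.dropWhile(_ < 3) == List(3, 4, 5))
assert(xs.filter(_ % 2 == 1) == List(1, 3, 5))

// Subdivision: one traversal, two results.
assert(xs.span(_ < 3) == (List(1, 2), List(3, 4, 5)))
assert(xs.partition(_ % 2 == 0) == (List(2, 4), List(1, 3, 5)))
```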
It also aids publishing a static site in a central location. -The markdown syntax being used supports [Maruku](https://github.com/bhollis/maruku) extensions, and has automatic +The Markdown syntax being used supports [Maruku](https://github.com/bhollis/maruku) extensions, and has automatic syntax highlighting, without the need for any tags. -Additionally [mdoc](https://github.com/scalameta/mdoc) is used during pull requests to validate Scala code blocks. +Additionally, [mdoc](https://github.com/scalameta/mdoc) is used during pull requests to validate Scala code blocks. To use this feature you must use the backtick notation as documented by mdoc, [see here](#code-blocks) for an example. @@ -271,7 +271,7 @@ the metadata of `/tutorials.md`. e.g. it could look like icon: code --- -You must also add the tutorial to the drop down list in the navigation bar. To do this, add an extra entry to +You must also add the tutorial to the drop-down list in the navigation bar. To do this, add an extra entry to `_data/doc-nav-header.yml`. i.e. --- @@ -290,7 +290,7 @@ You must also add the tutorial to the drop down list in the navigation bar. To d ### Cheatsheets -Cheatsheets have a special layout, and the content is expected to be a markdown table. To contribute a cheatsheet, +Cheatsheets have a special layout, and the content is expected to be a Markdown table. To contribute a cheatsheet, you should use the following format: --- diff --git a/_overviews/contribute/bug-reporting-guide.md b/_overviews/contribute/bug-reporting-guide.md index 6830c18ab9..41f349016e 100644 --- a/_overviews/contribute/bug-reporting-guide.md +++ b/_overviews/contribute/bug-reporting-guide.md @@ -34,7 +34,7 @@ If you have a code snippet that is resulting in bytecode which you believe is be 1. Gradually remove parts from the original failing code snippet until you believe you have the simplest representation of your problem. - 2. 
Ensure that you have decoupled your code snippet from any library that could be introducing the incorrect behavior. One way to achieve this is to try to recompile the offending code snippet in isolation, outside of the context of any complex build environment. If your code depends on some strictly Java library and source code is available for it, make sure that the latter is also minimized. + 2. Ensure that you have decoupled your code snippet from any library that could be introducing the incorrect behavior. One way to achieve this is to try to recompile the offending code snippet in isolation, outside the context of any complex build environment. If your code depends on some strictly Java library and source code is available for it, make sure that the latter is also minimized. 3. Make sure you are compiling your project from a clean slate. Your problem could be related to separate compilation, which is difficult to detect without a clean build with new `.class` files. @@ -42,7 +42,7 @@ If you have a code snippet that is resulting in bytecode which you believe is be 5. If you want to file an improvement in the issue tracker, please discuss it first on one of the mailing lists. They offer a much bigger audience than the issue tracker. The latter is not suitable for long discussions. -* Keep in mind that the behavior you are witnessing could be intended.
Good formal resources for verifying whether the language behavior is intended are either the [Scala Improvement Proposal Documents][sips] or the [Scala Language Specification](https://www.scala-lang.org/files/archive/spec/2.13/). If in doubt, you may always ask on the [Community Category](https://contributors.scala-lang.org/c/community) or [Stack Overflow](https://stackoverflow.com/questions/tagged/scala). In general, if you find yourself stuck on any of these steps, asking on [Scala Contributors](https://contributors.scala-lang.org/) can be helpful: diff --git a/_overviews/contribute/codereviews.md b/_overviews/contribute/codereviews.md index be8e66ba46..c4c44c9981 100644 --- a/_overviews/contribute/codereviews.md +++ b/_overviews/contribute/codereviews.md @@ -19,7 +19,7 @@ own pull requests. * Attach comments to particular lines or regions they pertain to whenever possible. * Short code examples are often more descriptive than prose. * If you have thoroughly reviewed the PR and thought through all angles, LGTM (Looks Good To Me) is the preferred acceptance response. -* Additional reviews should try to offer additional insights: "I also thought about it from this angle and it still looks good.." +* Additional reviews should try to offer additional insights: "I also thought about it from this angle, and it still looks good." * Above all, remember that the people you are reviewing might be reviewing your PRs one day too. * If you are receiving the review, consider that the advice is being given to make you, and Scala, better rather than as a negative critique. Assume the best, rather than the worst. @@ -39,7 +39,7 @@ own pull requests.

scala/scala-lang

-The Scala language web site.
+The Scala language website.

scala/docs.scala-lang.org

diff --git a/_overviews/contribute/documentation.md b/_overviews/contribute/documentation.md index bd93b7fbd3..469396e40c 100644 --- a/_overviews/contribute/documentation.md +++ b/_overviews/contribute/documentation.md @@ -22,7 +22,7 @@ The Scala API documentation lives with the scala project source code. There are * [Log issues for missing scaladoc documentation][report-api-doc-bugs] - Please *follow the issue submission process closely* to help prevent duplicate issues being created. -* [Claim Scaladoc Issues and Provide Documentation][scala-standard-library-api-documentation] - please claim issues prior to working on a specific scaladoc task to prevent duplication of effort. If you sit on an issue for too long without submitting a pull request, it will revert back to unassigned and you will need to re-claim it. +* [Claim Scaladoc Issues and Provide Documentation][scala-standard-library-api-documentation] - please claim issues prior to working on a specific scaladoc task to prevent duplication of effort. If you sit on an issue for too long without submitting a pull request, it will revert to unassigned, and you will need to re-claim it. * You can also just [submit new Scaladoc][scala-standard-library-api-documentation] without creating an issue, but please look to see if there is an issue already submitted for your task and claim it if there is. If not, please post your intention to work on a specific scaladoc task on [Scala Contributors](https://contributors.scala-lang.org/) so that people know what you are doing. @@ -42,7 +42,7 @@ without creating an issue, but please look to see if there is an issue already s and more Please read [Add New Guides/Tutorials][add-guides] through before embarking on changes. The site uses -the [Jekyll](https://jekyllrb.com/) markdown engine so you will need to follow the instructions to get that running as well. 
+the [Jekyll](https://jekyllrb.com/) Markdown engine, so you will need to follow the instructions to get that running as well. ### Updating scala-lang.org diff --git a/_overviews/contribute/guide.md b/_overviews/contribute/guide.md index 33330f511d..9fa69a410c 100644 --- a/_overviews/contribute/guide.md +++ b/_overviews/contribute/guide.md @@ -23,7 +23,7 @@ Just to name a few common reasons: The main Scala project consists of the standard Scala library, the Scala reflection and macros library, the Scala compiler and the Scaladoc tool. This means there's plenty to choose from when deciding what to work on. -Typically the scaladoc tool provides a low entry point for new committers, so it is a good first step into contributing. +Typically, the scaladoc tool provides a low entry point for new committers, so it is a good first step into contributing. On the [Scala bug tracker](https://github.com/scala/bug) you will find the bugs that you could pick up. Once you decided on a ticket to look at, see the next step on how to proceed further. @@ -53,7 +53,7 @@ https://github.com/scala/scala#git-hygiene). For bug fixes, a single commit is r 10. [Work with a reviewer](https://github.com/scala/scala#reviewing) to [get your pull request merged in][hackers-review]. 11. Celebrate! -Need more information or a little more hand-holding for the first one? We got you covered: take a read through the entire [Hacker Guide][hackers] (or the [equivalent Scala 3 Contributing Guide][scala3-hackers]) for an example of implementing a new feature (some of the steps can be skipped for bug fixes, this will be obvious from reading it, but many of the steps here will help with bug fixes too). +Need more information or a little more hand-holding for the first one? 
We got you covered: take a read through the entire [Hacker Guide][hackers] (or the [equivalent Scala 3 Contributing Guide][scala3-hackers]) for an example of implementing a new feature (some steps can be skipped for bug fixes; this will be obvious from reading it, but many of the steps here will help with bug fixes too).

### Larger Changes, New Features

diff --git a/_overviews/contribute/hacker-guide.md b/_overviews/contribute/hacker-guide.md
index 2a70714f53..e78df88b12 100644
--- a/_overviews/contribute/hacker-guide.md
+++ b/_overviews/contribute/hacker-guide.md
@@ -27,7 +27,7 @@ One approach would be to go the [Scala 2 bug tracker](https://github.com/scala/b
Sometimes it's appealing to hack alone and not to have to interact with others. However, in the context of a big project such as Scala, there might be better ways. There are people in the Scala community who have spent years accumulating knowledge about Scala libraries and internals. They might provide unique insights and, what's even better, direct assistance in their areas, so it is not only advantageous, but recommended to communicate with the community about your new patch.
-Typically bug fixes and new features start out as an idea or an experiment posted on one of [our forums](https://scala-lang.org/community/index.html#forums) to find out how people feel
+Typically, bug fixes and new features start out as an idea or an experiment posted on one of [our forums](https://scala-lang.org/community/index.html#forums) to find out how people feel
about things you want to implement. People proficient in certain areas of Scala usually monitor forums and discussion rooms, so you'll often get some help by posting a message. But the most efficient way to connect is to mention in your message one of the people responsible for maintaining the aspect of Scala which you wish to contribute to.
@@ -38,7 +38,7 @@ In our running example, since Martin is the person who submitted the string inte As alluded to earlier, one must also choose an appropriate avenue to discuss the issue. Typically, one would use the [Scala Contributor's Forum][contrib-forum], as there are post categories devoted to discussions about the core internal design and implementation of the Scala system. In this example, the issue was previously discussed on the (now unused) scala-user mailing list, at the time, -we would have posted to the [the (now unused) scala-user mailing list](https://groups.google.com/group/scala-user) about our issue: +we would have posted to [the (now unused) scala-user mailing list](https://groups.google.com/group/scala-user) about our issue: Posting to scala-user Response from Martin @@ -53,7 +53,7 @@ it probably makes sense to familiarize yourself with Git first. We recommend * the [Git Pro](https://git-scm.com/book/en/) online book. * the help page on [Forking a Git Repository](https://help.github.com/articles/fork-a-repo). -* this great training tool [LearnGitBranching](https://pcottle.github.io/learnGitBranching/). One hour hands-on training helps more than 1000 hours reading. +* this great training tool [LearnGitBranching](https://pcottle.github.io/learnGitBranching/). One-hour hands-on training helps more than 1000 hours reading. ### Fork @@ -67,8 +67,8 @@ If you're new to Git, don't be afraid of messing up-- there is no way you can co ### Clone If everything went okay, you will be redirected to your own fork at `https://github.com/user-name/scala`, where `username` -is your GitHub user name. You might find it helpful to read [https://help.github.com/fork-a-repo/](https://help.github.com/fork-a-repo/), -which covers some of the things that will follow below. Then, _clone_ your repository (i.e. pull a copy from GitHub to your local machine) by running the following on the command line: +is your GitHub username. 
You might find it helpful to read [https://help.github.com/fork-a-repo/](https://help.github.com/fork-a-repo/),
+which covers some things that will follow below. Then, _clone_ your repository (i.e. pull a copy from GitHub to your local machine) by running the following on the command line:

    16:35 ~/Projects$ git clone https://github.com/xeno-by/scala
    Cloning into 'scala'...

@@ -129,7 +129,7 @@ We recognise that there exist preferences towards specific IDE/editor experiences

## 3. Hack

When hacking on your topic of choice, you'll be modifying Scala, compiling it and testing it on relevant input files.
-Typically you would want to first make sure that your changes work on a small example and afterwards verify that nothing break
+Typically, you would want to first make sure that your changes work on a small example and afterwards verify that nothing breaks
by running a comprehensive test suite. We'll start by creating a `sandbox` directory (`./sandbox` is listed in the .gitignore of the Scala repository), which will hold a single test file and its compilation results. First, let's make sure that
@@ -200,12 +200,12 @@ The [Scala Collections Guide][collections-intro] is more general, covering the s

##### The Scala Compiler

-Documentation about the internal workings of the Scala compiler is scarce, and most of the knowledge is passed around by forum ([Scala Contributors](https://contributors.scala-lang.org/) forum), chat-rooms (see `#scala-contributors` on [Discord][discord-contrib]), ticket, or word of mouth. However the situation is steadily improving. Here are the resources that might help:
+Documentation about the internal workings of the Scala compiler is scarce, and most of the knowledge is passed around by forum ([Scala Contributors](https://contributors.scala-lang.org/) forum), chat-rooms (see `#scala-contributors` on [Discord][discord-contrib]), ticket, or word of mouth. However, the situation is steadily improving.
Here are the resources that might help:

* [Compiler internals videos by Martin Odersky](https://www.scala-lang.org/old/node/598.html) are quite dated, but still very useful. In this three-video series Martin explains the general architecture of the compiler, and the basics of the front-end, which later became the `scala-reflect` module's API.
* [Reflection documentation][reflect-overview] describes fundamental data structures (like `Tree`s, `Symbol`s, and `Types`) that
- are used to represent Scala programs and operations defined on then. Since much of the compiler has been factored out and made accessible via the `scala-reflect` module, all of the fundamentals needed for reflection are the same for the compiler.
+ are used to represent Scala programs and operations defined on them. Since much of the compiler has been factored out and made accessible via the `scala-reflect` module, all the fundamentals needed for reflection are the same for the compiler.
* [Scala compiler corner](https://lampwww.epfl.ch/~magarcia/ScalaCompilerCornerReloaded/) contains extensive documentation about most of the post-typer phases (i.e. the backend) in the Scala compiler.
* [Scala Contributors](https://contributors.scala-lang.org/), a forum which hosts discussions about the core
@@ -306,7 +306,7 @@ This means your change is backward or forward binary incompatible with the speci

### Verify

Now to make sure that my fix doesn't break anything I need to run the test suite. The Scala test suite uses [JUnit](https://junit.org/junit4/) and [partest][partest-guide], a tool we wrote for testing Scala.
-Run `sbt test` and `sbt partest` to run all of the JUnit and partest tests, respectively.
+Run `sbt test` and `sbt partest` to run all the JUnit and partest tests, respectively.
`partest` (not `sbt partest`) also allows you to run a subset of the tests using wildcards: 18:52 ~/Projects/scala/sandbox (ticket/6725)$ cd ../test @@ -358,7 +358,7 @@ Once you are satisfied with your work, synced with `master` and cleaned up your ### Submit -Now, we must simply submit our proposed patch. Navigate to your branch in GitHub (for me it was `https://github.com/xeno-by/scala/tree/ticket/6725`) +Now, we must simply submit our proposed patch. Navigate to your branch in GitHub (for me, it was `https://github.com/xeno-by/scala/tree/ticket/6725`) and click the pull request button to submit your patch as a pull request to Scala. If you've never submitted patches to Scala, you will need to sign the contributor license agreement, which [can be done online](https://www.lightbend.com/contribute/cla/scala) within a few minutes. diff --git a/_overviews/contribute/inclusive-language-guide.md b/_overviews/contribute/inclusive-language-guide.md index 5072f71813..d32b5144a8 100644 --- a/_overviews/contribute/inclusive-language-guide.md +++ b/_overviews/contribute/inclusive-language-guide.md @@ -125,7 +125,7 @@ Prefer the direct meaning instead. ## Backward compatibility Sometimes, we have existing code, APIs or commands that do not follow the above recommendations. -It is generally advisable to perform renamings to address the issue, but that should not be done to the detriment of backward compatibility (in particular, backward binary compatibility of libraries). +It is generally advisable to perform renaming to address the issue, but that should not be done to the detriment of backward compatibility (in particular, backward binary compatibility of libraries). Deprecated aliases should be retained when possible. Sometimes, it is not possible to preserve backward compatibility through renaming; for example for methods intended to be overridden by user-defined subclasses. 
diff --git a/_overviews/contribute/index.md b/_overviews/contribute/index.md
index da197e233e..9e0d6e9073 100644
--- a/_overviews/contribute/index.md
+++ b/_overviews/contribute/index.md
@@ -11,7 +11,7 @@ kindly helping others in return. So why not join the Scala community and help
everyone make things better?

**What Can I Do?**
-That depends on what you want to contribute. Below are some getting started resources for different contribution domains. Please read all of the documentation and follow all the links from the topic pages below before attempting to contribute, as many of the questions you have will already be answered.
+That depends on what you want to contribute. Below are some getting started resources for different contribution domains. Please read all the documentation and follow all the links from the topic pages below before attempting to contribute, as many of the questions you have will already be answered.

### Reporting bugs
diff --git a/_overviews/contribute/scala-internals.md b/_overviews/contribute/scala-internals.md
index f3b1de54de..738746f9d3 100644
--- a/_overviews/contribute/scala-internals.md
+++ b/_overviews/contribute/scala-internals.md
@@ -54,7 +54,7 @@ Even if you do not wish to post on [Scala Contributors][scala-contributors], ple
anyway, as posting to the forum is *not* a criterion for it to be accepted. For smaller, self-contained bugs it is
especially less important to make a post, however larger issues or features take more time to consider accepting them.
For large contributions we strongly recommend that you do notify us of your intention, which will help you determine if
-there is large community support for your change, making it more likely that your large contribution will accepted,
+there is large community support for your change, making it more likely that your large contribution will be accepted,
before you spend a long time implementing it.
[scala-contributors]: https://contributors.scala-lang.org diff --git a/_overviews/contribute/scala-standard-library-api-documentation.md b/_overviews/contribute/scala-standard-library-api-documentation.md index a6c812b7e4..692cf00a76 100644 --- a/_overviews/contribute/scala-standard-library-api-documentation.md +++ b/_overviews/contribute/scala-standard-library-api-documentation.md @@ -81,7 +81,7 @@ new API documentation to save time, effort, mistakes and repetition. * [Scaladoc for library authors][scaladoc-lib-authors] covers the use of scaladoc tags, markdown and other features. * [Scaladoc's interface][scaladoc-interface] - covers all of the features of Scaladoc's interface, e.g. switching between + covers all the features of Scaladoc's interface, e.g. switching between companions, browsing package object documentation, searching, token searches and so on. * Prior to commit, be sure to read @@ -92,11 +92,11 @@ new API documentation to save time, effort, mistakes and repetition. message formats, noting *present tense*, *length limits* and that it must merge cleanly. Remember that the title of the pull request will become the commit message when merged. **Also**, be sure to assign one or more reviewers to the PR, if this is - not possible for you, you could mention a user in **in the pull request comments**. + not possible for you, you could mention a user **in the pull request comments**. 
### Extra Requirements for Scaladoc Documentation Commits -Although some of the requirements for bug fix pull requests are not needed for +Although some requirements for bug fix pull requests are not needed for API documentation commits, here are the step by step requirements to ensure your API documentation PR is merged in smoothly: diff --git a/_overviews/contributors/index.md b/_overviews/contributors/index.md index 6d5fd1a920..24958703e2 100644 --- a/_overviews/contributors/index.md +++ b/_overviews/contributors/index.md @@ -49,9 +49,9 @@ The first reason for setting up a continuous integration (CI) server is to syste Examples of CI servers that are free for open source projects are [GitHub Actions](https://github.com/features/actions), [Travis CI](https://travis-ci.com), [Drone](https://drone.io) or [AppVeyor](https://appveyor.com). -Our example uses Github Actions. This feature is enabled by default on GitHub repositories. You can verify if that is +Our example uses GitHub Actions. This feature is enabled by default on GitHub repositories. You can verify if that is the case in the *Actions* section of the *Settings* tab of the repository. -If *Disable all actions* is checked, then Actions are not enabled and you can activate them +If *Disable all actions* is checked, then Actions are not enabled, and you can activate them by selecting *Allow all actions*, *Allow local actions only* or *Allow select actions*. With Actions enabled, you can create a *workflow definition file*. A **workflow** is an automated procedure, @@ -81,7 +81,7 @@ jobs: run: sbt +test ~~~ -This workflow is called *Continuous integration* and it will run every time one +This workflow is called *Continuous integration*, and it will run every time one or more commits are pushed to the repository. It contains only one job called *ci*, which will run on an Ubuntu runner and that is composed of three actions. 
The action `setup-java` installs a JDK and caches the library dependencies @@ -181,7 +181,7 @@ credentials += Credentials("Sonatype Nexus Repository Manager", "(Sonatype password)") ~~~ -(Put your actual user name and password in place of `(Sonatype user name)` and `(Sonatype password)`) +(Put your actual username and password in place of `(Sonatype user name)` and `(Sonatype password)`) **Never** check this file into version control. @@ -375,7 +375,7 @@ an sbt-site to GitHub Pages. ### Create the Documentation Site In this example we choose to use [Paradox](https://developer.lightbend.com/docs/paradox/current/index.html) -because it runs on the JVM and thus doesn’t require setting up another VM on your system (in contrast with +because it runs on the JVM and thus doesn't require setting up another VM on your system (in contrast with most other documentation generators, which are based on Ruby, Node.js or Python). To install Paradox and sbt-site, add the following lines to your `project/plugins.sbt` file: @@ -392,7 +392,7 @@ enablePlugins(ParadoxPlugin, ParadoxSitePlugin) Paradox / sourceDirectory := sourceDirectory.value / "documentation" {% endhighlight %} -The `ParadoxPlugin` is responsible of generating the website, and the `ParadoxSitePlugin` provides +The `ParadoxPlugin` is responsible for generating the website, and the `ParadoxSitePlugin` provides integration with `sbt-site`. The second line is optional, it defines the location of the website source files. In our case, in `src/documentation`. @@ -642,11 +642,11 @@ jobs: From the user point of view, upgrading to a new version of a library should be a smooth process. Possibly, it should even be a “non-event”. 
-Breaking changes and migration steps should be thoroughly documented, and a we recommend following the +Breaking changes and migration steps should be thoroughly documented, and we recommend following the [semantic versioning](/overviews/core/binary-compatibility-for-library-authors.html#versioning-scheme---communicating-compatibility-breakages) policy. -The [MiMa](https://github.com/lightbend/migration-manager) tool can help you checking that you don’t +The [MiMa](https://github.com/lightbend/migration-manager) tool can help you to check that you don't break this versioning policy. Add the `sbt-mima-plugin` to your build with the following, in your `project/plugins.sbt` file: @@ -654,7 +654,7 @@ break this versioning policy. Add the `sbt-mima-plugin` to your build with the f addSbtPlugin("com.typesafe" % "sbt-mima-plugin" % "0.9.2") ~~~ -Configure it as follow, in `build.sbt`: +Configure it as follows, in `build.sbt`: ~~~ scala mimaPreviousArtifacts := previousStableVersion.value.map(organization.value %% name.value % _).toSet diff --git a/_overviews/core/actors-migration-guide.md b/_overviews/core/actors-migration-guide.md index c82aecc3e5..47f69b3623 100644 --- a/_overviews/core/actors-migration-guide.md +++ b/_overviews/core/actors-migration-guide.md @@ -50,18 +50,18 @@ Due to differences in Akka and Scala actor models the complete functionality can In Scala linked actors terminate if one of the linked parties terminates abnormally. If termination is tracked explicitly (by `self.trapExit`) the actor receives the termination reason from the failed actor. This functionality can not be migrated to Akka with the AMK. The AMK allows migration only for the [Akka monitoring](https://doc.akka.io/docs/akka/2.1.0/general/supervision.html#What_Lifecycle_Monitoring_Means) -mechanism. Monitoring is different than linking because it is unidirectional and the termination reason is now known. If monitoring support is not enough, the migration +mechanism. 
Monitoring is different from linking because it is unidirectional and the termination reason is not known. If monitoring support is not enough, the migration
of `link` must be postponed until the last possible moment (Step 5 of migration).
-Then, when moving to Akka, users must create an [supervision hierarchy](https://doc.akka.io/docs/akka/2.1.0/general/supervision.html) that will handle faults.
+Then, when moving to Akka, users must create a [supervision hierarchy](https://doc.akka.io/docs/akka/2.1.0/general/supervision.html) that will handle faults.

-2. Usage of the `restart` method - Akka does not provide explicit restart of actors so we can not provide the smooth migration for this use-case.
+2. Usage of the `restart` method - Akka does not provide explicit restart of actors, so we cannot provide a smooth migration for this use-case.
The user must change the system so there are no usages of the `restart` method.

3. Usage of method `getState` - Akka actors do not have explicit state so this functionality can not be migrated. The user code
must not have `getState` invocations.

4. Not starting actors right after instantiation - Akka actors are automatically started when instantiated. Users will have to
-reshape their system so it starts all the actors right after their instantiation.
+reshape their system so that it starts all the actors right after their instantiation.

5. Method `mailboxSize` does not exist in Akka and therefore can not be migrated. This method is seldom used and can easily be removed.
@@ -70,7 +70,7 @@ reshape their system so it starts all the actors right after their instantiation

### Migration Kit

In Scala 2.10.0 actors reside inside the [Scala distribution](https://www.scala-lang.org/downloads) as a separate jar ( *scala-actors.jar* ), and
-the their interface is deprecated. The distribution also includes Akka actors in the *akka-actor.jar*.
+their interface is deprecated. The distribution also includes Akka actors in the *akka-actor.jar*.
The AMK resides both in the Scala actors and in the *akka-actor.jar*. Future major releases of Scala will not contain Scala actors and the AMK. To start the migration user needs to add the *scala-actors.jar* and the *scala-actors-migration.jar* to the build of their projects. @@ -92,7 +92,7 @@ the library used to Akka. On the Akka side, the `ActorDSL` and the `ActWithStash ## Step by Step Guide for Migrating to Akka In this chapter we will go through 5 steps of the actor migration. After each step the code can be tested for possible errors. In the first 4 - steps one can migrate one actor at a time and test the functionality. However, the last step migrates all actors to Akka and it can be tested + steps one can migrate one actor at a time and test the functionality. However, the last step migrates all actors to Akka, and it can be tested only as a whole. After this step the system should have the same functionality as before, however it will use the Akka actor library. ### Step 1 - Everything as an Actor @@ -175,13 +175,13 @@ Note that Akka actors are always started on instantiation. In case actors in the system are created and started at different locations, and changing this can affect the behavior of the system, users need to change the code so actors are started right after instantiation. -Remote actors also need to be fetched as `ActorRef`s. To get an `ActorRef` of an remote actor use the method `selectActorRef`. +Remote actors also need to be fetched as `ActorRef`s. To get an `ActorRef` of a remote actor use the method `selectActorRef`. #### Different Method Signatures At this point we have changed all the actor instantiations to return `ActorRef`s, however, we are not done yet. There are differences in the interface of `ActorRef`s and `Actor`s so we need to change the methods invoked on each migrated instance. -Unfortunately, some of the methods that Scala `Actor`s provide can not be migrated. 
For the following methods users need to find a workaround:
+Unfortunately, some methods that Scala `Actor`s provide cannot be migrated. For the following methods, users need to find a workaround:

1. `getState()` - actors in Akka are managed by their supervising actors and are restarted by default.
In that scenario state of an actor is not relevant.
@@ -196,7 +196,7 @@ Note that all the rules require the following imports:

    import scala.actors.migration._
    import scala.concurrent._

-Additionally rules 1-3 require an implicit `Timeout` with infinite duration defined in the scope. However, since Akka does not allow for infinite timeouts, we will use
+Additionally, rules 1-3 require an implicit `Timeout` with infinite duration defined in the scope. However, since Akka does not allow for infinite timeouts, we will use
100 years. For example:

    implicit val timeout = Timeout(36500 days)

@@ -234,7 +234,7 @@ inside the actor definition so their migration is not relevant in this step.
At this point all actors inherit the `Actor` trait, we instantiate actors through special factory methods, and all actors are accessed through the `ActorRef` interface.
-Now we need to change all actors to the `ActWithStash` class from the AMK. This class behaves exactly the same like Scala `Actor`
+Now we need to change all actors to the `ActWithStash` class from the AMK. This class behaves exactly the same as Scala `Actor`
but, additionally, provides methods that correspond to methods in Akka's `Actor` trait. This allows easy, step by step, migration to the Akka behavior. To achieve this all classes that extend `Actor` should extend the `ActWithStash`. Apply the
However, users can migrate more complex `act` methods to Akka by looking +exhaustive, and it covers only some common patterns. However, users can migrate more complex `act` methods to Akka by looking at existing translation rules and extending them for more complex situations. A note about nested `react`/`reactWithin` calls: the message handling @@ -501,7 +501,7 @@ returns the `Terminated(a: ActorRef)` message that contains only the `ActorRef`. Note that this will happen even when the watched actor terminated normally. In Scala linked actors terminate, with the same termination reason, only if one of the actors terminates abnormally. - If the system can not be migrated solely with `watch` the user should leave invocations to `link` and `exit(reason)` as is. However since `act()` overrides the `Exit` message the following transformation + If the system can not be migrated solely with `watch` the user should leave invocations to `link` and `exit(reason)` as is. However, since `act()` overrides the `Exit` message the following transformation needs to be applied: case Exit(actor, reason) => @@ -520,7 +520,7 @@ In Akka, watching the already dead actor will result in sending the `Terminated` ### Step 5 - Moving to the Akka Back-end At this point user code is ready to operate on Akka actors. Now we can switch the actors library from Scala to -Akka actors. To do this configure the build to exclude the `scala-actors.jar` and the `scala-actors-migration.jar`, +Akka actors. To do this, configure the build to exclude the `scala-actors.jar` and the `scala-actors-migration.jar`, and to include *akka-actor.jar* and *typesafe-config.jar*. The AMK is built to work only with Akka actors version 2.1 which are included in the [Scala distribution](https://www.scala-lang.org/downloads) and can be configured by these [instructions](https://doc.akka.io/docs/akka/2.1.0/intro/getting-started.html#Using_a_build_tool). 
@@ -541,7 +541,7 @@ In Scala actors the `stash` method needs a message as a parameter. For example:

      case x => stash(x)
    }

-In Akka only the currently processed message can be stashed. Therefore replace the above example with:
+In Akka, only the currently processed message can be stashed. Therefore, replace the above example with:

    def receive = {
      ...
@@ -551,7 +551,7 @@ In Akka only the currently processed message can be stashed. Therefore replace t

#### Adding Actor Systems

The Akka actors are organized in [Actor systems](https://doc.akka.io/docs/akka/2.1.0/general/actor-systems.html).
- Each actor that is instantiated must belong to one `ActorSystem`. To achieve this add an `ActorSystem` instance to each actor instantiation call as a first argument. The following example shows the transformation.
+ Each actor that is instantiated must belong to one `ActorSystem`. To achieve this, add an `ActorSystem` instance to each actor instantiation call as a first argument. The following example shows the transformation.
To achieve this transformation you need to have an actor system instantiated. The actor system is usually instantiated in Scala objects or configuration classes that are global to your system. For example:
@@ -572,11 +572,11 @@ Finally, Scala programs are terminating when all the non-daemon threads and acto

#### Remote Actors

-Once the code base is moved to Akka remoting will not work any more. The methods `registerActorFor` and `alive` need to be removed. In Akka, remoting is done solely by configuration and
+Once the code base is moved to Akka, remoting will not work anymore. The methods `registerActorFor` and `alive` need to be removed. In Akka, remoting is done solely by configuration and
for further details refer to the [Akka remoting documentation](https://doc.akka.io/docs/akka/2.1.0/scala/remoting.html).
#### Examples and Issues -All of the code snippets presented in this document can be found in the [Actors Migration test suite](https://github.com/scala/actors-migration/tree/master/src/test/) as test files with the prefix `actmig`. +All the code snippets presented in this document can be found in the [Actors Migration test suite](https://github.com/scala/actors-migration/tree/master/src/test/) as test files with the prefix `actmig`. This document and the Actor Migration Kit were designed and implemented by: [Vojin Jovanovic](https://people.epfl.ch/vojin.jovanovic) and [Philipp Haller](https://lampwww.epfl.ch/~phaller/) diff --git a/_overviews/core/architecture-of-scala-213-collections.md b/_overviews/core/architecture-of-scala-213-collections.md index 4a49e9b42a..a5caef55ab 100644 --- a/_overviews/core/architecture-of-scala-213-collections.md +++ b/_overviews/core/architecture-of-scala-213-collections.md @@ -223,7 +223,7 @@ trait SortedSet[A] extends SortedSetOps[A, SortedSet, SortedSet[A]] Last, there is a fourth kind of collection that requires a specialized template trait: `SortedMap[K, V]`. This type of collection has two type parameters and -needs an implicit ordering instance on the type of keys. Therefore we have a +needs an implicit ordering instance on the type of keys. Therefore, we have a `SortedMapOps` template trait that provides the appropriate overloads. In total, we’ve seen that we have four branches of template traits: @@ -351,7 +351,7 @@ trait MapFactory[+CC[_, _]] { ## When a strict evaluation is preferable (or unavoidable) ## In the previous sections we explained that the “strictness” of concrete collections -should be preserved by default operation implementations. However in some cases this +should be preserved by default operation implementations. However, in some cases this leads to less efficient implementations. For instance, `partition` has to perform two traversals of the underlying collection. In some other case (e.g. 
`groupBy`) it is simply not possible to implement the operation without evaluating the collection
@@ -394,7 +394,7 @@ trait IterableFactory[+CC[_]] {
 }
 ~~~

-Note that, in general, an operation that doesn’t *have to* be strict should
+Note that, in general, an operation that doesn't *have to* be strict should
be implemented in a non-strict mode, otherwise it would lead to
surprising behaviour when used on a non-strict concrete collection
(you can read more about that statement in
diff --git a/_overviews/core/architecture-of-scala-collections.md b/_overviews/core/architecture-of-scala-collections.md
index 76bcde648f..437ef8b015 100644
--- a/_overviews/core/architecture-of-scala-collections.md
+++ b/_overviews/core/architecture-of-scala-collections.md
@@ -217,7 +217,7 @@ maps the key/value pair to an integer, namely its value component.
In that case, we cannot form a `Map` from the results, but we can
still form an `Iterable`, a supertrait of `Map`.

-You might ask, why not restrict `map` so that it can always return the
+You might ask why not restrict `map` so that it can always return the
same kind of collection? For instance, on bit sets `map` could accept
only `Int`-to-`Int` functions and on `Map`s it could only accept
pair-to-pair functions. Not only are such restrictions undesirable
@@ -646,7 +646,7 @@ function, which is also the element type of the new collection. The
`That` appears as the result type of `map`, so it represents the type
of the new collection that gets created.

-How is the `That` type determined? In fact it is linked to the other
+How is the `That` type determined? In fact, it is linked to the other
types by an implicit parameter `cbf`, of type `CanBuildFrom[Repr, B, That]`.
These `CanBuildFrom` implicits are defined by the individual
collection classes. Recall that an implicit value of type
@@ -747,7 +747,7 @@ ignoring its argument. That is it.
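The bit-set behaviour discussed above is easy to observe directly. The following sketch (assuming Scala 2.13; the object and value names are illustrative, not from the guide) shows `map` staying inside the `BitSet` world for an `Int => Int` function, while an `Int => String` function widens the result to the nearest collection kind that can hold the elements, a `SortedSet[String]`:

```scala
import scala.collection.immutable.{BitSet, SortedSet}

object BitSetMapDemo {
  val bits = BitSet(1, 2, 3)

  // An Int => Int function can produce another BitSet.
  val doubled: BitSet = bits.map(_ * 2)

  // An Int => String function cannot live in a BitSet, so the result
  // widens to a SortedSet[String] instead.
  val strings: SortedSet[String] = bits.map(_.toString)

  def main(args: Array[String]): Unit = {
    println(doubled) // still a BitSet
    println(strings) // a SortedSet[String]
  }
}
```

Note that both calls use the same `map` name; the static result type is chosen per element type, which is exactly the flexibility the text argues a restricted `map` would lose.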
The final [`RNA` class](#final-version-of-rna-strands-class) implements all collection methods at -their expected types. Its implementation requires a little bit of +their expected types. Its implementation requires a bit of protocol. In essence, you need to know where to put the `newBuilder` factories and the `canBuildFrom` implicits. On the plus side, with relatively little code you get a large number of methods automatically @@ -979,14 +979,14 @@ provided by the `empty` method, which is the last method defined in } } -We'll now turn to the companion object `PrefixMap`. In fact it is not +We'll now turn to the companion object `PrefixMap`. In fact, it is not strictly necessary to define this companion object, as class `PrefixMap` can stand well on its own. The main purpose of object `PrefixMap` is to define some convenience factory methods. It also defines a `CanBuildFrom` implicit to make typing work out better. The two convenience methods are `empty` and `apply`. The same methods are -present for all other collections in Scala's collection framework so +present for all other collections in Scala's collection framework, so it makes sense to define them here, too. With the two methods, you can write `PrefixMap` literals like you do for any other collection: diff --git a/_overviews/core/collections-migration-213.md b/_overviews/core/collections-migration-213.md index 5eb05536ee..76cd202cd3 100644 --- a/_overviews/core/collections-migration-213.md +++ b/_overviews/core/collections-migration-213.md @@ -29,7 +29,7 @@ The most important changes in the Scala 2.13 collections library are: ## Tools for migrating and cross-building -The [scala-collection-compat](https://github.com/scala/scala-collection-compat) is a library released for 2.11, 2.12 and 2.13 that provides some of the new APIs from Scala 2.13 for the older versions. This simplifies cross-building projects.
+The [scala-collection-compat](https://github.com/scala/scala-collection-compat) is a library released for 2.11, 2.12 and 2.13 that provides some new APIs from Scala 2.13 for the older versions. This simplifies cross-building projects. The module also provides [migration rules](https://github.com/scala/scala-collection-compat#migration-tool) for [scalafix](https://scalacenter.github.io/scalafix/docs/users/installation.html) that can update a project's source code to work with the 2.13 collections library. @@ -42,7 +42,7 @@ a method such as `orderFood(xs: _*)` the varargs parameter `xs` must be an immut [SLS 6.6]: https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#function-applications -Therefore any method signature in Scala 2.13 which includes `scala.Seq`, varargs, or `scala.IndexedSeq` is going +Therefore, any method signature in Scala 2.13 which includes `scala.Seq`, varargs, or `scala.IndexedSeq` is going to have a breaking change in API semantics (as the immutable sequence types require more — immutability — than the not-immutable types). For example, users of a method like `def orderFood(order: Seq[Order]): Seq[Food]` would previously have been able to pass in an `ArrayBuffer` of `Order`, but cannot in 2.13. @@ -68,7 +68,7 @@ We recommend using `import scala.collection`/`import scala.collection.immutable` `collection.Seq`/`immutable.Seq`. We recommend against using `import scala.collection.Seq`, which shadows the automatically imported `scala.Seq`, -because even if it's a oneline change it causes name confusion. For code generation or macros the safest option +because even if it's a one-line change it causes name confusion. For code generation or macros the safest option is using the fully-qualified `_root_.scala.collection.Seq`. 
As an example, the migration would look something like this: @@ -81,7 +81,7 @@ object FoodToGo { } ~~~ -However users of this code in Scala 2.13 would also have to migrate, as the result type is source-incompatible +However, users of this code in Scala 2.13 would also have to migrate, as the result type is source-incompatible with any `scala.Seq` (or just `Seq`) usage in their code: ~~~ scala @@ -233,7 +233,7 @@ Other notable changes are: You can make this conversion explicit by writing `f _` or `f(_)` instead of `f`. scala> Map(1 -> "a").map(f _) res10: scala.collection.immutable.Map[Int,String] = ChampHashMap(2 -> a) - - `View`s have been completely redesigned and we expect their usage to have a more predictable evaluation model. + - `View`s have been completely redesigned, and we expect their usage to have a more predictable evaluation model. You can read more about the new design [here](https://scala-lang.org/blog/2017/11/28/view-based-collections.html). - `mutable.ArraySeq` (which wraps an `Array[AnyRef]` in 2.12, meaning that primitives were boxed in the array) can now wrap boxed and unboxed arrays. `mutable.ArraySeq` in 2.13 is in fact equivalent to `WrappedArray` in 2.12, there are specialized subclasses for primitive arrays. Note that a `mutable.ArraySeq` can be used either way for primitive arrays (TODO: document how). `WrappedArray` is deprecated. - There is no "default" `Factory` (previously known as `[A, C] => CanBuildFrom[Nothing, A, C]`): use `Factory[A, Vector[A]]` explicitly instead. diff --git a/_overviews/core/custom-collection-operations.md b/_overviews/core/custom-collection-operations.md index 25f792fd7a..f226756ba4 100644 --- a/_overviews/core/custom-collection-operations.md +++ b/_overviews/core/custom-collection-operations.md @@ -329,5 +329,5 @@ be `List[Int]`. 
as parameter, - To also support `String`, `Array` and `View`, use `IsIterable`, - To produce a collection given its type, use a `Factory`, -- To produce a collection based on the type of a source collection and the type of elements of the collection +- To produce a collection based on the type of a source collection and the type of the elements of the collection to produce, use `BuildFrom`. diff --git a/_overviews/core/custom-collections.md b/_overviews/core/custom-collections.md index ab0432376a..5e1fe6f7ea 100644 --- a/_overviews/core/custom-collections.md +++ b/_overviews/core/custom-collections.md @@ -968,7 +968,7 @@ However, in all these cases, to build the right kind of collection you need to start with an empty collection of that kind. This is provided by the `empty` method, which simply returns a fresh `PrefixMap`. -We'll now turn to the companion object `PrefixMap`. In fact it is not +We'll now turn to the companion object `PrefixMap`. In fact, it is not strictly necessary to define this companion object, as class `PrefixMap` can stand well on its own. The main purpose of object `PrefixMap` is to define some convenience factory methods. It also defines an implicit @@ -980,7 +980,7 @@ can not because a `Factory` fixes the type of collection elements, whereas `PrefixMap` has a polymorphic type of values). The two convenience methods are `empty` and `apply`. The same methods are -present for all other collections in Scala's collection framework so +present for all other collections in Scala's collection framework, so it makes sense to define them here, too.
With the two methods, you can write `PrefixMap` literals like you do for any other collection: diff --git a/_overviews/core/futures.md b/_overviews/core/futures.md index eec0e4529c..6b9e6bae4a 100644 --- a/_overviews/core/futures.md +++ b/_overviews/core/futures.md @@ -96,7 +96,7 @@ only if each blocking call is wrapped inside a `blocking` call (more on that bel Otherwise, there is a risk that the thread pool in the global execution context is starved, and no computation can proceed. -By default the `ExecutionContext.global` sets the parallelism level of its underlying fork-join pool to the number of available processors +By default, the `ExecutionContext.global` sets the parallelism level of its underlying fork-join pool to the number of available processors ([Runtime.availableProcessors](https://docs.oracle.com/javase/7/docs/api/java/lang/Runtime.html#availableProcessors%28%29)). This configuration can be overridden by setting one (or more) of the following VM attributes: @@ -185,7 +185,7 @@ Fortunately the concurrent package provides a convenient way for doing so: Note that `blocking` is a general construct that will be discussed more in depth [below](#blocking-inside-a-future). -Last but not least, you must remember that the `ForkJoinPool` is not designed for long lasting blocking operations. +Last but not least, you must remember that the `ForkJoinPool` is not designed for long-lasting blocking operations. Even when notified with `blocking` the pool might not spawn new workers as you would expect, and when new workers are created they can be as many as 32767. To give you an idea, the following code will use 32000 threads: @@ -216,7 +216,7 @@ To give you an idea, the following code will use 32000 threads: {% endtabs %} -If you need to wrap long lasting blocking operations we recommend using a dedicated `ExecutionContext`, for instance by wrapping a Java `Executor`. 
+If you need to wrap long-lasting blocking operations we recommend using a dedicated `ExecutionContext`, for instance by wrapping a Java `Executor`. ### Adapting a Java Executor @@ -371,7 +371,7 @@ Our example was based on a hypothetical social network API where the computation consists of sending a network request and waiting for a response. It is fair to offer an example involving an asynchronous computation -which you can try out of the box. Assume you have a text file and +which you can try out of the box. Assume you have a text file, and you want to find the position of the first occurrence of a particular keyword. This computation may involve blocking while the file contents are being retrieved from the disk, so it makes sense to perform it @@ -578,10 +578,10 @@ callbacks may be executed concurrently with one another. However, a particular `ExecutionContext` implementation may result in a well-defined order. -5. In the event that some of the callbacks throw an exception, the +5. In the event that some callbacks throw an exception, the other callbacks are executed regardless. -6. In the event that some of the callbacks never complete (e.g. the +6. In the event that some callbacks never complete (e.g. the callback contains an infinite loop), the other callbacks may not be executed at all. In these cases, a potentially blocking callback must use the `blocking` construct (see below). @@ -635,7 +635,7 @@ be done using callbacks: We start by creating a future `rateQuote` which gets the current exchange rate. After this value is obtained from the server and the future successfully -completed, the computation proceeds in the `foreach` callback and we are +completed, the computation proceeds in the `foreach` callback, and we are ready to decide whether to buy or not. We therefore create another future `purchase` which makes a decision to buy only if it's profitable to do so, and then sends a request. @@ -723,7 +723,7 @@ combinators. 
The `flatMap` method takes a function that maps the value to a new future `g`, and then returns a future which is completed once `g` is completed. -Lets assume that we want to exchange US dollars for Swiss francs +Let's assume that we want to exchange US dollars for Swiss francs (CHF). We have to fetch quotes for both currencies, and then decide on buying based on both quotes. Here is an example of `flatMap` and `withFilter` usage within for-comprehensions: @@ -1043,7 +1043,7 @@ However, blocking may be necessary in certain situations and is supported by the Futures and Promises API. In the currency trading example above, one place to block is at the -end of the application to make sure that all of the futures have been completed. +end of the application to make sure that all the futures have been completed. Here is an example of how to block on the result of a future: {% tabs futures-14 class=tabs-scala-version %} @@ -1358,7 +1358,7 @@ Abstract `Duration` contains methods that allow: for example, `val d = Duration(100, MILLISECONDS)`. 3. By parsing a string that represent a time period, for example, `val d = Duration("1.2 µs")`. -Duration also provides `unapply` methods so it can be used in pattern matching constructs. +Duration also provides `unapply` methods, so it can be used in pattern matching constructs. Examples: {% tabs futures-17 class=tabs-scala-version %} diff --git a/_overviews/core/implicit-classes.md b/_overviews/core/implicit-classes.md index 39694e9caa..d809dc6217 100644 --- a/_overviews/core/implicit-classes.md +++ b/_overviews/core/implicit-classes.md @@ -59,7 +59,7 @@ value or conversion. Implicit classes have the following restrictions: -**1. They must be defined inside of another `trait`/`class`/`object`.** +**1. 
They must be defined inside another `trait`/`class`/`object`.** object Helpers { diff --git a/_overviews/core/string-interpolation.md b/_overviews/core/string-interpolation.md index 27c4aed480..56fdae8f37 100644 --- a/_overviews/core/string-interpolation.md +++ b/_overviews/core/string-interpolation.md @@ -44,7 +44,7 @@ Prepending `s` to any string literal allows the usage of variables directly in t Here `$name` is nested inside an `s` processed string. The `s` interpolator knows to insert the value of the `name` variable at this location in the string, resulting in the string `Hello, James`. With the `s` interpolator, any name that is in scope can be used within a string. -String interpolators can also take arbitrary expressions. For example: +String interpolators can also take arbitrary expressions. For example: {% tabs example-3 %} {% tab 'Scala 2 and 3' for=example-3 %} diff --git a/_overviews/macros/annotations.md b/_overviews/macros/annotations.md index 103f65dc90..7300704010 100644 --- a/_overviews/macros/annotations.md +++ b/_overviews/macros/annotations.md @@ -57,8 +57,8 @@ results have to be wrapped in a `Block` for the lack of better notion in the ref At this point you might be wondering. A single annottee and a single result is understandable, but what is the many-to-many mapping supposed to mean? There are several rules guiding the process: -1. If a class is annotated and it has a companion, then both are passed into the macro. (But not vice versa - if an object - is annotated and it has a companion class, only the object itself is expanded). +1. If a class is annotated, and it has a companion, then both are passed into the macro. (But not vice versa - if an object + is annotated, and it has a companion class, only the object itself is expanded). 1. If a parameter of a class, method or type member is annotated, then it expands its owner. First comes the annottee, then the owner and then its companion as specified by the previous rule. 1. 
Annottees can expand into whatever number of trees of any flavor, and the compiler will then transparently @@ -109,8 +109,8 @@ at a later point in the future). In the spirit of Scala macros, macro annotations are as untyped as possible to stay flexible and as typed as possible to remain useful. On the one hand, macro annottees are untyped, so that we can change their signatures (e.g. lists of class members). But on the other hand, the thing about all flavors of Scala macros is integration with the typechecker, and -macro annotations are not an exceptions. During expansion we can have all the type information that's possible to have -(e.g. we can reflect against the surrounding program or perform type checks / implicit lookups in the enclosing context). +macro annotations are no exception. During expansion, we can have all the type information that's possible to have +(e.g. we can reflect against the surrounding program or perform type checks / implicit lookup in the enclosing context). ## Blackbox vs whitebox diff --git a/_overviews/macros/blackbox-whitebox.md b/_overviews/macros/blackbox-whitebox.md index 4fdb9e4fd0..d29cd6b16d 100644 --- a/_overviews/macros/blackbox-whitebox.md +++ b/_overviews/macros/blackbox-whitebox.md @@ -19,7 +19,7 @@ Separation of macros into blackbox ones and whitebox ones is a feature of Scala With macros becoming a part of the official Scala 2.10 release, programmers in research and industry have found creative ways of using macros to address all sorts of problems, far extending our original expectations. -In fact, macros became an important part of our ecosystem so quickly that just a couple months after the release of Scala 2.10, when macros were introduced in experimental capacity, we had a Scala language team meeting and decided to standardize macros and make them a full-fledged feature of Scala by 2.12.
+In fact, macros became an important part of our ecosystem so quickly that just a couple of months after the release of Scala 2.10, when macros were introduced in experimental capacity, we had a Scala language team meeting and decided to standardize macros and make them a full-fledged feature of Scala by 2.12. UPDATE It turned out that it was not that simple to stabilize macros by Scala 2.12. Our research into that has resulted in establishing a new metaprogramming foundation for Scala, called [scala.meta](https://scalameta.org), whose first beta is expected to be released simultaneously with Scala 2.12 and might later be included in future versions of Scala. In the meanwhile, Scala 2.12 is not going to see any changes to reflection and macros - everything is going to stay experimental as it was in Scala 2.10 and Scala 2.11, and no features are going to be removed. However, even though circumstances under which this document has been written have changed, the information still remains relevant, so please continue reading. @@ -30,13 +30,13 @@ comprehensibility. ## Blackbox and whitebox macros -However sometimes def macros transcend the notion of "just a regular method". For example, it is possible for a macro expansion to yield an expression of a type that is more specific than the return type of a macro. In Scala 2.10, such expansion will retain its precise type as highlighted in the ["Static return type of Scala macros"](https://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros) article at Stack Overflow. +However, sometimes def macros transcend the notion of "just a regular method". For example, it is possible for a macro expansion to yield an expression of a type that is more specific than the return type of a macro. In Scala 2.10, such expansion will retain its precise type as highlighted in the ["Static return type of Scala macros"](https://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros) article at Stack Overflow.
This curious feature provides additional flexibility, enabling [fake type providers](https://meta.plasm.us/posts/2013/07/11/fake-type-providers-part-2/), [extended vanilla materialization](https://github.com/scala/improvement-proposals/pull/18), [fundep materialization]({{ site.baseurl }}/overviews/macros/implicits.html#fundep-materialization) and [extractor macros](https://github.com/scala/scala/commit/84a335916556cb0fe939d1c51f27d80d9cf980dc), but it also sacrifices clarity - both for humans and for machines. To concretize the crucial distinction between macros that behave just like normal methods and macros that refine their return types, we introduce the notions of blackbox macros and whitebox macros. Macros that faithfully follow their type signatures are called **blackbox macros** as their implementations are irrelevant to understanding their behaviour (could be treated as black boxes). Macros that can't have precise signatures in Scala's type system are called **whitebox macros** (whitebox def macros do have signatures, but these signatures are only approximations). -We recognize the importance of both blackbox and whitebox macros, however we feel more confidence in blackbox macros, because they are easier to explain, specify and support. Therefore our plans to standardize macros only include blackbox macros. Later on, we might also include whitebox macros into our plans, but it's too early to tell. +We recognize the importance of both blackbox and whitebox macros; however, we feel more confident in blackbox macros, because they are easier to explain, specify and support. Therefore, our plans to standardize macros only include blackbox macros. Later on, we might also include whitebox macros into our plans, but it's too early to tell.
## Codifying the distinction diff --git a/_overviews/macros/bundles.md b/_overviews/macros/bundles.md index 57f380b7f6..255b504391 100644 --- a/_overviews/macros/bundles.md +++ b/_overviews/macros/bundles.md @@ -18,7 +18,7 @@ Macro bundles are a feature of Scala 2.11.x and Scala 2.12.x. Macro bundles are ## Macro bundles In Scala 2.10.x, macro implementations are represented with functions. Once the compiler sees an application of a macro definition, -it calls the macro implementation - as simple as that. However practice shows that just functions are often not enough due to the +it calls the macro implementation - as simple as that. However, practice shows that just functions are often not enough due to the following reasons: 1. Being limited to functions makes modularizing complex macros awkward. It's quite typical to see macro logic concentrate in helper diff --git a/_overviews/macros/implicits.md b/_overviews/macros/implicits.md index 1f660d6ec9..04852d0f2d 100644 --- a/_overviews/macros/implicits.md +++ b/_overviews/macros/implicits.md @@ -140,7 +140,7 @@ macro, which synthesizes `Iso[C, L]`, scalac will helpfully infer `L` as `Nothin As demonstrated by [https://github.com/scala/scala/pull/2499](https://github.com/scala/scala/pull/2499), the solution to the outlined problem is extremely simple and elegant. -In 2.10 we don't allow macro applications to expand until all their type arguments are inferred. However we don't have to do that. +In 2.10 we don't allow macro applications to expand until all their type arguments are inferred. However, we don't have to do that. The typechecker can infer as much as it possibly can (e.g. in the running example `C` will be inferred to `Foo` and `L` will remain uninferred) and then stop. After that we expand the macro and then proceed with type inference using the type of the expansion to help the typechecker with previously undetermined type arguments. This is how it's implemented in Scala 2.11.0. 
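The implicits.md hunk above refers to the `Iso[C, L]` example, where a materializer macro in 2.10 saw `L` prematurely fixed to `Nothing`. As a rough, non-macro baseline of the inference behaviour the 2.11 change restores, a plain implicit instance lets the typechecker fill in `L` from implicit search (all names below are illustrative, not from the guide):

```scala
// Functional-dependency-style shape: C determines L via the implicit instance.
trait Iso[C, L] { def to(c: C): L }

case class Foo(i: Int, s: String)

object Iso {
  implicit val fooIso: Iso[Foo, (Int, String)] =
    new Iso[Foo, (Int, String)] { def to(c: Foo) = (c.i, c.s) }
}

object IsoDemo extends App {
  // L is left undetermined at the call site and recovered from the implicit.
  def convert[C, L](c: C)(implicit iso: Iso[C, L]): L = iso.to(c)
  println(convert(Foo(23, "foo"))) // prints (23,foo)
}
```

With a macro-based materializer the same call site is where, pre-2.11, scalac would have committed to `L = Nothing` before expansion.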
diff --git a/_overviews/macros/paradise.md b/_overviews/macros/paradise.md index 14e61dd9a5..1fb058c210 100644 --- a/_overviews/macros/paradise.md +++ b/_overviews/macros/paradise.md @@ -35,7 +35,7 @@ to learn more about our support guarantees. Some features in macro paradise bring a compile-time dependency on the macro paradise plugin, some features do not, however none of those features need macro paradise at runtime. -Proceed to the [the feature list](roadmap.html) document for more information. +Proceed to [the feature list](roadmap.html) document for more information. Consult [https://github.com/scalamacros/sbt-example-paradise](https://github.com/scalamacros/sbt-example-paradise) for an end-to-end example, but in a nutshell working with macro paradise is as easy as adding the following two lines diff --git a/_overviews/macros/typemacros.md b/_overviews/macros/typemacros.md index 691b2f5e83..dbbf65995e 100644 --- a/_overviews/macros/typemacros.md +++ b/_overviews/macros/typemacros.md @@ -84,7 +84,7 @@ In Scala programs type macros can appear in one of five possible roles: type rol To put it in a nutshell, expansion of a type macro replace the usage of a type macro with a tree it returns. To find out whether an expansion makes sense, mentally replace some usage of a macro with its expansion and check whether the resulting program is correct. -For example, a type macro used as `TM(2)(3)` in `class C extends TM(2)(3)` can expand into `Apply(Ident(TypeName("B")), List(Literal(Constant(2))))`, because that would result in `class C extends B(2)`. However the same expansion wouldn't make sense if `TM(2)(3)` was used as a type in `def x: TM(2)(3) = ???`, because `def x: B(2) = ???` (given that `B` itself is not a type macro; if it is, it will be recursively expanded and the result of the expansion will determine validity of the program). 
+For example, a type macro used as `TM(2)(3)` in `class C extends TM(2)(3)` can expand into `Apply(Ident(TypeName("B")), List(Literal(Constant(2))))`, because that would result in `class C extends B(2)`. However, the same expansion wouldn't make sense if `TM(2)(3)` was used as a type in `def x: TM(2)(3) = ???`, because `def x: B(2) = ???` (given that `B` itself is not a type macro; if it is, it will be recursively expanded and the result of the expansion will determine validity of the program). ## Tips and tricks diff --git a/_overviews/macros/typeproviders.md b/_overviews/macros/typeproviders.md index 175126eab1..1e90c17003 100644 --- a/_overviews/macros/typeproviders.md +++ b/_overviews/macros/typeproviders.md @@ -85,7 +85,7 @@ captures the essence of the generated classes, providing a statically typed inte This approach to type providers is quite neat, because it can be used with production versions of Scala, however it has performance problems caused by the fact that Scala emits reflective calls when compiling accesses to members -of structural types. There are several strategies of dealing with that, but this margin is too narrow to contain them +of structural types. There are several strategies of dealing with that, but this margin is too narrow to contain them, so I refer you to an amazing blog series by Travis Brown for details: [post 1](https://meta.plasm.us/posts/2013/06/19/macro-supported-dsls-for-schema-bindings/), [post 2](https://meta.plasm.us/posts/2013/07/11/fake-type-providers-part-2/), [post 3](https://meta.plasm.us/posts/2013/07/12/vampire-methods-for-structural-types/). ## Public type providers diff --git a/_overviews/macros/untypedmacros.md b/_overviews/macros/untypedmacros.md index cfceefb78c..340121f241 100644 --- a/_overviews/macros/untypedmacros.md +++ b/_overviews/macros/untypedmacros.md @@ -18,7 +18,7 @@ for an explanation and suggested migration strategy. 
## Intuition Being statically typed is great, but sometimes that is too much of a burden. Take for example, the latest experiment of Alois Cochard with -implementing enums using type macros - the so called [Enum Paradise](https://github.com/aloiscochard/enum-paradise). Here's how Alois has +implementing enums using type macros - the so-called [Enum Paradise](https://github.com/aloiscochard/enum-paradise). Here's how Alois has to write his type macro, which synthesizes an enumeration module from a lightweight spec: object Days extends Enum('Monday, 'Tuesday, 'Wednesday...) @@ -56,9 +56,9 @@ of the linked JIRA issue. Untyped macros make the full power of textual abstract unit test provides details on this matter. If a macro has one or more untyped parameters, then when typing its expansions, the typechecker will do nothing to its arguments -and will pass them to the macro untyped. Even if some of the parameters do have type annotations, they will currently be ignored. This +and will pass them to the macro untyped. Even if some parameters do have type annotations, they will currently be ignored. This is something we plan on improving: [SI-6971](https://issues.scala-lang.org/browse/SI-6971). Since arguments aren't typechecked, you -also won't having implicits resolved and type arguments inferred (however, you can do both with `c.typeCheck` and `c.inferImplicitValue`). +also won't have implicits resolved and type arguments inferred (however, you can do both with `c.typeCheck` and `c.inferImplicitValue`). Explicitly provided type arguments will be passed to the macro as is. If type arguments aren't provided, they will be inferred as much as possible without typechecking the value arguments and passed to the macro in that state. Note that type arguments still get typechecked, but @@ -69,6 +69,6 @@ the first typecheck of a def macro expansion is performed against the return typ against the expected type of the expandee. 
More information can be found at Stack Overflow: [Static return type of Scala macros](https://stackoverflow.com/questions/13669974/static-return-type-of-scala-macros). Type macros never underwent the first typecheck, so nothing changes for them (and you won't be able to specify any return type for a type macro to begin with). -Finally the untyped macros patch enables using `c.Tree` instead of `c.Expr[T]` everywhere in signatures of macro implementations. +Finally, the untyped macros patch enables using `c.Tree` instead of `c.Expr[T]` everywhere in signatures of macro implementations. Both for parameters and return types, all four combinations of untyped/typed in macro def and tree/expr in macro impl are supported. Check our unit tests for more information: test/files/run/macro-untyped-conformance. diff --git a/_overviews/parallel-collections/architecture.md b/_overviews/parallel-collections/architecture.md index 2b64486f63..3ab2014f75 100644 --- a/_overviews/parallel-collections/architecture.md +++ b/_overviews/parallel-collections/architecture.md @@ -93,7 +93,7 @@ regular collections framework's corresponding traits, as shown below.
The goal is of course to integrate parallel collections as tightly as possible -with sequential collections, so as to allow for straightforward substitution +with sequential collections, to allow for straightforward substitution of sequential and parallel collections. In order to be able to have a reference to a collection which may be either diff --git a/_overviews/parallel-collections/custom-parallel-collections.md b/_overviews/parallel-collections/custom-parallel-collections.md index 7ea4330c62..88307d3910 100644 --- a/_overviews/parallel-collections/custom-parallel-collections.md +++ b/_overviews/parallel-collections/custom-parallel-collections.md @@ -72,10 +72,10 @@ Finally, methods `split` and `psplit` are used to create splitters which traverse subsets of the elements of the current splitter. Method `split` has the contract that it returns a sequence of splitters which traverse disjoint, non-overlapping subsets of elements that the current splitter traverses, none -of which is empty. If the current splitter has 1 or less elements, then +of which is empty. If the current splitter has 1 or fewer elements, then `split` just returns a sequence of this splitter. Method `psplit` has to return a sequence of splitters which traverse exactly as many elements as -specified by the `sizes` parameter. If the `sizes` parameter specifies less +specified by the `sizes` parameter. If the `sizes` parameter specifies fewer elements than the current splitter, then an additional splitter with the rest of the elements is appended at the end. If the `sizes` parameter requires more elements than there are remaining in the current splitter, it will append an @@ -112,9 +112,9 @@ may be suboptimal - producing a string again from the vector after filtering may ## Parallel collections with combiners -Lets say we want to `filter` the characters of the parallel string, to get rid +Let's say we want to `filter` the characters of the parallel string, to get rid of commas for example. 
As noted above, calling `filter` produces a parallel -vector and we want to obtain a parallel string (since some interface in the +vector, and we want to obtain a parallel string (since some interface in the API might require a sequential string). To avoid this, we have to write a combiner for the parallel string collection. @@ -134,7 +134,7 @@ is internally used by `filter`. protected[this] override def newCombiner: Combiner[Char, ParString] = new ParStringCombiner Next we define the `ParStringCombiner` class. Combiners are subtypes of -builders and they introduce an additional method called `combine`, which takes +builders, and they introduce an additional method called `combine`, which takes another combiner as an argument and returns a new combiner which contains the elements of both the current and the argument combiner. The current and the argument combiner are invalidated after calling `combine`. If the argument is @@ -195,7 +195,7 @@ live with this sequential bottleneck. There are no predefined recipes-- it depends on the data-structure at hand, and usually requires a bit of ingenuity on the implementer's -part. However there are a few approaches usually taken: +part. However, there are a few approaches usually taken: 1. Concatenation and merge. Some data-structures have efficient implementations (usually logarithmic) of these operations. 
diff --git a/_overviews/parallel-collections/overview.md b/_overviews/parallel-collections/overview.md index b03bec798e..1ced205636 100644 --- a/_overviews/parallel-collections/overview.md +++ b/_overviews/parallel-collections/overview.md @@ -17,7 +17,7 @@ If you're using Scala 2.13+ and want to use Scala's parallel collections, you'll ## Motivation Amidst the shift in recent years by processor manufacturers from single to -multi-core architectures, academia and industry alike have conceded that +multicore architectures, academia and industry alike have conceded that _Popular Parallel Programming_ remains a formidable challenge. Parallel collections were included in the Scala standard library in an effort @@ -65,7 +65,7 @@ from Scala's (sequential) collection library, including: In addition to a common architecture, Scala's parallel collections library additionally shares _extensibility_ with the sequential collections library. That is, like normal sequential collections, users can integrate their own -collection types and automatically inherit all of the predefined (parallel) +collection types and automatically inherit all the predefined (parallel) operations available on the other parallel collections in the standard library. @@ -155,13 +155,13 @@ sections of this guide. While the parallel collections abstraction feels very much the same as normal sequential collections, it's important to note that its semantics differs, -especially with regards to side-effects and non-associative operations. +especially in regard to side-effects and non-associative operations. In order to see how this is the case, first, we visualize _how_ operations are performed in parallel. 
Conceptually, Scala's parallel collections framework parallelizes an operation on a parallel collection by recursively "splitting" a given collection, applying an operation on each partition of the collection -in parallel, and re-"combining" all of the results that were completed in +in parallel, and re-"combining" all the results that were completed in parallel. These concurrent, and "out-of-order" semantics of parallel collections lead to @@ -176,7 +176,7 @@ Given the _concurrent_ execution semantics of the parallel collections framework, operations performed on a collection which cause side-effects should generally be avoided, in order to maintain determinism. A simple example is by using an accessor method, like `foreach` to increment a `var` -declared outside of the closure which is passed to `foreach`. +declared outside the closure which is passed to `foreach`. scala> var sum = 0 sum: Int = 0 diff --git a/_overviews/plugins/index.md b/_overviews/plugins/index.md index c28e441f08..ccbdad19e0 100644 --- a/_overviews/plugins/index.md +++ b/_overviews/plugins/index.md @@ -35,7 +35,7 @@ You should not actually need to modify the Scala compiler very frequently, because Scala's light, flexible syntax will frequently allow you to provide a better solution using a clever library. -There are some times, though, where a compiler modification is the +There are some cases, though, where a compiler modification is the best choice even for Scala. Popular compiler plugins (as of 2018) include: diff --git a/_overviews/quasiquotes/expression-details.md b/_overviews/quasiquotes/expression-details.md index 62e810697d..6ef424fac1 100644 --- a/_overviews/quasiquotes/expression-details.md +++ b/_overviews/quasiquotes/expression-details.md @@ -16,7 +16,7 @@ permalink: /overviews/quasiquotes/:title.html 1. `Val`s, `Var`s and `Def`s without the right-hand side have it set to `q""`. 2. Abstract type definitions without bounds have them set to `q""`. -3. 
`Try` expressions without a finally clause have it set to `q""`. +3. `Try` expressions without a `finally` clause have it set to `q""`. 4. `Case` clauses without guards have them set to `q""`. The default `toString` formats `q""` as ``. @@ -58,13 +58,13 @@ During deconstruction you can use [unlifting]({{ site.baseurl }}/overviews/quasi scala> val q"${x: Int}" = q"1" x: Int = 1 -Similarly it would work with all the literal types except `Null`. (see [standard unliftables]({{ site.baseurl }}/overviews/quasiquotes/unlifting.html#standard-unliftables)) +Similarly, it would work with all the literal types except `Null`. (see [standard unliftables]({{ site.baseurl }}/overviews/quasiquotes/unlifting.html#standard-unliftables)) ## Identifier and Selection Identifiers and member selections are two fundamental primitives that let you refer to other definitions. A combination of two of them is also known as a `RefTree`. -Each term identifier is defined by its name and whether or not it is backquoted: +Each term identifier is defined by its name and whether it is backquoted: scala> val name = TermName("Foo") name: universe.TermName = Foo @@ -90,7 +90,7 @@ Apart from matching on identifiers with a given name, you can also extract their Name ascription is important here because without it you'll get a pattern that is equivalent to regular pattern variable binding. -Similarly you can create and extract member selections: +Similarly, you can create and extract member selections: scala> val member = TermName("bar") member: universe.TermName = bar @@ -112,7 +112,7 @@ This tree supports following variations: So an unqualified `q"this"` is equivalent to `q"${tpnme.EMPTY}.this"`. 
-Similarly for `super` we have:
+Similarly, for `super` we have:
 
     scala> val q"$name.super[$qual].$field" = q"super.foo"
     name: universe.TypeName =
@@ -145,7 +145,7 @@ This can be accomplished with the following:
     type arguments: List(Int), value arguments: List(1, 2)
     type arguments: List(), value arguments: List(scala.Symbol("a"), scala.Symbol("b"))
 
-As you can see, we were able to match both calls regardless as to whether or not a specific type application exists. This happens because the type application matcher extracts the empty list of type arguments if the tree is not an actual type application, making it possible to handle both situations uniformly.
+As you can see, we were able to match both calls regardless of whether a specific type application exists. This happens because the type application matcher extracts the empty list of type arguments if the tree is not an actual type application, making it possible to handle both situations uniformly.
 
 It is recommended to always include type applications when you match on a function with type arguments, as they will be inserted by the compiler during type checking, even if the user didn't write them explicitly:
@@ -175,7 +175,7 @@ Here we might get one, or two subsequent value applications:
 
     scala> val q"g(...$argss)" = q"g"
     argss: List[List[universe.Tree]] = List()
 
-Therefore it's recommended to use more specific patterns that check that ensure the extracted `argss` is not empty.
+Therefore, it's recommended to use more specific patterns that ensure the extracted `argss` is not empty.
Similarly to type arguments, implicit value arguments are automatically inferred during type checking:
@@ -244,7 +244,7 @@ The *throw* expression is used to throw a throwable:
 
 ## Ascription
 
-Ascriptions let users annotate the type of an intermediate expression:
+Ascriptions let users annotate the type of an intermediate expression:
 
     scala> val ascribed = q"(1 + 1): Int"
     ascribed: universe.Typed = (1.$plus(1): Int)
@@ -469,7 +469,7 @@ There are three ways to create anonymous function:
 
     scala> val f3 = q"(a: Int) => a + 1"
     anon3: universe.Function = ((a: Int) => a.$plus(1))
 
-The first one uses the placeholder syntax. The second one names the function parameter but still relies on type inference to infer its type. An the last one explicitly defines the function parameter. Due to an implementation restriction, the second notation can only be used in parentheses or inside another expression. If you leave them out the you must specify the parameter types.
+The first one uses the placeholder syntax. The second one names the function parameter but still relies on type inference to infer its type. And the last one explicitly defines the function parameter. Due to an implementation restriction, the second notation can only be used in parentheses or inside another expression. If you leave them out then you must specify the parameter types.
 
 Parameters are represented as [Vals]({{ site.baseurl }}/overviews/quasiquotes/definition-details.html#val-and-var-definitions).
If you want to programmatically create a `val` that should have its type inferred you need to use the [empty type]({{ site.baseurl }}/overviews/quasiquotes/type-details.html#empty-type):
@@ -576,7 +576,7 @@ Each enumerator in the comprehension can be expressed with the `fq"..."` interpo
 
     scala> val `for-yield` = q"for (..$enums) yield y"
     for-yield: universe.Tree
 
-Similarly one can deconstruct the `for-yield` back into a list of enumerators and body:
+Similarly, one can deconstruct the `for-yield` back into a list of enumerators and body:
 
     scala> val q"for (..$enums) yield $body" = `for-yield`
     enums: List[universe.Tree] = List(`<-`((x @ _), xs), `if`(x.$greater(0)), (y @ _) = x.$times(2))
@@ -609,10 +609,10 @@ Selectors are extracted as pattern trees that are syntactically similar to selec
 
 1. Simple identifier selectors are represented as pattern bindings: `pq"bar"`
 2. Renaming selectors are represented as thin arrow patterns: `pq"baz -> boo"`
-3. Unimport selectors are represented as thin arrows with a wildcard right hand side: `pq"poison -> _"`
+3. Unimport selectors are represented as thin arrows with a wildcard right-hand side: `pq"poison -> _"`
 4. The wildcard selector is represented as a wildcard pattern: `pq"_"`
 
-Similarly one construct imports back from a programmatically created list of selectors:
+Similarly, one can construct imports back from a programmatically created list of selectors:
 
     scala> val ref = q"a.b"
     scala> val sels = List(pq"foo -> _", pq"_")
diff --git a/_overviews/quasiquotes/hygiene.md b/_overviews/quasiquotes/hygiene.md
index 1523655696..f08a9145de 100644
--- a/_overviews/quasiquotes/hygiene.md
+++ b/_overviews/quasiquotes/hygiene.md
@@ -12,7 +12,7 @@ permalink: /overviews/quasiquotes/:title.html
 
 The notion of hygiene has been widely popularized by macro research in Scheme. A code generator is called hygienic if it ensures the absence of name clashes between regular and generated code, preventing accidental capture of identifiers.
As numerous experience reports show, hygiene is of great importance to code generation, because name binding problems are often non-obvious and lack of hygiene might manifest itself in subtle ways.
 
-Sophisticated macro systems such as Racket's have mechanisms that make macros hygienic without any effort from macro writers. In Scala we don't have automatic hygiene - both of our codegen facilities (compile-time codegen with macros and runtime codegen with toolboxes) require programmers to handle hygiene manually. You must know how to work around the absence of hygiene, which is what this section is about.
+Sophisticated macro systems such as Racket's have mechanisms that make macros hygienic without any effort from macro writers. In Scala, we don't have automatic hygiene - both of our codegen facilities (compile-time codegen with macros and runtime codegen with toolboxes) require programmers to handle hygiene manually. You must know how to work around the absence of hygiene, which is what this section is about.
 
 Preventing name clashes between regular and generated code means two things. First, we must ensure that, regardless of the context in which we put generated code, its meaning will not change (*referential transparency*). Second, we must make certain that regardless of the context in which we splice regular code, its meaning will not change (often called *hygiene in the narrow sense*). Let's see what can be done to this end on a series of examples.
@@ -56,7 +56,7 @@ Here we can see that the unqualified reference to `Map` does not respect our cus
     MyMacro(2)
   }
 
-If we compile both the macro and it's usage, we'll see that `println` will not be called when the application runs. This will happen because, after macro expansion, `Test.scala` will look like:
+If we compile both the macro and its usage, we'll see that `println` will not be called when the application runs.
This will happen because, after macro expansion, `Test.scala` will look like: // Expanded Test.scala package example diff --git a/_overviews/quasiquotes/intro.md b/_overviews/quasiquotes/intro.md index 4ffba9e912..de31e4f162 100644 --- a/_overviews/quasiquotes/intro.md +++ b/_overviews/quasiquotes/intro.md @@ -90,7 +90,7 @@ Similarly, patterns and expressions are also not equivalent: It's extremely important to use the right interpolator for the job in order to construct a valid syntax tree. -Additionally there are two auxiliary interpolators that let you work with minor areas of scala syntax: +Additionally, there are two auxiliary interpolators that let you work with minor areas of scala syntax:   | Used for ----|------------------------------------- diff --git a/_overviews/quasiquotes/setup.md b/_overviews/quasiquotes/setup.md index b121d666d6..155ee8a32b 100644 --- a/_overviews/quasiquotes/setup.md +++ b/_overviews/quasiquotes/setup.md @@ -18,9 +18,9 @@ All examples and code snippets in this guide are run under in 2.11 REPL with one scala> val universe: scala.reflect.runtime.universe.type = scala.reflect.runtime.universe scala> import universe._ -A wildcard import from a universe (be it a runtime reflection universe like here or a compile-time universe provided in macros) is all that's needed to use quasiquotes. All of the examples will assume that import. +A wildcard import from a universe (be it a runtime reflection universe like here or a compile-time universe provided in macros) is all that's needed to use quasiquotes. All the examples will assume that import. 
-Additionally some examples that use `ToolBox` API will need a few more lines to get things rolling:
+Additionally, some examples that use `ToolBox` API will need a few more lines to get things rolling:
 
     scala> import scala.reflect.runtime.currentMirror
     scala> import scala.tools.reflect.ToolBox
diff --git a/_overviews/quasiquotes/type-details.md b/_overviews/quasiquotes/type-details.md
index f67cd4e563..a3cd254d24 100644
--- a/_overviews/quasiquotes/type-details.md
+++ b/_overviews/quasiquotes/type-details.md
@@ -37,7 +37,7 @@ It is recommended to always ascribe the name as `TypeName` when you work with ty
 
 ## Singleton Type
 
-A singleton type is a way to express a type of a term definition that is being referenced:
+A singleton type is a way to express the type of a term definition that is being referenced:
 
     scala> val singleton = tq"foo.bar.type".sr
     singleton: String = SingletonTypeTree(Select(Ident(TermName("foo")), TermName("bar")))
@@ -124,7 +124,7 @@ A compound type lets users express a combination of a number of types with an op
     parents: List[universe.Tree] = List(A, B, C)
     defns: List[universe.Tree] = List()
 
-Braces after parents are required to signal that this type is a compound type, even if there are no refinements and we just want to extract a sequence of types combined with the `with` keyword.
+Braces after parents are required to signal that this type is a compound type, even if there are no refinements, and we just want to extract a sequence of types combined with the `with` keyword.
 
 On the other side of the spectrum are pure refinements without explicit parents (a.k.a. structural types):
diff --git a/_overviews/quasiquotes/unlifting.md b/_overviews/quasiquotes/unlifting.md
index e23f2d7152..adb8d4ed41 100644
--- a/_overviews/quasiquotes/unlifting.md
+++ b/_overviews/quasiquotes/unlifting.md
@@ -65,7 +65,7 @@ Here one must pay attention to a few nuances:
 
 1.
Similarly to `Liftable`, `Unliftable` defines a helper `apply` function in the companion object to simplify the creation of `Unliftable` instances. It - take a type parameter `T` as well as a partial function `PartialFunction[Tree, T]` + takes a type parameter `T` as well as a partial function `PartialFunction[Tree, T]` and returns an `Unliftable[T]`. At all inputs where a partial function is defined it is expected to return an instance of `T` unconditionally. diff --git a/_overviews/reflection/overview.md b/_overviews/reflection/overview.md index 3de78d0525..25205074ec 100644 --- a/_overviews/reflection/overview.md +++ b/_overviews/reflection/overview.md @@ -21,7 +21,7 @@ and logic programming paradigms. While some languages are built around reflection as a guiding principle, many languages progressively evolve their reflection abilities over time. -Reflection involves the ability to **reify** (ie. make explicit) otherwise-implicit +Reflection involves the ability to **reify** (i.e. make explicit) otherwise-implicit elements of a program. These elements can be either static program elements like classes, methods, or expressions, or dynamic elements like the current continuation or execution events such as method invocations and field accesses. diff --git a/_overviews/reflection/symbols-trees-types.md b/_overviews/reflection/symbols-trees-types.md index faad275ac0..4fba8ca28e 100644 --- a/_overviews/reflection/symbols-trees-types.md +++ b/_overviews/reflection/symbols-trees-types.md @@ -694,11 +694,11 @@ section: It's important to note that, unlike `reify`, toolboxes aren't limited by the typeability requirement-- although this flexibility is achieved by sacrificing -robustness. That is, here we can see that `parse`, unlike `reify`, doesn’t +robustness. That is, here we can see that `parse`, unlike `reify`, doesn't reflect the fact that `println` should be bound to the standard `println` method. -_Note:_ when using macros, one shouldn’t use `ToolBox.parse`. 
This is because +_Note:_ when using macros, one shouldn't use `ToolBox.parse`. This is because there’s already a `parse` method built into the macro context. For example: bash$ scala -Yrepl-class-based:false @@ -726,7 +726,7 @@ and execute trees. In addition to outlining the structure of the program, trees also hold important information about the semantics of the program encoded in `symbol` (a symbol assigned to trees that introduce or reference definitions), and -`tpe` (the type of the tree). By default these fields are empty, but +`tpe` (the type of the tree). By default, these fields are empty, but typechecking fills them in. When using the runtime reflection framework, typechecking is implemented by diff --git a/_overviews/reflection/thread-safety.md b/_overviews/reflection/thread-safety.md index 862d465872..6c5aaa2e11 100644 --- a/_overviews/reflection/thread-safety.md +++ b/_overviews/reflection/thread-safety.md @@ -20,7 +20,7 @@ and to look up technical details, and here's a concise summary of the state of t

NEW Thread safety issues have been fixed in Scala 2.11.0-RC1, but we are going to keep this document available for now, since the problem still remains in the Scala 2.10.x series, and we currently don't have concrete plans on when the fix is going to be backported.

-Currently we know about two kinds of races associated with reflection. First of all, reflection initialization (the code that is called +Currently, we know about two kinds of races associated with reflection. First of all, reflection initialization (the code that is called when `scala.reflect.runtime.universe` is accessed for the first time) cannot be safely called from multiple threads. Secondly, symbol initialization (the code that is called when symbol's flags or type signature are accessed for the first time) isn't safe as well. Here's a typical manifestation: diff --git a/_overviews/repl/overview.md b/_overviews/repl/overview.md index 38d5008dd6..c462643399 100644 --- a/_overviews/repl/overview.md +++ b/_overviews/repl/overview.md @@ -79,4 +79,4 @@ Its facilities can be witnessed using `:imports` or `-Xprint:parser`. ### Contributing to Scala REPL The REPL source is part of the Scala project. Issues are tracked by the standard -mechanism for the project and pull requests are accepted at [the github repository](https://github.com/scala/scala). +mechanism for the project and pull requests are accepted at [the GitHub repository](https://github.com/scala/scala). diff --git a/_overviews/scala3-book/ca-given-using-clauses.md b/_overviews/scala3-book/ca-given-using-clauses.md index 704f83bd82..c47d14fc25 100644 --- a/_overviews/scala3-book/ca-given-using-clauses.md +++ b/_overviews/scala3-book/ca-given-using-clauses.md @@ -32,7 +32,7 @@ Let us assume that the configuration does not change throughout most of our code Passing `c` to each and every method call (like `renderWidget`) becomes very tedious and makes our program more difficult to read, since we need to ignore the `c` argument. #### Using `using` to mark parameters as contextual -In Scala 3, we can mark some of the parameters of our methods as _contextual_. +In Scala 3, we can mark some parameters of our methods as _contextual_. 
```scala def renderWebsite(path: String)(using c: Config): String = "" + renderWidget(List("cart")) + "" @@ -65,7 +65,7 @@ Like we specified our parameter section with `using`, we can also explicitly pro ```scala renderWebsite("/home")(using config) ``` -Explicitly providing contextual parameters can be useful if we have multiple different values in scope that would make sense and we want to make sure that the correct one is passed to the function. +Explicitly providing contextual parameters can be useful if we have multiple different values in scope that would make sense, and we want to make sure that the correct one is passed to the function. For all other cases, as we will see in the next Section, there is also another way to bring contextual values into scope. diff --git a/_overviews/scala3-book/ca-multiversal-equality.md b/_overviews/scala3-book/ca-multiversal-equality.md index a106ed0856..215f2e2a7c 100644 --- a/_overviews/scala3-book/ca-multiversal-equality.md +++ b/_overviews/scala3-book/ca-multiversal-equality.md @@ -192,7 +192,7 @@ println(aBook == pBook) // true (works because of `equals` in `AudioBook`) println(pBook == aBook) // false ``` -Currently the `PrintedBook` book doesn’t have an `equals` method, so the second comparison returns `false`. +Currently, the `PrintedBook` book doesn’t have an `equals` method, so the second comparison returns `false`. To enable that comparison, just override the `equals` method in `PrintedBook`. You can find additional information on [multiversal equality][ref-equal] in the reference documentation. 
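The asymmetry noted in the multiversal-equality hunk above (`aBook == pBook` is `true` while `pBook == aBook` is `false`) comes from only one side overriding `equals`. A minimal sketch with simplified stand-ins for the page's `AudioBook` and `PrintedBook` classes (field names are illustrative):

```scala
// PrintedBook does not override equals, so it falls back to
// reference equality inherited from AnyRef.
class PrintedBook(val title: String)

class AudioBook(val title: String) {
  // Accepts both AudioBook and PrintedBook on the right-hand side.
  // (A complete implementation would also override hashCode.)
  override def equals(other: Any): Boolean = other match {
    case that: AudioBook   => this.title == that.title
    case that: PrintedBook => this.title == that.title
    case _                 => false
  }
}
```

Here `aBook == pBook` dispatches to `AudioBook.equals`, which knows about `PrintedBook`, while `pBook == aBook` dispatches to the default reference-equality `equals`, hence the asymmetric results until `PrintedBook` overrides `equals` as well.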
diff --git a/_overviews/scala3-book/collections-classes.md b/_overviews/scala3-book/collections-classes.md index e9b7adec45..f3dc1d03b4 100644 --- a/_overviews/scala3-book/collections-classes.md +++ b/_overviews/scala3-book/collections-classes.md @@ -55,7 +55,7 @@ And this figure shows all collections in package _scala.collection.mutable_: ![Mutable collection hierarchy][collections3] -Having seen that detailed view of all of the collections types, the following sections introduce some of the common types you’ll use on a regular basis. +Having seen that detailed view of all the collections types, the following sections introduce some common types you’ll use on a regular basis. {% comment %} NOTE: those images come from this page: https://docs.scala-lang.org/overviews/collections-2.13/overview.html @@ -150,7 +150,7 @@ val things: List[Any] = List(1, "two", 3.0) ### Adding elements to a List Because `List` is immutable, you can’t add new elements to it. -Instead you create a new list by prepending or appending elements to an existing `List`. +Instead, you create a new list by prepending or appending elements to an existing `List`. For instance, given this `List`: ```scala @@ -300,7 +300,7 @@ val people = Vector( ``` Because `Vector` is immutable, you can’t add new elements to it. -Instead you create a new sequence by appending or prepending elements to an existing `Vector`. +Instead, you create a new sequence by appending or prepending elements to an existing `Vector`. These examples show how to _append_ elements to a `Vector`: ```scala @@ -338,7 +338,7 @@ Ed ## ArrayBuffer Use `ArrayBuffer` when you need a general-purpose, mutable indexed sequence in your Scala applications. -It’s mutable so you can change its elements, and also resize it. +It’s mutable, so you can change its elements, and also resize it. Because it’s indexed, random access of elements is fast. 
### Creating an ArrayBuffer @@ -453,7 +453,7 @@ val ak = states("AK") // ak: String = Alaska val al = states("AL") // al: String = Alabama ``` -In practice you’ll also use methods like `keys`, `keySet`, `keysIterator`, `for` loops, and higher-order functions like `map` to work with `Map` keys and values. +In practice, you’ll also use methods like `keys`, `keySet`, `keysIterator`, `for` loops, and higher-order functions like `map` to work with `Map` keys and values. ### Adding elements to a Map diff --git a/_overviews/scala3-book/collections-methods.md b/_overviews/scala3-book/collections-methods.md index 4316dea761..6d09f2f1bf 100644 --- a/_overviews/scala3-book/collections-methods.md +++ b/_overviews/scala3-book/collections-methods.md @@ -76,7 +76,7 @@ In those numbered examples: This much verbosity is _rarely_ required, and only needed in the most complex usages. 2. The compiler knows that `a` contains `Int`, so it’s not necessary to restate that here. 3. Parentheses aren’t needed when you have only one parameter, such as `i`. -4. When you have a single parameter and it appears only once in your anonymous function, you can replace the parameter with `_`. +4. When you have a single parameter, and it appears only once in your anonymous function, you can replace the parameter with `_`. The [Anonymous Function][lambdas] provides more details and examples of the rules related to shortening lambda expressions. @@ -245,7 +245,7 @@ Because of this you may want to use `headOption` instead of `head`, especially w emptyList.headOption // None ``` -As shown, it doesn’t throw an exception, it simply returns the type `Option` that has the value `None`. +As shown, it doesn't throw an exception, it simply returns the type `Option` that has the value `None`. You can learn more about this programming style in the [Functional Programming][fp-intro] chapter. 
@@ -270,7 +270,7 @@ Just like `head`, `tail` also works on strings: "bar".tail // "ar" ``` -`tail` throws an _java.lang.UnsupportedOperationException_ if the list is empty, so just like `head` and `headOption`, there’s also a `tailOption` method, which is preferred in functional programming. +`tail` throws a _java.lang.UnsupportedOperationException_ if the list is empty, so just like `head` and `headOption`, there’s also a `tailOption` method, which is preferred in functional programming. A list can also be matched, so you can write expressions like this: diff --git a/_overviews/scala3-book/concurrency.md b/_overviews/scala3-book/concurrency.md index bda65f21a9..3cbe50c4e0 100644 --- a/_overviews/scala3-book/concurrency.md +++ b/_overviews/scala3-book/concurrency.md @@ -8,7 +8,7 @@ next-page: scala-tools --- -When you want to write parallel and concurrent applications in Scala, you _can_ use the native Java `Thread`---but the Scala [Future](https://www.scala-lang.org/api/current/scala/concurrent/Future$.html) offers a more high level and idiomatic approach so it’s preferred, and covered in this chapter. +When you want to write parallel and concurrent applications in Scala, you _can_ use the native Java `Thread`---but the Scala [Future](https://www.scala-lang.org/api/current/scala/concurrent/Future$.html) offers a more high level and idiomatic approach, so it’s preferred, and covered in this chapter. @@ -48,7 +48,7 @@ val x = aShortRunningTask() println("Here") ``` -Conversely, if `aShortRunningTask` is created as a `Future`, the `println` statement is printed almost immediately because `aShortRunningTask` is spawned off on some other thread---it doesn’t block. +Conversely, if `aShortRunningTask` is created as a `Future`, the `println` statement is printed almost immediately because `aShortRunningTask` is spawned off on some other thread---it doesn't block. 
In this chapter you’ll see how to use futures, including how to run multiple futures in parallel and combine their results in a `for` expression. You’ll also see examples of methods that are used to handle the value in a future once it returns. @@ -83,7 +83,7 @@ def longRunningAlgorithm() = 42 ``` -That fancy algorithm returns the integer value `42` after a ten second delay. +That fancy algorithm returns the integer value `42` after a ten-second delay. Now call that algorithm by wrapping it into the `Future` constructor, and assigning the result to a variable: ```scala @@ -92,7 +92,7 @@ eventualInt: scala.concurrent.Future[Int] = Future() ``` Right away, your computation---the call to `longRunningAlgorithm()`---begins running. -If you immediately check the value of the variable `eventualInt`, you see that the future hasn’t been completed yet: +If you immediately check the value of the variable `eventualInt`, you see that the future hasn't been completed yet: ```scala scala> eventualInt @@ -157,7 +157,7 @@ Got the callback, value = 42 ## Other Future methods The `Future` class has other methods you can use. -It has some of the methods that you find on Scala collections classes, including: +It has some methods that you find on Scala collections classes, including: - `filter` - `flatMap` @@ -269,7 +269,7 @@ But because they’re run in parallel, the total time is just slightly longer th > r1 + r2 + r3 > ~~~ > So, if you want the computations to be possibly run in parallel, remember -> to run them outside of the `for` expression. +> to run them outside the `for` expression. ### A method that returns a future diff --git a/_overviews/scala3-book/control-structures.md b/_overviews/scala3-book/control-structures.md index 5ace74f351..834f532638 100644 --- a/_overviews/scala3-book/control-structures.md +++ b/_overviews/scala3-book/control-structures.md @@ -484,9 +484,9 @@ This is how the expression works: 1. 
The `for` expression starts to iterate over the values in the range `(10, 11, 12)`. It first works on the value `10`, multiplies it by `2`, then _yields_ that result, the value `20`. 2. Next, it works on the `11`---the second value in the range. - It multiples it by `2`, then yields the value `22`. + It multiplies it by `2`, then yields the value `22`. You can think of these yielded values as accumulating in a temporary holding place. -3. Finally the loop gets the number `12` from the range, multiplies it by `2`, yielding the number `24`. +3. Finally, the loop gets the number `12` from the range, multiplies it by `2`, yielding the number `24`. The loop completes at this point and yields the final result, the `Vector(20, 22, 24)`. {% comment %} @@ -503,7 +503,7 @@ val list = (10 to 12).map(i => i * 2) {% endtab %} {% endtabs %} -`for` expressions can be used any time you need to traverse all of the elements in a collection and apply an algorithm to those elements to create a new list. +`for` expressions can be used any time you need to traverse all the elements in a collection and apply an algorithm to those elements to create a new list. Here’s an example that shows how to use a block of code after the `yield`: @@ -538,7 +538,7 @@ val capNames = for name <- names yield ### Using a `for` expression as the body of a method Because a `for` expression yields a result, it can be used as the body of a method that returns a useful value. -This method returns all of the values in a given list of integers that are between `3` and `10`: +This method returns all the values in a given list of integers that are between `3` and `10`: {% tabs control-structures-20 class=tabs-scala-version %} {% tab 'Scala 2' for=control-structures-20 %} @@ -856,7 +856,7 @@ Using a `match` expression as the body of a method is a very common use. #### Match expressions support many different types of patterns There are many different forms of patterns that can be used to write `match` expressions. 
-Examples includes: +Examples include: - Constant patterns (such as `case 3 => `) - Sequence patterns (such as `case List(els : _*) =>`) @@ -986,6 +986,6 @@ finally {% endtab %} {% endtabs %} -Assuming that the `openAndReadAFile` method uses the Java `java.io.*` classes to read a file and doesn’t catch its exceptions, attempting to open and read a file can result in both a `FileNotFoundException` and an `IOException`, and those two exceptions are caught in the `catch` block of this example. +Assuming that the `openAndReadAFile` method uses the Java `java.io.*` classes to read a file and doesn't catch its exceptions, attempting to open and read a file can result in both a `FileNotFoundException` and an `IOException`, and those two exceptions are caught in the `catch` block of this example. [matchable]: {{ site.scala3ref }}/other-new-features/matchable.html diff --git a/_overviews/scala3-book/domain-modeling-fp.md b/_overviews/scala3-book/domain-modeling-fp.md index 22783b63df..047883b605 100644 --- a/_overviews/scala3-book/domain-modeling-fp.md +++ b/_overviews/scala3-book/domain-modeling-fp.md @@ -178,12 +178,12 @@ To compute the price of the crust we simultaneously pattern match on both the si > All they do is simply receive values and compute the result. {% comment %} -I’ve added this comment per [this Github comment](https://github.com/scalacenter/docs.scala-lang/pull/3#discussion_r543372428). +I’ve added this comment per [this GitHub comment](https://github.com/scalacenter/docs.scala-lang/pull/3#discussion_r543372428). To that point, I’ve added these definitions here from our Slack conversation, in case anyone wants to update the “pure function” definition. If not, please delete this comment. Sébastien: ---------- -A function `f` is pure if, given the same input `x`, it will always return the same output `f(x)`, and it never modifies any state outside of it (therefore potentially causing other functions to behave differently in the future). 
+A function `f` is pure if, given the same input `x`, it will always return the same output `f(x)`, and it never modifies any state outside it (therefore potentially causing other functions to behave differently in the future). Jonathan: --------- @@ -266,13 +266,13 @@ However, there are also a few tradeoffs that should be considered: - It tightly couples the functionality to your data model. In particular, the companion object needs to be defined in the same file as your `case` class. -- It might be unclear where to define functions like `crustPrice` that could equally well be placed in an companion object of `CrustSize` or `CrustType`. +- It might be unclear where to define functions like `crustPrice` that could equally well be placed in a companion object of `CrustSize` or `CrustType`. ## Modules A second way to organize behavior is to use a “modular” approach. -The book, *Programming in Scala*, defines a *module* as, “a ‘smaller program piece’ with a well defined interface and a hidden implementation.” +The book, *Programming in Scala*, defines a *module* as, “a ‘smaller program piece’ with a well-defined interface and a hidden implementation.” Let’s look at what this means. ### Creating a `PizzaService` interface diff --git a/_overviews/scala3-book/domain-modeling-oop.md b/_overviews/scala3-book/domain-modeling-oop.md index 744a45023a..db91c504f2 100644 --- a/_overviews/scala3-book/domain-modeling-oop.md +++ b/_overviews/scala3-book/domain-modeling-oop.md @@ -23,7 +23,7 @@ Scala provides all the necessary tools for object-oriented design: - **Access modifiers** lets you control which members of a class can be accessed by which part of the code. ## Traits -Perhaps different than other languages with support for OOP, such as Java, the primary tool of decomposition in Scala is not classes, but traits. +Perhaps different from other languages with support for OOP, such as Java, the primary tool of decomposition in Scala is not classes, but traits. 
They can serve to describe abstract interfaces like:

```scala
@@ -78,7 +78,7 @@ To compose the two services, we can simply create a new trait extending them:
trait ComposedService extends GreetingService, TranslationService
```
Abstract members in one trait (such as `translate` in `GreetingService`) are automatically matched with concrete members in another trait.
-This not only works with methods as in this example, but also with all of the other abstract members mentioned above (that is, types, value definitions, etc.).
+This not only works with methods as in this example, but also with all the other abstract members mentioned above (that is, types, value definitions, etc.).

## Classes
Traits are great to modularize components and describe interfaces (required and provided).
@@ -239,7 +239,7 @@ Specifically, we define a _singleton_ object `SensorReader` that extends `Subjec
In the implementation of `SensorReader`, we say that type `S` is now defined as type `Sensor`, and type `O` is defined to be equal to type `Display`.
Both `Sensor` and `Display` are defined as nested classes within `SensorReader`, implementing the traits `Subject` and `Observer`, correspondingly.

-Besides being an example of a service oriented design, this code also highlights many aspects of object-oriented programming:
+Besides being an example of a service-oriented design, this code also highlights many aspects of object-oriented programming:

- The class `Sensor` introduces its own private state (`currentValue`) and encapsulates modification of the state behind the method `changeValue`.
- The implementation of `changeValue` uses the method `publish` defined in the extended trait.
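The trait-composition hunk above can be sanity-checked with a minimal Scala 3 sketch. The `GreetingService` and `TranslationService` names come from the guide; the method bodies here are assumptions added purely for illustration:

```scala
// The abstract `translate` in GreetingService is matched by the
// concrete `translate` mixed in from TranslationService.
trait GreetingService:
  def translate(text: String): String          // abstract member
  def sayHello: String = translate("Hello")    // uses the abstract member

trait TranslationService:
  // Toy "translation": wrap the text (an assumption, not the guide's code).
  def translate(text: String): String = s"<<$text>>"

// Composing the two services requires no explicit wiring.
trait ComposedService extends GreetingService, TranslationService

object Composed extends ComposedService

@main def demo(): Unit =
  println(Composed.sayHello) // prints "<<Hello>>"
```

The same automatic matching applies to the other abstract members the hunk mentions (types, value definitions, etc.).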
diff --git a/_overviews/scala3-book/domain-modeling-tools.md b/_overviews/scala3-book/domain-modeling-tools.md index 0c947c8080..855592dbc6 100644 --- a/_overviews/scala3-book/domain-modeling-tools.md +++ b/_overviews/scala3-book/domain-modeling-tools.md @@ -34,7 +34,7 @@ class Movie(var name: String, var director: String, var year: Int) These examples show that Scala has a very lightweight way to declare classes. -All of the parameters of our example classes are defined as `var` fields, which means they are mutable: you can read them, and also modify them. +All the parameters of our example classes are defined as `var` fields, which means they are mutable: you can read them, and also modify them. If you want them to be immutable---read only---create them as `val` fields instead, or use a case class. Prior to Scala 3, you used the `new` keyword to create a new instance of a class: @@ -572,7 +572,7 @@ val cubs2016 = cubs1908.copy(lastWorldSeriesWin = 2016) As mentioned, case classes support functional programming (FP): -- In FP you try to avoid mutating data structures. +- In FP, you try to avoid mutating data structures. It thus makes sense that constructor fields default to `val`. Since instances of case classes can’t be changed, they can easily be shared without fearing mutation or race conditions. - Instead of mutating an instance, you can use the `copy` method as a template to create a new (potentially changed) instance. diff --git a/_overviews/scala3-book/first-look-at-types.md b/_overviews/scala3-book/first-look-at-types.md index 9ae69afd3a..26645ea76c 100644 --- a/_overviews/scala3-book/first-look-at-types.md +++ b/_overviews/scala3-book/first-look-at-types.md @@ -30,7 +30,7 @@ The [reference documentation][matchable] contains more information about `Matcha `Matchable` has two important subtypes: `AnyVal` and `AnyRef`. *`AnyVal`* represents value types. 
-There are a couple of predefined value types and they are non-nullable: `Double`, `Float`, `Long`, `Int`, `Short`, `Byte`, `Char`, `Unit`, and `Boolean`.
+There are a couple of predefined value types, and they are non-nullable: `Double`, `Float`, `Long`, `Int`, `Short`, `Byte`, `Char`, `Unit`, and `Boolean`.

`Unit` is a value type which carries no meaningful information. There is exactly one instance of `Unit` which we can refer to as: `()`.
@@ -190,7 +190,7 @@ println(s"x.abs = ${x.abs}") // prints "x.abs = 1"

The `s` that you place before the string is just one possible interpolator.
If you use an `f` instead of an `s`, you can use `printf`-style formatting syntax in the string.
-Furthermore, a string interpolator is a just special method and it is possible to define your own.
+Furthermore, a string interpolator is just a special method, and it is possible to define your own.
For instance, some database libraries define the very powerful `sql` interpolator.

diff --git a/_overviews/scaladoc/for-library-authors.md b/_overviews/scaladoc/for-library-authors.md
index 2c16394c22..7d58cc95a4 100644
--- a/_overviews/scaladoc/for-library-authors.md
+++ b/_overviews/scaladoc/for-library-authors.md
@@ -82,7 +82,7 @@ include:
### Usage tags
- `@see` reference other sources of information like external document links or related entities in the documentation.
-- `@note` add a note for pre or post conditions, or any other notable restrictions
+- `@note` add a note for pre- or post-conditions, or any other notable restrictions
or expectations.
- `@example` for providing example code or related example documentation.
- `@usecase` provide a simplified method definition for when the full method
@@ -97,7 +97,7 @@ They allow you to organize the Scaladoc page into distinct sections, with
each one shown separately, in the order that you choose. These tags are *not* enabled by default! You must pass the `-groups`
-flag to Scaladoc in order to turn them on. 
Typically the sbt for this +flag to Scaladoc in order to turn them on. Typically, the sbt for this will look something like: ``` scalacOptions in (Compile, doc) ++= Seq( @@ -130,7 +130,7 @@ the resulting documentation. ### Diagram tags - `@contentDiagram` - use with traits and classes to include a content hierarchy diagram showing included types. - The diagram content can be fine tuned with additional specifiers taken from `hideNodes`, `hideOutgoingImplicits`, + The diagram content can be fine-tuned with additional specifiers taken from `hideNodes`, `hideOutgoingImplicits`, `hideSubclasses`, `hideEdges`, `hideIncomingImplicits`, `hideSuperclasses` and `hideInheritedNode`. `hideDiagram` can be supplied to prevent a diagram from being created if it would be created by default. Packages and objects have content diagrams by default. @@ -168,7 +168,7 @@ If a comment is not provided for an entity at the current inheritance level, but is supplied for the overridden entity at a higher level in the inheritance hierarchy, the comment from the super-class will be used. -Likewise if `@param`, `@tparam`, `@return` and other entity tags are omitted +Likewise, if `@param`, `@tparam`, `@return` and other entity tags are omitted but available from a superclass, those comments will be used. ### Explicit @@ -180,7 +180,7 @@ For explicit comment inheritance, use the `@inheritdoc` tag. It is still possible to embed HTML tags in Scaladoc (like with Javadoc), but not necessary most of the time as markup may be used instead. -Some of the standard markup available: +Some types of markup available: `monospace` ''italic text'' diff --git a/_overviews/scaladoc/interface.md b/_overviews/scaladoc/interface.md index b18d3caf57..3f2908182e 100644 --- a/_overviews/scaladoc/interface.md +++ b/_overviews/scaladoc/interface.md @@ -26,7 +26,7 @@ unaware of some of the more powerful features of Scaladoc. - Known subclasses lists all subclasses for this entity within the current Scaladoc. 
- Type hierarchy shows a graphical view of this class related to its super
-  classes and traits, immediate sub-types, and important related entities. The
+  classes and traits, immediate subtypes, and important related entities. The
  graphics themselves are links to the various entities.
- The link in the Source section takes you to the online source for the class
  assuming it is available (and it certainly is for the core libraries and for

diff --git a/_overviews/tutorials/binary-compatibility-for-library-authors.md b/_overviews/tutorials/binary-compatibility-for-library-authors.md
index 92b993ca24..ad0d34f913 100644
--- a/_overviews/tutorials/binary-compatibility-for-library-authors.md
+++ b/_overviews/tutorials/binary-compatibility-for-library-authors.md
@@ -52,7 +52,7 @@ Similarly to the JVM, Scala.js and Scala Native have their respective equivalent
However, contrary to the JVM, Scala.js and Scala Native link their respective IR files at link time, so eagerly, instead of lazily at run-time.
Failure to correctly link the entire program results in linking errors reported while trying to invoke `fastOptJS`/`fullOptJS` or `nativeLink`.

-Besides that difference in the timing of linkage errors, the models are extremely similar. **Unless otherwise noted, the contents of this guide apply equally to the JVM, Scala.js and Scala Native.**
+Apart from that difference in the timing of linkage errors, the models are extremely similar. **Unless otherwise noted, the contents of this guide apply equally to the JVM, Scala.js and Scala Native.**

Before we look at how to avoid binary incompatibility errors, let us first establish some key terminologies we will be using for the rest of the guide.
@@ -67,7 +67,7 @@ Because of this, having multiple versions of the same library in the classpath i * Unexpected runtime behavior if the order of class files changes Therefore, build tools like sbt and Gradle will pick one version and **evict** the rest when resolving JARs to use for compilation and packaging. -By default they pick the latest version of each library, but it is possible to specify another version if required. +By default, they pick the latest version of each library, but it is possible to specify another version if required. ### Source Compatibility Two library versions are **Source Compatible** with each other if switching one for the other does not incur any compile errors or unintended behavioral changes (semantic errors). @@ -115,7 +115,7 @@ Our application `App` depends on library `A` and `B`. Both `A` and `B` depends o ![Initial dependency graph]({{ site.baseurl }}/resources/images/library-author-guide/before_update.png){: style="width: 50%; margin: auto; display: block;"} -Sometime later, we see `B v1.1.0` is available and upgrade its version in our build. Our code compiles and seems to work so we push it to production and go home for dinner. +Sometime later, we see `B v1.1.0` is available and upgrade its version in our build. Our code compiles and seems to work, so we push it to production and go home for dinner. Unfortunately at 2am, we get frantic calls from customers saying that our application is broken! Looking at the logs, you find lots of `NoSuchMethodError` are being thrown by some code in `A`! @@ -171,7 +171,7 @@ in library releases: You can find detailed explanations, runnable examples and tips to maintain binary compatibility in [Binary Compatibility Code Examples & Explanation](https://github.com/jatcwang/binary-compatibility-guide). -Again, we recommend using MiMa to double check that you have not broken binary compatibility after making changes. 
+Again, we recommend using MiMa to double-check that you have not broken binary compatibility after making changes. ## Versioning Scheme - Communicating compatibility breakages diff --git a/_overviews/tutorials/scala-on-android.md b/_overviews/tutorials/scala-on-android.md index a6299e6bb3..c3557908ea 100644 --- a/_overviews/tutorials/scala-on-android.md +++ b/_overviews/tutorials/scala-on-android.md @@ -47,7 +47,7 @@ gu install native-image You will need `adb`, “Android Debug Bridge”, to connect to your Android device and install the app on it. [Here you can find more on how to do it](https://www.fosslinux.com/25170/how-to-install-and-setup-adb-tools-on-linux.htm). -Make sure your `gcc` is at least version 6. [You can try following these steps](https://tuxamito.com/wiki/index.php/Installing_newer_GCC_versions_in_Ubuntu). On top of that, you will need some specific C libraries (like GTK) to build the native image and it varies from one computer to another, so I can’t tell you exactly what to do. But it shouldn’t be a big problem. Just follow error messages saying that you lack something and google how to install them. In my case this was the list: +Make sure your `gcc` is at least version 6. [You can try following these steps](https://tuxamito.com/wiki/index.php/Installing_newer_GCC_versions_in_Ubuntu). On top of that, you will need some specific C libraries (like GTK) to build the native image, and it varies from one computer to another, so I can’t tell you exactly what to do. But it shouldn’t be a big problem. Just follow error messages saying that you lack something and google how to install them. In my case this was the list: ``` libasound2-dev (for pkgConfig alsa) @@ -75,16 +75,16 @@ In the `pom.xml` of HelloScala you will find a list of plugins and dependencies - We will use Java 16 and Scala 2.13. 
- [A tiny Scala library](https://mvnrepository.com/artifact/org.scalameta/svm-subs) which resolves [this problem](https://github.com/scala/bug/issues/11634) in the interaction between Scala 2.13 and GraalVM Native Image. - For the GUI we will use JavaFX 16. -- We will use two Gluon libraries: [Glisten](https://docs.gluonhq.com/charm/javadoc/6.0.6/com.gluonhq.charm.glisten/module-summary.html) and [Attach](https://gluonhq.com/products/mobile/attach/). Glisten enriches JavaFX with additional functionality specifically designed for mobile applications. Attach is an abstraction layer over the underlying platform. For us it means we should be able to use it to access everything on Android from the local storage to permissions to push notifications. +- We will use two Gluon libraries: [Glisten](https://docs.gluonhq.com/charm/javadoc/6.0.6/com.gluonhq.charm.glisten/module-summary.html) and [Attach](https://gluonhq.com/products/mobile/attach/). Glisten enriches JavaFX with additional functionality specifically designed for mobile applications. Attach is an abstraction layer over the underlying platform. For us, it means we should be able to use it to access everything on Android from the local storage to permissions to push notifications. - [scala-maven-plugin](https://github.com/davidB/scala-maven-plugin) lets us use Scala in Maven builds *(well, d’oh)*. Thank you, David! -- [gluonfx-maven-plugin](https://github.com/gluonhq/gluonfx-maven-plugin) lets us compile Gluon dependencies and JavaFX code into a native image. In its configuration you will find the `attachList` with Gluon Attach modules we need: `device`, `display`, `storage`, `util`, `statusbar`, and `lifecycle`. From those we will use directly only `display` (to set the dimensions of the app's windows in case we run the app on a desktop and not in the fullscreen mode on a mobile) and `util` (to check if we run the app on a desktop or a mobile), but the others are needed by these two and by Gluon Glisten. 
+- [gluonfx-maven-plugin](https://github.com/gluonhq/gluonfx-maven-plugin) lets us compile Gluon dependencies and JavaFX code into a native image. In its configuration you will find the `attachList` with Gluon Attach modules we need: `device`, `display`, `storage`, `util`, `statusbar`, and `lifecycle`. From those we will use directly only `display` (to set the dimensions of the app's windows in case we run the app on a desktop and not in the full-screen mode on a mobile) and `util` (to check if we run the app on a desktop or a mobile), but the others are needed by these two and by Gluon Glisten. - [javafx-maven-plugin](https://github.com/openjfx/javafx-maven-plugin) which is a requirement for gluonfx-maven-plugin. ### The code -[HelloScala](https://github.com/makingthematrix/scalaonandroid/tree/main/helloscala) is just a simple example app — the actual Scala code only sets up a few widgets and displays them. The [`Main`](https://github.com/makingthematrix/scalaonandroid/blob/main/hellogluon/src/main/scala/hellogluon/Main.scala) class extends `MobileApplication` from the Glisten library and then construct the main view programatically, in two methods: `init()` for creating the widgets, and `postInit(Scene)` for decorating them. Since we want to test the app on our laptop before we install it on a mobile, we use `postInit` also to check on which platform the app is being run, and if it's a desktop, we set the dimensions on the app's window. In the case of a mobile it's not necessary — our app will take the whole available space on the screen. +[HelloScala](https://github.com/makingthematrix/scalaonandroid/tree/main/helloscala) is just a simple example app — the actual Scala code only sets up a few widgets and displays them. 
The [`Main`](https://github.com/makingthematrix/scalaonandroid/blob/main/hellogluon/src/main/scala/hellogluon/Main.scala) class extends `MobileApplication` from the Glisten library and then constructs the main view programmatically, in two methods: `init()` for creating the widgets, and `postInit(Scene)` for decorating them. Since we want to test the app on our laptop before we install it on a mobile, we use `postInit` also to check on which platform the app is being run, and if it's a desktop, we set the dimensions on the app's window. In the case of a mobile it's not necessary — our app will take the whole available space on the screen.

-Another way to set up and display widgets in JavaFX is to use a WYSIWYG editor called [Scene Builder](https://gluonhq.com/products/scene-builder/) which generates FXML files, a version of XML, that you can then load into your app. You can see how it is done in another example: [HelloFXML](https://github.com/makingthematrix/scalaonandroid/tree/main/HelloFXML). For more complex applications, you will probably mix those two approaches: FXML for more-or-less static views and programatically set up widgets in places where the UI within one view changes in reaction to events (think, for example, of a scrollable list of incoming messages).
+Another way to set up and display widgets in JavaFX is to use a WYSIWYG editor called [Scene Builder](https://gluonhq.com/products/scene-builder/) which generates FXML files, a version of XML, that you can then load into your app. You can see how it is done in another example: [HelloFXML](https://github.com/makingthematrix/scalaonandroid/tree/main/HelloFXML). For more complex applications, you will probably mix those two approaches: FXML for more-or-less static views and programmatically set up widgets in places where the UI within one view changes in reaction to events (think, for example, of a scrollable list of incoming messages).
### How to run the app

@@ -106,7 +106,7 @@ After all, we work on a cross-platform solution here. Unless you want to test fe
mvn -Pandroid gluonfx:build gluonfx:package
```

-Successful execution of this command will create an APK file in the` target/client/aarch64-android/gvm` directory. Connect your Android phone to the computer with an USB cable, give the computer permission to send files to the phone, and type `adb devices` to check if your phone is recognized. It should display something like this in the console:
+Successful execution of this command will create an APK file in the `target/client/aarch64-android/gvm` directory. Connect your Android phone to the computer with a USB cable, give the computer permission to send files to the phone, and type `adb devices` to check if your phone is recognized. It should display something like this in the console:

```
> adb devices
@@ -118,7 +118,7 @@ Now you should be able to install the app on the connected device with `adb inst
Installation might not work for a number of reasons, one of the most popular being that your Android simply does not allow installing apps this way. Go to Settings, find “Developers options”, and there enable “USB debugging” and “Install via USB”.

-If everything works and you see the app’s screen on your device, type `adb logcat | grep GraalCompiled` to see the log output. Now you can click the button with the magnifying glass icon on the app’s screen and you should see `"log something from Scala"` printed to the console. Of course, before you write a more complex app, please look into plugins in the IDE of your choice that can display logs from `adb logcat` in a better way. For example
+If everything works and you see the app’s screen on your device, type `adb logcat | grep GraalCompiled` to see the log output. Now you can click the button with the magnifying glass icon on the app’s screen, and you should see `"log something from Scala"` printed to the console. 
Of course, before you write a more complex app, please look into plugins in the IDE of your choice that can display logs from `adb logcat` in a better way. For example:

* [Logcat in Android Studio](https://developer.android.com/studio/debug/am-logcat)
* [Log Viewer for Android Studio and IntelliJ](https://plugins.jetbrains.com/plugin/10015-log-viewer)

@@ -139,7 +139,7 @@ If you managed to build one of the example apps and want to code something more
- Look through [Gluon’s documentation of Glisten and Attach](https://docs.gluonhq.com/) to learn how to make your app look better on a mobile device, and how to get access to your device’s features.
- Download an example from [Gluon’s list of samples](https://docs.gluonhq.com/) and rewrite it to Scala. And when you do, let me know!
- Look into [ScalaFX](http://www.scalafx.org/) — a more declarative, Scala-idiomatic wrapper over JavaFX.
-- Download some of the other examples from [the “Scala on Android” repository on GitHub](https://github.com/makingthematrix/scalaonandroid). Contact me, if you write an example app of your own and want me to include it.
+- Download some other examples from [the “Scala on Android” repository on GitHub](https://github.com/makingthematrix/scalaonandroid). Contact me if you write an example app of your own and want me to include it.
- Join us on the official Scala discord — we have a [#scala-android channel](https://discord.gg/UuDawpq7) there.
- There is also an [#android channel](https://discord.gg/XHMt6Yq4) on the “Learning Scala” discord.
- Finally, if you have any questions, [you can always find me on Twitter](https://twitter.com/makingthematrix).
diff --git a/_overviews/tutorials/scala-with-maven.md b/_overviews/tutorials/scala-with-maven.md
index a8c6572301..339d153e0f 100644
--- a/_overviews/tutorials/scala-with-maven.md
+++ b/_overviews/tutorials/scala-with-maven.md
@@ -108,7 +108,7 @@ Example structure:

Again, you can read more about the Scala Maven Plugin at its [website][22].
### Creating a Jar

-By default the jar created by the Scala Maven Plugin doesn't include a `Main-Class` attribute in the manifest. I had to add the [Maven Assembly Plugin][19] to my `pom.xml` in order to specify custom attributes in the manifest. You can check the latest version of this plugin at the [project summary][20] or at [The Central Repository][21]
+By default, the jar created by the Scala Maven Plugin doesn't include a `Main-Class` attribute in the manifest. I had to add the [Maven Assembly Plugin][19] to my `pom.xml` in order to specify custom attributes in the manifest. You can check the latest version of this plugin at the [project summary][20] or at [The Central Repository][21]

X.X.X

@@ -212,7 +212,7 @@ Unfortunately, the integration isn't perfect. Firstly, open up the generated `.c

Change the `*.java` to `*.scala` (or duplicate the lines and change them to `*.scala` if you also have Java sources).

-Secondly, open the `.project` eclipse file (again, in the same folder). Change `` and `` to look like this. Now Eclipse knows to use the Scala editor and it won't think that everything is a syntax error.
+Secondly, open the `.project` eclipse file (again, in the same folder). Change `` and `` to look like this. Now Eclipse knows to use the Scala editor, and it won't think that everything is a syntax error.

diff --git a/learn.md b/learn.md
index 590ae7c0d4..6f6659eec6 100644
--- a/learn.md
+++ b/learn.md
@@ -24,7 +24,7 @@ More details on [this page]({% link online-courses.md %}).

## Dr. Mark C Lewis's lectures from Trinity University

-[Dr. Mark C Lewis](https://www.cs.trinity.edu/~mlewis/) from Trinity University, San Antonio, TX, teaches programming courses using the Scala language. Course videos are available in YouTube for free. Some of the courses below.
+[Dr. Mark C Lewis](https://www.cs.trinity.edu/~mlewis/) from Trinity University, San Antonio, TX, teaches programming courses using the Scala language. Course videos are available on YouTube for free. 
Some of the courses are listed below.

* [Introduction to Programming and Problem Solving Using Scala](https://www.youtube.com/playlist?list=PLLMXbkbDbVt9MIJ9DV4ps-_trOzWtphYO)
* [Object-Orientation, Abstraction, and Data Structures Using Scala](https://www.youtube.com/playlist?list=PLLMXbkbDbVt8JLumqKj-3BlHmEXPIfR42)

diff --git a/news/_posts/2012-12-12-functional-programming-principles-in-scala-impressions-and-statistics.md b/news/_posts/2012-12-12-functional-programming-principles-in-scala-impressions-and-statistics.md
index 38a941ac28..c52618024d 100644
--- a/news/_posts/2012-12-12-functional-programming-principles-in-scala-impressions-and-statistics.md
+++ b/news/_posts/2012-12-12-functional-programming-principles-in-scala-impressions-and-statistics.md
@@ -33,7 +33,7 @@ Those of you reading this who were enrolled in the course might recall that, sev

For example, as mentioned above, a success rate of 20% is quite high for an online course. One might suspect that it was because the course was very easy, but in our previous experience that's not the case. In fact, the online course is a direct adaptation of a 2nd year course that was given at EPFL for a number of years and that has a reputation of being rather tough. If anything, the material in the online course was a bit more compressed than in the previous on-campus class.

-In particular, 57% of all respondents to the survey rated the overall course as being 3, "Just Right", on a scale from 1 to 5, with 1 being "Too Easy" and 5 being "Too Difficult. With regard to programming assignments specifically, 40% rated the assignments as being 3, "Just Right", while 46% rated assignments as being 4 "Challenging".
+In particular, 57% of all respondents to the survey rated the overall course as being 3, "Just Right", on a scale from 1 to 5, with 1 being "Too Easy" and 5 being "Too Difficult". 
With regard to programming assignments specifically, 40% rated the assignments as being 3, "Just Right", while 46% rated assignments as being 4 "Challenging". Another point which might be particularly interesting is the fact that the difficulty rating appears to be independent of whether people have a background in Computer Science/Software Engineering or not. One might guess that this could mean that learning Scala is not much more difficult without a formal educational background in Computer Science. @@ -43,7 +43,7 @@ While a majority of the students in the course have degrees in Computer Science/
Participants' Fields of Study
 
-However, we were still interested to see how the formal education of participants influenced their assessment of the perceived difficulty. It turns out that, of those who have or have pursued university degrees— Bachelors or Masters degrees, there was almost no difference in perceived difficulty. The only marked differences appeared to the far left and the far right of the spectrum.
+However, we were still interested to see how the formal education of participants influenced their assessment of the perceived difficulty. It turns out that, of those who have or have pursued university degrees (Bachelor's or Master's degrees), there was almost no difference in perceived difficulty. The only marked differences appeared at the far left and the far right of the spectrum.
Perceived Difficulty Relative to Level of Education
 
Scale: 1 - Too Easy, 2 - Easy, 3 - Just Right, 4 - Challenging, 5 - Too Difficult

 

@@ -93,7 +93,7 @@ Here's that graph again, relating that population of students who enrolled in th
## Get the data and explore it with Scala!
-For those of you who want to have a little bit of fun with the numbers, we've made all of the data publicly available, and we've made a small Scala project out of it. In particular, we put the code that we used to produce the above plots on [github (progfun-stats)](https://www.github.com/heathermiller/progfun-stats).
+For those of you who want to have a little bit of fun with the numbers, we've made all the data publicly available, and we've made a small Scala project out of it. In particular, we put the code that we used to produce the above plots on [github (progfun-stats)](https://www.github.com/heathermiller/progfun-stats).
For those of you who have taken the course and are itching for some fun additional exercises in functional programming, one of our suggestions is to tinker with and extend this project! You'll find the code examples for generating most of these plots available in this post, in the above repository.
diff --git a/scala3/guides/tasty-overview.md b/scala3/guides/tasty-overview.md
index 4346271a8b..bccfa992f5 100644
--- a/scala3/guides/tasty-overview.md
+++ b/scala3/guides/tasty-overview.md
@@ -39,7 +39,7 @@ This leads to the natural question, “What is tasty?”
TASTy is an acronym that comes from the term, _Typed Abstract Syntax Trees_. It’s a high-level interchange format for Scala 3, and in this document we’ll refer to it as _Tasty_.
-A first important thing to know is that Tasty files are generated by the `scalac` compiler, and contain _all_ of the information about your source code, including the syntactic structure of your program, and _complete_ information about types, positions, and even documentation. Tasty files contain much more information than _.class_ files, which are generated to run on the JVM. (More on this shortly.)
+A first important thing to know is that Tasty files are generated by the `scalac` compiler, and contain _all_ the information about your source code, including the syntactic structure of your program, and _complete_ information about types, positions, and even documentation. Tasty files contain much more information than _.class_ files, which are generated to run on the JVM. (More on this shortly.)
In Scala 3, the compilation process looks like this:
@@ -80,7 +80,7 @@ that code is compiled to a _.class_ file that needs to be compatible with the JV
```
public scala.collection.immutable.List xs();
```
-That `javap` command output shows a Java representation of what’s contained in the class file. Notice in this output that `xs` _is not_ defined as a `List[Int]` any more; it’s essentially represented as a `List[java.lang.Object]`. For your Scala code to work with the JVM, the `Int` type has been erased.
+That `javap` command output shows a Java representation of what’s contained in the class file. Notice in this output that `xs` _is not_ defined as a `List[Int]` anymore; it’s essentially represented as a `List[java.lang.Object]`. For your Scala code to work with the JVM, the `Int` type has been erased.
Later, when you access an element of your `List[Int]` in your Scala code, like this:
@@ -115,7 +115,7 @@ A second key point is to understand that there are differences between the infor
With Scala 3 and Tasty, here’s another important note about compile time:
-- When you write Scala 3 code that uses other Scala 3 libraries, `scalac` doesn’t have to read their _.class_ files any more; it can read their _.tasty_ files, which, as mentioned, are an _exact_ representation of your code. This is important to enable separate compilation and compatiblity between Scala 2.13 and Scala 3.
+- When you write Scala 3 code that uses other Scala 3 libraries, `scalac` doesn’t have to read their _.class_ files anymore; it can read their _.tasty_ files, which, as mentioned, are an _exact_ representation of your code. This is important to enable separate compilation and compatibility between Scala 2.13 and Scala 3.
@@ -138,7 +138,7 @@ In summary, Tasty is a high-level interchange format for Scala 3, and _.tasty_ f
For more details, see these resources:
-- In this [this video](https://www.youtube.com/watch?v=YQmVrUdx8TU), Jamie Thompson of the Scala Center provides a thorough discussion of how Tasty works, and its benefits
+- In [this video](https://www.youtube.com/watch?v=YQmVrUdx8TU), Jamie Thompson of the Scala Center provides a thorough discussion of how Tasty works, and its benefits
- [Binary Compatibility for library authors][binary] discusses binary compatibility, source compatibility, and the JVM execution model
- [Forward Compatibility for the Scala 3 Transition](https://www.scala-lang.org/blog/2020/11/19/scala-3-forward-compat.html) demonstrates techniques for using Scala 2.13 and Scala 3 in the same project
diff --git a/scala3/new-in-scala3.md b/scala3/new-in-scala3.md
index fa2b6aa609..3198b8cd3d 100644
--- a/scala3/new-in-scala3.md
+++ b/scala3/new-in-scala3.md
@@ -10,7 +10,7 @@ changes. If you want to dig deeper, there are a few references at your disposal:
- The [Scala 3 Book]({% link _overviews/scala3-book/introduction.md %}) targets developers new to the Scala language.
- The [Syntax Summary][syntax-summary] provides you with a formal description of the new syntax.
- The [Language Reference][reference] gives a detailed description of the changes from Scala 2 to Scala 3.
-- The [Migration Guide][migration] provides you with all of the information necessary to move from Scala 2 to Scala 3.
+- The [Migration Guide][migration] provides you with all the information necessary to move from Scala 2 to Scala 3.
- The [Scala 3 Contributing Guide][contribution] dives deeper into the compiler, including a guide to fix issues.
## What's new in Scala 3
@@ -60,7 +60,7 @@ Besides greatly improved type inference, the Scala 3 type system also offers man
- **Opaque Types**. Hide implementation details behind [opaque type aliases][types-opaque] without paying for it in performance! Opaque types supersede value classes and allow you to set up an abstraction barrier without causing additional boxing overhead.
-- **Intersection and union types**. Basing the type system on new foundations led to the introduction of new type system features: instances of [intersection types][types-intersection], like `A & B`, are instances of _both_ `A` and of `B`. Instances of [union types][types-union], like `A | B`, are instances of _either_ `A` or `B`. Both constructs allow programmers to flexibly express type constraints outside of the inheritance hierarchy.
+- **Intersection and union types**. Basing the type system on new foundations led to the introduction of new type system features: instances of [intersection types][types-intersection], like `A & B`, are instances of _both_ `A` and of `B`. Instances of [union types][types-union], like `A | B`, are instances of _either_ `A` or `B`. Both constructs allow programmers to flexibly express type constraints outside the inheritance hierarchy.
- **Dependent function types**. Scala 2 already allowed return types to depend on (value) arguments. In Scala 3 it is now possible to abstract over this pattern and express [dependent function types][types-dependent]. In the type `type F = (e: Entry) => e.Key` the result type _depends_ on the argument!
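The intersection and union types described in the hunk above are easy to see in action. Here is a minimal Scala 3 sketch; the trait and case-class names are illustrative only, not taken from the docs being patched:

```scala
trait Resettable:
  def reset(): Unit

trait Growable[T]:
  def add(item: T): Unit

// Intersection type: the argument must be *both* Resettable and Growable[String],
// regardless of where (or whether) the two traits meet in an inheritance hierarchy.
def clearAndAdd(x: Resettable & Growable[String]): Unit =
  x.reset()
  x.add("first")

// Union type: the argument is *either* a UserName or a Password.
case class UserName(name: String)
case class Password(hash: Int)

def label(id: UserName | Password): String = id match
  case UserName(n) => s"user:$n"
  case Password(h) => s"pass:$h"
```

`&` demands that both members be statically present, while `|` forces callers to handle both alternatives (the `match` above is checked for exhaustivity), which is what "outside the inheritance hierarchy" buys you in practice.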
diff --git a/scala3/scaladoc.md b/scala3/scaladoc.md
index 10a4366a63..6bae13c405 100644
--- a/scala3/scaladoc.md
+++ b/scala3/scaladoc.md
@@ -47,7 +47,7 @@ The following features are currently (May 2021) not stable to be released with s
### Snippet compiling
-One of the experimental features of Scaladoc is a snippets compiler. This tool will allow you to compile snippets that you attach to your docstring
+One of the experimental features of Scaladoc is a compiler for snippets. This tool will allow you to compile snippets that you attach to your docstring
to check that they actually behave as intended, e.g., to properly compile. This feature is very similar to the `tut` or `mdoc` tools, but will be shipped with Scaladoc out of the box for easy setup and integration into your project. Making snippets interactive---e.g., letting users edit and compile them in the browser---is under consideration, though this feature is not in scope at this time.
@@ -64,7 +64,7 @@ Searching for functions by their symbolic names can be time-consuming.
That is why the new scaladoc allows you to search for methods and fields by their types.
-So, for a declatation:
+So, for a declaration:
```
extension [T](arr: IArray[T]) def span(p: T => Boolean): (IArray[T], IArray[T]) = ...
```
diff --git a/scala3/talks.md b/scala3/talks.md
index c10559b754..3ba01d3588 100644
--- a/scala3/talks.md
+++ b/scala3/talks.md
@@ -49,7 +49,7 @@ Deep Dive with Scala 3
- (Typelevel Summit Oslo, May 2016) [Dotty and types: the story so far](https://www.youtube.com/watch?v=YIQjfCKDR5A) by Guillaume Martres [\[slides\]](http://guillaume.martres.me/talks/typelevel-summit-oslo/).
-  Guillaume focused on some of the practical improvements to the type system that Dotty makes, like the new type parameter
+  Guillaume focused on some practical improvements to the type system that Dotty makes, like the new type parameter
  inference algorithm that is able to reason about the type safety of more situations than scalac.
- (flatMap(Oslo) 2016) [AutoSpecialization in Dotty](https://vimeo.com/165928176) by [Dmitry Petrashko](http://twitter.com/darkdimius) [\[slides\]](https://d-d.me/talks/flatmap2016/#/).
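The erasure that the tasty-overview hunks above describe (a `List[Int]` appearing as a list of `java.lang.Object` in the _.class_ file) can also be observed from Scala itself, without `javap`. A minimal sketch, assuming a plain Scala 3 script:

```scala
// The static type says List[Int], but after erasure the JVM sees only a list
// of objects: each Int element is stored as a boxed java.lang.Integer.
val xs: List[Int] = List(1, 2, 3)

// Viewed through the erased signature, an element is a boxed Integer:
println(xs.head.asInstanceOf[AnyRef].getClass.getName)  // prints java.lang.Integer

// The list itself carries no element-type information at runtime:
println(xs.getClass.getName)  // prints scala.collection.immutable.$colon$colon
```

The `asInstanceOf[AnyRef]` upcast is needed because Scala special-cases `getClass` on an unboxed `Int` to return the primitive class; forcing the reference view exposes the boxed representation that the `javap` output in the patched document shows.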