diff --git a/contribute.md b/contribute.md index 8f7a2d2b82..e075804b81 100644 --- a/contribute.md +++ b/contribute.md @@ -88,7 +88,7 @@ The rest of the document should, of course, be written in [Markdown](http://en.w At the moment, `RELEVANT-CATEGORY` corresponds to only a single category, "core," because we are currently focusing on building up documentation of core libraries. However, expect more categories here in the future. -If your document consists of **multiple** pages, like the [Collections]({{ site.baseurl }}/overviews/collections/index.html) overview, an ordering must be specified, by numbering documents in their logical order with `num`, and a name must be assigned to the collection of pages using `partof`. For example, the following header might be used for a document in the collections overview: +If your document consists of **multiple** pages, like the [Collections]({{ site.baseurl }}/overviews/collections/introduction.html) overview, an ordering must be specified, by numbering documents in their logical order with `num`, and a name must be assigned to the collection of pages using `partof`. 
For example, the following header might be used for a document in the collections overview: --- layout: overview-large diff --git a/es/overviews/parallel-collections/overview.md b/es/overviews/parallel-collections/overview.md index 90aea0afb0..8a66e9c976 100644 --- a/es/overviews/parallel-collections/overview.md +++ b/es/overviews/parallel-collections/overview.md @@ -59,7 +59,7 @@ Usando un `map` paralelizado para transformar una colección de elementos tipo ` scala> val apellidos = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par apellidos: scala.collection.parallel.immutable.ParSeq[String] = ParVector(Smith, Jones, Frankenstein, Bach, Jackson, Rodin) - + scala> apellidos.map(_.toUpperCase) res0: scala.collection.parallel.immutable.ParSeq[String] = ParVector(SMITH, JONES, FRANKENSTEIN, BACH, JACKSON, RODIN) @@ -69,7 +69,7 @@ Sumatoria mediante `fold` en un `ParArray`: scala> val parArray = (1 to 1000000).toArray.par parArray: scala.collection.parallel.mutable.ParArray[Int] = ParArray(1, 2, 3, ... - + scala> parArray.fold(0)(_ + _) res0: Int = 1784293664 @@ -80,7 +80,7 @@ Usando un filtrado mediante `filter` paralelizado para seleccionar los apellidos scala> val apellidos = List("Smith","Jones","Frankenstein","Bach","Jackson","Rodin").par apellidos: scala.collection.parallel.immutable.ParSeq[String] = ParVector(Smith, Jones, Frankenstein, Bach, Jackson, Rodin) - + scala> apellidos.filter(_.head >= 'J') res0: scala.collection.parallel.immutable.ParSeq[String] = ParVector(Smith, Jones, Jackson, Rodin) @@ -104,11 +104,11 @@ Lo que es importante desarrollar aquí son estos métodos para la conversión de _Nota:_ Las colecciones que son inherentemente secuenciales (en el sentido que sus elementos deben ser accedidos uno a uno), como las listas, colas y streams (a veces llamados flujos), son convertidos a sus contrapartes paralelizadas al copiar los todos sus elementos. 
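The Spanish overview patched above walks through `map`, `fold`, and `filter` on parallelized collections. For reference alongside the REPL transcripts, a compact, self-contained sketch of the same three operations (note: on Scala 2.13+ the parallel collections live in the separate `scala-parallel-collections` module and require `import scala.collection.parallel.CollectionConverters._`; on 2.10–2.12 `.par` is available out of the box, as in the transcripts):

```scala
// Standalone sketch of the REPL session above (Scala 2.12-era API).
object ParOpsDemo extends App {
  val apellidos = List("Smith", "Jones", "Frankenstein", "Bach", "Jackson", "Rodin").par

  // map: the function runs on chunks of the collection in parallel;
  // element order of the result is preserved
  println(apellidos.map(_.toUpperCase)) // ParVector(SMITH, JONES, ...)

  // fold: safe to parallelize because integer + is associative
  println((1 to 1000).toArray.par.fold(0)(_ + _)) // 500500

  // filter: keeps surnames whose first letter is 'J' or later
  println(apellidos.filter(_.head >= 'J'))
}
```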
Un ejemplo es la clase `List` --es convertida a una secuencia paralelizada inmutable común, que es un `ParVector`. Por supuesto, el tener que copiar los elementos para estas colecciones involucra una carga más de trabajo que no se sufre con otros tipos como: `Array`, `Vector`, `HashMap`, etc. For more information on conversions on parallel collections, see the -[conversions]({{ site.baseurl }}/overviews/parallel-collections/converesions.html) -and [concrete parallel collection classes]({{ site.baseurl }}/overviews/parallel-collections/concrete-parallel-collections.html) +[conversions]({{ site.baseurl }}/overviews/parallel-collections/conversions.html) +and [concrete parallel collection classes]({{ site.baseurl }}/overviews/parallel-collections/concrete-parallel-collections.html) sections of this guide. -Para más información sobre la conversión de colecciones paralelizadas, véase los artículos sobre [conversiones]({{ site.baseurl }}/es/overviews/parallel-collections/converesions.html) y [clases concretas de colecciones paralelizadas]({{ site.baseurl }}/es/overviews/parallel-collections/concrete-parallel-collections.html) de esta misma serie. +Para más información sobre la conversión de colecciones paralelizadas, véase los artículos sobre [conversiones]({{ site.baseurl }}/es/overviews/parallel-collections/conversions.html) y [clases concretas de colecciones paralelizadas]({{ site.baseurl }}/es/overviews/parallel-collections/concrete-parallel-collections.html) de esta misma serie. 
## Entendiendo las colecciones paralelizadas @@ -138,19 +138,19 @@ Veamos un ejemplo: scala> val list = (1 to 1000).toList.par list: scala.collection.parallel.immutable.ParSeq[Int] = ParVector(1, 2, 3,… - + scala> list.foreach(sum += _); sum res01: Int = 467766 - + scala> var sum = 0 sum: Int = 0 - + scala> list.foreach(sum += _); sum res02: Int = 457073 - + scala> var sum = 0 sum: Int = 0 - + scala> list.foreach(sum += _); sum res03: Int = 468520 @@ -171,13 +171,13 @@ Dado este funcionamiento "fuera de orden", también se debe ser cuidadoso de rea scala> val list = (1 to 1000).toList.par list: scala.collection.parallel.immutable.ParSeq[Int] = ParVector(1, 2, 3,… - + scala> list.reduce(_-_) res01: Int = -228888 - + scala> list.reduce(_-_) res02: Int = -61000 - + scala> list.reduce(_-_) res03: Int = -331818 @@ -186,8 +186,8 @@ En el ejemplo anterior invocamos reduce sobre un `ParVector[Int]` pasándole `_- _Nota:_ Generalmente se piensa que, al igual que las operaciones no asociativas, las operaciones no conmutativas pasadas a un función de orden superior también generan resultados extraños (no deterministas). En realidad esto no es así, un simple ejemplo es la concatenación de Strings (cadenas de caracteres). 
-- una operación asociativa, pero no conmutativa: scala> val strings = List("abc","def","ghi","jk","lmnop","qrs","tuv","wx","yz").par - strings: scala.collection.parallel.immutable.ParSeq[java.lang.String] = ParVector(abc, def, ghi, jk, lmnop, qrs, tuv, wx, yz) - + strings: scala.collection.parallel.immutable.ParSeq[java.lang.String] = ParVector(abc, def, ghi, jk, lmnop, qrs, tuv, wx, yz) + scala> val alfabeto = strings.reduce(_++_) alfabeto: java.lang.String = abcdefghijklmnopqrstuvwxyz diff --git a/es/tutorials/tour/default-parameter-values.md b/es/tutorials/tour/default-parameter-values.md index 0d0a281a21..96df8a1d5c 100644 --- a/es/tutorials/tour/default-parameter-values.md +++ b/es/tutorials/tour/default-parameter-values.md @@ -15,8 +15,8 @@ En Java, uno tiende a ver muchos métodos sobrecargados que solamente sirven par public class HashMap { public HashMap(Map m); - /** Create a new HashMap with default capacity (16) - * and loadFactor (0.75) + /** Create a new HashMap with default capacity (16) + * and loadFactor (0.75) */ public HashMap(); /** Create a new HashMap with default loadFactor (0.75) */ @@ -33,8 +33,8 @@ Más problemático es que los valores usados para ser por defecto están tanto e public static final float DEFAULT_LOAD_FACTOR = 0.75; public HashMap(Map m); - /** Create a new HashMap with default capacity (16) - * and loadFactor (0.75) + /** Create a new HashMap with default capacity (16) + * and loadFactor (0.75) */ public HashMap(); /** Create a new HashMap with default loadFactor (0.75) */ @@ -62,4 +62,4 @@ Scala cuenta con soporte directo para esto: // mediante parametros nombrados val m4 = new HashMap[String,Int](loadFactor = 0.8) -Nótese cómo podemos sacar ventaja de cualquier valor por defecto al utilizar [parámetros nombrados]({{ site.baseurl }}/tutorials/tour/named_parameters.html). 
+Nótese cómo podemos sacar ventaja de cualquier valor por defecto al utilizar [parámetros nombrados]({{ site.baseurl }}/tutorials/tour/named-parameters.html). diff --git a/es/tutorials/tour/tour-of-scala.md b/es/tutorials/tour/tour-of-scala.md index fcfeb51abc..be726a8799 100644 --- a/es/tutorials/tour/tour-of-scala.md +++ b/es/tutorials/tour/tour-of-scala.md @@ -23,7 +23,7 @@ Además, la noción de reconocimiento de patrones de Scala se puede extender nat Scala cuenta con un expresivo sistema de tipado que fuerza estáticamente las abstracciones a ser usadas en una manera coherente y segura. En particular, el sistema de tipado soporta: * [Clases genéricas](generic-classes.html) * [anotaciones variables](variances.html), -* límites de tipado [superiores](upper-type-bounds.html) e [inferiores](lower-type-bouunds.html), +* límites de tipado [superiores](upper-type-bounds.html) e [inferiores](lower-type-bounds.html), * [clases internas](inner-classes.html) y [tipos abstractos](abstract-types.html) como miembros de objetos, * [tipos compuestos](compound-types.html) * [auto-referencias explicitamente tipadas](explicitly-typed-self-references.html) diff --git a/ja/overviews/index.md b/ja/overviews/index.md index 9460bc6805..a7a33f1c9a 100644 --- a/ja/overviews/index.md +++ b/ja/overviews/index.md @@ -28,7 +28,7 @@ title: ガイドと概要 * [Scala 2.7 からの移行](/ja/overviews/collections/migrating-from-scala-27.html) * [文字列の補間](/ja/overviews/core/string-interpolation.html) New in 2.10 * [値クラスと汎用トレイト](/ja/overviews/core/value-classes.html) New in 2.10 - +

並列および並行プログラミング

@@ -39,7 +39,7 @@ title: ガイドと概要 * [並列コレクションへの変換](/ja/overviews/parallel-collections/conversions.html) * [並行トライ](/ja/overviews/parallel-collections/ctries.html) * [並列コレクションライブラリのアーキテクチャ](/ja/overviews/parallel-collections/architecture.html) - * [カスタム並列コレクションの作成](/ja/overviews/parallel-collections/custom-parallel-collections.*tml) + * [カスタム並列コレクションの作成](/ja/overviews/parallel-collections/custom-parallel-collections.html) * [並列コレクションの設定](/ja/overviews/parallel-collections/configuration.html) * [性能の測定](/ja/overviews/parallel-collections/performance.html) diff --git a/overviews/collections/migrating-from-scala-27.md b/overviews/collections/migrating-from-scala-27.md index 5d349eb8b4..fa0d66dd16 100644 --- a/overviews/collections/migrating-from-scala-27.md +++ b/overviews/collections/migrating-from-scala-27.md @@ -38,7 +38,7 @@ Generally, the old functionality of Scala 2.7 collections has been left in place There are two parts of the old libraries which have been replaced wholesale, and for which deprecation warnings were not feasible. -1. The previous `scala.collection.jcl` package is gone. This package tried to mimick some of the Java collection library design in Scala, but in doing so broke many symmetries. Most people who wanted Java collections bypassed `jcl` and used `java.util` directly. Scala 2.8 offers automatic conversion mechanisms between both collection libraries in the [JavaConversions]({{ site.baseurl }}/overviews/collections/conversions-between-java-and-scala-collections.md) object which replaces the `jcl` package. +1. The previous `scala.collection.jcl` package is gone. This package tried to mimick some of the Java collection library design in Scala, but in doing so broke many symmetries. Most people who wanted Java collections bypassed `jcl` and used `java.util` directly. 
Scala 2.8 offers automatic conversion mechanisms between both collection libraries in the [JavaConversions]({{ site.baseurl }}/overviews/collections/conversions-between-java-and-scala-collections.html) object which replaces the `jcl` package. 2. Projections have been generalized and cleaned up and are now available as views. It seems that projections were used rarely, so not much code should be affected by this change. So, if your code uses either `jcl` or projections there might be some minor rewriting to do. diff --git a/overviews/reflection/symbols-trees-types.md b/overviews/reflection/symbols-trees-types.md index 247374b673..3d8d32c995 100644 --- a/overviews/reflection/symbols-trees-types.md +++ b/overviews/reflection/symbols-trees-types.md @@ -384,9 +384,9 @@ as ASTs. In Scala reflection, APIs that produce or use trees are the following: -1. Scala annotations, which use trees to represent their arguments, exposed in `Annotation.scalaArgs` (for more, see the [Annotations]({{ site.baseurl }}/overviews/reflection/names-exprs-scopes-more.html) section of this guide). +1. Scala annotations, which use trees to represent their arguments, exposed in `Annotation.scalaArgs` (for more, see the [Annotations]({{ site.baseurl }}/overviews/reflection/annotations-names-scopes.html) section of this guide). 2. `reify`, a special method that takes an expression and returns an AST that represents this expression. -3. Compile-time reflection with macros (outlined in the [Macros guide]({{ site.baseurl }}/macros/overview.html)) and runtime compilation with toolboxes both use trees as their program representation medium. +3. Compile-time reflection with macros (outlined in the [Macros guide]({{ site.baseurl }}/overviews/macros/overview.html)) and runtime compilation with toolboxes both use trees as their program representation medium. 
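Point 2 in the list above -- `reify` turning an expression into an AST -- can be observed directly. A minimal sketch, assuming `scala-reflect` (Scala 2.10+) is on the classpath:

```scala
import scala.reflect.runtime.universe._

object ReifyDemo extends App {
  // reify does not evaluate its argument; it returns an Expr,
  // which wraps the Tree representing the expression
  val expr: Expr[Int] = reify { 1 + 2 }

  // The underlying AST is available via .tree
  val tree: Tree = expr.tree
  println(show(tree))    // pretty-printed source form
  println(showRaw(tree)) // raw AST constructor form
}
```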
It's important to note that trees are immutable except for three fields-- `pos` (`Position`), `symbol` (`Symbol`), and `tpe` (`Type`), which are @@ -441,7 +441,7 @@ expression: Here, `reify` simply takes the Scala expression it was passed, and returns a Scala `Expr`, which simply wraps a `Tree` and a `TypeTag` (see the -[Expr]({{ site.baseurl }}/overviews/reflection/names-exprs-scopes-more.html) +[Expr]({{ site.baseurl }}/overviews/reflection/annotations-names-scopes.html) section of this guide for more information about `Expr`s). We can obtain the tree that `expr` contains by: diff --git a/sips/minutes/_posts/2016-07-15-sip-minutes.md b/sips/minutes/_posts/2016-07-15-sip-minutes.md index 547d7866b2..d448e4ef24 100644 --- a/sips/minutes/_posts/2016-07-15-sip-minutes.md +++ b/sips/minutes/_posts/2016-07-15-sip-minutes.md @@ -58,19 +58,19 @@ Minutes were taken by Jorge Vicente Cantero, acting secretary. Attendees Present: -* Martin Odersky ([@odersky](github.com/odersky)), EPFL -* Adriaan Moors ([@adriaanm](github.com/adriaanm)), Lightbend -* Heather Miller ([@heathermiller](github.com/heathermiller)), Scala Center -* Sébastien Doeraene ([@sjrd](github.com/sjrd)), EPFL -* Eugene Burmako ([@xeno-by](github.com/xeno-by)), EPFL -* Andrew Marki ([@som-snytt](github.com/som-snytt)), independent -* Josh Suereth ([@jsuereth](github.com/jsuereth)), Google -* Dmitry Petrashko ([@DarkDimius](github.com/DarkDimius)), as a guest -* Jorge Vicente Cantero ([@jvican](github.com/jvican)), Process Lead +* Martin Odersky ([@odersky](https://github.com/odersky)), EPFL +* Adriaan Moors ([@adriaanm](https://github.com/adriaanm)), Lightbend +* Heather Miller ([@heathermiller](https://github.com/heathermiller)), Scala Center +* Sébastien Doeraene ([@sjrd](https://github.com/sjrd)), EPFL +* Eugene Burmako ([@xeno-by](https://github.com/xeno-by)), EPFL +* Andrew Marki ([@som-snytt](https://github.com/som-snytt)), independent +* Josh Suereth ([@jsuereth](https://github.com/jsuereth)), 
Google +* Dmitry Petrashko ([@DarkDimius](https://github.com/DarkDimius)), as a guest +* Jorge Vicente Cantero ([@jvican](https://github.com/jvican)), Process Lead ## Guests -* Dmitry Petrashko ([@DarkDimius](github.com/DarkDimius)), EPFL (guest) +* Dmitry Petrashko ([@DarkDimius](https://github.com/DarkDimius)), EPFL (guest) ## Proceedings diff --git a/sips/minutes/_posts/2016-08-16-sip-10th-august-minutes.md b/sips/minutes/_posts/2016-08-16-sip-10th-august-minutes.md index aaeddb11c0..4bd82b8425 100644 --- a/sips/minutes/_posts/2016-08-16-sip-10th-august-minutes.md +++ b/sips/minutes/_posts/2016-08-16-sip-10th-august-minutes.md @@ -39,18 +39,18 @@ Minutes were taken by Jorge Vicente Cantero, acting secretary. Attendees Present: -* Seth Tisue ([@SethTisue](github.com/SethTisue)), EPFL -* Adriaan Moors ([@adriaanm](github.com/adriaanm)), Lightbend -* Heather Miller ([@heathermiller](github.com/heathermiller)), Scala Center -* Eugene Burmako ([@xeno-by](github.com/xeno-by)), EPFL -* Andrew Marki ([@som-snytt](github.com/som-snytt)), independent -* Josh Suereth ([@jsuereth](github.com/jsuereth)), Google -* Jorge Vicente Cantero ([@jvican](github.com/jvican)), Process Lead +* Seth Tisue ([@SethTisue](https://github.com/SethTisue)), EPFL +* Adriaan Moors ([@adriaanm](https://github.com/adriaanm)), Lightbend +* Heather Miller ([@heathermiller](https://github.com/heathermiller)), Scala Center +* Eugene Burmako ([@xeno-by](https://github.com/xeno-by)), EPFL +* Andrew Marki ([@som-snytt](https://github.com/som-snytt)), independent +* Josh Suereth ([@jsuereth](https://github.com/jsuereth)), Google +* Jorge Vicente Cantero ([@jvican](https://github.com/jvican)), Process Lead ## Apologies -* Martin Odersky ([@odersky](github.com/odersky)), EPFL -* Sébastien Doeraene ([@sjrd](github.com/sjrd)), EPFL +* Martin Odersky ([@odersky](https://github.com/odersky)), EPFL +* Sébastien Doeraene ([@sjrd](https://github.com/sjrd)), EPFL ## Proceedings ### Opening Remarks diff --git 
a/tutorials/FAQ/finding-symbols.md b/tutorials/FAQ/finding-symbols.md index 29cb27da47..4405ffe6fb 100644 --- a/tutorials/FAQ/finding-symbols.md +++ b/tutorials/FAQ/finding-symbols.md @@ -159,7 +159,7 @@ supertypes (`AnyRef` or `Any`) or a type parameter. In this case, we find avaialable on all types. Other implicit conversions may be visible in your scope depending on imports, extended types or -self-type annotations. See [Finding implicits](tutorials/FAQ/finding-implicits.md) for details. +self-type annotations. See [Finding implicits](tutorials/FAQ/finding-implicits.html) for details. Syntactic sugars/composition ----------------------------- diff --git a/zh-cn/overviews/core/architecture-of-scala-collections.md b/zh-cn/overviews/core/architecture-of-scala-collections.md index 30e4aceeb1..4d88ad725b 100644 --- a/zh-cn/overviews/core/architecture-of-scala-collections.md +++ b/zh-cn/overviews/core/architecture-of-scala-collections.md @@ -17,7 +17,7 @@ language: zh-cn Builder类概要: package scala.collection.mutable - + class Builder[-Elem, +To] { def +=(elem: Elem): this.type def result(): To @@ -33,7 +33,7 @@ Builder类概要: scala> val buf = new ArrayBuffer[Int] buf: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer() - + scala> val bldr = buf mapResult (_.toArray) bldr: scala.collection.mutable.Builder[Int,Array[Int]] = ArrayBuffer() @@ -45,7 +45,7 @@ Builder类概要: ### TraversableLike类概述 package scala.collection - + class TraversableLike[+Elem, +Repr] { def newBuilder: Builder[Elem, Repr] // deferred def foreach[U](f: Elem => U) // deferred @@ -54,7 +54,7 @@ Builder类概要: val b = newBuilder foreach { elem => if (p(elem)) b += elem } b.result - } + } } Collection库重构的主要设计目标是在拥有自然类型的同时又尽可能的共享代码实现。Scala的Collection 遵从“结果类型相同”的原则:只要可能,容器上的转换方法最后都会生成相同类型的Collection。例如,过滤操作对各种Collection类型都应该产生相同类型的实例。在List上应用过滤器应该获得List,在Map上应用过滤器,应该获得Map,如此等等。在下面的章节中,会告诉大家该原则的实现方法。 @@ -73,13 +73,13 @@ Scala的 Collection 库通过在 trait 实现中使用通用的构建器(build scala> import collection.immutable.BitSet 
import collection.immutable.BitSet - + scala> val bits = BitSet(1, 2, 3) bits: scala.collection.immutable.BitSet = BitSet(1, 2, 3) - + scala> bits map (_ * 2) res13: scala.collection.immutable.BitSet = BitSet(2, 4, 6) - + scala> bits map (_.toFloat) res14: scala.collection.immutable.Set[Float] = Set(1.0, 2.0, 3.0) @@ -91,11 +91,11 @@ Scala的 Collection 库通过在 trait 实现中使用通用的构建器(build 类似 BitSet 的问题不是唯一的,这里还有在map类型上应用map函数的交互式例子: scala> Map("a" -> 1, "b" -> 2) map { case (x, y) => (y, x) } - res3: scala.collection.immutable.Map[Int,java.lang.String] + res3: scala.collection.immutable.Map[Int,java.lang.String] = Map(1 -> a, 2 -> b) - + scala> Map("a" -> 1, "b" -> 2) map { case (x, y) => y } - res4: scala.collection.immutable.Iterable[Int] + res4: scala.collection.immutable.Iterable[Int] = List(1, 2) 第一个函数用于交换两个键值对。这个函数映射的结果是一个类似的Map,键和值颠倒了。事实上,地一个表达式产生了一个键值颠倒的map类型(在原map可颠倒的情况下)。然而,第二个函数,把键值对映射成一个整型,即成员变成了具体的值。在这种情况下,我们不可能把结果转换成Map类型,因此处理成,把结果转换成Map的一个可遍历的超类,这里是List。 @@ -112,16 +112,16 @@ TraversableLike 中映射(map)的实现: for (x <- this) b += f(x) b.result } - + 上面的代码展示了TraversableLike如何实现映射的trait。看起来非常类似于TraversableLike类的过滤器的实现。主要的区别在于,过滤器使用TraversableLike类的抽象方法 newBuilder,而映射使用的是Builder工场,它作为CanBuildFrom类型的一个额外的隐式参数传入。 CanBuildFrom trait: - + package scala.collection.generic - + trait CanBuildFrom[-From, -Elem, +To] { - // 创建一个新的构造器(builder) - def apply(from: From): Builder[Elem, To] + // 创建一个新的构造器(builder) + def apply(from: From): Builder[Elem, To] } 上面的代码是 trait CanBuildFrom 的定义,它代表着构建者工场。它有三个参数:Elem是要创建的容器(collection)的元素的类型,To是要构建的容器(collection)的类型,From是该构建器工场适用的类型。通过定义适合的隐式定义的构建器工场,你就可以构建出符合你需要的类型转换行为。以 BitSet 类为例,它的伴生对象包含一个 CanBuildFrom[BitSet, Int, BitSet] 类型的构建器工场。这就意味着,当在一个 BitSet 上执行操作的时候,你可以创建另一个元素类型为整型的 BitSet。如果你需要的类型不同,那么,你还可以使用其他的隐式构建器工场,它们在Set的伴生对象中实现。下面就是一个更通用的构建器,A是通用类型参数: @@ -134,7 +134,7 @@ CanBuildFrom trait: scala> val xs: Iterable[Int] = List(1, 2, 3) xs: Iterable[Int] = List(1, 2, 3) - + scala> val ys = xs map (x => x * x) ys: Iterable[Int] = List(1, 
4, 9) @@ -153,7 +153,7 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 case object T extends Base case object G extends Base case object U extends Base - + object Base { val fromInt: Int => Base = Array(A, T, G, U) val toInt: Base => Int = Map(A -> 0, T -> 1, G -> 2, U -> 3) @@ -170,37 +170,37 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 import collection.IndexedSeqLike import collection.mutable.{Builder, ArrayBuffer} import collection.generic.CanBuildFrom - + final class RNA1 private (val groups: Array[Int], val length: Int) extends IndexedSeq[Base] { - + import RNA1._ - + def apply(idx: Int): Base = { if (idx < 0 || length <= idx) throw new IndexOutOfBoundsException Base.fromInt(groups(idx / N) >> (idx % N * S) & M) } } - + object RNA1 { - + // 表示一组所需要的比特数 private val S = 2 - + // 一个Int能够放入的组数 private val N = 32 / S - + // 分离组的位掩码(bitmask) - private val M = (1 << S) - 1 - + private val M = (1 << S) - 1 + def fromSeq(buf: Seq[Base]): RNA1 = { val groups = new Array[Int]((buf.length + N - 1) / N) for (i <- 0 until buf.length) groups(i / N) |= Base.toInt(buf(i)) << (i % N * S) new RNA1(groups, buf.length) } - + def apply(bases: Base*) = fromSeq(bases) } @@ -212,10 +212,10 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 scala> val xs = List(A, G, T, A) xs: List[Product with Base] = List(A, G, T, A) - + scala> RNA1.fromSeq(xs) res1: RNA1 = RNA1(A, G, T, A) - + scala> val rna1 = RNA1(A, U, G, G, T) rna1: RNA1 = RNA1(A, U, G, G, T) @@ -225,10 +225,10 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 scala> rna1.length res2: Int = 5 - + scala> rna1.last res3: Base = T - + scala> rna1.take(3) res4: IndexedSeq[Base] = Vector(A, U, G) @@ -240,12 +240,12 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 val groups: Array[Int], val length: Int ) extends IndexedSeq[Base] with IndexedSeqLike[Base, RNA2] { - + import RNA2._ - - override def newBuilder: Builder[Base, RNA2] = + + override def newBuilder: Builder[Base, RNA2] = new ArrayBuffer[Base] mapResult fromSeq - + def apply(idx: Int): Base = // as before } @@ -267,7 +267,7 @@ 
RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 => scala.collection.mutable.Builder[Base,IndexedSeq[Base]] has incompatible type class RNA2 private (val groups: Array[Int], val length: Int) ^ - + one error found(发现一个错误) 错误信息非常地长,并且很复杂,体现了容器(Collection)库错综复杂的组合。所以,最好忽略有关这些方法来源的信息,因为在这种情况下,它更多得是分散人的精力。而剩下的,则说明需要声明一个具有返回类型Builder[Base, RNA2]的newBuilder方法,但无法找到一个具有返回类型Builder[Base,IndexedSeq[Base]]的newBuilder方法。后者并不覆写前者。第一个方法——返回值类型为Builder[Base, RNA2]——是一个抽象方法,其在RNA2类中通过传递RNA2的类型参数给IndexedSeqLike,来以这种类型实例化。第二个方法的返回值类型为Builder[Base,IndexedSeq[Base]]——是由继承后的IndexedSeq类提供的。换句话说,如果没有声明一个以第一个返回值类型为返回值的newBuilder,RNA2类就是非法的。 @@ -276,10 +276,10 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 scala> val rna2 = RNA2(A, U, G, G, T) rna2: RNA2 = RNA2(A, U, G, G, T) - + scala> rna2 take 3 res5: RNA2 = RNA2(A, U, G) - + scala> rna2 filter (U !=) res6: RNA2 = RNA2(A, G, G, T) @@ -291,7 +291,7 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 scala> val rna = RNA(A, U, G, G, T) rna: RNA = RNA(A, U, G, G, T) - + scala> rna map { case A => T case b => b } res7: RNA = RNA(T, U, G, G, T) @@ -304,19 +304,19 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 scala> rna map Base.toInt res2: IndexedSeq[Int] = Vector(0, 3, 2, 2, 1) - + scala> rna ++ List("missing", "data") - res3: IndexedSeq[java.lang.Object] = + res3: IndexedSeq[java.lang.Object] = Vector(A, U, G, G, T, missing, data) 这就是在理想情况下应认为结果。但是,RNA2类并不提供这样的处理。事实上,如果你用RNA2类的实例来运行前两个例子,结果则是: scala> val rna2 = RNA2(A, U, G, G, T) rna2: RNA2 = RNA2(A, U, G, G, T) - + scala> rna2 map { case A => T case b => b } res0: IndexedSeq[Base] = Vector(T, U, G, G, T) - + scala> rna2 ++ rna2 res1: IndexedSeq[Base] = Vector(A, U, G, G, T, A, U, G, G, T) @@ -331,23 +331,23 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 #### RNA链类的最终版本 - final class RNA private (val groups: Array[Int], val length: Int) + final class RNA private (val groups: Array[Int], val length: Int) extends IndexedSeq[Base] with IndexedSeqLike[Base, RNA] { - + import RNA._ - + // 在IndexedSeq中必须重新实现newBuilder - override protected[this] def 
newBuilder: Builder[Base, RNA] = + override protected[this] def newBuilder: Builder[Base, RNA] = RNA.newBuilder - + // 在IndexedSeq中必须实现apply def apply(idx: Int): Base = { if (idx < 0 || length <= idx) throw new IndexOutOfBoundsException Base.fromInt(groups(idx / N) >> (idx % N * S) & M) } - - // (可选)重新实现foreach, + + // (可选)重新实现foreach, // 来提高效率 override def foreach[U](f: Base => U): Unit = { var i = 0 @@ -363,24 +363,24 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 #### RNA伴生对象的最终版本 object RNA { - + private val S = 2 // group中的比特(bit)数 private val M = (1 << S) - 1 // 用于隔离group的比特掩码 private val N = 32 / S // 一个Int中的group数 - + def fromSeq(buf: Seq[Base]): RNA = { val groups = new Array[Int]((buf.length + N - 1) / N) for (i <- 0 until buf.length) groups(i / N) |= Base.toInt(buf(i)) << (i % N * S) new RNA(groups, buf.length) } - + def apply(bases: Base*) = fromSeq(bases) - - def newBuilder: Builder[Base, RNA] = + + def newBuilder: Builder[Base, RNA] = new ArrayBuffer mapResult fromSeq - - implicit def canBuildFrom: CanBuildFrom[RNA, Base, RNA] = + + implicit def canBuildFrom: CanBuildFrom[RNA, Base, RNA] = new CanBuildFrom[RNA, Base, RNA] { def apply(): Builder[Base, RNA] = newBuilder def apply(from: RNA): Builder[Base, RNA] = newBuilder @@ -399,28 +399,28 @@ RNA(核糖核酸)碱基(译者注:RNA链即很多不同RNA碱基的序 在第二个实例中,将介绍如何将一个新的map类型整合到容器框架中的。其方式是通过使用关键字“Patricia trie”,实现以String作为类型的可变映射(mutable map)。术语“Patricia“实际上就是"Practical Algorithm to Retrieve Information Coded in Alphanumeric."(检索字母数字编码信息的实用算法) 的缩写。思想是以树的形式存储一个set或者map,在这种树中,后续字符作为子树可以用唯一确定的关键字查找。例如,一个 Patricia trie存储了三个字符串 "abc", "abd", "al", "all", "xy" 。如下: -patricia 树的例子: +patricia 树的例子: -![20131225160411.png](/pictures/20131225160411.png) +![patricia.png](/resources/images/patricia.png) 为了能够在trie中查找与字符串”abc“匹配的节点,只要沿着标记为”a“的子树,查找到标记为”b“的子树,最后到达标记为”c“的子树。如果 Patricia trie作为map使用,键所对应的值保存在一个可通过键定位的节点上。如果作为set,只需保存一个标记,说明set中存在这个节点。 使用Patricia tries的prefix map实现方式: import collection._ - + class PrefixMap[T] - extends mutable.Map[String, T] 
+ extends mutable.Map[String, T] with mutable.MapLike[String, T, PrefixMap[T]] { - + var suffixes: immutable.Map[Char, PrefixMap[T]] = Map.empty var value: Option[T] = None - + def get(s: String): Option[T] = if (s.isEmpty) value else suffixes get (s(0)) flatMap (_.get(s substring 1)) - - def withPrefix(s: String): PrefixMap[T] = + + def withPrefix(s: String): PrefixMap[T] = if (s.isEmpty) this else { val leading = s(0) @@ -431,23 +431,23 @@ patricia 树的例子: } suffixes(leading) withPrefix (s substring 1) } - + override def update(s: String, elem: T) = withPrefix(s).value = Some(elem) - + override def remove(s: String): Option[T] = if (s.isEmpty) { val prev = value; value = None; prev } else suffixes get (s(0)) flatMap (_.remove(s substring 1)) - + def iterator: Iterator[(String, T)] = (for (v <- value.iterator) yield ("", v)) ++ - (for ((chr, m) <- suffixes.iterator; + (for ((chr, m) <- suffixes.iterator; (s, v) <- m.iterator) yield (chr +: s, v)) - + def += (kv: (String, T)): this.type = { update(kv._1, kv._2); this } - + def -= (s: String): this.type = { remove(s); this } - + override def empty = new PrefixMap[T] } @@ -455,7 +455,7 @@ Patricia tries支持非常高效的查找和更新。另一个良好的特点是 依据这些思想,来看一下作为Patricia trie的映射实现方式。这种map称为PrefixMap。PrefixMap提供了withPrefix方法,这个方法根据给定的前缀查找子映射(submap),其包含了所有匹配该前缀的键。首先,使用键来定义一个prefix map,执行如下。 - scala> val m = PrefixMap("abc" -> 0, "abd" -> 1, "al" -> 2, + scala> val m = PrefixMap("abc" -> 0, "abd" -> 1, "al" -> 2, "all" -> 3, "xy" -> 4) m: PrefixMap[Int] = Map((abc,0), (abd,1), (al,2), (all,3), (xy,4)) @@ -482,21 +482,21 @@ prefix map的伴生对象: import scala.collection.mutable.{Builder, MapBuilder} import scala.collection.generic.CanBuildFrom - + object PrefixMap extends { def empty[T] = new PrefixMap[T] - + def apply[T](kvs: (String, T)*): PrefixMap[T] = { val m: PrefixMap[T] = empty for (kv <- kvs) m += kv m } - - def newBuilder[T]: Builder[(String, T), PrefixMap[T]] = + + def newBuilder[T]: Builder[(String, T), PrefixMap[T]] = new 
MapBuilder[String, T, PrefixMap[T]](empty) - + implicit def canBuildFrom[T] - : CanBuildFrom[PrefixMap[_], (String, T), PrefixMap[T]] = + : CanBuildFrom[PrefixMap[_], (String, T), PrefixMap[T]] = new CanBuildFrom[PrefixMap[_], (String, T), PrefixMap[T]] { def apply(from: PrefixMap[_]) = newBuilder[T] def apply() = newBuilder[T] @@ -513,7 +513,7 @@ prefix map的伴生对象: scala> PrefixMap("hello" -> 5, "hi" -> 2) res0: PrefixMap[Int] = Map((hello,5), (hi,2)) - + scala> PrefixMap.empty[String] res2: PrefixMap[String] = Map() @@ -537,4 +537,4 @@ prefix map的伴生对象: ### 致谢 -这些页面的素材改编自,由Odersky,Spoon和Venners编写的[Scala编程](http://www.artima.com/shop/programming_in_scala)第2版 。感谢Artima 对于出版的大力支持。 +这些页面的素材改编自,由Odersky,Spoon和Venners编写的[Scala编程](http://www.artima.com/shop/programming_in_scala)第2版 。感谢Artima 对于出版的大力支持。 diff --git a/zh-cn/overviews/parallel-collections/architecture.md b/zh-cn/overviews/parallel-collections/architecture.md index cab30ed770..265167ccf2 100644 --- a/zh-cn/overviews/parallel-collections/architecture.md +++ b/zh-cn/overviews/parallel-collections/architecture.md @@ -24,7 +24,7 @@ Spliter的工作,正如其名,它把一个并行集合分割到了它的元 trait Splitter[T] extends Iterator[T] { def split: Seq[Splitter[T]] } - + 有趣的是,分割器是作为迭代器实现的,这意味着除了分割,他们也被框架用来遍历并行集合(也就是说,他们继承了迭代器的标准方法,如next()和hasNext())。这种“分割迭代器”的独特之处是它的分割方法把自身(迭代器类型的分割器)进一步分割成额外的分割器,这些新的分割器能遍历到整个并行集合的不相交的元素子集。类似于正常的迭代器,分割器在调用分割方法后失效。 一般来说,集合是使用分割器(Splitters)分成大小大致相同的子集。在某些情况下,任意大小的分区是必须的,特别是在并行序列上,PreciseSplitter(精确的分割器)是很有用的,它是继承于Splitter和另外一个实现了精确分割的方法--psplit. @@ -46,7 +46,7 @@ trait Combiner[Elem, To] extends Builder[Elem, To] { Scala的并行集合吸收了很多来自于Scala的(序列)集合库的设计灵感--事实上,它反映了规则地集合框架的相应特征,如下所示。 -![parallel-collections-hierarchy.png](/pictures/parallel-collections-hierarchy.png) +![parallel-collections-hierarchy.png](/resources/images/parallel-collections-hierarchy.png) Scala集合的层次和并行集合库 @@ -59,4 +59,3 @@ Scala集合的层次和并行集合库 引用 1. 
[On a Generic Parallel Collection Framework, Aleksandar Prokopec, Phil Bawgell, Tiark Rompf, Martin Odersky, June 2011](http://infoscience.epfl.ch/record/165523/files/techrep.pdf) - diff --git a/zh-cn/overviews/parallel-collections/concrete-parallel-collections.md b/zh-cn/overviews/parallel-collections/concrete-parallel-collections.md index f0ff147bfa..623018265f 100644 --- a/zh-cn/overviews/parallel-collections/concrete-parallel-collections.md +++ b/zh-cn/overviews/parallel-collections/concrete-parallel-collections.md @@ -15,13 +15,13 @@ language: zh-cn scala> val pa = scala.collection.parallel.mutable.ParArray.tabulate(1000)(x => 2 * x + 1) pa: scala.collection.parallel.mutable.ParArray[Int] = ParArray(1, 3, 5, 7, 9, 11, 13,... - + scala> pa reduce (_ + _) res0: Int = 1000000 - + scala> pa map (x => (x - 1) / 2) res1: scala.collection.parallel.mutable.ParArray[Int] = ParArray(0, 1, 2, 3, 4, 5, 6, 7,... - + 在内部,分离一个并行数组[分离器](http://docs.scala-lang.org/overviews/parallel-collections/architecture.html#core_abstractions)相当于使用它们的下标迭代器更新来创建两个分离器。[组合](http://docs.scala-lang.org/overviews/parallel-collections/architecture.html#core_abstractions)稍微负责一点。因为大多数的分离方法(如:flatmap, filter, takeWhile等)我们不能预先知道元素的个数(或者数组的大小),每一次组合本质上来说是一个数组缓冲区的一种变量根据分摊时间来进行加减的操作。不同的处理器进行元素相加操作,对每个独立并列数组进行组合,然后根据其内部连结在再进行组合。在并行数组中的基础数组只有在知道元素的总数之后才能被分配和填充。基于此,变换方法比存取方法要稍微复杂一些。另外,请注意,最终数组分配在JVM上的顺序进行,如果映射操作本身是很便宜,这可以被证明是一个序列瓶颈。 通过调用seq方法,并行数组(parallel arrays)被转换为对应的顺序容器(sequential collections) ArraySeq。这种转换是非常高效的,因为新创建的ArraySeq 底层是通过并行数组(parallel arrays)获得的。 @@ -32,7 +32,7 @@ language: zh-cn scala> val pv = scala.collection.parallel.immutable.ParVector.tabulate(1000)(x => x) pv: scala.collection.parallel.immutable.ParVector[Int] = ParVector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9,... - + scala> pv filter (_ % 2 == 0) res0: scala.collection.parallel.immutable.ParVector[Int] = ParVector(0, 2, 4, 6, 8, 10, 12, 14, 16, 18,... 
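The `ParVector` transcript above also illustrates a point made in the surrounding text: conversion between the sequential and parallel forms is cheap because the underlying 32-way trie is shared. A small sketch (on Scala 2.13+ this needs the `scala-parallel-collections` module):

```scala
object ParVectorDemo extends App {
  val pv = scala.collection.parallel.immutable.ParVector.tabulate(1000)(x => x)

  // Transformations return a ParVector and preserve element order
  val evens = pv.filter(_ % 2 == 0)

  // .seq converts back to an ordinary Vector; the trie structure is
  // shared, so the conversion itself is effectively constant time
  val v = evens.seq
  println(v.take(5)) // Vector(0, 2, 4, 6, 8)
}
```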
不可变向量表现为32叉树,因此[分离器]通过将子树分配到每个分离器(spliter)来分离。[组合(combiners)]存储元素的向量并通过懒惰(lazily)拷贝来组合元素。因此,转换方法相对于并行数组来说可伸缩性较差。一旦串联操作在将来scala的发布版本中成为可变的,组合器将会使得串联和变量器方法更加有效率。 @@ -45,10 +45,10 @@ language: zh-cn scala> 1 to 3 par res0: scala.collection.parallel.immutable.ParRange = ParRange(1, 2, 3) - + scala> 15 to 5 by -2 par res1: scala.collection.parallel.immutable.ParRange = ParRange(15, 13, 11, 9, 7, 5) - + 正如顺序范围有没有创建者(builders),平行的范围(parallel ranges)有没有组合者(combiners)。映射一个并行范围的元素来产生一个并行向量。顺序范围(sequential ranges)和并行范围(parallel ranges)能够被高效的通过seq和par方法进行转换。 ### 并行哈希表(Parallel Hash Tables) @@ -57,10 +57,10 @@ language: zh-cn scala> val phs = scala.collection.parallel.mutable.ParHashSet(1 until 2000: _*) phs: scala.collection.parallel.mutable.ParHashSet[Int] = ParHashSet(18, 327, 736, 1045, 773, 1082,... - + scala> phs map (x => x * x) res0: scala.collection.parallel.mutable.ParHashSet[Int] = ParHashSet(2181529, 2446096, 99225, 2585664,... - + 并行哈希表组合器元素排序是依据他们的哈希码前缀在桶(buckets)中进行的。它们通过简单地连接这些桶在一起。一旦最后的哈希表被构造出来(如:组合结果的方法被调用),基本数组分配和从不同的桶元素复制在平行于哈希表的数组不同的相邻节段。 连续的哈希映射和散列集合可以被转换成并行的变量使用par方法。并行哈希表内在上要求一个映射的大小在不同块的哈希表元素的数目。这意味着,一个连续的哈希表转换为并行哈希表的第一时间,表被遍历并且size map被创建,因此,第一次调用par方法的时间是和元素个数成线性关系的。进一步修改的哈希表的映射大小保持状态,所以以后的转换使用PAR和序列具有常数的复杂性。使用哈希表的usesizemap方法,映射大小的维护可以开启和关闭。重要的是,在连续的哈希表的修改是在并行哈希表可见,反之亦然。 @@ -71,10 +71,10 @@ language: zh-cn scala> val phs = scala.collection.parallel.immutable.ParHashSet(1 until 1000: _*) phs: scala.collection.parallel.immutable.ParHashSet[Int] = ParSet(645, 892, 69, 809, 629, 365, 138, 760, 101, 479,... 
- + scala> phs map { x => x * x } sum res0: Int = 332833500 - + 类似于平行散列哈希表,parallel hash trie在桶(buckets)里预排序这些元素和根据不同的处理器分配不同的桶(buckets) parallel hash trie的结果,这些构建subtrie是独立的。 并行散列试图可以来回转换的,顺序散列试图利用序列和时间常数的方法。 @@ -85,12 +85,12 @@ language: zh-cn scala> val numbers = scala.collection.parallel.mutable.ParTrieMap((1 until 100) zip (1 until 100): _*) map { case (k, v) => (k.toDouble, v.toDouble) } numbers: scala.collection.parallel.mutable.ParTrieMap[Double,Double] = ParTrieMap(0.0 -> 0.0, 42.0 -> 42.0, 70.0 -> 70.0, 2.0 -> 2.0,... - + scala> while (numbers.nonEmpty) { | numbers foreach { case (num, sqrt) => | val nsqrt = 0.5 * (sqrt + num / sqrt) | numbers(num) = nsqrt - | if (math.abs(nsqrt - sqrt) < 0.01) { + | if (math.abs(nsqrt - sqrt) < 0.01) { | println(num, nsqrt) | numbers.remove(num) | } @@ -101,7 +101,7 @@ language: zh-cn (7.0,2.64576704419029) (4.0,2.0000000929222947) ... - + 合成器是引擎盖下triemaps实施——因为这是一个并行数据结构,只有一个组合构建整个变压器的方法调用和所有处理器共享。 与所有的并行可变容器(collections),Triemaps和并行partriemaps通过调用序列或PAR方法得到了相同的存储支持,所以修改在一个在其他可见。转换发生在固定的时间。 @@ -110,11 +110,22 @@ language: zh-cn 顺序类型(sequence types)的性能特点: -![20131225152436.png](/pictures/20131225152436.png) +| | head | tail | apply | update| prepend | append | insert | +| -------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | +| `ParArray` | C | L | C | C | L | L | L | +| `ParVector` | eC | eC | eC | eC | eC | eC | - | +| `ParRange` | C | C | C | - | - | - | - | 性能特征集(set)和映射类型: -![20131225152515.png](/pictures/20131225152515.png) +| | lookup | add | remove | +| -------- | ---- | ---- | ---- | +| **immutable** | | | | +| `ParHashSet`/`ParHashMap`| eC | eC | eC | +| **mutable** | | | | +| `ParHashSet`/`ParHashMap`| C | C | C | +| `ParTrieMap` | eC | eC | eC | + ####Key
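The zh-cn transcript patched above sketches Newton's method for square roots over a `ParTrieMap`, relying on the property that a `ParTrieMap` may be safely updated and shrunk while it is being traversed in parallel. A cleaned-up standalone version of that REPL session:

```scala
import scala.collection.parallel.mutable.ParTrieMap

object NewtonSqrtDemo extends App {
  // map each number to its current sqrt estimate (seeded with the number itself)
  val numbers = ParTrieMap((1 until 100).map(i => (i.toDouble, i.toDouble)): _*)

  while (numbers.nonEmpty) {
    numbers.foreach { case (num, sqrt) =>
      val next = 0.5 * (sqrt + num / sqrt) // one Newton iteration
      numbers(num) = next
      // removing converged entries mid-traversal is legal for ParTrieMap
      if (math.abs(next - sqrt) < 0.01) {
        println((num, next))
        numbers.remove(num)
      }
    }
  }
}
```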