Commit 432660f

Another try at making concat_003 more reliable
Use array_fill() for the array population loop -- that loop isn't the part being tested, and on PHP 7.0 without opcache it duplicates the inner array a lot.
1 parent 78675eb commit 432660f
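
For context on why this helps: array_fill() builds the inner array once and stores the same refcounted value in every slot, while the old loop re-evaluated the array literal on each of the 220000 iterations. Without opcache (which can make constant array literals shared and immutable), PHP 7.0 copies the literal's array on each use, so the setup loop dominated the test's runtime. A minimal standalone sketch of the difference (illustrative only, not part of the commit):

<?php
// array_fill(): one inner array, shared copy-on-write across all slots.
$inner = ['000.000.000.000', '000.255.255.255', '保留地址'];
$t = microtime(true);
$fast = array_fill(0, 220000, $inner);
printf("array_fill: %.3fs\n", microtime(true) - $t);

// Loop with an array literal: each iteration takes a fresh copy of the
// literal's array when it isn't immutable (PHP 7.0 without opcache),
// so the population itself does 220000 array duplications.
$t = microtime(true);
$slow = [];
for ($i = 0; $i < 220000; $i++) {
    $slow[] = ['000.000.000.000', '000.255.255.255', '保留地址'];
}
printf("loop: %.3fs\n", microtime(true) - $t);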

File tree: 1 file changed (+14, -28 lines)
Zend/tests/concat_003.phpt

Lines changed: 14 additions & 28 deletions
@@ -2,37 +2,24 @@
 Concatenating many small strings should not slowdown allocations
 --SKIPIF--
 <?php if (PHP_DEBUG) { die ("skip debug version is slow"); } ?>
---INI--
-memory_limit=256m
 --FILE--
 <?php

-/* To note is that memory usage can vary depending on whether opcache is on. The actual
-measuring that matters is timing here. */
-
 $time = microtime(TRUE);

 /* This might vary on Linux/Windows, so the worst case and also count in slow machines. */
-$t0_max = 0.3;
-$t1_max = 1.0;
-
-$datas = [];
-for ($i = 0; $i < 220000; $i++)
-{
-	$datas[] = [
-		'000.000.000.000',
-		'000.255.255.255',
-		'保留地址',
-		'保留地址',
-		'保留地址',
-		'保留地址',
-		'保留地址',
-		'保留地址',
-	];
-}
-
-$t0 = microtime(TRUE) - $time;
-var_dump($t0 < $t0_max);
+$t_max = 1.0;
+
+$datas = array_fill(0, 220000, [
+	'000.000.000.000',
+	'000.255.255.255',
+	'保留地址',
+	'保留地址',
+	'保留地址',
+	'保留地址',
+	'保留地址',
+	'保留地址',
+]);

 $time = microtime(TRUE);
 $texts = '';
@@ -41,12 +28,11 @@ foreach ($datas AS $data)
 	$texts .= implode("\t", $data) . "\r\n";
 }

-$t1 = microtime(TRUE) - $time;
-var_dump($t1 < $t1_max);
+$t = microtime(TRUE) - $time;
+var_dump($t < $t_max);

 ?>
 +++DONE+++
 --EXPECT--
 bool(true)
-bool(true)
 +++DONE+++
