zsh-users
* Why large arrays are extremely slow to handle?
@ 2011-03-25  0:37 nix
  2011-03-25  0:46 ` Mikael Magnusson
  2011-03-25  2:29 ` Bart Schaefer
  0 siblings, 2 replies; 5+ messages in thread
From: nix @ 2011-03-25  0:37 UTC
  To: zsh-users

Tested on AMD Phenom(tm) II X6 1090T Processor 3.6GHz using one core.

I think there is a big flaw somewhere that causes the following:

#!/bin/zsh

emulate zsh

TEST=()

for i in {1..10000} ; do

TEST+="$i" # append (push) to an array

done

--- 10K
time ./bench
real    0m3.944s

--- 50K BOOOM! WTF?

time ./bench
real    1m53.321s

It does not make much sense to me. I'm also a PHP developer. Just for
comparison, let's do the same in PHP.

<?php

$test = array();

for ($i=1; $i < 50000; $i++) {

$test[] = $i;

}

print_r($test);

?>

--- 10K

time php TEST_PHP
real    0m0.011s

--- 50K

time php TEST_PHP
real    0m0.025s


Any ideas why it's extremely slow? I need to use very large arrays
(even over one million elements in a single array), but that's currently
impossible due to the above.






* Re: Why large arrays are extremely slow to handle?
  2011-03-25  0:37 Why large arrays are extremely slow to handle? nix
@ 2011-03-25  0:46 ` Mikael Magnusson
  2011-03-25  1:12   ` nix
  2011-03-25  2:29 ` Bart Schaefer
  1 sibling, 1 reply; 5+ messages in thread
From: Mikael Magnusson @ 2011-03-25  0:46 UTC
  To: nix; +Cc: zsh-users

On 25 March 2011 01:37,  <nix@myproxylists.com> wrote:
> Tested on AMD Phenom(tm) II X6 1090T Processor 3.6GHz using one core.
>
> I think there is a big flaw somewhere that causes the following:
>
> #!/bin/zsh
>
> emulate zsh
>
> TEST=()
>
> for i in {1..10000} ; do
>
> TEST+="$i" # append (push) to an array
>
> done
>
> --- 10K
> time ./bench
> real    0m3.944s
>
> --- 50K BOOOM! WTF?
>
> time ./bench
> real    1m53.321s
>
> It does not make much sense to me. I'm also a PHP developer. Just for
> comparison, let's do the same in PHP.
>
> <?php
>
> $test = array();
>
> for ($i=1; $i < 50000; $i++) {
>
> $test[] = $i;
>
> }
>
> print_r($test);
>
> ?>
>
> --- 10K
>
> time php TEST_PHP
> real    0m0.011s
>
> --- 50K
>
> time php TEST_PHP
> real    0m0.025s
>
>
> Any ideas why it's extremely slow? I need to use very large arrays
> (even over one million elements in a single array), but that's currently
> impossible due to the above.

The problem is not the array, but that you are handing 50000 arguments
to the for loop. With this optimization it "only" takes 5 seconds ;)
for (( i = 0; i < 10000; i++ )) { arr+=$i }
That said, you generally don't want to use large arrays in zsh; it will be slow.

-- 
Mikael Magnusson



* Re: Why large arrays are extremely slow to handle?
  2011-03-25  0:46 ` Mikael Magnusson
@ 2011-03-25  1:12   ` nix
  0 siblings, 0 replies; 5+ messages in thread
From: nix @ 2011-03-25  1:12 UTC
  To: Mikael Magnusson; +Cc: zsh-users

> On 25 March 2011 01:37,  <nix@myproxylists.com> wrote:
>> Tested on AMD Phenom(tm) II X6 1090T Processor 3.6GHz using one core.
>>
>> I think there is a big flaw somewhere that causes the following:
>>
>> #!/bin/zsh
>>
>> emulate zsh
>>
>> TEST=()
>>
>> for i in {1..10000} ; do
>>
>> TEST+="$i" # append (push) to an array
>>
>> done
>>
>> --- 10K
>> time ./bench
>> real    0m3.944s
>>
>> --- 50K BOOOM! WTF?
>>
>> time ./bench
>> real    1m53.321s
>>
>> It does not make much sense to me. I'm also a PHP developer. Just for
>> comparison, let's do the same in PHP.
>>
>> <?php
>>
>> $test = array();
>>
>> for ($i=1; $i < 50000; $i++) {
>>
>> $test[] = $i;
>>
>> }
>>
>> print_r($test);
>>
>> ?>
>>
>> --- 10K
>>
>> time php TEST_PHP
>> real    0m0.011s
>>
>> --- 50K
>>
>> time php TEST_PHP
>> real    0m0.025s
>>
>>
>> Any ideas why it's extremely slow? I need to use very large arrays
>> (even over one million elements in a single array), but that's currently
>> impossible due to the above.
>
> The problem is not the array, but that you are handing 50000 arguments
> to the for loop. With this optimization it "only" takes 5 seconds ;)
> for (( i = 0; i < 10000; i++ )) { arr+=$i }
> That said, you generally don't want to use large arrays in zsh; it will be
> slow.
>
> --
> Mikael Magnusson
>

The problem *is* the array. I tried it on a dual X5450 Xeon machine as
well; it is terribly slow there too when using zsh.

I would love to have a fix. I just coded a subnet generator in zsh and
noticed that when I started to generate larger IP ranges, things slowed
down badly :(

Mikael, try this with the provided example:

arr=( $(print -r -- ${(u)=arr}) ) # List only unique elements in an array

It's terribly slow as well with 50K elements; the problem is nothing but
the array handling.

#!/bin/zsh

emulate zsh

TEST=()

for (( i = 0; i < 50000; i++ )) ; do

TEST+="$i"

done

time ./bench
real    1m54.353s

No difference at all compared to the "for i in {1..50000}" version ;)
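
As a side note: if the goal of that ${(u)...} line is only deduplication,
a minimal sketch that skips the $(print ...) round trip entirely is zsh's
unique attribute (the sample values below are made up):

typeset -U arr          # arrays with the unique attribute drop duplicates
arr=(10.0.0.1 10.0.0.2 10.0.0.1)
print -r -- $arr        # -> 10.0.0.1 10.0.0.2

arr=( ${(u)arr} ) without the command substitution works as well; neither
variant changes the append cost discussed in this thread.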




* Re: Why large arrays are extremely slow to handle?
  2011-03-25  0:37 Why large arrays are extremely slow to handle? nix
  2011-03-25  0:46 ` Mikael Magnusson
@ 2011-03-25  2:29 ` Bart Schaefer
  2011-03-25  2:50   ` nix
  1 sibling, 1 reply; 5+ messages in thread
From: Bart Schaefer @ 2011-03-25  2:29 UTC
  To: zsh-users

On Mar 25,  2:37am, nix@myproxylists.com wrote:
}
} I think there is a big flaw somewhere that causes the following:
} 
} #!/bin/zsh
} emulate zsh
} TEST=()
} for i in {1..10000} ; do
} TEST+="$i" # append (push) to an array
} done
} 
} --- 10K
} time ./bench
} real    0m3.944s
} 
} --- 50K BOOOM! WTF?
} 
} time ./bench
} real    1m53.321s
} 
} Any ideas why it's extremely slow?

It's not the array, it's the loop interpretation thereof.

TEST=({1..50000})

will populate a 50k-element array almost instantly.  Here's a 500,000
element array on my home desktop:

torch% typeset -F SECONDS
torch% print $SECONDS; TEST=({1..500000}); print $SECONDS
24.9600260000
25.4452710000
torch% 

Put that in a loop instead, and you're interpreting a fetch/replace of the
whole array on every cycle.  This is in part because array assignment is
generalized for replacing arbitrary slices of the array; append is not
treated specially.  [If someone wants to try to optimize this, start at
the final "else" block in Src/params.c : setarrvalue() -- but beware of
what happens in freearray().]
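
For anyone who wants to watch the growth rather than take it on faith,
here is a rough sketch of a measurement script (the sizes and names are
only for illustration, and the numbers will depend on the machine):

#!/bin/zsh
emulate zsh
typeset -F SECONDS

# Each plain append rewrites the whole array, so the total work grows
# roughly with the square of the element count.
for n in 5000 10000 20000; do
  TEST=()
  start=$SECONDS
  for (( i = 1; i <= n; i++ )); do
    TEST+="$i"
  done
  print -r -- "$n appends: $(( SECONDS - start ))s"
done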

As it happens, you can get much better update performance at the cost of
some memory performance by using an associative array instead.  Try:

typeset -A TEST
for i in {1..50000} ; do
TEST[$i]=$i
done

Individual elements of hashes *are* fetched by reference without the
whole hash coming along, and are updated in place rather than treated
as slices, so this is your fastest option without a C-code change.
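
If you then need an ordered plain array out of the hash (which itself has
no inherent order), one way back, sketched under the same keys-equal-values
assumption as above ("arr" is just an illustrative name):

typeset -A TEST
for i in {1..50000} ; do TEST[$i]=$i ; done
arr=( ${(vno)TEST} )    # the hash values, sorted numerically ascending
print $#arr             # -> 50000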

You can also build up the "array" as a simple text block with delimiters,
then split it to an actual array very quickly.  Append to a scalar isn't
really any better algorithmically than an array, but it does fewer memory
operations.

torch% for i in {1..50000}; do TEST+="$i"$'\n' ; done
torch% TEST=(${(f)TEST})
torch% print $#TEST
50000



* Re: Why large arrays are extremely slow to handle?
  2011-03-25  2:29 ` Bart Schaefer
@ 2011-03-25  2:50   ` nix
  0 siblings, 0 replies; 5+ messages in thread
From: nix @ 2011-03-25  2:50 UTC
  To: zsh-users

> On Mar 25,  2:37am, nix@myproxylists.com wrote:
> }
> } I think there is a big flaw somewhere that causes the following:
> }
> } #!/bin/zsh
> } emulate zsh
> } TEST=()
> } for i in {1..10000} ; do
> } TEST+="$i" # append (push) to an array
> } done
> }
> } --- 10K
> } time ./bench
> } real    0m3.944s
> }
> } --- 50K BOOOM! WTF?
> }
> } time ./bench
> } real    1m53.321s
> }
> } Any ideas why it's extremely slow?
>
> It's not the array, it's the loop interpretation thereof.
>
> TEST=({1..50000})
>
> will populate a 50k-element array almost instantly.  Here's a 500,000
> element array on my home desktop:
>
> torch% typeset -F SECONDS
> torch% print $SECONDS; TEST=({1..500000}); print $SECONDS
> 24.9600260000
> 25.4452710000
> torch%
>
> Put that in a loop instead, and you're interpreting a fetch/replace of the
> whole array on every cycle.  This is in part because array assignment is
> generalized for replacing arbitrary slices of the array; append is not
> treated specially.  [If someone wants to try to optimize this, start at
> the final "else" block in Src/params.c : setarrvalue() -- but beware of
> what happens in freearray().]
>
> As it happens, you can get much better update performance at the cost of
> some memory performance by using an associative array instead.  Try:
>
> typeset -A TEST
> for i in {1..50000} ; do
> TEST[$i]=$i
> done
>
> Individual elements of hashes *are* fetched by reference without the
> whole hash coming along, and are updated in place rather than treated
> as slices, so this is your fastest option without a C-code change.

typeset -A did the trick. Now the speed is decent enough on my 1090T:

For 50K:

time ./bench
real    0m0.681s

My C skills are very limited (barely the basics), so I'm afraid it's better
not to touch that code at all.

>
> You can also build up the "array" as a simple text block with delimiters,
> then split it to an actual array very quickly.  Append to a scalar isn't
> really any better algorithmically than an array, but it does fewer memory
> operations.
>
> torch% for i in {1..50000}; do TEST+="$i"$'\n' ; done
> torch% TEST=(${(f)TEST})
> torch% print $#TEST
> 50000
>

As usual, thank you for the very detailed explanation of the problem and
the solution. Now even 'TEST=( $(print -r -- ${(u)=TEST}) ) # List only
unique elements in an array' gives reasonable speed after switching to an
associative array.
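
Side note: since the keys of an associative array are unique by
construction, that dedup line can also be reduced to reading the keys
back out, at least under the keys-equal-values setup used above
("uniq_elems" is just an illustrative name):

uniq_elems=( ${(k)TEST} )   # hash keys, i.e. the unique elements
print $#uniq_elems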

I like "foreach", very similar to PHP's one.

Thanks.



