
The following is an archive of a post made to our 'vox-tech mailing list' by one of its subscribers.

Re: [vox-tech] loop efficiency and testing against zero.

On Fri, Jun 16, 2006 at 12:28:10PM -0400, Peter Jay Salzman wrote:
> I've read somewhere that a loop that runs from 0 to some number should be
> written to go in reverse order, e.g. instead of:
> 
>    for ( int i = 0;  i < 10;  ++i )
> 
> we should write:
> 
>    for ( int i = 9;  i >= 0;  --i )
> 
> The rationale is that it's faster to test against 0 than some other integer,
> but it isn't obvious to me *why* it's faster.

In the first case, both the `++i` increment and the `i < 10` comparison
are performed on every iteration.

In the second case, the `--i` decrement is still performed on every
iteration, but the `i >= 0` test need not be a separate instruction,
because [many] CPUs implicitly perform an `i >= 0` test after every
arithmetic operation.  Many CPUs likewise perform `i == 0` and `i < 0`
tests after every arithmetic operation.

... is the simple way to view it.  To be more precise, after each
arithmetic operation, most CPUs set the following flags internally:

  1. The operation resulted in zero.
  2. The operation resulted in a negative number.

These flags are then used by the conditional jump instructions (beq,
bne, blt, bge, etc.).  Comparison operations like `i < 10` simply
execute an opcode that sets those same flags.

... is the idea.  Back in the real world, optimization makes the
difference moot, and a little test with GCC shows there's no difference
at the -O2 optimization level.[1]  The difference may become real,
however, if you start using the `i` variable inside the loop, which
makes optimization a bit more difficult.

Even if I had a non-optimizing compiler, however, I would still use the
increasing loop, because it's what I'm used to.  That makes it easier to
think about how the program works, which leads to code with fewer bugs,
and that matters more than a one-instruction-per-loop speed
optimization.  The idea is that we should use the faster CPUs of today
not only to execute code faster, but also to write safer code, which
seems to be the most widely accepted view in the computer industry
these days (and I agree).

FYI, mixing decreasing and increasing loops together in one function
(let alone a file or program) is confusing and is discouraged by (IIRC)
Brian Kernighan and Rob Pike in "The Practice of Programming."

-Mark

[1] I wrote two C programs, each executing printf("1\n") inside one of
your two loops, ran `gcc -save-temps program.c -c` on each program, and
compared the assembly output, "program.s".

_______________________________________________
vox-tech mailing list
vox-tech@lists.lugod.org
http://lists.lugod.org/mailman/listinfo/vox-tech


