From: Corey Farwell Date: Sat, 25 Mar 2017 16:30:32 +0000 (-0700) Subject: Rollup merge of #40807 - stjepang:optimize-insertion-sort, r=alexcrichton X-Git-Url: https://git.lizzy.rs/?a=commitdiff_plain;h=2bdbcb061806e8e03507ecdfa22493d9195bf25c;p=rust.git

Rollup merge of #40807 - stjepang:optimize-insertion-sort, r=alexcrichton

Optimize insertion sort

This change slightly restructures the main iteration loop so that LLVM can optimize it more efficiently.

Benchmark:

```
 name                                   before ns/iter   after ns/iter    diff ns/iter  diff %
 slice::sort_unstable_small_ascending   39 (2051 MB/s)   38 (2105 MB/s)             -1   -2.56%
 slice::sort_unstable_small_big_random  579 (2210 MB/s)  575 (2226 MB/s)            -4   -0.69%
 slice::sort_unstable_small_descending  80 (1000 MB/s)   70 (1142 MB/s)            -10  -12.50%
 slice::sort_unstable_small_random      396 (202 MB/s)   386                       -10   -2.53%
```

The benchmark is not a fluke: performance on `small_descending` is consistently better after this change. I'm not 100% sure why this makes things faster, but my guess is that, to the compiler, `v.len() + 1` looks like it could in theory overflow, which forces it to be more conservative when optimizing the loop.

---

2bdbcb061806e8e03507ecdfa22493d9195bf25c
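To illustrate the idea, here is a minimal, hypothetical sketch of an insertion-sort loop whose bound stays within `v.len()` (this is not the actual `core::slice` code from the PR, just the general loop shape): because no `v.len() + 1` expression appears, the compiler never has to reason about a bound that might overflow.

```rust
// Illustrative insertion sort; hypothetical, not the stdlib implementation.
fn insertion_sort<T: Ord>(v: &mut [T]) {
    // The loop bound is `v.len()` itself, never `v.len() + 1`, so every
    // index the optimizer sees is provably within the slice.
    for i in 1..v.len() {
        // Shift v[i] left until it sits after its predecessor.
        let mut j = i;
        while j > 0 && v[j - 1] > v[j] {
            v.swap(j - 1, j);
            j -= 1;
        }
    }
}

fn main() {
    let mut a = [5, 3, 1, 4, 2];
    insertion_sort(&mut a);
    assert_eq!(a, [1, 2, 3, 4, 5]);
    println!("{:?}", a);
}
```

An empty or single-element slice is handled for free: `1..v.len()` is then an empty range and the outer loop body never runs.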