
I described all of these bugs in the opening post:
1) The original code doesn't find all of the duplicates.
2) Some of the duplicates it does find are listed more than once.
3) It errors out on the last iteration.

I'm not sure what you mean by improving performance. I got 77 requests for both the original code and the updated code on Community Central (as opposed to 135 and 234). There would be a bit more network traffic, because the updated code actually has the API return information on all the duplicate files instead of only some of them, so 30% more may still be accurate. There's no way to reduce that traffic: allimages can't be told to skip files already known to be duplicates, or to skip files that have no duplicates at all. (That second point is a serious annoyance with the API, but it is what it is.) All the other information the API returns is necessary.
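For anyone following along, here is a minimal sketch of the query pattern I'm describing (the function name listDuplicates is mine, not the script's, and it assumes jQuery and the mediawiki.util module are loaded, as they are on-wiki). The point is that the generator walks every file and there's no parameter to restrict it to files that actually have duplicates:

```javascript
// Walk every file on the wiki via generator=allimages and ask for each
// file's duplicates in the same request. There is no way to tell the
// generator to skip files with no duplicates, so every file comes back.
function listDuplicates(params) {
    params = $.extend({
        action: 'query',
        generator: 'allimages',
        gailimit: 'max',
        prop: 'duplicatefiles',
        dflimit: 'max',
        format: 'json',
        continue: ''
    }, params);
    $.getJSON(mw.util.wikiScript('api'), params, function (data) {
        $.each(data.query.pages, function (id, page) {
            // Files without duplicates still appear in the result;
            // they simply have no duplicatefiles array attached.
            if (page.duplicatefiles) {
                console.log(page.title, page.duplicatefiles.length);
            }
        });
        if (data.continue) {
            // Merge the server's continuation block into the next
            // request and fetch the next batch of files.
            listDuplicates(data.continue);
        }
    });
}
listDuplicates({});
```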

As for batching the results, do you mean doing one large DOM operation at the end instead of one DOM operation for each file with duplicates? If so, I considered it, but it's not how the original code works, and I didn't want to change what the user sees as the script runs. The intent of the review/update is to fix bugs only. Plus, if something glitches the connection between the browser and the server, the user at least has a partial list instead of a blank screen.
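To make the trade-off concrete, this is roughly the incremental-output shape I kept (the container selector and the helper name reportDuplicates are invented for illustration, not taken from the script):

```javascript
// One DOM append per file with duplicates, as the batches arrive.
// If the connection drops partway through, everything appended so far
// is still on the page; a single write at the end would leave nothing.
var $list = $('<ul>').appendTo('#mw-content-text');
function reportDuplicates(page) {
    var names = page.duplicatefiles.map(function (dup) {
        return dup.name;
    });
    $('<li>')
        .text(page.title + ': ' + names.join(', '))
        .appendTo($list);
}
```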