Loop Perforation

by Oliver Richardson, Alexa VanHattum, Gregory Yauney November 13, 2019

Have you ever been frustrated that your code takes too long to run? Do you have a sneaking suspicion that most of the time is spent in loops? Have you ever considered just running fewer loops by having your compiler mangle your code to skip arbitrary loop iterations?

Welcome to loop perforation, an idea that sounds so ludicrous that we can barely believe it actually works at all!

The basic premise is common across the field of approximate computing: many applications spend a lot of time and energy getting results that are exactly right, when they could happily get away with results that are mostly right. If a programmer is able to define what exactly "mostly right" means for their particular application, then approximate computing techniques allow them to explore trading off cost and correctness. The original loop perforation paper, "Managing Performance vs. Accuracy Trade-offs with Loop Perforation", from ESEC/FSE'11, takes this idea to a beautifully flippant extreme: look at some loops, and replace something like i++ with i += 2 or even i += 21. Skipping loops like this almost definitely makes your code both faster and more energy efficient (assuming it still runs!).
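To make the transformation concrete, here is a small C sketch (illustrative only, not from the paper or our implementation) of what a perforated loop computes. A reduction like a mean is a typical perforation target: skipping iterations still samples the data, and dividing by the number of iterations actually executed keeps the result a meaningful estimate rather than a scaled-down one.

```c
/* Exact mean of n floats: every iteration runs. */
double mean_exact(const float *a, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum / n;
}

/* The same loop after perforation at a given rate: the compiler has
 * rewritten i++ as i += rate, so only every rate-th element is read.
 * We divide by the count of executed iterations, not n, so the
 * answer stays an estimate of the mean rather than shrinking. */
double mean_perforated(const float *a, int n, int rate) {
    double sum = 0.0;
    int executed = 0;
    for (int i = 0; i < n; i += rate) {
        sum += a[i];
        executed++;
    }
    return sum / executed;
}
```

On smooth data the perforated mean lands close to the exact one while touching only 1/rate of the array; on adversarial data it can be arbitrarily wrong, which is exactly why the programmer has to define "mostly right" for their application.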

For this project, we set out to implement simple loop perforation as an LLVM pass, with the goal of exploring more deeply what it means to actually accept worse accuracy from our programs.

What We Did

LLVM is an industrial-strength compiler that structures optimizations as a series of passes that both act on and produce a human-readable intermediate representation.

We implemented two new LLVM passes:

- LoopCountPass, a function pass that identifies and saves which program loops are candidates for perforation.
- LoopPerforationPass, a loop pass that perforates loops at a specified rate.
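As a rough sketch of what a perforation pass can look like under LLVM's legacy pass manager (this is an illustrative reconstruction, not our actual source; PerforationRate stands in for the rate the driver would supply): the pass finds a loop's canonical induction variable and rewrites its step constant from 1 to the perforation rate.

```cpp
#include "llvm/Pass.h"
#include "llvm/Analysis/LoopPass.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

namespace {
// Hypothetical: in practice this would come from the driver, not a constant.
static const int PerforationRate = 2;

struct LoopPerforationPass : public LoopPass {
  static char ID;
  LoopPerforationPass() : LoopPass(ID) {}

  bool runOnLoop(Loop *L, LPPassManager &) override {
    // Only touch loops with a canonical induction variable
    // (starts at 0, steps by 1) -- i.e., the i++ case.
    PHINode *IV = L->getCanonicalInductionVariable();
    if (!IV)
      return false;

    // Find the increment: an add of the IV with constant step 1,
    // and rewrite the step to the perforation rate.
    for (auto *U : IV->users()) {
      auto *Add = dyn_cast<BinaryOperator>(U);
      if (!Add || Add->getOpcode() != Instruction::Add)
        continue;
      for (unsigned i = 0; i < Add->getNumOperands(); ++i) {
        auto *Step = dyn_cast<ConstantInt>(Add->getOperand(i));
        if (Step && Step->isOne())
          Add->setOperand(i, ConstantInt::get(Step->getType(),
                                              PerforationRate));
      }
      return true; // IR was modified
    }
    return false;
  }
};
} // namespace

char LoopPerforationPass::ID = 0;
static RegisterPass<LoopPerforationPass>
    X("loop-perforate", "Perforate canonical loops");
```

A real implementation needs more care than this sketch: not every loop has a canonical induction variable, and skipping iterations of a loop whose body has side effects beyond a reduction can change behavior in ways no error metric forgives.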

Both passes work in conjunction with additional infrastructure:

An external driver program, written…