Document Type

Honors Project

Abstract

Over the last several decades, two important shifts have taken place in the computing world. First, processor speeds have vastly outstripped memory speeds, making memory accesses by far the most expensive operations that a typical symbolic program performs. Second, dynamically compiled languages such as Java and C# have become popular, placing new pressure on compiler writers to create effective systems for run-time code generation.

This paper addresses the need created by lagging memory access speeds in the context of dynamically compiled systems. In such systems, memory access optimization is important for the performance of the resulting program, but the compilation time required by most traditional memory access optimizations is prohibitively high for dynamic compilation. We present a new analysis, memory dependence analysis, which amortizes the cost of memory access analysis to a level acceptable for dynamic compilation. We also present two memory access optimizations based on this new analysis, along with empirical evidence that this approach yields significantly improved compilation times without significant loss in resulting code quality.


© Copyright is owned by the author of this document