Manual Deallocation Requests


Introduction: This section considers manual memory management, where the programmer must explicitly arrange for the deallocation of data, as in C and C++. Ideally, any storage that will no longer be accessed should be deleted. Conversely, any storage that may still be referenced must not be deleted. Unfortunately, it is hard to enforce either of these properties. In addition to considering the difficulties with manual deallocation, we describe some of the techniques programmers use to cope with them.

Problems with Manual Deallocation: Manual memory management is error-prone. The common mistakes take two forms: failing ever to delete data that can never be referenced again is a memory-leak error, and referencing deleted data is a dangling-pointer-dereference error.

It is hard for programmers to tell if a program will never refer to some storage in the future, so the first common mistake is not deleting storage that will never be referenced. Note that although memory leaks may slow down the execution of a program due to increased memory usage, they do not affect program correctness, as long as the machine does not run out of memory. Many programs can tolerate memory leaks, especially if the leakage is slow. However, for long-running programs, and especially nonstop programs like operating systems or server code, it is critical that they not have leaks.
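
To make the first mistake concrete, here is a minimal, hypothetical C sketch (the function and buffer contents are invented for illustration):

    #include <stdlib.h>
    #include <string.h>

    void process_request(void)
    {
        char *buf = malloc(64);
        if (buf == NULL)
            return;
        strcpy(buf, "scratch data for this request");
        /* ... use buf ... */
        /* Missing free(buf): the pointer goes out of scope, the block
           becomes unreachable, and each call leaks 64 bytes.  A slow
           leak like this may be harmless in a short-lived program but
           will eventually exhaust memory in a long-running server.  */
    }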

Automatic garbage collection gets rid of memory leaks by deallocating all the garbage. Even with automatic garbage collection, a program may still use more memory than necessary. A programmer may know that an object will never be referenced, even though references to that object exist somewhere. In that case, the programmer must deliberately remove references to objects that will never be referenced, so the objects can be deallocated automatically.

Being overly zealous about deleting objects can lead to even worse problems than memory leaks. The second common mistake is to delete some storage and then try to refer to the data in the deallocated storage. Pointers to storage that has been deallocated are known as dangling pointers. Once the freed storage has been reallocated to a new variable, any read, write, or deallocation via the dangling pointer can produce seemingly random effects. We refer to any operation, such as read, write, or deallocate, that follows a pointer and tries to use the object it points to, as dereferencing the pointer.
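
To see how a dangling pointer arises, consider this hypothetical C sketch:

    #include <stdlib.h>

    void dangling_demo(void)
    {
        int *p = malloc(sizeof *p);
        if (p == NULL)
            return;
        *p = 42;
        free(p);                        /* the storage is deallocated */

        int *q = malloc(sizeof *q);     /* may be handed the same block */
        if (q != NULL)
            *q = 7;

        /* p is now a dangling pointer.  Dereferencing it, e.g. reading
           *p, is undefined behavior: it might yield 42, 7, garbage, or
           crash, depending on how the allocator reused the storage.  */

        free(q);
    }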

A related form of programming error is to access an illegal address. Common examples of such errors include dereferencing null pointers and accessing an out-of-bounds array element. It is better for such errors to be detected than to have the program silently corrupt the results. In fact, many security violations exploit programming errors of this type, where certain program inputs allow unintended access to data, leading to a "hacker" taking control of the program and machine. One antidote is to have the compiler insert checks with every access, to make sure it is within bounds. The compiler's optimizer can discover and remove those checks that are not really necessary because the optimizer can deduce that the access must be within bounds.
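
As an illustration, the guard below is the kind of check a compiler might insert before an array access; checked_get is a hypothetical helper invented for this sketch, not part of any standard library:

    #include <stdio.h>
    #include <stdlib.h>

    /* The kind of guard a compiler could insert before each array
       access.  An optimizer that proves i < n on every call may
       remove the check as redundant.                               */
    int checked_get(const int *a, size_t n, size_t i)
    {
        if (i >= n) {                   /* out of bounds: fail loudly */
            fprintf(stderr, "index %zu out of bounds (length %zu)\n", i, n);
            abort();
        }
        return a[i];
    }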

Programming Conventions and Tools: We now present a few of the most popular conventions and tools that have been developed to help programmers cope with the complexity of managing memory:

  • Object ownership is useful when an object's lifetime can be statically reasoned about. The idea is to associate an owner with each object at all times. The owner is a pointer to that object, presumably belonging to some function invocation. The owner (i.e., its function) is responsible for either deleting the object or passing it to another owner. It is possible to have other, nonowning pointers to the same object; these pointers can be overwritten at any time, and no deletes should ever be applied through them. This convention eliminates memory leaks, as well as attempts to delete the same object twice. However, it does not solve the dangling-pointer problem, because it is still possible to follow a nonowning pointer to an object that has been deleted. (A C sketch of this convention appears after this list.)
  • Reference counting is useful when an object's lifetime must be determined dynamically. The idea is to associate a count with each dynamically allocated object. Whenever a reference to the object is created, we increment the reference count; whenever a reference is removed, we decrement it. When the count reaches zero, the object can no longer be referenced and can therefore be deleted. This technique, however, does not catch useless circular data structures, where a collection of objects cannot be accessed but their reference counts remain nonzero because they refer to each other. For an illustration of this problem, see Example 7.11. Reference counting does eradicate all dangling-pointer references, since there are no outstanding references to any deleted objects. It is expensive, however, because it imposes an overhead on every operation that stores a pointer. (A minimal reference-counting sketch appears after this list.)
  • Region-based allocation is useful for collections of objects whose lifetimes are tied to specific phases in a computation. When objects are created to be used only within some step of a computation, we can allocate all such objects in the same region. We then delete the entire region once that computation step completes. This technique has limited applicability, but it is very efficient wherever it can be used: instead of deallocating objects one at a time, it deletes all objects in the region in a wholesale fashion. (An arena sketch appears after this list.)
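
To illustrate the ownership convention, here is a minimal hypothetical C sketch; make_name and print_name are invented names. Exactly one pointer owns the object, and only the owner calls free:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    char *make_name(const char *s)      /* returns an OWNING pointer */
    {
        char *p = malloc(strlen(s) + 1);
        if (p != NULL)
            strcpy(p, s);
        return p;                       /* the caller becomes the owner */
    }

    void print_name(const char *name)   /* receives a NONOWNING pointer */
    {
        printf("%s\n", name);           /* must never free(name) here */
    }

    int main(void)
    {
        char *name = make_name("lexer");  /* main owns the object */
        if (name == NULL)
            return 1;
        print_name(name);               /* lend a nonowning reference */
        free(name);                     /* the owner deletes it exactly once */
        return 0;
    }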
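
A minimal reference-counting sketch in C might look like the following (RcObject, rc_retain, and rc_release are hypothetical names). Note the overhead mentioned above: every piece of code that stores or drops a pointer must adjust the count, and two objects that retain each other keep their counts above zero forever, so cycles are never reclaimed:

    #include <stdlib.h>

    typedef struct {
        int refcount;
        int payload;
    } RcObject;

    RcObject *rc_new(int payload)
    {
        RcObject *o = malloc(sizeof *o);
        if (o != NULL) {
            o->refcount = 1;            /* the creating reference */
            o->payload  = payload;
        }
        return o;
    }

    void rc_retain(RcObject *o)         /* a new reference is stored */
    {
        o->refcount++;
    }

    void rc_release(RcObject *o)        /* a reference is removed */
    {
        if (--o->refcount == 0)
            free(o);                    /* no references remain */
    }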
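
Finally, a hypothetical bump-pointer region (arena) in C, with invented names and alignment ignored for brevity. Objects are never freed individually; the whole region is released at once when the computation phase ends:

    #include <stdlib.h>

    typedef struct {
        char  *base;
        size_t used;
        size_t capacity;
    } Region;

    int region_init(Region *r, size_t capacity)
    {
        r->base = malloc(capacity);
        r->used = 0;
        r->capacity = capacity;
        return r->base != NULL;
    }

    void *region_alloc(Region *r, size_t n)
    {
        if (r->used + n > r->capacity)
            return NULL;                /* region exhausted */
        void *p = r->base + r->used;
        r->used += n;                   /* bump the allocation pointer */
        return p;
    }

    void region_free(Region *r)         /* wholesale deallocation */
    {
        free(r->base);
        r->base = NULL;
        r->used = r->capacity = 0;
    }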
