Branch : Computer Science and Engineering
Subject : Compiler design
Unit : Non-Deterministic Finite Automata

Recognition of Tokens


Introduction: In this section we’ll study how to take the patterns for all the needed tokens and build a piece of code that examines the input string and finds a prefix that is a lexeme matching one of the patterns.

Transition Diagrams: Transition diagrams have a collection of nodes or circles, called states. Each state represents a condition that could occur during the process of scanning the input looking for a lexeme that matches one of several patterns. We may think of a state as summarizing all we need to know about what characters we have seen between the lexemeBegin pointer and the forward pointer (as in the situation of Fig. 3.3).

Edges are directed from one state of the transition diagram to another. Each edge is labeled by a symbol or set of symbols. If we are in some state s, and the next input symbol is a, we look for an edge out of state s labeled by a (and perhaps by other symbols as well). If we find such an edge, we advance the forward pointer and enter the state of the transition diagram to which that edge leads. We shall assume that all our transition diagrams are deterministic, meaning that there is never more than one edge out of a given state with a given symbol among its labels. Starting in Section 3.5, we shall relax the condition of determinism, making life much easier for the designer of a lexical analyzer, although trickier for the implementer. Some important conventions about transition diagrams are:

1. Certain states are said to be accepting, or final. These states indicate that a lexeme has been found, although the actual lexeme may not consist of all positions between the lexemeBegin and forward pointers. We always indicate an accepting state by a double circle, and if there is an action to be taken — typically returning a token and an attribute value to the parser — we shall attach that action to the accepting state.

2. In addition, if it is necessary to retract the forward pointer one position (i.e., the lexeme does not include the symbol that got us to the accepting state), then we shall additionally place a * near that accepting state. In our example, it is never necessary to retract forward by more than one position, but if it were, we could attach any number of *'s to the accepting state.

3. One state is designated the start state, or initial state; it is indicated by an edge, labeled "start," entering from nowhere. The transition diagram always begins in the start state before any input symbols have been read.

Example: Figure 3.13 is a transition diagram that recognizes the lexemes matching the token relop. We begin in state 0, the start state. If we see < as the first input symbol, then among the lexemes that match the pattern for relop we can only be looking at <, <>, or <=. We therefore go to state 1, and look at the next character. If it is =, then we recognize lexeme <=, enter state 2, and return the token relop with attribute LE, the symbolic constant representing this particular comparison operator. If in state 1 the next character is >, then instead we have lexeme <>, and enter state 3 to return an indication that the not-equals operator has been found. On any other character, the lexeme is <, and we enter state 4 to return that information. Note, however, that state 4 has a * to indicate that we must retract the input one position.

On the other hand, if in state 0 the first character we see is =, then this one character must be the lexeme. We immediately return that fact from state 5.

The remaining possibility is that the first character is >. Then, we must enter state 6 and decide, on the basis of the next character, whether the lexeme is >= (if we next see the = sign), or just > (on any other character). Note that if, in state 0, we see any character besides <, =, or >, we cannot possibly be seeing a relop lexeme, so this transition diagram will not be used.
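To make the conventions above concrete, here is a minimal C sketch that simulates the relop diagram of Fig. 3.13 as a switch on a state variable. It is only an illustration: the token and attribute codes (RELOP, LT, LE, and so on) and the helpers nextChar(), retract(), and fail() are assumed names that do not appear in the text, and fail() is assumed to hand control to another diagram rather than return here.

enum { RELOP = 1 };                      /* token name (illustrative code) */
enum { LT, LE, EQ, NE, GT, GE };         /* relop attributes (illustrative codes) */

typedef struct { int name; int attribute; } Token;

/* Assumed helpers that manage the forward pointer. */
extern char nextChar(void);   /* read the next input character, advancing forward */
extern void retract(void);    /* move forward back one position (a starred state) */
extern void fail(void);       /* no relop here: assumed not to return to this diagram */

/* Simulate the transition diagram of Fig. 3.13 (states 0-8). */
Token getRelop(void) {
    Token t = { RELOP, 0 };
    int state = 0;                       /* start state */
    while (1) {
        char c;
        switch (state) {
        case 0:                          /* nothing seen yet */
            c = nextChar();
            if (c == '<')      state = 1;
            else if (c == '=') state = 5;
            else if (c == '>') state = 6;
            else fail();                 /* first character is not <, =, or > */
            break;
        case 1:                          /* seen '<' */
            c = nextChar();
            if (c == '=')      { t.attribute = LE; return t; }  /* state 2 */
            else if (c == '>') { t.attribute = NE; return t; }  /* state 3 */
            else { retract();    t.attribute = LT; return t; }  /* state 4, starred */
        case 5:                          /* seen '=': the lexeme is = itself */
            t.attribute = EQ; return t;
        case 6:                          /* seen '>' */
            c = nextChar();
            if (c == '=')      { t.attribute = GE; return t; }  /* state 7 */
            else { retract();    t.attribute = GT; return t; }  /* state 8, starred */
        }
    }
}

The calls to retract() correspond to the starred accepting states: the character that carried us into them is not part of the lexeme, so the forward pointer must step back one position before the token is returned.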

Recognition of Reserved Words and Identifiers: Recognizing keywords and identifiers presents a problem. Usually, keywords like if or then are reserved (as they are in our running example), so they are not identifiers even though they look like identifiers. Thus, although we typically use a transition diagram like that of Fig. 3.14 to search for identifier lexemes, this diagram will also recognize the keywords if, then, and else of our running example.
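One common way to resolve this conflict is to install the reserved words in the symbol table before scanning begins: when the identifier diagram reaches its accepting state, the lexer looks the lexeme up and returns the keyword's token if the entry is already there, or installs the lexeme as an identifier and returns id otherwise. The C sketch below illustrates the idea; the fixed-size table layout and the helper names installID() and getToken() are assumptions made for the example, not definitions from the text.

#include <string.h>

enum { ID = 200, IF, THEN, ELSE };       /* illustrative token codes */

struct Entry { char lexeme[64]; int token; };

/* The reserved words are installed before any input is read, so a lookup
   for "if", "then", or "else" always finds the keyword entry first. */
static struct Entry symtable[1024] = {
    { "if", IF }, { "then", THEN }, { "else", ELSE }
};
static int nEntries = 3;

/* Return the table index for lexeme, installing it as an identifier if absent. */
int installID(const char *lexeme) {
    for (int i = 0; i < nEntries; i++)
        if (strcmp(symtable[i].lexeme, lexeme) == 0)
            return i;
    strncpy(symtable[nEntries].lexeme, lexeme,
            sizeof symtable[nEntries].lexeme - 1);
    symtable[nEntries].token = ID;
    return nEntries++;
}

/* The token to return for that entry: a keyword code or ID. */
int getToken(int index) { return symtable[index].token; }

With this arrangement the accepting state of the identifier diagram can simply return getToken(installID(lexeme)) as the token name, so keywords and ordinary identifiers are distinguished by the table rather than by separate transition diagrams.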
