Compiler Design Overview

Computers are a balanced mix of software and hardware. Hardware is just a piece of mechanical device whose functions are controlled by compatible software. Hardware understands instructions in the form of electronic charge, which is the counterpart of binary language in software programming. Binary language has only two alphabets, 0 and 1. To instruct the hardware, codes must be written in binary format, which is simply a series of 1s and 0s. Writing such codes would be a difficult and cumbersome task for computer programmers, which is why we have compilers to write such codes for us.

Language Processing System

We have learnt that any computer system is made of hardware and software. The hardware understands a language which humans cannot understand. So we write programs in a high-level language, which is easier for us to understand and remember. These programs are then fed into a series of tools and OS components to get the desired code that can be used by the machine. This is known as the Language Processing System:

Source Code -> Pre-processor -> Pre-processed Code -> Compiler -> Target Assembly Code -> Assembler -> Relocatable Machine Code -> Linker (with library files / relocatable modules) -> Executable Machine Code -> Loader -> Memory
The high-level language is converted into binary language in various phases. A compiler is a program that converts high-level language to assembly language. Similarly, an assembler is a program that converts the assembly language to machine-level language.

Let us first understand how a program, using a C compiler, is executed on a host machine:

- The user writes a program in C language (high-level language).
- The C compiler compiles the program and translates it to an assembly program (low-level language).
- An assembler then translates the assembly program into machine code (object file).
- A linker tool is used to link all the parts of the program together for execution (executable machine code).
- A loader loads all of them into memory and then the program is executed.
Before diving straight into the concepts of compilers, we should understand a few other tools that work closely with compilers.

Preprocessor

A preprocessor, generally considered a part of the compiler, is a tool that produces input for compilers. It deals with macro-processing, augmentation, file inclusion, language extension, etc.

Interpreter

An interpreter, like a compiler, translates high-level language into low-level machine language. The difference lies in the way they read the source code or input. A compiler reads the whole source code at once, creates tokens, checks semantics, generates intermediate code, executes the whole program and may involve many passes. In contrast, an interpreter reads a statement from the input, converts it to an intermediate code, executes it, then takes the next statement in sequence. If an error occurs, an interpreter stops execution and reports it, whereas a compiler reads the whole program even if it encounters several errors.

Assembler

An assembler translates assembly language programs into machine code. The output of an assembler is called an object file, which contains a combination of machine instructions as well as the data required to place these instructions in memory.

Linker

A linker is a computer program that links and merges various object files together in order to make an executable file. All these files might have been compiled by separate assemblers. The major task of a linker is to search and locate referenced modules/routines in a program and to determine the memory location where these codes will be loaded, making the program instructions have absolute references.

Loader

A loader is a part of the operating system and is responsible for loading executable files into memory and executing them.
Cross-compiler

A compiler that runs on platform (A) and is capable of generating executable code for platform (B) is called a cross-compiler.

Source-to-source Compiler

A compiler that takes the source code of one programming language and translates it into the source code of another programming language is called a source-to-source compiler.

Compiler Architecture

A compiler can broadly be divided into two phases based on the way they compile.

Analysis Phase

Known as the front-end of the compiler, the analysis phase reads the source program, divides it into core parts and then checks for lexical, grammar and syntax errors. The analysis phase generates an intermediate representation of the source program and the symbol table, which are fed to the synthesis phase as input.

[Figure: source code enters the front-end (analysis), which produces an intermediate code representation that the back-end (synthesis) turns into machine code.]

Synthesis Phase

Known as the back-end of the compiler, the synthesis phase generates the target program with the help of the intermediate source code representation and the symbol table.
A compiler can have many phases and passes.

Pass: A pass refers to the traversal of a compiler through the entire program.

Phase: A phase of a compiler is a distinguishable stage, which takes input from the previous stage, processes it and yields output that can be used as input for the next stage. A pass can have more than one phase.

Phases of Compiler

The compilation process is a sequence of various phases. Each phase takes input from its previous stage, has its own representation of the source program, and feeds its output to the next phase of the compiler. Let us understand the phases of a compiler.

[Figure: the phases of a compiler, namely Lexical Analyzer, Syntax Analyzer, Semantic Analyzer, Intermediate Code Generator, Machine-Independent Code Optimizer, Code Generator and Machine-Dependent Code Optimizer, with the Symbol Table and Error Handler interacting with every phase.]
Lexical Analysis

The first phase of the compiler works as a text scanner. This phase scans the source code as a stream of characters and converts it into meaningful lexemes. The lexical analyzer represents these lexemes in the form of tokens as:

<token-name, attribute-value>

Syntax Analysis

The next phase is called syntax analysis or parsing. It takes the tokens produced by lexical analysis as input and generates a parse tree (or syntax tree). In this phase, token arrangements are checked against the source code grammar, i.e. the parser checks if the expression made by the tokens is syntactically correct.

Semantic Analysis

Semantic analysis checks whether the parse tree constructed follows the rules of the language. For example, it checks that values are assigned between compatible data types, and flags errors such as adding a string to an integer. The semantic analyzer also keeps track of identifiers, their types and expressions, and whether identifiers are declared before use. The semantic analyzer produces an annotated syntax tree as output.

Intermediate Code Generation

After semantic analysis, the compiler generates an intermediate code of the source code for the target machine. It represents a program for some abstract machine. It lies in between the high-level language and the machine language. This intermediate code should be generated in such a way that it is easy to translate into the target machine code.

Code Optimization

The next phase performs code optimization on the intermediate code. Optimization can be thought of as something that removes unnecessary code lines and arranges the sequence of statements in order to speed up program execution without wasting resources (CPU, memory).
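To make the intermediate code generation and optimization steps concrete, here is a minimal sketch. The tiny tuple-based expression format, the temporary names t1, t2, ..., and the constant-folding pass are all illustrative assumptions made for this example, not the output of any particular compiler. It lowers a small expression tree into three-address code, one common form of intermediate representation, and then applies a trivial optimization:

```python
from itertools import count

def lower(node, code, temps):
    """Emit three-address instructions for node; return the name holding its value."""
    if not isinstance(node, tuple):            # leaf: identifier or numeric constant
        return str(node)
    op, left, right = node
    l = lower(left, code, temps)
    r = lower(right, code, temps)
    t = f"t{next(temps)}"
    code.append(f"{t} = {l} {op} {r}")
    return t

def fold_constants(code):
    """A trivial optimization: evaluate instructions whose operands are both literals."""
    folded = []
    for line in code:
        dest, expr = line.split(" = ")
        parts = expr.split()
        if len(parts) == 3 and parts[0].lstrip("-").isdigit() and parts[2].lstrip("-").isdigit():
            folded.append(f"{dest} = {eval(expr)}")   # both operands are numeric literals
        else:
            folded.append(line)
    return folded

# Intermediate code for the statement: a = b + c * (2 * 5)
ast = ("+", "b", ("*", "c", ("*", 2, 5)))
code, temps = [], count(1)
code.append(f"a = {lower(ast, code, temps)}")
print("\n".join(code))
print("-- after constant folding --")
print("\n".join(fold_constants(code)))
```

Running the sketch prints the three-address instructions (t1 = 2 * 5, t2 = c * t1, t3 = b + t2, a = t3) and then the same code with the constant multiplication folded to t1 = 10, illustrating how an optimizer can remove work without changing the program's meaning.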
A string having no alphabets, i.e., a string of zero length, is known as an empty string and is denoted by ε (epsilon).

Special Symbols

A typical high-level language contains the following symbols:

Arithmetic Symbols: Addition (+), Subtraction (-), Modulo (%), Multiplication (*), Division (/)
Punctuation: Comma (,), Semicolon (;), Dot (.), Arrow (->)
Assignment: =
Special Assignment: +=, -=, *=, /=
Comparison: ==, !=, <, <=, >, >=
Preprocessor: #
Location Specifier: &
Logical: &, &&, |, ||, !
Shift Operator: >>, >>>, <<, <<<

Language

A language is considered as a finite set of strings over some finite set of alphabets. Computer languages are considered as finite sets, and mathematically set operations can be performed on them. Finite languages can be described by means of regular expressions.

Regular Expressions

The lexical analyzer needs to scan and identify only a finite set of valid strings/tokens/lexemes that belong to the language in hand. It searches for the pattern defined by the language rules.
Regular expressions have the capability to express finite languages by defining a pattern for finite strings of symbols. The grammar defined by regular expressions is known as regular grammar, and the language defined by regular grammar is known as a regular language.

Regular expression is an important notation for specifying patterns. Each pattern matches a set of strings, so regular expressions serve as names for sets of strings. Programming language tokens can be described by regular languages. The specification of regular expressions is an example of a recursive definition. Regular languages are easy to understand and have an efficient implementation.

There are a number of algebraic laws that are obeyed by regular expressions, which can be used to manipulate regular expressions into equivalent forms.

Operations

The various operations on languages are:

- Union of two languages L and M is written as L ∪ M = {s | s is in L or s is in M}
- Concatenation of two languages L and M is written as LM = {st | s is in L and t is in M}
- The Kleene closure of a language L is written as L* = zero or more occurrences of language L.

Notations

If r and s are regular expressions denoting the languages L(r) and L(s), then:

- Union: (r)|(s) is a regular expression denoting L(r) ∪ L(s)
- Concatenation: (r)(s) is a regular expression denoting L(r)L(s)
- Kleene closure: (r)* is a regular expression denoting (L(r))*
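These set operations are easy to see on small finite languages. The following is a minimal sketch (the toy languages L and M and the bounded approximation of the Kleene closure are assumptions made purely for illustration):

```python
# Minimal sketch: language operations on small finite sets of strings.
L = {"a", "b"}
M = {"0", "1"}

# Union: strings in L or in M
union = L | M                                   # {'a', 'b', '0', '1'}

# Concatenation: every s in L followed by every t in M
concat = {s + t for s in L for t in M}          # {'a0', 'a1', 'b0', 'b1'}

def kleene_up_to(lang, n):
    """Approximate L* by enumerating strings made of at most n pieces of lang.
    The true Kleene closure is infinite, so we bound it for illustration."""
    result = {""}                               # epsilon (zero occurrences)
    current = {""}
    for _ in range(n):
        current = {s + t for s in current for t in lang}
        result |= current
    return result

print(sorted(union))
print(sorted(concat))
print(sorted(kleene_up_to(L, 2)))               # '', 'a', 'b', 'aa', 'ab', 'ba', 'bb'
```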
Precedence and Associativity

- *, concatenation (.), and | (pipe sign) are left associative.
- * has the highest precedence.
- Concatenation (.) has the second highest precedence.
- | (pipe sign) has the lowest precedence of all.

Representing valid tokens of a language in regular expression

If x is a regular expression, then:

- x* means zero or more occurrences of x, i.e., it can generate {ε, x, xx, xxx, ...}
- x+ means one or more occurrences of x, i.e., it can generate {x, xx, xxx, ...} or x.x*
- x? means at most one occurrence of x, i.e., it can generate either {x} or {ε}
- [a-z] is all lower-case alphabets of the English language.
- [A-Z] is all upper-case alphabets of the English language.
- [0-9] is all natural digits used in mathematics.

Representing occurrence of symbols using regular expressions

letter = [a-z] or [A-Z]
digit = 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 or [0-9]
sign = [+ | -]

Representing language tokens using regular expressions

decimal = (sign)?(digit)+
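To see how such patterns drive a lexical analyzer, here is a minimal sketch. The token names, the keyword set, and the use of Python's re module are assumptions made for this example, not part of any particular compiler; the point is that the source text is broken into <token-name, attribute-value> pairs, with the longest match preferred and keywords given priority over identifiers:

```python
import re

# Illustrative token patterns built from the regular expressions above.
TOKEN_SPEC = [
    ("DECIMAL",    r"[+-]?[0-9]+\.[0-9]+"),   # try decimal before integer
    ("INTEGER",    r"[+-]?[0-9]+"),
    ("IDENTIFIER", r"[A-Za-z][A-Za-z0-9]*"),  # (letter)(letter | digit)*
    ("OPERATOR",   r"[+\-*/=]"),
    ("PUNCT",      r"[;,(){}]"),
    ("SKIP",       r"\s+"),
]
KEYWORDS = {"int", "if", "else", "while", "return"}   # assumed sample keyword set

MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (token-name, attribute-value) pairs for the given source string."""
    for match in MASTER.finditer(source):
        name, lexeme = match.lastgroup, match.group()
        if name == "SKIP":
            continue
        # Rule priority: a lexeme matching a reserved word is a keyword,
        # not a user-defined identifier.
        if name == "IDENTIFIER" and lexeme in KEYWORDS:
            name = "KEYWORD"
        yield (name, lexeme)

print(list(tokenize("int intvalue = 42;")))
# [('KEYWORD', 'int'), ('IDENTIFIER', 'intvalue'), ('OPERATOR', '='),
#  ('INTEGER', '42'), ('PUNCT', ';')]
```

Note how "intvalue" is matched as one identifier rather than the keyword "int" followed by "value", which is exactly the Longest Match Rule discussed later.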
The only problem left with the lexical analyzer is how to verify the validity of a regular expression used in specifying the patterns of keywords of a language. A well-accepted solution is to use finite automata for verification.

Finite Automata

Finite automata is a state machine that takes a string of symbols as input and changes its state accordingly. Finite automata is a recognizer for regular expressions. When a regular expression string is fed into a finite automaton, it changes its state for each literal. If the input string is successfully processed and the automaton reaches its final state, it is accepted, i.e., the string just fed is said to be a valid token of the language in hand.

The mathematical model of finite automata consists of:

- Finite set of states (Q)
- Finite set of input symbols (Σ)
- One start state (q0)
- Set of final states (qf)
- Transition function (δ)

The transition function (δ) maps the finite set of states (Q) and the finite set of input symbols (Σ) to a state, i.e., δ : Q × Σ → Q.
Finite Automata Construction

Let L(r) be a regular language recognized by some finite automata (FA).

- States: States of FA are represented by circles. State names are written inside circles.
- Start state: The state from where the automata starts is known as the start state. The start state has an arrow pointed towards it.
- Intermediate states: All intermediate states have at least two arrows; one pointing to and another pointing out from them.
- Final state: If the input string is successfully parsed, the automata is expected to be in this state. The final state is represented by double circles. It may have any odd number of arrows pointing to it and an even number of arrows pointing out from it; the number of odd arrows is one greater than even, i.e., odd = even + 1.
- Transition: The transition from one state to another happens when a desired symbol in the input is found. Upon transition, the automata can either move to the next state or stay in the same state. Movement from one state to another is shown as a directed arrow, where the arrow points to the destination state. If the automata stays in the same state, an arrow pointing from a state to itself is drawn.

Example

We assume an FA that accepts any binary value ending in digit 1.

FA = {Q(q0, qf), Σ(0, 1), q0, qf, δ}

[State diagram: start state q0 loops on 0 and moves to final state qf on 1; qf returns to q0 on 0 and stays in qf on 1.]
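A minimal sketch of simulating this automaton follows; the dictionary encoding of δ is an illustrative choice made for this example. A string is accepted if, after consuming all its symbols, the automaton is in the final state:

```python
# Minimal sketch of the example FA: binary strings ending in 1.
# States: q0 (start), qf (final). The transition function delta is a dict.
DELTA = {
    ("q0", "0"): "q0",
    ("q0", "1"): "qf",
    ("qf", "0"): "q0",
    ("qf", "1"): "qf",
}
START, FINAL = "q0", {"qf"}

def accepts(string):
    """Run the DFA over the input and report whether it ends in a final state."""
    state = START
    for symbol in string:
        if (state, symbol) not in DELTA:
            return False          # symbol not in the alphabet: reject
        state = DELTA[(state, symbol)]
    return state in FINAL

for s in ["101", "100", "011", "110"]:
    print(s, accepts(s))          # True exactly for strings ending in 1
```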
Longest Match Rule

When the lexical analyzer reads the source code, it scans the code letter by letter; when it encounters a whitespace, operator symbol, or special symbol, it decides that a word is completed.

For example:

int intvalue;

While scanning both lexemes up to 'int', the lexical analyzer cannot determine whether it is the keyword int or the initial part of the identifier intvalue.

The Longest Match Rule states that the lexeme scanned should be determined based on the longest match among all the tokens available.

The lexical analyzer also follows rule priority, where a reserved word, e.g., a keyword, of a language is given priority over user input. That is, if the lexical analyzer finds a lexeme that matches an existing reserved word, it is treated as a keyword rather than a user-defined identifier.

Compiler Design - Syntax Analysis

Syntax analysis or parsing is the second phase of a compiler. In this chapter, we shall learn the basic concepts used in the construction of a parser.
We have seen that a lexical analyzer can identify tokens with the help of regular expressions and pattern rules. But a lexical analyzer cannot check the syntax of a given sentence due to the limitations of regular expressions. Regular expressions cannot check balancing tokens, such as parentheses. Therefore, this phase uses context-free grammar (CFG), which is recognized by push-down automata.

CFG, on the other hand, is a superset of Regular Grammar, as depicted below:

[Figure: a Venn diagram showing Regular Grammar contained within Context-Free Grammar.]

It implies that every Regular Grammar is also context-free, but there exist some problems that are beyond the scope of Regular Grammar. CFG is a helpful tool for describing the syntax of programming languages.

Context-Free Grammar

In this section, we will first see the definition of context-free grammar and introduce terminologies used in parsing technology.

A context-free grammar has four components:

- A set of non-terminals (V). Non-terminals are syntactic variables that denote sets of strings. The non-terminals define sets of strings that help define the language generated by the grammar.
- A set of tokens, known as terminal symbols (Σ). Terminals are the basic symbols from which strings are formed.
- A set of productions (P). The productions of a grammar specify the manner in which the terminals and non-terminals can be combined to form strings. Each production consists of a non-terminal called the left side of the production, an arrow, and a sequence of tokens and/or non-terminals, called the right side of the production.
- One of the non-terminals is designated as the start symbol (S), from where the production begins.

The strings are derived from the start symbol by repeatedly replacing a non-terminal (initially the start symbol) by the right side of a production for that non-terminal.

Example

We take the problem of the palindrome language, which cannot be described by means of a Regular Expression. That is, L = {w | w = wR} is not a regular language. But it can be described by means of CFG, as illustrated below:

G = (V, Σ, P, S)

where, taking binary palindromes as an example, one possible choice of components is V = {S}, Σ = {0, 1}, start symbol S, and productions

S → 0S0 | 1S1 | 0 | 1 | ε

This grammar describes the palindrome language over {0, 1}, containing strings such as 1001, 11100111, and so on.
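As an illustrative sketch (the dictionary encoding of productions below is an assumption made for this example, not a standard representation), such a grammar can be written down and used to generate strings of the language:

```python
import random

# One possible CFG for binary palindromes, encoded as a dict of productions.
# Non-terminal: S (start symbol). The empty production list stands for epsilon.
GRAMMAR = {
    "S": [["0", "S", "0"], ["1", "S", "1"], ["0"], ["1"], []],
}

def generate(symbol="S"):
    """Randomly expand a non-terminal into a terminal string (here, a palindrome)."""
    if symbol not in GRAMMAR:            # terminal symbol: emit it as-is
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return "".join(generate(s) for s in production)

for _ in range(5):
    w = generate()
    print(repr(w), "palindrome?", w == w[::-1])   # always True
```

Every string produced this way reads the same forwards and backwards, which is exactly the property that no regular expression can enforce.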
Syntax Analyzers

A syntax analyzer or parser takes the input from a lexical analyzer in the form of token streams. The parser analyzes the source code (token stream) against the production rules to detect any errors in the code. The output of this phase is a parse tree.

[Figure: the lexical analyzer, driven by regular expressions and finite automata, passes a token stream to the syntax analyzer, driven by a context-free grammar, which produces a parse tree.]

This way, the parser accomplishes two tasks, i.e., parsing the code and looking for errors, and generating a parse tree as the output of the phase.

Parsers are expected to parse the whole code even if some errors exist in the program. Parsers use error recovery strategies, which we will learn later in this chapter.

Derivation

A derivation is basically a sequence of production rules, in order to get the input string. During parsing, we take two decisions for some sentential form of input:

- Deciding the non-terminal which is to be replaced.
- Deciding the production rule by which the non-terminal will be replaced.

To decide which non-terminal is to be replaced with a production rule, we can have two options.

Left-most Derivation

If the sentential form of an input is scanned and replaced from left to right, it is called left-most derivation. The sentential form derived by the left-most derivation is called the left-sentential form.
Right-most Derivation

If we scan and replace the input with production rules from right to left, it is known as right-most derivation. The sentential form derived from the right-most derivation is called the right-sentential form.

Example

Production rules:

E → E + E
E → E * E
E → id

Input string: id + id * id

The left-most derivation is:

E → E * E
E → E + E * E
E → id + E * E
E → id + id * E
E → id + id * id

Notice that the left-most side non-terminal is always processed first.

The right-most derivation is:

E → E + E
E → E + E * E
E → E + E * id
E → E + id * id
E → id + id * id

Parse Tree

A parse tree is a graphical depiction of a derivation. It is convenient to see how strings are derived from the start symbol. The start symbol of the derivation becomes the root of the parse tree. Let us see this by an example from the last topic.

We take the left-most derivation of a + b * c.

The left-most derivation is:

E → E * E
E → E + E * E
E → a + E * E
E → a + b * E
E → a + b * c
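As a minimal illustrative sketch (the tuple-based tree encoding is an assumption made for this example), the left-most derivation above corresponds to the following parse tree, built here by hand and printed with indentation so the nesting of productions is visible:

```python
# Parse tree for a + b * c under the derivation E => E * E => E + E * E => ...
# Each node is (label, children); leaves have no children.
def leaf(label):
    return (label, [])

def node(label, *children):
    return (label, list(children))

# Root E expands to E * E; the left E expands to E + E; leaves are a, b, c.
tree = node("E",
            node("E", node("E", leaf("a")), leaf("+"), node("E", leaf("b"))),
            leaf("*"),
            node("E", leaf("c")))

def show(t, indent=0):
    """Print the tree top-down; children are indented under their parent."""
    label, children = t
    print(" " * indent + label)
    for child in children:
        show(child, indent + 2)

show(tree)
```

Reading the leaves of this tree from left to right yields the original input string a + b * c, which is the defining property of a parse tree.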