#-h- lextut.doc 19548 ascii 05Jan84 08:12:04 .pl 64 .m1 2 .m2 3 .m3 3 .m4 3 .po 10 .rm 62 .bp 1 .in 0 .he ^lextut(tutorial)^%^lextut(tutorial)^ .fo ^^- # -^^ .in 5 .sp .ne 2 .fi .ti -5 NAME .br lextut - explains how to use lex .sp .ne 2 .fi .ti -5 SYNOPSIS .br .nf (see below) .sp .ne 2 .fi .ti -5 DESCRIPTION .br .ne 6 .sp 2 .bd .ul 1. Introduction. .ne 3 .sp .ti +4 Lex generates a program from its input files to perform simple lexical analysis of text. The input files to lex (standard input default) contain regular expressions which the generated program will search the text for. The input files also contain actions (ratfor statements) which will be executed when their corresponding regular expression is matched. .ne 3 .sp .ti +4 The output from lex is a ratfor program defining the integer function lexscan. When the generated program containing lexscan is compiled, linked, and executed, it will search its input text for strings matching the regular expressions, and execute the corresponding action. Text that is not matched by any expression is simply copied to the output. .ne 3 .sp .ti +4 Here is a diagram of the situation: .sp .nf .in +4
regular            +---------+
expressions, ----> |   lex   | ----> lexscan
actions            +---------+             :
           .................................:
           +---------+
text ----> | lexscan | ----> output
           +---------+
.sp .fi .in -4 .ne 6 .sp 2 .bd .ul 2. Simple rules. .ne 3 .sp .ti +4 As a trivial example, here is a lex input file which will generate a program to remove blanks and tabs at the ends of lines: .sp .nf .in +4
~~
[ @t]+$     # (no action)
.sp .fi .in -4 The input file contains a "~~" delimiter to mark the beginning of the rules, and one rule. The rule contains a regular expression which matches one or more instances of the characters blank or tab (written @t) just prior to the end of a line. The brackets indicate the character class made of blank and tab, the + indicates "one or more", and the $ indicates "end of line". No action is specified, so the program generated by lex, called lexscan, will ignore the matched characters. All other characters will be copied from input to output - that is the default action of lex. .ne 3 .sp .ti +4 To change any remaining string of blanks or tabs to a single blank, add another rule: .sp .nf .in +4
~~
[ @t]+$     # (no action)
[ @t]+      call putc( ' ' )
.sp .fi .in -4 The program generated by this input file will scan for both rules at once, observing at the termination of the string of blanks or tabs whether or not there is a NEWLINE character, and then executing the desired action. The first rule matches all strings of blanks or tabs at the end of lines, and the second rule matches all remaining strings of blanks or tabs. .ne 3 .sp .ti +4 As a slightly more useful example, suppose you want to change a number of words from British to American spelling. Lex rules such as: .sp .nf .in +4
colour      call putlin( "color", STDOUT )
mechanise   call putlin( "mechanize", STDOUT )
petrol      call putlin( "gas", STDOUT )
.sp .fi .in -4 would be a start. These rules are not quite enough, since the word .ul petroleum would become .ul gaseum. To handle cases like that correctly, you would need more complicated expressions. .ne 6 .sp 2 .bd .ul 3. More complicated expressions. .ne 3 .sp .ti +4 Regular expressions in lex are similar to the regular expressions in ch, find, and ed. There is a common set of operators which are exactly the same in all four programs. These are documented in the tutorial on regular expressions.
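.ne 3 .sp .ti +4 For instance, the following rule (a made-up illustration, not part of the original tutorial) uses only operators from that common set - a character class and the closure operator * - so its expression would mean the same thing in any of the four programs. It replaces an x, followed by any number of digits, followed by a y, with a single z: .sp .nf .in +4
x[0-9]*y    call putc( 'z' )
.sp .fi .in -4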
In addition, lex has a number of new operators which the other programs don't know about. The operators which are unique to lex are: .sp .nf .in +4
"" () + | \ {} / <>
.sp .fi .in -4 The quotation mark operator, ", is used to quote whole strings just as the @ operator quotes a single character. Whatever is contained between a pair of quotes is just plain text. Thus .sp .nf .in +4
xyz"[12]"
.sp .fi .in -4 matches the string xyz[12], and not xyz1 or xyz2; since the []'s are inside quotes, they are interpreted as simple characters instead of character class delimiters. Another use of quotes is to get a blank into an expression; normally a blank would indicate the end of the rule, but if it is inside quotes, it is just another character. .ne 3 .sp .ti +4 Parentheses are used for grouping, just like in math - they indicate the order of evaluation. They are very useful in conjunction with the other operators. For example, without parentheses, the closure operator * can only apply to single characters and character classes. With parentheses, you can apply it to an entire string, like this: .sp .nf .in +4
(foo)*
.sp .fi .in -4 This expression will match 0 or more instances of the string foo. .ne 3 .sp .ti +4 The plus operator, +, is called "positive closure". It means "1 or more of the previous", just like * means "0 or more". We saw an example of + at the beginning of this document, in the expression to match 1 or more blanks or tabs at the end of lines. .ne 3 .sp .ti +4 The alternation operator, |, is used to indicate that either one of the alternatives can be matched. For example, to match any one of the strings foo, bar, or bletch, you could say: .sp .nf .in +4
foo|bar|bletch
.sp .fi .in -4 Parentheses are not needed around the words because alternation has a lower precedence than concatenation. Precedence is discussed in greater detail below. .ne 3 .sp .ti +4 The optional operator, \, matches an optionally present expression. For example, .sp .nf .in +4
[-+]\[0-9]+
.sp .fi .in -4 matches a string of digits preceded by an optional plus or minus sign. If the sign is present, it is included in the matched text, but if it is not present, the pattern will still successfully match a string of digits. .ne 3 .sp .ti +4 The curly braces, {}, have two different meanings depending on whether the string they enclose is a name or numbers. Curly braces with a name inside indicate that the name is a definition which should be expanded. For example, .sp .nf .in +4
{DIGIT}
.sp .fi .in -4 would look for a predefined string named DIGIT and insert it at that point in the expression. The definitions are given in section 1 of the lex input, which is described below. .ne 3 .sp .ti +4 Curly braces with numbers inside indicate a certain number of iterations of the previous - it is a generalization of the * and + operators. There are three different forms: .sp .nf .in +4
a{2,5}
.sp .fi .in -4 would match 2 to 5 occurrences of a, .sp .nf .in +4
a{2,}
.sp .fi .in -4 would match 2 or more occurrences, and .sp .nf .in +4
a{2}
.sp .fi .in -4 would match exactly 2 occurrences. .ne 3 .sp .ti +4 The / operator is used to indicate trailing context. The expression .sp .nf .in +4
ab/cd
.sp .fi .in -4 will match the string "ab", but only if it is followed by the string "cd". Note that the trailing context part - "cd" in this example - is NOT part of the matched string. "ab" is the matched string, and "cd" is pushed back onto the input stream to be matched later.
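.ne 3 .sp .ti +4 As an illustration, here is a hypothetical rule (not from the original tutorial) which deletes a string of digits, but only when the digits are immediately followed by a percent sign. Because the quoted "%" is trailing context, it is not part of the match; it stays on the input and, being matched by no rule, is copied to the output by the default action: .sp .nf .in +4
[0-9]+/"%"      # (no action - the digits are discarded)
.sp .fi .in -4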
.ne 3 .sp .ti +4 The $ operator, which matches the ends of lines, is a special case of the / operator. Thus .sp .nf .in +4
ab$
.sp .fi .in -4 is the same as .sp .nf .in +4
ab/@n
.sp .fi .in -4 .ne 3 .sp .ti +4 Left context is handled in lex by the <> operator, called a start condition. A start condition is declared in the definitions section. Then, if a rule is only to be matched when the lex interpreter is in start condition x, the rule should be prefixed by .sp .nf .in +4
<x>
.sp .fi .in -4 Since start conditions are a little hard to understand, they are covered in more detail in the section on left context. .ne 3 .sp .ti +4 The lex operators with the lowest precedence are the alternation (|) and trailing context (/) operators. Just above these two operators in precedence is concatenation, which is the implicit operator that makes "xy" mean "match an x and then match a y". The unary operators, '*', '+', '\', and '{}' have equal precedence, higher than that of concatenation, and associate from left to right. Operators such as '?' and '[...]' are identical to simple characters from a syntactic point of view, and as such have no precedence associated with them. Note that meta-characters (except '-' and ']') lose their special meanings inside '[...]'s and need not be escaped. Parentheses and quoted strings have the highest precedence. Here are some examples of how precedence works: .sp .nf .in +4
ab{3}|cd*
.sp .fi .in -4 matches .ul either an a followed by three b's, .ul or a c followed by zero or more d's. .sp .nf .in +4
a(bc)+
.sp .fi .in -4 matches an a followed by one or more occurrences of the string "bc". .sp .nf .in +4
a{4}{6,}
.sp .fi .in -4 will match 6 or more groups of 4 a's (24 a's, 28 a's, 32 a's, ...). .ne 6 .sp 2 .bd .ul 4. More complicated actions. .ne 3 .sp .ti +4 So far, all we have seen as lex actions are calls to putc and putlin, and the null action. Actually, a lex action can be any ratfor statement, and, if enclosed in curly braces, it can be a group of ratfor statements. .ne 3 .sp .ti +4 One thing that is useful for an action to do is to pass a value back to the main program. This is done by returning it as the function value. For instance, if the lexical scanner generated by lex is being used as the front end of a compiler, then for each token recognized the action might be "return ( TOKENID )". If more than one value needs to be passed back, the others can be placed in common variables declared in the definitions section of the lex input. There is a section further on describing this. .ne 3 .sp .ti +4 There are a number of special routines available to actions. ECHO is a macro which simply writes the current text to STDOUT. It is the default action for characters not otherwise matched. You also might want to call it yourself. For instance, if you wanted to copy a file but also double all occurrences of the word "rabbit", you might use the rule: .sp .nf .in +4
rabbit      { ECHO
              ECHO }
.sp .fi .in -4 Rabbit will be echoed twice, and everything else will be echoed once by default. .ne 3 .sp .ti +4 Lexgtext is a subroutine which gets the matched text and stores it into a buffer you supply. This is probably the most useful of the special actions.
As an example of its use, here is a lex program which turns alphabetic strings into all lower case: .sp .nf .in +4
character buf(MAXLINE)
~~
[A-Z]+      { call lexgtext( buf, MAXLINE )
              call fold( buf )
              call putlin( buf, STDOUT ) }
.sp .fi .in -4 .ne 3 .sp .ti +4 Lexmore is a subroutine which can be called to indicate that the next input expression recognized is to be tacked on to the end of the current text, instead of replacing it. .ne 3 .sp .ti +4 Lexless is a subroutine which can be called to indicate that not all the characters matched by the currently successful expression are wanted right now. Its argument indicates the number of characters to be retained. .ne 3 .sp .ti +4 Lexreject means "go to the next alternative". It is a subroutine which causes whatever rule was second choice after the current rule to be executed instead. .ne 3 .sp .ti +4 BEGIN is a macro which tells lex to enter start condition "name". Until the next BEGIN action is executed, rules with the start condition "name" will be active. Rules with other start conditions will be inactive. Rules with no start conditions at all are always active. To go back to the normal state where only the rules with no start conditions are active, do a "BEGIN(0)". The use of this action is explained more fully in the section on left context. .ne 6 .sp 2 .bd .ul 5. More general input format. .ne 3 .sp .ti +4 Up to now, we have used only the rules section of the input file. There are actually two more sections, delimited by "~~", which we haven't seen used yet. The general form of a lex input file is: .sp .nf .in +4
(definitions)
~~
(rules)
~~
(user routines)
.sp .fi .in -4 Any or all of the three sections may be empty, and whenever an end of file is encountered, it is assumed that all subsequent sections are to be empty. Thus, the shortest legal lex input is an empty file, and the program generated from this simply copies its input to its output. .ne 6 .sp 2 .bd .ul 6. Definitions section. .ne 3 .sp .ti +4 The definitions section can contain three types of definitions: name definitions, ratfor definitions, and start condition definitions. Name definitions are similar to the ratfor define statement. They let you set up a shorthand name for a long or frequently-used expression. The format of a name definition is simply .sp .nf .in +4
(name) (translation)
.sp .fi .in -4 For example, .sp .nf .in +4
DIGIT       [0-9]
LETTER      [a-zA-Z]
.sp .fi .in -4 To use a name definition, you must put the name inside curly braces. Here is an example of a name definition which uses the two previously defined names: .sp .nf .in +4
IDENT       {LETTER}({LETTER}|{DIGIT})*
.sp .fi .in -4 Names may be made up of any printable characters, except that they must not begin with a digit. Name definitions must start in column 1. Note that name definitions act as though they are surrounded by parentheses. Thus .sp .nf .in +4
EXAMPLE     smegma
~~
{EXAMPLE}+
.sp .fi .in -4 matches one or more occurrences of the string "smegma", and not the string "smegm" followed by one or more a's. .ne 3 .sp .ti +4 Ratfor definitions are simply any ratfor declarations which the ratfor actions in the rules section require. You can enter ratfor definitions in either of two forms. The normal format is: .sp .nf .in +4
(whitespace) (code)
.sp .fi .in -4 That is, anything that does not start in column 1 is assumed to be a ratfor definition.
Ratfor definitions which must start in column 1 can be entered in the following form: .sp .nf .in +4
~{
(code)
~}
.sp .fi .in -4 As an example, suppose you wanted to count how many times the string "feeblevetzer" occurs in a file: .sp .nf .in +4
    integer count
    count = 0
~~
feeblevetzer    count = count + 1
.sp .fi .in -4 would do the trick. .ne 3 .sp .ti +4 Start condition definitions have the form .sp .nf .in +4
~S (name1) (name2) ...
.sp .fi .in -4 Start conditions are referenced in the rules section by beginning a regular expression with "<name>", and by the special action "BEGIN(name)". They specify that the rule they are prefixed to is active only at certain times. Since start conditions are a little hard to understand, they are explained in more detail in the section on left context. .ne 3 .sp .ti +4 Comment lines (lines beginning with a '#' in column 1) are ignored. .ne 6 .sp 2 .bd .ul 7. User routines section. .ne 3 .sp .ti +4 This section is simply copied verbatim to the output program. Any user-written subroutines or functions referenced by the actions may be put here. The main program may also be put here, or, if the user prefers, the main program and auxiliary routines may be compiled separately and linked in. No default main program is provided. The simplest one (which can easily be included in the user routines section) is: .sp .nf .in +4
DRIVER(prog)
i = lexscan( 0 )
DRETURN
end
.sp .fi .in -4 .ne 6 .sp 2 .bd .ul 8. Left context sensitivity. .ne 3 .sp .ti +4 Sometimes it is desirable to have several sets of lexical rules applied at different times in the input. For example, a compiler preprocessor might distinguish preprocessor statements and analyze them differently from ordinary statements. This requires sensitivity to prior context, and there are several ways of handling such problems. The % operator, for example, is a prior context operator, recognizing immediately preceding left context, just as $ recognizes immediately following right context. Adjacent left context could be extended, to produce a facility similar to that for adjacent right context, but it is unlikely to be as useful, since often the relevant left context appeared some time earlier, such as at the beginning of a line. .ne 3 .sp .ti +4 This section describes two means of dealing with different environments: a simple use of flags, when only a few rules change from one environment to another, and the use of start conditions on rules. In both cases, there are rules which recognize the need to change the environment in which the following input text is analyzed, and set some parameter to reflect the change. This may be a flag explicitly tested by the user's action code; such a flag is the simplest way of dealing with the problem, since lex is not involved at all. It may be more convenient, however, to have lex remember the flags as initial conditions on the rules. Any rule may be associated with a start condition. It will only be recognized when lex is in that start condition. The current start condition may be changed at any time. .ne 3 .sp .ti +4 Consider the following problem: copy the input to the output, changing the word "magic" to "first" on every line which began with the letter "a", changing "magic" to "second" on every line which began with the letter "b", and changing "magic" to "third" on every line which began with the letter "c". All other words and all other lines are left unchanged.
.ne 3 .sp .ti +4 These rules are so simple that the easiest way to do this job is with a flag: .sp .nf .in +4
    integer flag
    flag = 0
~~
%a          flag = 1;  ECHO
%b          flag = 2;  ECHO
%c          flag = 3;  ECHO
@n          flag = 0;  ECHO
magic       { switch ( flag ) {
                  case 1: call putlin( "first", STDOUT )
                  case 2: call putlin( "second", STDOUT )
                  case 3: call putlin( "third", STDOUT )
                  default: ECHO
                  } }
.sp .fi .in -4 .ne 3 .sp .ti +4 To handle the same problem with start conditions, each start condition must be introduced to lex in the definitions section with a line reading .sp .nf .in +4
~S name1 name2 ...
.sp .fi .in -4 where the conditions may be named in any order. The conditions may be referenced at the head of a rule with the <> brackets: .sp .nf .in +4
<name1>expression
.sp .fi .in -4 is a rule which is only recognized when lex is in the start condition name1. To enter a start condition, execute the action statement .sp .nf .in +4
BEGIN(name1)
.sp .fi .in -4 which changes the start condition to name1. To resume the normal state, .sp .nf .in +4
BEGIN(0)
.sp .fi .in -4 resets the initial condition of the lex automaton interpreter. A rule may be active in several start conditions: .sp .nf .in +4
<name1,name2,name3>
.sp .fi .in -4 is a legal prefix. Any rule not beginning with the <> prefix operator is always active. .ne 3 .sp .ti +4 The same example as before can be written: .sp .nf .in +4
~S AA BB CC
~~
%a          ECHO;  BEGIN(AA)
%b          ECHO;  BEGIN(BB)
%c          ECHO;  BEGIN(CC)
@n          ECHO;  BEGIN(0)
<AA>magic   call putlin( "first", STDOUT )
<BB>magic   call putlin( "second", STDOUT )
<CC>magic   call putlin( "third", STDOUT )
.sp .fi .in -4 where the logic is exactly the same as in the previous method of handling the problem, but lex does the work rather than the user's code. .sp .ne 2 .fi .ti -5 SEE ALSO .br .nf
lex(1)
lexlb(2)
regexp(tutorial)
"Principles of Compiler Design", Aho and Ullman, chapter 3
"Lex - A Lexical Analyzer Generator", M. E. Lesk and E. Schmidt
.sp .ne 2 .fi .ti -5 AUTHOR(S) .br Vern Paxson. Evolved from an original implementation by Jef Poskanzer, with the help of many ideas from Van Jacobson. Most of this document was written by Jef Poskanzer. #-t- lextut.doc 19548 ascii 05Jan84 08:12:04 #-h- yc.doc 49862 ascii 05Jan84 08:12:07 .pl 64 .m1 2 .m2 3 .m3 3 .m4 3 .po 10 .rm 62 .bp 1 .in 0 .he ^YC(T)^%^YC(T)^ .fo ^^- # -^^ .in 5 .sp .ne 2 .fi .ti -5 NAME .br YC - how to construct a yacc input specification .sp .ne 2 .fi .ti -5 SYNOPSIS .br .nf see below .sp .ne 2 .fi .ti -5 DESCRIPTION .br .sp 2 .ul .nf 0. Introduction .fi .ne 3 .sp Yacc provides a general tool for imposing structure on the input to a computer program. The Yacc user prepares a specification of the input process; this includes rules describing the input structure, code to be invoked when these rules are recognized, and a low-level routine to do the basic input. Yacc then generates a function to control the input process. This function, called a .ul parser, calls the user-supplied low-level input routine (the .ul "lexical analyzer") to pick up the basic items (called .ul tokens) from the input stream. These tokens are organized according to the input structure rules, called .ul "grammar rules". When one of these rules has been recognized, the user code supplied for this rule, called an .ul action, is invoked; actions have the ability to return values and make use of the values of other actions. .ne 3 .sp Yacc is written in RATFOR, and the actions and output subroutine are in RATFOR as well. Moreover, many of the syntactic conventions of Yacc follow RATFOR.
.ne 3 .sp The heart of the input specification is a collection of grammar rules. Each rule describes an allowable structure and gives it a name. For example, one grammar rule might be .sp .nf .in +4
date : month_name day ',' year ;
.sp .fi .in -4 Here, .ul date, .ul month_name, .ul day, and .ul year represent structures of interest in the input process; presumably, .ul month_name, .ul day, and .ul year are defined elsewhere. The comma ``,'' is enclosed in single quotes; this implies that the comma is to appear literally in the input. The colon and semicolon merely serve as punctuation in the rule, and have no significance in controlling the input. Thus, with proper definitions, the input .sp .nf .in +4
July 4, 1776
.sp .fi .in -4 might be matched by the above rule. .ne 3 .sp An important part of the input process is carried out by the lexical analyzer. This user-supplied routine reads the input stream, recognizing the lower level structures, and communicates these tokens to the parser. For historical reasons, a structure recognized by the lexical analyzer is called a .ul "terminal symbol", while the structure recognized by the parser is called a .ul "nonterminal symbol". To avoid confusion, terminal symbols will usually be referred to as .ul tokens. .ne 3 .sp There is considerable leeway in deciding whether to recognize structures using the lexical analyzer or grammar rules. For example, the rules .ne 8 .sp .nf .in +4
month_name : 'J' 'a' 'n' ;
month_name : 'F' 'e' 'b' ;

     . . .

month_name : 'D' 'e' 'c' ;
.sp .fi .in -4 might be used in the above example. The lexical analyzer would only need to recognize individual letters, and .ul month_name would be a nonterminal symbol. Such low-level rules tend to waste time and space, and may complicate the specification beyond Yacc's ability to deal with it. Usually, the lexical analyzer would recognize the month names, and return an indication that a .ul month_name was seen; in this case, .ul month_name would be a token. .ne 3 .sp Literal characters such as ``,'' must also be passed through the lexical analyzer, and are also considered tokens. .ne 3 .sp Specification files are very flexible. It is relatively easy to add to the above example the rule .sp .nf .in +4
date : month '/' day '/' year ;
.sp .fi .in -4 allowing .sp .nf .in +4
7 / 4 / 1776
.sp .fi .in -4 as a synonym for .sp .nf .in +4
July 4, 1776
.sp .fi .in -4 In most cases, this new rule could be ``slipped in'' to a working system with minimal effort, and little danger of disrupting existing input. .ne 3 .sp The input being read may not conform to the specifications. These input errors are detected as early as is theoretically possible with a left-to-right scan; thus, not only is the chance of reading and computing with bad input data substantially reduced, but the bad data can usually be quickly found. Error handling, provided as part of the input specifications, permits the reentry of bad data, or the continuation of the input process after skipping over the bad data. .ne 3 .sp In some cases, Yacc fails to produce a parser when given a set of specifications. For example, the specifications may be self-contradictory, or they may require a more powerful recognition mechanism than that available to Yacc. The former cases represent design errors; the latter cases can often be corrected by making the lexical analyzer more powerful, or by rewriting some of the grammar rules.
While Yacc cannot handle all possible specifications, its power compares favorably with similar systems; moreover, the constructions which are difficult for Yacc to handle are also frequently difficult for human beings to handle. Some users have reported that the discipline of formulating valid Yacc specifications for their input revealed errors of conception or design early in the program development. The theory underlying Yacc has been described elsewhere [1,2,3]. .ne 3 .sp The next several sections describe the basic process of preparing a Yacc specification; Section 1 describes the preparation of grammar rules, Section 2 the preparation of the user supplied actions associated with these rules, and Section 3 the preparation of lexical analyzers. Section 4 describes the operation of the parser. Section 5 discusses various reasons why Yacc may be unable to produce a parser from a specification, and what to do about it. Section 6 describes a simple mechanism for handling operator precedences in arithmetic expressions. Section 7 discusses error detection and recovery. Section 8 discusses the operating environment and special features of the parsers Yacc produces. Section 9 gives some suggestions which should improve the style and efficiency of the specifications. Section 10 discusses some advanced topics, and Section 11 gives references. Appendix A has a brief example, and Appendix B gives an example using some of the more advanced features of Yacc. .sp 2 .ul .nf 1. Basic Specification .fi .ne 3 .sp Names refer to either tokens or nonterminal symbols. Yacc requires token names to be declared as such. In addition, for reasons discussed in Section 3, it is often desirable to include the lexical analyzer as part of the specification file; it may be useful to include other programs as well. Thus, every specification file consists of three sections: the .ul declarations, .ul "(grammar) rules", and .ul programs. The sections are separated by double percent ``%%'' marks. (The percent ``%'' is generally used in Yacc specifications as an escape character.) .ne 3 .sp In other words, a full specification file looks like .sp .nf .in +4
declarations
%%
rules
%%
programs
.sp .fi .in -4 .ne 3 .sp The declaration section may be empty. Moreover, if the programs section is omitted, the second %% mark may be omitted also; thus, the smallest legal Yacc specification is .sp .nf .in +4
%%
rules
.sp .fi .in -4 .ne 3 .sp Blanks, tabs, and newlines are ignored except that they may not appear in names or multi-character reserved symbols. Comments begin with a hash mark, '#', as in RATFOR. .ne 3 .sp The rules section is made up of one or more grammar rules. A grammar rule has the form: .sp .nf .in +4
A : BODY ;
.sp .fi .in -4 A represents a nonterminal name, and BODY represents a sequence of zero or more names and literals. The colon and the semicolon are Yacc punctuation. .ne 3 .sp Names may be of arbitrary length, and may be made up of letters, dot ``.'', underscore ``_'', and non-initial digits. Upper and lower case letters are distinct. The names used in the body of a grammar rule may represent tokens or nonterminal symbols. .ne 3 .sp A literal consists of a character enclosed in single quotes ``'''.
The atsign "@" is an escape character within literals, and the following escapes are recognized: .sp .nf .in +4
'@n'    newline
'@r'    return
'@''    single quote ``'''
'@@'    atsign ``@''
'@t'    tab
'@b'    backspace
'@f'    form feed
'@C'    C, where C is any other character
.sp .fi .in -4 .ne 3 .sp If there are several grammar rules with the same left hand side, the vertical bar ``|'' must be used instead of rewriting the left hand side. In addition, the semicolon at the end of a rule is dropped before a vertical bar. Thus the grammar rules .sp .nf .in +4
A : B C D ;
A : E F ;
A : G ;
.sp .fi .in -4 would be given to Yacc as .sp .nf .in +4
A : B C D
  | E F
  | G
  ;
.sp .fi .in -4 It is necessary that all grammar rules with the same left side appear together in the grammar rules section. .ne 3 .sp If a nonterminal symbol matches the empty string, this can be indicated in the obvious way: .sp .nf .in +4
empty : ;
.sp .fi .in -4 .ne 3 .sp Names representing tokens must be declared; this is most simply done by writing .sp .nf .in +4
%token name1 name2 . . .
.sp .fi .in -4 in the declarations section. (See Sections 3, 5, and 6 for much more discussion.) Every name not defined in the declarations section is assumed to represent a nonterminal symbol. Every nonterminal symbol must appear on the left side of at least one rule. .ne 3 .sp Of all the nonterminal symbols, one, called the .ul "start symbol" , has particular importance. The parser is designed to recognize the start symbol; thus, this symbol represents the largest, most general structure described by the grammar rules. By default, the start symbol is taken to be the left hand side of the first grammar rule in the rules section. .ne 3 .sp The end of the input to the parser is signaled by a special token, called the .ul endmarker . If the tokens up to, but not including, the endmarker form a structure which matches the start symbol, the parser function returns to its caller after the endmarker is seen; it .ul accepts the input. If the endmarker is seen in any other context, it is an error. .ne 3 .sp It is the job of the user-supplied lexical analyzer to return the endmarker when appropriate; see section 3, below. Usually the endmarker represents some reasonably obvious I/O status, such as ``end-of-file'' or ``end-of-record''. .sp 2 .ul .nf 2. Actions .fi .ne 3 .sp With each grammar rule, the user may associate actions to be performed each time the rule is recognized in the input process. These actions may return values, and may obtain the values returned by previous actions. Moreover, the lexical analyzer can return values for tokens, if desired. .ne 3 .sp An action is an arbitrary RATFOR statement, and as such can do input and output, call subroutines and functions, and alter external variables. An action is specified by one or more statements, enclosed in ``%{'' and ``%}''. For example, .sp .nf .in +4
A : '(' B ')'
        %{ call prompt( pstring, STDOUT ) %}
    ;
.sp .fi .in -4 and .sp .nf .in +4
XXX : YYY ZZZ
        %{ call putlin( message, ERROUT )
           flag = 25 %}
    ;
.sp .fi .in -4 are grammar rules with actions. .ne 3 .sp To facilitate easy communication between the actions and the parser, the action statements are altered slightly. The dollar sign ``$'' is used as a signal to Yacc in this context. .ne 3 .sp To return a value, the action normally sets the pseudo-variable ``$$'' to some value.
For example, an action that does nothing but return the value 1 is .ne 5 .sp .nf .in +4
%{ $$ = 1 %}
.sp .fi .in -4 .ne 3 .sp The right side of each rule contains terminal and non-terminal symbols. Each of these symbols has a value associated with it. If the symbol is a terminal symbol, then the value associated with it is the value that the lexical analyzer returned when it recognized this symbol. A non-terminal symbol has the value that was returned by the action associated with the rule which defines this non-terminal symbol. To obtain the values associated with each right side symbol, the action may use the pseudo-variables $1, $2, . . ., which refer to each of the right side symbols, reading the rule from left to right. Thus, if the rule is .sp .nf .in +4
A : B C D ;
.sp .fi .in -4 for example, then $2 is the value associated with C, and $3 the value associated with D. .ne 3 .sp As a more concrete example, consider the rule .sp .nf .in +4
expr : '(' expr ')' ;
.sp .fi .in -4 The value returned by this rule is usually the value of the .ul expr in parentheses. This can be indicated by .sp .nf .in +4
expr : '(' expr ')'
        %{ $$ = $2 %}
    ;
.sp .fi .in -4 .ne 3 .sp By default, the value of a rule is the value of the first element in it ($1). Thus, grammar rules of the form .sp .nf .in +4
A : B ;
.sp .fi .in -4 frequently need not have an explicit action. .ne 3 .sp In many applications, output is not done directly by the actions; rather, a data structure, such as a parse tree, is constructed in memory, and transformations are applied to it before output is generated. Parse trees are particularly easy to construct, given routines to build and maintain the tree structure desired. For example, suppose there is a RATFOR function .ul node, written so that the call .sp .nf .in +4
index = node( L, n1, n2 )
.sp .fi .in -4 creates a node with label L, and descendants n1 and n2, and returns the index of the newly created node. Then a parse tree can be built by supplying actions such as: .ne 6 .sp .nf .in +4
expr : expr '+' expr
        %{ $$ = node( '+', $1, $3 ) %}
    ;
.sp .fi .in -4 in the specification. .ne 3 .sp The user may define other variables to be used by the actions. Declarations and definitions can appear in the declarations section, enclosed in the marks ``%{'' and ``%}''. These declarations and definitions have global scope, so they are known to the action statements and the lexical analyzer. For example, .sp .nf .in +4
%{
integer variable
data variable /0/
%}
.sp .fi .in -4 could be placed in the declarations section, making .ul variable accessible to all of the actions. The Yacc parser uses only names beginning in ``yy''; the user should avoid such names. .ne 3 .sp In these examples, all the dollar values are integers; values of other types must be accessed through integer pointers. .sp 2 .ul .nf 3. Lexical Analysis .fi .ne 3 .sp The user must supply a lexical analyzer to read the input stream and communicate tokens (with values, if desired) to the parser. The lexical analyzer is an integer-valued function with one argument. The function returns an integer, the .ul "token number" , representing the kind of token read. If there is a value associated with that token, it should be assigned to yylex's single argument. The synopsis for yylex is: .sp .nf .in +4
yytoken = yylex( yyvalue )

yytoken - next token on input stream
yyvalue - value of yytoken
.sp .fi .in -4 .ne 3 .sp The parser and the lexical analyzer must agree on the token numbers in order for communication between them to take place.
The token numbers may be chosen by Yacc, or chosen by the user. In either case, the "define" mechanism of RATFOR can be used to allow the lexical analyzer to return these numbers symbolically. For example, suppose that the token name DIGIT has been defined in the declarations section of the Yacc specification file. The relevant portion of the lexical analyzer might look like: .ne 20 .sp .nf .in +4
integer function yylex( value )
integer value

integer i, ctoi
character tokstr( MAXLINE )    #next token string

    . . .
call gettok( tokstr )          #get next token string
    . . .

switch ( tokstr(1) ) {
    . . .
    case DIG0, DIG1, DIG2, DIG3, DIG4,
         DIG5, DIG6, DIG7, DIG8, DIG9: {
        i = 1
        value = ctoi( tokstr, i )
        return DIGIT
        }
    . . .
    }
    . . .
.sp .fi .in -4 .ne 3 .sp The intent is to return a token number of DIGIT, and a value equal to the numerical value of the digit. Provided that the lexical analyzer code is placed in the programs section of the specification file, the identifier DIGIT will be defined as the token number associated with the token DIGIT. This mechanism leads to clear, easily modified lexical analyzers. The token name .ul yyerror is reserved for error handling, and should not be used naively (see Section 7). .ne 3 .sp As mentioned above, the token numbers may be chosen by Yacc or by the user. In the default situation, the numbers are chosen by Yacc. The default token number for a literal character is the numerical value of the character in the local character set. Other names are assigned token numbers starting at 259. .ne 3 .sp To assign a token number to a token (including literals), the first appearance of the token name or literal in the declarations section can be immediately followed by a positive integer. This integer is taken to be the token number of the name or literal. Names and literals not defined by this mechanism retain their default definition. It is important that all token numbers be distinct. .ne 3 .sp For historical reasons, the endmarker must have token number equal to 0. This token number cannot be redefined by the user; thus, all lexical analyzers should be prepared to return 0 as a token number upon reaching the end of their input. .ne 3 .sp A very useful tool for constructing lexical analyzers is the .ul Lex tool. The lexical analyzers it produces are designed to work in close harmony with Yacc. The specifications for Lex use regular expressions instead of grammar rules. Lex can be easily used to produce quite complicated lexical analyzers, but there remain some languages (such as FORTRAN) which do not fit any theoretical framework, and whose lexical analyzers must be crafted by hand. .sp 2 .ul .nf 4. How the Parser Works .fi .ne 3 .sp Yacc turns the specification file into a RATFOR program, which parses the input according to the specification given. The algorithm used to go from the specification to the parser is complex, and will not be discussed here (see the references for more information). The parser itself, however, is relatively simple, and understanding how it works, while not strictly necessary, will nevertheless make treatment of error recovery and ambiguities much more comprehensible. .ne 3 .sp The parser produced by Yacc consists of a finite state machine with a stack. The parser is also capable of reading and remembering the next input token (called the .ul lookahead token). The .ul "current state" is always the one on the top of the stack.
The states of the finite state machine are given small integer labels; initially, the machine is in state 1, and the stack contains only state 1. .ne 3 .sp The machine has only four actions available to it, called .ul shift, .ul reduce, .ul accept, and .ul error. A move of the parser is done as follows: .ne 3 .sp .in +6 .ta 5r .ti -6 1. Based on its current state, the parser decides whether it needs a lookahead token to decide what action should be done; if it needs one, and does not have one, it calls the routine .ul yylex to obtain the next token. .in -6 .ne 3 .sp .in +6 .ta 5r .ti -6 2. Using the current state, and the lookahead token if needed, the parser decides on its next action, and carries it out. This may result in states being pushed onto the stack, or popped off of the stack, and in the lookahead token being processed or left alone. .in -6 .ne 3 .sp The .ul shift action (also called a .ul transition) is the most common action the parser takes. Whenever a shift action is taken, there is always a lookahead token. For example, in state 56 there may be a shift action to state 34, where in state 34 the token just seen is IF. This means that, in state 56, if the lookahead token is IF, the current state (56) is pushed down on the stack, and state 34 becomes the current state (on the top of the stack). The lookahead token is cleared. .ne 3 .sp The .ul reduce action keeps the stack from growing without bounds. Reduce actions are appropriate when the parser has seen the right hand side of a grammar rule, and is prepared to announce that it has seen an instance of the rule, replacing the right hand side by the left hand side. It may be necessary to consult the lookahead token to decide whether to reduce, but usually it is not. .ne 3 .sp Suppose the rule being reduced is .sp .nf .in +4
A : x y z ;
.sp .fi .in -4 The reduce action depends on the left hand symbol (A in this case), and the number of symbols on the right hand side (three in this case). To reduce, first pop off the top three states from the stack (the number of states popped equals the number of symbols on the right side of the rule). In effect, these states were the ones put on the stack while recognizing .ul x, .ul y, and .ul z, and no longer serve any useful purpose. After popping these states, a state is uncovered which was the state the parser was in before beginning to process the rule. Using this uncovered state, and the symbol on the left side of the rule, perform what is in effect a shift of A. A new state is obtained, pushed onto the stack, and parsing continues. There are significant differences between the processing of the left hand symbol and an ordinary shift of a token, however, so this action is called a .ul goto action. In particular, the lookahead token is cleared by a shift, and is not affected by a goto. In any case, the uncovered state contains a transition state, which is the state that has just recognized A (i.e. .ul shifted A). This transition state is then pushed onto the stack, becoming the current state. .ne 3 .sp In effect, the reduce action ``turns back the clock'' in the parse, popping the states off the stack to go back to the state where the right hand side of the rule was first seen. The parser then behaves as if it had seen the left side at that time. If the right hand side of the rule is empty, no states are popped off of the stack: the uncovered state is in fact the current state. .ne 3 .sp The reduce action is also important in the treatment of user-supplied actions and values.
When a rule is reduced, the code supplied with the rule is executed before the stack is adjusted. In addition to the stack holding the states, another stack, running in parallel with it, holds the values returned from the lexical analyzer and the actions. When a shift takes place, the value assigned to yylex's parameter is copied onto the value stack. After the return from the user code, the reduction is carried out. When the .ul goto action is done, the external variable .ul yyval (i.e. '$$') is copied onto the value stack. The pseudo-variables $1, $2, etc., refer to the value stack. .ne 3 .sp The other two parser actions are conceptually much simpler. The .ul accept action indicates that the entire input has been seen and that it matches the specification. This action appears only when the lookahead token is the endmarker, and indicates that the parser has successfully done its job. The .ul error action, on the other hand, represents a place where the parser can no longer continue parsing according to the specification. The input tokens it has seen, together with the lookahead token, cannot be followed by anything that would result in a legal input. The parser reports an error, and attempts to recover the situation and resume parsing: the error recovery (as opposed to the detection of error) will be covered in Section 7. .ne 3 .sp It is time for an example! Consider the specification .sp .nf .in +4
%token ding dong dell
%%
rhyme : sound place ;
sound : ding dong ;
place : dell ;
.sp .fi .in -4 .ne 3 .sp When Yacc is invoked with the '-v' option, a description of the parser actions is written to standard output. The description of the above grammar produced by the '-v' option would be: .sp .nf .in +4
*** terminals ***          *** non terminals ***
262 dell                   263
260 ding                   266 place
261 dong                   264 rhyme
  0 end                    265 sound

*** the productions ***
1        : end rhyme end
2  rhyme : sound place
3  sound : ding dong
4  place : dell

*** a vocabulary cross-reference ***
dell     4
ding     3
dong     3
end      1 1 -1
place    2 -4
rhyme    1 -2
sound    2 -3

*** the state sets ***

state: 1
  1        : .  end rhyme end       end
the transitions: 2

state: 2
  1        : end .  rhyme end       end
the transitions: 3 4 5

state: 3
  3  sound : ding .  dong           dell
the transitions: 6

state: 4
  1        : end rhyme .  end       end
the transitions: 7

state: 5
  2  rhyme : sound .  place         end
the transitions: 8 9

state: 6
  3  sound : ding dong .            dell
the reductions: 3  dell

state: 7
  1        : end rhyme end .        end
the reductions: 1  end

state: 8
  4  place : dell .                 end
the reductions: 4  end

state: 9
  2  rhyme : sound place .          end
the reductions: 2  end
.sp .fi .in -4 .ne 3 .sp The first section lists the grammar symbols and their definitions. The second section lists the grammar productions. Notice that there is an extra production added at the beginning of the grammar. This production makes it easy to start and stop the parser in a standard state. The "vocabulary cross-reference" section lists each of the grammar symbols and the numbers of the productions the symbol occurs in. The "-" sign in front of a production number indicates a definition production for the symbol. The last section describes the parser. Notice that, in addition to the actions for each state, there is a description of the parsing rules being processed in each state. The dot is used to separate what has been seen, and what is yet to come, in each rule. In fact, for each rule, the symbol in front of the dot is the lookahead symbol which must be recognized before .ul shifting to the state containing that rule.
Each rule in a state is directly followed by a list of terminals. These terminals make up the FOLLOW set for the LHS of the rule, i.e. the set of all terminals which can appear immediately to the right of A, where A is the LHS symbol of the rule. Each state also lists the possible transition states, and all rules which can be reduced in the state. The list of tokens following the reduction rule number is the set of lookahead terminals which would signal a reduction by this rule. .ne 3 .sp Suppose the input is .sp .nf .in +4
ding dong dell
.sp .fi .in -4 It is instructive to follow the steps of the parser while processing this input. .ne 3 .sp Initially, the current state is state 1. The parser provides the first token, the endtoken, for this special state. The action upon seeing .ul end is to .ul shift to state 2, so state 2 is pushed onto the stack and becomes the current state. The parser needs to refer to the input in order to decide between the actions available in state 2, so the first real token, .ul ding, is read, becoming the lookahead token. State 2 has three transition states, 3, 4, and 5. The lookahead token for state 3 is .ul ding (the symbol before the dot in the production in state 3). So, the action in state 2 on .ul ding is ``shift 3'': state 3 is pushed onto the stack, and the lookahead token is cleared. State 3 becomes the current state. The next token, .ul dong, is read, becoming the lookahead token. The action in state 3 on the token .ul dong is ``shift 6'', since the only transition state is 6 and the symbol before the dot in state 6 is .ul dong. So, state 6 is pushed onto the stack, and the lookahead is cleared. The stack now contains 1, 2, 3, and 6. In state 6, the only action is to reduce by rule 3. .sp .nf .in +4
sound : ding dong
.sp .fi .in -4 This rule has two symbols on the right hand side, so two states, 6 and 3, are popped off of the stack, uncovering state 2. Consulting the description of state 2, there are, again, three transition states. State 5 becomes the goto state, since the symbol before the dot is the same as the left hand side of the rule just reduced by. Thus state 5 is pushed onto the stack, becoming the current state. .ne 3 .sp In state 5, the next token, .ul dell, must be read. The action is ``shift 8'', so state 8 is pushed onto the stack, which now has 1, 2, 5, and 8 on it, and the lookahead token is cleared. In state 8, the only action is to reduce by rule 4. This has one symbol on the right hand side, so one state, 8, is popped off, and state 5 is uncovered. The goto in state 5 on .ul place, the left side of rule 4, is state 9. Now, the stack contains 1, 2, 5, and 9. In state 9, the only action is to reduce by rule 2. There are two symbols on the right, so the top two states are popped off, uncovering state 2. In state 2, there is a goto on .ul rhyme causing the parser to enter and stack state 4. In state 4, the input is read; the endmarker is obtained, indicated by ``end'' in the description file. The action in state 4 when the endmarker is seen is to shift to state 7, so now the stack contains states 1, 2, 4, and 7. In state 7, the only action is to reduce by rule 1, popping 3 states off the stack, leaving state 1 on top. Although it is not indicated, the parser knows that state 7 is the final state, and whenever the final state is entered and the next lookahead token is the endmarker, the action is to accept, successfully ending the parse. The final state is always the state containing the added production rule fully parsed (i.e.
the dot is at the end of the extra production). .ne 3 .sp The reader is urged to consider how the parser works when confronted with such incorrect strings as .ul "ding dong dong", .ul "ding dong", .ul "ding dong dell dell", etc. A few minutes spent with this and other simple examples will probably be repaid when problems arise in more complicated contexts. .sp 2 .ul .nf 5. Ambiguity and Conflicts .fi .ne 3 .sp A set of grammar rules is .ul ambiguous if there is some input string that can be structured in two or more different ways. For example, the grammar rule .sp .nf .in +4
expr : expr '-' expr
.sp .fi .in -4 is a natural way of expressing the fact that one way of forming an arithmetic expression is to put two other expressions together with a minus sign between them. Unfortunately, this grammar rule does not completely specify the way that all complex inputs should be structured. For example, if the input is .sp .nf .in +4
expr - expr - expr
.sp .fi .in -4 the rule allows this input to be structured as either .sp .nf .in +4
( expr - expr ) - expr
.sp .fi .in -4 or as .sp .nf .in +4
expr - ( expr - expr )
.sp .fi .in -4 (The first is called .ul "left association", the second .ul "right association"). .ne 3 .sp Yacc detects such ambiguities when it is attempting to build the parser. It is instructive to consider the problem that confronts the parser when it is given an input such as .sp .nf .in +4
expr - expr - expr
.sp .fi .in -4 When the parser has read the second expr, the input that it has seen: .sp .nf .in +4
expr - expr
.sp .fi .in -4 matches the right side of the grammar rule above. The parser could .ul reduce the input by applying this rule. After applying the rule, the input is reduced to .ul expr (the left side of the rule). The parser would then read the final part of the input: .sp .nf .in +4
- expr
.sp .fi .in -4 and again reduce. The effect of this is to take the left associative interpretation. .ne 3 .sp Alternatively, when the parser has seen .sp .nf .in +4
expr - expr
.sp .fi .in -4 it could defer the immediate application of the rule, and continue reading the input until it had seen .sp .nf .in +4
expr - expr - expr
.sp .fi .in -4 It could then apply the rule to the rightmost three symbols, reducing them to .ul expr and leaving .sp .nf .in +4
expr - expr
.sp .fi .in -4 Now the rule can be reduced once more; the effect is to take the right associative interpretation. Thus, having read .sp .nf .in +4
expr - expr
.sp .fi .in -4 the parser can do two legal things, a shift or a reduction, and has no way of deciding between them. This is called a .ul "shift/reduce conflict". It may also happen that the parser has a choice of two legal reductions; this is called a .ul "reduce/reduce conflict". Note that there are never any ``shift/shift'' conflicts. .ne 3 .sp When there are shift/reduce or reduce/reduce conflicts, Yacc still produces a parser. It does this by selecting one of the valid steps wherever it has a choice. A rule describing which choice to make in a given situation is called a .ul "disambiguating rule". .ne 3 .sp Yacc invokes two disambiguating rules by default: .ne 3 .sp .in +6 .ta 5r .ti -6 1. In a shift/reduce conflict, the default is to do the shift. .in -6 .ne 3 .sp .in +6 .ta 5r .ti -6 2. In a reduce/reduce conflict, the default is to reduce by the .ul earlier grammar rule (in the input sequence). .in -6 .ne 3 .sp Rule 1 implies that reductions are deferred whenever there is a choice, in favor of shifts.
Rule 2 gives the user rather crude control over the behavior of the parser in this situation, but reduce/reduce conflicts should be avoided whenever possible. .ne 3 .sp Conflicts may arise because of mistakes in input or logic, or because the grammar rules, while consistent, require a more complex parser than Yacc can construct. The use of actions within rules can also cause conflicts, if the action must be done before the parser can be sure which rule is being recognized. In these cases, the application of disambiguating rules is inappropriate, and leads to an incorrect parser. For this reason, Yacc always reports the shift/reduce and reduce/reduce conflicts resolved by Rule 1 or Rule 2. .ne 3 .sp In general, whenever it is possible to apply disambiguating rules to produce a correct parser, it is also possible to rewrite the grammar rules so that the same inputs are read but there are no conflicts. For this reason, most previous parser generators have considered conflicts to be fatal errors. Further experience suggests that this rewriting is somewhat unnatural, and produces slower parsers; thus, Yacc will produce parsers even in the presence of conflicts. .ne 3 .sp As an example of the power of disambiguating rules, consider a fragment from a programming language involving an ``if-then-else'' construction: .sp .nf .in +4
stat : IF '(' cond ')' stat
     | IF '(' cond ')' stat ELSE stat
     ;
.sp .fi .in -4 In these rules, .ul IF and .ul ELSE are tokens, .ul cond is a nonterminal symbol describing conditional (logical) expressions, and .ul stat is a nonterminal symbol describing statements. The first rule will be called the .ul simple-if rule, and the second the .ul if-else rule. .ne 3 .sp These two rules form an ambiguous construction, since input of the form .sp .nf .in +4
IF ( C1 ) IF ( C2 ) S1 ELSE S2
.sp .fi .in -4 can be structured according to these rules in two ways: .sp .nf .in +4
IF ( C1 ) { IF ( C2 ) S1 } ELSE S2
.sp .fi .in -4 or .sp .nf .in +4
IF ( C1 ) { IF ( C2 ) S1 ELSE S2 }
.sp .fi .in -4 The second interpretation is the one given in most programming languages having this construct. Each .ul ELSE is associated with the last preceding ``un-ELSE'd'' .ul IF. In this example, consider the situation where the parser has seen .sp .nf .in +4
IF ( C1 ) IF ( C2 ) S1
.sp .fi .in -4 and is looking at the .ul ELSE. It can immediately reduce by the simple-if rule to get .sp .nf .in +4
IF ( C1 ) stat
.sp .fi .in -4 and then read the remaining input, .sp .nf .in +4
ELSE S2
.sp .fi .in -4 and reduce .sp .nf .in +4
IF ( C1 ) stat ELSE S2
.sp .fi .in -4 by the if-else rule. This leads to the first of the above groupings of the input. .ne 3 .sp On the other hand, the .ul ELSE may be shifted, .ul S2 read, and then the right hand portion of .sp .nf .in +4
IF ( C1 ) IF ( C2 ) S1 ELSE S2
.sp .fi .in -4 can be reduced by the if-else rule to get .sp .nf .in +4
IF ( C1 ) stat
.sp .fi .in -4 which can be reduced by the simple-if rule. This leads to the second of the above groupings of the input, which is usually desired. .ne 3 .sp Once again the parser can do two valid things - there is a shift/reduce conflict. The application of disambiguating rule 1 tells the parser to shift in this case, which leads to the desired grouping.
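.ne 3 .sp For comparison, the same statements can also be described by an unambiguous grammar which builds the ``ELSE goes with the nearest IF'' interpretation into the rules themselves. The following sketch shows the standard rewriting; it is not part of the original example, and the nonterminal other_stat, standing for the non-IF statements, is hypothetical: .sp .nf .in +4
stat      : matched
          | unmatched
          ;
matched   : IF '(' cond ')' matched ELSE matched
          | other_stat
          ;
unmatched : IF '(' cond ')' stat
          | IF '(' cond ')' matched ELSE unmatched
          ;
.sp .fi .in -4 Most users find the ambiguous form, together with its disambiguating rule, shorter and more natural than this rewriting.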
.ne 3 .sp This shift/reduce conflict arises only when there is a particular current input symbol, .ul ELSE, and particular inputs already seen, such as .sp .nf .in +4
IF ( C1 ) IF ( C2 ) S1
.sp .fi .in -4 In general, there may be many conflicts, and each one will be associated with an input symbol and a set of previously read inputs. The previously read inputs are characterized by the state of the parser. .ne 3 .sp The conflict messages of Yacc are best understood by examining the verbose (-v) option output file. For example, the output corresponding to the above conflict state might be: .sp .nf .in +4
state: 23
  18  stat : IF ( cond ) stat .
  19  stat : IF ( cond ) stat . ELSE stat
the transitions: 45
the reductions: 18  ELSE

*** WARNING: unresolved SHIFT/REDUCE conflict.
*** State, Rule, and Token involved are: 45, 18, ELSE
*** Default action is to SHIFT.
.sp .fi .in -4 The last three lines following the state description describe the conflict, giving the state, rule, and the input symbol. Recall that the dot marks the portion of the grammar rules which has been seen. Thus in the example, in state 23 the parser has seen input corresponding to .sp .nf .in +4
IF ( cond ) stat
.sp .fi .in -4 and the two grammar rules shown are active at this time. The parser can do two possible things. If the input symbol is .ul ELSE, it is possible to shift into state 45. State 45 will have, as part of its description, the line .sp .nf .in +4
19 stat : IF ( cond ) stat ELSE . stat
.sp .fi .in -4 since the .ul ELSE will have been shifted in this state. Back in state 23, the other action is to reduce by grammar rule 18. By default, the .ul shift will be done if the lookahead token is ELSE. Otherwise, the parser will reduce by grammar rule 18: .sp .nf .in +4
18 stat : IF '(' cond ')' stat
.sp .fi .in -4 Once again, notice that the numbers following ``shift'' commands refer to other states, while the numbers following ``reduce'' commands refer to grammar rule numbers. In most states, there will be at most one possible reduce action. The user who encounters unexpected shift/reduce conflicts will probably want to look at the verbose output to decide whether the default actions are appropriate. In really tough cases, the user might need to know more about the behavior and construction of the parser than can be covered here. In this case, one of the theoretical references [1,2,3] might be consulted; the services of a local guru might also be appropriate. .sp 2 .ul .nf 6. Precedence .fi .ne 3 .sp There is one common situation where the rules given above for resolving conflicts are not sufficient; this is in the parsing of arithmetic expressions. Most of the commonly used constructions for arithmetic expressions can be naturally described by the notion of .ul precedence levels for operators, together with information about left or right associativity. It turns out that ambiguous grammars with appropriate disambiguating rules can be used to create parsers that are faster and easier to write than parsers constructed from unambiguous grammars. The basic notion is to write grammar rules of the form .sp .nf .in +4
expr : expr OP expr
.sp .fi .in -4 and .sp .nf .in +4
expr : UNARY expr
.sp .fi .in -4 for all binary and unary operators desired. This creates a very ambiguous grammar, with many parsing conflicts. As disambiguating rules, the user specifies the precedence, or binding strength, of all the operators, and the associativity of the binary operators.
.ne 3 .sp The precedences and associativities are attached to tokens in the declarations section. This is done by a series of lines beginning with a Yacc keyword: %left, %right, or %nonassoc, followed by a list of tokens. All of the tokens on the same line are assumed to have the same precedence level and associativity; the lines are listed in order of increasing precedence or binding strength. Thus, .sp .nf .in +4
%left '+' '-'
%left '*' '/'
.sp .fi .in -4 describes the precedence and associativity of the four arithmetic operators. Plus and minus are left associative, and have lower precedence than star and slash, which are also left associative. The keyword %right is used to describe right associative operators, and the keyword %nonassoc is used to describe operators, like the operator .LT. in Fortran, that may not associate with themselves; thus, .sp .nf .in +4 A .LT. B .LT. C .sp .fi .in -4 is illegal in Fortran, and such an operator would be described with the keyword %nonassoc in Yacc. As an example of the behavior of these declarations, the description .sp .nf .in +4
%right '='
%left '+' '-'
%left '*' '/'
%%
expr : expr '=' expr
     | expr '+' expr
     | expr '-' expr
     | expr '*' expr
     | expr '/' expr
     | NAME
     ;
.sp .fi .in -4 might be used to structure the input .sp .nf .in +4 a = b = c*d - e - f*g .sp .fi .in -4 as follows: .sp .nf .in +4 a = ( b = ( ((c*d)-e) - (f*g) ) ) .sp .fi .in -4
.ne 3 .sp A token declared by %left, %right, or %nonassoc should not be declared by %token as well. Token definitions can be given on these lines in the same way as on %token lines. A token declared with %token has no associativity and no precedence associated with it.
.ne 3 .sp The precedences and associativities are used by Yacc to resolve parsing conflicts; they give rise to disambiguating rules. Formally, the rules work as follows:
.ne 3 .sp .in +6 .ta 5r .ti -6 1. The precedences and associativities are recorded for those terminals that have them. .in -6
.ne 3 .sp .in +6 .ta 5r .ti -6 2. A precedence and associativity is associated with each grammar rule; it is the precedence and associativity of the last terminal in the body of the rule. If the rule contains no terminals, or if the last terminal has no precedence, the rule will have no precedence or associativity associated with it. .in -6
.ne 3 .sp .in +6 .ta 5r .ti -6 3. When there is a reduce/reduce conflict and either of the grammar rules in conflict has no associated precedence, or there is a shift/reduce conflict and either the input symbol or the grammar rule has no associated precedence, then the two disambiguating rules given at the beginning of the section are used, and the conflicts are reported. .in -6
.ne 3 .sp .in +6 .ta 5r .ti -6 4. If there is a shift/reduce conflict, and both the grammar rule and the input character have precedence and associativity associated with them, then the conflict is resolved in favor of the action (shift or reduce) associated with the higher precedence. If the precedences are the same, then the associativity is used; left associative implies reduce, right associative implies shift, and nonassociating implies error. The trace sketched below illustrates this rule. .in -6
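.ne 3 .sp To see rule 4 at work, consider the arithmetic grammar and declarations shown above, and trace the parser's choices on two inputs. This trace is only a sketch of the reasoning, not actual Yacc output. .sp .nf .in +4
input: a - b * c
seen: expr '-' expr, looking at '*'
'-' has lower precedence than '*', so shift
grouping: a - (b*c)

input: a - b - c
seen: expr '-' expr, looking at '-'
equal precedence, left associative, so reduce
grouping: (a-b) - c
.sp .fi .in -4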
.sp 2 .nf .ul 7. Error Handling .fi
.ne 3 .sp Error handling is an extremely difficult area, and many of the problems are semantic ones. When an error is found, for example, it may be necessary to reclaim parse tree storage, delete or alter symbol table entries, and, typically, set switches to avoid generating any further output.
.ne 3 .sp It is seldom acceptable to stop all processing when an error is found; it is more useful to continue scanning the input to find further syntax errors. This leads to the problem of getting the parser ``restarted'' after an error. A general class of algorithms to do this involves discarding a number of tokens from the input string, and attempting to adjust the parser so that input can continue.
.ne 3 .sp To allow the user some control over this process, Yacc provides a simple, but reasonably general, feature. The token name ``yyerror'' is reserved for error handling. This name can be used in grammar rules; in effect, it suggests places where errors are expected, and recovery might take place. The parser pops its stack until it enters a state where the token ``yyerror'' is legal. It then behaves as if the token ``yyerror'' were the current lookahead token, and performs the action encountered. The lookahead token is then reset to the token that caused the error. If no special error rules have been specified, the processing halts when an error is detected.
.ne 3 .sp In order to prevent a cascade of error messages, the parser, after detecting an error, remains in error state until three tokens have been successfully read and shifted. If an error is detected when the parser is already in error state, no message is given, and the input token is quietly deleted.
.ne 3 .sp As an example, a rule of the form .sp .nf .in +4 stat : yyerror .sp .fi .in -4 would, in effect, mean that on a syntax error the parser would attempt to skip over the statement in which the error was seen.
.ne 3 .sp Actions may be used with these special error rules. These actions might attempt to reinitialize tables, reclaim symbol table space, etc.
.ne 3 .sp Error rules such as the above are very general, but difficult to control. Somewhat easier are rules such as .sp .nf .in +4 stat : yyerror ';' .sp .fi .in -4 Here, when there is an error, the parser attempts to skip over the statement, but will do so by skipping to the next ';'. All tokens after the error and before the next ';' cannot be shifted, and are discarded. When the ';' is seen, this rule will be reduced, and any ``cleanup'' action associated with it performed.
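.ne 3 .sp For instance, the ';' rule above might report the recovery and reinitialize any per-statement tables. The following is only a sketch: the brace-delimited action and the remark output routine are assumptions patterned after standard Yacc and the software tools library, not confirmed RTSG syntax. .sp .nf .in +4
stat : yyerror ';'  { call remark("statement discarded.") }
.sp .fi .in -4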
.sp 2 .nf .ul 8. The Yacc Environment .fi
.ne 3 .sp When the user inputs a specification to yacc, the result is a binary file in "bin/gronk", where gronk is the base name of the input file handed to yacc. If the "-b" or "-c" flag is specified, then the result will be a library in "lib/gronk". (All flags not recognized by yacc are passed to rc.) The parse function produced by Yacc is called .ul yyparse; it is an integer-valued function with a single integer-valued argument. When it is called, it in turn repeatedly calls .ul yylex, the lexical analyzer supplied by the user (see Section 3), to obtain input tokens. Eventually, either an error is detected, in which case (if no error recovery is possible) .ul yyparse returns ERR, or the lexical analyzer returns the endmarker token and the parser accepts. In this case, .ul yyparse returns OK.
.ne 3 .sp The user must provide a certain amount of environment for this parser in order to obtain a working program. For example, a main program should be defined that eventually calls .ul yyparse. This routine must be supplied in one form or another by the user. The usual way is to put it in the .ul programs section of the yacc input. See section 1. To show how simple this program can be, an example source is given below: .ne 15 .sp .nf .in +4
### main - main program to call the parser
#
program main
integer yyparse
integer sts

call initr4
if ( yyparse(sts) == ERR )
    call error( "main: FATAL ERROR while parsing input." )
call endr4
stop
end
.sp .fi .in -4
.ne 3 .sp The '-p' flag in yacc puts the generated parser in debug mode. The parser will output a verbose description of its actions, including a discussion of which input symbols have been read, and what the parser actions are.
.sp 2 .ul .nf 9. Hints For Preparing Specifications .fi .ne 3 .sp To be added later.
.sp 2 .ul .nf 10. Advanced Topics .fi .ne 3 .sp To be added later.
.sp 2 .ul .nf .ne 7 11. References .fi
.ne 3 .sp .in +6 .ta 5r .ti -6 1. Aho, A.V., and Johnson, S.C. [1974]. "LR parsing", Computing Surveys 6:2, 99-124. .in -6
.ne 3 .sp .in +6 .ta 5r .ti -6 2. Aho, A.V., Johnson, S.C., and Ullman, J.D. [1975]. "Deterministic parsing of ambiguous grammars", Comm. ACM 18:8, 441-452. .in -6
.ne 3 .sp .in +6 .ta 5r .ti -6 3. Aho, A.V., and Ullman, J.D. [1979]. Principles of Compiler Design, Addison-Wesley. .in -6
.ne 3 .sp .sp 2 .nf .ul Appendix A: A Simple Example .fi .ne 3 .sp To be added later.
.sp 2 .nf .ul Appendix B: An Advanced Example .fi .ne 3 .sp To be added later.
.sp .ne 2 .fi .ti -5 FILES .br none
.sp .ne 2 .fi .ti -5 SEE ALSO .br .nf
yaclr(1), rc(1), lrgen(1), pr(1), yyplb(2)
"Yacc: Yet Another Compiler-Compiler" by S. C. Johnson
"LR - Automatic Parser Generator and LR(1) Parser" by C. Wetherell and A. Shannon.
.sp .ne 2 .fi .ti -5 AUTHOR(S) .br Major portions of this writeup were adapted from S. C. Johnson's "Yacc: Yet Another Compiler-Compiler". Descriptions of RTSG peculiarities were added by Theresa Breckon.
#-t- yc.doc 49862 ascii 05Jan84 08:12:07