Scores overall are pretty low, and what I learned from grading them is that we really aren't done with LR parsing (LR(0) and especially LALR(1)) yet. Some of you are close to mastering it, some are pretty lost, and many are somewhere in between.
Why do I even care? After all, we've got nice programs like bison and cup to do this for us. But I do care, because LR parsing is an exemplar of a couple of important algorithmic patterns:
- Treating a back-tracking problem (searching for a parse tree) as a non-deterministic search (sending angels out to scout different possible parses), and then, because the number of branches we need to explore is finite, folding those branches together into a deterministic algorithm, just like the subset construction from NFA to DFA. This is a powerful pattern with other applications.
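To make the analogy concrete, here is a small sketch of the subset construction in Python. The encoding is my own assumption, not anything from our handouts: an NFA is a dict from (state, symbol) pairs to sets of successor states, and each DFA state is a frozenset of NFA states (all the "angels" alive at once).

```python
from collections import deque

def subset_construction(nfa_delta, start, accepting):
    """Fold a non-deterministic search into a deterministic one:
    each DFA state is the set of NFA states reachable so far."""
    symbols = {sym for (_, sym) in nfa_delta}
    start_set = frozenset([start])
    dfa_delta = {}
    seen = {start_set}
    worklist = deque([start_set])
    while worklist:
        S = worklist.popleft()
        for sym in symbols:
            # Merge every branch the NFA could take on this symbol.
            T = frozenset(t for s in S for t in nfa_delta.get((s, sym), ()))
            if not T:
                continue
            dfa_delta[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    # A DFA state accepts if any NFA state inside it accepts.
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, start_set, dfa_accepting

# Toy NFA (hypothetical): strings over {a, b} that end in "ab".
nfa = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
delta, start_set, accept = subset_construction(nfa, 0, {2})
```

The finiteness argument is visible in the code: `seen` can hold at most 2^n subsets of the n NFA states, so the worklist loop must terminate.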
- Applying a set of rules (here, adding new items, i.e., dotted productions, to a CFSM state during closure) over and over until none of the rules changes anything. We'll see this again in data flow analysis, and it appears in other domains as well. Often we can look at a problem this way or in other ways, but this way is simpler and clearer. For example, although most textbooks describe the computation of "first" and "follow" sets in other ways, the simplest description is a set of recursive set inequalities that we apply repeatedly until nothing changes.
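Here is what that looks like for "first" sets, as a sketch in Python. The grammar encoding (a dict from nonterminal to a list of productions, each a tuple of symbols, with `''` standing for epsilon) and the tiny expression grammar at the bottom are my own illustrative choices:

```python
def first_sets(grammar, terminals):
    """Compute FIRST for every symbol by applying the defining
    inequalities until a pass makes no change (a fixed point)."""
    FIRST = {t: {t} for t in terminals}
    for nt in grammar:
        FIRST[nt] = set()
    changed = True
    while changed:
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                before = len(FIRST[nt])
                if not prod:                     # epsilon production
                    FIRST[nt].add('')
                for sym in prod:
                    # FIRST(nt) must include FIRST(sym) minus epsilon...
                    FIRST[nt] |= FIRST[sym] - {''}
                    if '' not in FIRST[sym]:     # ...and stops here unless
                        break                    # sym can derive epsilon.
                else:
                    if prod:                     # every symbol nullable
                        FIRST[nt].add('')
                if len(FIRST[nt]) != before:
                    changed = True
    return FIRST

# Toy grammar: E -> T E'   E' -> + T E' | epsilon   T -> id
grammar = {'E': [('T', "E'")],
           "E'": [('+', 'T', "E'"), ()],
           'T': [('id',)]}
FIRST = first_sets(grammar, {'+', 'id'})
```

The loop terminates because each pass can only grow the sets, and every set is bounded by the finite set of terminals plus epsilon.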