This assignment is due on Wednesday, August 7, 2019 before 11:59PM.

Term Project: R2D2 project or NLP assignment

The term project is required for 521 students, but not for 421 students. However, if you are a 421 student, you may complete it optionally for extra credit worth up to 3% of the total course grade. If you are a 521 student, you can opt to do both assignments and get extra credit for the NLP assignment (up to 3%).

You need to complete one of two options: either a group project that uses your R2D2s or an individual homework assignment (the NLP assignment given below).

Notice:

  1. No late submission is allowed for the Term Project.
  2. The NLP assignment will be autograded. Unlike the other homework assignments, however, you will not have access to all of the test cases upon submission. You will only see your final score after the due date, when we run the autograder.

1. R2D2 team project

This is a self-designed project. You should pick a topic from the class, and write a program to demonstrate how it can apply to your robots. You will turn in your code, plus a short write-up or video that describes what you did. This is a team-based project. Your team can have anywhere from 2-5 people. Everyone in your team will get the same grade.

Here is an example of the scope and style of project that we are hoping that you’ll do: A* search with robots.

Here are some potential ideas of what you could do:

  • Use the 8-puzzle to sort a grid of robots into ascending order by their IDs.
  • Design a game between R2D2 and R2Q5 that uses adversarial search.
  • Translate natural language commands into code.
  • Create a Grid World for R2D2 and execute its actions according to a probability distribution. Use an MDP to find the best policy to navigate its Grid World.
  • Add a new sensor to R2D2 by using your smartphone camera to estimate its position and orientation on a grid.

Milestones:

  • Monday July 29: Form a team, and announce your team in a private note on Piazza.
  • Wednesday July 31: Submit your term project idea for feedback to Piazza (can be private or public).
  • Wednesday August 7: Give an in-class demo of your project. Prepare a short (5-minute) presentation and show your robots in action. Submit your code and report/video on Gradescope.

2. Alternate individual assignment: Use NLP to Generate a Novel

Instructions

In this assignment, you will gain experience working with Markov models on text.

You will need to download the following files. A skeleton file containing empty definitions for each question has been provided. Since this assignment will be graded automatically, none of the names or function signatures in this file should be modified. However, you are free to introduce additional variables or functions if needed.

  1. skeleton file markov_models.py
  2. Frankenstein (text file)

You may import definitions from any standard Python library, and are encouraged to do so in case you find yourself reinventing the wheel.

You will find that in addition to a problem specification, most programming questions also include a pair of examples from the Python interpreter. These are meant to illustrate typical use cases, and should not be taken as comprehensive test suites.

You are strongly encouraged to follow the Python style guidelines set forth in PEP 8, which was written in part by the creator of Python. However, your code will not be graded for style.

Once you have completed the assignment, you should submit your markov_models.py file on Gradescope. You may submit as many times as you would like before the deadline, but only the last submission will be saved.

Markov Models [100 points]

In this section, you will build a simple language model that can be used to generate random text resembling a source document. Your use of external code should be limited to built-in Python modules, which excludes, for example, NumPy and NLTK.

  1. [10 points] Write a simple tokenization function tokenize(text) which takes as input a string of text and returns a list of tokens derived from that text. Here, we define a token to be a contiguous sequence of non-whitespace characters, with the exception that any punctuation mark should be treated as an individual token. Hint: Use the built-in constant string.punctuation, found in the string module. (A hedged sketch of one possible implementation of this function and of ngrams appears after this problem list.)

     >>> tokenize("  This is an example.  ")
     ['This', 'is', 'an', 'example', '.']
    
     >>> tokenize("'Medium-rare,' she said.")
     ["'", 'Medium', '-', 'rare', ',', "'", 'she', 'said', '.']
    
  2. [10 points] Write a function ngrams(n, tokens) that produces a list of all $n$-grams of the specified size from the input token list. Each $n$-gram should consist of a 2-element tuple (context, token), where the context is itself an $(n−1)$-element tuple consisting of the $n−1$ words preceding the current token. The sentence should be padded with $n−1$ "<START>" tokens at the beginning and a single "<END>" token at the end. If $n=1$, all contexts should be empty tuples. You may assume that $n\ge1$.

     >>> ngrams(1, ["a", "b", "c"])
     [((), 'a'), ((), 'b'), ((), 'c'), ((), '<END>')]
    
     >>> ngrams(2, ["a", "b", "c"])
     [(('<START>',), 'a'), (('a',), 'b'), (('b',), 'c'), (('c',), '<END>')]
    
     >>> ngrams(3, ["a", "b", "c"])
     [(('<START>', '<START>'), 'a'), (('<START>', 'a'), 'b'), (('a', 'b'), 'c'), (('b', 'c'), '<END>')]
    
  3. [10 points] In the NgramModel class, write an initialization method __init__(self, n) which stores the order $n$ of the model and initializes any necessary internal variables. Then write a method update(self, sentence) which computes the $n$-grams for the input sentence and updates the internal counts. Lastly, write a method prob(self, context, token) which accepts an $(n−1)$-tuple representing a context and a token, and returns the probability of that token occurring, given the preceding context. (One possible shape for this class is sketched after this problem list.)

     >>> m = NgramModel(1)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> m.prob((), "a")
     0.3
     >>> m.prob((), "c")
     0.1
     >>> m.prob((), "<END>")
     0.2
    
     >>> m = NgramModel(2)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> m.prob(("<START>",), "a")
     1.0
     >>> m.prob(("b",), "c")
     0.3333333333333333
     >>> m.prob(("a",), "x")
     0.0
    
  4. [20 points] In the NgramModel class, write a method random_token(self, context) which returns a random token according to the probability distribution determined by the given context. Specifically, let $T=\langle t_1,t_2, \cdots, t_n \rangle$ be the set of tokens which can occur in the given context, sorted according to Python’s natural lexicographic ordering, and let $0\le r<1$ be a random number between 0 and 1. Your method should return the token $t_i$ such that

     $$\sum_{j=1}^{i-1} P(t_j \mid \text{context}) \;\le\; r \;<\; \sum_{j=1}^{i} P(t_j \mid \text{context}).$$

    You should use a single call to the random.random() function to generate $r$.

     >>> m = NgramModel(1)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> random.seed(1)
      >>> [m.random_token(()) for i in range(25)]
     ['<END>', 'c', 'b', 'a', 'a', 'a', 'b', 'b', '<END>', '<END>', 'c', 'a', 'b', '<END>', 'a', 'b', 'a', 'd', 'd', '<END>', '<END>', 'b', 'd', 'a', 'a']
    
     >>> m = NgramModel(2)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> random.seed(2)
     >>> [m.random_token(("<START>",)) for i in range(6)]
     ['a', 'a', 'a', 'a', 'a', 'a']
     >>> [m.random_token(("b",)) for i in range(6)]
     ['c', '<END>', 'a', 'a', 'a', '<END>']
    
  5. [20 points] In the NgramModel class, write a method random_text(self, token_count) which returns a string of space-separated tokens chosen at random using the random_token(self, context) method. Your starting context should always be the $(n−1)$-tuple ("<START>", ..., "<START>"), and the context should be updated as tokens are generated. If $n=1$, your context should always be the empty tuple. Whenever the special token "<END>" is encountered, you should reset the context to the starting context.

     >>> m = NgramModel(1)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> random.seed(1)
     >>> m.random_text(13)
     '<END> c b a a a b b <END> <END> c a b'
    
     >>> m = NgramModel(2)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> random.seed(2)
     >>> m.random_text(15)
     'a b <END> a b c d <END> a b a b a b c'
    
  6. [15 points] Write a function create_ngram_model(n, path) which loads the text at the given path and creates an $n$-gram model from the resulting data. Each line in the file should be treated as a separate sentence.

     # No random seeds, so your results may vary
     >>> m = create_ngram_model(1, "frankenstein.txt"); m.random_text(15)
     'beat astonishment brought his for how , door <END> his . pertinacity to I felt'
     >>> m = create_ngram_model(2, "frankenstein.txt"); m.random_text(15)
     'As the great was extreme during the end of being . <END> Fortunately the sun'
     >>> m = create_ngram_model(3, "frankenstein.txt"); m.random_text(15)
     'I had so long inhabited . <END> You were thrown , by returning with greater'
     >>> m = create_ngram_model(4, "frankenstein.txt"); m.random_text(15)
     'We were soon joined by Elizabeth . <END> At these moments I wept bitterly and'
    
  7. [15 points] Suppose we define the perplexity of a sequence of $m$ tokens $\langle w_1, w_2, \cdots, w_m \rangle$ to be

     $$\left( \frac{1}{P(w_1, w_2, \cdots, w_m)} \right)^{\frac{1}{m}}.$$

    For example, in the case of a bigram model under the framework used in the rest of the assignment, we would generate the bigrams $\langle (w_0=\langle \text{START} \rangle , w_1), (w_1, w_2), \cdots,(w_{m−1}, w_m), (w_m, w_{m+1} = \langle \text{END}\rangle)\rangle$, and would then compute the perplexity as

     $$\left( \frac{1}{\prod_{i=1}^{m+1} P(w_i \mid w_{i-1})} \right)^{\frac{1}{m+1}}.$$

    Intuitively, the lower the perplexity, the better the input sequence is explained by the model. Higher values indicate the input was “perplexing” from the model’s point of view, hence the term perplexity.

    In the NgramModel class, write a method perplexity(self, sentence) which computes the $n$-grams for the input sentence and returns their perplexity under the current model. Hint: Consider performing an intermediate computation in log-space and re-exponentiating at the end, so as to avoid numerical overflow. (A sketch of this method and of create_ngram_model appears after this problem list.)

     >>> m = NgramModel(1)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> m.perplexity("a b")
     3.815714141844439
    
     >>> m = NgramModel(2)
     >>> m.update("a b c d")
     >>> m.update("a b a b")
     >>> m.perplexity("a b")
     1.4422495703074083
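
Possible implementation sketches (not official solutions)

The sketches below show one way each piece could be implemented using only built-in Python modules, as required above. They are hedged illustrations of the specifications, not the graded reference solutions, and internal details (helper variable names, file encoding) are assumptions of the sketch. First, problems 1 and 2:

    import string

    def tokenize(text):
        """Split text on whitespace, treating each punctuation mark as its own token."""
        tokens = []
        current = ""
        for ch in text:
            if ch.isspace():
                if current:
                    tokens.append(current)
                    current = ""
            elif ch in string.punctuation:
                if current:
                    tokens.append(current)
                    current = ""
                tokens.append(ch)  # every punctuation mark is a separate token
            else:
                current += ch
        if current:
            tokens.append(current)
        return tokens

    def ngrams(n, tokens):
        """Return (context, token) pairs, padding with n-1 <START> tokens and one <END>."""
        padded = ["<START>"] * (n - 1) + list(tokens) + ["<END>"]
        return [(tuple(padded[i - n + 1:i]), padded[i])
                for i in range(n - 1, len(padded))]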
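For problems 3 through 5, one possible shape for the NgramModel class, assuming the tokenize and ngrams helpers above; the counter attributes counts and context_totals are illustrative names, not anything required by the autograder:

    import random
    from collections import defaultdict

    class NgramModel:
        """Sketch of an order-n Markov model over tokens."""

        def __init__(self, n):
            self.n = n
            # counts[context][token] = number of times `token` followed `context`
            self.counts = defaultdict(lambda: defaultdict(int))
            # context_totals[context] = total number of tokens observed after `context`
            self.context_totals = defaultdict(int)

        def update(self, sentence):
            # Tokenize the sentence and record every (context, token) n-gram.
            for context, token in ngrams(self.n, tokenize(sentence)):
                self.counts[context][token] += 1
                self.context_totals[context] += 1

        def prob(self, context, token):
            # Maximum-likelihood estimate: count(context, token) / count(context).
            total = self.context_totals[context]
            if total == 0:
                return 0.0
            return self.counts[context][token] / total

        def random_token(self, context):
            # Draw r once, then walk the tokens seen in this context in sorted
            # order, returning the first one whose cumulative probability exceeds r.
            r = random.random()
            cumulative = 0.0
            for token in sorted(self.counts[context]):
                cumulative += self.prob(context, token)
                if r < cumulative:
                    return token
            return None  # context never observed during training

        def random_text(self, token_count):
            start = ("<START>",) * (self.n - 1)
            context = start
            tokens = []
            for _ in range(token_count):
                token = self.random_token(context)
                tokens.append(token)
                if self.n == 1 or token == "<END>":
                    context = start  # reset after <END> (and always for unigrams)
                else:
                    context = (context + (token,))[1:]  # slide the (n-1)-word window
            return " ".join(tokens)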
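Problems 6 and 7 then need only a loader and a perplexity method. The sketch below continues the NgramModel sketch above; the UTF-8 encoding and the attach-after-definition style are assumptions of this sketch (in the actual skeleton, perplexity would simply be defined inside the class):

    import math

    def create_ngram_model(n, path):
        """Build an n-gram model from a text file, treating each line as a sentence."""
        model = NgramModel(n)
        with open(path, encoding="utf-8") as f:  # encoding is an assumption about the file
            for line in f:
                model.update(line)
        return model

    def perplexity(self, sentence):
        # Compute the n-grams for the sentence, sum their log-probabilities, and
        # re-exponentiate at the end (per the hint) to avoid numerical over/underflow.
        # Note: this sketch assumes every n-gram in the sentence has nonzero probability.
        grams = ngrams(self.n, tokenize(sentence))
        log_prob = sum(math.log(self.prob(context, token)) for context, token in grams)
        return math.exp(-log_prob / len(grams))

    # Attach the method to the sketched class above.
    NgramModel.perplexity = perplexity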