Termbank
  1. C
    1. Checking Daemon
      Exercises System
    2. Celery Worker
      Checking Daemon
    3. Content Graph
      Content Courses
    4. Content Page
      Content
    5. Course
      Courses
    6. Course Instance
      Courses
    7. Course Prefix
      Courses System
  2. E
    1. Embedded Content
      Content Exercises
    2. Enrollment
      Courses System
  3. F
    1. Feedback
      Content Feedback
    2. File
      Media
    3. File Upload Exercise
      Exercises
    4. Front Page
      Content Courses
  4. H
    1. Hint
      Exercises
  5. I
    1. Instance
      Course Instance
    2. Image
      Media
  6. L
    1. Lecture Page
      Content
    2. Legacy Checker
  7. M
    1. Media File
      File
    2. Markup
      Content
    3. Media
      Media
  8. P
    1. PySenpai
  9. R
    1. Regex
    2. Repeated Exercise Generator
    3. Responsible Teacher
      Courses System
    4. Revision
      System
  10. S
    1. Slug
      System
    2. Staff
      Courses System
    3. Statistics
      Exercises
  11. T
    1. Teacher Toolbox
      System
    2. Term
      Content
    3. Textfield Exercise
    4. Triggerable Highlight
      Exercises

Repeated Template Exercises

Repeated template exercises offer a way to do two things: exercises with multiple questions, and parametrized exercises. They have been designed to reinforce learning and/or routine. On the surface they work almost exactly the same way as
textfield exercises
. Students write their answers into the answer box and get Correct/Incorrect as the response. However, instead of the exercise being marked as correct after one right answer, the student is given a new task until a set number of repetitions has been completed.
Answers are evaluated against regular expressions just like in normal textfield exercises, and questions can also include regular expressions for hints.
Currently repeated template exercises have a penalty regime where an incorrect answer terminates the session and a new session is generated. In other words, if the student makes a mistake they have to start over. There are some plans to redesign this policy.

Exercise Anatomy

Repeated template exercises have the same basic parts as any exercise: name, content, question,
feedback questions
etc. In repeated template exercises, the content box should contain an overview of the exercise; the text in the content field is always shown. In addition, repeated template exercises have a few parts that are not present in normal textfield exercises.
  1. Start session / next question / start over button - this is used to fetch new questions and question sets.
  2. Progress indicator
  3. Question area - this will be changed when a new question is fetched

Templates

Under the hood, repeated template exercises consist of templates and a session
generator
. Templates are questions with named Python formatting placeholders that indicate parametrized values (i.e. {variable}). Each question (called an instance) is assigned a random template. Templates do not all have to use the same variables. However, the generator does not know which template was selected, which in practice means that all templates have to be variations of the same type of question. If you want multiple types of questions within the same exercise, the best way is to make a single template with the following content:
{question}
Then, instead of generating a bunch of variables, your generator simply returns entire questions. This makes the generator slightly more complicated, but allows the creation of exercises with high variety. This approach will be covered a bit later in this guide.
Template title is not shown in the interface.
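To make this single-template approach concrete, here is a minimal sketch of a generator that returns entire questions through the lone {question} variable. The question pool and answer patterns below are invented for illustration, not taken from an actual course:

```python
import json
import random

# Hypothetical question pool: each entry pairs a complete question text
# with a regular expression that accepts a correct answer.
QUESTION_POOL = [
    ("Write the markup for bold text.", "[']{3}.+[']{3}"),
    ("Name the Python module used for JSON output.", "json"),
]

def generate_session(repeats):
    # Each instance fills the single {question} placeholder with a
    # full question instead of individual variables.
    instances = []
    for _ in range(repeats):
        question, pattern = random.choice(QUESTION_POOL)
        instances.append({
            "variables": {"question": question},
            "answers": [
                {"correct": True, "answer_str": pattern, "is_regex": True}
            ],
        })
    return {"repeats": instances}

if __name__ == "__main__":
    print(json.dumps(generate_session(8)))
```

The response format produced here is the one described under Response Format below; only the question texts and patterns are made up.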

Exercise Backend

The backend of a repeated template exercise consists of one or more files that generate the exercise sessions. Entire sessions are generated at once; the backend is not involved when the student answers the questions - the questions are simply pulled from the database. Sessions are requested from the
checking daemon
. There is also a separate process that pre-generates sessions into a cache, which reduces the wait when requesting a new session to nigh unnoticeable. Cached sessions are invalidated whenever the exercise is edited.
Each backend file is given a name. This is the name the file receives when it is copied to the temporary folder for execution. For instance, if you upload a library file in addition to the generator itself, the name given to the library file must match the name used when importing that library in your generator code. Likewise, the name must match the name used in the backend command. The backend command should be a valid Linux command that runs your generator. It has access to the files included as backend files, and to the server's libraries and executables.

Exercise Example

This exercise allows you to review the inline markups of Lovelace (i.e. markups that can be inside paragraphs). It also serves as a demonstration of how repeated template exercises work. This exercise doesn't care what names and content you use in your answers as long as the markup syntax is correct. I have been kind enough to include the correct answer in a hint whenever you get something wrong, but in real courses you may want less direct hints.

Session Generators

Session generators can be any Linux executables and can therefore be implemented in any programming language of your choosing (provided it is available on the Lovelace server). Examples are only provided in Python because you can use Python to create repeated template exercises for any language. If you only want a series of questions without random parameters, you can just write the response into a text file and use cat response.json as the command.

Response Format

The response is read from the generator's standard output. The response is a JSON document where the top-level object has one attribute, "repeats", a list of generated instances. Each instance object has two attributes: "variables" and "answers". Variables is an object with variable names as attribute names and variable values as attribute values. Variables can have any JSON-serializable values. They are only used for formatting the question template, so you can also just turn everything into strings.
The answers attribute of an instance object is a list of answer objects, each with four attributes: "answer_str", "hint", "correct" and "is_regex". The latter two are booleans indicating whether the answer is correct and whether it is to be interpreted as a
regular expression
(which it almost always should be), just like the checkboxes in
textfield exercise
answers. The answer_str attribute is the string the student's answer is matched against - usually a regular expression. Just like in textfield exercises, hint is a comment attached to an incorrect answer pattern - if the student's answer matches the pattern, they receive the hint.
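Conceptually, applying one of these answer objects to a student's answer resembles the sketch below. check_answer is a hypothetical helper for illustration only; the actual matching is done inside Lovelace, which uses Python regular expressions in single-line mode (approximated here with re.DOTALL):

```python
import re

def check_answer(student_answer, answer_obj):
    # Hypothetical illustration of how an answer object is interpreted;
    # not Lovelace's actual implementation.
    if answer_obj.get("is_regex", False):
        # Single-line mode: . also matches newlines.
        matched = re.fullmatch(answer_obj["answer_str"], student_answer, re.DOTALL)
    else:
        matched = student_answer == answer_obj["answer_str"]
    if not matched:
        return None  # this answer object says nothing about the answer
    if answer_obj["correct"]:
        return ("correct", None)
    return ("incorrect", answer_obj.get("hint"))
```

Note how a catch-all incorrect pattern like ".*" pairs a hint with any answer that no correct pattern accepted, which is exactly how the hints in the sample response below work.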
Here is a sample response from the example exercise (containing only two out of the eight questions):
{
    "repeats": [
        {
            "variables": {
                "markup": "bold",
                "example": "'''bold'''"
            },
            "answers": [
                {
                    "correct": true,
                    "answer_str": "[']{3}.+[']{3}",
                    "is_regex": true
                },
                {
                    "correct": false,
                    "answer_str": ".*",
                    "is_regex": true,
                    "hint": "The correct answer is:\n{{{\n&#8203'''bold'''\n}}}"
                }            
            ]
        },
        {
            "variables": {
                "markup": "term",
                "example": "[!term=Term!]hoverable term[!term!]"
            },
            "answers": [
                {
                    "correct": true,
                    "answer_str": "\[\!term\=.+\!\].+\[\!term\!\]",
                    "is_regex": true
                },
                {
                    "correct": false,
                    "answer_str": ".*",
                    "is_regex": true,
                    "hint": "The correct answer is:\n{{{\n&#8203[!term=Term!]hoverable term[!term!]\n}}}"
                }            
            ]
        }
    ]
}

Implementing a Simple Generator

Simple generators are typically sufficient for exercises that only have variations of a single question type. Mathematical problems often fall into this category. As an example, we'll create an elementary school mental arithmetic exercise. The exercise uses a single template with three variables: the operation symbol and the two operands. So, a sample template could be
What is {operand_1} {operator} {operand_2}?
An alternative would be to form the entire problem as a string inside the generator and put that into the template:
What is {problem}?
We'll go with the first approach this time. After creating one template with these variables, we can start working on our generator. Starting with the imports:
import json
import random
import sys
We're going to use sys to read the number of repetitions from the command line arguments. This way we don't need to modify the generator in the event we want to change that number. The json module is used for outputting the response, and random - well, we need to randomize some numbers. Let's define our operations and main program first:
OPERATORS = "+-*/"

if __name__ == "__main__":
    try:
        repeats = int(sys.argv[1])
    except IndexError:
        print("Not enough arguments", file=sys.stderr)
    except ValueError:
        print("Repeats must be integer", file=sys.stderr)
    else:
        
        instances = []
    
        for i in range(repeats):
            instances.append(format_instance(*generate_problem()))
    
        response = {"repeats": instances}

        print(json.dumps(response))
We're printing to sys.stderr to let the
checking daemon
know that something went wrong. It's also worth noting that we build the response as a combination of Python lists and dictionaries, as they are nicer to handle and can simply be dumped to JSON at the end with a single function call. The function that generates a problem is pretty straightforward:
def generate_problem():
    op = random.choice(OPERATORS)
    operand_2 = random.randint(1, 10)
    if op == "/":
        operand_1 = operand_2 * random.randint(1, 10)
    else:
        operand_1 = random.randint(1, 10)
    
    answer = eval("{} {} {}".format(operand_1, op, operand_2))
    
    return op, operand_1, operand_2, answer
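The eval call above is safe enough here because the evaluated string is built entirely from internally generated values. If you would rather avoid eval on principle, one alternative sketch maps the operator symbols to functions from the standard operator module (generate_problem_no_eval is an illustrative variant, not part of the downloadable example code):

```python
import operator
import random

# Maps the symbols in OPERATORS to their arithmetic functions.
OPERATOR_FUNCS = {
    "+": operator.add,
    "-": operator.sub,
    "*": operator.mul,
    "/": operator.truediv,
}

def generate_problem_no_eval():
    op = random.choice("+-*/")
    operand_2 = random.randint(1, 10)
    if op == "/":
        # Keep division results integral by construction.
        operand_1 = operand_2 * random.randint(1, 10)
    else:
        operand_1 = random.randint(1, 10)
    answer = OPERATOR_FUNCS[op](operand_1, operand_2)
    return op, operand_1, operand_2, answer
```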
And finally, we need the function that forms an instance object:
def format_instance(op, operand_1, operand_2, answer):
    instance = {
        "variables": {
            "operand_1": operand_1,
            "operand_2": operand_2,
            "operator": op
        },
        "answers": [
            {
                "correct": True,
                "answer_str": "{:.0f}".format(answer),
                "is_regex": False
            }
        ]
    }
    
    return instance
The entire code can be downloaded from below, and you can also see the exercise in action.
elementary_math_demo.py
This is a repeated template exercise example where students can do some basic mental arithmetic. Integers only.

The checking daemon is a separate multi-threaded program that is invoked whenever Lovelace needs to execute code on the command line. The most common use case is to evaluate student programs by running checking programs. When a task is sent to the checker daemon, copies of all required files are put into a temporary directory where the test will then run. The daemon also does necessary security operations to prevent malicious code from doing any actual harm.
Content graphs are objects that connect content pages to a course instance's table of contents. Content graphs have several context attributes which define how the content is linked to this particular course instance. A content graph's ordinal number and parent node affect how it is displayed in the table of contents. You can also set a deadline which will be applied to all exercises contained within the linked content page. Content graphs also define which revision of the content to show - this is used when courses are archived.
In Lovelace, content page refers to learning objects that have text content written using a markup language. All types of content pages are treated similarly inside the system and they are interchangeable. Content pages include lecture pages, and all exercise types.
In Lovelace, course refers to an abstract root course, not any specific instance of instruction. Courses are used for tying together actual instances of instruction (called course instances in Lovelace). In that sense they are like courses in the study guide, while course instances are like courses in WebOodi. The most important attributes of a course are its responsible teacher and its staff group - these define which users have access to edit content that is linked to the course.
In Lovelace, a course instance refers to an actual instance of instruction of a course. It's comparable to a course in WebOodi. Students can enroll in a course instance. Almost everything is managed by instance - student enrollments, learning objects, student answers, feedback etc. This way teachers can easily treat each instance of instruction separately. Course instances can also be archived through a process called freezing.
Course prefixes are recommended because content page and media names in Lovelace are unique across all courses. You should decide on a prefix for each course and use it for all learning objects that are not included in the course table of contents. The prefix will also make it easier to manage the learning objects of multiple courses - especially for your friendly superuser who sees everything in the admin interface...
Embedded content refers to learning objects that have been embedded to other learning objects through links written in the content of the parent object. Embedded content can be other content pages or media. When saving a content page, all embedded objects that are linked must exist. A link to embedded content is a reference that ties together course instance, embedded content and the parent content.
Enrollment is the method which connects students to course instances. All students taking a course should enroll to it. Enrollment is used for course scoring and (once implemented) access to course content. Enrollments are either automatically accepted, or need to be accepted through the enrollment management interface.
Lovelace has a built-in feedback system. You can attach any number of feedback questions to any content page, allowing you to get either targeted feedback about single exercises, or more general feedback about entire lecture pages. Unlike almost everything else, feedback questions are currently not owned by any particular course. However, feedback answers are always tied to the page the feedback is for, and also to the course instance where the feedback was given.
In Lovelace, file normally refers to a media file, managed under Files in the admin site. A file has a handle, the actual file contents (in both languages) and a download name. The file handle is how the file is referenced throughout the system. If a media file is modified by uploading a new version of the file, all references will by default fetch the latest version. The download name is the name displayed as the file header when the file is embedded, and also the default name in the download dialog. Files are linked to content through reference objects - one reference per course instance.
Media files are currently stored in the public media folder along with images - they can be addressed directly via URL.
File upload exercises are at the heart of Lovelace. They are exercises where students return one or more code files that are then evaluated by a checking program. File upload exercises can be evaluated with anything that can be run from the Linux command line, but usually a bit more sophisticated tools should be used (e.g. PySenpai). File upload exercises have a JSON format for evaluations returned by checking programs. This evaluation can include messages, hints and highlight triggers - these will ideally help the student figure out problems with their code.
The front page of a course instance is shown on the instance's index page, below the course table of contents. A front page is linked to a course instance just like any other page, but it uses the special ordinal number 0, which excludes it from the table of contents. Any page can act as the course front page.
Hints are messages that are displayed to students in various cases of answering incorrectly. Hints can be given upon making incorrect choices in choice-type exercises, and they can also be given after a certain number of attempts. In textfield exercises you can define any number of catches for incorrect answers, and attach hints to each. Hints are shown in a hint box in the exercise layout - this box will become visible if there is at least one hint to show.
Images in Lovelace are managed as media objects similar to files. They have a handle that is used for referencing, and the file itself separately. Images should always be included by reference. This way, if the image is updated, all references to it show the latest version.
Images stored on disk are accessible directly through their URL.
Lecture pages are content pages that do not have any exercise capabilities attached to them. A course instance's table of contents usually consists entirely of lecture pages. Other types of content pages (i.e. exercises) are usually embedded within lecture pages.
Legacy checker is a name for checkers that were used in previous versions of Lovelace and its predecessor Raippa. They test the student submission against a reference, comparing their outputs. If the outputs match (exactly), the submission passes. Otherwise differences in output are highlighted. It is possible to use wrapper programs to alter the outputs, or output different things (e.g. testing return values of individual functions). Legacy checkers should generally be avoided because they are very limiting and often frustrating for students. Legacy checking is still occasionally useful for comparing compiler outputs etc.
Lovelace uses its own wiki style markup for writing content. Beyond basic formatting features, the markup is also used to embed content pages and media, mark highlightable sections in text and create hover-activated term definition popups.
In Lovelace, media refers to embeddable files etc. These come in three categories: images, files and video links. Like content pages, media objects are managed by reference using handles. Unlike other types of files, media files are publicly accessible to anyone who can guess the URL.
PySenpai is a library/framework for creating file upload exercise checking programs. It uses a callback-based architecture to create a consistent and highly customizable testing process. On the one hand it provides reasonable defaults for basic checking programs making them relatively straightforward to implement. On the other hand it also supports much more complex checking programs. Currently PySenpai supports Python, C, Y86 Assembly and Matlab.
Regular expressions are a necessary evil in creating textfield and repeated template exercises. Lovelace uses Python regular expressions in single-line mode.
A generator acts as a backend for repeated template exercises, and provides the random values and their corresponding answers to the frontend. Generators can be written in any programming language that can be executed on the Lovelace server. Generators need to return a JSON document by printing it to stdout.
Responsible teacher is the primary teacher in charge of a course. Certain actions are available only to responsible teachers. These actions include managing enrollments and course instances.
Lovelace uses Django Reversion to keep track of the version history of all learning objects. This can sometimes be useful if you need to restore a previous version after mucking something up. However, the primary purpose is to have access to historical copies of learning objects for archiving purposes. When a course instance is archived, it uses the revision attribute of all its references to set which historical version should be fetched when the learning object is shown. Student answers also include the revision number of the exercise that was active at the time of saving the answer.
Slug is the lingo word for names used in URLs. Slugs are automatically generated for courses, course instances and content pages. Slugs are all-lowercase, with all non-alphanumeric characters replaced with dashes. A similar naming scheme is recommended for other types of learning objects as well, although they do not use generated slugs.
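As a rough illustration, the naming scheme described above can be approximated in Python like this (slugify is a hypothetical helper, not the function Lovelace itself uses):

```python
import re

def slugify(name):
    # All-lowercase, runs of non-alphanumeric characters become dashes.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
```

For example, slugify("Repeated Template Exercises") gives "repeated-template-exercises".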
Staff members are basically your TAs. Staff members can see pages hidden from normal users and they can edit and create content (within the confines of the courses they have been assigned to). They can also view answer statistics and evaluate student answers in manually evaluated exercises. Staff members are assigned to courses via staff group.
Lovelace has answer statistics for all exercises. Statistics are collected per instance, and allow you to review how many times an exercise has been answered, what's the success rate etc. All of this can be helpful in identifying where students either have difficulties, or the exercise itself is badly designed. For some types of exercises, there's also more detailed information about answers that have been given. Statistics can be accessed from the left hand toolbox for each exercise.
Teacher toolbox is located on the left-hand side of each exercise. It has options to view statistics, view feedback about the exercise and edit the exercise. For file upload exercises there is also an option to download all answers as a zip file. Do note that this takes some time.
Terms are keywords that are linked to descriptions within your course. They will be collected into the course term bank, and the keyword can also be used to make term hint popups on any content page. Terms can include multiple tabs and links to pages that are relevant to the term. For instance, this term has a tab for examples, and a link to the page about terms.
Textfield exercises are exercises where the student gives their answer by writing into a text box. This answer is evaluated against predefined answers that can be either correct (accepting the exercise) or incorrect (giving a related hint). Almost always these answers are defined as regular expressions - exact matching is simply far too strict.
Triggerable highlights can be used in content pages to mark passages that can be highlighted by triggers from file upload exercise evaluation responses. When a highlight is triggered the passage will be highlighted. This feature is useful for drawing student attention to things they may have missed. Exercises can trigger highlights in their own description, or in their parent page. It is usually a good idea to use exercise specific prefixes for highlight trigger names.