1. C
    1. Checking Daemon
      Exercises System
    2. Celery Worker
      Checking Daemon
    3. Content Graph
      Content Courses
    4. Content Page
    5. Course
    6. Course Instance
    7. Course Prefix
      Courses System
  2. E
    1. Embedded Content
      Content Exercises
    2. Enrollment
      Courses System
  3. F
    1. Feedback
      Content Feedback
    2. File
    3. File Upload Exercise
    4. Front Page
      Content Courses
  4. H
    1. Hint
  5. I
    1. Instance
      Course Instance
    2. Image
  6. L
    1. Lecture Page
    2. Legacy Checker
  7. M
    1. Media File
    2. Markup
    3. Media
  8. P
    1. PySenpai
  9. R
    1. Regex
    2. Repeated Exercise Generator
    3. Responsible Teacher
      Courses System
    4. Revision
  10. S
    1. Slug
    2. Staff
      Courses System
    3. Statistics
  11. T
    1. Teacher Toolbox
    2. Term
    3. Textfield Exercise
    4. Triggerable Highlight

Introduction to PySenpai

PySenpai is a program checking framework, developed with pedagogical goals in mind. It was originally created to unify the behavior of checking programs in the Elementary Programming course and to make them easier to update and fix. It was first written for Python exercises but has since been extended to work with C, Y86 assembly and Matlab. The key principles behind PySenpai's design are:
  1. Provide a unified testing process for all checkers
  2. Make simple checkers easy to implement
  3. Make complex checkers possible to implement
  4. Allow customization of messages
  5. Provide reasonable default feedback even for the most minimal checkers
  6. Discourage cheating
To achieve these goals, PySenpai uses a callback-based architecture where the testing process itself runs within PySenpai test functions. Checker developers implement callback functions that are called at certain stages of the testing process and can influence the way the student program is tested. If you are unfamiliar with these kinds of architectures, the entire process can feel a bit like a black box. But don't worry, that's what this guide chapter is here for.
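The callback-driven flow described above can be sketched as plain Python. The names below (run_test, default_validator) are illustrative stand-ins, not the actual PySenpai API - the point is only that the framework owns the loop and the checker author supplies data and callbacks:

```python
# Sketch of a callback-based testing process, in the spirit of PySenpai's
# architecture. All names are illustrative, not the real PySenpai API.

def run_test(cases, reference, student, validator):
    """Run each case through both implementations; the validator callback
    raises AssertionError to fail a case."""
    verdicts = []
    for args in cases:
        expected = reference(*args)   # reference result, computed first
        actual = student(*args)       # student result
        try:
            validator(expected, actual)
            verdicts.append("pass")
        except AssertionError:
            verdicts.append("fail")
    return verdicts

def default_validator(expected, actual):
    # The default behavior: simple equality comparison.
    assert expected == actual

# The checker author only supplies data and callbacks:
print(run_test(
    cases=[(2,), (3,)],
    reference=lambda x: x * 2,
    student=lambda x: x + x,      # stands in for the loaded submission
    validator=default_validator,
))  # → ['pass', 'pass']
```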

PySenpai Operation

In many ways PySenpai operates like unit test frameworks - it's just a bit more specialized. At the most basic level, testing consists of preparation and running the tests. Basic preparation starts with loading the student code with PySenpai's loading function (which handles errors gracefully). If the module is successfully loaded, the checker can proceed to call one or more of PySenpai's test functions. For most test functions, checkers need to provide test vectors (or test vector generators) and reference implementations that match the expected behavior of the student submission. After there are no more test functions to call, PySenpai automatically outputs the test report upon exit.
Failures when interacting with student code are always caught and handled by PySenpai and logged properly in the evaluation report (there are, however, some fringe exceptions that cannot be caught). Failures in the checker code are let through to stderr. This will always result in Lovelace reporting only a checker failure to the student - no evaluation will be done.

Customizing Loading Behavior

Compared to the actual test functions, there isn't that much to customize in loading behavior. Most of the customization is related to how inputs and outputs are handled. You can provide a list of strings as inputs to the student program; these are written to stdin so that if the student program needs inputs while it's being loaded, it has them. You can also set flags that control whether code output is shown in the evaluation report and whether it is allowed at all. Output presentation can be customized as well (see below). Customization of non-Python loaders is treated in each extension's chapter.

Available Tests

For Python programs, PySenpai offers five kinds of tests (although one of them has been implemented for an exercise type that is not yet available in Lovelace). These tests are:
  1. Function test. Calls a single student function and compares its behavior to a reference. This is usually the most important test, and is extremely flexible.
  2. Program test. Tests the student main program and compares its behavior to a reference (function). This test can be used to test whole programs, and is also quite flexible. However, it can only test code that is executed when the module is imported (i.e. not code under if __name__ == "__main__":).
  3. Code snippet test. Tests a code snippet provided as a string. The snippet is inserted into a temporary module by a constructor function and then executed. The namespace of the executed module is compared with a reference object. Currently not in use, but there are plans to make textfield exercises that use this functionality instead of regular expressions.
  4. Static test. This test is for custom source code validation, and can inspect either the code of a single function or the entire program. Mostly used in current checkers for rejecting submissions that use solutions that have been specifically forbidden in the exercise. Static tests can also be used as information only, in which case they do not affect the evaluation result.
  5. Lint test. This test uses PyLint to generate a code quality analysis of the submission. The analysis can be used as an evaluation criterion, or it can be provided just as extra information for the student. Sadly, PyLint itself doesn't support gettext, so these messages will always be in English.

Customizing Function Tests

The function test function has a number of parameters that can be used to affect its behavior. There's a total of 17 different optional parameters, the majority of which are callback functions. While this may sound a bit intimidating, for most checkers the defaults are perfectly adequate. The behavior is of course also defined by the mandatory parameters. This section just outlines the options you have for customization, listed in rough categories - implementation details are provided in a separate guide chapter. Some of the customization options are related to output formatting; these will be treated separately (see below).

Test Vector and Reference

Both the test vector and the reference implementation are mandatory, and for some checkers they are all that's needed. You can provide a list as the test vector, or a function that returns a list. The number of test cases is derived directly from the test vector's length. For function tests, each case is a list of arguments to be used when calling the reference function and the student function. You can also provide an optional input vector - if provided, it must be the same length as the test vector, and each case is a list of strings to be written into stdin.
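The shapes described above can be illustrated with a small sketch (the variable names are examples, not part of PySenpai's API):

```python
# Illustrative shapes of a test vector and a matching input vector.

# Each case is a list of arguments for the tested function:
test_vector = [
    [1, 2],
    [10, -4],
    [0, 0],
]

# Optional input vector: same length, each case a list of stdin lines:
input_vector = [
    [],            # case 1 reads nothing
    ["yes"],       # case 2 reads one line
    ["no", "42"],  # case 3 reads two lines
]

# The number of test cases follows directly from the vector length:
assert len(test_vector) == len(input_vector) == 3
```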
The reference function is a function that provides the desired result for each test case. In normal cases it does exactly what the student submission is expected to do. However, there are certain scenarios where it needs to behave differently. The most common example is the way PySenpai deals with inputs: reference functions are not supposed to consume inputs. The reference can be given the inputs as a list, but it has to simulate input reading by taking values directly from that list, whereas the student submission actually reads stdin. Basically this just removes an extra step, but it does mean you cannot simply copy-paste an input reading function from your reference program to use as the checker's reference function.
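A minimal sketch of the difference, with illustrative function names - the same logic, but the reference takes its inputs from a list instead of stdin:

```python
# Why the reference cannot just copy-paste the submission's input reading.

def student_read_sum():
    """What the submission might look like: reads two numbers from stdin."""
    a = int(input())
    b = int(input())
    return a + b

def reference_read_sum(inputs):
    """The checker's reference: same logic, but inputs arrive as a list."""
    a = int(inputs[0])
    b = int(inputs[1])
    return a + b

print(reference_read_sum(["3", "4"]))  # → 7
```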
An important thing to bear in mind is that all reference results are generated in advance and stored, i.e. once PySenpai starts to interact with the student function, the reference is no longer interacted with. Evaluation is done against the stored results.

Result Forming

For function tests, the result is formed of two parts: return values and output (the contents of stdout after running the student function). By default these are fed to validators as is. However, both can be modified prior to evaluation using filtering callbacks. The more common use case is an output parser that converts the raw output into parsed values. When parsing is done separately, the default validators can cover more ground; if parsing were done in the validator instead, a custom validator would be needed for every test that cares about output. It also makes the evaluation report better, because the parsed values can be shown along with the full output.
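An output parser can be as simple as the sketch below (the name parse_numbers is illustrative): it turns free-form program output into values that a default equality-based validator can compare.

```python
# Sketch of an output parser: convert raw stdout into comparable values.
import re

def parse_numbers(raw_output):
    """Pull all integers out of free-form program output."""
    return [int(m) for m in re.findall(r"-?\d+", raw_output)]

raw = "The result is 42 and the remainder is 7"
print(parse_numbers(raw))  # → [42, 7]
```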
The other half, the return value, can also be altered. This is especially meant for testing functions that do not return anything but instead modify an existing object. You can write a callback function that chooses the result object to use in place of the return value. It can be chosen or formed from the test arguments, the return value and the parsed output. For example, if a function modifies a list it receives as an argument, you would simply write a function that returns the corresponding argument, and it will be treated as the "return value" for the remaining test stages. These functions are called result object extractors, and the reasoning for using them is similar to that of output parsers.
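The list-modifying example above can be sketched like this (the extractor signature is illustrative, not the exact one PySenpai uses):

```python
# Sketch of a result object extractor for a function that modifies its
# argument in place instead of returning a value.

def student_sort_in_place(numbers):
    numbers.sort()          # modifies the list, returns None

def extract_result(args, return_value, parsed_output):
    """Choose the first argument as the "return value" for later stages."""
    return args[0]

args = [[3, 1, 2]]
rv = student_sort_in_place(*args)       # rv is None
result = extract_result(args, rv, None)
print(result)  # → [1, 2, 3]
```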

Validators

The validator is responsible for deciding whether a test passes or not. By default, PySenpai validates student functions by comparing their return values with the reference. It also provides a few built-in replacements (e.g. validating output values instead of return values). However, implementing custom validators is the best way to provide more accurate feedback about what went wrong, especially in more complex assignments. Validators are functions that can contain any number of assert statements, allowing the comparison to be done in several steps. Each assert statement in the validator can be accompanied by a different rejection message, which will be shown as the reason for failing the test in the evaluation log.
Custom validators are also sometimes necessary just because a checker needs to evaluate complex objects where simple equality testing is not reliable. On a similar note, checkers can have some leniency in their validation which can be very important in reducing student frustration. For instance, functions that perform multiple floating point operations can have rounding errors when the implementation is different from the reference but just as correct. In this scenario using a rounding validator is likely to result in a better experience.
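A custom validator combining both ideas - stepwise asserts with distinct rejection handles, and floating point leniency - might look like this sketch (the function name and message handles are made up for illustration):

```python
# Sketch of a custom validator: several asserts, each carrying its own
# rejection message handle, plus lenient float comparison.
import math

def validate_stats(expected, actual):
    assert isinstance(actual, tuple), "wrong_return_type"
    assert len(actual) == 2, "wrong_tuple_length"
    # Lenient comparison avoids failing correct implementations whose
    # floating point operations happen in a different order:
    assert math.isclose(actual[0], expected[0], rel_tol=1e-6), "wrong_mean"
    assert math.isclose(actual[1], expected[1], rel_tol=1e-6), "wrong_stdev"

# A result that differs only by rounding error passes:
validate_stats((2.0, 0.8164965809), (2.0000000001, 0.8164965810))
print("ok")
```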
PySenpai also has a separate stage for validating messages in the student code. This helps students differentiate between functional issues in their submission and problems with its output messages. If you want to test that student code gives certain messages with certain arguments / inputs, it should be done with a message validator.

Extra Analysis

Analysis callbacks are functions that are called after validation if the student submission didn't pass. These can be used to pinpoint problems in the evaluation log and provide additional hints. There is one built-in check that is enabled by default: it lets the student know their function returned the same result regardless of arguments/inputs. Further analysis needs to be provided as callback functions. There are three categories that can be used:
  1. error references
  2. custom tests
  3. information functions
Error references are functions that simulate typical student mistakes in the assignment. The student result is validated against each error reference function, and if any of them match, a related message is added to the evaluation log. They are usually simple to implement because they're just modified copies of the real reference function. However, knowing what the typical mistakes are may take a few iterations of teaching the course.
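The idea can be sketched as follows - an error reference is just the real reference with a typical bug deliberately reintroduced (all names here are illustrative):

```python
# Sketch of an error reference: a deliberately wrong copy of the real
# reference that reproduces a typical student mistake.

def reference(n):
    """Correct behavior: sum of 1..n."""
    return sum(range(1, n + 1))

def error_reference_off_by_one(n):
    """Typical mistake: range(1, n) drops the last value."""
    return sum(range(1, n))

def diagnose(student_result, args):
    """If the student result matches an error reference, name the mistake."""
    if student_result == error_reference_off_by_one(*args):
        return "off_by_one_in_range"
    return None

# A submission with the off-by-one bug returns 10 for n=5 (not 15):
print(diagnose(10, (5,)))  # → off_by_one_in_range
```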
Custom tests are additional validators that work with extra information. Just like validators, they can do a series of assert statements to find out what's wrong. However unlike validators, they have access to raw output, arguments and inputs in addition to what's available to normal validators. Information functions have access to the same data but instead of doing assert statements, they are expected to return something which will be formatted into a feedback message.

Customizing Messages

Messaging in PySenpai is based on Python dictionaries where each message is accessed via a key that consists of the message handle and language. PySenpai has default messages in Finnish and English. The language can be chosen when invoking a checker by using the -l or --lang option. When implementing checkers, you can add your own messages by creating a similar dictionary (there's a convenience class for doing this) and pass it to PySenpai functions. At the beginning of each function, the default messages dictionary will be updated by messages from the dictionary provided by the checker. This can be used to add new messages (for validators and analysis functions) and to override existing messages.
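The key-by-handle-and-language scheme and the update-based override can be illustrated with plain dictionaries (the exact key structure and the convenience class in PySenpai may differ; this is only a sketch of the described mechanism):

```python
# Sketch of language-keyed message dictionaries and override-by-update.

default_messages = {
    ("correct", "en"): "Your function returned the correct result.",
    ("correct", "fi"): "Funktiosi palautti oikean tuloksen.",
    ("incorrect", "en"): "Your function returned an incorrect result.",
}

checker_messages = {
    # new message, e.g. for a custom validator:
    ("wrong_mean", "en"): "The mean was calculated incorrectly.",
    # override of an existing default message:
    ("incorrect", "en"): "The cipher was not decoded correctly.",
}

messages = dict(default_messages)
messages.update(checker_messages)   # checker-provided messages win

print(messages[("incorrect", "en")])  # → The cipher was not decoded correctly.
```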
Messages in PySenpai consist of the message content, list of hints and list of triggers (the latter two being optional). The message content can also contain certain named placeholders which can be used to show values of relevant variables. The available placeholder names for each message can be found from the full message specification.
In addition to customizable messages, PySenpai also uses presenters for certain values in the testing process, namely: argument vector, input vector, reference result, student result, parsed student result and function call. These allow you to show information in a way that makes sense. For instance, if the result you are validating in tests is an object, printing it without a presenter would show something like <__main__.Result object at 0x7f984f5b24a8> which is obviously not very useful in terms of feedback. In this case you'd implement a presenter that returns a nice representation of relevant attributes within that class instead.
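A presenter for an object result could look like the following sketch (class and function names are illustrative):

```python
# Sketch of a presenter: without it, feedback would show the default repr
# like <__main__.Result object at 0x7f984f5b24a8>.

class Result:
    def __init__(self, score, label):
        self.score = score
        self.label = label

def present_result(value):
    """Return a readable representation of the relevant attributes."""
    return "Result(score={}, label={!r})".format(value.score, value.label)

r = Result(0.75, "partial")
print(present_result(r))  # → Result(score=0.75, label='partial')
```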
When implementing custom validators and info functions, you need to add corresponding messages. For validators, each assertion should raise a different message handle, and this handle should be found from the messages dictionary of your checker.
The checking daemon is a separate multi-threaded program that is invoked whenever Lovelace needs to execute code on the command line. The most common use case is to evaluate student programs by running checking programs. When a task is sent to the checking daemon, copies of all required files are put into a temporary directory where the test will then run. The daemon also performs the necessary security operations to prevent malicious code from doing any actual harm.
Content graphs are objects that connect content pages to a course instance's table of contents. Content graphs have several context attributes which define how the content is linked to this particular course instance. A content graph's ordinal number and parent node affect how it is displayed in the table of contents. You can also set a deadline which will be applied to all exercises contained within the linked content page. Content graphs also define which revision of the content to show - this is used when courses are archived.
In Lovelace, content page refers to learning objects that have text content written using a markup language. All types of content pages are treated similarly inside the system and they are interchangeable. Content pages include lecture pages, and all exercise types.
  1. Description
  2. Relations
In Lovelace, course refers to an abstract root course, not any specific instance of instruction. Courses are used for tying together actual instances of instruction (called course instances in Lovelace). In that sense they are like courses in the study guide, while course instances are like courses in WebOodi. The most important attributes of a course are its responsible teacher and its staff group - these define which users have access to edit content that is linked to the course.
  1. Description
  2. Relations
  3. Cloning and Archiving
In Lovelace, a course instance refers to an actual instance of instruction of a course. It's comparable to a course in WebOodi. Students can enroll in a course instance. Almost everything is managed per instance - student enrollments, learning objects, student answers, feedback etc. This way teachers can easily treat each instance of instruction separately. Course instances can also be archived through a process called freezing.
Course prefixes are recommended because content page and media names in Lovelace are unique across all courses. You should decide on a prefix for each course and use it for all learning objects that are not included in the course table of contents. The prefix will also make it easier to manage the learning objects of multiple courses - especially for your friendly superuser who sees everything in the admin interface...
  1. Description
  2. Examples
Embedded content refers to learning objects that have been embedded in other learning objects through links written in the content of the parent object. Embedded content can be other content pages or media. When saving a content page, all linked embedded objects must exist. A link to embedded content is a reference that ties together the course instance, the embedded content and the parent content.
Enrollment is the mechanism that connects students to course instances. All students taking a course should enroll in it. Enrollment is used for course scoring and (once implemented) access to course content. Enrollments are either accepted automatically, or need to be accepted through the enrollment management interface.
Lovelace has a built-in feedback system. You can attach any number of feedback questions to any content page, allowing you to get either targeted feedback about single exercises, or more general feedback about entire lecture pages. Unlike almost everything else, feedback questions are currently not owned by any particular course. However, feedback answers are always tied to the page the feedback is for, and also to the course instance where the feedback was given.
  1. Description
  2. Archiving
  3. Embedding
In Lovelace, file normally refers to a media file, managed under Files in the admin site. A file has a handle, the actual file contents (in both languages) and a download name. The file handle is how the file is referenced throughout the system. If a media file is modified by uploading a new version of the file, all references will by default fetch the latest version. The download name is the name displayed as the file header when it's embedded, and also as the default name in the download dialog. Files are linked to content through reference objects - one reference per course instance.
Media files are currently stored in the public media folder along with images - they can be addressed directly via URL.
  1. Description
  2. Legacy Checkers
File upload exercises are at the heart of Lovelace. They are exercises where students return one or more code files that are then evaluated by a checking program. File upload exercises can be evaluated with anything that can be run from the Linux command line, but usually a bit more sophisticated tools should be used (e.g. PySenpai). File upload exercises have a JSON format for evaluations returned by checking programs. This evaluation can include messages, hints and highlight triggers - these will ideally help the student figure out problems with their code.
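To give an idea of the shape of such an evaluation, here is a hypothetical illustration: the field names below are made up for this sketch, and the actual structure is defined by Lovelace's evaluation JSON format.

```python
# Hypothetical sketch of an evaluation document a checker might print;
# all field names are illustrative, not Lovelace's real format.
import json

evaluation = {
    "correct": False,
    "messages": ["Test 3 failed: wrong return value."],
    "hints": ["Check what your function does with an empty list."],
    "triggers": ["myexercise_empty_list"],   # highlight trigger names
}

print(json.dumps(evaluation))
```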
The front page of a course instance is shown on the instance's index page, below the course table of contents. The front page is linked to a course instance just like any other page, but it uses the special ordinal number 0, which excludes it from the table of contents. Any page can act as the course front page.
Hints are messages that are displayed to students in various cases of answering incorrectly. Hints can be given upon making incorrect choices in choice-type exercises, and they can also be given after a certain number of attempts. In textfield exercises you can define any number of catches for incorrect answers, and attach hints to each. Hints are shown in a hint box in the exercise layout - this box will become visible if there is at least one hint to show.
  1. Description
  2. Archiving
  3. Embedding
Images in Lovelace are managed as media objects, similar to files. They have a handle that is used for referencing, and the file itself separately. Images should always be included by reference. This way, if the image is updated, all references to it always show the latest version.
Images stored on disk are accessible directly through their URL.
Lecture pages are content pages that do not have any exercise capabilities attached to them. A course instance's table of contents usually consists entirely of lecture pages. Other types of content pages (i.e. exercises) are usually embedded within lecture pages.
Legacy checker is a name for checkers that were used in previous versions of Lovelace and its predecessor Raippa. They test the student submission against a reference, comparing their outputs. If the outputs match (exactly), the submission passes. Otherwise differences in output are highlighted. It is possible to use wrapper programs to alter the outputs, or output different things (e.g. testing return values of individual functions). Legacy checkers should generally be avoided because they are very limiting and often frustrating for students. Legacy checking is still occasionally useful for comparing compiler outputs etc.
Lovelace uses its own wiki style markup for writing content. Beyond basic formatting features, the markup is also used to embed content pages and media, mark highlightable sections in text and create hover-activated term definition popups.
In Lovelace, media refers to embeddable files etc. These come in three categories: images, files and video links. Like content pages, media objects are managed by reference using handles. Unlike other types of files, media files are publicly accessible to anyone who can guess the URL.
PySenpai is a library/framework for creating file upload exercise checking programs. It uses a callback-based architecture to create a consistent and highly customizable testing process. On the one hand it provides reasonable defaults for basic checking programs making them relatively straightforward to implement. On the other hand it also supports much more complex checking programs. Currently PySenpai supports Python, C, Y86 Assembly and Matlab.
Regular expressions are a necessary evil in creating textfield and repeated template exercises. Lovelace uses Python regular expressions in single line mode.
A generator acts as a backend for repeated template exercises, and provides the random values and their corresponding answers to the frontend. Generators can be written in any programming language that can be executed on the Lovelace server. Generators need to return a JSON document by printing it to stdout.
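A minimal generator following the description above could look like this sketch - the JSON field names here are hypothetical, not the document structure Lovelace actually expects:

```python
# Sketch of a repeated exercise generator: produce random values and the
# matching answers, then print them as JSON to stdout. Field names are
# made up for illustration.
import json
import random

def generate():
    a = random.randint(1, 10)
    b = random.randint(1, 10)
    return {
        "variables": {"a": a, "b": b},   # values shown to the student
        "answers": [str(a + b)],         # accepted answers
    }

print(json.dumps(generate()))
```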
Responsible teacher is the primary teacher in charge of a course. Certain actions are available only to responsible teachers. These actions include managing enrollments and course instances.
Lovelace uses Django Reversion to keep track of version history for all learning objects. This can sometimes be useful if you need to restore a previous version after mucking something up. However, the primary purpose is to have access to historical copies of learning objects for archiving purposes. When a course instance is archived, the revision attribute of each of its references is set to determine which historical version should be fetched when the learning object is shown. Student answers also include the revision number of the exercise that was active at the time of saving the answer.
Slug is the lingo word for names used in URLs. Slugs are automatically generated for courses, course instances and content pages. Slugs are all-lowercase, with all non-alphanumeric characters replaced by dashes. A similar naming scheme is recommended for other types of learning objects as well, although they do not use generated slugs.
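The described scheme can be sketched in a few lines (this is an illustration of the naming rule, not Lovelace's exact implementation):

```python
# Sketch of the slug scheme: lowercase, runs of non-alphanumeric
# characters collapsed into dashes.
import re

def slugify(name):
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")

print(slugify("Elementary Programming 2018 (Fall)"))
# → elementary-programming-2018-fall
```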
Staff members are basically your TAs. Staff members can see pages hidden from normal users and they can edit and create content (within the confines of the courses they have been assigned to). They can also view answer statistics and evaluate student answers in manually evaluated exercises. Staff members are assigned to courses via staff group.
Lovelace has answer statistics for all exercises. Statistics are collected per instance, and allow you to review how many times an exercise has been answered, what's the success rate etc. All of this can be helpful in identifying where students either have difficulties, or the exercise itself is badly designed. For some types of exercises, there's also more detailed information about answers that have been given. Statistics can be accessed from the left hand toolbox for each exercise.
The teacher toolbox is located on the left hand side of each exercise. It has options to view statistics, view feedback about the exercise and edit the exercise. For file upload exercises there is also an option to download all answers as a zip file. Do note that this takes some time.
  1. Description
  2. Examples
Terms are keywords that are linked to descriptions within your course. They will be collected into the course term bank, and the keyword can also be used to make term hint popups on any content page. Terms can include multiple tabs and links to pages that are relevant to the term. For instance, this term has a tab for examples, and a link to the page about terms.
Textfield exercises are exercises where the student gives their answer by writing into a text box. This answer is evaluated against predefined answers that can be either correct (accepting the exercise) or incorrect (giving a related hint). Almost always these answers are defined as regular expressions - exact matching is simply far too strict.
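A quick illustration of why regular expressions beat exact matching here: one pattern accepts many equivalent answers (the pattern below is an invented example).

```python
# One lenient pattern accepts whitespace variants of the same answer.
import re

pattern = r"\s*set\s*\(\s*\)\s*"   # accepts "set()", " set ( ) ", etc.

print(bool(re.fullmatch(pattern, "set()")))       # → True
print(bool(re.fullmatch(pattern, "  set (  ) ")))  # → True
print(bool(re.fullmatch(pattern, "dict()")))       # → False
```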
  1. Description
  2. Markup
  3. Triggering
Triggerable highlights can be used in content pages to mark passages that can be highlighted by triggers from file upload exercise evaluation responses. When a highlight is triggered, the passage will be highlighted. This feature is useful for drawing student attention to things they may have missed. Exercises can trigger highlights in their own description, or in their parent page. It is usually a good idea to use exercise-specific prefixes for highlight trigger names.