
Introduction to PySenpai

PySenpai is a program checking framework developed with pedagogical goals in mind. It was originally created to unify the behavior of checking programs in the Elementary Programming course and to make them easier to update and fix. It was initially written for Python exercises but has since been extended to work with C, Y86 assembly and Matlab. The key principles behind PySenpai's design are:
  1. Provide a unified testing process for all checkers
  2. Make simple checkers easy to implement
  3. Make complex checkers possible to implement
  4. Allow customization of messages
  5. Provide reasonable default feedback even for the most minimal checkers
  6. Discourage cheating
To achieve these goals, PySenpai uses a callback-based architecture where the testing process itself runs within PySenpai's test functions. Checker developers implement callback functions that are called at certain stages of the testing process and can influence the way the student program is tested. If you are unfamiliar with these kinds of architectures, the entire process can feel a bit like a black box. But don't worry, that's what this guide chapter is here for.

PySenpai Operation

In many ways PySenpai operates like unit test frameworks - it's just a bit more specialized. At the most basic level, testing consists of preparation and running the tests. Preparation starts with loading the student code with PySenpai's loading function (which handles errors gracefully). If the module is successfully loaded, the checker can proceed to call one or more of PySenpai's test functions. For most test functions, checkers need to provide test vectors (or test vector generators) and reference implementations that match the expected behavior of the student submission. Once there are no more test functions to call, PySenpai automatically outputs the test report upon exit.
Failures when interacting with student code are always caught and handled by PySenpai and logged properly in the evaluation report (there are, however, some fringe exceptions that cannot be caught). Failures in the checker code itself are let through to stderr. This always results in Lovelace reporting only a checker failure to the student - no evaluation is done.
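To make this flow concrete, below is a minimal sketch of a checker's structure. Only the test vector and the reference function are real code here; the PySenpai loading and test function calls are indicated as comments, because their exact names and signatures depend on the PySenpai version in use, and the function name add_numbers is just an invented example.

```python
# Minimal checker sketch (assumed structure, not a verbatim PySenpai API).

# Test vector: each case is the argument list for one test call.
test_vector = [
    [1, 2],
    [10, -3],
    [0, 0],
]

def reference(a, b):
    # Reference implementation: does exactly what the student's
    # add_numbers function is expected to do.
    return a + b

# The checker would then:
# 1. load the student module with PySenpai's loading function (loading
#    errors are caught and reported gracefully),
# 2. call a PySenpai function test with the student module, the tested
#    function name ("add_numbers"), test_vector and reference,
# 3. exit, at which point PySenpai writes the evaluation report.
```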

Customizing Loading Behavior

Compared to the actual test functions, there isn't that much to customize in loading behavior. Most of the customization is related to how inputs and outputs are handled. You can provide a list of strings as inputs to the student program; these are written to stdin so that if the student program needs inputs while it is being loaded, it has them. You can also set flags that control whether code output is shown in the evaluation report and whether it is allowed at all, and you can customize how the output is presented (see below). Customization of non-Python loaders is treated in each extension's chapter.
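As a small illustration, load-time inputs are just a list of strings; each string is one line available to the student module's input() calls during import. The parameter names for passing these to the loader vary, so consult the PySenpai reference for the exact keyword arguments.

```python
# Hypothetical load-time inputs: one string per line written to stdin
# while the student module is being imported.
load_inputs = ["Alice", "42"]
```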

Available Tests

For Python programs, PySenpai offers five kinds of tests (although one of them has been implemented for an exercise type that is not yet available in Lovelace). These tests are:
  1. Function test. Calls a single student function and compares its behavior to a reference. This is usually the most important test, and is extremely flexible.
  2. Program test. Tests the student main program and compares its behavior to a reference (function). This test can be used to test whole programs, and is also quite flexible. However, it can only test code that is executed when the module is imported (i.e. not code under if __name__ == "__main__":).
  3. Code snippet test. Tests a code snippet provided as a string. The snippet is inserted into a temporary module by a constructor function and then executed. The namespace of the executed module is compared with a reference object. Currently not in use, but there are plans to make textfield exercises that use this functionality instead of regular expressions.
  4. Static test. This test is for custom source code validation, and can inspect either the code of a single function or the entire program. Mostly used in current checkers for rejecting submissions that use solutions that have been specifically forbidden in the exercise. Static tests can also be used as information only, in which case they do not affect the evaluation result.
  5. Lint test. This test uses PyLint to generate a code quality analysis of the submission. This analysis can be used as an evaluation criterion, or it can be provided just as extra information for the student. Sadly, PyLint itself doesn't support gettext, so the messages will always be in English.

Customizing Function Tests

The function test function has a number of parameters that can be used to affect its behavior. There is a total of 17 different optional parameters, the majority of which are callback functions. While this may sound a bit intimidating, for most checkers the defaults are perfectly adequate. The behavior is of course also defined by the mandatory parameters. This section only outlines the customization options - implementation details are provided in a separate guide chapter. Options related to output formatting are treated separately (see below); the rest, which affect the behavior of the test function itself, are described here in rough categories.

Test Vector and Reference

Both the test vector and the reference implementation are mandatory, and for some checkers they are all that's needed. You can provide a list as the test vector, or a function that returns a list. The number of test cases is derived directly from the test vector's length. For function tests, each case is a list of arguments to be used when calling the reference function and the student function. You can also provide an optional input vector - if provided, it must be the same length as the test vector, and each case is a list of strings to be written into stdin.
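As a sketch, a checker that tests a function taking one argument and reading one value from the keyboard could set up its vectors like this (plain Python; no PySenpai-specific names are involved):

```python
import random

def gen_vector():
    # Test vector generator: returns a fresh list of test cases,
    # each case being the argument list for one call.
    return [[random.randint(1, 10)] for _ in range(5)]

test_vector = gen_vector()

# Optional input vector: same length as the test vector; each case is the
# list of strings written to stdin for the corresponding call.
input_vector = [[str(random.randint(1, 100))] for _ in test_vector]
```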
The reference function is a function that provides the desired result for each test case. In normal cases it does exactly what the student submission is expected to do. However, there are certain scenarios where it needs to behave differently. The most common example is the way PySenpai deals with inputs: reference functions are not supposed to consume inputs. The reference can be given the inputs as a list, but it has to simulate input reading by consuming values directly from that list, whereas the student submission actually reads from stdin. Essentially this just removes an extra step, but it does mean you cannot simply copy-paste an input-reading function from your reference program to serve as the checker's reference function.
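To illustrate the difference, here is a sketch. How the input list is handed to the reference (here as an extra parameter) is an assumption made for the example; check the PySenpai reference for the convention your version uses.

```python
def student_style_read(count):
    # In the student submission, values are read from stdin with input().
    total = 0
    for _ in range(count):
        total += int(input("Give a number: "))
    return total

def reference(count, inputs):
    # The checker's reference receives the same values as a list of strings
    # and consumes them directly from the list -- no input() calls.
    return sum(int(value) for value in inputs[:count])
```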
An important thing to bear in mind is that all reference results are generated in advance and stored, i.e. once PySenpai starts to interact with the student function, the reference is no longer interacted with. Evaluation is done against the stored results.

Result Forming

For function tests, the result is formed of two parts: the return value and the output (the contents of stdout after running the student function). By default these are fed to validators as is. However, both can be modified prior to evaluation by using filtering callbacks. The more common use case is an output parser that converts the raw output into parsed values. When parsing is done separately, the default validators can cover more ground; if parsing were done in the validator, a custom validator would be needed for every test that cares about output. Separate parsing also makes the evaluation report better because the parsed values can be shown along with the full output.
The other half, the return value, can also be altered. This is mainly meant for testing functions that do not return anything but instead modify an existing object. You can write a callback function that chooses the result object to use instead of the return value; it can be chosen or formed from the test arguments, the return value and the parsed output. For example, if a function modifies a list it receives as an argument, you would simply write a function that returns the corresponding argument, and it will be treated as the "return value" for the remaining test stages. These functions are called result object extractors, and the reasoning for using them is similar to that for output parsers.
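Both kinds of callbacks are plain functions. Below is a sketch of an output parser that picks integers out of the raw output, and a result object extractor for a function that modifies a list in place; the extractor's parameter list is an assumption made for illustration, as the exact callback signatures are defined by PySenpai.

```python
import re

def parse_numbers(output):
    # Output parser: turn raw stdout contents into the values to validate.
    return [int(match) for match in re.findall(r"-?\d+", output)]

def extract_first_argument(args, res, parsed):
    # Result object extractor: the tested function modifies its list argument
    # in place, so the "result" to validate is that argument, not the return
    # value (which is None).
    return args[0]
```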

Validation

The validator is responsible for deciding whether a test passes or not. By default, PySenpai validates student functions by comparing their return values with the reference. It also provides a few built-in replacements (e.g. validating output values instead of return values). However, implementing custom validators is the best way to provide more accurate feedback about what went wrong, especially in more complex assignments. Validators are functions that can contain any number of assert statements, allowing the comparison to be done in several steps. Each assert statement in the validator can be accompanied by a different rejection message, which will be shown in the evaluation log as the reason for failing the test.
Custom validators are also sometimes necessary just because a checker needs to evaluate complex objects where simple equality testing is not reliable. On a similar note, checkers can have some leniency in their validation which can be very important in reducing student frustration. For instance, functions that perform multiple floating point operations can have rounding errors when the implementation is different from the reference but just as correct. In this scenario using a rounding validator is likely to result in a better experience.
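A sketch of a custom validator in this spirit is shown below: each assert carries a message handle that is looked up from the checker's messages dictionary when the test fails. The parameters assumed here (reference result, student result, parsed output) are illustrative; the actual arguments a validator receives are defined by the test function.

```python
import math

def rounding_validator(ref, res, parsed):
    # Fail with a specific handle so the student gets a targeted message.
    assert isinstance(res, float), "incorrect_return_type"
    # Allow small floating point differences instead of exact equality.
    assert math.isclose(res, ref, abs_tol=0.001), "incorrect_result"
```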
PySenpai also has a separate stage for validating messages in the student code. This helps students differentiate between functional issues in their submission and problems with its output messages. If you want to test that student code gives certain messages with certain arguments / inputs, it should be done with a message validator.

Extra Analysis

Analysis callbacks are functions that are called after validation if the student submission didn't pass. These can be used to pinpoint problems in the evaluation log and provide additional hints. There is one built-in check that is enabled by default: it lets the student know their function returned the same result regardless of arguments/inputs. Further analysis needs to be provided as callback functions. There are three categories that can be used:
  1. error references
  2. custom tests
  3. information functions
Error references are functions that simulate typical student mistakes in the assignment. The student result is validated against each error reference function, and if any of them match, a related message is added to the evaluation log. They are usually simple to implement because they're just modified copies of the real reference function. However, knowing what the typical mistakes are may take a few iterations of teaching the course.
Custom tests are additional validators that work with extra information. Just like validators, they can do a series of assert statements to find out what's wrong. However unlike validators, they have access to raw output, arguments and inputs in addition to what's available to normal validators. Information functions have access to the same data but instead of doing assert statements, they are expected to return something which will be formatted into a feedback message.
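As an example, an error reference is usually just a small, deliberately wrong variant of the real reference, and an information function returns a value that gets formatted into a feedback message. Both are plain Python; the parameter lists below are assumptions made for illustration, and how they are registered with the test function is PySenpai-specific.

```python
def reference(values):
    # The real reference: average of the given values.
    return sum(values) / len(values)

def off_by_one_reference(values):
    # Error reference simulating a typical mistake: dividing by the wrong count.
    return sum(values) / (len(values) - 1)

def difference_info(args, ref, res, out, parsed):
    # Information function: returns extra data to be formatted into feedback.
    return abs(res - ref)
```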

Customizing Messages

Messaging in PySenpai is based on Python dictionaries where each message is accessed via a key that consists of the message handle and the language. PySenpai has default messages in Finnish and English. The language can be chosen when invoking a checker by using the -l or --lang option. When implementing checkers, you can add your own messages by creating a similar dictionary (there's a convenience class for doing this) and passing it to PySenpai functions. At the beginning of each function, the default messages dictionary is updated with messages from the dictionary provided by the checker. This can be used to add new messages (for validators and analysis functions) and to override existing messages.
Messages in PySenpai consist of the message content, list of hints and list of triggers (the latter two being optional). The message content can also contain certain named placeholders which can be used to show values of relevant variables. The available placeholder names for each message can be found from the full message specification.
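Conceptually, then, a custom message maps a handle and a language to content, optional hints and optional triggers. The sketch below shows the idea as a plain dictionary with invented placeholder names; a real checker would use PySenpai's convenience class and the placeholder names documented in the message specification.

```python
# Illustrative structure only -- not the actual PySenpai message class.
custom_messages = {
    ("incorrect_result", "en"): {
        "content": "Your function returned {res}, expected {ref}.",
        "hints": ["Check how you handle negative numbers."],
        "triggers": ["negative_numbers_section"],
    },
    ("incorrect_result", "fi"): {
        "content": "Funktiosi palautti arvon {res}, odotettiin arvoa {ref}.",
        "hints": ["Tarkista, miten käsittelet negatiiviset luvut."],
        "triggers": ["negative_numbers_section"],
    },
}
```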
In addition to customizable messages, PySenpai also uses presenters for certain values in the testing process, namely: argument vector, input vector, reference result, student result, parsed student result and function call. These allow you to show information in a way that makes sense. For instance, if the result you are validating in tests is an object, printing it without a presenter would show something like <__main__.Result object at 0x7f984f5b24a8> which is obviously not very useful in terms of feedback. In this case you'd implement a presenter that returns a nice representation of relevant attributes within that class instead.
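A presenter is simply a function that turns a value into a readable string for the evaluation report. For the Result object case mentioned above, a sketch could look like this (the class itself is an invented example):

```python
class Result:
    def __init__(self, name, score):
        self.name = name
        self.score = score

def present_result(value):
    # Presenter: show the attributes that matter instead of the default
    # "<__main__.Result object at 0x...>" representation.
    return "Result(name={!r}, score={})".format(value.name, value.score)
```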
When implementing custom validators and info functions, you need to add corresponding messages. For validators, each assertion should raise a different message handle, and this handle should be found in the messages dictionary of your checker.

Term Bank

Django Admin Site is Django's default method for managing content. Teachers in Lovelace have access to the admin site, where they can see and edit all pages and other objects they have access to. Compared to Lovelace's own editing tools, the admin site is a more direct mapping to the values stored in the database. As Lovelace development progresses, the need to use the admin site diminishes.
In the context of using Lovelace as a teacher, the term Cache refers to the cache that holds pre-rendered HTML of your content pages. When you edit a page, this cache will be refreshed. This is the only time when the markup of a page is actually read and rendered - when users access the page, it will simply show the already rendered content that is stored in the cache. In most scenarios the caching is invisible to users. However, there are still some edge cases where a change does not automatically trigger a cache refresh. For these scenarios teachers have access to cache regeneration tools that can be executed on individual pages or on the whole course. If you know which page is affected, please use the page-specific tool.
The checking daemon is a separate multi-threaded program that is invoked whenever Lovelace needs to execute code on the command line. The most common use case is to evaluate student programs by running checking programs. When a task is sent to the checker daemon, copies of all required files are put into a temporary directory where the test will then run. The daemon also does necessary security operations to prevent malicious code from doing any actual harm.
Content graphs are objects that connect content pages to a course instance's table of contents. Content graphs have several context attributes which define how the content is linked to this particular course instance. A content graph's ordinal number and parent node affect how it is displayed in the table of contents. You can also set a deadline which will be applied to all exercises contained within the linked content page. Content graphs also define which revision of the content to show - this is used when courses are archived.
In Lovelace, content page refers to learning objects that have text content written using a markup language. All types of content pages are treated similarly inside the system and they are interchangeable. Content pages include lecture pages, and all exercise types.
Context nodes are used for connecting various pieces of content to course instances in Lovelace. In addition to defining what content to include, they also define context information that can change how the content is treated. The most notable context nodes are index nodes that form the table of contents of a course instance, and embed nodes that are generated from embed markups on pages. Terms, media files, images etc. also have their own context nodes.
In Lovelace, course refers to an abstract root course, not any specific instance of instruction. Courses are used for tying together actual instances of instruction (called course instances in Lovelace). In that sense they are like courses in the study guide, while course instances are like courses in WebOodi. The most important attributes of a course are its responsible teacher and its staff group - these define which users have access to edit content that is linked to the course.
Course Completion is a teacher tool for viewing students' progress, scores, and grades. It is accessed from the top right menu. By default the view only shows a table with student names and a button to view their individual progress. Calculation of scores and grades can be performed by pressing the button above the table. Note this operation can take some time if the course has a lot of students, which is also the main reason why the scores are not shown initially.
In Lovelace, a course instance refers to an actual instance of instruction of a course. It's comparable to a course in WebOodi. Students enroll in a course instance. Almost everything is managed per instance - student enrollments, learning objects, student answers, feedback etc. This way teachers can easily treat each instance of instruction separately. Course instances can also be archived through a process called freezing.
Course prefixes are recommended because content page and media names in Lovelace are unique across all courses. You should decide on a prefix for each course and use it for all learning objects that are not included in the course table of contents. The prefix will also make it easier to manage the learning objects of multiple courses - especially for your friendly superuser who sees everything in the admin interface...
Embedded content refers to learning objects that have been embedded into other learning objects through links written in the content of the parent object. Embedded content can be other content pages or media. When saving a content page, all linked embedded objects must exist. A link to embedded content is a reference that ties together the course instance, the embedded content and the parent content.
Enrollment is the mechanism that connects students to course instances. All students taking a course should enroll in it. Enrollment is used for course scoring and (once implemented) access to course content. Enrollments are either automatically accepted, or need to be accepted through the enrollment management interface.
Lovelace has a built-in feedback system. You can attach any number of feedback questions to any content page, allowing you to get either targeted feedback about single exercises, or more general feedback about entire lecture pages. Unlike almost everything else, feedback questions are currently not owned by any particular course. However, feedback answers are always tied to the page the feedback is for, and also to the course instance where the feedback was given.
In Lovelace, file normally refers to a media file, managed under Files in the admin site. A file has a handle, the actual file contents (in both languages) and a download name. The file handle is how the file is referenced throughout the system. If a media file is modified by uploading a new version of the file, all references will by default fetch the latest version. The download name is the name that is displayed as the file header when it's embedded, and also as the default name in the download dialog. Files are linked to content through reference objects - one reference per course instance.
Media files are currently stored in the public media folder along with images - they can be addressed directly via URL.
File upload exercises are at the heart of Lovelace. They are exercises where students return one or more code files that are then evaluated by a checking program. File upload exercises can be evaluated with anything that can be run from the Linux command line, but usually somewhat more sophisticated tools (e.g. PySenpai) should be used. File upload exercises have a JSON format for the evaluations returned by checking programs. An evaluation can include messages, hints and highlight triggers - these ideally help the student figure out problems with their code.
The front page of a course instance is shown on the instance's index page, below the course table of contents. The front page is linked to a course instance just like any other page, but it uses the special ordinal number 0, which excludes it from the table of contents. Any page can act as the course front page.
Hints are messages that are displayed to students in various cases of answering incorrectly. Hints can be given upon making incorrect choices in choice-type exercises, and they can also be given after a certain number of attempts. In textfield exercises you can define any number of catches for incorrect answers, and attach hints to each. Hints are shown in a hint box in the exercise layout - this box will become visible if there is at least one hint to show.
Images in Lovelace are managed as media objects, similar to files. They have a handle that is used for referencing, with the file itself stored separately. Images should always be included by reference; this way, if the image is updated, all references to it show the latest version.
Images stored on disk are accessible directly through their URL.
Lecture pages are content pages that do not have any exercise capabilities attached to them. A course instance's table of contents usually consists entirely of lecture pages. Other types of content pages (i.e. exercises) are usually embedded within lecture pages.
Legacy checker is a name for checkers that were used in previous versions of Lovelace and its predecessor Raippa. They test the student submission against a reference, comparing their outputs. If the outputs match (exactly), the submission passes. Otherwise differences in output are highlighted. It is possible to use wrapper programs to alter the outputs, or output different things (e.g. testing return values of individual functions). Legacy checkers should generally be avoided because they are very limiting and often frustrating for students. Legacy checking is still occasionally useful for comparing compiler outputs etc.
Lovelace uses its own wiki style markup for writing content. Beyond basic formatting features, the markup is also used to embed content pages and media, mark highlightable sections in text and create hover-activated term definition popups.
In Lovelace, media refers to embeddable files and similar objects. These come in three categories: images, files and video links. Like content pages, media objects are managed by reference using handles. Unlike other types of files, media files are publicly accessible to anyone who can guess the URL.
The Primary Instance of a course is a special nomination that can be given to one instance of each course at a time. It gives the instance a special URL that has the course name slug twice instead of having the course name slug followed by the instance name slug. The main use case is to be able to get a shareable link that will always point to the most recent course instance. This way links to Lovelace courses from other sites will not become obsolete whenever a new course instance is created.
PySenpai is a library/framework for creating file upload exercise checking programs. It uses a callback-based architecture to create a consistent and highly customizable testing process. On the one hand it provides reasonable defaults for basic checking programs making them relatively straightforward to implement. On the other hand it also supports much more complex checking programs. Currently PySenpai supports Python, C, Y86 Assembly and Matlab.
Regular expressions are a necessary evil in creating textfield and repeated template exercises. Lovelace uses Python regular expressions in single-line mode.
A generator acts as a backend for repeated template exercises, and provides the random values and their corresponding answers to the frontend. Generators can be written in any programming language that can be executed on the Lovelace server. Generators need to return a JSON document by printing it to stdout.
Responsible teacher is the primary teacher in charge of a course. Certain actions are available only to responsible teachers. These actions include managing enrollments and course instances.
Lovelace uses Django Reversion to keep track of version history for all learning objects. This can sometimes be useful if you need to restore a previous version after mucking something up. However, the primary purpose is to have access to historical copies of learning objects for archiving purposes. When a course instance is archived, it uses the revision attribute of all its references to set which historical version should be fetched when the learning object is shown. Student answers also include the revision number of the exercise that was active at the time of saving the answer.
Scoring Group is a mechanism related to exercises in Lovelace. It allows the creation of mutually exclusive tasks where a student chooses one task from a set of tasks, and only completes that one. These groups come in two flavors:
  1. task group inside a single page (one task from the group is scored)
  2. group of pages (one page's total score is counted)
In both cases the group is formed by giving each task/page the same tag (any string) in their settings. For example, if two pages have "my-final-exam" as their scoring group, only one of the two pages will contribute to the student's total score.
Slug is the lingo word for names used in URLs. Slugs are automatically generated for courses, course instances and content pages. Slugs are all lowercase, with all non-alphanumeric characters replaced with dashes. A similar naming scheme is recommended for other types of learning objects as well, although they do not use generated slugs.
Staff members are basically your TAs. Staff members can see pages hidden from normal users and they can edit and create content (within the confines of the courses they have been assigned to). They can also view answer statistics and evaluate student answers in manually evaluated exercises. Staff members are assigned to courses via staff group.
Staff Mode is an on-site editing mode that can be enabled from the secondary top bar, next to the language select. When in staff mode, various editing buttons are added to editable objects on the page. Pressing these buttons usually brings up an editing form. Staff mode remains enabled until you close the browser tab. Other staff tools that do not interfere with viewing content are always visible regardless of staff mode.
Lovelace has answer statistics for all exercises. Statistics are collected per instance and allow you to review how many times an exercise has been answered, what the success rate is, etc. All of this can be helpful in identifying where students have difficulties or where the exercise itself is badly designed. For some types of exercises there is also more detailed information about the answers that have been given. Statistics can be accessed from the left-hand toolbox of each exercise.
Student Group is a logical group of students that can be formed in courses that allow it. Groups can be enabled by setting the max group size setting of an instance to a number (if it is left empty, group-related features will not be visible at all). Group submissions can be controlled on a per-task basis by checking or unchecking the group submission attribute. When a group member submits an answer to a task that has group submission enabled, their answer and evaluation are automatically copied to all group members. Currently this does not include submitted files, so the file can only be viewed from the original submitter's answers. Please note that once answers are copied, they are owned by each student and will be kept even if that student is removed from the group. In the current design, the lifetime of a group is the entire course instance.
The teacher toolbox is located on the left-hand side of each exercise. It has options to view statistics, view feedback about the exercise and edit the exercise. For file upload exercises there is also an option to download all answers as a zip file. Do note that this takes some time.
Terms are keywords that are linked to descriptions within your course. They will be collected into the course term bank, and the keyword can also be used to make term definition popups on any content page. Terms can include multiple tabs and links to pages that are relevant to the term. For instance, this term has a tab for examples, and a link to the page about terms.
Textfield exercises are exercises where the student gives their answer by writing into a text box. This answer is evaluated against predefined answers that can be either correct (accepting the exercise) or incorrect (giving a related hint). Almost always these answers are defined as regular expressions - exact matching is simply far too strict.
Triggerable highlights can be used in content pages to mark passages that can be highlighted by triggers from file upload exercise evaluation responses. When a highlight is triggered the passage will be highlighted. This feature is useful for drawing student attention to things they may have missed. Exercises can trigger highlights in their own description, or in their parent page. It is usually a good idea to use exercise specific prefixes for highlight trigger names.