
File Upload Exercises

File upload exercises are the primary feature of Lovelace. They are exercises where students return one or more files (usually code files) that are checked either by running them or, more often, by using a testing program that runs the student's code in such a way that its functionality can be tested in more detail. Since anything that runs on the Linux command line can act as the testing backend, file upload exercises have almost limitless potential. The downside, of course, is that someone needs to implement the checking programs.
In the past, file upload exercises worked by comparing the output of the student's program to the output of a reference program. However, because this approach had multiple issues, modern exercises generally work by outputting a test log as JSON that summarizes all testing that was done. Just like other exercise types, file upload exercises support hints. However, the output formatting for file upload exercise feedback is richer than that of other exercise types.

Creating File Upload Exercises

Unlike other exercise types, file upload exercises use a completely custom admin page (it's still accessed from the admin site). Besides having a more optimized layout, the form also doesn't show both languages side by side. The language is changed from a selector at the bottom of the page instead. Remember that Finnish fields are still the mandatory ones. You should always edit the FI version first.
Overview of the file upload exercise edit form
Just like other exercise types, the content is divided into the content box itself and the question box. The default text for the question box is "Submit your files here:". You also need to set the number of points given for passing this exercise. You can also ask students to include a list of collaborators, and restrict the file names accepted by the exercise (useful if the exercise requires a certain file name to be used).
You can also include feedback questions from the box on the right.

Included Files

Note: whenever you upload files, you must press Save at the bottom of the exercise edit form to actually upload them. You can use this to your advantage in some browsers (at least Chrome): if you change the file on your computer and press Save again, the changed version is uploaded without having to click through the upload dialogs again.
The most important part is the included files. There are two categories of files that can be used in file upload exercises. The first set of files is specific to this exercise. These should include the testing program and potentially a reference implementation. Although the reference implementation is not used by modern checkers, having it available for TAs can be a good idea. Generally speaking, files that are only needed by this particular exercise should go here.
When you add a file to this group, the following dialog is shown.
Overview of the add file dialog
Generally the only fields that you need to touch are "Choose file", "Default name", "Name during test" and "Used as". The default name is the name used by the system for this file, while the name during test is the name the file is given when it's copied to the temporary directory where the test is run. They can usually be the same. Note that neither of these is the same as the file's name on disk. Just like media files, files on disk are never replaced - new versions are given a generated unique tail instead.
The "used as" field is mostly used for labeling the file. Currently the only behavior that is different between all of the roles is that files marked as reference implementation are not copied to the temporary directory when the student's answer is tested. Instead, they are copied for a reference run where the student's answer is replaced by the reference implementation. This is only relevant for exercises that use the legacy evaluation mode. Normally you don't need to touch the file ownership and permission fields.
The second group of files is shared across the entire course. This means all kinds of library files and testing frameworks that are used by more than one exercise. This group is extremely important for maintenance: if you change a shared file for one exercise, the change will affect all exercises that use the file. Shared files use context models similar to media files. Therefore version information is maintained per instance - archived versions of the exercise will use an archived version of each instance file.
The dialog that is opened when "Add and edit" is clicked is a bit more complex as it allows you to both edit and add instance files, and also choose which of them to link to the exercise. All files needed by the exercise must be linked - otherwise they will not be copied to the temporary folder when the tests are executed. The dialog is shown below:
Overview of the shared files dialog
From this dialog window you can create new instance files. An instance file itself contains no context information, but each of them is tied to a course. This is done to avoid conflicts and to control access to these files. Beyond that, an instance file is just an uploaded file with a given default name (again, not the same as its name on disk).
You can use the edit button to modify the shared file itself. Remember that doing so will affect all exercises that use the file (unless they have been archived to use a fixed revision). The edit dialog is the same as the one for creating a new instance file. Finally, in order to link the file to this exercise you need to press the link button. This is where you define the context information for the file - i.e. how it is used in this exercise. Usually this only involves setting a local name and the file's role (usually "Library file"). Remember to click "Add link", otherwise the link will not be saved.
Overview of the instance file link dialog

Tests

File upload exercises can have multiple tests, and tests can have multiple stages. However, these features were primarily designed with legacy checkers in mind. In modern checkers, running multiple tests has been moved to the backend code. The only reason to include multiple tests is if you want to mix the two types of tests. When you add a test you will be asked to give the test a name (this will be shown in the student's interface), and to select which of the exercise files will actually be used in this test. You can use Ctrl and Shift to select/deselect multiple files.
After filling in the basic information, you need to add at least one stage with at least one command. In many cases the only command that needs to be run is the one that starts the checking program. Simply type the command into the "Command line" field. When writing the command use $RETURNABLES where you want the names of the returned files to be. E.g.
python3 alku_func_test.py -l fi $RETURNABLES
is the command to start one exercise's checker in the Elementary Programming course. Let's assume the student returns two files, main.py and functions.py. The final command that is executed would then be
python3 alku_func_test.py -l fi main.py functions.py
The checkboxes below define whether the stage is run in legacy mode or in modern mode. If "Provides evaluation data as JSON" is checked, modern mode is used. Otherwise this stage will be treated as a legacy check.
For legacy checks you can type inputs that will be written to stdin before running the command. This allows crude testing of programs that prompt for user input. For legacy checks, pass or fail is determined by comparing with the reference output, and 1:1 similarity is required. You can also define an expected return value for the code execution. Note that while the checking itself should be done with JSON evaluation data, legacy mode is still useful for certain pre-check operations such as compiling code. You can add multiple commands either by adding new commands to the current stage or by adding a new stage. By default, stages depend on the previous stages being successful. You can edit stage information by clicking its name.
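For example, a C exercise could use a legacy stage purely as a compilation pre-check before the actual JSON-producing test stage. A hypothetical compilation command (the flags and output name here are illustrative assumptions, not anything Lovelace requires) could be
gcc -Wall -o main $RETURNABLES
with no stdin inputs or reference outputs and an expected return value of 0, so a failed compilation stops the remaining stages and the compiler's stderr is shown in the test information area.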
Modern checkers are expected to take care of generating their own inputs and doing the checking internally. If you tick the JSON evaluation box, the other fields should be left empty. Generally the entire testing report with all tests and runs will be in the JSON returned by the checking program. It is also currently rendered separately in the test output in the student's view (see below). Note that only one JSON document from all tests combined will actually be rendered - multiple tests that output JSON are currently not supported.
The timeout field should be used wisely. In particular, if student code contains an infinite loop, it will run for the entire duration of the timeout. On the other hand, you obviously don't want legitimate solutions to time out. Most tests run just fine within 5 seconds, but if your exercise requires more complex code, make sure to do some timed test runs. Also keep in mind that students don't always write code that is as optimized as yours.

File Upload Exercise Output

The output displayed to students when they return a file is split into two parts: test information, which shows the tests, stages and commands that were run; and the messages rendered from the JSON evaluation (if available). If any commands were checked in legacy mode, stdout and stderr comparisons from those commands are shown in the test information area. If there are multiple tests, students can switch between them using the tabs. Currently switching tabs does not affect the messages area at all.
Example rendering of test information from the Computer Systems course
The messages area, which renders the JSON evaluation, is much better suited for displaying detailed information about the testing. It's divided into tests, and each test is divided into runs. Each run is a collapsible panel that can be expanded or collapsed by clicking anywhere within it. Initially only the first run of each test is expanded to save space. Runs are also always sorted so that passed ones are at the bottom, drawing student attention to failed runs. Each run contains one or more messages, where the first line is always the test result (pass, fail or error). Messages include flags which determine what icon is used as the message bullet.
Bullet icons for different message flags
The messages are rendered through Lovelace's markup parser. You can use markup to make feedback cleaner. There are some limitations: if you include media files such as images in the messages, they must exist in the Lovelace database. Currently it's not possible to generate images from the checking program and show those images in the feedback. Support for this is planned, however.
Example messages from a checker in the Computer Systems course

Implementing Checkers

Checkers can be implemented with anything that runs on the Lovelace server and is capable of outputting JSON to stdout (the checking daemon captures the test process's stdout and stderr). You can check the JSON specification below. While it's possible to write each checker from scratch, it is highly recommended to use a framework for creating checkers, as that makes them more consistent, less error-prone and easier to maintain. The recommendation is to use PySenpai, a library/framework that has been developed alongside Lovelace. While originally created for Python exercises, it has extensions for C, Y86 assembly and Matlab. See the sub-chapters for more information about PySenpai and its supported languages.

Evaluation JSON Format

The evaluation is a JSON document with the following structure; optional attributes are marked as such.
tester (string) - optional
tests (array)
-- test (object)
---- title (string)
---- runs (array)
------ run (object)
-------- output (array)
---------- message (object)
------------ msg (string) 
------------ flag (integer)
------------ triggers (array) - optional
-------------- trigger (string)
------------ hints (array) - optional
-------------- hint (string)
result (object)
-- correct (boolean)
-- score (integer)
-- max (integer)
Test objects are test scenarios that contain multiple runs. The runs within a test are done with the same setup (same evaluator, same reference etc.) but under different conditions (arguments, inputs etc.). Each run contains one or more messages - these are the feedback given to the student. Messages are objects that contain four attributes, each treated differently when rendering the evaluation:
Log messages should be used to describe the testing process and provide details to the student. This includes letting the student know which part of the program is being tested, what arguments it was tested with and what was expected of it. All of this information should help the student when debugging their code. On the other hand, hints should be a more immediate description of the actions that are most likely to help the student (e.g. "Check the order of parameters in your function definition."). Triggerable highlights should be used to draw the student's attention to details in the exercise description that they may have missed.
Here is an example JSON document containing the first message of the rendered evaluation shown above.
{
    "tester": "energia_func_test",
    "tests": [
        {
            "title": "Loading source code file energy_test_en.c as library for testing...",
            "runs": [
                {
                    "output": []
                }
            ]
        },
        {
            "title": "Testing function calculate_kinetic_energy",
            "runs": [
                {
                    "output": [
                        {
                            "msg": "The function returned incorrect value(s)",
                            "flag": 0
                        },
                        {
                            "msg": "Function call used:\n{{{highlight=c\ncalculate_kinetic_energy(46.4, 77.3);\n}}}",
                            "flag": 4
                        },
                        {
                            "msg": "Your function returned: 83211.9140625",
                            "flag": 4
                        },
                        {
                            "msg": "Expected result: 83211.904",
                            "flag": 4
                        },
                        {
                            "msg": "Performing additional tests that may suggest cause for the error...",
                            "flag": 3
                        },
                        {
                            "msg": "There was a rounding error caused by insufficient precision. Remember to use double instead of float",
                            "flag": 3,
                            "hints": ["You need to use double for sufficient precision"],
                            "triggers": ["energy-precision-hint"]
                        }
                    ]
                }
            ]
        }
    ],
    "result": {
        "correct": false,
        "score": 0,
        "max": 1
    }
}
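To illustrate the format, below is a minimal sketch of a checker written in plain Python without any framework. The exercise, the returned file name (functions.py), the tested function (add) and the flag value used for a passed result are all hypothetical assumptions made for this sketch, not part of the specification above.
import importlib.util
import json
import sys

# Hypothetical exercise: the student returns functions.py, which should contain add(a, b).
evaluation = {
    "tester": "add_func_test",
    "tests": [],
    "result": {"correct": False, "score": 0, "max": 1}
}
test = {"title": "Testing function add", "runs": []}
evaluation["tests"].append(test)

try:
    # $RETURNABLES in the command line puts the returned file names into sys.argv.
    spec = importlib.util.spec_from_file_location("student", sys.argv[1])
    student = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(student)
    value = student.add(2, 3)
    messages = [
        {"msg": "Function call used: add(2, 3)", "flag": 4},
        {"msg": "Your function returned: {}".format(value), "flag": 4},
        {"msg": "Expected result: 5", "flag": 4},
    ]
    if value == 5:
        # The flag value for a passed result is an assumption in this sketch.
        messages.insert(0, {"msg": "The function returned the correct value", "flag": 1})
        evaluation["result"] = {"correct": True, "score": 1, "max": 1}
    else:
        messages.insert(0, {"msg": "The function returned an incorrect value", "flag": 0})
        messages.append({
            "msg": "The return value did not match the expected result",
            "flag": 3,
            "hints": ["Check that the function adds its arguments together"]
        })
except Exception as error:
    messages = [{"msg": "Running the submission failed: {}".format(error), "flag": 0}]

test["runs"].append({"output": messages})

# The checking daemon reads the evaluation from the captured stdout.
print(json.dumps(evaluation))
A framework such as PySenpai takes care of this bookkeeping and provides a callback-based testing process, which is one reason it is the recommended approach.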

Term Bank

The Django Admin Site is Django's default method for managing content. Teachers in Lovelace have access to the admin site, where they can see and edit all pages etc. that they have access to. Compared to Lovelace's own editing tools, the admin site is a more direct mapping to the values stored in the database. As Lovelace development progresses, the need to use the admin site diminishes.
In the context of using Lovelace as a teacher, the term Cache refers to the cache that holds pre-rendered HTML of your content pages. When you edit a page, this cache is refreshed. This is the only time the markup of a page is actually read and rendered - when users access the page, they simply see the already rendered content stored in the cache. In most scenarios the caching is invisible to users. However, there are still some edge cases where a change does not automatically trigger a cache refresh. For these scenarios teachers have access to cache regeneration tools that can be run on individual pages or on the whole course. If you know which page is affected, please use the page-specific tool.
The checking daemon is a separate multi-threaded program that is invoked whenever Lovelace needs to execute code on the command line. The most common use case is to evaluate student programs by running checking programs. When a task is sent to the checker daemon, copies of all required files are put into a temporary directory where the test will then run. The daemon also does necessary security operations to prevent malicious code from doing any actual harm.
Content graphs are objects that connect content pages to a course instance's table of contents. Content graphs have several context attributes which define how the content is linked to this particular course instance. A content graph's ordinal number and parent node affect how it is displayed in the table of contents. You can also set a deadline which will be applied to all exercises contained within the linked content page. Content graphs also define which revision of the content to show - this is used when courses are archived.
In Lovelace, content page refers to learning objects that have text content written using a markup language. All types of content pages are treated similarly inside the system and they are interchangeable. Content pages include lecture pages, and all exercise types.
Context nodes are used for connecting various pieces of content to course instances in Lovelace. In addition to defining what content to include, they also define context information that can change how the content is treated. The most notable context nodes are index nodes that form the table of contents of a course instance, and embed nodes that are generated from embed markups on pages. Terms, media files, images etc. also have their own context nodes.
In Lovelace, course refers to an abstract root course, not any specific instance of instruction. Courses are used for tying together actual instances of instruction (called course instances in Lovelace). In that sense they are like courses in the study guide, while course instances are like courses in WebOodi. The most important attributes of a course are its responsible teacher and its staff group - these define which users have access to edit content that is linked to the course.
Course Completion is a teacher tool for viewing students' progress, scores, and grades. It is accessed from the top right menu. By default the view only shows a table with student names and a button to view each student's individual progress. Calculation of scores and grades can be performed by pressing the button above the table. Note that this operation can take some time if the course has a lot of students, which is also the main reason why the scores are not shown initially.
In Lovelace, a course instance refers to an actual instance of instruction of a course. It's comparable to a course in WebOodi. Students can enroll in a course instance. Almost everything is managed per instance - student enrollments, learning objects, student answers, feedback etc. This way teachers can easily treat each instance of instruction separately. Course instances can also be archived through a process called freezing.
Course prefixes are recommended because content page and media names in Lovelace are unique across all courses. You should decide on a prefix for each course and use it for all learning objects that are not included in the course table of contents. The prefix also makes it easier to manage the learning objects of multiple courses - especially for your friendly superuser who sees everything in the admin interface...
Embedded content refers to learning objects that have been embedded to other learning objects through links written in the content of the parent object. Embedded content can be other content pages or media. When saving a content page, all embedded objects that are linked must exist. A link to embedded content is a reference that ties together course instance, embedded content and the parent content.
Enrollment is the mechanism that connects students to course instances. All students taking a course should enroll in it. Enrollment is used for course scoring and (once implemented) for access to course content. Enrollments are either accepted automatically or need to be accepted through the enrollment management interface.
Lovelace has a built-in feedback system. You can attach any number of feedback questions to any content page, allowing you to get either targeted feedback about single exercises, or more general feedback about entire lecture pages. Unlike almost everything else, feedback questions are currently not owned by any particular course. However, feedback answers are always tied to the page the feedback is for, and also to the course instance where the feedback was given.
In Lovelace, file normally refers to a media file, managed under Files in the admin site. A file has a handle, the actual file contents (in both languages) and a download name. The file handle is how the file is referenced throughout the system. If a media file is modified by uploading a new version, all references will by default fetch the latest version. The download name is the name displayed as the file header when it's embedded, and also the default name in the download dialog. Files are linked to content through reference objects - one reference per course instance.
Media files are currently stored in the public media folder along with images - they can be addressed directly via URL.
File upload exercises are at the heart of Lovelace. They are exercises where students return one or more code files that are then evaluated by a checking program. File upload exercises can be evaluated with anything that can be run from the Linux command line, but usually a bit more sophisticated tools should be used (e.g. PySenpai). File upload exercises have a JSON format for evaluations returned by checking programs. This evaluation can include messages, hints and highlight triggers - these will ideally help the student figure out problems with their code.
The front page of a course instance is shown on the instance's index page, below the course table of contents. The front page is linked to a course instance just like any other page, but it uses the special ordinal number 0, which excludes it from the table of contents. Any page can act as the course front page.
Hints are messages that are displayed to students in various cases of answering incorrectly. Hints can be given upon making incorrect choices in choice-type exercises, and they can also be given after a certain number of attempts. In textfield exercises you can define any number of catches for incorrect answers, and attach hints to each. Hints are shown in a hint box in the exercise layout - this box will become visible if there is at least one hint to show.
Images in Lovelace are managed as media objects, similar to files. They have a handle that is used for referencing, and the actual file separately. Images should always be included by reference; this way, if the image is updated, all references to it show the latest version.
Images stored on disk are accessible directly through their URL.
Lecture pages are content pages that do not have any exercise capabilities attached to them. A course instance's table of contents usually consists entirely of lecture pages. Other types of content pages (i.e. exercises) are usually embedded within lecture pages.
Legacy checker is a name for checkers that were used in previous versions of Lovelace and its predecessor Raippa. They test the student submission against a reference, comparing their outputs. If the outputs match (exactly), the submission passes. Otherwise differences in output are highlighted. It is possible to use wrapper programs to alter the outputs, or output different things (e.g. testing return values of individual functions). Legacy checkers should generally be avoided because they are very limiting and often frustrating for students. Legacy checking is still occasionally useful for comparing compiler outputs etc.
Lovelace uses its own wiki style markup for writing content. Beyond basic formatting features, the markup is also used to embed content pages and media, mark highlightable sections in text and create hover-activated term definition popups.
In Lovelace, media refers to embeddable files etc. These come in three categories: images, files and video links. Like content pages, media objects are managed by reference using handles. Unlike other types of files, media files are publicly accessible to anyone who can guess the URL.
The Primary Instance of a course is a special designation that can be given to one instance of each course at a time. It gives the instance a special URL that has the course name slug twice, instead of the course name slug followed by the instance name slug. The main use case is getting a shareable link that always points to the most recent course instance. This way links to Lovelace courses from other sites do not become obsolete whenever a new course instance is created.
PySenpai is a library/framework for creating file upload exercise checking programs. It uses a callback-based architecture to create a consistent and highly customizable testing process. On the one hand it provides reasonable defaults for basic checking programs making them relatively straightforward to implement. On the other hand it also supports much more complex checking programs. Currently PySenpai supports Python, C, Y86 Assembly and Matlab.
Regular expressions are a necessary evil in creating textfield and repeated template exercises. Lovelace uses Python regular expressions in single-line mode.
A generator acts as a backend for repeated template exercises, and provides the random values and their corresponding answers to the frontend. Generators can be written in any programming language that can be executed on the Lovelace server. Generators need to return a JSON document by printing it to stdout.
Responsible teacher is the primary teacher in charge of a course. Certain actions are available only to responsible teachers. These actions include managing enrollments and course instances.
Lovelace uses Django Reversion to keep track of version history for all learning objects. This can sometimes be useful if you need to restore a previous version after mucking something up. However, the primary purpose is to have access to historical copies of learning objects for archiving purposes. When a course instance is archived, it uses the revision attribute of all its references to set which historical version should be fetched when the learning object is shown. Student answers also include the revision number of the exercise that was active at the time the answer was saved.
Scoring Group is a mechanism related to exercises in Lovelace. It allows the creation of mutually exclusive tasks where a student chooses one task from a set of tasks, and only completes that one. These groups come in two flavors:
  1. task group inside a single page (one task from the group is scored)
  2. group of pages (one page's total score is counted)
In both cases the group is formed by giving each task/page the same tag (any string) in their settings. I.e. if two pages have "my-final-exam" as their scoring group, only one of the two pages will contribute to the student's total score.
Slug is the lingo word for the names used in URLs. Slugs are automatically generated for courses, course instances and content pages. Slugs are all lowercase, with all non-alphanumeric characters replaced by dashes. A similar naming scheme is recommended for other types of learning objects as well, although they do not use generated slugs.
Staff members are basically your TAs. Staff members can see pages hidden from normal users and they can edit and create content (within the confines of the courses they have been assigned to). They can also view answer statistics and evaluate student answers in manually evaluated exercises. Staff members are assigned to courses via staff group.
Staff Mode is an on-site editing mode that can be enabled from the secondary top bar, next to the language selector. When staff mode is on, various editing buttons are added to editable objects on the page. Pressing these buttons usually brings up an editing form. Staff mode remains enabled until you close the browser tab. Other staff tools that do not interfere with viewing content are always visible regardless of staff mode.
Lovelace has answer statistics for all exercises. Statistics are collected per instance and let you review how many times an exercise has been answered, what the success rate is, etc. All of this can help in identifying exercises where students either have difficulties or the exercise itself is badly designed. For some exercise types there is also more detailed information about the answers that have been given. Statistics can be accessed from the left-hand toolbox of each exercise.
Student Group is a logical group that students can form in courses that allow it. Groups can be enabled by setting the instance's max group size setting to a number (if it is left empty, group-related features will not be visible at all). Group submissions can be controlled on a per-task basis by checking or unchecking the group submission attribute. When a group member submits an answer to a task that has group submission enabled, their answer and evaluation are automatically copied to all group members. Currently this does not include submitted files, so the file can only be viewed from the original submitter's answers. Please note that once answers are copied, they are owned by the students and will be kept even if a student is removed from the group. In the current design, the lifetime of a group is the entire course instance.
The teacher toolbox is located on the left-hand side of each exercise. It has options to view statistics, view feedback about the exercise and edit the exercise. For file upload exercises there is also an option to download all answers as a zip file. Do note that this takes some time.
Terms are keywords that are linked to descriptions within your course. They will be collected into the course term bank, and the keyword can also be used to make term definition popups on any content page. Terms can include multiple tabs and links to pages that are relevant to the term. For instance, this term has a tab for examples, and a link to the page about terms.
Textfield exercises are exercises where the student gives their answer by writing into a text box. This answer is evaluated against predefined answers that can be either correct (accepting the exercise) or incorrect (giving a related hint). Almost always these answers are defined as regular expressions - exact matching is simply far too strict.
Triggerable highlights can be used in content pages to mark passages that can be highlighted by triggers from file upload exercise evaluation responses. When a highlight is triggered the passage will be highlighted. This feature is useful for drawing student attention to things they may have missed. Exercises can trigger highlights in their own description, or in their parent page. It is usually a good idea to use exercise specific prefixes for highlight trigger names.