Testing Flask Applications part 2¶
This page is a sequel to part 1 where we tested Flask applications manually and did some simple unit tests for model classes. This time we're going to do more "proper" testing. The focus is on API testing - ensuring that our API actually does what it promises. Just like in API implementation, some kind of strategy is crucial to stay sane. Otherwise there will be endless amounts of copy-paste code that is difficult to manage.

Resource Testing with Pytest¶
Preliminary Preparations¶
You can grab the single file version of the sensor management API from below.
We're going to implement another test file called resource_test.py. If you are using the more elaborate project structure this file will be pretty similar, and there are no complications because we're properly creating a new Flask app object with every test case. However, if you are using the single file version and plan on running these new tests in addition to the database tests we implemented previously, you need to insert one line of code into the db_handle fixture after the yield line:

app.db.session.remove()

Because we're not actually creating the Flask app anew every time, the database session persists between test modules even though it doesn't persist between test cases within the database test module. Note that the database tests don't actually fully pass anymore because of a change in the models. If you want to fix this problem, feel free to do so.
New Fix(ture)¶
Our fixture should be changed to one that yields a Flask test client. The new fixture looks like this:
# based on http://flask.pocoo.org/docs/1.0/testing/
# we don't need a client for database testing, just the db handle
@pytest.fixture
def client():
    db_fd, db_fname = tempfile.mkstemp()
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///" + db_fname
    app.config["TESTING"] = True

    db.create_all()
    _populate_db()

    yield app.test_client()

    db.session.remove()
    os.close(db_fd)
    os.unlink(db_fname)
In order to properly test an API, there usually needs to be some data in the database for GET, PUT and DELETE requests to have something to work on. Our new fixture calls the yet-to-be-defined _populate_db function in order to achieve this. This function is in fact quite simple because so far we've only implemented the sensor resources. Therefore we only need sensors. Let's make three:

def _populate_db():
    for i in range(1, 4):
        s = Sensor(
            name="test-sensor-{}".format(i),
            model="testsensor"
        )
        db.session.add(s)
    db.session.commit()
Basically you can use the code you used for creating instances of each model class in database testing for this purpose, at least as a basis (just remove the asserts...). We're prefixing this function and its ilk with a single underscore to softly hint that these are the test module's internal tools.

Basic Testing¶
View and resource testing in Flask is generally done with the Flask test client, which we introduced in the previous testing material. We already modified the fixture to provide this client, and its use is rather straightforward. We also recommend grouping tests into classes, one test class per resource class, mostly to get some organization into the code, with the added bonus of being able to define some constants as class attributes (e.g. the resource URI). Pytest's discovery automatically creates instances of these classes and calls any methods whose names start with test. For basic unit testing, we want our tests to cover all nooks and crannies in the resource class methods, thus ensuring that all lines of code actually work. This means covering all error scenarios in addition to testing with valid requests. For valid requests it may also be good to test that we get the data we expected, and likewise that our modifications actually take hold. With these things in mind, let's consider the first test: the sensor collection GET method test. We're also introducing the TestSensorCollection class.

class TestSensorCollection(object):

    RESOURCE_URL = "/api/sensors/"

    def test_get(self, client):
        resp = client.get(self.RESOURCE_URL)
        assert resp.status_code == 200
        body = json.loads(resp.data)
        assert len(body["items"]) == 3
        for item in body["items"]:
            assert "name" in item
            assert "model" in item
Because we created 3 sensors in the database population step, we're now ensuring that the API sends us all three, and that they have both of the attributes we expect them to have. This goes a bit beyond ensuring that the GET method works. However, with this we can be sure that our API returns the data it is supposed to. For the sensor collection's POST method we have two options: we can wrap all the scenarios in one method, or we can put each response into its own. The examples below show this as separate methods - the full example at the end of this material shows everything in one method. The first method tests with a valid request and also checks that we can find the resource we created using the response's Location header.

    def test_post_valid_request(self, client):
        valid = _get_sensor_json()
        resp = client.post(self.RESOURCE_URL, json=valid)
        assert resp.status_code == 201
        assert resp.headers["Location"].endswith(self.RESOURCE_URL + valid["name"] + "/")
        resp = client.get(resp.headers["Location"])
        assert resp.status_code == 200
        body = json.loads(resp.data)
        assert body["name"] == "extra-sensor-1"
        assert body["model"] == "extrasensor"
The second method sends an invalid media type. With Flask's test client this can be done by not using the json keyword argument, and instead dumping the dictionary as a string into the data argument.

    def test_post_wrong_mediatype(self, client):
        valid = _get_sensor_json()
        resp = client.post(self.RESOURCE_URL, data=json.dumps(valid))
        assert resp.status_code == 415
Another relatively simple test is sending a JSON document that doesn't pass validation. In our case there is only one thing to test because our fields don't have value restrictions: missing fields. For testing whether the method's validation handling works, one test case is sufficient.

    def test_post_missing_field(self, client):
        valid = _get_sensor_json()
        valid.pop("model")
        resp = client.post(self.RESOURCE_URL, json=valid)
        assert resp.status_code == 400
Finally we need to test for conflict. Sensor names are unique, so we should try sending a POST request with a name that's already taken, e.g. one of the test sensors we created in database population. Note that this test needs its own method name so that it doesn't shadow the valid request test above.

    def test_post_conflict(self, client):
        valid = _get_sensor_json()
        valid["name"] = "test-sensor-1"
        resp = client.post(self.RESOURCE_URL, json=valid)
        assert resp.status_code == 409
On the item side, the GET and PUT tests will look very similar and are thus not shown here - you can find them in the full example. That leaves the DELETE test, in which we should check that the deletion actually took effect by trying to send a GET request to the resource we just deleted.

class TestSensorItem(object):

    RESOURCE_URL = "/api/sensors/test-sensor-1/"
    INVALID_URL = "/api/sensors/non-sensor-x/"

    def test_delete_valid(self, client):
        resp = client.delete(self.RESOURCE_URL)
        assert resp.status_code == 204
        resp = client.get(self.RESOURCE_URL)
        assert resp.status_code == 404
And finally test that sending a DELETE request to a sensor that doesn't exist returns a 404 error:

    def test_delete_missing(self, client):
        resp = client.delete(self.INVALID_URL)
        assert resp.status_code == 404
Using Coverage¶
One nice tool to use with pytest is its coverage plugin. This plugin tracks which lines of source code in the application being tested are executed during the tests. It's helpful in determining what kinds of tests are needed to ensure that every line in the program executes correctly. We already asked you to install the plugin along with pytest. We didn't use it for the database tests because the database models didn't contain any callable code - class definitions are always executed as soon as the module is imported, thus they will always be covered. Now that we have some callable code in our resource class methods, we can also see the coverage plugin in action. Assuming a single file application, you can run pytest with the following command line arguments to get a coverage summary in the terminal:

pytest --cov-report term-missing --cov=app
Where --cov-report term-missing defines the reporting method to use and --cov=app defines which module (or package) coverage should be tracked for. In our case we want to track our single file application, and we want a summary with line numbers printed for each line that is not covered. Using the test file at the end of this material, you should see something like this:

----------- coverage: platform linux, python 3.7.0-final-0 -----------
Name     Stmts   Miss  Cover   Missing
--------------------------------------
app.py     170      4    98%   289, 340, 346, 351
You can see all of the available reporting options in the coverage plugin's documentation. The limitation of coverage is that it only shows that lines have been executed - it doesn't say anything about whether they actually did what was expected. Even with 100% coverage it is not safe to say that your program is fully working according to its specification - it is "merely" fully executed without errors (assuming all tests pass).
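As a contrived illustration of this limitation, the sketch below reaches 100% coverage of a deliberately buggy function without catching the bug, because the test never asserts anything about the result:

```python
def add(a, b):
    # buggy on purpose: subtracts instead of adding
    return a - b

def test_add():
    # executes every line of add() -> 100% coverage,
    # but checks nothing about the return value,
    # so the bug goes completely unnoticed
    add(2, 2)

test_add()
```

This is why the tests above assert not just status codes but also the actual contents of response bodies.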
Hypermedia Tools¶
In API testing it is equally important that your API conforms to its documentation. Controls are one of the main selling points of hypermedia - in an ideal world, clients only need to concern themselves with controls that are available in resource representations. So in addition to testing that our API code works, we should also work towards ensuring that we send the correct controls along with resource representations, and - going even one step further - ensure that those controls produce valid requests.

Since we only want to test that a control produces a valid request that results in a response in the 200 range, the tests for each HTTP method end up homogeneous enough that we can make one helper function for each. For example, a function that tests a GET method control should take the link relation as a parameter, find the corresponding control object from the given document (or document part) and use its "href" attribute to send a request to the API server, then assert that it got 200 as the response status code. Same in code:

def _check_control_get_method(ctrl, client, obj):
    href = obj["@controls"][ctrl]["href"]
    resp = client.get(href)
    assert resp.status_code == 200
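To make the expected document structure concrete, here is a standalone sketch of the helper being exercised against a hand-written body using the "@controls" structure. The stub client and the example control values are made up purely for illustration - in the real tests the Flask test client from the fixture is used instead.

```python
# A standalone sketch of the document structure the helper expects.
# The stub client and the control values are illustrative assumptions.
def _check_control_get_method(ctrl, client, obj):
    href = obj["@controls"][ctrl]["href"]
    resp = client.get(href)
    assert resp.status_code == 200

class StubResponse:
    status_code = 200

class StubClient:
    # stands in for the Flask test client in this sketch
    def get(self, href):
        return StubResponse()

body = {
    "name": "test-sensor-1",
    "@controls": {
        "self": {"href": "/api/sensors/test-sensor-1/"}
    }
}

_check_control_get_method("self", StubClient(), body)
```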
A similar checking helper for DELETE controls is almost as simple. This time we also need to check that the control actually defines the correct method to use, in addition to the URI.

def _check_control_delete_method(ctrl, client, obj):
    href = obj["@controls"][ctrl]["href"]
    method = obj["@controls"][ctrl]["method"].lower()
    assert method == "delete"
    resp = client.delete(href)
    assert resp.status_code == 204
POST and PUT methods require a bit more work, largely because we need to actually supply a valid object with the verification request. We're going to use another helper to generate one. We should also make sure there's a schema attached to the control. One way to check the schema is to use it as a basis for generating our request. This approach is used a lot in this course's checkers. However, for this example we've taken the reverse approach and instead validate our (known to be valid) object against the schema found in the control. Here's the function for POST methods - PUT will look very similar.

def _get_sensor_json(number=1):
    return {"name": "extra-sensor-{}".format(number), "model": "extrasensor"}

def _check_control_post_method(ctrl, client, obj):
    ctrl_obj = obj["@controls"][ctrl]
    href = ctrl_obj["href"]
    method = ctrl_obj["method"].lower()
    encoding = ctrl_obj["encoding"].lower()
    schema = ctrl_obj["schema"]
    assert method == "post"
    assert encoding == "json"
    body = _get_sensor_json()
    validate(body, schema)
    resp = client.post(href, json=body)
    assert resp.status_code == 201
These helper functions should be put to use in GET method tests by calling the appropriate helper for each control the resource representation should have. Once again the API state diagram is worth its bytecount in gold here - every arrow that leaves a resource is a control that should be tested.

Full Example¶
You can download the full test suite that was described in this material from below. It contains tests for sensor collection and sensor items, and has 98% coverage. The four missing lines are related to unimplemented features. Three of them are pass statements in a placeholder method, and one is only visited if a sensor has been assigned to a location - a feature that's not implemented yet.