Setting the Scene
Overview
Teaching: 15 min
Exercises: 0 min
Questions
What are we teaching in this course?
Why did we pick specific topics to cover?
Objectives
Setting the scene and expectations
Making sure everyone has all the necessary software installed
Introduction
So, you have gained basic software development skills either by self-learning or attending, e.g., a novice Software Carpentry course. You have been applying those skills for a while by writing code to help with your work and you feel comfortable developing code and troubleshooting problems. However, your software has now reached a point where there’s too much code to be kept in one script. Perhaps it’s involving more researchers (developers) and users, and more collaborative development effort is needed to add new functionality while ensuring previous development efforts remain functional and maintainable.
This course provides the next step in software development - it teaches some intermediate software engineering skills and best practices to help you restructure existing and design more robust, reusable and maintainable code, automate the process of testing and verifying software correctness and support collaborations with others in a way that mimics a typical software development process within a team.
The course uses a number of different software development tools and techniques interchangeably as you would in real life. We had to make some choices about topics and tools to teach here - based on ease of installation for the audience, length of the course and other considerations. Tools used here are not mandated though - alternatives exist and we point some of them out along the way. Over time, you will develop a preference for certain tools and programming languages based on your personal taste or based on what is commonly used by your group, collaborators or community. However, the topics covered should give you a solid foundation for working on software development in a team and producing high quality software that is easier to develop and sustain in the future by yourself and others. Skills and tools taught here, while Python-specific, are transferable to other similar tools and programming languages.
The course is organised into the following sections:
Section 1: Setting up Software Environment
In the first section we are going to set up our working environment and familiarise ourselves with various tools and techniques for software development in a typical collaborative code development cycle:
- Integrated Development Environment for code development, testing and debugging,
- Command line for running code and interacting with the command line tool Git for version control and branching the code out for developing new features in parallel,
- GitHub (central and remote source code management platform supporting version control with Git) for code backup, sharing and collaborative development,
- Virtual environments for isolating a project from other projects developed on the same machine, and
- Python code style guidelines to make sure our code is documented, readable and consistently formatted.
Section 2: Verifying Software Correctness at Scale
Once we know our way around different code development tools, techniques and conventions, in this section we learn:
- how to set up a test framework and write tests to verify the correct behaviour of the code, and
- how to automate and scale testing with Continuous Integration (CI) using GitHub Actions (a CI service available on GitHub).
The following three sections complete the development cycle but are covered in a separate workshop:
Section 3: Software Development as a Process
In this section, we step away from writing code for a bit to look at software from a higher level as a process of development and its components:
- different types of software requirements and designing and architecting software to meet them, how these fit within the larger software development process and what we should consider when testing against particular types of requirements.
- different programming and software design paradigms, each representing a slightly different way of thinking about, structuring and implementing the code.
Section 4: Collaborative Software Development for Reuse
Advancing from solo code development, in this section you will start working with your fellow learners on a group project (as you would do when collaborating on a software project in a team), and learn:
- how code review can help improve team software contributions, identify wider codebase issues, and increase codebase knowledge across a team.
- what we can do to prepare our software for further development and reuse, by adopting best practices in documenting, licensing, tracking issues, supporting your software, and packaging software for release to others.
Section 5: Managing and Improving Software Over Its Lifetime
Finally, we move beyond just software development to managing a collaborative software project and will look into:
- internal planning and prioritising tasks for future development using agile techniques and effort estimation, management of internal and external communication, and software improvement through feedback.
- how to adopt a critical mindset not just towards our own software project but also to assess other people’s software to ensure it is suitable for us to reuse, identify areas for improvement, and how to use GitHub to register good quality issues with a particular code repository.
Before We Start
A few notes before we start.
Prerequisite Knowledge
This is an intermediate-level software development course intended for people who have already been developing code in Python (or other languages) and applying it to their own problems after gaining basic software development skills. So, you are expected to have some prerequisite knowledge of the topics covered, as outlined at the beginning of the lesson. Check out this quiz to help you test your prior knowledge and determine if this course is for you.
Required Software
Please make sure that you have all the necessary software installed as described in the Setup section. This section also contains instructions on how to test your setup.
Compulsory and Optional Exercises
Exercises are a crucial part of this course and the narrative. They are used to reinforce the points taught and give you an opportunity to practice things on your own. Please do not be tempted to skip exercises as that will get your local software project out of sync with the course and break the narrative. Exercises that are clearly marked as “optional” can be skipped without breaking things but we advise you to go through them too, if time allows. All exercises contain solutions but, wherever possible, try and work out a solution on your own.
Outdated Screenshots
Throughout this lesson we will make use of, and show content from, Graphical User Interface (GUI) tools (VS Code and GitHub). These are evolving tools and platforms, always adding new features and new visual elements. Screenshots in the lesson may therefore become out-of-sync, or refer to or show content that no longer exists or differs from what you see on your machine. If you find screenshots that no longer match what you see, or that differ significantly from your setup, please open an issue describing what you see and how it differs from the lesson content. Feel free to add as many screenshots as necessary to clarify the issue.
Key Points
This lesson focuses on core, intermediate skills covering the whole software development life-cycle that will be of most use to anyone working collaboratively on code.
For code development in teams - you need more than just the right tools and languages. You need a strategy (best practices) for how you’ll use these tools as a team.
The lesson follows on from the novice Software Carpentry lesson, but this is not a prerequisite for attending as long as you have some basic Python, command line and Git skills and you have been using them for a while to write code to help with your work.
Section 1: Setting Up Environment For Collaborative Code Development
Overview
Teaching: 10 min
Exercises: 0 min
Questions
What tools are needed to collaborate on code development effectively?
Objectives
Provide an overview of all the different tools that will be used in this course.
The first section of the course is dedicated to setting up your environment for collaborative software development. In order to build working (research) software efficiently and to do it in collaboration with others rather than in isolation, you will have to get comfortable with using a number of different tools interchangeably as they’ll make your life a lot easier. There are many options when it comes to deciding which software development tools to use for your daily tasks - we will use a few of them in this course that we believe make a difference. There are sometimes multiple tools for the job - we select one to use but mention alternatives too. As you get more comfortable with different tools and their alternatives, you will select the one that is right for you based on your personal preferences or based on what your collaborators are using.
Here is an overview of the tools we will be using.
Common Issues & Fixes When Running Tools
Check the list of common issues, fixes & tips if you are experiencing problems running any of the tools you installed - your issue may be solved there.
Command Line & Python Virtual Development Environment
We will use the command line
(also known as the command line shell/prompt/console) to run our Python code and
interact with the version control tool Git and software sharing platform GitHub.
We will also use command line
tools venv
and pip
to set up a Python virtual development environment and isolate our software project
from other Python projects we may work on.
Note: some Windows users experience the issue where Python hangs from Git Bash (i.e.
typing python
causes it to just hang with no error message or output) -
see the solution to this issue.
Integrated Development Environment (IDE)
An IDE integrates a number of tools that we need to develop a software project that goes beyond a single script - including a smart code editor, a code compiler/interpreter, a debugger, etc. It will help you write well-formatted & readable code that conforms to code style guides (such as PEP8 for Python) more efficiently by giving relevant and intelligent suggestions for code completion and refactoring. IDEs often integrate command line console and version control tools - we teach them separately in this course as this knowledge can be ported to other programming languages and command line tools you may use in the future (but is applicable to the integrated versions too).
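To give a flavour of the kind of issues a style-aware IDE will flag, here is a small, made-up function shown twice - first written in a way that breaks several PEP8 rules, then reformatted to follow them (a sketch for illustration only, not code from our project):

# Breaks PEP8: naming, spacing and multiple statements on one line
def Compute_Mean( data ):
    total=0
    for x in data: total+=x
    return total/len(data)

# PEP8-conformant: snake_case name, consistent spacing, one statement per line
def compute_mean(data):
    total = 0
    for x in data:
        total += x
    return total / len(data)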
We will use VS Code in this course - a free source-code editor. If you are interested to know more about VS Code’s licensing there is an interesting blog article at https://analyticsindiamag.com/is-microsofts-vs-code-really-open-source/.
Git & GitHub
Git is a free and open source distributed version control system designed to save every change made to a (software) project, allowing others to collaborate and contribute. In this course, we use Git to version control our code in conjunction with GitHub for code backup and sharing. GitHub is one of the leading integrated products and social platforms for modern software development, monitoring and management - it will help us with version control, issue management, code review, code testing/Continuous Integration, and collaborative development.
Let’s get started with setting up our software development environment!
Key Points
In order to develop (write, test, debug, backup) code efficiently, you need to use a number of different tools.
When there is a choice of tools for a task you will have to decide which tool is right for you, which may be a matter of personal preference or what the team or community you belong to is using.
Introduction to Our Software Project
Overview
Teaching: 20 min
Exercises: 10 min
Questions
What is the design architecture of our software project?
Why is splitting code into smaller functional units (modules) good when designing software?
Objectives
Use Git to obtain a working copy of our software project from GitHub.
Inspect the structure and architecture of our software project.
Understand Model-View-Controller (MVC) architecture in software design and its use in our project.
Patient Inflammation Study Project
So, you have joined a software development team that has been working on the patient inflammation study project developed in Python and stored on GitHub. The project studies the effect of a new treatment for arthritis by analysing the inflammation levels in patients who have been given this treatment. It reuses the inflammation datasets from the Software Carpentry Python novice lesson.
Inflammation study pipeline from the Software Carpentry Python novice lesson
What Does Patient Inflammation Data Contain?
Each dataset records inflammation measurements from a separate clinical trial of the drug, and each dataset contains information for 60 patients, who had their inflammation levels recorded for 40 days whilst participating in the trial (a snapshot of one of the data files is shown in the diagram above).
Each of the data files uses the popular comma-separated (CSV) format to represent the data, where:
- Each row holds inflammation measurements for a single patient,
- Each column represents a successive day in the trial,
- Each cell represents an inflammation reading on a given day for a patient (in some arbitrary units of inflammation measurement).
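To make the format concrete, here is a minimal sketch of loading one of these files with NumPy, run from the project root (this assumes NumPy is available - we set up the project's dependencies in a later episode):

import numpy as np

# Rows are patients, columns are successive days of the trial
data = np.loadtxt("data/inflammation-01.csv", delimiter=",")
print(data.shape)  # expect (60, 40) - 60 patients, 40 days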
The project is not finished and contains some errors. You will be working on your own and in collaboration with others to fix and build on top of the existing code during the course.
To start working on the project, you will first create a copy of the software project template repository from GitHub within your own GitHub account and then obtain a local copy of the project on your machine. Let’s do this.
- Log into your GitHub account.
- Go to the software project template repository in GitHub.
- Click the Use this template button towards the top right of the template repository's GitHub page to create a copy of the repository under your GitHub account (you will need to be signed into GitHub to see the Use this template button). Note that each participant is creating their own copy to work on. Also, we are not forking the repository but creating a copy (remember - you can have only one fork but can have multiple copies of a repository in GitHub).
- Make sure to select your personal account and set the name of the project to python-intermediate-inflammation (you can call it anything you like, but it may be easier for future group exercises if everyone uses the same name). Also set the new repository's visibility to 'Public' - so it can be seen by others and by third-party Continuous Integration (CI) services (to be covered later on in the course).
- Click the Create repository from template button and wait for GitHub to import the copy of the repository under your account.
- Locate the copied repository under your own GitHub account.
Exercise: Obtain the Software Project Locally
Using the command line, clone the copied repository from your GitHub account into the home directory on your computer (to be consistent with the code examples and exercises in the course). Which command(s) would you use to get a detailed list of contents of the directory you have just cloned?
Solution
- Find the HTTPS URL of the software project repository to clone from your GitHub account. Make sure you do not clone the original template repository but rather your own copy, as you should be able to push commits to it later on. If you have set up a public-private key pair for authentication in your GitHub account and know what you are doing - feel free to use the SSH URL of our software project instead. Otherwise, stick to using HTTPS with password authentication (which you will need soon to push changes to our software project to GitHub).
- Make sure you are located in your home directory in the command line with: cd ~
- From your home directory, do: git clone https://github.com/<YOUR_GITHUB_USERNAME>/python-intermediate-inflammation. Make sure you are cloning your copy of the software project and not the template repo.
- Navigate into the cloned repository in your command line with: cd python-intermediate-inflammation
- List the contents of the directory: ls -l. Remember the -l flag of the ls command and also how to get help for commands in the command line using manual pages, e.g.: man ls.
Our Software Project Structure
Let’s inspect the content of the software project from the command line. From the root directory of the project, you can
use the command ls -l
to get a more detailed list of the contents. You should see something similar to the following.
$ cd ~/python-intermediate-inflammation
$ ls -l
total 24
-rw-r--r-- 1 carpentry users 1055 20 Apr 15:41 README.md
drwxr-xr-x 18 carpentry users 576 20 Apr 15:41 data
drwxr-xr-x 5 carpentry users 160 20 Apr 15:41 inflammation
-rw-r--r-- 1 carpentry users 1122 20 Apr 15:41 inflammation-analysis.py
drwxr-xr-x 4 carpentry users 128 20 Apr 15:41 tests
As can be seen from the above, our software project contains the README file (that typically describes the project, its usage, installation, authors and how to contribute), Python script inflammation-analysis.py, and three directories - inflammation, data and tests.
The Python script inflammation-analysis.py provides the main entry point in the application, and on closer inspection, we can see that the inflammation directory contains two more Python scripts - views.py and models.py. We will have a more detailed look into these shortly.
$ ls -l inflammation
total 24
-rw-r--r-- 1 alex staff 71 29 Jun 09:59 __init__.py
-rw-r--r-- 1 alex staff 838 29 Jun 09:59 models.py
-rw-r--r-- 1 alex staff 649 25 Jun 13:13 views.py
Directory data
contains several files with patients’ daily inflammation information (along with some other files):
$ ls -l data
total 264
-rw-r--r-- 1 alex staff 5365 25 Jun 13:13 inflammation-01.csv
-rw-r--r-- 1 alex staff 5314 25 Jun 13:13 inflammation-02.csv
-rw-r--r-- 1 alex staff 5127 25 Jun 13:13 inflammation-03.csv
-rw-r--r-- 1 alex staff 5367 25 Jun 13:13 inflammation-04.csv
-rw-r--r-- 1 alex staff 5345 25 Jun 13:13 inflammation-05.csv
-rw-r--r-- 1 alex staff 5330 25 Jun 13:13 inflammation-06.csv
-rw-r--r-- 1 alex staff 5342 25 Jun 13:13 inflammation-07.csv
-rw-r--r-- 1 alex staff 5127 25 Jun 13:13 inflammation-08.csv
-rw-r--r-- 1 alex staff 5327 25 Jun 13:13 inflammation-09.csv
-rw-r--r-- 1 alex staff 5342 25 Jun 13:13 inflammation-10.csv
-rw-r--r-- 1 alex staff 5127 25 Jun 13:13 inflammation-11.csv
-rw-r--r-- 1 alex staff 5340 25 Jun 13:13 inflammation-12.csv
-rw-r--r-- 1 alex staff 22554 25 Jun 13:13 python-novice-inflammation-data.zip
-rw-r--r-- 1 alex staff 12 25 Jun 13:13 small-01.csv
-rw-r--r-- 1 alex staff 15 25 Jun 13:13 small-02.csv
-rw-r--r-- 1 alex staff 12 25 Jun 13:13 small-03.csv
As previously mentioned, each of the inflammation data files contains separate trial data for 60 patients over 40 days.
Exercise: Have a Peek at the Data
Which command(s) would you use to list the contents or the first few lines of the data/inflammation-01.csv file?
Solution
- To list the entire content of a file from the project root do: cat data/inflammation-01.csv
- To list the first 5 lines of a file from the project root do: head -n 5 data/inflammation-01.csv
0,0,1,3,2,3,6,4,5,7,2,4,11,11,3,8,8,16,5,13,16,5,8,8,6,9,10,10,9,3,3,5,3,5,4,5,3,3,0,1
0,1,1,2,2,5,1,7,4,2,5,5,4,6,6,4,16,11,14,16,14,14,8,17,4,14,13,7,6,3,7,7,5,6,3,4,2,2,1,1
0,1,1,1,4,1,6,4,6,3,6,5,6,4,14,13,13,9,12,19,9,10,15,10,9,10,10,7,5,6,8,6,6,4,3,5,2,1,1,1
0,0,0,1,4,5,6,3,8,7,9,10,8,6,5,12,15,5,10,5,8,13,18,17,14,9,13,4,10,11,10,8,8,6,5,5,2,0,2,0
0,0,1,0,3,2,5,4,8,2,9,3,3,10,12,9,14,11,13,8,6,18,11,9,13,11,8,5,5,2,8,5,3,5,4,1,3,1,1,0
Directory tests
contains several tests that have been implemented already. We will be adding more tests
during the course as our code grows.
An important thing to note here is that the structure of the project is not arbitrary. One of the big differences between novice and intermediate software development is planning the structure of your code. This structure includes software components and behavioural interactions between them (including how these components are laid out in a directory and file structure). A novice will often make up the structure of their code as they go along. However, for more advanced software development, we need to plan this structure - called a software architecture - beforehand.
Let’s have a more detailed look into what a software architecture is and which architecture is used by our software project before we start adding more code to it.
Software Architecture
A software architecture is the fundamental structure of a software system that is decided at the beginning of project development based on its requirements and cannot be changed that easily once implemented. It refers to a “bigger picture” of a software system that describes high-level components (modules) of the system and how they interact.
In software design and development, large systems or programs are often decomposed into a set of smaller
modules each with a subset of functionality. Typical examples of modules in programming are software libraries;
some software libraries, such as numpy
and matplotlib
in Python, are bigger modules that contain several
smaller sub-modules. Another example of modules are classes in object-oriented programming languages.
Programming Modules and Interfaces
Although modules are self-contained and independent elements to a large extent (they can depend on other modules), there are well-defined ways of how they interact with one another. These rules of interaction are called programming interfaces - they define how other modules (clients) can use a particular module. Typically, an interface to a module includes rules on how a module can take input from and how it gives output back to its clients. A client can be a human, in which case we also call these user interfaces. Even smaller functional units such as functions/methods have clearly defined interfaces - a function/method’s definition (also known as a signature) states what parameters it can take as input and what it returns as an output.
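For example, the signature of this small, hypothetical function is its interface - it tells clients that the function takes one numerical parameter and returns a number, without them needing to know how the conversion is done:

def fahrenheit_from_celsius(celsius):
    """Convert a temperature in degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(fahrenheit_from_celsius(20))  # 68.0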
There are various software architectures around defining different ways of dividing the code into smaller modules with well defined roles, for example:
- Model–View–Controller (MVC) architecture, which we will look into in detail and use for our software project,
- Service-oriented architecture (SOA), which separates code into distinct services, accessible over a network by consumers (users or other services) that communicate with each other by passing data in a well-defined, shared format (protocol),
- Client-server architecture, where clients request content or service from a server, initiating communication sessions with servers, which await incoming requests (e.g. email, network printing, the Internet),
- Multilayer architecture, a type of architecture in which presentation, application processing and data management functions are split into distinct layers and may even be physically separated to run on separate machines - some more detail on this later in the course.
Model-View-Controller (MVC) Architecture
MVC architecture divides the related program logic into three interconnected modules:
- Model (data)
- View (client interface), and
- Controller (processes that handle input/output and manipulate the data).
Model represents the data used by a program and also contains operations/rules for manipulating and changing the data in the model. This may be a database, a file, a single data object or a series of objects - for example a table representing patients’ data.
View is the means of displaying data to users/clients within an application (i.e. provides visualisation of the state of the model). For example, displaying a window with input fields and buttons (Graphical User Interface, GUI) or textual options within a command line (Command Line Interface, CLI) are examples of Views. They include anything that the user can see from the application. While building GUIs is not the topic of this course, we will cover building CLIs in Python in later episodes.
Controller manipulates both the Model and the View. It accepts input from the View and performs the corresponding action on the Model (changing the state of the model) and then updates the View accordingly. For example, on user request, Controller updates a picture on a user’s GitHub profile and then modifies the View by displaying the updated profile back to the user.
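To make the split concrete, here is a minimal, hypothetical sketch of the three roles in plain Python (not the code from our project):

# Model: the data and the operations on it
patients = [[0, 1, 3], [0, 2, 5]]

def daily_mean(data):
    return [sum(day) / len(day) for day in zip(*data)]

# View: how results are presented to the user
def display(values):
    print("Mean inflammation per day:", values)

# Controller: accepts input, calls the Model, updates the View
def main():
    display(daily_mean(patients))

main()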
MVC Examples
MVC architecture can be applied in scientific applications in the following manner. Model comprises those parts of the application that deal with some type of scientific processing or manipulation of the data, e.g. numerical algorithm, simulation, DNA. View is a visualisation, or format, of the output, e.g. graphical plot, diagram, chart, data table, file. Controller is the part that ties the scientific processing and output parts together, mediating input and passing it to the model or view, e.g. command line options, mouse clicks, input files. For example, the diagram below depicts the use of MVC architecture for the DNA Guide Graphical User Interface application.
Exercise: MVC Application Examples From your Work
Think of some other examples from your work or life where MVC architecture may be suitable or have a discussion with your fellow learners.
Solution
MVC architecture is a popular choice when designing web and mobile applications. Users interact with a web/mobile application by sending various requests to it. Forms to collect users' inputs/requests, together with the information returned and displayed to the user as a result, represent the View. Requests are processed by the Controller, which interacts with the Model to retrieve or update the underlying data. For example, a user may request to view their profile. The Controller retrieves the account information for the user from the Model and passes it to the View for rendering. The user may further interact with the application by asking it to update their personal information. The Controller verifies the correctness of the information (e.g. the password satisfies certain criteria, postal address and phone number are in the correct format, etc.) and passes it to the Model for permanent storage. The View is then updated accordingly and the user sees their updated profile details.
Note that not everything fits into the MVC architecture but it is still good to think about how things could be split into smaller units. For a few more examples, have a look at this short article on MVC from Codecademy.
Separation of Concerns
Separation of concerns is important when designing software architectures in order to reduce the code’s complexity. Note, however, there are limits to everything - and MVC architecture is no exception. Controller often transcends into Model and View and a clear separation is sometimes difficult to maintain. For example, the Command Line Interface provides both the View (what user sees and how they interact with the command line) and the Controller (invoking of a command) aspects of a CLI application. In Web applications, Controller often manipulates the data (received from the Model) before displaying it to the user or passing it from the user to the Model.
Our Project’s MVC Architecture
Our software project uses the MVC architecture. The file inflammation-analysis.py is the Controller module that performs basic statistical analysis over patient data and provides the main entry point into the application. The View and Model modules are contained in the files views.py and models.py, respectively, and are conveniently named. Data underlying the Model is contained within the directory data - as we have seen already, it contains several files with patients' daily inflammation information.
We will revisit the software architecture and MVC topics once again in later episodes when we talk in more detail about software’s business/user/solution requirements and software design. We now proceed to set up our virtual development environment and start working with the code using a more convenient graphical tool - Visual Studio Code.
Key Points
Programming interfaces define how individual modules within a software application interact among themselves or how the application itself interacts with its users.
MVC is a software design architecture which divides the application into three interconnected modules: Model (data), View (user interface), and Controller (input/output and data manipulation).
The software project we use throughout this course is an example of an MVC application that manipulates patients’ inflammation data and performs basic statistical analysis using Python.
Virtual Environments For Software Development
Overview
Teaching: 30 min
Exercises: 0 minQuestions
What are virtual environments in software development and why should you use them?
How can we manage Python virtual environments and external (third-party) libraries?
Objectives
Set up a Python virtual environment for our software project using venv and pip.
Run our software from the command line.
Introduction
So far we have checked out our software project from GitHub and inspected its contents and architecture a bit. We now want to run our code to see what it does - let's do that from the command line. For most of the course we will run our code and interact with Git from the command line. While we will develop and debug our code using VS Code, and it is possible to use Git from VS Code too, typing commands in the command line 'forces' you to familiarise yourself with it and learn it well. A bonus is that this knowledge is transferable to running code in other programming languages and is independent of any IDE you may use in the future.
If you have a little peek into our code (e.g. do cat inflammation/views.py
from the project root), you will see the
following two lines somewhere at the top.
from matplotlib import pyplot as plt
import numpy as np
This means that our code requires two external libraries (also called third-party packages or dependencies) - numpy and matplotlib.
Python applications often use external libraries that don’t come as part of the standard Python distribution. This means
that you will have to use a package manager tool to install them on your system.
Applications will also sometimes need a
specific version of an external library (e.g. because they require that a particular
bug has been fixed in a newer version of the library), or a specific version of Python interpreter.
This means that each Python application you work with may require a different setup and a set of dependencies so it
is important to be able to keep these configurations separate to avoid confusion between projects.
The solution for this problem is to create a self-contained virtual
environment per project, which contains a particular version of Python installation plus a number of
additional external libraries.
Virtual environments are not just a feature of Python - all modern programming languages use them to isolate code of a specific project and make it easier to develop, run, test and share code with others. In this episode, we learn how to set up a virtual environment to develop our code and manage our external dependencies.
Virtual Environments
So what exactly are virtual environments, and why use them?
A Python virtual environment is an isolated working copy of a specific version of the Python interpreter, together with specific versions of a number of external libraries installed into that environment. A virtual environment is simply a directory with a particular structure; through the links it contains, multiple side-by-side installations of different Python interpreters, or different versions of the same external library, can coexist on your machine, with only one selected for each of your projects. This allows you to work on a particular project without worrying about affecting other projects on your machine.
As more external libraries are added to your Python project over time, you can add them to its specific virtual environment and avoid a great deal of confusion by having separate (smaller) virtual environments for each project rather than one huge global environment with potential package version clashes. Another big motivator for using virtual environments is that they make sharing your code with others much easier (as we will see shortly). Here are some typical scenarios where the usage of virtual environments is highly recommended (almost unavoidable):
- You have an older project that only works under Python 2. You do not have the time to migrate the project to Python 3 or it may not even be possible as some of the third party dependencies are not available under Python 3. You have to start another project under Python 3. The best way to do this on a single machine is to set up two separate Python virtual environments.
- One of your Python 3 projects is locked to use a particular older version of a third party dependency. You cannot use the latest version of the dependency as it breaks things in your project. In a separate branch of your project, you want to try and fix problems introduced by the new version of the dependency without affecting the working version of your project. You need to set up a separate virtual environment for your branch to ‘isolate’ your code while testing the new feature.
Most of the time, you do not have to worry too much about the specific versions of the external libraries your project depends on. Virtual environments enable you to always use the latest available version without specifying it explicitly. They also enable you to use a specific older version of a package for your project, should you need to.
A Specific Python or Package Version is Only Ever Installed Once
Note that you will not have separate Python or package installations for each of your projects - they will only ever be installed once on your system but will be referenced from different virtual environments.
Managing Python Virtual Environments
There are several commonly used command line tools for managing Python virtual environments:
- venv, available by default from the standard Python distribution from Python 3.3+
- virtualenv, which needs to be installed separately but supports both Python 2.7+ and Python 3.3+ versions
- pipenv, created to fix certain shortcomings of virtualenv
- conda, a package and environment management system (also included as part of the Anaconda Python distribution often used by the scientific community)
- poetry, a modern Python packaging tool which handles virtual environments automatically
While there are pros and cons for using each of the above, all will do the job of managing Python
virtual environments for you and it may be a matter of personal preference which one you go for.
In this course, we will use venv
to create and manage our
virtual environment (which is the preferred way for Python 3.3+).
Managing Python Packages
Part of managing your (virtual) working environment involves installing, updating and removing external packages
on your system. The Python package manager tool pip is most commonly used for this - it interacts with, and obtains packages from, the central repository called the Python Package Index (PyPI).
pip
can now be used with all Python distributions (including Anaconda).
A Note on Anaconda and conda
Anaconda is an open source Python distribution commonly used for scientific programming - it conveniently installs Python, the package and environment manager conda, and a number of commonly used scientific computing packages so you do not have to obtain them separately. conda is an independent command line tool (available separately from the Anaconda distribution too) with dual functionality: (1) it is a package manager that helps you find Python packages from remote package repositories and install them on your system, and (2) it is also a virtual environment manager. So, you can use conda for both tasks instead of using venv and pip.
Many Tools for the Job
Installing and managing Python distributions, external libraries and virtual environments is, well,
complex. There is an abundance of tools for each task, each with its advantages and disadvantages, and there are different
ways to achieve the same effect (and even different ways to install the same tool!).
Note that each Python distribution comes with its own version of
pip
- and if you have several Python versions installed you have to be extra careful to use the correct pip
to
manage external packages for that Python version.
venv
and pip
are considered the de facto standards for virtual environment and package management for Python 3.
However, the advantages of using Anaconda and conda
are that you get (most of the) packages needed for
scientific code development included with the distribution. If you are only collaborating with others who are also using
Anaconda, you may find that conda
satisfies all your needs. It is good, however, to be aware of all these tools,
and use them accordingly. As you become more familiar with them you will realise that equivalent tools work in a similar
way even though the command syntax may be different (and that there are equivalent tools for other programming languages
too to which your knowledge can be ported).
Python Environment Hell
From XKCD (Creative Commons Attribution-NonCommercial 2.5 License)
Let us have a look at how we can create and manage virtual environments from the command line using venv and manage packages using pip.
Creating a venv Environment
Creating a virtual environment with venv
is done by executing the following command:
$ python3 -m venv /path/to/new/virtual/environment
where /path/to/new/virtual/environment
is a path to a directory where you want to place it - conventionally within
your software project so they are co-located.
This will create the target directory for the virtual environment (and any parent directories that don’t exist already).
For our project, let’s create a virtual environment called venv
off the project root:
$ python3 -m venv venv
If you list the contents of the newly created venv
directory, on a Mac or Linux system
(slightly different on Windows as explained below) you should see something like:
$ ls -l venv
total 8
drwxr-xr-x 12 alex staff 384 5 Oct 11:47 bin
drwxr-xr-x 2 alex staff 64 5 Oct 11:47 include
drwxr-xr-x 3 alex staff 96 5 Oct 11:47 lib
-rw-r--r-- 1 alex staff 90 5 Oct 11:47 pyvenv.cfg
In Windows (Git Bash) it would look more like this:
$ ls -l venv
total 5
drwxr-xr-x 1 janne 197609 0 Nov 30 19:56 Include/
drwxr-xr-x 1 janne 197609 0 Nov 30 19:56 Lib/
-rw-r--r-- 1 janne 197609 119 Nov 30 19:56 pyvenv.cfg
drwxr-xr-x 1 janne 197609 0 Nov 30 20:02 Scripts/
drwxr-xr-x 1 janne 197609 0 Nov 30 20:02 share/
So, running the python3 -m venv venv
command created the target directory called venv
containing:
- pyvenv.cfg configuration file with a home key pointing to the Python installation from which the command was run,
- bin subdirectory (called Scripts on Windows) containing a symlink of the Python interpreter binary used to create the environment and the standard Python library,
- lib/pythonX.Y/site-packages subdirectory (called Lib\site-packages on Windows) to contain its own independent set of installed Python packages isolated from other projects,
- various other configuration and supporting files and subdirectories.
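On a Mac or Linux system you can check that the interpreter in bin really is a symlink back to the Python installation that created the environment (an illustrative example - the link target will differ on your machine):

$ ls -l venv/bin/python3
lrwxr-xr-x  1 alex  staff  22  5 Oct 11:47 venv/bin/python3 -> /usr/local/bin/python3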
Naming Virtual Environments
What is a good name to use for a virtual environment? Using “venv” or “.venv” as the name for an environment and storing it within the project’s directory seems to be the recommended way - this way when you come across such a subdirectory within a software project, by convention you know it contains its virtual environment details. A slight downside is that all different virtual environments on your machine then use the same name and the current one is determined by the context of the path you are currently located in. A (non-conventional) alternative is to use your project name for the name of the virtual environment, with the downside that there is nothing to indicate that such a directory contains a virtual environment. In our case, we have settled to use the name “venv” since it is not a hidden directory and we want it to be displayed by the command line when listing directory contents (hence, no need for the “.” in its name that would, by convention, make it hidden). In the future, you will decide what naming convention works best for you. Here are some references for each of the naming conventions:
- The Hitchhiker’s Guide to Python notes that “venv” is the general convention used globally
- The Python Documentation indicates that “.venv” is common
- “venv” vs “.venv” discussion
Once you’ve created a virtual environment, you will need to activate it:
$ source venv/bin/activate
(venv) $
or for Windows (Git Bash)
$ source venv/Scripts/activate
(venv) $
Activating the virtual environment will change your command line’s prompt to show what virtual environment you are currently using (indicated by its name in round brackets at the start of the prompt), and modify the environment so that running Python will get you the particular version of Python configured in your virtual environment.
You can verify you are using your virtual environment's version of Python by checking the path using which:
(venv) $ which python3
/home/alex/python-intermediate-inflammation/venv/bin/python3
When you’re done working on your project, you can exit the environment with:
(venv) $ deactivate
If you've just done the deactivate, ensure you reactivate the environment ready for the next part:
source venv/bin/activate
(venv) $
Python Within A Virtual Environment
Within a virtual environment, commands python and pip will refer to the version of Python you created the environment with. If you create a virtual environment with python3 -m venv venv, python will refer to python3 and pip will refer to pip3.
On some machines with Python 2 installed, the python command may refer to the copy of Python 2 installed outside of the virtual environment instead, which can cause confusion. You can always check which version of Python you are using in your virtual environment with the command which python to be absolutely sure. We continue using python3 and pip3 in this material to avoid confusion for those users, but commands python and pip may work for you as expected.
Note that, since our software project is being tracked by Git, the newly created virtual environment will show up in version control - we will see how to handle it using Git in one of the subsequent episodes.
Installing External Libraries in an Environment with pip
We noticed earlier that our code depends on two external libraries - numpy and matplotlib. In order for the code to run on your machine, you need to install these two dependencies into your virtual environment.
To install the latest version of a package with pip
you use pip’s install
command and specify the package’s name, e.g.:
(venv) $ pip3 install numpy
(venv) $ pip3 install matplotlib
or, for short, like this to install multiple packages at once:
(venv) $ pip3 install numpy matplotlib
How About python3 -m pip install?
Why are we not using pip as an argument to the python3 command, in the same way we did with venv (i.e. python3 -m venv)? According to the official Pip documentation, python3 -m pip install should be used; other official documentation still seems to have a mixture of usages. Core Python developer Brett Cannon offers a more detailed explanation of edge cases when the two options may produce different results and recommends python3 -m pip install. We kept the old-style command (pip3 install) as it seems more prevalent among developers at the moment - but it may be a convention that will soon change and is certainly something you should consider.
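For reference, the module-style invocation equivalent to the commands above would be:

(venv) $ python3 -m pip install numpy matplotlib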
If you run the pip3 install command on a package that is already installed, pip will notice this and do nothing.
To install a specific version of a Python package, give the package name followed by == and the version number, e.g. pip3 install numpy==1.21.1.
To specify a minimum version of a Python package, you can do pip3 install 'numpy>=1.20' (the quotes stop the shell from treating the > character as output redirection).
To upgrade a package to the latest version, do e.g. pip3 install --upgrade numpy.
To display information about a particular installed package do:
(venv) $ pip3 show numpy
Name: numpy
Version: 1.21.2
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
Author-email: None
License: BSD
Location: /Users/alex/work/SSI/Carpentries/python-intermediate-inflammation/inflammation/lib/python3.9/site-packages
Requires:
Required-by: matplotlib
To list all packages installed with pip
(in your current virtual environment):
(venv) $ pip3 list
Package Version
--------------- -------
cycler 0.11.0
fonttools 4.28.1
kiwisolver 1.3.2
matplotlib 3.5.0
numpy 1.21.4
packaging 21.2
Pillow 8.4.0
pip 21.1.3
pyparsing 2.4.7
python-dateutil 2.8.2
setuptools 57.0.0
setuptools-scm 6.3.2
six 1.16.0
tomli 1.2.2
To uninstall a package installed in the virtual environment do: pip3 uninstall package-name. You can also supply a list of packages to uninstall at the same time.
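For example (the package names here are purely illustrative, and the -y flag skips pip's confirmation prompt):

(venv) $ pip3 uninstall -y package-one package-two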
Exporting/Importing an Environment with pip
You are collaborating on a project with a team so, naturally, you will want to share your environment with your
collaborators so they can easily ‘clone’ your software project with all of its dependencies and everyone
can replicate equivalent virtual environments on their machines. pip
has a handy way of exporting,
saving and sharing virtual environments.
To export your active environment - use pip freeze
command to
produce a list of packages installed in the virtual environment.
A common convention is to put this list in a requirements.txt
file:
(venv) $ pip3 freeze > requirements.txt
(venv) $ cat requirements.txt
cycler==0.11.0
fonttools==4.28.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pyparsing==2.4.7
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
tomli==1.2.2
The first of the above commands will create a requirements.txt
file in your current directory.
The requirements.txt
file can then be committed to a version control system (we will see how to do this using Git in
one of the following episodes) and
get shipped as part of your software and shared with collaborators and/or users. They can then replicate your environment and
install all the necessary packages from the project root as follows:
(venv) $ pip3 install -r requirements.txt
As your project grows - you may need to update your environment for a variety of reasons. For example, one of your project’s dependencies has
just released a new version (dependency version number update), you need an additional package for data analysis
(adding a new dependency) or you have found a better package and no longer need the older package (adding a new and
removing an old dependency). What you need to do in this case (apart from installing the new and removing the
packages that are no longer needed from your virtual environment) is update the contents of the requirements.txt
file
accordingly by re-issuing pip freeze
command and propagate the updated requirements.txt
file to your collaborators
via your code sharing platform (e.g. GitHub).
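For example, a typical update cycle might look like this (the package names are only illustrative), after which the updated requirements.txt is committed and pushed with Git as covered in later episodes:

(venv) $ pip3 install somepackage           # add a new dependency
(venv) $ pip3 uninstall -y oldpackage       # remove a dependency no longer needed
(venv) $ pip3 freeze > requirements.txt     # regenerate the requirements file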
Official Documentation
For a full list of options and commands, consult the official venv documentation and the Installing Python Modules with pip guide. Also check out the guide “Installing packages using pip and virtual environments”.
Running Python Scripts From Command Line
Congratulations! Your environment is now activated and set up to run our inflammation-analysis.py
script
from the command line.
You should already be located in the root of the python-intermediate-inflammation
directory
(if not, please navigate to it from the command line now). To run the script, type the following command:
(venv) $ python3 inflammation-analysis.py
usage: inflammation-analysis.py [-h] infiles [infiles ...]
inflammation-analysis.py: error: the following arguments are required: infiles
In the above command, we tell the command line two things:
- to find a Python interpreter (in this case, the one that was configured via the virtual environment), and
- to use it to run our script inflammation-analysis.py, which resides in the current directory.
As we can see, the Python interpreter ran our script, which threw an error - inflammation-analysis.py: error: the following arguments are required: infiles. It looks like the script expects a list of input files to process, so this is expected behaviour since we don't supply any. We will fix this error in a moment.
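If you are curious, you can already try passing one of the data files as an argument - for example (what the script does with valid input is something we will explore and build on throughout the course):

(venv) $ python3 inflammation-analysis.py data/inflammation-01.csv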
Key Points
Virtual environments keep Python versions and dependencies required by different projects separate.
A virtual environment is itself a directory structure.
Use venv to create and manage Python virtual environments.
Use pip to install and manage Python external (third-party) libraries.
pip allows you to declare all dependencies for a project in a separate file (by convention called requirements.txt) which can be shared with collaborators/users and used to replicate a virtual environment.
Use pip3 freeze > requirements.txt to take a snapshot of your project's dependencies.
Use pip3 install -r requirements.txt to replicate someone else's virtual environment on your machine from the requirements.txt file.
Integrated Software Development Environments
Overview
Teaching: 25 min
Exercises: 15 min
Questions
What are Integrated Development Environments (IDEs)?
What are the advantages of using IDEs for software development?
Objectives
Set up a (virtual) development environment in VS Code
Use VS Code to run a Python script
Introduction
As we have seen in the previous episode - even a simple software project is typically split into smaller functional units and modules which are kept in separate files and subdirectories. As your code starts to grow and becomes more complex, it will involve many different files and various external libraries. You will need an application to help you manage all the complexities of, and provide you with some useful (visual) facilities for, the software development process. Such clever and useful graphical software development applications are called Integrated Development Environments (IDEs).
Integrated Development Environments (IDEs)
An IDE normally consists of at least a source code editor, build automation tools and a debugger. The boundaries between modern IDEs and other aspects of the broader software development process are often blurred as nowadays IDEs also offer version control support, tools to construct graphical user interfaces (GUI) and web browser integration for web app development, source code inspection for dependencies and many other useful functionalities. The following is a list of the most commonly seen IDE features:
- syntax highlighting - to show the language constructs, keywords and the syntax errors with visually distinct colours and font effects
- code completion - to speed up programming by offering a set of possible (syntactically correct) code options
- code search - finding package, class, function and variable declarations, their usages and referencing
- version control support - to interact with source code repositories
- debugging - for setting breakpoints in the code editor, step-by-step execution of code and inspection of variables
IDEs are extremely useful and modern software development would be very hard without them. There are a number of IDEs available for Python development; a good overview is available from the Python Project Wiki. In addition to IDEs, there are also a number of code editors that have Python support. Code editors can be as simple as a text editor with syntax highlighting and code formatting capabilities (e.g. GNU EMACS, Vi/Vim, Atom). Most good code editors can also execute code and control a debugger, and some can also interact with a version control system. Compared to an IDE, a good dedicated code editor is usually smaller and quicker, but often less feature-rich. You will have to decide which one is the best for you - in this course we will learn how to use VS Code, a free code editor from Microsoft. Some popular alternatives include Spyder, a free and open source scientific Python IDE, and PyCharm, a Python IDE with a free and open source Community Edition.
Using Visual Studio Code
Let’s open our project in VS Code now and familiarise ourselves with some commonly used features.
Opening a Software Project
If you don't have VS Code running yet, start it up now. If this is the very first time you are running VS Code, you should be presented with a window such as the one in the image below.
On this screen you can select the theme that you would like. There are two light themes (dark text on a light background) and two dark themes (light text on a dark background). Select the theme that you think would give you the best environment to work in. When you have made your selection, click on Next Section
at the bottom of the screen.
At this point, we can ignore the next screen which allows you to configure your editing environment. Just click Next Section
at the bottom of the screen.
On the next screen you will want to select Side by side editing and Install Git. Leave Customize your shortcuts unticked for now. When done, click Mark Done.
Select Open Folder
to find the software project directory python-intermediate-inflammation
you cloned earlier.
A window will pop up asking whether you trust the authors of the files in the folder. You can click the button that says: “Yes, I trust the authors. Trust folder and enable all features.” You could also tick the box above to “Trust the authors of all files in the parent folder”.
This directory is now the current working directory for VS Code, so when we run scripts from VS Code, this is the directory they will run from.
You will notice the editor showing you a list of icons on the left hand side, just below the VS Code logo. This area is called the Activity Bar. From top to bottom these are:
- Explorer
- Search
- Source Control
- Run and Debug
- Extensions
If you hover over these icons with your mouse a tooltip should pop up showing you what each icon is for. You should also now see the file explorer opened on the left hand side, the Side Bar, showing you a tree view of the files in the selected folder. The explorer icon will also be highlighted, while the others are greyed:
Select the inflammation-analysis.py
file in the ‘Side Bar’. The file will open in the editor window, but at the bottom of the screen you will see a notification with the question, Do you want to install the recommended extensions for Python?
Click the Install
button.
On the next window you will be able to install the Python extension. Click the install button.
After the installation, more tabs might have opened next to the inflammation-analysis.py tab. You can close those tabs by clicking on the X next to the tab name, but leave the inflammation-analysis.py tab open. You might also have noticed that in the Side Bar the Explorer has been replaced by Extensions and the extensions icon in the Activity Bar is now highlighted, while the others are greyed.
Configuring the Terminal
VS Code has a built-in terminal which you can open, as sometimes you might want to execute commands directly in the terminal. By default on Windows, VS Code will open PowerShell, which has restricted access. You can see what this looks like by clicking Terminal on the menu and then selecting New Terminal. The new terminal should open at the bottom of the screen. If you still have the inflammation-analysis.py file open, you might see an error message displayed as in the screenshot below:
To change the default terminal, look at the top of the terminal section. There should be a > powershell
button. Click on the v
arrow to the right of the > powershell
button and select Git Bash
.
A Git Bash
shell should have opened. You should see (.venv) displayed in the shell which means the virtual environment has been detected. There should be no error messages. To the right hand side of the terminal you should notice a section displaying two shells - the powershell that we had open before and below that bash
which is our current Git Bash
shell.
You can close the powershell by hovering over the button with the mouse at which time a garbage bin should appear next to it. Click on the garbage bin to close the terminal. The bash
shell should remain open.
Configuring a Virtual Environment in VS Code
Because we created the venv
environment before we opened the project in VS Code, VS Code and the Python extension were able to detect the environment. We could see that this was the case when we opened the terminal and saw (.venv)
displayed before the prompt.
If we had not created the virtual environment beforehand, we could do so after opening the project folder in VS Code. To create such a virtual environment, press Ctrl+Shift+P
. In the search box, start typing ‘Python: Create Environment’. You will not have to type the whole string before you notice it in the list. Click on Python: Create Environment
. Then select Venv Creates a '.venv' virtual environment in the current workspace
.
You should now get a list of installed Python interpreters. Select the one that is required for your project. A .venv
directory will now be created. You should be able to see this happen in the Explorer tab.
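If you are ever unsure which interpreter and environment the integrated terminal has picked up, a quick way to check (a throwaway snippet, not part of our project) is to start a Python console, e.g. by typing python in the VS Code terminal, and run:
import sys

print(sys.executable)  # path to the Python interpreter currently in use
print(sys.prefix)      # root directory of the environment it belongs to
If the virtual environment is active, both paths should point somewhere inside the .venv (or venv) directory of your project.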
Adding an External Library
We have already added the packages numpy and matplotlib to our virtual environment from the command line in the previous episode, so we are up to date with all the external libraries we require at the moment. However, we will soon need the pytest library to implement tests for our code, so we will use this opportunity to install it from VS Code. Strictly speaking, VS Code is not an IDE but a code editor, which is why we still need to do some things in the terminal. An IDE such as PyCharm has alternative ways to do this via the graphical user interface.
- If you already have an open terminal at the bottom of the screen you can enter the following commands there. If you do not have a terminal open, you can open one by clicking on the Terminal menu item and then selecting New Terminal.
- Double check that your virtual environment is active by looking for (.venv) displayed with your prompt.
- As before, we will use pip3 to install the library. In the terminal type: pip3 install pytest. It might take a few minutes to install.
Pytest should now be installed. You can also verify this from the command line by listing the venv/lib/python3.9/site-packages
subdirectory. Note, however, that requirements.txt
is not updated - as we mentioned earlier this is something you have to do manually. Let’s do this as an exercise.
Exercise: Update requirements.txt After Adding a New Dependency
Export the newly updated virtual environment into the requirements.txt file.
Solution
Let’s verify first that the newly installed library pytest appears in our virtual environment but not in requirements.txt. First, let’s check the list of installed packages:
(venv) $ pip3 list
Package         Version
--------------- -------
attrs           21.4.0
cycler          0.11.0
fonttools       4.28.5
iniconfig       1.1.1
kiwisolver      1.3.2
matplotlib      3.5.1
numpy           1.22.0
packaging       21.3
Pillow          9.0.0
pip             20.0.2
pluggy          1.0.0
py              1.11.0
pyparsing       3.0.7
pytest          6.2.5
python-dateutil 2.8.2
setuptools      44.0.0
six             1.16.0
toml            0.10.2
tomli           2.0.0
We can see the pytest library appearing in the listing above. However, if we do:
(venv) $ cat requirements.txt
cycler==0.11.0
fonttools==4.28.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pyparsing==2.4.7
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
tomli==1.2.2
pytest is missing from requirements.txt. To add it, we need to update the file by repeating the command:
(venv) $ pip3 freeze > requirements.txt
pytest is now present in requirements.txt:
attrs==21.2.0
cycler==0.11.0
fonttools==4.28.1
iniconfig==1.1.1
kiwisolver==1.3.2
matplotlib==3.5.0
numpy==1.21.4
packaging==21.2
Pillow==8.4.0
pluggy==1.0.0
py==1.11.0
pyparsing==2.4.7
pytest==6.2.5
python-dateutil==2.8.2
setuptools-scm==6.3.2
six==1.16.0
toml==0.10.2
tomli==1.2.2
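We will write proper tests with pytest in a later section. If you want to reassure yourself now that the installation works, a minimal throwaway test could look like the sketch below (the file name test_example.py is purely illustrative and can be deleted afterwards):
# test_example.py - pytest discovers files named test_*.py and runs functions named test_*
def test_addition():
    assert 1 + 1 == 2
Running python -m pytest from the project root in the terminal should then report one passing test.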
Syntax Highlighting
The first thing you may notice is that code is displayed using different colours. Syntax highlighting is a feature that displays source code terms in different colours and fonts according to the syntax category the highlighted term belongs to. It also makes syntax errors visually distinct. Highlighting does not affect the meaning of the code itself - it’s intended only for humans to make reading code and finding errors easier.
Code Completion
As you start typing code, VS Code will offer to complete some of the code for you in the form of an auto completion popup. This is a context-aware code completion feature that speeds up the process of coding (e.g. reducing typos and other common mistakes) by offering available variable names, functions from available packages, parameters of functions, hints related to syntax errors, etc.
Code Definition & Documentation References
You will often need code reference information to help you code. VS Code shows this useful information, such as definitions of symbols (e.g. functions, parameters, classes, fields, and methods) and documentation references by means of quick popups and inline tooltips.
For a selected piece of code, you can access various code reference information by right clicking for a menu which will offer amongst other things:
- Go to Definition
- Go to Type Definition
Code Search
In the current file
You can search for a string in your current file. The easiest way is to press Ctrl+F. A popup box with a search field should appear in the top right hand corner of the editor.
- If you have anything selected it will automatically be added to the search field. You can delete that if you want, replace it, extend it or use it as is.
- Next to the search string there are three options: Aa, ab and .*. Aa is to match the case of the search string, ab is to match complete words and .* is for using regular expressions. When you click any of these options it will be highlighted, meaning that it will be used when searching. Click an option again to disable it.
In the whole project
You can search for a text string within a project, use different scopes to narrow your search process, and use regular expressions for complex searches. To find a search string in the whole project:
- From the main menu, select
Edit | Find in Files
. Just below the Edit menu at the top of the side bar, a search field should appear. - If you have anything selected it will automatically be added to the search field. You can delete that if you want, replace it, extend it or use it as is.
- As before the three search options are available for selection.
Version Control
VS Code allows you to do version control from within the editor, i.e. you don’t have to use the terminal. Our project was already under Git version control and VS Code recognised it. If a project is not yet under version control you can do so by navigating to Source Control using the button on the Activity Bar.
Running Scripts in VS Code
We have configured our environment and explored some of the most commonly used VS Code features and are now ready to run our script from VS Code! To do so, right-click the inflammation-analysis.py
file in the Explorer in the Activity Bar and then select Run Python File in Terminal
.
The script will run in a terminal window at the bottom of the IDE window and display something like:
janne@FALCON MINGW64 /g/CARPENTRIES_LESSONS/python-intermediate-inflammation.2 (main)
$ g:/CARPENTRIES_LESSONS/python-intermediate-inflammation.2/.venv/Scripts/python.exe g:/CARPENTRIES_LESSONS/python-intermediate-inflammation.2/inflammation-analysis.py
usage: inflammation-analysis.py [-h] infiles [infiles ...]
inflammation-analysis.py: error: the following arguments are required: infiles
Process finished with exit code 2
This is the same error we got when running the script from the command line. We will get back to this error shortly - for now, the good thing is that we managed to set up our project for development both from the command line and VS Code and are getting the same outputs. Before we move on to fixing errors and writing more code, let’s have a look at the last set of tools for collaborative code development which we will be using in this course - Git and GitHub.
Key Points
An IDE is an application that provides a comprehensive set of facilities for software development, including syntax highlighting, code search and completion, version control, testing and debugging.
With the correct extensions installed, VS Code recognises virtual environments configured from the command line using venv and pip.
Collaborative Software Development Using Git and GitHub
Overview
Teaching: 35 min
Exercises: 0 minQuestions
What are Git branches and why are they useful for code development?
What are some best practices when developing software collaboratively using Git?
Objectives
Commit changes in a software project to a local repository and publish them in a remote repository on GitHub
Create different branches for code development
Learn to use feature branch workflow to effectively collaborate with a team on a software project
Introduction
So far we have checked out our software project from GitHub and used command line tools to configure a virtual environment for our project and run our code. We have also familiarised ourselves with VS Code - a graphical tool we will use for code development, testing and debugging. We are now going to start using another set of tools from the collaborative code development toolbox - namely, the version control system Git and code sharing platform GitHub. These two will enable us to track changes to our code and share it with others.
You may recall that we have already made some changes to our project locally - we created a virtual
environment in venv
directory and exported it to the requirements.txt
file.
We should now decide which of those changes we want to check in and share with others in our team. This is a typical
software development workflow - you work locally on code, test it to make sure
it works correctly and as expected, then record your changes using version control and share your work with others
via a shared and centrally backed-up repository.
Firstly, let’s remind ourselves how to work with Git from the Command Line.
Git Refresher
Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development but it can be used to track changes in files in general - it is particularly effective for tracking text-based files (e.g. source code files, CSV, Markdown, HTML, CSS, Tex, etc. files).
Git has several important characteristics:
- support for non-linear development allowing you and your colleagues to work on different parts of a project concurrently,
- support for distributed development allowing for multiple people to be working on the same project (even the same file) at the same time,
- every change recorded by Git remains part of the project history and can be retrieved at a later date, so even if you make a mistake you can revert to a point before it.
The diagram below shows a typical software development lifecycle with Git and the commonly used commands to interact with different parts of Git infrastructure, such as:
- working directory - a directory (including any subdirectories) where your project files live and where you are currently working.
It is also known as the “untracked” area of Git. Any changes to files will be marked by Git in the working directory.
If you make changes to the working directory and do not explicitly tell Git to save them - you will likely lose those
changes. Using
git add filename
command, you tell Git to start tracking changes to filefilename
within your working directory. - staging area (index) - once you tell Git to start tracking changes to files (with
git add filename
command), Git saves those changes in the staging area. Each subsequent change to the same file needs to be followed by another git add filename
command to tell Git to update it in the staging area. To see what is in your working directory and staging area at any moment (i.e. what changes is Git tracking), run the commandgit status
. - local repository - stored within the
.git
directory of your project, this is where Git wraps together all your changes from the staging area and puts them using thegit commit
command. Each commit is a new, permanent snapshot (checkpoint, record) of your project in time, which you can share or revert back to. - remote repository - this is a version of your project that is hosted somewhere on the Internet (e.g. on GitHub, GitLab or somewhere else). While your project is nicely version-controlled in your local repository, and you have snapshots of its versions from the past, if your machine crashes - you still may lose all your work. Working with a remote repository involves pushing your changes and pulling other people’s changes to keep your local repository in sync in order to collaborate with others and to backup your work on a different machine.
Software development lifecycle with Git from PNGWing
Checking-in Changes to Our Project
Let’s check-in the changes we have done to our project so far. The first thing to do upon navigating into our software project’s directory root is to check the current status of our local working directory and repository.
$ git status
On branch main
Your branch is up to date with 'origin/main'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
requirements.txt
venv/
nothing added to commit but untracked files present (use "git add" to track)
As expected, Git is telling us that we have some untracked files - requirements.txt
and directory
venv
- present in our working
directory which we have not staged nor committed to our local repository yet.
You do not want
to commit the newly created venv
directory and share it with others because this
directory is specific to your machine and setup only (i.e. it contains local paths to libraries on your
system that most likely would not work on any other machine). You do, however, want to share requirements.txt
with
your team as this file can be used to replicate the virtual environment on your collaborators’ systems.
To tell Git to intentionally ignore and not track certain files and directories, you need to specify them in the .gitignore
text file in the project root. Our project already has .gitignore
, but in cases where you do not have
it - you can simply create it yourself. In our case, we
want to tell Git to ignore the venv
directory (and .venv
as another naming convention for virtual environments)
and stop notifying us about it. Edit your .gitignore
file in VS Code and add a line containing “venv/” and another one containing “.venv/”. It does not matter much
in this case where within the file you add these lines, so let’s do it at the end. Your .gitignore
should look something like this:
# IDEs
.vscode/
.idea/
# Intermediate Coverage file
.coverage
# Output files
*.png
# Python runtime
*.pyc
*.egg-info
.pytest_cache
# Virtual environments
venv/
.venv/
You may notice that we are already not tracking certain files and directories, with useful comments about what exactly we are ignoring. You may also notice that each line in .gitignore
is actually a pattern, so you can ignore multiple files that match a pattern (e.g. “*.png” will ignore all PNG files in the current directory).
If you run the git status
command now, you will notice that Git has cleverly understood that you want to ignore changes to venv
folder so it is not warning us about it any more. However, it has now detected a change to
.gitignore
file that needs to be committed.
$ git status
On branch main
Your branch is up to date with 'origin/main'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: .gitignore
Untracked files:
(use "git add <file>..." to include in what will be committed)
requirements.txt
no changes added to commit (use "git add" and/or "git commit -a")
To commit the changes .gitignore
and requirements.txt
to the local repository, we first have to add these files to
the staging area to prepare them for committing. We can add both files at the same time with:
$ git add .gitignore requirements.txt
Now we can commit them to the local repository with:
$ git commit -m "Initial commit of requirements.txt. Ignoring virtual env. folder."
Remember to use meaningful messages for your commits.
So far we have been working in isolation - all the changes we have done are still only stored locally on our individual machines. In order to share our work with others - we should push our changes to the remote repository on GitHub. GitHub has recently strengthened authentication requirements for Git operations accessing GitHub from the command line over HTTPS. This means you cannot use passwords for authentication over HTTPS any more - you either need to set up and use a personal access token for additional security if you want to continue to use HTTPS or switch to use private and public key pair over SSH before you can push remotely the changes you made locally. So, when you run the command below:
$ git push origin main
Git may prompt you to authenticate - enter your GitHub username and the previously generated access token as the
password. You can also enable caching of the credentials using command git config --global credential.helper cache
so your machine remembers the access token and will not ask you to enter it again.
Account Security
When using
git config --global credential.helper cache
, any password or personal access token you enter will be cached for a period of time, 15 minutes by default. Re-entering a password every 15 minutes can be OK, but for a personal access token it can be inconvenient, and lead to you writing the token down elsewhere. To permanently store passwords or tokens, use the store credential helper instead of cache.
Storing an access token always carries a security risk. One compromise between short cache timescales and permanent stores is to set a time-out on your personal access token when you make it, reducing the risk of it being stolen after you stop working on the project you issued it for.
In the above command,
origin
is an alias for the remote repository you used when cloning the project locally (it is called that
by convention and set up automatically by Git when you run git clone remote_url
command to replicate a remote
repository locally); main
is the name of our
main (and currently only) development branch.
Git Remotes
Note that systems like Git allow us to synchronise work between any two or more copies of the same repository - the ones that are not located on your machine are “Git remotes” for you. In practice, though, it is easiest to agree with your collaborators to use one copy as a central hub (such as GitHub or GitLab), where everyone pushes their changes to. This also avoids the risks associated with keeping the “central copy” on someone’s laptop. You can have more than one remote configured for your local repository, each of which generally is either read-only or read/write for you. Collaborating with others involves managing these remote repositories and pushing and pulling information to and from them when you need to share work.
Git - distributed version control system
From W3Docs (freely available)
Git Branches
When we do git status
, Git also tells us that we are currently on the main
branch of the project.
A branch is one version of your project (the files in your repository) that can contain its own set of commits.
We can create a new branch, make changes to the code which we then commit to the branch, and, once we are happy
with those changes, merge them back to the main branch. To see what other branches are available, do:
$ git branch
* main
At the moment, there’s only one branch (main
) and hence only one version of the code available. When you create a
Git repository for the first time, by default you only get one version (i.e. branch) - main
. Let’s have a look at
why having different branches might be useful.
Feature Branch Software Development Workflow
While it is technically OK to commit your changes directly to main
branch, and you may often find yourself doing so
for some minor changes, the best practice is to use a new branch for each separate and self-contained
unit/piece of work you want to
add to the project. This unit of work is also often called a feature and the branch where you develop it is called a
feature branch. Each feature branch should have its own meaningful name - indicating its purpose (e.g. “issue23-fix”). If we keep making changes
and pushing them directly to main
branch on GitHub, then anyone who downloads our software from there will get all of our
work in progress - whether or not it’s ready to use! So, working on a separate branch for each feature you are adding is
good for several reasons:
- it enables the main branch to remain stable while you and the team explore and test the new code on a feature branch,
- it enables you to keep the untested and not-yet-functional feature branch code under version control and backed up,
- you and other team members may work on several features at the same time independently from one another,
- if you decide that the feature is not working or is no longer needed - you can easily and safely discard that branch without affecting the rest of the code.
Branches are commonly used as part of a feature-branch workflow, shown in the diagram below.
Git feature branches
From Git Tutorial by sillevl (Creative Commons Attribution 4.0 International License)
In the software development workflow, we typically have a main branch which is the version of the code that
is tested, stable and reliable. Then, we normally have a development branch
(called develop
or dev
by convention) that we use for work-in-progress
code. As we work on adding new features to the code, we create new feature branches that first get merged into
develop
after a thorough testing process. After even more testing - develop
branch will get merged into main
.
The points when feature branches are merged to develop
, and develop
to main
depend entirely on the practice/strategy established in the team. For example, for smaller projects (e.g. if you are
working alone on a project or in a very small team), feature branches sometimes get directly merged into main
upon testing,
skipping the develop
branch step. In other projects, the merge into main
happens only at the point of making a new
software release. Whichever is the case for you, a good rule of thumb is - nothing that is broken should be in main
.
Creating Branches
Let’s create a develop
branch to work on:
$ git branch develop
This command does not give any output, but if we run git branch
again, without giving it a new branch name, we can see
the list of branches we have - including the new one we have just made.
$ git branch
develop
* main
The *
indicates the currently active branch. So how do we switch to our new branch? We use the git checkout
command with the name of the branch:
$ git checkout develop
Switched to branch 'develop'
Create and Switch to Branch Shortcut
A shortcut to create a new branch and immediately switch to it:
$ git checkout -b develop
Updating Branches
If we start updating files now, the modifications will happen on the develop
branch and will not affect the version
of the code in main
. We add and commit things to develop
branch in the same way as we do to main
.
Let’s make a small modification to inflammation/models.py
in VS Code, and, say, change the spelling of “2d” to
“2D” in docstrings for functions daily_mean()
, daily_max()
and daily_min()
.
If we do:
$ git status
On branch develop
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: inflammation/models.py
no changes added to commit (use "git add" and/or "git commit -a")
Git is telling us that we are on branch develop
and which tracked files have been modified in our working directory.
We can now add
and commit
the changes in the usual way.
$ git add inflammation/models.py
$ git commit -m "Spelling fix"
Currently Active Branch
Remember,
add
andcommit
commands always act on the currently active branch. You have to be careful and aware of which branch you are working with at any given moment.git status
can help with that, and you will find yourself invoking it very often.
Pushing New Branch Remotely
We push the contents of the develop
branch to GitHub in the same way as we pushed the main
branch. However, as we have
just created this branch locally, it still does not exist in our remote repository. You can check that in GitHub by
listing all branches.
To push a new local branch remotely for the first time, you could use the -u
switch and the name of the branch you
are creating and pushing to:
$ git push -u origin develop
Git Push With the -u Switch
Using the -u switch with the git push command is a handy shortcut for: (1) creating the new remote branch and (2) setting your local branch to automatically track the remote one at the same time. You need to use the -u switch only once to set up the association between your branch and the remote one explicitly. After that you could simply use git push without specifying the remote repository, if you wished to. We still prefer to explicitly state this information in commands.
Let’s confirm that the new branch develop
now exists remotely on GitHub too. From the < > Code
tab in your
repository in GitHub, click the branch dropdown menu (currently showing the default branch main
). You should
see your develop
branch in the list too.
Now the others can check out the develop
branch too and continue to develop code on it.
After the initial push of the new
branch, each subsequent push to it is done in the usual manner (i.e. without the -u
switch):
$ git push origin develop
Merging Into Main Branch
Once you have tested your changes on the develop
branch, you will want to merge them onto the main branch.
To do so, make sure you have all your changes committed and switch to main
:
$ git checkout main
Switched to branch 'main'
Your branch is up to date with 'origin/main'.
To merge the develop
branch on top of main
do:
$ git merge develop
Updating 05e1ffb..be60389
Fast-forward
inflammation/models.py | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
If there are no conflicts, Git will merge the branches without complaining and replay all commits from
develop
on top of the last commit from main
. If there are merge conflicts (e.g. a team collaborator modified the same
portion of the same file you are working on and checked in their changes before you), the particular files with conflicts
will be marked and you will need to resolve those conflicts and commit the changes before attempting to merge again.
Since we have no conflicts, we can now push the main
branch to the remote repository:
$ git push origin main
All Branches Are Equal
In Git, all branches are equal - there is nothing special about the
main
branch. It is called that by convention and is created by default, but it can also be called something else. A good example is the gh-pages branch which is the main branch for website projects hosted on GitHub (rather than main, which can be safely deleted for such projects).
Keeping Main Branch Stable
Good software development practice is to keep the
main
branch stable while you and the team develop and test new functionalities on feature branches (which can be done in parallel and independently by different team members). The next step is to merge feature branches onto the develop
branch, where more testing can occur to verify that the new features work well with the rest of the code (and not just in isolation). We talk more about different types of code testing in one of the following episodes.
Key Points
A branch is one version of your project that can contain its own set of commits.
Feature branches enable us to develop / explore / test new code features without affecting the stable
main
code.
Git in VS Code
Overview
Teaching: 35 min
Exercises: 0 minQuestions
How does one initialise a repository within VS Code?
How does one stage and commit within VS Code?
Objectives
Clone a repository from within VS Code
Stage, commit and push within VS Code
Create branches from within VS Code
Introduction
In the previous episode we refreshed our memory on how to do Git things from the command line. However, it is possible to do the same things from within VS Code by just clicking a button.
Cloning
Let’s start by cloning the project we have been working on again. Remember that this will create a second copy of the project on your hard drive. For this reason we have to give it a different name.
Start by opening a new VS Code window. Click on the File
menu item and then on New Window
.
Navigate to your GitHub repository in your browser and click on the green ‘Code’ button. Make sure the SSH tab is selected and copy the URL shown in the text area, which should start with git@github. You can copy the URL by clicking the little copy icon just to the right of the text area.
Back in VS Code’s new window, in the editor area, you should see an option ‘Clone Git Repository’.
In the text area at the top you can now enter the URL of the git repository. Click on Clone from URL
. You now have to select a
directory for the repository to be cloned. You can create it in the same main directory in which you cloned the first instance of the repository, but we will give it a new name so that it doesn’t clash in any way. Make sure you don’t create it within the previous repository; you just want it on the same hierarchical level. VS Code will notice that a repository with the name python-intermediate-inflammation
already exists and it will clone this new instance with the name python-intermediate-inflammation-1
. VS Code will also ask you whether
you want to open the new folder. You can click Open and the project will be opened for you.
The changes you made in the previous lesson were pushed to the GitHub repository so those changes will be in this new instance of the lesson material.
Let’s repeat a few of the git actions we did before, this time not from the command line but with VS Code features.
Creating a virtual environment
- Press Ctrl+Shift+P
- Find and click
Python: Create Environment
- Click
Venv
- Select Python interpreter (3.9.# if possible)
Creating a new folder
Right click in the side bar and select New Folder
. Call the new folder results
.
Usually we don’t want results added to
version control so add results/
to the .gitignore file. See if you can do this by yourself.
Exercise: Add the results folder to .gitignore
Open the .gitignore file and add results/
Solution
- Click on .gitignore in the Side Bar, to open the file in the editor area.
- Add the following text at the bottom of the file:
# Results Folder
results/
- Press Ctrl+S to save the file
Making sense of the VS Code window
You might notice that the moment you save the file, a small blue circle with a 1
in it appears over the Source Control
button in the Activity Bar. From this we can see that there is one untracked change. Click on the Source Control button.
Take a moment to study the source control items in the Side Bar.
- Below the heading SOURCE CONTROL REPOSITORIES we can see the name of our repository, python-intermediate-inflammation-1
- Below the second heading, SOURCE CONTROL, there is a text area.
- There is a Commit button that is inactive
- Then you should see the title Changes and to the right of it a 1 in a circle
- You should see .gitignore and to the right of it an M
- In the bottom left hand corner of the screen you should see the source control icon and next to it the word main
From this information we can tell that:
- we are working on the python-intermediate-inflammation-1 repository
- one file (indicated by the 1 next to the Changes heading), .gitignore, has been modified (hence the M next to it), but the change has not been staged
- the word main in the bottom left hand corner of the screen tells us that we are on the main branch
Commit and Push
Remember the order of getting things into the repository?
- Stage the file/s by adding it
- Commit the file with a message
- Push the file to GitHub
To do this from VS Code, first hover over the .gitignore file in the Side Bar. You’ll notice three more icons to the left of the M
. The first icon is for opening the file, the second for reverting all changes and the third, the +
is for staging the file. Click the +
. The heading that used to say Changes
, has now changed to Staged Changes
. The Commit
button is now active. Enter the commit message, Ignore results folder
, in the text area above the Commit
button and then click the Commit
button. Next to the repository name there should be a button with three dots, ...
. Click the button and then click Push
on the pop-up menu.
In the Side Bar you should now notice that there are no files listed as changed. To verify that things have happened as we would expect, you can open
a terminal and type git status
, which should result in the following message:
On branch main
Your branch is up to date with 'origin/main'.
nothing to commit, working tree clean
Create a new branch
To create a new branch, just click on the word main
in the bottom left hand corner of the window. At the top of the
window a text area and a drop-down menu will appear:
You could now select any other branch such as develop but let’s create a new branch to see how it is done. Enter issue #1
in the text area and then click Create new branch
. At the bottom of the window you will see that main
has been replaced and that we are now on the issue #1
branch. In the Side Bar there is now a new button Publish Branch
. We won’t publish yet. Let’s first add something. We’ll have you do that as an exercise:
Exercise: Make changes, commit and push
Using VS Code’s features complete the following tasks on the
issue #1
branch:
- Create a new file and call it
temperature.py
- Add the following code to the file:
def fahr_to_celsius(fahr_temp):
    return (fahr_temp - 32) * 5 / 9


print(fahr_to_celsius(-40))
- Run the script and jot down the answer you get from the print statement
- Stage and commit the file with the message
Address issue #1
.- Publish the changes to GitHub and check in the GitHub repository whether your changes are reflected there
- Use VS Code to merge the
issue #1
branch into the main branchSolution
- In the activity bar select the Explorer button
- On the menu, click
File
and then New File
- Enter
temperature.py
as the filename and press Enter
- Copy the code into the editor and press Ctrl+S to save it
- Run the script by clicking the
Run Python File
button in the top right corner of the window
- The answer you get in the terminal should be
-40.0
- Click on the
Source Control
button in the Activity Bar
- In the Side Bar, next to the filename temperature.py, click the + button to stage the file
- Enter the message Address issue #1 and press the Commit button
- Click the three dots, ..., next to the repository name and select Push from the menu
- Switch to the main branch by clicking on issue #1 in the bottom left hand corner of the window and then selecting main
- Click the three dots, ..., next to the repository name, select Branch and then ‘Merge Branch’ from the menu
- Select the issue #1 branch
- Press the Sync Changes button
- To delete the branch remotely (on GitHub), type git push origin --delete issue-#1 in a terminal
Key Points
You can stage, commit, push and manage branches from within VS Code, without using the terminal.
Python Code Style Conventions
Overview
Teaching: 20 min
Exercises: 15 minQuestions
Why should you follow software code style conventions?
Who is setting code style conventions?
What code style conventions exist for Python?
Objectives
Understand the benefits of following community coding conventions
Introduction
We now have all the tools we need for software development and are raring to go. But before you dive into writing some more code and sharing it with others, ask yourself what kind of code should you be writing and publishing? It may be worth spending some time learning a bit about Python coding style conventions to make sure that your code is consistently formatted and readable by yourself and others.
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.” - Martin Fowler, British software engineer, author and international speaker on software development
Python Coding Style Guide
One of the most important things we can do to make sure our code is readable by others (and ourselves a few months down the line) is to make sure that it is descriptive, cleanly and consistently formatted and uses sensible, descriptive names for variable, function and module names. In order to help us format our code, we generally follow guidelines known as a style guide. A style guide is a set of conventions that we agree upon with our colleagues or community, to ensure that everyone contributing to the same project is producing code which looks similar in style. While a group of developers may choose to write and agree upon a new style guide unique to each project, in practice many programming languages have a single style guide which is adopted almost universally by the communities around the world. In Python, although we do have a choice of style guides available, the PEP8 style guide is most commonly used. PEP here stands for Python Enhancement Proposals; PEPs are design documents for the Python community, typically specifications or conventions for how to do something in Python, a description of a new feature in Python, etc.
Style consistency
One of the key insights from Guido van Rossum, one of the PEP8 authors, is that code is read much more often than it is written. Style guidelines are intended to improve the readability of code and make it consistent across the wide spectrum of Python code. Consistency with the style guide is important. Consistency within a project is more important. Consistency within one module or function is the most important. However, know when to be inconsistent – sometimes style guide recommendations are just not applicable. When in doubt, use your best judgment. Look at other examples and decide what looks best. And don’t hesitate to ask!
As we have already covered in the episode on VS Code, VS Code highlights the language constructs (reserved words) and syntax errors to help us with coding. VS Code also gives us recommendations for formatting the code - these recommendations are mostly taken from the PEP8 style guide.
A full list of style guidelines for this style is available from the PEP8 website; here we highlight a few.
Indentation
Python is a language that uses indentation as a way of grouping statements that belong to a particular block of code. Spaces are the recommended indentation method in Python code. The guideline is to use 4 spaces per indentation level - so 4 spaces on level one, 8 spaces on level two and so on. Many people prefer tabs to spaces for indenting code for a number of reasons (e.g. spaces require additional typing and it is easy to introduce an error by missing a single space character) and do not follow this guideline. Whether you decide to follow this guideline or not, be consistent and follow the style already used in the project.
Indentation in Python 2 vs Python 3
Python 2 allowed code indented with a mixture of tabs and spaces. Python 3 disallows mixing the use of tabs and spaces for indentation. Whichever you choose, be consistent throughout the project.
There are more complex rules on indenting single units of code that continue over several lines, e.g. function,
list or dictionary definitions can all take more than one line. The preferred way of wrapping such long lines is by
using Python’s implied line continuation inside delimiters such as parentheses (()
), brackets ([]
) and braces
({}
), or a hanging indent.
# Add an extra level of indentation (extra 4 spaces) to distinguish arguments from the rest of the code that follows
def long_function_name(
var_one, var_two, var_three,
var_four):
print(var_one)
# Aligned with opening delimiter
foo = long_function_name(var_one, var_two,
var_three, var_four)
# Use hanging indents to add an indentation level like paragraphs of text where all the lines in a paragraph are
# indented except the first one
foo = long_function_name(
var_one, var_two,
var_three, var_four)
# Using hanging indent again, but closing bracket aligned with the first non-blank character of the previous line
a_long_list = [
[[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.66, 1], [0.66, 0.83, 1], [0.77, 0.88, 1]]
]
# Using hanging indent again, but closing bracket aligned with the start of the multiline construct
a_long_list2 = [
1,
2,
3,
# ...
79
]
More details on good and bad practices for continuation lines can be found in PEP8 guideline on indentation.
Maximum Line Length
All lines should be up to 80 characters long; for lines containing comments or docstrings (to be covered later) the
line length limit should be 73 - see this discussion for reasoning behind these numbers. Some teams strongly prefer a longer line length, and seem to have settled on a length of 100. Long lines of code can be broken over multiple lines by wrapping expressions in delimiters, as
length of 100. Long lines of code can be broken over multiple lines by wrapping expressions in delimiters, as
mentioned above (preferred method), or using a backslash (\
) at the end of the line to indicate
line continuation (slightly less preferred method).
# Using delimiters ( ) to wrap a multi-line expression
if (a == True and
b == False):
# Using a backslash (\) for line continuation
if a == True and \
b == False:
Should a Line Break Before or After a Binary Operator?
Lines should break before binary operators so that the operators do not get scattered across different columns on the screen. In the example below, the eye does not have to do the extra work to tell which items are added and which are subtracted:
# PEP 8 compliant - easy to match operators with operands
income = (gross_wages
+ taxable_interest
+ (dividends - qualified_dividends)
- ira_deduction
- student_loan_interest)
Blank Lines
Top-level function and class definitions should be surrounded with two blank lines. Method definitions inside a class should be surrounded by a single blank line. You can use blank lines in functions, sparingly, to indicate logical sections.
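For illustration, here is a small made-up module laid out following these rules - two blank lines around top-level definitions and a single blank line between methods:
"""Example module illustrating PEP 8 blank line conventions."""
import math


def circle_area(radius):
    """Return the area of a circle with the given radius."""
    return math.pi * radius ** 2


class Shape:
    """A minimal example class."""

    def __init__(self, name):
        self.name = name

    def describe(self):
        return 'A shape called ' + self.name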
Whitespace in Expressions and Statements
Avoid extraneous whitespace in the following situations:
- immediately inside parentheses, brackets or braces
# PEP 8 compliant:
my_function(colour[1], {id: 2})

# Not PEP 8 compliant:
my_function( colour[ 1 ], { id: 2 } )
- Immediately before a comma, semicolon, or colon (unless doing slicing where the colon acts like a binary operator
in which case it should have equal amounts of whitespace on either side)
# PEP 8 compliant:
if x == 4: print(x, y); x, y = y, x

# Not PEP 8 compliant:
if x == 4 : print(x , y); x , y = y, x
- Immediately before the open parenthesis that starts the argument list of a function call
# PEP 8 compliant:
my_function(1)

# Not PEP 8 compliant:
my_function (1)
- Immediately before the open parenthesis that starts an indexing or slicing
# PEP 8 compliant:
my_dct['key'] = my_lst[id]
first_char = my_str[:1]

# Not PEP 8 compliant:
my_dct ['key'] = my_lst [id]
first_char = my_str [:1]
- More than one space around an assignment (or other) operator to align it with another
# PEP 8 compliant:
x = 1
y = 2
student_loan_interest = 3

# Not PEP 8 compliant:
x                     = 1
y                     = 2
student_loan_interest = 3
- Avoid trailing whitespace anywhere - it is not necessary and can cause errors. For example, if you use
backslash (
\
) for continuation lines and have a space after it, the continuation line will not be interpreted correctly. - Surround these binary operators with a single space on either side: assignment (=), augmented assignment (+=, -= etc.), comparisons (==, <, >, !=, <>, <=, >=, in, not in, is, is not), booleans (and, or, not).
- Don’t use spaces around the = sign when used to indicate a keyword argument assignment or to indicate a
default value for an unannotated function parameter
# PEP 8 compliant use of spaces around = for variable assignment
axis = 'x'
angle = 90
size = 450
name = 'my_graph'

# PEP 8 compliant use of no spaces around = for keyword argument assignment in a function call
my_function(
    1, 2,
    axis=axis, angle=angle,
    size=size, name=name)
String Quotes
In Python, single-quoted strings and double-quoted strings are the same. PEP8 does not make a recommendation for this apart from picking one rule and consistently sticking to it. When a string contains single or double quote characters, use the other one to avoid backslashes in the string as it improves readability.
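For example (an illustrative snippet, not from our project):
# Both quote styles are valid and equivalent - just be consistent
greeting = 'Hello'
farewell = "Goodbye"

# When the string itself contains an apostrophe, double quotes avoid a backslash
message = "It's easier to read this way"
# rather than
message = 'It\'s harder to read with an escaped quote'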
Naming Conventions
There are a lot of different naming styles in use, including:
- b (single lowercase letter)
- B (single uppercase letter)
- lowercase
- lower_case_with_underscores
- UPPERCASE
- UPPER_CASE_WITH_UNDERSCORES
- CapitalisedWords (or PascalCase) (note: when using acronyms in CapitalisedWords, capitalise all the letters of the acronym, e.g. HTTPServerError)
- camelCase (differs from CapitalisedWords/PascalCase by the initial lowercase character)
- Capitalised_Words_With_Underscores
As with other style guide recommendations - consistency is key. Pick one and stick to it, or follow the one already established if joining a project mid-way. Some things to be wary of when naming things in the code:
- Avoid using the characters ‘l’ (lowercase letter L), ‘O’ (uppercase letter o), or ‘I’ (uppercase letter i) as single character variable names. In some fonts, these characters are indistinguishable from the numerals one and zero. When tempted to use ‘l’, use ‘L’ instead.
- Avoid using non-ASCII (e.g. UNICODE) characters for identifiers
- If your audience is international and English is the common language, try to use English words for identifiers and comments whenever possible but try to avoid abbreviations/local slang as they may not be understood by everyone. Also consider sticking with either ‘American’ or ‘British’ English spellings and try not to mix the two.
Function, Variable, Class, Module, Package Naming
- Function and variable names should be lowercase, with words separated by underscores as necessary to improve readability.
- Class names should normally use the CapitalisedWords convention.
- Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability.
- Packages should also have short, all-lowercase names, although the use of underscores is discouraged.
A more detailed guide on naming functions, modules, classes and variables is available from PEP8.
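As a short, made-up example that pulls these conventions together (the names are purely illustrative):
# module name: inflammation_stats.py - short, lowercase, underscores only if they improve readability

MAX_PATIENTS = 100  # constants are conventionally UPPER_CASE_WITH_UNDERSCORES


class PatientRecord:
    """Class names use CapitalisedWords."""


def daily_total(patient_data):
    """Functions and variables use lowercase_with_underscores."""
    total_per_day = sum(patient_data)
    return total_per_day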
Comments
Comments allow us to provide the reader with additional information on what the code does - reading and understanding source code is slow, laborious and can lead to misinterpretation, plus it is always a good idea to keep others in mind when writing code. A good rule of thumb is to assume that someone will always read your code at a later date, and this includes a future version of yourself. It can be easy to forget why you did something a particular way in six months’ time. Write comments as complete sentences and in English unless you are 100% sure the code will never be read by people who don’t speak your language.
The Good, the Bad, and the Ugly Comments
As a side reading, check out the ‘Putting comments in code: the good, the bad, and the ugly’ blogpost. Remember - a comment should answer the ‘why’ question. Occasionally the ‘what’ question. The ‘how’ question should be answered by the code itself.
Block comments generally apply to some (or all) code that follows them, and are indented to the same level as that
code. Each line of a block comment starts with a #
and a single space (unless it is indented text inside the comment).
def fahr_to_cels(fahr):
    # Block comment example: convert temperature in Fahrenheit to Celsius
    cels = (fahr - 32) * (5 / 9)
    return cels
An inline comment is a comment on the same line as a statement. Inline comments should be separated by at least two
spaces from the statement. They should start with a #
and a single space and should be used sparingly.
def fahr_to_cels(fahr):
    cels = (fahr - 32) * (5 / 9)  # Inline comment example: convert temperature in Fahrenheit to Celsius
    return cels
Python doesn’t have any multi-line comments, like you may have seen in other languages like C++ or Java. However, there are ways to do it using docstrings as we’ll see in a moment.
The reader should be able to understand a single function or method from its code and its comments, and should not have to look elsewhere in the code for clarification. The kind of things that need to be commented are:
- Why certain design or implementation decisions were adopted, especially in cases where the decision may seem counter-intuitive
- The names of any algorithms or design patterns that have been implemented
- The expected format of input files or database schemas
However, there are some restrictions. Comments that simply restate what the code does are redundant, and comments must be accurate and updated with the code, because an incorrect comment causes more confusion than no comment at all.
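As a contrived illustration of the difference (the scenario in the second comment is made up):
patient_count = 10

# Redundant - simply restates what the code does:
patient_count = patient_count + 1  # add 1 to patient_count

# Useful - explains why the code does it:
patient_count = patient_count + 1  # account for the control patient that is not in the raw data file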
Exercise: Improve Code Style of Our Project
Let’s look at improving the coding style of our project. First create a new feature branch called
style-fixes
off our develop
branch and switch to it (from the project root):
$ git checkout develop
$ git checkout -b style-fixes
Next look at the
inflammation-analysis.py
file in VS Code and identify where the above guidelines have not been followed. Fix the discovered inconsistencies and commit them to the feature branch.
Solution
Modify
inflammation-analysis.py
from VS Code, which is helpfully marking inconsistencies with coding guidelines by underlining them. There are a few things to fix in inflammation-analysis.py
, for example:
Line 24 in
inflammation-analysis.py
is too long and not very readable. A better style would be to use multiple lines and a hanging indent, with the closing brace '}' aligned either with the first non-whitespace character of the last line of the list, or with the first character of the line that starts the multiline construct, or simply moved to the end of the previous line. All three acceptable modifications are shown below.
# Using hanging indent, with the closing '}' aligned with the first non-blank character of the previous line
view_data = {
    'average': models.daily_mean(inflammation_data),
    'max': models.daily_max(inflammation_data),
    'min': models.daily_min(inflammation_data)
    }
# Using hanging indent, with the closing '}' aligned with the start of the multiline construct
view_data = {
    'average': models.daily_mean(inflammation_data),
    'max': models.daily_max(inflammation_data),
    'min': models.daily_min(inflammation_data)
}
# Using hanging indent where all the lines of the multiline construct are indented except the first one
view_data = {
    'average': models.daily_mean(inflammation_data),
    'max': models.daily_max(inflammation_data),
    'min': models.daily_min(inflammation_data)}
Variable ‘InFiles’ in
inflammation-analysis.py
uses the CapitalisedWords naming convention which is recommended for class names but not variable names. By convention, variable names should be in lowercase with optional underscores, so you should rename the variable ‘InFiles’ to, e.g., ‘infiles’ or ‘in_files’.
There is an extra blank line on line 20 in inflammation-analysis.py. Normally, you should not use blank lines in the middle of the code unless you want to separate logical units - in which case only one blank line is used. Note how VS Code is warning us by underlining the whole line.
There is only one blank line between the end of the definition of function main and the rest of the code on line 30 in inflammation-analysis.py - there should be two blank lines. Note how VS Code is warning us by underlining the whole line.
Finally, let’s add and commit our changes to the feature branch. We will check the status of our working directory first.
$ git status
On branch style-fixes
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: inflammation-analysis.py
no changes added to commit (use "git add" and/or "git commit -a")
Git tells us we are on branch
style-fixes
and that we have unstaged and uncommitted changes to inflammation-analysis.py. Let’s commit them to the local repository.
$ git add inflammation-analysis.py
$ git commit -m "Code style fixes."
Optional Exercise: Improve Code Style of Your Other Python Projects
If you have another Python project, check to what extent it conforms to the PEP8 coding style.
Documentation Strings aka Docstrings
If the first thing in a function is a string that is not assigned to a variable, that string is attached to the function as its documentation. Consider the following code implementing a function for calculating the nth Fibonacci number:
def fibonacci(n):
"""Calculate the nth Fibonacci number.
A recursive implementation of Fibonacci array elements.
:param n: integer
:raises ValueError: raised if n is less than zero
:returns: Fibonacci number
"""
if n < 0:
raise ValueError('Fibonacci is not defined for N < 0')
if n == 0:
return 0
if n == 1:
return 1
return fibonacci(n - 1) + fibonacci(n - 2)
Note here we are explicitly documenting our input variables, what is returned by the function, and also when the
ValueError
exception is raised. Along with a helpful description of what the function does, this information can
act as a contract for readers to understand what to expect in terms of behaviour when using the function,
as well as how to use it.
A special comment string like this is called a docstring. We do not need to use triple quotes when writing one, but
if we do, we can break the text across multiple lines. Docstrings can also be used at the start of a Python module (a file
containing a number of Python functions) or at the start of a Python class (containing a number of methods) to list
their contents as a reference. You should not confuse docstrings with comments though - docstrings are context-dependent and should only
be used in specific locations (e.g. at the top of a module and immediately after class
and def
keywords as mentioned).
Using triple quoted strings in locations where they will not be interpreted as docstrings or
using triple quotes as a way to ‘quickly’ comment out an entire block of code is considered bad practice.
In our example case, we used
the Sphinx/ReadTheDocs docstring style formatting
for the param
, raises
and returns
- other docstring formats exist as well.
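For comparison, the same information written in the Google docstring style (another commonly used format) could look something like the sketch below - the content is the same, only the markup differs:
def fibonacci(n):
    """Calculate the nth Fibonacci number.

    A recursive implementation of Fibonacci array elements.

    Args:
        n (int): The index of the Fibonacci number to calculate.

    Raises:
        ValueError: If n is less than zero.

    Returns:
        int: The nth Fibonacci number.
    """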
Python PEP 257 - Recommendations for Docstrings
PEP 257 is another one of Python Enhancement Proposals and this one deals with docstring conventions to standardise how they are used. For example, on the subject of module-level docstrings, PEP 257 says:
The docstring for a module should generally list the classes, exceptions and functions (and any other objects) that are exported by the module, with a one-line summary of each. (These summaries generally give less detail than the summary line in the object's docstring.) The docstring for a package (i.e., the docstring of the package's `__init__.py` module) should also list the modules and subpackages exported by the package.
Note that
__init__.py
file used to be a required part of a package (pre Python 3.3) where a package was typically implemented as a directory containing an __init__.py
file which got implicitly executed when a package was imported.
So, at the beginning of a module file we can just add a docstring explaining the nature of a module. For example, if
fibonacci()
was included in a module with other functions, our module could have at the start of it:
"""A module for generating numerical sequences of numbers that occur in nature.
Functions:
fibonacci - returns the Fibonacci number for a given integer
golden_ratio - returns the golden ratio number to a given Fibonacci iteration
...
"""
...
The docstring for a function or a module is returned when
calling the help
function and passing its name - for example from the interactive Python console/terminal available
from the command line or when rendering code documentation online
(e.g. see Python documentation).
VS Code also displays the docstring for a function/module in a little help popup window when using tab-completion.
help(fibonacci)
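The output will look roughly like the following (the module name will depend on where the function is defined):
Help on function fibonacci in module __main__:

fibonacci(n)
    Calculate the nth Fibonacci number.

    A recursive implementation of Fibonacci array elements.

    :param n: integer
    :raises ValueError: raised if n is less than zero
    :returns: Fibonacci number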
Exercise: Fix the Docstrings
Look into models.py in VS Code and improve the docstrings for functions daily_mean, daily_min and daily_max. Commit those changes to feature branch style-fixes.
Solution
For example, the improved docstrings for the above functions would contain explanations for parameters and return values.
def daily_mean(data):
    """Calculate the daily mean of a 2D inflammation data array for each day.

    :param data: A 2D data array with inflammation data
        (each row contains measurements for a single patient across all days).
    :returns: An array of mean values of measurements for each day.
    """
    return np.mean(data, axis=0)
def daily_min(data): """Calculate the daily minimum of a 2D inflammation data array for each day. :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days). :returns: An array of minimum values of measurements for each day. """ return np.min(data, axis=0)
def daily_max(data): """Calculate the daily maximum of a 2D inflammation data array for each day. :param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days). :returns: An array of max values of measurements for each day. """ return np.max(data, axis=0)
Once we are happy with our modifications, as usual before staging and committing our changes, we check the status of our working directory:
$ git status
On branch style-fixes
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   inflammation/models.py

no changes added to commit (use "git add" and/or "git commit -a")
As expected, Git tells us we are on branch style-fixes and that we have unstaged and uncommitted changes to inflammation/models.py. Let's commit them to the local repository.
$ git add inflammation/models.py
$ git commit -m "Docstring improvements."
In the previous exercises, we made some code improvements on the feature branch style-fixes. We have committed our
changes locally but have not pushed this branch remotely for others to have a look at our code before we merge it
onto the develop branch. Let's do that now, namely:
- push style-fixes to GitHub,
- merge style-fixes into develop (once we are happy with the changes),
- push updates to the develop branch to GitHub (to keep our main development branch up to date with the latest developments), and
- finally, merge the develop branch into the stable main branch.
Here is a set of commands that will achieve the above actions (remember to use git status often in between other
Git commands to double-check which branch you are on and its status):
$ git push -u origin style-fixes
$ git checkout develop
$ git merge style-fixes
$ git push origin develop
$ git checkout main
$ git merge develop
$ git push origin main
Typical Code Development Cycle
What you’ve done in the exercises in this episode mimics a typical software development workflow - you work locally on code on a feature branch, test it to make sure it works correctly and as expected, then record your changes using version control and share your work with others via a centrally backed-up repository. Other team members work on their feature branches in parallel and similarly share their work with colleagues for discussions. Different feature branches from around the team get merged onto the main development branch, often in small and quick development cycles. After further testing and verifying that no code has been broken by the new features - the development branch gets merged onto the stable main branch, where new features finally resurface to end-users in bigger “software release” cycles.
Key Points
Always assume that someone else will read your code at a later date, including yourself.
Community coding conventions help you create more readable software projects that are easier to contribute to.
Python Enhancement Proposals (or PEPs) describe a recommended convention or specification for how to do something in Python.
Style checking to ensure code conforms to coding conventions is often part of IDEs.
Consistency with the style guide is important - whichever style you choose.
Verifying Code Style Using Linters
Overview
Teaching: 15 min
Exercises: 10 minQuestions
What tools can help with maintaining a consistent code style?
How can we automate code style checking?
Objectives
Use code linting tools to verify a program’s adherence to a Python coding style convention.
Verifying Code Style Using Linters
We've seen how we can use VS Code to help us format our Python code in a consistent style.
This aids reusability: consistent-looking code is easier to read and understand, and therefore easier to modify.
We can also use tools that identify consistency issues and report on them for us - code linters.
Linters analyse source code to identify and report on stylistic and even programming errors. Let's look at a very
widely used one called Pylint.
First, let’s ensure we are on the style-fixes
branch once again.
$ git checkout style-fixes
Pylint is just a Python package so we can install it in our virtual environment using:
$ pip3 install pylint
$ pylint --version
We should see the version of Pylint, something like:
pylint 2.13.3
...
We should also update our requirements.txt
with this new addition:
$ pip3 freeze > requirements.txt
Pylint is a command-line tool that can help with our code in many ways:
- Check PEP8 compliance: whilst in-IDE context-sensitive highlighting such as that provided via VS Code helps us stay consistent with PEP8 as we write code, this tool provides a full report
- Perform basic error detection: Pylint can look for certain Python type errors
- Check variable naming conventions: Pylint often goes beyond PEP8 to include other common conventions, such as naming variables outside of functions in upper case
- Customisation: you can specify which errors and conventions you wish to check for, and those you wish to ignore
Pylint can also identify code smells.
How Does Code Smell?
There are many ways that code can exhibit bad design whilst not breaking any rules and working correctly. A code smell is a characteristic that indicates that there is an underlying problem with source code, e.g. large classes or methods, methods with too many parameters, duplicated statements in both if and else blocks of conditionals, etc. They aren’t functional errors in the code, but rather are certain structures that violate principles of good design and impact design quality. They can also indicate that code is in need of maintenance and refactoring.
The phrase has its origins in Chapter 3 “Bad smells in code” by Kent Beck and Martin Fowler in Fowler, Martin (1999). Refactoring. Improving the Design of Existing Code. Addison-Wesley. ISBN 0-201-48567-2.
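As a small, made-up illustration in Python, the function below works correctly but "smells" - the same statement is duplicated in both branches of a conditional and could be lifted out:
import numpy as np

def report_daily_mean(data, verbose):
    """Print the daily mean of a 2D inflammation data array (hypothetical example)."""
    if verbose:
        mean = np.mean(data, axis=0)  # duplicated in both branches - a code smell
        print("Daily mean:", mean)
    else:
        mean = np.mean(data, axis=0)  # duplicated in both branches - a code smell
        print(mean)
    return mean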
Pylint recommendations are given as warnings or errors, and Pylint also scores the code with an overall mark.
We can look at a specific file (e.g. inflammation-analysis.py
), or a module
(e.g. inflammation
). Let’s look at our inflammation
module and code inside it (namely models.py
and views.py
).
From the project root do:
$ pylint inflammation
You should see an output similar to the following:
************* Module inflammation.models
inflammation/models.py:5:82: C0303: Trailing whitespace (trailing-whitespace)
inflammation/models.py:6:66: C0303: Trailing whitespace (trailing-whitespace)
inflammation/models.py:34:0: C0305: Trailing newlines (trailing-newlines)
************* Module inflammation.views
inflammation/views.py:4:0: W0611: Unused numpy imported as np (unused-import)
------------------------------------------------------------------
Your code has been rated at 8.00/10 (previous run: 8.00/10, +0.00)
Your own outputs of the above commands may vary depending on how you have implemented and fixed the code in previous exercises and the coding style you have used.
The five-character codes, such as C0303, are unique identifiers for warnings, with the first character indicating
the type of warning. There are five different types of warnings that Pylint looks for, and you can get a summary of
them by doing:
$ pylint --long-help
Near the end you’ll see:
Output:
Using the default text output, the message format is :
MESSAGE_TYPE: LINE_NUM:[OBJECT:] MESSAGE
There are 5 kind of message types :
* (C) convention, for programming standard violation
* (R) refactor, for bad code smell
* (W) warning, for python specific problems
* (E) error, for probable bugs in the code
* (F) fatal, if an error occurred which prevented pylint from doing
further processing.
So for an example of a Pylint Python-specific warning
, see the “W0611: Unused numpy imported
as np (unused-import)” warning.
It is important to note that while tools such as Pylint are great at giving you a starting point to consider how to improve your code, they won’t find everything that may be wrong with it.
How Does Pylint Calculate the Score?
The Python formula used is (with the variables representing numbers of each type of infraction and statement indicating the total number of statements):
10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
For example, with a total of 31 statements in models.py and views.py, and the infractions shown above, we get a score of 8.00. Note whilst there is a maximum score of 10, given the formula, there is no minimum score - it's quite possible to get a negative score!
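To get a feel for how the formula behaves, here is a quick calculation in Python - the infraction counts below are invented for illustration and are not taken from our report:
def pylint_score(error, warning, refactor, convention, statement):
    """Reproduce Pylint's default scoring formula."""
    return 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)

# Hypothetical counts: 1 error, 2 warnings, 0 refactors, 3 conventions across 40 statements
print(pylint_score(1, 2, 0, 3, 40))  # 7.5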
Exercise: Further Improve Code Style of Our Project
Select and fix a few of the issues with our code that Pylint detected. Make sure you do not break the rest of the code in the process and that the code still runs.
Make sure you commit and push requirements.txt and any files with further code style improvements you made, and
merge them onto your develop and main branches.
$ git add requirements.txt
$ git commit -m "Added Pylint library"
$ git push origin style-fixes
$ git checkout develop
$ git merge style-fixes
$ git push origin develop
$ git checkout main
$ git merge develop
$ git push origin main
Optional Exercise: Improve Code Style of Your Other Python Projects
If you have a Python project you are working on or you worked on in the past, run it past Pylint to see what issues with your code are detected, if any.
It is possible to automate these kind of code checks with GitHub’s Continuous Integration service GitHub Actions - we will come back to automated linting in the episode on “Diagnosing Issues and Improving Robustness”.
Key Points
Use linting tools on the command line (or via continuous integration) to automatically check your code style.
Section 2: Ensuring Correctness of Software at Scale
Overview
Teaching: 5 min
Exercises: 0 minQuestions
What should we do to ensure our code is correct?
Objectives
Introduce the testing tools, techniques, and infrastructure that will be used in this section.
We’ve previously looked at building a suitable environment for collaboratively developing software. In this section we’ll look at testing approaches that help us ensure that the software we write is actually correct, and how we can diagnose and fix issues once faults are found. Using such approaches requires us to change our practice of development. This can take time, but potentially saves us considerable time in the medium to long term by allowing us to more comprehensively and rapidly find such faults, as well as giving us greater confidence in the correctness of our code. We will also make use of techniques and infrastructure that allow us to do this in a scalable and more performant way.
In this section we will:
- Make use of a test framework called Pytest, a free and open source Python library to help us structure and run automated tests.
- Design, write and run unit tests using pytest to verify the correct behaviour of code and identify faults, making use of test parameterisation to increase the number of different test cases we can run.
- Automatically run a set of unit tests using GitHub Actions - a Continuous Integration infrastructure that allows us to automate tasks when things happen to our code, such as running those tests when a new commit is made to a code repository.
- Use VS Code's integrated debugger to help us locate a fault in our code while it is running.
Key Points
Using testing requires us to change our practice of code development, but saves time in the long run by allowing us to more comprehensively and rapidly find faults in code, as well as giving us greater confidence in the correctness of our code.
The use of test techniques and infrastructures such as parameterisation and Continuous Integration can help scale and further automate our testing process.
Automatically Testing Software
Overview
Teaching: 30 min
Exercises: 20 minQuestions
Does the code we develop work the way it should do?
Can we (and others) verify these assertions for themselves?
To what extent are we confident of the accuracy of results that appear in publications?
Objectives
Explain the reasons why testing is important
Describe the three main types of tests and what each is used for
Implement and run unit tests to verify the correct behaviour of program functions
Introduction
Being able to demonstrate that a process generates the right results is important in any field of research, whether it’s software generating those results or not. So when writing software we need to ask ourselves some key questions:
- Does the code we develop work the way it should do?
- Can we (and others) verify these assertions for themselves?
- Perhaps most importantly, to what extent are we confident of the accuracy of results that appear in publications?
If we are unable to demonstrate that our software fulfils these criteria, why would anyone use it? Having well-defined tests for our software is useful for this, but manually testing software can prove an expensive process.
Automation can help, and automation where possible is a good thing - it enables us to define a potentially complex process in a repeatable way that is far less prone to error than manual approaches. Once defined, automation can also save us a lot of effort, particularly in the long run. In this episode we’ll look into techniques of automated testing to improve the predictability of a software change, make development more productive, and help us produce code that works as expected and produces desired results.
What Is Software Testing?
For the sake of argument, if each line we write has a 99% chance of being right, then a 70-line program will be wrong more than half the time. We need to do better than that, which means we need to test our software to catch these mistakes.
We can and should extensively test our software manually, and manual testing is well-suited to testing aspects such as graphical user interfaces and reconciling visual outputs against inputs. However, even with a good test plan, manual testing is very time consuming and prone to error. Another style of testing is automated testing, where we write code that tests the functions of our software. Since computers are very good and efficient at automating repetitive tasks, we should take advantage of this wherever possible.
There are three main types of automated tests:
- Unit tests are tests for fairly small and specific units of functionality, e.g. determining that a particular function returns output as expected given specific inputs.
- Functional or integration tests work at a higher level, and test functional paths through your code, e.g. given some specific inputs, a set of interconnected functions across a number of modules (or the entire code) produce the expected result. These are particularly useful for exposing faults in how functional units interact.
- Regression tests make sure that your program’s output hasn’t changed, for example after making changes to your code to add new functionality or fix a bug.
For the purposes of this course, we’ll focus on unit tests. But the principles and practices we’ll talk about can be built on and applied to the other types of tests too.
Set Up a New Feature Branch for Writing Tests
We're going to look at how to run some existing tests and also write some new ones, so let's ensure we're initially on the develop branch we created earlier. Then we'll create a new feature branch called test-suite off the develop branch ('test suite' being a common term for a set of tests), which we'll use for our test-writing work:
$ git checkout develop
$ git branch test-suite
$ git checkout test-suite
Good practice is to write our tests around the same time we write our code on a feature branch. But since the code already exists, we’re creating a feature branch for just these extra tests. Git branches are designed to be lightweight, and where necessary, transient, and use of branches for even small bits of work is encouraged.
Later on, once we’ve finished writing these tests and are convinced they work properly, we’ll merge our test-suite
branch back into develop
.
Inflammation Data Analysis
Let’s go back to our patient inflammation software project. Recall that it is based on a clinical trial of inflammation in patients who have been given a new treatment for arthritis.
There are a number of datasets in the data
directory recording inflammation information in patients (each file representing a different trial), each stored in comma-separated values (CSV) format: each row holds information for a single patient, and the columns represent successive days when inflammation was measured in patients.
Let’s take a quick look at the data now from within the Python command line console. Change directory to the repository root (which should be in your home directory ~/python-intermediate-inflammation
), ensure you have your virtual environment activated in your command line terminal (particularly if opening a new one), and then start the Python console by invoking the Python interpreter without any parameters, e.g.:
$ cd ~/python-intermediate-inflammation
$ source venv/bin/activate
$ python3
The last command will start the Python console within your shell, which enables us to execute Python commands interactively. Inside the console enter the following:
import numpy as np
data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
data.shape
(60, 40)
The data in this case is two-dimensional - it has 60 rows (one for each patient) and 40 columns (one for each day). Each cell in the data represents an inflammation reading on a given day for a patient.
Our patient inflammation application has a number of statistical functions held in inflammation/models.py
: daily_mean()
, daily_max()
and daily_min()
, for calculating the mean average, the maximum, and the minimum values for a given number of rows in our data. For example, the daily_mean()
function looks like this:
def daily_mean(data):
"""Calculate the daily mean of a 2D inflammation data array for each day.
:param data: A 2D data array with inflammation data (each row contains measurements for a single patient across all days).
:returns: An array of mean values of measurements for each day.
"""
return np.mean(data, axis=0)
Here, we use NumPy’s np.mean()
function to calculate the mean vertically across the data (denoted by axis=0
), which is then returned from the function. So, if data
was a NumPy array of three rows like…
[[1, 2],
[3, 4],
[5, 6]]
…the function would return a 1D NumPy array of [3, 4]
- each value representing the mean of each column (which are, coincidentally, the same values as the second row in the above data array).
To show this working with our patient data, we can use the function like this, passing the first four patient rows to the function in the Python console:
from inflammation.models import daily_mean
daily_mean(data[0:4])
Note we use a different form of import
here - only importing the daily_mean
function from our models module
instead of everything. This also has the effect that we can refer to the function using only its name, without needing to include the module name too (i.e. inflammation.models.daily_mean()
).
The above code will return the mean inflammation for each day column across the first four patients (as a 1D NumPy array of shape (40,)):
array([ 0. , 0.5 , 1.5 , 1.75, 2.5 , 1.75, 3.75, 3. , 5.25,
6.25, 7. , 7. , 7. , 8. , 5.75, 7.75, 8.5 , 11. ,
9.75, 10.25, 15. , 8.75, 9.75, 10. , 8. , 10.25, 8. ,
5.5 , 8. , 6. , 5. , 4.75, 4.75, 4. , 3.25, 4. ,
1.75, 2.25, 0.75, 0.75])
The other statistical functions are similar. Note that in real situations functions we write are often likely to be more complicated than these, but simplicity here allows us to reason about what’s happening - and what we need to test - more easily.
Let’s now look into how we can test each of our application’s statistical functions to ensure they are functioning correctly.
Writing Tests to Verify Correct Behaviour
One Way to Do It?
One way to test our functions would be to write a series of checks or tests, each executing a function we want to test with known inputs against known valid results, and throw an error if we encounter a result that is incorrect. So, referring back to our simple daily_mean()
example above, we could use [[1, 2], [3, 4], [5, 6]]
as an input to that function and check whether the result equals [3, 4]
:
import numpy.testing as npt
test_input = np.array([[1, 2], [3, 4], [5, 6]])
test_result = np.array([3, 4])
npt.assert_array_equal(daily_mean(test_input), test_result)
So we use the assert_array_equal()
function - part of Numpy’s testing library - to test that our calculated result is the same as our expected result. This function explicitly checks the array’s shape and elements are the same, and throws an AssertionError
if they are not. In particular, note that we can’t just use ==
or other Python equality methods, since these won’t work properly with NumPy arrays in all cases.
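To see why a plain == comparison is problematic with NumPy arrays, consider this small illustration (run from the Python console):
import numpy as np

a = np.array([3, 4])
b = np.array([3, 4])

print(a == b)  # element-wise comparison: [ True  True ], not a single True/False

# Using the comparison directly in an assert raises a ValueError, because the
# truth value of a multi-element array is ambiguous
try:
    assert (a == b)
except ValueError as error:
    print(error)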
We could then add to this with other tests that use and test against other values, and end up with something like:
test_input = np.array([[2, 0], [4, 0]])
test_result = np.array([2, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)
test_input = np.array([[0, 0], [0, 0]])
test_result = np.array([0, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)
test_input = np.array([[1, 2], [3, 4], [5, 6]])
test_result = np.array([3, 4])
npt.assert_array_equal(daily_mean(test_input), test_result)
However, if we were to enter these in this order, we’ll find we get the following after the first test:
...
AssertionError:
Arrays are not equal
Mismatched elements: 1 / 2 (50%)
Max absolute difference: 1.
Max relative difference: 0.5
x: array([3., 0.])
y: array([2, 0])
This tells us that one element between our generated and expected arrays doesn’t match, and shows us the different arrays.
We could put these tests in a separate script to automate the running of these tests. But a Python script halts at the first failed assertion, so the second and third tests aren’t run at all. It would be more helpful if we could get data from all of our tests every time they’re run, since the more information we have, the faster we’re likely to be able to track down bugs. It would also be helpful to have some kind of summary report: if our set of tests - known as a test suite - includes thirty or forty tests (as it well might for a complex function or library that’s widely used), we’d like to know how many passed or failed.
Going back to our failed first test, what was the issue? As it turns out, the test itself was incorrect, and should have read:
test_input = np.array([[2, 0], [4, 0]])
test_result = np.array([3, 0])
npt.assert_array_equal(daily_mean(test_input), test_result)
Which highlights an important point: as well as making sure our code is returning correct answers, we also need to ensure the tests themselves are correct. Otherwise, we may go on to fix our code only to return an incorrect result that appears to be correct. So a good rule is to make tests simple enough to understand so we can reason about both the correctness of our tests as well as our code. Otherwise, our tests hold little value.
Using a Testing Framework
Keeping these things in mind, here’s a different approach that builds on the ideas we’ve seen so far but uses a unit testing framework. In such a framework we define our tests we want to run as functions, and the framework automatically runs each of these functions in turn, summarising the outputs. And unlike our previous approach, it will run every test regardless of any encountered test failures.
Most people don’t enjoy writing tests, so if we want them to actually do it, it must be easy to:
- Add or change tests,
- Understand the tests that have already been written,
- Run those tests, and
- Understand those tests’ results
Test results must also be reliable. If a testing tool says that code is working when it’s not, or reports problems when there actually aren’t any, people will lose faith in it and stop using it.
Look at tests/test_models.py
:
"""Tests for statistics functions within the Model layer."""
import numpy as np
import numpy.testing as npt
def test_daily_mean_zeros():
"""Test that mean function works for an array of zeros."""
from inflammation.models import daily_mean
test_input = np.array([[0, 0],
[0, 0],
[0, 0]])
test_result = np.array([0, 0])
# Need to use NumPy testing functions to compare arrays
npt.assert_array_equal(daily_mean(test_input), test_result)
def test_daily_mean_integers():
"""Test that mean function works for an array of positive integers."""
from inflammation.models import daily_mean
test_input = np.array([[1, 2],
[3, 4],
[5, 6]])
test_result = np.array([3, 4])
# Need to use NumPy testing functions to compare arrays
npt.assert_array_equal(daily_mean(test_input), test_result)
...
So here, although we have specified two of our tests as separate functions, they run the same assertions. Each of these test functions, in a general sense, is called a test case - these are a specification of:
- Inputs, e.g. the
test_input
NumPy array - Execution conditions - what we need to do to set up the testing environment to run our test, e.g. importing the
daily_mean()
function so we can use it. Note that for clarity of testing environment, we only import the necessary library function we want to test within each test function - Testing procedure, e.g. running
daily_mean()
with ourtest_input
array and usingassert_array_equal()
to test its validity - Expected outputs, e.g. our
test_result
NumPy array that we test against
And here, we’re defining each of these things for a test case we can run independently that requires no manual intervention.
Going back to our list of requirements, how easy is it to run these tests? We can do this using a Python package called pytest
. Pytest is a testing framework that allows you to write test cases using Python. You can use it to test things like Python functions, database operations, or even things like service APIs - essentially anything that has inputs and expected outputs. We’ll be using Pytest to write unit tests, but what you learn can scale to more complex functional testing for applications or libraries.
What About Unit Testing in Other Languages?
Other unit testing frameworks exist for Python, including Nose2 and Unittest, and the approach to unit testing can be translated to other languages as well, e.g. FRUIT for Fortran, JUnit for Java (the original unit testing framework), Catch for C++, etc.
Installing pytest
If you have already installed the pytest
package in your virtual environment, you can skip this step. Otherwise,
as we have seen, we have a couple of options for installing external libraries:
- via VS Code (see “Adding an External Library” section in “Integrated Software Development Environments” episode), or
- via the command line (see “Installing External Libraries in an Environment With
pip
” section in “Virtual Environments For Software Development” episode).
To do it via the command line - exit the Python console first (either with Ctrl-D
or by typing exit()
), then do:
$ pip3 install pytest
Whether we do this via PyCharm or the command line, the results are exactly the same: our virtual environment will now have the pytest
package installed for use.
Writing a Metadata Package Description
Another thing we need to do when automating tests using Pytest is to create a setup.py
in the root of our project repository. A setup.py
file defines metadata about our software, such as its name and current version, and is typically used when writing and distributing Python code as packages. We need this so Pytest is able to locate the Python source files to test in the inflammation
directory.
Create a new file setup.py
in the root directory of the python-intermediate-inflammation
repository, with the following Python content:
from setuptools import setup, find_packages
setup(name="inflammation-analysis", version='1.0', packages=find_packages())
Next, in the command line we need to install our code as a local package in our environment so Pytest will find it:
$ pip3 install -e .
We should see:
Obtaining file:///Users/alex/python-intermediate-inflammation
Preparing metadata (setup.py) ... done
Installing collected packages: inflammation-analysis
Running setup.py develop for inflammation-analysis
Successfully installed inflammation-analysis-1.0
This will install our code, as a package, within our virtual environment. We're installing it as a 'development' package (using the -e parameter in the above pip3 install command), which means that as we develop and need to test our code we don't need to install it "properly" as a full package each time we make a change (the -e stands for 'editable').
Running the Tests
Now we can run these tests using pytest
:
$ pytest tests/test_models.py
So here, we specify the tests/test_models.py
file to run the tests in that file
explicitly.
============================================== test session starts =====================================================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/alex/python-intermediate-inflammation
plugins: anyio-3.3.4
collected 2 items
tests/test_models.py .. [100%]
=============================================== 2 passed in 0.79s ======================================================
Pytest looks for functions whose names also start with the letters ‘test_’ and runs each one. Notice the ..
after our test script:
- If the function completes without an assertion being triggered, we count the test as a success (indicated as
.
). - If an assertion fails, or we encounter an error, we count the test as a failure (indicated as
F
). The error is included in the output so we can see what went wrong.
So if we have many tests, we essentially get a report indicating which tests succeeded or failed. Going back to our list of requirements, do we think these results are easy to understand?
Exercise: Write Some Unit Tests
We already have a couple of test cases in tests/test_models.py that test the daily_mean() function. Looking at inflammation/models.py, write at least two new test cases that test the daily_max() and daily_min() functions, adding them to tests/test_models.py. Here are some hints:
- You could choose to format your functions very similarly to daily_mean(), defining test input and expected result arrays followed by the equality assertion.
- Try to choose cases that are suitably different, and remember that these functions take a 2D array and return a 1D array with each element the result of analysing each column of the data.
Once added, run all the tests again with pytest tests/test_models.py, and you should also see your new tests pass.
Solution
...
def test_daily_max():
    """Test that max function works for an array of positive integers."""
    from inflammation.models import daily_max

    test_input = np.array([[4, 2, 5],
                           [1, 6, 2],
                           [4, 1, 9]])
    test_result = np.array([4, 6, 9])
    npt.assert_array_equal(daily_max(test_input), test_result)


def test_daily_min():
    """Test that min function works for an array of positive and negative integers."""
    from inflammation.models import daily_min

    test_input = np.array([[ 4, -2, 5],
                           [ 1, -6, 2],
                           [-4, -1, 9]])
    test_result = np.array([-4, -6, 2])
    npt.assert_array_equal(daily_min(test_input), test_result)
...
The big advantage is that as our code develops we can update our test cases and commit them back, ensuring that ourselves (and others) always have a set of tests to verify our code at each step of development. This way, when we implement a new feature, we can check a) that the feature works using a test we write for it, and b) that the development of the new feature doesn’t break any existing functionality.
What About Testing for Errors?
There are some cases where seeing an error is actually the correct behaviour, and Python allows us to test for exceptions. Add this test in tests/test_models.py
:
import pytest
...
def test_daily_min_string():
"""Test for TypeError when passing strings"""
from inflammation.models import daily_min
with pytest.raises(TypeError):
error_expected = daily_min([['Hello', 'there'], ['General', 'Kenobi']])
Note that you need to import the pytest
library at the top of our test_models.py
file with import pytest
so that we can use pytest
’s raises()
function.
Run all your tests as before.
Since we’ve installed pytest
to our environment, we should also regenerate our requirements.txt
:
$ pip3 freeze --exclude-editable > requirements.txt
We use --exclude-editable
here to ensure our locally installed inflammation-analysis
package is not included in this list of installed packages, since it is not required for running the software, and would cause problems for others reusing this environment.
Finally, let’s commit our new test_models.py
file, requirements.txt
file, and test cases to our test-suite
branch, and push this new branch and all its commits to GitHub:
$ git add requirements.txt setup.py tests/test_models.py
$ git commit -m "Add initial test cases for daily_max() and daily_min()"
$ git push -u origin test-suite
Why Should We Test Invalid Input Data?
Testing the behaviour of inputs, both valid and invalid, is a really good idea and is known as data validation. Even if you are developing command line software that cannot be exploited by malicious data entry, testing behaviour against invalid inputs prevents generation of erroneous results that could lead to serious misinterpretation (as well as saving time and compute cycles which may be expensive for longer-running applications). It is generally best not to assume your user’s inputs will always be rational.
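As a sketch of what validating inputs could look like in code (our current models.py does not do this - the function below is hypothetical), a statistical function could check its input before computing anything:
import numpy as np

def daily_mean_validated(data):
    """Hypothetical variant of daily_mean() that validates its input first."""
    data = np.asarray(data)
    if data.ndim != 2:
        raise ValueError('inflammation data should be a 2D array')
    if not np.issubdtype(data.dtype, np.number):
        raise TypeError('inflammation data should be numeric')
    return np.mean(data, axis=0)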
Key Points
The three main types of automated tests are unit tests, functional tests and regression tests.
We can write unit tests to verify that functions generate expected output given a set of specific inputs.
It should be easy to add or change tests, understand and run them, and understand their results.
We can use a unit testing framework like pytest to structure and simplify the writing of tests.
We should test for expected errors in our code.
Testing program behaviour against both valid and invalid inputs is important and is known as data validation.
Scaling Up Unit Testing
Overview
Teaching: 10 min
Exercises: 5 minQuestions
How do we scale up the number of tests we want to run?
How can we know how much of our code is being tested?
Objectives
Use parameterisation to automatically run tests over a set of inputs
Use code coverage to understand how much of our code is being tested using unit tests
Introduction
We’re starting to build up a number of tests that test the same function, but just with different parameters. However, continuing to write a new function for every single test case isn’t likely to scale well as our development progresses. How can we make our job of writing tests more efficient? And importantly, as the number of tests increases, how can we determine how much of our code base is actually being tested?
Parameterising Our Unit Tests
So far, we’ve been writing a single function for every new test we need. But when we simply want to use the same test code but with different data for another test, it would be great to be able to specify multiple sets of data to use with the same test code. Test parameterisation gives us this.
So instead of writing a separate function for each different test, we can parameterise the tests with multiple test inputs. For example, in tests/test_models.py
let us rewrite the test_daily_mean_zeros()
and test_daily_mean_integers()
into a single test function:
@pytest.mark.parametrize(
"test, expected",
[
([[0, 0], [0, 0], [0, 0]], [0, 0]),
([[1, 2], [3, 4], [5, 6]], [3, 4]),
])
def test_daily_mean(test, expected):
"""Test mean function works for array of zeroes and positive integers."""
from inflammation.models import daily_mean
npt.assert_array_equal(daily_mean(np.array(test)), np.array(expected))
Here, we use pytest's mark capability to add metadata to this specific test - in this case, marking that it's a parameterised test. parametrize() is actually a Python decorator. A decorator, when applied to a function, adds some functionality to it when it is called, and here, what we want to do is specify multiple input and expected output test cases so the function is called over each of them automatically when this test is called.
We specify these as arguments to the parametrize() decorator, firstly indicating the names of these arguments that will be passed to the function (test, expected), and secondly the actual arguments themselves that correspond to each of these names - the input data (the test argument), and the expected result (the expected argument). In this case, we are passing in two tests to test_daily_mean() which will be run sequentially.
So our first test will run daily_mean()
on [[0, 0], [0, 0], [0, 0]]
(our test
argument), and check to see if it equals [0, 0]
(our expected
argument). Similarly, our second test will run daily_mean()
with [[1, 2], [3, 4], [5, 6]]
and check it produces [3, 4]
.
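If decorators are new to you, here is a minimal, generic sketch (deliberately unrelated to pytest) showing how a decorator wraps extra behaviour around a function:
def shout(func):
    """A toy decorator that upper-cases whatever the wrapped function returns."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return 'hello, ' + name

print(greet('world'))  # HELLO, WORLD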
The big plus here is that we don’t need to write a separate function for each test case, which means writing our tests scales better as our code becomes more complex and we need to write more tests.
Exercise: Write Parameterised Unit Tests
Rewrite your test functions for
daily_max()
anddaily_min()
to be parameterised, adding in new test cases for each of them.Solution
...
@pytest.mark.parametrize(
    "test, expected",
    [
        ([[0, 0, 0], [0, 0, 0], [0, 0, 0]], [0, 0, 0]),
        ([[4, 2, 5], [1, 6, 2], [4, 1, 9]], [4, 6, 9]),
        ([[4, -2, 5], [1, -6, 2], [-4, -1, 9]], [4, -1, 9]),
    ])
def test_daily_max(test, expected):
    """Test max function works for zeroes, positive integers, mix of positive/negative integers."""
    from inflammation.models import daily_max
    npt.assert_array_equal(daily_max(np.array(test)), np.array(expected))


@pytest.mark.parametrize(
    "test, expected",
    [
        ([[0, 0, 0], [0, 0, 0], [0, 0, 0]], [0, 0, 0]),
        ([[4, 2, 5], [1, 6, 2], [4, 1, 9]], [1, 1, 2]),
        ([[4, -2, 5], [1, -6, 2], [-4, -1, 9]], [-4, -6, 2]),
    ])
def test_daily_min(test, expected):
    """Test min function works for zeroes, positive integers, mix of positive/negative integers."""
    from inflammation.models import daily_min
    npt.assert_array_equal(daily_min(np.array(test)), np.array(expected))
...
Try them out!
Let’s commit our revised test_models.py
file and test cases to our test-suite
branch (but don’t push them to remote yet!):
$ git add tests/test_models.py
$ git commit -m "Add parameterisation mean, min, max test cases"
Using Code Coverage to Understand How Much of Our Code is Tested
Pytest can’t think of test cases for us. We still have to decide what to test and how many tests to run. Our best guide here is economics: we want the tests that are most likely to give us useful information that we don’t already have. For example, if daily_mean(np.array([[2, 0], [4, 0]]))
works, there’s probably not much point testing daily_mean(np.array([[3, 0], [4, 0]]))
, since it’s hard to think of a bug that would show up in one case but not in the other.
Now, we should try to choose tests that are as different from each other as possible, so that we force the code we’re testing to execute in all the different ways it can - to ensure our tests have a high degree of code coverage.
A simple way to check the code coverage for a set of tests is to use pytest
to tell us how many statements in our code are being tested. By installing a Python package called pytest-cov
into our virtual environment, which extends pytest with coverage reporting, we can find this out:
$ pip3 install pytest-cov
$ pytest --cov=inflammation.models tests/test_models.py
So here, we specify the additional named argument --cov
to pytest
specifying the code to analyse for test coverage.
============================= test session starts ==============================
platform darwin -- Python 3.9.6, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/alex/python-intermediate-inflammation
plugins: anyio-3.3.4, cov-3.0.0
collected 9 items
tests/test_models.py ......... [100%]
---------- coverage: platform darwin, python 3.9.6-final-0 -----------
Name Stmts Miss Cover
--------------------------------------------
inflammation/models.py 9 1 89%
--------------------------------------------
TOTAL 9 1 89%
============================== 9 passed in 0.26s ===============================
Here we can see that our tests are doing very well - 89% of statements in inflammation/models.py
have been executed. But which statements are not being tested? The additional argument --cov-report term-missing
can tell us:
$ pytest --cov=inflammation.models --cov-report term-missing tests/test_models.py
...
Name Stmts Miss Cover Missing
------------------------------------------------------
inflammation/models.py 9 1 89% 18
------------------------------------------------------
TOTAL 9 1 89%
...
So there’s still one statement not being tested at line 18, and it turns out it’s in the function load_csv()
. Here
we should consider whether or not to write a test for this function, and, in general, any other functions that may not be tested. Of course, if there are hundreds or thousands of lines that are not covered it may not be feasible to write tests for them all. But we should prioritise the ones for which we write tests, considering how often they’re used, how complex they are, and importantly, the extent to which they affect our program’s results.
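For instance, a test for load_csv() might look something like the sketch below. This assumes load_csv() simply takes a filename and reads the CSV into a NumPy array (check the actual implementation in models.py first), and that np and npt are imported at the top of the test file as before; tmp_path is a built-in pytest fixture providing a temporary directory:
def test_load_csv(tmp_path):
    """Sketch of a test that load_csv() reads a CSV file into a NumPy array."""
    from inflammation.models import load_csv

    # Write a tiny CSV file into pytest's temporary directory
    csv_file = tmp_path / 'tiny.csv'
    csv_file.write_text('1,2\n3,4\n')

    npt.assert_array_equal(load_csv(csv_file), np.array([[1, 2], [3, 4]]))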
Again, we should also update our requirements.txt
file with our latest package environment, which now also includes pytest-cov
, and commit it:
$ pip3 freeze --exclude-editable > requirements.txt
$ cat requirements.txt
You’ll notice pytest-cov
and coverage
have been added. Let’s commit this file and push our new branch to GitHub:
$ git add requirements.txt
$ git commit -m "Add coverage support"
$ git push origin test-suite
What about Testing Against Indeterminate Output?
What if your implementation depends on a degree of random behaviour? This can be desired within a number of applications in research, particularly in simulations (for example, molecular simulations) or other stochastic behavioural models of complex systems. So how can you test against such systems if the outputs are different when given the same inputs?
One way is to remove the randomness during testing. For those portions of your code that use a language feature or library to generate a random number, you can instead produce a known sequence of numbers instead when testing, to make the results deterministic and hence easier to test against. You could encapsulate this different behaviour in separate functions, methods, or classes and call the appropriate one depending on whether you are testing or not. This is essentially a type of mocking, where you are creating a “mock” version that mimics some behaviour for the purposes of testing.
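As a sketch of the mocking idea, pytest's built-in monkeypatch fixture can temporarily replace a source of randomness with a deterministic stand-in - the roll_dice() function here is hypothetical:
import random

def roll_dice():
    """Hypothetical function under test that depends on randomness."""
    return random.randint(1, 6)

def test_roll_dice_with_fixed_value(monkeypatch):
    """Replace random.randint with a deterministic fake for the duration of the test."""
    monkeypatch.setattr(random, 'randint', lambda low, high: 4)
    assert roll_dice() == 4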
Another way is to control the randomness during testing to provide results that are deterministic - the same each time. Implementations of randomness in computing languages, including Python, are actually never truly random - they are pseudorandom: the sequence of 'random' numbers is typically generated using a mathematical algorithm. A seed value is used to initialise an implementation's random number generator, and from that point, the sequence of numbers is actually deterministic. Many implementations just use the system time as the default seed, but you can set your own. By doing so, the generated sequence of numbers is the same, e.g. using Python's random library to randomly select a sample of ten numbers from a sequence between 0-99:
import random

random.seed(1)
print(random.sample(range(0, 100), 10))
random.seed(1)
print(random.sample(range(0, 100), 10))
Will produce:
[17, 72, 97, 8, 32, 15, 63, 57, 60, 83]
[17, 72, 97, 8, 32, 15, 63, 57, 60, 83]
So since your program’s randomness is essentially eliminated, your tests can be written to test against the known output. The trick, of course, is to ensure that the output being tested against is definitively correct!
The other thing you can do while keeping the random behaviour, is to test the output data against expected constraints of that output. For example, if you know that all data should be within particular ranges, or within a particular statistical distribution type (e.g. normal distribution over time), you can test against that, conducting multiple test runs that take advantage of the randomness to fill the known “space” of expected results. Note that this isn’t as precise or complete, and bear in mind this could mean you need to run a lot of tests which may take considerable time.
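A sketch of this constraint-based approach, again using a hypothetical function, could look like:
import random

def random_percentages(n):
    """Hypothetical function returning n random values between 0 and 100."""
    return [random.uniform(0, 100) for _ in range(n)]

def test_random_percentages_within_range():
    """Run repeatedly and check the outputs satisfy known constraints."""
    for _ in range(100):
        values = random_percentages(10)
        assert len(values) == 10
        assert all(0 <= value <= 100 for value in values)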
Limits to Testing
Like any other piece of experimental apparatus, a complex program requires a much higher investment in testing than a simple one. Putting it another way, a small script that is only going to be used once, to produce one figure, probably doesn’t need separate testing: its output is either correct or not. A linear algebra library that will be used by thousands of people in twice that number of applications over the course of a decade, on the other hand, definitely does. The key is to identify and prioritise testing against what will most affect the code’s ability to generate accurate results.
It’s also important to remember that unit testing cannot catch every bug in an application, no matter how many tests you write. To mitigate this, manual testing is also important. Also remember to test using as much input data as you can, since very often code is developed and tested against the same small sets of data. Increasing the amount of data you test against - from numerous sources - gives you greater confidence that the results are correct.
Our software will inevitably increase in complexity as it develops. Using automated testing where appropriate can save us considerable time, especially in the long term, and allows others to verify against correct behaviour.
Key Points
We can assign multiple inputs to tests using parametrisation.
It’s important to understand the coverage of our tests across our code.
Writing unit tests takes time, so apply them where it makes the most sense.
Continuous Integration for Automated Testing
Overview
Teaching: 45 min
Exercises: 0 minQuestions
How can I apply automated repository testing to scale with development activity?
Objectives
Describe the benefits of using Continuous Integration for further automation of testing
Enable GitHub Actions Continuous Integration for public open source repositories
Use continuous integration to automatically run unit tests and code coverage when changes are committed to a version control repository
Introduction
So far we’ve been manually running our tests as we require. Once we’ve made a change, or added a new feature with accompanying tests, we can re-run our tests, giving ourselves (and others who wish to run them) increased confidence that everything is working as expected. Now we’re going to take further advantage of automation in a way that helps testing scale across a development team with very little overhead, using Continuous Integration.
What is Continuous Integration?
The automated testing we’ve done so far only takes into account the state of the repository we have on our own machines. In a software project involving multiple developers working and pushing changes on a repository, it would be great to know holistically how all these changes are affecting our codebase without everyone having to pull down all the changes and test them. If we also take into account the testing required on different target user platforms for our software and the changes being made to many repository branches, the effort required to conduct testing at this scale can quickly become intractable for a research project to sustain.
Continuous Integration (CI) aims to reduce this burden by further automation, and automation - wherever possible - helps us to reduce errors and makes predictable processes more efficient. The idea is that when a new change is committed to a repository, CI clones the repository, builds it if necessary, and runs any tests. Once complete, it presents a report to let you see what happened.
There are many CI infrastructures and services, free and paid for, and subject to change as they evolve their features. We’ll be looking at GitHub Actions - which unsurprisingly is available as part of GitHub.
Continuous Integration with GitHub Actions
A Quick Look at YAML
YAML is a text format used by GitHub Action workflow files. It is also increasingly used for configuration files and storing other types of data, so it’s worth taking a bit of time looking into this file format.
YAML (a recursive acronym which stands for “YAML Ain’t Markup Language”) is a language designed to be human readable. The three basic things you need to know about YAML to get started with GitHub Actions are key-value pairs, arrays, and maps.
So firstly, YAML files are essentially made up of key-value pairs, in the form key: value
, for example:
name: Kilimanjaro
height_metres: 5892
first_scaled_by: Hans Meyer
In general, you don’t need quotes for strings, but you can use them when you want to explicitly distinguish between numbers and strings, e.g. height_metres: "5892"
would be a string, but in the above example it is an integer. It turns out Hans Meyer isn’t the only first ascender of Kilimanjaro, so one way to add another person as a value to this key is by using YAML arrays, like this:
first_scaled_by:
- Hans Meyer
- Ludwig Purtscheller
An alternative to this format for arrays is the following, which would have the same meaning:
first_scaled_by: [Hans Meyer, Ludwig Purtscheller]
If we wanted to express more information for one of these values we could use a feature known as maps (dictionaries/hashes), which allow us to define nested, hierarchical data structures, e.g.
...
height:
value: 5892
unit: metres
measured:
year: 2008
by: Kilimanjaro 2008 Precise Height Measurement Expedition
...
So here, height itself is made up of three keys value, unit, and measured, with the last of these being another nested map with the keys year and by. Note the convention of using two spaces for indentation, instead of Python’s four.
We can also combine maps and arrays to describe more complex data. Let’s say we want to add more detail to our list of initial ascenders:
...
first_scaled_by:
- name: Hans Meyer
date_of_birth: 22-03-1858
nationality: German
- name: Ludwig Purtscheller
date_of_birth: 22-03-1858
nationality: Austrian
So here we have a YAML array of our two mountaineers, each with additional keys offering more information. As we’ll see shortly, GitHub Actions workflows will use all of these.
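If you want to check how a YAML fragment maps onto key-value pairs, arrays and maps, you can parse it from Python. This assumes the third-party PyYAML package is installed (e.g. via pip3 install pyyaml) - it is not otherwise needed by our project:
import yaml  # provided by the PyYAML package

document = """
first_scaled_by:
  - name: Hans Meyer
    nationality: German
  - name: Ludwig Purtscheller
    nationality: Austrian
"""

data = yaml.safe_load(document)
print(data['first_scaled_by'][1]['name'])  # Ludwig Purtscheller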
Defining Our Workflow
With a GitHub repository there’s a way we can set up CI to run our tests automatically when we commit changes. Let’s do this now by adding a new file to our repository whilst on the test-suite
branch. First, create the new directories .github/workflows
:
$ mkdir -p .github/workflows
This directory is used specifically for GitHub Actions, allowing us to specify any number of workflows that can be run under a variety of conditions, which is also written using YAML. So let’s add a new YAML file called main.yml
(note its extension is .yml
without the a
) within the new .github/workflows
directory:
name: CI
# We can specify which Github events will trigger a CI build
on: push
# now define a single job 'build' (but could define more)
jobs:
build:
# we can also specify the OS to run tests on
runs-on: ubuntu-latest
# a job is a seq of steps
steps:
# Next we need to check out our repository, and set up Python
# A 'name' is just an optional label shown in the log - helpful to clarify progress - and can be anything
- name: Checkout repository
uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install Python dependencies
run: |
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
pip3 install -e .
- name: Test with PyTest
run: |
pytest --cov=inflammation.models tests/test_models.py
Note: be sure to create this file as main.yml
within the newly created .github/workflows
directory, or it won’t work!
So as well as giving our workflow a name - CI - we indicate with on
that we want this workflow to run when we push
commits to our repository. The workflow itself is made of a single job
named build
, and we could define any number of jobs after this one if we wanted, and each one would run in parallel.
Next, we define what our build job will do. With runs-on
we first state which operating systems we want to use, in this case just Ubuntu for now. We’ll be looking at ways we can scale this up to testing on more systems later.
Lastly, we define the steps that our job will undertake in turn, to set up the job’s environment and run our tests. You can think of the job’s environment initially as a blank slate: much like a freshly installed machine (albeit virtual) with very little installed on it, we need to prepare it with what it needs to be able to run our tests. These steps are:
- Checkout repository for the job: uses indicates that we want to use a GitHub Action called checkout that does this
- Set up Python 3.9: here we use the setup-python Action, indicating that we want Python version 3.9
- Install latest version of pip, dependencies, and our inflammation package: in order to locally install our inflammation package it’s good practice to upgrade the version of pip that is present first, then we use pip to install our package dependencies. Once installed, we can use pip3 install -e . as before to install our own package. We use run here to run these commands in the CI shell environment
- Test with PyTest: lastly, we run pytest, with the same arguments we used manually before
What about other Actions?
Our workflow here uses standard GitHub Actions (indicated by
actions/*
). Beyond the standard set of actions, others are available via the GitHub Marketplace. It contains many third-party actions (as well as apps) that you can use with GitHub for many tasks across many programming languages, particularly for setting up environments for running tests, code analysis and other tools, setting up and using infrastructure (for things like Docker or Amazon’s AWS cloud), or even managing repository issues. You can even contribute your own.
Triggering a Build on GitHub Actions
Now if we commit and push this change a CI run will be triggered:
$ git add .github
$ git commit -m "Add GitHub Actions configuration"
$ git push
Since we are only committing the GitHub Actions configuration file to the test-suite
branch for the moment, only the contents of this branch will be used for CI. We can pass this file upstream into other branches (i.e. via merges) when we’re happy it works, which will then allow the process to run automatically on these other branches. This again highlights the usefulness of the feature-branch model - we can work in isolation on a feature until it’s ready to be passed upstream without disrupting development on other branches, and in the case of CI, we’re starting to see its scaling benefits across a larger scale development team working across potentially many branches.
Checking Build Progress and Reports
Handily, we can see the progress of the build from our repository on GitHub by selecting the test-suite
branch from the dropdown menu (which currently says main
), and then selecting commits
(located just above the code directory listing on the right, alongside the last commit message and a small image of a timer).
You’ll see a list of commits for this branch, and likely see an orange marker next to the latest commit (clicking on it yields Some checks haven’t completed yet
) meaning the build is still in progress. This is a useful view, as over time, it will give you a history of commits, who did them, and whether the commit resulted in a successful build or not.
Hopefully after a while, the marker will turn into a green tick indicating a successful build. Clicking it gives you even more information about the build, and selecting Details
link takes you to a complete log of the build and its output.
The logs are actually truncated; selecting the arrows next to the entries - which are the name
labels we specified in the main.yml
file - will expand them with more detail, including the output from the actions performed.
GitHub Actions offers these continuous integration features as a free service, with 2,000 Actions minutes a month on as many public repositories as you like. Paid levels are available too.
Scaling Up Testing Using Build Matrices
Now we have our CI configured and building, we can use a feature called build matrices which really shows the value of using CI to test at scale.
Suppose the intended users of our software use either Ubuntu, Mac OS, or Windows, and either have Python version 3.8 or 3.9 installed, and we want to support all of these. Assuming we have a suitable test suite, it would take a considerable amount of time to set up testing platforms to run our tests across all these platform combinations. Fortunately, CI can do the hard work for us very easily.
Using a build matrix we can specify testing environments and parameters (such as operating system, Python version, etc.) and new jobs will be created that run our tests for each permutation of these.
Let’s see how this is done using GitHub Actions. To support this, we define a strategy as a matrix of operating systems and Python versions, and use matrix.os and matrix.python-version to reference these configuration possibilities instead of hardcoded values. Then we replace the runs-on and python-version parameters to refer to the values from the matrix. So, our .github/workflows/main.yml should look like the following:
...
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
python-version: [3.8, 3.9]
...
runs-on: ${{ matrix.os }}
...
# a job is a seq of steps
steps:
# Next we need to check out our repository, and set up Python
# A 'name' is just an optional label shown in the log - helpful to clarify progress - and can be anything
- name: Checkout repository
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
...
The ${{ }}
are used as a means to reference configuration values from the matrix. This way, every possible permutation of Python versions 3.8 and 3.9 with the Ubuntu, Mac OS and Windows operating systems will be tested and we can expect 6 build jobs in total.
Let’s commit and push this change and see what happens:
$ git add .github/workflows/main.yml
$ git commit -m "Add GA build matrix for os and Python version"
$ git push
If we go to our GitHub build now, we can see that a new job has been created for each permutation.
Note that all jobs run in parallel (up to the limit allowed by our account), which potentially saves us a lot of time waiting for testing results. Overall, this approach allows us to massively scale our automated testing across the platforms we wish to test.
Merging Back to develop Branch
Now we’re happy with our test suite, we can merge this work (which currently only exists on our test-suite branch) with our parent develop branch. Again, this reflects us working with impunity on a logical unit of work, involving multiple commits, on a separate feature branch until it’s ready to be escalated to the develop branch:
$ git checkout develop
$ git merge test-suite
Then, assuming no conflicts we can push these changes back to the remote repository as we’ve done before:
$ git push origin develop
Now that these changes have migrated to our parent develop branch, develop will also inherit the configuration to run CI builds, so these will run automatically on this branch as well.
This highlights a big benefit of CI when you perform merges (and apply pull requests). As new branch code is merged into upstream branches like develop
and main
these newly integrated code changes are automatically tested together with existing code - which of course may also have changed in the meantime!
Key Points
Continuous Integration can run tests automatically to verify changes as code develops in our repository.
CI builds are typically triggered by commits pushed to a repository.
We need to write a configuration file to inform a CI service what to do for a build.
Builds can be enabled and configured separately for each branch.
We can run - and get reports from - different CI infrastructure builds simultaneously.
Diagnosing Issues and Improving Robustness
Overview
Teaching: 30 min
Exercises: 20 min
Questions
Once we know our program has errors, how can we locate them in the code?
How can we make our programs more resilient to failure?
Objectives
Use a debugger to explore behaviour of a running program
Describe and identify edge and corner test cases and explain why they are important
Apply error handling and defensive programming techniques to improve robustness of a program
Integrate linting tool style checking into a continuous integration job
Introduction
Unit testing can tell us something is wrong in our code and give a rough idea of where the error is by which test(s) are failing. But it does not tell us exactly where the problem is (i.e. what line of code), or how it came about. To give us a better idea of what is going on, we can:
- output program state at various points, e.g. by using print statements to output the contents of variables,
- use a logging capability to output the state of everything as the program progresses, or
- look at intermediately generated files.
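As a quick illustration of the logging approach, here is a minimal sketch using Python’s standard logging module (the process_data() function is invented for this example and is not part of the inflammation code):
import logging

# Configure logging once, near the top of the program; DEBUG shows everything
logging.basicConfig(level=logging.DEBUG)

def process_data(data):
    logging.debug('Input data: %s', data)
    result = [x * 2 for x in data]
    logging.debug('Result: %s', result)
    return result

process_data([1, 2, 3])
Unlike scattered print statements, log messages can be switched off or redirected to a file by changing the logging configuration rather than editing the code.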
But such approaches are often time consuming and sometimes not enough to fully pinpoint the issue. In complex programs, like simulation codes, we often need to get inside the code while it is running and explore. This is where using a debugger can be useful.
Setting the Scene
Let us add a new function called patient_normalise()
to our inflammation example to normalise a
given inflammation data array so that all entries fall between 0 and 1.
(Make sure you create a new feature branch for this work off your develop
branch.)
To normalise each patient’s inflammation data we need to divide it by the maximum inflammation
experienced by that patient. To do so, we can add the following code to inflammation/models.py
:
def patient_normalise(data):
"""Normalise patient data from a 2D inflammation data array."""
max = np.max(data, axis=0)
return data / max[:, np.newaxis]
Note: there are intentional mistakes in the above code, which will be detected by further testing and code style checking below so bear with us for the moment!
In the code above, we first go row by row and find the maximum inflammation value for each patient and
store these values in a 1-dimensional NumPy array max
. We then want to use
NumPy’s element-wise division, to divide each value in every row of inflammation data (belonging to the same patient)
by the maximum value for that patient stored in the 1D array max
.
However, we cannot do that division automatically as data
is a 2D array (of shape (60, 40)
) and max
is a 1D array (of shape (60, )
), which means that their shapes are not compatible.
Hence, to make sure that we can perform this division and get the expected result, we need to convert max
to be a
2D array by using the newaxis
index operator to insert a new axis into max
, making it a 2D array of shape (60, 1)
.
Now the division will give us the expected result. Even though the shapes are not identical,
NumPy’s automatic broadcasting
(adjustment of shapes) will make sure that the shape of the 2D max
array is now
“stretched” (“broadcast”) to match that of data
- i.e. (60, 40)
, and element-wise division can be performed.
Broadcasting
The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Be careful, though, to understand how the arrays get stretched to avoid getting unexpected results.
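As a small, self-contained illustration of the broadcasting used above (using a 2×3 array rather than the real inflammation data):
import numpy as np

data = np.array([[1, 2, 3],
                 [4, 5, 6]])       # shape (2, 3)
row_max = np.max(data, axis=1)     # shape (2,) -> [3, 6]

# np.newaxis turns the 1D array into a (2, 1) column vector, which NumPy
# then broadcasts across the columns of 'data' during the division
print(row_max[:, np.newaxis].shape)   # (2, 1)
print(data / row_max[:, np.newaxis])  # rows normalised to [[0.33, 0.67, 1.], [0.67, 0.83, 1.]] (to 2 d.p.)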
Note there is an assumption in this calculation that the minimum value we want is always zero. This is a sensible assumption for this particular application, since the zero value is a special case indicating that a patient experienced no inflammation on a particular day.
Let us now add a new test in tests/test_models.py
to check that the normalisation function is correct for some test data.
@pytest.mark.parametrize(
"test, expected",
[
([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]])
])
def test_patient_normalise(test, expected):
"""Test normalisation works for arrays of one and positive integers.
Assumption that test accuracy of two decimal places is sufficient."""
from inflammation.models import patient_normalise
npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
Note another assumption made here: that a test accuracy of two decimal places is sufficient - so we state this explicitly and have rounded our expected values accordingly. Also, we are using the assert_almost_equal() NumPy testing function instead of assert_array_equal(), since it allows us to test against values that are almost equal: very useful when we have numbers with arbitrary decimal places and are only concerned with a certain degree of precision, like the test case above.
Run the tests again using pytest tests/test_models.py
and you will note that the new test is failing, with an error message that does not give many clues as to what went wrong.
E AssertionError:
E Arrays are not almost equal to 2 decimals
E
E Mismatched elements: 6 / 9 (66.7%)
E Max absolute difference: 0.57142857
E Max relative difference: 1.345
E x: array([[0.14, 0.29, 0.43],
E [0.5 , 0.62, 0.75],
E [0.78, 0.89, 1. ]])
E y: array([[0.33, 0.67, 1. ],
E [0.67, 0.83, 1. ],
E [0.78, 0.89, 1. ]])
tests/test_models.py:53: AssertionError
Let us use a debugger at this point to see what is going on and why the function failed.
Debugging in PyCharm
Think of debugging like performing exploratory surgery - on code! Debuggers allow us to peer at the internal workings of a program, such as variables and other state, as it performs its functions.
Running Tests Within PyCharm
Firstly, to make it easier to track what’s going on, we can set up PyCharm to run and debug our tests instead of running them from the command line. If you have not done so already, you will first need to enable the Pytest framework in PyCharm. You can do this by:
- Select either PyCharm > Preferences (Mac) or File > Settings (Linux, Windows).
- Then, in the preferences window that appears, select Tools -> Python integrated tools from the left.
- Under Testing, for Default test runner select pytest.
- Select OK.
We can now run pytest
over our tests in PyCharm, similarly to how we ran our inflammation-analysis.py
script before. Right-click the test_models.py
file under the tests
directory in the file navigation window on the left, and select Run 'pytest in test_model...'
. You’ll see the results of the tests appear in PyCharm in a bottom panel. If you scroll down in that panel you should see the failed test_patient_normalise()
test result looking something like the following:
We can also run our test functions individually. First, let’s check that our PyCharm running and testing configurations are correct. Select Run
> Edit Configurations...
from the PyCharm menu, and you should see something like the following:
PyCharm allows us to configure multiple ways of running our code. Looking at the figure above, the first of these - inflammation-analysis
under Python
- was configured when we set up how to run our script from within PyCharm. The second - pytest in test_models.py
under Python tests
- is our recent test configuration. If you see just these, you’re good to go. We don’t need any others, so select any others you see and click the -
button at the top to remove them. This will avoid any confusion when running our tests separately. Click OK
when done.
Buffered Output
Whenever a Python program prints text to the terminal or to a file, it first stores this text in an output buffer. When the buffer becomes full or is flushed, the contents of the buffer are written to the terminal / file in one go and the buffer is cleared. This is usually done to increase performance by effectively converting multiple output operations into just one. Printing text to the terminal is a relatively slow operation, so in some cases this can make quite a big difference to the total execution time of a program.
However, using buffered output can make debugging more difficult, as we can no longer be quite sure when a log message will be displayed. In order to make debugging simpler, PyCharm automatically adds the environment variable
PYTHONUNBUFFERED
we see in the screenshot above, which disables output buffering.
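If you ever need the same behaviour outside PyCharm, a couple of standard Python options are sketched below (illustrative only - the course itself relies on PyCharm setting the variable for you):
# Flush a single message immediately, bypassing the output buffer
print('About to normalise the data...', flush=True)

# Alternatively, disable buffering for a whole run from the shell, e.g.:
#   PYTHONUNBUFFERED=1 python inflammation-analysis.py
# or invoke the interpreter with the -u flag:
#   python -u inflammation-analysis.py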
Now, if you select the green arrow next to a test function in our test_models.py
script in PyCharm, and select Run 'pytest in test_model...'
, we can run just that test:
Click on the “run” button next to test_patient_normalise
, and you will be able to see that PyCharm runs just that test function, and we see the same AssertionError
that we saw before.
Running the Debugger
Now we want to use the debugger to investigate what is happening inside the patient_normalise
function. To do this we will add a breakpoint in the code. A breakpoint will pause execution at that point allowing us to explore the state of the program.
To set a breakpoint, navigate to the models.py
file and move your mouse to the return
statement of the patient_normalise
function. Click to just to the right of the line number for that line and a small red dot will appear, indicating that you have placed a breakpoint on that line.
Now if you select the green arrow next to the test_patient_normalise
function and instead select Debug 'pytest in test_model...'
, you will notice that execution will be paused at the return
statement of patient_normalise
. In the debug panel that appears below, we can now investigate the exact state of the program prior to it executing this line of code.
In the debug panel below, in the Debugger tab you will be able to see two sections that look something like the following:
- The Frames section on the left, which shows the call stack (the chain of functions that have been executed to lead to this point). We can traverse this chain of functions if we wish, to observe the state of each function.
- The Variables section on the right, which displays the local and global variables currently in memory. You will be able to see the data array that is input to the patient_normalise function, as well as the max local array that was created to hold the maximum inflammation values for each patient.
We also have the ability to run any Python code we wish at this point to explore the state of the program even further! This is useful if you want to view a particular combination of variables, or perhaps a single element or slice of an array to see what went wrong. Select the Console tab in the panel (next to the Debugger tab), and you’ll be presented with a Python prompt. Try entering the expression max[:, np.newaxis] into the console, and you will be able to see the column vector that we are dividing data by in the return line of the function.
Now, looking at the max
variable, we can see that something looks wrong, as the maximum values for each patient do not correspond to the data
array. Recall that the input data
array we are using for the function is
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
So the maximum inflammation for each patient should be [3, 6, 9]
, whereas the debugger shows [7, 8, 9]
. You can see that the latter corresponds exactly to the last column of data
, and we can immediately conclude that we took the maximum along the wrong axis of data
. Now we have our answer, stop the debugging process by selecting the red square at the top right of the main PyCharm window.
So to fix the patient_normalise function in models.py, change axis=0 in the first line of the function to axis=1.
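For reference, the corrected function in inflammation/models.py now looks like this (only the axis argument has changed):
def patient_normalise(data):
    """Normalise patient data from a 2D inflammation data array."""
    max = np.max(data, axis=1)
    return data / max[:, np.newaxis]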
With this fix in place, running all the tests again should result in all tests passing. Navigate back to test_models.py in PyCharm, right-click test_models.py and select Run 'pytest in test_model...'. You should be rewarded with:
NumPy Axis
Getting the axes right in NumPy is not trivial - the following tutorial offers a good explanation on how axes work when applying NumPy functions to arrays.
Corner or Edge Cases
The test case that we have currently written for patient_normalise
is parameterised with a fairly standard data
array. However, when writing your test cases, it is important to consider parametrising them by unusual or extreme
values, in order to test all the edge or corner cases that your code could be exposed to in practice.
Generally speaking, it is at these extreme cases that you will find your code failing, so it’s beneficial to test them beforehand.
What is considered an “edge case” for a given component depends on what that component is meant to do.
In the case of the patient_normalise function, the goal is to normalise an array of numbers.
For numerical values, extreme cases could be zeros, very large or small values, not-a-number (NaN
) or infinity values.
Since we are specifically considering an array of values, an edge case could be that all the numbers of the array are equal.
For all the given edge cases you might come up with, you should also consider their likelihood of occurrence.
It is often too much effort to exhaustively test a given function against every possible input, so you should prioritise edge cases that are likely to occur. For our patient_normalise
function, some common edge cases might be the occurrence of zeros,
and the case where all the values of the array are the same.
When you are considering edge cases to test for, try also to think about what might break your code.
For patient_normalise
we can see that there is a division by the maximum inflammation value for each patient,
so this will clearly break if we are dividing by zero here, resulting in NaN
values in the normalised array.
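You can see this failure mode directly in a Python console (a standalone illustration, independent of our code):
import numpy as np

zeros = np.array([[0.0, 0.0, 0.0]])
row_max = np.max(zeros, axis=1)          # [0.]

# Dividing 0.0 by 0.0 produces NaN element-wise, along with a RuntimeWarning
print(zeros / row_max[:, np.newaxis])    # [[nan nan nan]]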
With all this in mind, let us add a few edge cases to our parametrisation of test_patient_normalise
.
We will add two extra tests, corresponding to an input array of all zeros and an input array of all ones.
@pytest.mark.parametrize(
"test, expected",
[
([[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 0], [0, 0, 0]]),
([[1, 1, 1], [1, 1, 1], [1, 1, 1]], [[1, 1, 1], [1, 1, 1], [1, 1, 1]]),
([[1, 2, 3], [4, 5, 6], [7, 8, 9]], [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]]),
])
Running the tests now from the command line results in the following assertion error, due to the division by zero as we predicted.
E AssertionError:
E Arrays are not almost equal to 2 decimals
E
E x and y nan location mismatch:
E x: array([[nan, nan, nan],
E [nan, nan, nan],
E [nan, nan, nan]])
E y: array([[0, 0, 0],
E [0, 0, 0],
E [0, 0, 0]])
tests/test_models.py:88: AssertionError
Helpfully, you will notice that NumPy also provides a run-time warning for division by zero, which you can find near the bottom of the log:
RuntimeWarning: invalid value encountered in true_divide
return data / max[:, np.newaxis]
How can we fix this? Luckily, there is a NumPy function that is useful here, np.isnan(), which we can use to replace all the NaN values with our desired result, which is 0. We can also silence the run-time warning using np.errstate:
...
def patient_normalise(data):
"""
Normalise patient data from a 2D inflammation data array.
NaN values are ignored, and normalised to 0.
Negative values are rounded to 0.
"""
max = np.nanmax(data, axis=1)
with np.errstate(invalid='ignore', divide='ignore'):
normalised = data / max[:, np.newaxis]
normalised[np.isnan(normalised)] = 0
normalised[normalised < 0] = 0
return normalised
...
Exercise: Exploring Tests for Edge Cases
Think of some more suitable edge cases to test our patient_normalise() function and add them to the parametrised tests. After you have finished, remember to commit your changes.
Possible Solution
@pytest.mark.parametrize(
    "test, expected",
    [
        (
            [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
            [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
        ),
        (
            [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
            [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
        ),
        (
            [[float('nan'), 1, 1], [1, 1, 1], [1, 1, 1]],
            [[0, 1, 1], [1, 1, 1], [1, 1, 1]],
        ),
        (
            [[1, 2, 3], [4, 5, float('nan')], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.8, 1, 0], [0.78, 0.89, 1]],
        ),
        (
            [[-1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
        ),
        (
            [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
        ),
    ])
def test_patient_normalise(test, expected):
    """Test normalisation works for arrays of one and positive integers."""
    from inflammation.models import patient_normalise
    npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
...
You could also, for example, test and handle the case of a whole row of NaNs.
Defensive Programming
In the previous section, we made a few design choices for our patient_normalise
function:
- We are implicitly converting any NaN and negative values to 0,
- Normalising a constant 0 array of inflammation results in an identical array of 0s, and
- We don’t warn the user of any of these situations.
This could have been handled differently. We might decide that we do not want to silently make these changes to the data, but instead to explicitly check that the input data satisfies a given set of assumptions (e.g. no negative values) and raise an error if this is not the case. Then we can proceed with the normalisation, confident that our normalisation function will work correctly.
Checking that input to a function is valid via a set of preconditions is one of the simplest forms of
defensive programming which is used as a way of avoiding potential errors.
Preconditions are checked at the beginning of the function to make sure that all assumptions are satisfied.
These assumptions are often based on the value of the arguments, like we have already discussed.
However, in a dynamic language like Python one of the more common preconditions is to check that the arguments of a
function are of the correct type. Currently there is nothing stopping someone from calling patient_normalise
with a string, a dictionary, or another object that is not an ndarray
.
As an example, let us change the behaviour of the patient_normalise()
function to raise an error on negative
inflammation values. Edit the inflammation/models.py
file, and add a precondition check to the beginning of the patient_normalise()
function like so:
...
if np.any(data < 0):
raise ValueError('Inflammation values should not be negative')
...
We can then modify our test function in tests/test_models.py
to check that the function raises the correct exception - a ValueError
- when input to the test contains negative values (i.e. input case [[-1, 2, 3], [4, 5, 6], [7, 8, 9]]
).
The ValueError
exception is part of the standard Python
library and is used to indicate that the function received an argument of the right type, but of an inappropriate value.
@pytest.mark.parametrize(
"test, expected, expect_raises",
[
... # other test cases here, with None for expect_raises
(
[[-1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[0, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
ValueError,
),
(
[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
[[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
None,
),
])
def test_patient_normalise(test, expected, expect_raises):
"""Test normalisation works for arrays of one and positive integers."""
from inflammation.models import patient_normalise
if expect_raises is not None:
with pytest.raises(expect_raises):
npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
else:
npt.assert_almost_equal(patient_normalise(np.array(test)), np.array(expected), decimal=2)
Be sure to commit your changes so far and push them to GitHub.
Optional Exercise: Add a Precondition to Check the Correct Type and Shape of Data
Add preconditions to check that data is an ndarray object and that it is of the correct shape. Add corresponding tests to check that the function raises the correct exception. You will find the Python function isinstance useful here, as well as the Python exception TypeError. Once you are done, commit your new files, and push the new commits to your remote repository on GitHub.
Solution
In inflammation/models.py:
...
def patient_normalise(data):
    """
    Normalise patient data between 0 and 1 of a 2D inflammation data array.

    Any NaN values are ignored, and normalised to 0.

    :param data: 2D array of inflammation data
    :type data: ndarray
    """
    if not isinstance(data, np.ndarray):
        raise TypeError('data input should be ndarray')
    if len(data.shape) != 2:
        raise ValueError('inflammation array should be 2-dimensional')
    if np.any(data < 0):
        raise ValueError('inflammation values should be non-negative')
    max = np.nanmax(data, axis=1)
    with np.errstate(invalid='ignore', divide='ignore'):
        normalised = data / max[:, np.newaxis]
    normalised[np.isnan(normalised)] = 0
    return normalised
...
In tests/test_models.py:
...
@pytest.mark.parametrize(
    "test, expected, expect_raises",
    [
        ...
        (
            'hello',
            None,
            TypeError,
        ),
        (
            3,
            None,
            TypeError,
        ),
        (
            [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
            [[0.33, 0.67, 1], [0.67, 0.83, 1], [0.78, 0.89, 1]],
            None,
        )
    ])
def test_patient_normalise(test, expected, expect_raises):
    """Test normalisation works for arrays of one and positive integers."""
    from inflammation.models import patient_normalise
    if isinstance(test, list):
        test = np.array(test)
    if expect_raises is not None:
        with pytest.raises(expect_raises):
            npt.assert_almost_equal(patient_normalise(test), np.array(expected), decimal=2)
    else:
        npt.assert_almost_equal(patient_normalise(test), np.array(expected), decimal=2)
...
Note the conversion from list to np.array has been moved out of the call to npt.assert_almost_equal() within the test function, and is now only applied to list inputs (rather than to all inputs). This allows for greater flexibility with our test inputs, since converting every input to an array wouldn’t work for the test case that uses a string.
If you do the challenge, again, be sure to commit your changes and push them to GitHub.
You should not take it too far by trying to code preconditions for every conceivable eventuality.
You should aim to strike a balance between making sure you secure your function against incorrect use,
and writing an overly complicated and expensive function that handles cases that are likely never going to occur.
For example, it would be sensible to validate the shape of your inflammation data array when it is actually read
from the csv file (in load_csv
), and therefore there is no reason to test this again in patient_normalise
.
You can also decide against adding explicit preconditions in your code, and instead state the assumptions and limitations of your code in the docstring, relying on its users to invoke your code correctly. This approach is useful when explicitly checking the precondition is too costly.
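As a hypothetical sketch (shown only for illustration, not as the course’s reference implementation), documenting the assumptions rather than checking them might look like this:
def patient_normalise(data):
    """Normalise patient data from a 2D inflammation data array.

    Assumptions (not checked at runtime):
    - data is a 2D NumPy array of shape (patients, days)
    - all inflammation values are non-negative
    """
    ...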
Improving Robustness with Automated Code Style Checks
Let’s re-run Pylint over our project after having added some more code to it. From the project root do:
$ pylint inflammation
You may see something like the following in Pylint’s output:
************* Module inflammation.models
...
inflammation/models.py:60:4: W0622: Redefining built-in 'max' (redefined-builtin)
...
The above output indicates that by using the local variable called max in the patient_normalise function, we have redefined a built-in Python function called max. This isn’t a good idea and may have some undesired effects (e.g. if you redefine a built-in name in a global scope you may cause yourself some trouble which may be difficult to trace).
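As a quick standalone illustration of the kind of trouble shadowing a built-in can cause (this snippet is not from our codebase):
values = [3, 5, 7]
max = 10                 # shadows the built-in max() function in this scope

largest = max(values)    # TypeError: 'int' object is not callable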
Exercise: Fix Code Style Errors
Rename our local variable max to something else (e.g. call it max_data), then rerun your tests, commit these latest changes and push them to GitHub using our usual feature branch workflow. Make sure your develop and main branches are up to date.
It may be hard to remember to run linter tools every now and then. Luckily, we can now add this Pylint execution to our continuous integration builds as one of the extra tasks.
For example, to add it to GitHub Actions we can add the following step to our steps
in .github/workflows/main.yml
:
...
- name: Check style with Pylint
run: |
python3 -m pylint --fail-under=0 --reports=y inflammation
...
Note we need to add --fail-under=0
otherwise the builds will fail if we don’t get a ‘perfect’ score of 10!
This seems unlikely, so let’s be more pessimistic. We’ve also added --reports=y
which will give us a more detailed
report of the code analysis.
Then we can just add this to our repo and trigger a build:
$ git add .github/workflows/main.yml
$ git commit -m "Add Pylint run to build"
$ git push
Then once complete, under the build(s) reports you should see an entry with the output from Pylint as before, but with an extended breakdown of the infractions by category as well as other metrics for the code, such as the number and line percentages of code, docstrings, comments, and empty lines.
So we specified a minimum score of 0, which is very low. If we decide as a team on a suitable minimum score for our codebase, we can specify this instead. There are also ways to specify particular style rules that must not be broken - which will cause Pylint to fail if they are - which could be even more useful if we want to mandate a consistent style.
We can specify overrides to Pylint’s rules in a file called .pylintrc
which Pylint can helpfully generate for us.
In our repository root directory:
$ pylint --generate-rcfile > .pylintrc
Looking at this file, you’ll see it’s already pre-populated. No behaviour is currently changed from the default by
generating this file, but we can amend it to suit our team’s coding style. For example, a typical rule to customise -
favoured by many projects - is the one involving line length.
You’ll see it’s set to 100, so let’s set that to a more reasonable 120.
While we’re at it, let’s also set our fail-under
in this file:
...
# Specify a score threshold to be exceeded before program exits with error.
fail-under=0
...
# Maximum number of characters on a single line.
max-line-length=120
...
Don’t forget to remove the --fail-under argument to Pylint in our GitHub Actions configuration file too, since we don’t need it anymore.
Now when we run Pylint we won’t be penalised for having a reasonable line length. For some further hints and tips on how to approach using Pylint for a project, see this article.
Before moving on, be sure to commit all your changes, then merge to the develop and main branches in the usual manner, and push them all to GitHub.
Key Points
Unit testing can show us what does not work, but does not help us locate problems in code.
Use a debugger to help you locate problems in code.
A debugger allows us to pause code execution and examine its state by adding breakpoints to lines in code.
Use preconditions to ensure correct behaviour of code.
Ensure that unit tests check for edge and corner cases too.
Using linting tools to automatically flag suspicious programming language constructs and stylistic errors can help improve code robustness.
Section 3: Software Development as a Process
Overview
Teaching: 5 min
Exercises: 0 min
Questions
How can we design and write ‘good’ software that meets its goals and requirements?
Objectives
Describe the differences between writing code and engineering software.
Define the fundamental stages in a software development process.
List the benefits of following a process of software development.
In this section, we will take a step back from coding development practices and tools and look at the bigger picture of software as a process of development.
“If you fail to plan, you are planning to fail.” - Benjamin Franklin
Writing Code vs Engineering Software
Traditionally in academia, software - and the process of writing it - is often seen as a necessary but throwaway artefact in research. For example, there may be research questions (for a given research project), code is created to answer those questions, the code is run over some data and analysed, and finally a publication is written based on those results. These steps are often taken informally.
The terms programming (or even coding) and software engineering are often used interchangeably. They are not the same. Programmers or coders tend to focus on one part of the software development process more than any other: implementation. In academic research, they are often writing software for themselves - they are their own stakeholders. And ideally, they are writing software from a design that fulfils a research goal, in order to publish research papers.
Someone who is engineering software, on the other hand, takes a wider view:
- The lifecycle of software: from understanding what is needed, to writing the software and using/releasing it, to what happens afterwards.
- Who will (or may) be involved: software is written for stakeholders. This may only be the researcher initially, but there is an understanding that others may become involved later (even if that isn’t evident yet). A good rule of thumb is to always assume that code will be read and used by others later on, which includes yourself!
- Software (or code) is an asset: software inherently contains value - for example, in terms of what it can do, the lessons learned throughout its development, and as an implementation of a research approach (i.e. a particular research algorithm, process, or technical approach).
- As an asset, it could be reused: again, it may not be evident initially that the software will have use beyond its initial purpose or project, but there is an assumption that the software - or even just a part of it - could be reused in the future.
The Software Development Process
The typical stages of a software development process can be categorised as follows:
- Requirements gathering: the process of identifying and recording the exact requirements for a software project before it begins. This helps maintain a clear direction throughout development, and sets clear targets for what the software needs to do.
- Design: where the requirements are translated into an overall design for the software. It covers what will be the basic software ‘components’ and how they’ll fit together, as well as the tools and technologies that will be used, which will together address the requirements identified in the first stage.
- Implementation: the software is developed according to the design, implementing the solution that meets the requirements set out in the requirements gathering stage.
- Testing: the software is tested with the intent to discover and rectify any defects, and also to ensure that the software meets its defined requirements, i.e. does it actually do what it should do reliably?
- Deployment: where the software is deployed and used for its intended purpose.
- Maintenance: where updates are made to the software to ensure it remains fit for purpose, which typically involves fixing any further discovered issues and evolving it to meet new or changing requirements.
The process of following these stages, particularly when undertaken in this order, is referred to as the waterfall model of software development: each stage’s outputs flow into the next stage sequentially.
Whether projects or people that develop software are aware of them or not, these stages are followed implicitly or explicitly in every software project. What is required for a project (during requirements gathering) is always considered, for example, even if it isn’t explored sufficiently or well understood.
Following a process of development offers some major benefits:
- Stage gating: a quality gate at the end of each stage, where stakeholders review the stage’s outcomes to decide if that stage has completed successfully before proceeding to the next one (and even whether the next stage is warranted at all - for example, it may be discovered during requirements or design that development of the software isn’t practical or even required).
- Predictability: each stage is given attention in a logical sequence; the next stage should not begin until prior stages have completed. Returning to a prior stage is possible and may be needed, but may prove expensive, particularly if an implementation has already been attempted. However, at least this is an explicit and planned action.
- Transparency: essentially, each stage generates output(s) into subsequent stages, which presents opportunities for them to be published as part of an open development process.
- It saves time: a well-known result from empirical software engineering studies is that it becomes exponentially more expensive to fix mistakes in future stages. For example, if a mistake takes 1 hour to fix in requirements, it may take 5 times that during design, and perhaps as much as 20 times that to fix if discovered during testing.
In this section we will place the actual writing of software (implementation) within the context of the typical software development process:
- Explore the importance of software requirements, the different classes of requirements, and how we can interpret and capture them.
- How requirements inform and drive the design of software, the importance, role, and examples of software architecture, and the ways we can describe a software design.
- Implementation choices in terms of programming paradigms, looking at procedural, functional, and object oriented paradigms of development. Modern software will often contain instances of multiple paradigms, so it is worthwhile being familiar with them and knowing when to switch in order to make better code.
- How you can (and should) assess and update a software’s architecture when requirements change and complexity increases - is the architecture still fit for purpose, or are modifications and extensions becoming increasingly difficult to make?
Key Points
Software engineering takes a wider view of software development beyond programming (or coding).
Ensuring requirements are sufficiently captured is critical to the success of any project.
Following a process makes development predictable, can save time, and helps ensure each stage of development is given sufficient consideration before proceeding to the next.
Software Requirements
Overview
Teaching: 15 min
Exercises: 30 min
Questions
Where do we start when beginning a new software project?
How can we classify requirements for software?
Objectives
Describe the different types of software requirement.
Explain the difference between functional and non-functional requirements.
Derive new user and solution requirements from business requirements.
The requirements of our software are the basis on which the whole project rests - if we get the requirements wrong, we’ll build the wrong software. However, it’s unlikely that we’ll be able to determine all of the requirements upfront. Especially when working in a research context, requirements are flexible and may change as we develop our software.
Types of Requirements
Requirements can be categorised in many ways, but at a high level a useful way to split them is into business requirements, user requirements, and solution requirements. Let’s take a look at these now.
Business Requirements
Business requirements describe what is needed from the perspective of the organisation, and define the strategic path of the project, e.g. to increase profit margin or market share, or embark on a new research area or collaborative partnership. These are captured in something like a Business Requirements Specification.
For adapting our inflammation software project, example business requirements could include:
- BR1: improving the statistical quality of clinical trial reporting to meet the needs of external audits
- BR2: increasing the throughput of trial analyses to meet higher demand during peak periods
Exercise: New Business Requirements
Think of a new hypothetical business-level requirement for this software. This can be anything you like, but be sure to keep it at the high level of the business itself.
Solution
One hypothetical new business requirement (BR3) could be extending our clinical trial system to keep track of the doctors who are involved in the project.
Another hypothetical new business requirement (BR4) may be adding a new parameter to the treatment and checking if it improves the effect of the drug being tested - e.g. taking it in conjunction with omega-3 fatty acids and/or increasing physical activity while taking the drug therapy.
User (or Stakeholder) Requirements
These define what particular stakeholder groups each expect from an eventual solution, essentially acting as a bridge between the higher-level business requirements and specific solution requirements. These are typically captured in a User Requirements Specification.
For our inflammation project, they could include things for trial managers such as (building on the business requirements):
- UR1.1 (from BR1): add support for statistical measures in generated trial reports as required by revised auditing standards (standard deviation, …)
- UR1.2 (from BR1): add support for producing textual representations of statistics in trial reports as required by revised auditing standards
- UR2.1 (from BR2): ability to have an individual trial report processed and generated in under 30 seconds (if we assume it usually takes longer than that)
Exercise: New User Requirements
Break down your new business requirements from the previous exercise into a number of logical user requirements, ensuring they stay above the level and detail of implementation.
Solution
For our business requirement BR3 from the previous exercise, the new user/stakeholder requirements may be the ability to see all the patients a doctor is responsible for (UR3.1), and to find out which doctor is looking after any individual patient (UR3.2).
For our business requirement BR4 from the previous exercise, the new user/stakeholder requirements may be the ability to see the effect of the drug with and without the additional parameters in all reports and graphs (UR4.1).
Solution Requirements
Solution (or product) requirements describe characteristics that a concrete solution or product must have to satisfy the stakeholder requirements. They fall into two key categories:
- Functional Requirements focus on functions and features of a solution. For our software, building on our user requirements, e.g.:
- SR1.1.1 (from UR1.1): add standard deviation to data model and include in graph visualisation view
- SR1.2.1 (from UR1.2): add a new view to generate a textual representation of statistics, which is invoked by an optional command line argument
- Non-functional Requirements focus on how the behaviour of a solution is expressed or constrained, e.g. performance, security, usability, or portability. These are also known as quality of service requirements. For our project, e.g.:
- SR2.1.1 (from UR2.1): generate graphical statistics report on clinical workstation configuration in under 30 seconds
Labelling Requirements
Note that the naming scheme we used for labelling our requirements is quite arbitrary - you should reference them in a way that is consistent and makes sense within your projects and team.
The Importance of Non-functional Requirements
When considering software requirements, it’s very tempting to just think about the features users need. However, many design choices in a software project quite rightly depend on the users themselves and the environment in which the software is expected to run, and these aspects should be considered as part of the software’s non-functional requirements.
Exercise: Types of Software
Think about some software you are familiar with (it could be software you have written yourself or by someone else) and how the environment it is used in has affected its design or development. Here are some examples of questions you can use to get started:
- What environment does the software run in?
- How do people interact with it?
- Why do people use it?
- What features of the software have been affected by these factors?
- If the software needed to be used in a different environment, what difficulties might there be?
Some examples of design / development choices constrained by environment might be:
- Mobile Apps
- Must have graphical interface suitable for a touch display
- Usually distributed via a controlled app store
- Users will not (usually) modify / compile the software themselves
- Should work on a range of hardware specifications with a range of Operating System (OS) versions
- But OS is unlikely to be anything other than Android or iOS
- Documentation probably in the software itself or on a Web page
- Typically written in one of the platform preferred languages (e.g. Java, Kotlin, Swift)
- Embedded Software
- May have no user interface - user interface may be physical buttons
- Usually distributed pre-installed on a physical device
- Often runs on low power device with limited memory and CPU performance - must take care to use these resources efficiently
- Exact specification of hardware is known - often not necessary to support multiple devices
- Documentation probably in a technical manual with a separate user manual
- May need to run continuously for the lifetime of the device
- Typically written in a lower-level language (e.g. C) for better control of resources
Some More Examples
- Desktop Application
- Has a graphical interface for use with mouse and keyboard
- May need to work on multiple, very different operating systems
- May be intended for users to modify / compile themselves
- Should work on a wide range of hardware configurations
- Documentation probably either in a manual or in the software itself
- Command-line Application - UNIX Tool
- User interface is text based, probably via command-line arguments
- Intended to be modified / compiled by users - though most will choose not to
- Documentation has standard formats - also accessible from the command line
- Should be usable as part of a pipeline
- Command-line Application - High Performance Computing
- Similar to a UNIX Tool
- Usually supports running across multiple networked machines simultaneously
- Usually operated via a scheduler - interface should be scriptable
- May need to run on a wide range of hardware (e.g. different CPU architectures)
- May need to process large amounts of data
- Often entirely or partially written in a lower-level language for performance (e.g. C, C++, Fortran)
- Web Application
- Usually has components which run on server and components which run on the user’s device
- Graphical interface should usually support both Desktop and Mobile devices
- Client-side component should run on a range of browsers and operating systems
- Documentation probably part of the software itself
- Client-side component typically written in JavaScript
Exercise: New Solution Requirements
Now break down your new user requirements from the earlier exercise into a number of logical solution requirements (functional and non-functional), that address the detail required to be able to implement them in the software.
Solution
For our new hypothetical business requirement BR3, new functional solution requirements could be extending the clinical trial system to keep track of:
- the names of all patients (SR3.1.1) and doctors (SR3.1.2) involved in the trial
- the name of the doctor for a particular patient (SR3.1.3)
- a group of patients being administered by a particular doctor (SR3.2.1).
Optional Exercise: Requirements for Your Software Project
Think back to a piece of code or software (either small or large) you’ve written, or which you have experience using. First, try to formulate a few of its key business requirements, then derive these into user and then solution requirements (in a similar fashion to the ones above in Types of Requirements).
Long- or Short-Lived Code?
Along with requirements, here’s something to consider early on…
You (maybe with others on your project) may be developing open-source software with the intent that it will live on after your project completes. It could be important to you that your software is adopted and used by other projects as this may help you get future funding. It can make your software more attractive to potential users if they have the confidence that they can fix bugs that arise or add new features they need, and if they can be assured that the evolution of the software is not dependent upon the lifetime of your project. The intended longevity and post-project role of software should be reflected in its requirements - particularly within its non-functional requirements - so be sure to consider these aspects.
On the other hand, you might want to knock together some code to prove a concept or to perform a quick calculation and then just discard it. But can you be sure you’ll never want to use it again? Maybe a few months from now you’ll realise you need it after all, or you’ll have a colleague say “I wish I had a…” and realise you’ve already made one. A little effort now could save you a lot in the future.
From Requirements to Implementation, via Design
In practice, these different types of requirements are sometimes confused and conflated when different classes of stakeholder are discussing them, which is understandable: each group of stakeholder has a different view of what is required from a project. The key is to understand the stakeholder’s perspective as to how their requirements should be classified and interpreted, and for that to be made explicit. A related misconception is that each of these types are simply requirements specified at different levels of detail. At each level, not only are the perspectives different, but so are the nature of the objectives and the language used to describe them, since they each reflect the perspective and language of their stakeholder group.
It’s often tempting to go right ahead and implement requirements within existing software, but this neglects a crucial step: do these new requirements fit within our existing design, or does our design need to be revisited? It may not need any changes at all, but if it doesn’t fit logically our design will need a bigger rethink so the new requirement can be implemented in a sensible way. We’ll look at this a bit later in this episode, but simply adding new code without considering how the design and implementation need to change at a high level can make our software increasingly messy and difficult to change in the future.
Key Points
When writing software used for research, requirements will almost always change.
Consider non-functional as well as functional requirements.
Consider the intended longevity of any code before you write it.
The perspective and language of a particular requirement stakeholder group should be reflected in requirements for that group.
Software Architecture and Design
Overview
Teaching: 15 min
Exercises: 30 min
Questions
Where do we start when beginning a new software project?
How can we make sure the components of our software are reusable?
Objectives
Describe some of the different kinds of software and explain how the environment in which software is used constrains its design.
Understand the use of common design patterns to improve the extensibility, reusability and overall quality of software.
Understand the components of multi-layer software architectures.
Introduction
In this episode, we’ll be looking at how we can design our software to ensure it meets the requirements, but also retains the other qualities of good software. As a piece of software grows, it will reach a point where there’s too much code for us to keep in mind at once. At this point, it becomes particularly important that the software be designed sensibly. What should be the overall structure of our software, how should all the pieces of functionality fit together, and how should we work towards fulfilling this overall design throughout development?
It’s not easy to come up with a complete definition for the term software design, but some of the common aspects are:
- Algorithm design - what method are we going to use to solve the core business problem?
- Software architecture - what components will the software have and how will they cooperate?
- System architecture - what other things will this software have to interact with and how will it do this?
- UI/UX (User Interface / User Experience) - how will users interact with the software?
As usual, the sooner you adopt a practice in the lifecycle of your project, the easier it will be. So we should think about the design of our software from the very beginning, ideally even before we start writing code - but if you didn’t, it’s never too late to start.
The answers to these questions will provide us with some design constraints which any software we write must satisfy. For example, a design constraint when writing a mobile app would be that it needs to work with a touch screen interface - we might have some software that works really well from the command line, but on a typical mobile phone there isn’t a command line interface that people can access.
Software Architecture
At the beginning of this episode we defined software architecture as an answer to the question “what components will the software have and how will they cooperate?”. Software engineering borrowed this term, and a few other terms, from architects (of buildings) as many of the processes and techniques have some similarities. One of the other important terms we borrowed is ‘pattern’, such as in design patterns and architecture patterns. This term is often attributed to the book ‘A Pattern Language’ by Christopher Alexander et al. published in 1977 and refers to a template solution to a problem commonly encountered when building a system.
Design patterns are relatively small-scale templates which we can use to solve problems which affect a small part of our software. For example, the adapter pattern (which allows a class that does not have the “right interface” to be reused) may be useful if part of our software needs to consume data from a number of different external data sources. Using this pattern, we can create a component whose responsibility is transforming the calls for data to the expected format, so the rest of our program doesn’t have to worry about it.
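As a small, hypothetical sketch of the adapter pattern in Python (the class and method names are invented for this example and are not part of our project):
class LegacyCsvReader:
    """An existing class whose interface we cannot change."""
    def read_rows(self, path):
        with open(path) as csv_file:
            return [line.strip().split(',') for line in csv_file]


class DataSource:
    """The interface the rest of our program expects."""
    def get_records(self):
        raise NotImplementedError


class CsvAdapter(DataSource):
    """Adapter: wraps LegacyCsvReader so it can be used as a DataSource."""
    def __init__(self, reader, path):
        self.reader = reader
        self.path = path

    def get_records(self):
        # Translate the expected call into the legacy interface
        return self.reader.read_rows(self.path)
The rest of the program only ever talks to DataSource objects, so supporting a different data format later would only require writing another adapter.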
Architecture patterns are similar, but larger scale templates which operate at the level of whole programs, or collections of programs. Model-View-Controller (which we chose for our project) is one of the best known architecture patterns. Many patterns rely on concepts from Object Oriented Programming, so we’ll come back to the MVC pattern shortly after we learn a bit more about Object Oriented Programming.
There are many online sources of information about design and architecture patterns, often giving concrete examples of cases where they may be useful. One particularly good source is Refactoring Guru.
Multilayer Architecture
One common architectural pattern for larger software projects is Multilayer Architecture. Software designed using this architecture pattern is split into layers, each of which is responsible for a different part of the process of manipulating data.
Often, the software is split into three layers:
- Presentation Layer
- This layer is responsible for managing the interaction between our software and the people using it
- May include the View components if also using the MVC pattern
- Application Layer / Business Logic Layer
- This layer performs most of the data processing required by the presentation layer
- Likely to include the Controller components if also using an MVC pattern
- May also include the Model components
- Persistence Layer / Data Access Layer
- This layer handles data storage and provides data to the rest of the system
- May include the Model components of an MVC pattern if they’re not in the application layer
Although we’ve drawn similarities here between the layers of a system and the components of MVC, they’re actually solutions to different scales of problem. In a small application, a multilayer architecture is unlikely to be necessary, whereas in a very large application, the MVC pattern may be used just within the presentation layer, to handle getting data to and from the people using the software.
Addressing New Requirements
So, we now want to extend our application - designed around an MVC architecture - with some new functionalities (more statistical processing and a new view to see a patient’s data). Let’s recall the solution requirements we discussed in the previous episode:
- Functional Requirements:
- SR1.1.1 (from UR1.1): add standard deviation to data model and include in graph visualisation view
- SR1.2.1 (from UR1.2): add a new view to generate a textual representation of statistics, which is invoked by an optional command line argument
- Non-functional Requirements:
- SR2.1.1 (from UR2.1): generate graphical statistics report on clinical workstation configuration in under 30 seconds
How Should We Test These Requirements?
Sometimes when we make changes to our code that we plan to test later, we find the way we’ve implemented that change doesn’t lend itself well to how it should be tested. So what should we do?
Consider requirement SR1.2.1 - we have (at least) two things we should test in some way, for which we could write unit tests. For the textual representation of statistics, in a unit test we could invoke our new view function directly with known inflammation data and test the text output as a string against what is expected. The second one, invoking this new view with an optional command line argument, is more problematic since the code isn’t structured in a way where we can easily invoke the argument parsing portion to test it. To make this more amenable to unit testing we could move the command line parsing portion to a separate function, and use that in our unit tests. So in general, it’s a good idea to make sure your software’s features are modularised and accessible via logical functions.
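As a rough sketch of what that refactoring might look like - the module name inflammation/cli.py and the --view option used here are hypothetical, and your own names may well differ:
# file: inflammation/cli.py (a hypothetical module - a sketch, not the course's solution)
import argparse

def parse_arguments(argv=None):
    """Build and run the argument parser; kept separate so unit tests can call it directly."""
    parser = argparse.ArgumentParser(description='A basic patient data management system')
    parser.add_argument('infiles', nargs='+', help='Input CSV(s) containing inflammation series')
    parser.add_argument('--view', default='visualize', choices=['visualize', 'record'],
                        help='Which view should be used?')
    return parser.parse_args(argv)

# file: tests/test_cli.py (hypothetical)
def test_record_view_is_selected():
    from inflammation.cli import parse_arguments
    args = parse_arguments(['data/inflammation-01.csv', '--view', 'record'])
    assert args.view == 'record'
Because parse_arguments() accepts an explicit list of arguments, a unit test can drive it directly without having to touch sys.argv or run the whole program.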
We could also consider writing unit tests for SR2.1.1, ensuring that the system meets our performance requirement, so should we? We do need to verify it’s being met with the modified implementation, however it’s generally considered bad practice to use unit tests for this purpose. This is because unit tests test if a given aspect is behaving correctly, whereas performance tests test how efficiently it does it. Performance testing produces measurements of performance which require a different kind of analysis (using techniques such as code profiling), and require careful and specific configurations of operating environments to ensure fair testing. In addition, unit testing frameworks are not typically designed for conducting such measurements, and only test units of a system, which doesn’t give you an idea of performance of the system as it is typically used by stakeholders.
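If we did want a quick, informal measurement outside the unit test suite, a small timing helper is often enough as a first step. This is just a sketch using a stand-in workload - it is not a substitute for proper profiling on the clinical workstation configuration named in SR2.1.1:
import time
from statistics import mean

def time_it(func, *args, repeats=3):
    """Run func a few times and return the mean wall-clock time in seconds."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        func(*args)
        timings.append(time.perf_counter() - start)
    return mean(timings)

# Stand-in workload; in practice you would time the report generation itself
print(f"mean time: {time_it(sum, range(10_000_000)):.2f}s (SR2.1.1 budget: 30s)")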
The key is to think about which kind of testing should be used to check if the code satisfies a requirement, but also what you can do to make that code amenable to that type of testing.
Exercise: Implementing Requirements
Pick one of the requirements SR1.1.1 or SR1.2.1 above to implement and create an appropriate feature branch - e.g. add-std-dev or add-view - from your most up-to-date develop branch.
One aspect you should consider first is whether the new requirement can be implemented within the existing design. If not, how does the design need to be changed to accommodate the inclusion of this new feature? Also try to ensure that the changes you make are amenable to unit testing: is the code suitably modularised such that the aspect under test can be easily invoked with test input data and its output tested?
If you have time, feel free to implement the other requirement, or invent your own!
Also make sure you push changes to your new feature branch remotely to your software repository on GitHub.
Note: do not add the tests for the new feature just yet - even though you would normally add the tests along with the new code, we will do this in a later episode. Equally, do not merge your changes to the develop branch just yet.
Note 2: we have intentionally left this exercise without a solution to give you more freedom in implementing it how you see fit. If you are struggling with adding a new view and command line parameter, read on - more code examples will be provided by the end of this section that will give you hints on how to do this.
Best Practices for ‘Good’ Software Design
Aspirationally, what makes good code can be summarised in the following quote from the Intent HQ blog:
“Good code is written so that is readable, understandable, covered by automated tests, not over complicated and does well what is intended to do.”
By taking time to design our software to be easily modifiable and extensible, we can save ourselves a lot of time later when requirements change. The sooner we do this the better - ideally we should have at least a rough design sketched out for our software before we write a single line of code. This design should be based around the structure of the problem we’re trying to solve: what are the concepts we need to represent and what are the relationships between them. And importantly, who will be using our software and how will they interact with it?
Here’s another way of looking at it.
Not following good software design and development practices can lead to accumulated ‘technical debt’, which (according to Wikipedia), is the “cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer”. So, the pressure to achieve project goals can sometimes lead to quick and easy solutions, which make the software become more messy, more complex, more difficult to understand and maintain. The extra effort required to make changes in the future is the interest paid on the (technical) debt. It’s natural for software to accrue some technical debt, but it’s important to pay off that debt during a maintenance phase - simplifying, clarifying the code, making it easier to understand - to keep these interest payments on making changes manageable. If this isn’t done, the software may accrue too much technical debt, and it can become too messy and prohibitive to maintain and develop, and then it cannot evolve.
Importantly, there is only so much time available. How much effort should we spend on designing our code properly and using good development practices? The following XKCD comic summarises this tension:
At an intermediate level there is a wealth of practices that could be used, and applying suitable design and coding practices is what separates an intermediate developer from someone who has just started coding. The key for an intermediate developer is to balance these concerns for each software project appropriately, and employ design and development practices enough so that progress can be made. It’s very easy to under-design software, but remember that it’s also possible to over-design it.
Key Points
Planning software projects in advance can save a lot of effort and reduce ‘technical debt’ later - even a partial plan is better than no plan at all.
The environment in which users run our software has an effect on many design choices we might make.
By breaking down our software into components with a single responsibility, we avoid having to rewrite it all when requirements change. Such components can be as small as a single function, or be a software package in their own right.
When writing software used for research, requirements will almost always change.
‘Good code is written so that is readable, understandable, covered by automated tests, not over complicated and does well what is intended to do.’
Programming Paradigms
Overview
Teaching: 10 min
Exercises: 0 minQuestions
How does the structure of a problem affect the structure of our code?
How can we use common software paradigms to improve the quality of our software?
Objectives
Describe some of the major software paradigms we can use to classify programming languages.
Introduction
As you become more experienced in software development, it becomes increasingly important to understand the wider landscape in which you operate - i.e. what software decisions have the people around you made and why? There are hundreds (probably thousands) of different programming languages, each with a different approach to how a programmer uses it to solve a problem. These approaches group the programming languages into paradigms. Each paradigm represents a slightly different way of thinking about and structuring our code and each has certain strengths and weaknesses when used to solve particular types of problems. Once your software begins to get more complex it’s common to use aspects of different paradigms to handle different subtasks. Because of this, it’s useful to know about the major paradigms, so you can recognise where it might be useful to switch.
There are two major families that we can group the common programming paradigms into: Imperative and Declarative. An imperative program uses statements that change the program’s state - it consists of commands for the computer to perform and focuses on describing how a program operates step by step. A declarative program expresses the logic of a computation to describe what should be accomplished rather than describing its control flow as a sequence of steps.
We will look into three major paradigms from the imperative and declarative families that may be useful to you - Procedural Programming, Functional Programming and Object Oriented Programming. Note, however, that most languages can be used with more than one paradigm, and it is common to see multiple paradigms within a single program - so classifying programming languages strictly by the paradigm they use isn’t always clear-cut.
Procedural Programming
Procedural Programming comes from a family of paradigms known as the Imperative Family. With paradigms in this family, we can think of our code as the instructions for processing data.
Procedural Programming is probably the style you’re most familiar with and the one we used up to this point, where we group code into procedures performing a single task, with exactly one entry and one exit point. In most modern languages we call these functions, instead of procedures - so if you’re grouping your code into functions, this might be the paradigm you’re using. By grouping code like this, we make it easier to reason about the overall structure, since we should be able to tell roughly what a function does just by looking at its name. These functions are also much easier to reuse than code outside of functions, since we can call them from any part of our program.
So far we have been using this technique in our code - it contains a list of instructions that execute one after the other starting from the top. This is an appropriate choice for smaller scripts and software that we’re writing just for a single use. Aside from smaller scripts, Procedural Programming is also commonly seen in code focused on high performance, with relatively simple data structures, such as in High Performance Computing (HPC). These programs tend to be written in C (which doesn’t support Object Oriented Programming) or Fortran (which didn’t until recently). HPC code is also often written in C++, but C++ code would more commonly follow an Object Oriented style, though it may have procedural sections.
Note that you may sometimes hear people refer to this paradigm as “functional programming” to contrast it with Object Oriented Programming, because it uses functions rather than objects, but this is incorrect. Functional Programming is a separate paradigm that places much stronger constraints on the behaviour of a function and structures the code differently as we’ll see soon.
Functional Programming
Functional Programming comes from a different family of paradigms - known as the Declarative Family. The Declarative Family is a distinct set of paradigms which have a different outlook on what a program is - here code describes what data processing should happen. What we really care about here is the outcome - how this is achieved is less important.
Functional Programming is built around a more strict definition of the term function borrowed from mathematics. A function in this context can be thought of as a mapping that transforms its input data into output data. Anything a function does other than produce an output is known as a side effect and should be avoided wherever possible.
Being strict about this definition allows us to break down the distinction between code and data, for example by writing a function which accepts and transforms other functions - in Functional Programming code is data.
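A minimal illustration of both ideas - a pure function with no side effects, and a function passed around as data:
def add_one(x):
    return x + 1          # no side effects: the output depends only on the input

def apply_to_all(func, values):
    """Return a new list produced by applying func to each value - the inputs are untouched."""
    return [func(v) for v in values]

print(apply_to_all(add_one, [1, 2, 3]))   # [2, 3, 4]
print(list(map(add_one, [1, 2, 3])))      # the built-in map does the same job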
The most common application of Functional Programming in research is in data processing, especially when handling Big Data. One popular definition of Big Data is data which is too large to fit in the memory of a single computer, with a single dataset sometimes being multiple terabytes or larger. With datasets like this, we can’t move the data around easily, so we often want to send our code to where the data is instead. By writing our code in a functional style, we also gain the ability to run many operations in parallel as it’s guaranteed that each operation won’t interact with any of the others - this is essential if we want to process this much data in a reasonable amount of time.
Object Oriented Programming
Object Oriented Programming focuses on the specific characteristics of each object and what each object can do. An object has two fundamental parts - properties (characteristics) and behaviours. In Object Oriented Programming, we first think about the data and the things that we’re modelling - and represent these by objects.
For example, if we’re writing a simulation for our chemistry research, we’re probably going to need to represent atoms and molecules. Each of these has a set of properties which we need to know about in order for our code to perform the tasks we want - in this case, for example, we often need to know the mass and electric charge of each atom. So with Object Oriented Programming, we’ll have some object structure which represents an atom and all of its properties, another structure to represent a molecule, and a relationship between the two (a molecule contains atoms). This structure also provides a way for us to associate code with an object, representing any behaviours it may have. In our chemistry example, this could be our code for calculating the force between a pair of atoms.
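As a rough sketch of that chemistry example - the class and attribute names here are invented for illustration and are not part of our project:
class Atom:
    def __init__(self, symbol, mass, charge):
        self.symbol = symbol
        self.mass = mass
        self.charge = charge

class Molecule:
    def __init__(self, atoms):
        self.atoms = atoms            # a molecule *has* atoms

    def total_mass(self):
        """A behaviour attached to the data it operates on."""
        return sum(atom.mass for atom in self.atoms)

water = Molecule([Atom('H', 1.008, 0), Atom('H', 1.008, 0), Atom('O', 15.999, 0)])
print(water.total_mass())   # roughly 18.015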
Most people would classify Object Oriented Programming as an extension of the Imperative family of languages (with the extra feature being the objects), but others disagree.
So Which one is Python?
Python is a multi-paradigm and multi-purpose programming language. You can use it as a procedural language and you can use it in a more object oriented way. It does tend to land more on the object oriented side as all its core data types (strings, integers, floats, booleans, lists, sets, arrays, tuples, dictionaries, files) as well as functions, modules and classes are objects.
Since functions in Python are also objects that can be passed around like any other object, Python is also well suited to functional programming. One of the most popular Python libraries for data manipulation, Pandas (built on top of NumPy), supports a functional programming style, as most of its data operations do not change the data in place (no side effects) but instead produce new data reflecting the result of the operation.
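For example - a minimal sketch, assuming pandas is installed; the column names are made up:
import pandas as pd

df = pd.DataFrame({'patient': ['Alice', 'Bob'], 'day_1': [3., 5.]})
doubled = df['day_1'] * 2     # a new Series is returned...
average = df['day_1'].mean()  # ...and a new value computed; df itself is unchanged
print(df)
print(doubled, average)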
Other Paradigms
The three paradigms introduced here are some of the most common, but there are many others which may be useful for addressing specific classes of problem - for much more information see the Wikipedia’s page on programming paradigms. We will now have a closer look at Functional and Object Oriented Programming paradigms and how they can affect our architectural design choices.
Key Points
A software paradigm describes a way of structuring or reasoning about code.
Different programming languages are suited to different paradigms.
Different paradigms are suited to solving different classes of problems.
A single piece of software will often contain instances of multiple paradigms.
Object Oriented Programming
Overview
Teaching: 30 min
Exercises: 20 minQuestions
How can we use code to describe the structure of data?
How should the relationships between structures be described?
Objectives
Describe the core concepts that define the Object Oriented Paradigm
Use classes to encapsulate data within a more complex program
Structure concepts within a program in terms of sets of behaviour
Identify different types of relationship between concepts within a program
Structure data within a program using these relationships
Encapsulating Data
One of the main difficulties we encounter when building more complex software is how to structure our data. So far, we’ve been processing data from a single source and with a simple tabular structure, but it would be useful to be able to combine data from a range of different sources and with more data than just an array of numbers.
import numpy as np

data = np.array([[1., 2., 3.],
                 [4., 5., 6.]])
Using this data structure has the advantage of being able to use NumPy operations to process the data and Matplotlib to plot it, but often we need to have more structure than this. For example, we may need to attach more information about the patients and store this alongside our measurements of inflammation.
We can do this using the Python data structures we’re already familiar with, dictionaries and lists. For instance, we could attach a name to each of our patients:
patients = [
    {
        'name': 'Alice',
        'data': [1., 2., 3.],
    },
    {
        'name': 'Bob',
        'data': [4., 5., 6.],
    },
]
Structuring Data
Write a function, called attach_names, which can be used to attach names to our patient dataset. When used as below, it should produce the expected output.
If you’re not sure where to begin, think about ways you might be able to effectively loop over two collections at once. Also, don’t worry too much about the data type of the data value - it can be a Python list, or a NumPy array - either is fine.
data = np.array([[1., 2., 3.],
                 [4., 5., 6.]])
output = attach_names(data, ['Alice', 'Bob'])
print(output)
[
    {
        'name': 'Alice',
        'data': [1., 2., 3.],
    },
    {
        'name': 'Bob',
        'data': [4., 5., 6.],
    },
]
Solution
One possible solution, perhaps the most obvious, is to use the range function to index into both lists at the same location:
def attach_names(data, names):
    """Create datastructure containing patient records."""
    output = []
    for i in range(len(data)):
        output.append({'name': names[i],
                       'data': data[i]})
    return output
However, this solution has a potential problem that can occur sometimes, depending on the input. What might go wrong with this solution? How could we fix it?
A Better Solution
What would happen if the data and names inputs were different lengths?
If names is longer, we’ll loop through until we run out of rows in the data input, at which point we’ll stop processing the last few names. If data is longer, we’ll loop through, but at some point we’ll run out of names - and this time we try to access part of the list that doesn’t exist, so we’ll get an exception.
A better solution would be to use the zip function, which allows us to iterate over multiple iterables without needing an index variable. The zip function also limits the iteration to whichever of the iterables is smaller, so we won’t raise an exception here, but this might not quite be the behaviour we want, so we’ll also explicitly assert that the inputs should be the same length. Checking that our inputs are valid in this way is known as a precondition.
If you’ve not previously come across this function, read this section of the Python documentation.
def attach_names(data, names):
    """Create datastructure containing patient records."""
    assert len(data) == len(names)
    output = []
    for data_row, name in zip(data, names):
        output.append({'name': name,
                       'data': data_row})
    return output
Classes in Python
Using nested dictionaries and lists should work for some of the simpler cases where we need to handle structured data, but they get quite difficult to manage once the structure becomes a bit more complex. For this reason, in the Object Oriented paradigm, we use classes to help with this data structure. A class is a template for a structured piece of data, so when we create some data using a class, we can be certain that it has the same structure each time. In addition to representing a piece of structured data, a class can also provide a set of functions, or methods, which describe the behaviours of the data.
With our list of dictionaries we had in the example above, we have no real guarantee that each dictionary has the same structure, e.g. the same keys (name and data), unless we check it manually.
With a class, if an object is an instance of that class (i.e. it was made using that template), we know it will have the structure defined by that class.
Different programming languages make slightly different guarantees about how strictly the structure will match, but in object oriented programming this is one of the core ideas.
Let’s start with a minimal example of a class representing our patients.
# file: inflammation/models.py

class Patient:
    def __init__(self, name):
        self.name = name
        self.observations = []
alice = Patient('Alice')
print(alice.name)
Alice
Here we’ve defined a class with one method: __init__. This method is the initialiser method, which is responsible for setting up the initial values and structure of the data inside a new instance of the class - this is very similar to constructors in other languages, so the term is often used in Python too. The __init__ method is called every time we create a new instance of the class, as in Patient('Alice'). The argument self refers to the instance on which we are calling the method and gets filled in automatically by Python - we don’t need to provide a value for this when we call the method.
In our Patient initialiser method, we set their name to a value provided, and create a list of inflammation observations, which is currently empty.
You may not have realised, but you should already be familiar with some of the classes that come bundled as part of Python, for example:
my_list = [1, 2, 3]
my_dict = {1: '1', 2: '2', 3: '3'}
my_set = {1, 2, 3}
print(type(my_list))
print(type(my_dict))
print(type(my_set))
<class 'list'>
<class 'dict'>
<class 'set'>
Lists, dictionaries and sets are a slightly special type of class, but they behave in much the same way as a class we might define ourselves:
- They each hold some data (or state), as you will have seen before.
- They also provide some methods describing the behaviours of the data - what can the data do and what can we do to the data?
The behaviours we may have seen previously include:
- Lists can be appended to
- Lists can be indexed (we’ll get to this later)
- Lists can be sliced (we won’t get to this)
- Key-value pairs can be added to dictionaries
- The value at a key can be looked up in a dictionary
- The union of two sets can be found (the set of values present in any of the sets)
- The intersection of two sets can be found (the set of values present in all of the sets)
Test Driven Development
In yesterday’s lesson we learnt how to create unit tests to make sure our code is behaving as we intended. Test Driven Development (TDD) is an extension of this. If we can define a set of tests for everything our code needs to do, then why not treat those tests as the specification?
When doing Test Driven Development, we write our tests first and only write enough code to make the tests pass. We tend to do this at the level of individual features - define the feature, write the tests, write the code. The main advantages are:
- It forces us to think about how our code will be used before we write it
- It prevents us from doing work that we don’t need to do, e.g. “I might need this later…”
You may also see this process called Red, Green, Refactor: ‘Red’ for the failing tests, ‘Green’ for the code that makes them pass, then ‘Refactor’ (tidy up) the result.
For the challenges from here on, try to first convert the specification into a unit test, then try writing the code to pass the test.
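As a small illustration of the Red, Green, Refactor cycle - the max_inflammation function here is hypothetical, not part of the course solution:
# Red: write the test first (it will fail because the function doesn't exist yet).
# file: tests/test_models.py (hypothetical example)
def test_max_inflammation():
    from inflammation.models import max_inflammation   # hypothetical function
    assert max_inflammation([3, 5, 2]) == 5

# Green: now write just enough code to make the test pass.
# file: inflammation/models.py
def max_inflammation(values):
    """Return the largest inflammation value in a series."""
    return max(values)

# Refactor: tidy up (rename, simplify, remove duplication) while keeping the test green.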
Encapsulating Behaviour
Just like the standard Python datastructures, our classes can have behaviour associated with them.
To define the behaviour of a class we can add functions which operate on the data the class contains. These functions are the member functions or methods.
Member functions are the same as normal functions (alternatively known as free functions), except that they live inside a class and have an extra first parameter self. Using the name self isn’t strictly necessary, but is a very strong convention - it’s extremely rare to see any other name chosen. When we call a method on an object, the value of self is automatically set to this object - hence the name. As we saw with the __init__ method previously, we don’t need to explicitly provide a value for the self argument - this is done for us by Python.
# file: inflammation/models.py

class Patient:
    """A patient in an inflammation study."""
    def __init__(self, name):
        self.name = name
        self.observations = []

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1]['day'] + 1
            except IndexError:
                day = 0

        new_observation = {
            'day': day,
            'value': value,
        }

        self.observations.append(new_observation)
        return new_observation
alice = Patient('Alice')
print(alice)
observation = alice.add_observation(3)
print(observation)
print(alice.observations)
<__main__.Patient object at 0x7fd7e61b73d0>
{'day': 0, 'value': 3}
[{'day': 0, 'value': 3}]
Note also how we used day=None in the parameter list of the add_observation method, then initialise it if the value is indeed None. This is one of the common ways to handle an optional argument in Python, so we’ll see this pattern quite a lot in real projects.
Class and Static Methods
Sometimes, the function we’re writing doesn’t need access to any data belonging to a particular object. For these situations, we can instead use a class method or a static method. Class methods have access to the class that they’re a part of, and can access data on that class - but do not belong to a specific instance of that class, whereas static methods have access to neither the class nor its instances.
By convention, class methods use cls as their first argument instead of self - this is how we access the class and its data, just like self allows us to access the instance and its data. Static methods have neither self nor cls, so the arguments look like a typical free function. These are the only common exceptions to using self for a method’s first argument.
Both of these method types are created using a decorator - for more information see the classmethod and staticmethod sections of the Python documentation.
Dunder Methods
Why is the __init__ method not called init?
There are a few special method names that we can use which Python will use to provide a few common behaviours, each of which begins and ends with a double-underscore, hence the name dunder method.
When writing your own Python classes, you’ll almost always want to write an __init__ method, but there are a few other common ones you might need sometimes. You may have noticed in the code above that print(alice) returned <__main__.Patient object at 0x7fd7e61b73d0>, which is the string representation of the alice object. We may want the print statement to display the object’s name instead. We can achieve this by overriding the __str__ method of our class.
# file: inflammation/models.py

class Patient:
    """A patient in an inflammation study."""
    def __init__(self, name):
        self.name = name
        self.observations = []

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1]['day'] + 1
            except IndexError:
                day = 0

        new_observation = {
            'day': day,
            'value': value,
        }

        self.observations.append(new_observation)
        return new_observation

    def __str__(self):
        return self.name
alice = Patient('Alice')
print(alice)
Alice
These dunder methods are not usually called directly, but rather provide the implementation of some functionality we can use - we didn’t call alice.__str__(), but it was called for us when we did print(alice). Some we see quite commonly are:
- __str__ - converts an object into its string representation, used when you call str(object) or print(object)
- __getitem__ - accesses an object by key, this is how list[x] and dict[x] are implemented
- __len__ - gets the length of an object when we use len(object) - usually the number of items it contains
There are many more described in the Python documentation, but it’s also worth experimenting with built in Python objects to see which methods provide which behaviour. For a more complete list of these special methods, see the Special Method Names section of the Python documentation.
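For example, we could give our Patient class support for len() and square-bracket indexing by adding __len__ and __getitem__ - a sketch, not something we will need for the rest of the course:
class Patient:
    def __init__(self, name):
        self.name = name
        self.observations = []

    def __str__(self):
        return self.name

    def __len__(self):
        # used by len(patient)
        return len(self.observations)

    def __getitem__(self, index):
        # used by patient[index]
        return self.observations[index]

alice = Patient('Alice')
alice.observations.append({'day': 0, 'value': 3})
print(len(alice), alice[0])   # 1 {'day': 0, 'value': 3}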
A Basic Class
Implement a class to represent a book. Your class should:
- Have a title
- Have an author
- When printed using print(book), show text in the format “title by author”
book = Book('A Book', 'Me')
print(book)
A Book by Me
Solution
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def __str__(self):
        return self.title + ' by ' + self.author
Properties
The final special type of method we’ll introduce is a property. Properties are methods which behave like data - when we want to access them, we don’t need to use brackets to call the method manually.
# file: inflammation/models.py

class Patient:
    ...

    @property
    def last_observation(self):
        return self.observations[-1]
alice = Patient('Alice')
alice.add_observation(3)
alice.add_observation(4)
obs = alice.last_observation
print(obs)
{'day': 1, 'value': 4}
You may recognise the @ syntax from the episodes on parameterising unit tests and functional programming - property is another example of a decorator. In this case the property decorator is taking the last_observation function and modifying its behaviour, so it can be accessed as if it were a normal attribute. It is also possible to make your own decorators, but we won’t cover that here.
Relationships Between Classes
We now have a language construct for grouping data and behaviour related to a single conceptual object. The next step we need to take is to describe the relationships between the concepts in our code.
There are two fundamental types of relationship between objects which we need to be able to describe:
- Ownership - x has a y - this is composition
- Identity - x is a y - this is inheritance
Composition
You should hopefully have come across the term composition already - in the novice Software Carpentry, we use composition of functions to reduce code duplication. That time, we used a function which converted temperatures in Celsius to Kelvin as a component of another function which converted temperatures in Fahrenheit to Kelvin.
In the same way, in object oriented programming, we can make things components of other things.
We often use composition where we can say ‘x has a y’ - for example in our inflammation project, we might want to say that a doctor has patients or that a patient has observations.
In the case of our example, we’re already saying that patients have observations, so we’re already using composition here.
We’re currently implementing an observation as a dictionary with a known set of keys though, so maybe we should make an Observation class as well.
# file: inflammation/models.py

class Observation:
    def __init__(self, day, value):
        self.day = day
        self.value = value

    def __str__(self):
        return str(self.value)

class Patient:
    """A patient in an inflammation study."""
    def __init__(self, name):
        self.name = name
        self.observations = []

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1].day + 1
            except IndexError:
                day = 0

        new_observation = Observation(day, value)

        self.observations.append(new_observation)
        return new_observation

    def __str__(self):
        return self.name
alice = Patient('Alice')
obs = alice.add_observation(3)
print(obs)
3
Now we’re using a composition of two custom classes to describe the relationship between two types of entity in the system that we’re modelling.
Inheritance
The other type of relationship used in object oriented programming is inheritance.
Inheritance is about data and behaviour shared by classes, because they have some shared identity - ‘x is a y’.
If class X inherits from (is a) class Y, we say that Y is the superclass or parent class of X, or X is a subclass of Y.
If we want to extend the previous example to also manage people who aren’t patients, we can add another class Person. But Person will share some data and behaviour with Patient - in this case both have a name and show that name when you print them. Since we expect all patients to be people (hopefully!), it makes sense to implement the behaviour in Person and then reuse it in Patient.
To write our class in Python, we used the class keyword, the name of the class, and then a block of the functions that belong to it. If the class inherits from another class, we include the parent class name in brackets.
# file: inflammation/models.py

class Observation:
    def __init__(self, day, value):
        self.day = day
        self.value = value

    def __str__(self):
        return str(self.value)

class Person:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

class Patient(Person):
    """A patient in an inflammation study."""
    def __init__(self, name):
        super().__init__(name)
        self.observations = []

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1].day + 1
            except IndexError:
                day = 0

        new_observation = Observation(day, value)

        self.observations.append(new_observation)
        return new_observation
alice = Patient('Alice')
print(alice)
obs = alice.add_observation(3)
print(obs)
bob = Person('Bob')
print(bob)
obs = bob.add_observation(4)
print(obs)
Alice
3
Bob
AttributeError: 'Person' object has no attribute 'add_observation'
As expected, an error is thrown because we cannot add an observation to bob, who is a Person but not a Patient.
We see in the example above that to say that a class inherits from another, we put the parent class (or superclass) in brackets after the name of the subclass.
There’s something else we need to add as well - Python doesn’t automatically call the __init__ method on the parent class if we provide a new __init__ for our subclass, so we’ll need to call it ourselves. This makes sure that everything that needs to be initialised on the parent class has been, before we need to use it. If we don’t define a new __init__ method for our subclass, Python will look for one on the parent class and use it automatically. This is true of all methods - if we call a method which doesn’t exist directly on our class, Python will search for it among the parent classes. The order in which it does this search is known as the method resolution order - a little more on this in the Multiple Inheritance callout below.
The line super().__init__(name) gets the parent class, then calls the __init__ method, providing the name variable that Person.__init__ requires. This is quite a common pattern, particularly for __init__ methods, where we need to make sure an object is initialised as a valid X before we can initialise it as a valid Y - e.g. a valid Person must have a name before we can properly initialise a Patient model with their inflammation data.
Composition vs Inheritance
When deciding how to implement a model of a particular system, you often have a choice of either composition or inheritance, where there is no obviously correct choice. For example, it’s not obvious whether a photocopier is a printer and is a scanner, or has a printer and has a scanner.
class Machine:
    pass

class Printer(Machine):
    pass

class Scanner(Machine):
    pass

class Copier(Printer, Scanner):
    # Copier `is a` Printer and `is a` Scanner
    pass
class Machine:
    pass

class Printer(Machine):
    pass

class Scanner(Machine):
    pass

class Copier(Machine):
    def __init__(self):
        # Copier `has a` Printer and `has a` Scanner
        self.printer = Printer()
        self.scanner = Scanner()
Both of these would be perfectly valid models and would work for most purposes. However, unless there’s something about how you need to use the model which would benefit from using a model based on inheritance, it’s usually recommended to opt for composition over inheritance. This is a common design principle in the object oriented paradigm and is worth remembering, as it’s very common for people to overuse inheritance once they’ve been introduced to it.
For much more detail on this see the Python Design Patterns guide.
Multiple Inheritance
Multiple Inheritance is when a class inherits from more than one direct parent class. It exists in Python, but is often not present in other Object Oriented languages. Although this might seem useful, like in our inheritance-based model of the photocopier above, it’s best to avoid it unless you’re sure it’s the right thing to do, due to the complexity of the inheritance hierarchy. Often using multiple inheritance is a sign you should instead be using composition - again like the photocopier model above.
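If you do find yourself reading code that uses multiple inheritance, Python can show you the method resolution order it will use - a quick sketch using the inheritance-based photocopier model above:
class Machine: pass
class Printer(Machine): pass
class Scanner(Machine): pass
class Copier(Printer, Scanner): pass

print(Copier.__mro__)
# Copier, Printer, Scanner, Machine, object - the order Python searches for methods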
Exercise: A Model Patient
Let’s use what we have learnt in this episode and combine it with what we have learnt on software requirements to formulate and implement a few new solution requirements to extend the model layer of our clinical trial system.
Let’s start by extending the system such that there must be a Doctor class to hold the data representing a single doctor, which:
- must have a name attribute
- must have a list of patients that this doctor is responsible for.
In addition to these, try to think of an extra feature you could add to the models which would be useful for managing a dataset like this - imagine we’re running a clinical trial, what else might we want to know? Try using Test Driven Development for any features you add: write the tests first, then add the feature. The tests have been started for you in tests/test_patient.py, but you will probably want to add some more.
Once you’ve finished the initial implementation, do you have much duplicated code? Is there anywhere you could make better use of composition or inheritance to improve your implementation?
For any extra features you’ve added, explain them and how you implemented them to your neighbour. Would they have implemented that feature in the same way?
Solution
One example solution is shown below. You may start by writing some tests (that will initially fail), and then develop the code to satisfy the new requirements and pass the tests.
# file: tests/test_patient.py
"""Tests for the Patient model."""

def test_create_patient():
    """Check a patient is created correctly given a name."""
    from inflammation.models import Patient
    name = 'Alice'
    p = Patient(name=name)
    assert p.name == name

def test_create_doctor():
    """Check a doctor is created correctly given a name."""
    from inflammation.models import Doctor
    name = 'Sheila Wheels'
    doc = Doctor(name=name)
    assert doc.name == name

def test_doctor_is_person():
    """Check if a doctor is a person."""
    from inflammation.models import Doctor, Person
    doc = Doctor("Sheila Wheels")
    assert isinstance(doc, Person)

def test_patient_is_person():
    """Check if a patient is a person."""
    from inflammation.models import Patient, Person
    alice = Patient("Alice")
    assert isinstance(alice, Person)

def test_patients_added_correctly():
    """Check patients are being added correctly by a doctor."""
    from inflammation.models import Doctor, Patient
    doc = Doctor("Sheila Wheels")
    alice = Patient("Alice")
    doc.add_patient(alice)
    assert doc.patients is not None
    assert len(doc.patients) == 1

def test_no_duplicate_patients():
    """Check adding the same patient to the same doctor twice does not result in duplicates."""
    from inflammation.models import Doctor, Patient
    doc = Doctor("Sheila Wheels")
    alice = Patient("Alice")
    doc.add_patient(alice)
    doc.add_patient(alice)
    assert len(doc.patients) == 1
...
# file: inflammation/models.py
...
class Person:
    """A person."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

class Patient(Person):
    """A patient in an inflammation study."""
    def __init__(self, name):
        super().__init__(name)
        self.observations = []

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1].day + 1
            except IndexError:
                day = 0

        new_observation = Observation(day, value)

        self.observations.append(new_observation)
        return new_observation

class Doctor(Person):
    """A doctor in an inflammation study."""
    def __init__(self, name):
        super().__init__(name)
        self.patients = []

    def add_patient(self, new_patient):
        # A crude check by name if this patient is already looked after
        # by this doctor before adding them
        for patient in self.patients:
            if patient.name == new_patient.name:
                return
        self.patients.append(new_patient)
...
Key Points
Classes allow us to organise data into distinct concepts.
By breaking down our data into classes, we can reason about the behaviour of parts of our data.
Relationships between concepts can be described using inheritance (is a) and composition (has a).
Architecture Revisited: Extending Software
Overview
Teaching: 15 min
Exercises: 0 minQuestions
How can we extend our software within the constraints of the MVC architecture?
Objectives
Extend our software to add a view of a single patient in the study and the software’s command line interface to request a specific view.
MVC Revisited
We’ve been developing our software using the Model-View-Controller (MVC) architecture so far, but, as we have seen, MVC is just one of the common architectural patterns and is not the only choice we could have made.
There are many variants of an MVC-like pattern (such as Model-View-Presenter (MVP), Model-View-Viewmodel (MVVM), etc.), but in most cases, the distinction between these patterns isn’t particularly important. What really matters is that we are making decisions about the architecture of our software that suit the way in which we expect to use it. We should reuse these established ideas where we can, but we don’t need to stick to them exactly.
In this episode we’ll be taking our Object Oriented code from the previous episode and integrating it into our existing MVC pattern.
Let’s start with adding a view that allows us to see the data for a single patient.
First, we need to add the code for the view itself and make sure our Patient class has the necessary data - including the ability to pass a list of measurements to the __init__ method.
Note that your Patient class may look very different now, so adapt this example to fit what you have.
# file: inflammation/views.py

...

def display_patient_record(patient):
    """Display data for a single patient."""
    print(patient.name)
    for obs in patient.observations:
        print(obs.day, obs.value)
# file: inflammation/models.py

...

class Observation:
    def __init__(self, day, value):
        self.day = day
        self.value = value

    def __str__(self):
        return str(self.value)

class Person:
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name

class Patient(Person):
    """A patient in an inflammation study."""
    def __init__(self, name, observations=None):
        super().__init__(name)

        self.observations = []
        if observations is not None:
            self.observations = observations

    def add_observation(self, value, day=None):
        if day is None:
            try:
                day = self.observations[-1].day + 1
            except IndexError:
                day = 0

        new_observation = Observation(day, value)

        self.observations.append(new_observation)
        return new_observation
Now we need to make sure people can call this view - that means connecting it to the controller and ensuring that there’s a way to request this view when running the program.
The changes we need to make here are that the main function needs to be able to direct us to the view we’ve requested - and we need to add to the command line interface the necessary data to drive the new view.
# file: inflammation-analysis.py

#!/usr/bin/env python3
"""Software for managing patient data in our imaginary hospital."""

import argparse

from inflammation import models, views


def main(args):
    """The MVC Controller of the patient data system.

    The Controller is responsible for:
    - selecting the necessary models and views for the current task
    - passing data between models and views
    """
    infiles = args.infiles
    if not isinstance(infiles, list):
        infiles = [args.infiles]

    for filename in infiles:
        inflammation_data = models.load_csv(filename)

        if args.view == 'visualize':
            view_data = {
                'average': models.daily_mean(inflammation_data),
                'max': models.daily_max(inflammation_data),
                'min': models.daily_min(inflammation_data),
            }

            views.visualize(view_data)

        elif args.view == 'record':
            patient_data = inflammation_data[args.patient]
            observations = [models.Observation(day, value)
                            for day, value in enumerate(patient_data)]
            patient = models.Patient('UNKNOWN', observations)

            views.display_patient_record(patient)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description='A basic patient data management system')

    parser.add_argument(
        'infiles',
        nargs='+',
        help='Input CSV(s) containing inflammation series for each patient')

    parser.add_argument(
        '--view',
        default='visualize',
        choices=['visualize', 'record'],
        help='Which view should be used?')

    parser.add_argument(
        '--patient',
        type=int,
        default=0,
        help='Which patient should be displayed?')

    args = parser.parse_args()

    main(args)
We’ve added two options to our command line interface here: one to request a specific view and one for the patient ID that we want to look up. For the full range of features that we have access to with argparse, see the Python module documentation.
Allowing the user to request a specific view like this is a similar model to that used by the popular Python library Click - if you find yourself needing to build more complex interfaces than this, Click would be a good choice.
You can find more information in Click’s documentation.
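For comparison only, here is a minimal sketch of a similar interface written with Click - this is not part of our project, and assumes Click is installed:
import click

@click.command()
@click.argument('infiles', nargs=-1, required=True)
@click.option('--view', default='visualize', type=click.Choice(['visualize', 'record']))
@click.option('--patient', default=0, type=int)
def main(infiles, view, patient):
    """A basic patient data management system."""
    click.echo(f"view={view}, patient={patient}, files={list(infiles)}")

if __name__ == '__main__':
    main()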
For now, we also don’t know the names of any of our patients, so we’ve made it 'UNKNOWN' until we get more data.
We can now call our program with these extra arguments to see the record for a single patient:
python3 inflammation-analysis.py --view record --patient 1 data/inflammation-01.csv
UNKNOWN
0 0.0
1 0.0
2 1.0
3 3.0
4 1.0
5 2.0
6 4.0
7 7.0
...
Additional Material
Now we’ve covered the basics of multi-layer architectures and Object Oriented Programming, and how we can integrate it into our existing MVC code, there are two optional extra episodes which you may find interesting.
Both episodes cover the persistence layer of software architectures and methods of persistently storing data, but take different approaches. The episode on persistence with JSON covers some more advanced concepts in Object Oriented Programming, while the episode on databases starts to build towards a true multilayer architecture, which would allow our software to handle much larger quantities of data.
Towards Collaborative Software Development
Having looked at some theoretical aspects of software design, we are now circling back to implementing our software design and developing our software to satisfy the requirements collaboratively in a team. At an intermediate level of software development, there is a wealth of practices that could be used, and applying suitable design and coding practices is what separates an intermediate developer from someone who has just started coding. The key for an intermediate developer is to balance these concerns for each software project appropriately, and employ design and development practices enough so that progress can be made.
One practice that should always be considered, and has been shown to be very effective in team-based software development, is that of code review. Code reviews help to ensure ‘good’ coding standards are achieved and maintained within a team by having multiple people look at and comment on key code changes to see how they fit within the codebase. Such reviews check the correctness of the new code, test coverage, functionality changes, and confirm that they follow the coding guides and best practices. Let’s have a look at some code review techniques available to us.
Key Points
By breaking down our software into components with a single responsibility, we avoid having to rewrite it all when requirements change. Such components can be as small as a single function, or be a software package in their own right.
Wrap-up
Overview
Teaching: 15 min
Exercises: 0 minQuestions
Looking back at what was covered and how different pieces fit together
Where are some advanced topics and further reading available?
Objectives
Put the course in context with future learning.
Summary
As part of this course we have looked at a core set of established, intermediate-level software development tools and best practices for working as part of a team. The course teaches a selected subset of skills that have been tried and tested in collaborative research software development environments, although not an all-encompassing set of every skill you might need (check some further reading). It will provide you with a solid basis for writing industry-grade code, which relies on the same best practices taught in this course:
- Collaborative techniques and tools play an important part in research software development in teams, but also have benefits in solo development. We’ve looked at the benefits of a well-considered development environment, using practices, tools and infrastructure to help us write code more effectively in collaboration with others.
- We’ve looked at the importance of being able to verify the correctness of software, and how we can leverage techniques and infrastructure to automate and scale tasks such as testing to save us time - but automation has a role beyond simply testing: what else can you automate that would save you even more time? We’ve also examined how to locate faults in our software when they do appear.
- We’ve gone beyond procedural programming and explored different software design paradigms, such as object-oriented and functional styles of programming. We’ve contrasted their pros, cons, and the situations in which they work best, and how separation of concerns through modularity and architectural design can help shape good software.
- As an intermediate developer, aspects other than technical skills become important, particularly in development teams. We’ve looked at the importance of good, consistent practices for team working, and the importance of having a self-critical mindset when developing software, and ways to manage feedback effectively and efficiently.
Reflection Exercise: Putting the Pieces Together
As a group, reflect on the concepts (e.g. tools, techniques and practices) covered throughout the course, how they relate to one another, how they fit together in a bigger picture or skill learning pathways and in which order you need to learn them.
Solution
One way to think about these concepts is to make a list and try to organise them along two axes - ‘perceived usefulness of a concept’ versus ‘perceived difficulty or time needed to master a concept’, as shown in the table below (you can make your own copy of the template table for this exercise). You may then think about the order in which you want to learn the skills and how much effort they require - e.g. start with those that are more useful but, for the time being, hold off on those that are not too useful to you and take a lot of time to master. You will likely want to focus on the concepts in the top right corner of the table first, but investing time to master more difficult concepts may pay off in the long run by saving you time and effort and helping reduce technical debt.
Another way you can organise the concepts is using a concept map (a directed graph depicting suggested relationships between concepts) or any other diagram/visual aid of your choice. Below are some example views of tools and techniques covered in the course using concept maps. Your views may differ but that is not to say that either view is right or wrong. This exercise is meant to get you to reflect on what was covered in the course and hopefully to reinforce the ideas and concepts you learned. A different concept map tries to organise concepts/skills based on their level of difficulty (novice, intermediate and advanced, and in-between!) and tries to show which skills are prerequisite for others and in which order you should consider learning skills.
Further Resources
Below are some additional resources to help you continue learning:
- Additional episode on persisting data
- Additional episode on databases
- CodeRefinery courses on FAIR (Findable, Accessible, Interoperable, and Reusable) software practices
- Python documentation
- GitHub Actions documentation
Key Points
Collaborative techniques and tools play an important part in research software development in teams.