All You Need For Your Next Python Project

  • 2020-03-03 09:32 AM

In this article, you'll learn all you need for your next Python project. It combines two parts: Ultimate Setup for Your Next Python Project and Automating Every Aspect of Your Python Project.

Whether you are working on some machine learning/AI stuff, building web apps in Flask or just writing a quick Python script, it's always useful to have a project template that satisfies all your needs, namely: a predefined directory structure, all the necessary config files like pytest.ini or requirements.txt, testing, linting and static code analysis setup, CI/CD tooling, Dockerization of your app and, on top of that, automation with a Makefile. So, here I bring you exactly that in this "Ultimate" all-purpose setup for your Python projects.

Ultimate Setup for Your Next Python Project

Directory Structure

When I was writing this kind of article for Golang (here), I had a hard time figuring out an ideal project structure; with Python, however, it's pretty simple:

├── blueprint  # Our source code - name of the application/module
│   ├── app.py
│   ├── __init__.py
│   ├── __main__.py
│   └── resources
├── tests
│   ├── conftest.py
│   ├── context.py
│   ├── __init__.py
│   └── test_app.py
├── .github  # GitHub Actions
│   └── workflows
│       ├── build-test.yml
│       └── push.yml
├── Makefile
├── setup.cfg
├── pytest.ini
├── requirements.txt
├── dev.Dockerfile
└── prod.Dockerfile 

Let’s outline what we have here, starting from the top:

  • blueprint - This is our source code directory, which should be named after the application or package you are working on. Inside we have the usual __init__.py file signifying that it's a Python package, next there is __main__.py, which is used when we want to run our application directly with python -m blueprint. The last source file here is app.py, which is here really just for demonstration purposes. In a real project, instead of this you would have a few top-level source files and more directories (internal packages). We will get to the contents of these files a little later. Finally, we also have a resources directory here, which is used for any static content your application might need, e.g. images, keystores, etc.

  • tests - In this directory resides our test suite. I'm not gonna go into too much detail here, as we will dedicate a whole section to testing, but just briefly:

    • test_app.py is a test file corresponding to app.py in the source directory
    • conftest.py is probably familiar to you if you have ever used Pytest - it's a file used for specifying Pytest fixtures, hooks or loading external plugins.
    • context.py helps with imports of source code files from the blueprint directory by manipulating sys.path. We will see how that works in a second.
  • .github - This is the last directory we have in this project. It holds configurations for GitHub Actions, which we use for CI/CD. We have two files: the first of them - build-test.yml - is responsible for building, testing and linting our source code on every push. The second file - push.yml - pushes our built application to GitHub Package Registry every time we create a tag/release on GitHub. More on this in a separate blog post.

  • Makefile - Apart from directories, we also have a few top-level files in our project. The first of them - Makefile - contains targets that will help us automate commonly performed tasks like building, testing, linting or cleaning our project.

  • Convenience script - This one sets up the project for you. It essentially renames and substitutes dummy values in this project template for real values like the name of your project or the name of your package. Pretty handy, right?

The rest of the files we have here are configuration files for all the tools we will use in this project. Let's jump over to the next section and explore what they do and what's in them.

Config Files

One thing that can get pretty messy when setting up a Python project is the config file soup that you end up with when you use a bunch of tools like pylint, coverage.py, flake8 and so on. Each of these tools would like to have its own file, usually something like .flake8 or .coveragerc, which creates lots of unnecessary clutter in the root of your project. To avoid this, I merged all these files into a single one - setup.cfg:

[flake8]
exclude =

ignore =
    # Put Error/Style codes here e.g. H301

max-line-length = 120
max-complexity = 10

[bandit]
targets: blueprint

[coverage:run]
branch = True
omit =

[coverage:report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:

[coverage:html]
directory = reports

...  # 100 lines of config...

In case you are not familiar with all of the tools used here, I will give a quick description:

  • Flake8 - is a tool for enforcing code style in your projects - in other words, it's a linter similar to pylint, which we will use as well. Why use both? It's true that they overlap, but each of them has some rules that the other doesn't, so in my experience it's worth using them both.

  • Bandit - is a tool for finding common security issues in Python code. It works by creating an AST (abstract syntax tree) from your code and running plugins against its nodes. Developers are generally not security experts, and all of us make mistakes here and there, so it's always nice to have a tool that can spot at least some of those security mistakes for us.

  • Coverage.py - is a tool for measuring code coverage of Python programs. It gets triggered when we run our test suite with Pytest and generates a coverage report from the test run. These reports can be in the form of terminal output, but also XML format, which can then be consumed by CI tools.

With that out of the way, let's go over what we have in setup.cfg. For Flake8 we define exclusion patterns so that we don't lint code we don't care about. Below that is an empty ignore section in case we need to ignore some rule globally. We also set max line length to 120, as keeping line length at 80 is, in my opinion, unreasonable given the size of today's screens. The final line sets the McCabe complexity threshold to 10 - if you are not familiar with cyclomatic complexity, you can find out more here.
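If cyclomatic (McCabe) complexity is new to you, here is a toy illustration (the function is made up, not from the template): each decision point adds one to a function's complexity score, and flake8 flags any function that exceeds max-complexity.

```python
def shipping_cost(weight, express, international):
    # McCabe complexity 4: one base path plus three decision points.
    cost = 5.0            # base path      -> complexity 1
    if weight > 10:       # decision point -> +1
        cost += 2.5
    if express:           # decision point -> +1
        cost *= 2
    if international:     # decision point -> +1
        cost += 10
    return cost

print(shipping_cost(12, True, False))  # (5.0 + 2.5) * 2 -> 15.0
```

A function with a dozen nested ifs and loops quickly climbs past 10, which is usually a sign it should be split up.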

Next up is Bandit; all we configure here is the target directory, which is the name of our package. We do this so that we can avoid specifying targets on the command line.

After that follows Coverage.py. First we enable branch coverage, which means that in places where a line in your program could jump to more than one next line, Coverage.py tracks which of those destination lines are actually visited. Next, we omit some files that shouldn't or can't be included in coverage measurement, like the tests themselves or virtual environment files. We also exclude specific lines, e.g. lines that are labeled with a pragma: no cover comment. The last config line tells the tool to store generated reports in the reports directory. This directory is created automatically if it doesn't exist already.
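To make the exclude_lines setting concrete, here is a minimal sketch (the function names are made up): the pragma: no cover comment tells Coverage.py to leave that code out of the coverage report entirely.

```python
def parse(value):
    # Regular code: counted toward coverage and exercised by tests.
    return int(value)

def debug_dump(state):  # pragma: no cover
    # Excluded from coverage measurement: only used for ad-hoc debugging,
    # so we don't want an uncovered helper dragging the percentage down.
    print(f"STATE: {state}")

print(parse("42"))
```

Without the pragma, debug_dump would show up as uncovered lines in every report.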

The final tool we need to configure is Pylint; its configuration, though, is very extensive - more than 100 lines... So, I will leave this one out and point you to the source here, as well as to the commented and explained pylintrc in the Pylint repository here.

We went through all the tools in setup.cfg, but there is one more that cannot be added to setup.cfg, and that is Pytest - even though the Pytest docs tell you that you can use setup.cfg, it's not exactly true... As per this issue, the option to use setup.cfg is being deprecated and there are some bugs, like interpolation errors, that won't be fixed; therefore we will also need a pytest.ini file for the configuration of Pytest:

[pytest]
addopts = --color=yes --cov=blueprint --cov-report=xml --cov-report=term -ra
filterwarnings =
log_cli = 1
log_cli_level = INFO
log_cli_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_cli_date_format = %Y-%m-%d %H:%M:%S

The first thing we do here is set a bunch of command-line arguments - we enable colors in terminal output, then we enable coverage reporting for the blueprint directory, and after that we enable generation of both XML and stdout (term) coverage reports. The final argument (-ra) tells Pytest to output a short summary for non-passing tests.

On the next line we have filterwarnings option which allows us to disable some annoying warnings in the output, for example deprecation warnings coming out of some library which we have no control over.

The rest of the config sets up logging. The first line just turns it on and the other 3 configure level, format and datetime format. Rather than explaining the format config, it's easier to just see the output itself, which is shown in the next section.

With all the configuration in pytest.ini, all we need to do to run our test suite is run pytest - not even a package argument needed!

The last actual configuration file we have is requirements.txt, which contains the list of our dependencies. All you can find in this file is a list of Python packages, one per line, with an optional version of the package. As noted, the package version is optional, but I strongly suggest you lock versions in requirements.txt to avoid situations where you might download a newer, incompatible package during build and deployment and end up breaking your application.
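For illustration, a pinned requirements.txt could look like this (the package names and versions here are purely examples - pin whatever your project actually depends on):

```text
flask==1.1.1
requests==2.22.0
pytest==5.3.2
pytest-cov==2.8.1
```

With exact pins like these, every build installs the same versions, so a surprise upstream release can't break your deployment.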

There are 2 remaining files which aren't actually config files - our Dockerfiles, namely dev.Dockerfile and prod.Dockerfile, used for development and production images respectively. I will leave those out for the time being, as we will explore them in a separate article where we will talk about CI/CD and deployment. You can, however, already check those files out in the GitHub repository here.

Actual Source Code

We have done quite a lot without even mentioning the source code of our application, but I think it's time to look at those few lines of code that are in the project skeleton:

class Blueprint:

    @staticmethod
    def run():
        print("Hello World...")

The only actual source code in this blueprint is this one class with a static method. It is really only needed so that we can run something, get some output and test it. It also works as an entrypoint to the whole application. In a real project you could use the run() method to initialize your application or webserver.

So, how do we actually run this piece of code?

from .app import Blueprint

if __name__ == '__main__':
    Blueprint.run()
This short snippet in the specially named __main__.py file is what we need in our project so that we can run the whole package using python -m blueprint. The nice thing about this file and its contents is that it will only be run with that command; therefore, if we want to just import something from the source of this package without running the whole thing, we can do so without triggering Blueprint.run().
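To see the __name__ guard in action, here is a small self-contained sketch (the demo_main module name is made up): it writes a throwaway module to a temp directory, then shows that importing it does not trigger the guarded block, while executing it as __main__ does.

```python
import os
import runpy
import sys
import tempfile

# Write a tiny module with a __main__ guard to a temp directory.
mod_dir = tempfile.mkdtemp()
with open(os.path.join(mod_dir, "demo_main.py"), "w") as f:
    f.write(
        "print('import-time code always runs')\n"
        "if __name__ == '__main__':\n"
        "    print('guarded code runs only when executed directly')\n"
    )

sys.path.insert(0, mod_dir)

# Importing: only the unguarded print fires, __name__ is 'demo_main' here.
import demo_main

# Executing as __main__ (what `python -m` does): both prints fire.
runpy.run_path(os.path.join(mod_dir, "demo_main.py"), run_name="__main__")
```

This is exactly why importing Blueprint from the package never accidentally starts the application.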

There’s one more special file in our package and that’s the file. Usually you would leave it empty a use it only to tell Python that the directory is package. Here however, we will use it to export classes, variables and functions from our package.

from .app import Blueprint

Without this one line above, you wouldn't be able to write from blueprint import Blueprint from outside of this package. This way we can avoid people using internal parts of our code that should not be exposed.
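Here is a self-contained sketch of what that re-export buys us - it recreates the package layout from this article in a temporary directory and then imports Blueprint straight from the package, the way a user of the package would:

```python
import os
import sys
import tempfile

# Recreate blueprint/app.py and blueprint/__init__.py on disk.
pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "blueprint")
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "app.py"), "w") as f:
    f.write(
        "class Blueprint:\n"
        "    @staticmethod\n"
        "    def run():\n"
        "        print('Hello World...')\n"
    )
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .app import Blueprint\n")

sys.path.insert(0, pkg_root)

# Works only thanks to the re-export in __init__.py - without it,
# users would have to reach into the internal module: blueprint.app.Blueprint
from blueprint import Blueprint
Blueprint.run()
```

The internal layout (app.py) stays an implementation detail you are free to refactor later.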

That’s all for the code of our package, but what about the tests? First, let’s look at the

import sys
import os
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))

import blueprint  # noqa # pylint: disable=unused-import, wrong-import-position

Normally, when you use someone's package, you import it like import blueprint or from blueprint import Blueprint. To imitate this in our tests, and therefore make them as close as possible to real usage, we use the context.py file to import the package into our test context. We also insert our project root directory into the system path. This is not actually necessary when running tests with pytest, but if you, for example, ran a test file directly with python, or possibly with unittest, without including the sys.path.insert..., then you would get ModuleNotFoundError: No module named 'blueprint', so this one line is a little bit of an insurance policy.

Now, let’s see the example test:

from .context import blueprint

def test_app(capsys, example_fixture):
    # pylint: disable=W0612,W0613
    blueprint.Blueprint.run()
    captured = capsys.readouterr()

    assert "Hello World..." in captured.out

What we have here is just a single test that checks the standard output of Blueprint.run() using the built-in Pytest fixture called capsys (capture system output). So, what happens when we run the test suite?

~ $ pytest
=========================================================== test session starts ============================================================
collected 1 item

-------------------------------------------------------------- live log setup --------------------------------------------------------------
2020-01-04 12:22:00 [    INFO] Setting Up Example Fixture... (
PASSED                                                                                                                               [100%]
------------------------------------------------------------ live log teardown -------------------------------------------------------------
2020-01-04 12:22:00 [    INFO] Tearing Down Example Fixture... (

----------- coverage: platform linux, python 3.7.5-final-0 -----------
Name                       Stmts   Miss Branch BrPart  Cover
blueprint/__init__.py          1      0      0      0   100%
blueprint/app.py               3      0      0      0   100%
TOTAL                          4      0      0      0   100%
Coverage XML written to file coverage.xml

I trimmed a few lines from the output so that you can better see the relevant parts of it. What's to note here? Well, our test passed! Other than that, we can see the coverage report, and we can also see that the report got written to coverage.xml, as configured in pytest.ini. One more thing that we have here in the output are 2 log messages coming from conftest.py. What is that about?

You might have noticed that apart from the capsys fixture, we also used example_fixture in the parameters of our small test. This fixture resides in conftest.py, as should all the custom fixtures we make:

import logging
import pytest

LOGGER = logging.getLogger(__name__)

@pytest.fixture
def example_fixture():
    LOGGER.info("Setting Up Example Fixture...")
    yield
    LOGGER.info("Tearing Down Example Fixture...")

As the name implies, this really is just an example fixture. All it does is log one message, then it lets the test run, and finally it logs one more message. The nice thing about the conftest.py file is that it gets automatically discovered by Pytest, so you don't even need to import it in your test files. If you want to find out more about it, you can check out my previous post about Pytest here or the docs here.

One Command for Everything

It would be quite laborious if we had to run each of our tools separately and remember their arguments, even though they are always the same. Also, it would be equally annoying if we later decided to put all these tools into CI/CD (next article!), right? So, let's simplify things with a Makefile:

MODULE := blueprint
BLUE='\033[0;34m'
NC='\033[0m' # No Color

run:
    @python -m $(MODULE)

test:
    @pytest

lint:
    @echo "\n${BLUE}Running Pylint against source and test files...${NC}\n"
    @pylint --rcfile=setup.cfg **/*.py
    @echo "\n${BLUE}Running Flake8 against source and test files...${NC}\n"
    @flake8
    @echo "\n${BLUE}Running Bandit against source files...${NC}\n"
    @bandit -r --ini setup.cfg

clean:
    rm -rf .pytest_cache .coverage coverage.xml

.PHONY: clean test

In this Makefile we have 4 targets. The first of them - run - runs our application using the __main__.py we created in the root of our source folder. Next, test just runs pytest; it's that simple thanks to all the configs in pytest.ini. The longest target here - lint - runs all our linting tools: first pylint against all .py files in the project, including test files, then flake8 and finally bandit, both of which pick up their configuration from setup.cfg. If any of these tools finds a problem with our code, it exits with a non-zero code, meaning the target fails, which will be useful in CI/CD. The last target in this file is clean, which, well... cleans our project - it removes the files generated by the previously mentioned tools.


In this article we’ve built project skeleton, that’s ready to be used for any kind of Python project you might be working on or thinking about, so if you want play with or dig a little deeper, then check out the source code which is available in my repository here: Repo also includes information on how to setup your project using convenience script, plus some more docs. Feel free to leave feedback/suggestions in form of issue or just star it if you like this kind of content. 🙂

In the next one, we will look into adding CI/CD to the mix with GitHub Actions and GitHub Package Registry. We will also Dockerize our project, create both debuggable and optimized production-ready Docker images, and add some more code quality tooling using CodeClimate and SonarCloud.

Automating Every Aspect of Your Python Project

Debuggable Docker Containers for Development

Some people don’t like Docker because containers can be hard to debug or because their images take long time to be built. So, let’s start here, by building images that are ideal for development - fast to build and easy to debug.

To make the image easily debuggable, we will need a base image that includes all the tools we might ever need when debugging - things like bash, vim, netcat, wget, cat, find, grep etc. python:3.8.1-buster seems like an ideal candidate for the task. It includes a lot of tools by default, and we can install everything that is missing pretty easily. This base image is pretty thick, but that doesn't matter here, as it's going to be used only for development. Also, as you probably noticed, I chose a very specific image - locking both the version of Python and of Debian - that's intentional, as we want to minimize the chance of "breakage" caused by a newer, possibly incompatible version of either Python or Debian.

As an alternative you could use an Alpine-based image. That, however, might cause some issues, as it uses musl libc instead of glibc, which many prebuilt Python wheels rely on. So, just keep that in mind if you decide to go this route.

As for the speed of builds, we will leverage multistage builds to allow us to cache as many layers as possible. This way we can avoid downloading dependencies and tools like gcc as well as all libraries required by our application (from requirements.txt).

To further speed things up, we will create a custom base image from the previously mentioned python:3.8.1-buster that will include all the tools we need, as we cannot cache the steps needed for downloading and installing these tools into the final runner image.

Enough talking, let’s see the Dockerfile:

# dev.Dockerfile
FROM python:3.8.1-buster AS builder
RUN apt-get update && apt-get install -y --no-install-recommends python3-venv gcc libpython3-dev && \
    python3 -m venv /venv && \
    /venv/bin/pip install --upgrade pip

FROM builder AS builder-venv

COPY requirements.txt /requirements.txt
RUN /venv/bin/pip install -r /requirements.txt

FROM builder-venv AS tester

COPY . /app
RUN /venv/bin/pytest

FROM martinheinz/python-3.8.1-buster-tools:latest AS runner
COPY --from=tester /venv /venv
COPY --from=tester /app /app
WORKDIR /app

ENTRYPOINT ["/venv/bin/python3", "-m", "blueprint"]
USER 1001

LABEL name={NAME}
LABEL version={VERSION}

Above you can see that we go through 3 intermediate images before creating the final runner image. The first of them is named builder. It installs all the necessary tools that will be needed to build our final application - this includes gcc and the Python virtual environment package. After installation, it also creates the actual virtual environment, which is then used by the next images.

Next comes the builder-venv image, which copies the list of our dependencies (requirements.txt) into the image and installs them. This intermediate image is needed for caching - we only want to install libraries when requirements.txt changes; otherwise we just use the cache.

Before we create our final image, we first want to run tests against our application. That's what happens in the tester image: we copy our source code into the image and run the tests. If they pass, we move on to the runner.

For the runner image we are using a custom image that includes some extras like vim or netcat that are not present in the normal Debian image. You can find this image on Docker Hub here, and you can also check out the very simple base.Dockerfile here. So, what do we do in this final image? First we copy the virtual environment that holds all our installed dependencies from the tester image, then we copy our tested application. Now that we have all the sources in the image, we move to the directory where the application is and set the ENTRYPOINT so that it runs our application when the image is started. For security reasons, we also set USER to 1001, as best practices tell us that you should never run containers as the root user. The final 2 lines set labels of the image. These are going to get replaced/populated when the build is run using a make target, which we will see a little later.

Optimized Docker Containers for Production

When it comes to production-grade images, we will want to make sure that they are small, secure and fast. My personal favourite for this task is the Python image from the Distroless project. What is Distroless, though?

Let me put it this way - in an ideal world everybody would build their image using FROM scratch as their base image (that is - empty image). That’s however not what most of us would like to do, as it requires you to statically link your binaries, etc. That’s where Distroless comes into play - it’s FROM scratch for everybody.

Alright, now to actually describe what Distroless is. It's a set of images made by Google that contain only the bare minimum needed for your app, meaning that there are no shells, package managers or any other tools that would bloat the image and create signal noise for security scanners (e.g. CVE scanners), making it harder to establish compliance.

Now that we know what we are dealing with, let’s see the production Dockerfile… Well actually, we are not gonna change that much here, it’s just 2 lines:

# prod.Dockerfile
# 1. Line - Change builder image
FROM debian:buster-slim AS builder
# ...
# 17. Line - Switch to Distroless image
FROM gcr.io/distroless/python3-debian10 AS runner
# ... Rest of the Dockerfile

All we had to change are our base images for building and running the application! But the difference is pretty big - our development image was 1.03GB, while this one is just 103MB; that's quite a difference! I know, I can already hear you - "But Alpine can be even smaller!" - Yes, that's right, but size doesn't matter that much. You will only ever notice image size when downloading/uploading it, which is not that frequent. When the image is running, size doesn't matter at all. What is more important than size is security, and in that regard Distroless is surely superior, as Alpine (which is otherwise a great alternative) has lots of extra packages that increase the attack surface.

The last thing worth mentioning when talking about Distroless are debug images. Considering that Distroless doesn't contain any shell (not even sh), it gets pretty tricky when you need to debug and poke around. For that, there are debug versions of all Distroless images. So, when poop hits the fan, you can build your production image using the debug tag and deploy it alongside your normal image, exec into it and do - for example - a thread dump. You can use the debug version of the python3 image like so:

docker run --entrypoint=sh -ti gcr.io/distroless/python3-debian10:debug

Single Command for Everything

With all the Dockerfiles ready, let's automate the hell out of it with a Makefile! The first thing we want to do is build our application with Docker. So, to build the dev image we can run make build-dev, which runs the following target:

# The binary to build (just the basename).
MODULE := blueprint

# Where to push the docker image (placeholder - substitute your own registry/user).
REGISTRY ?= docker.pkg.github.com/<username>/<repository>

IMAGE := $(REGISTRY)/$(MODULE)

# This version-strategy uses git tags to set the version string
TAG := $(shell git describe --tags --always --dirty)

build-dev:
    @echo "\n${BLUE}Building Development image with labels:\n"
    @echo "name: $(MODULE)"
    @echo "version: $(TAG)${NC}\n"
    @sed                                 \
        -e 's|{NAME}|$(MODULE)|g'        \
        -e 's|{VERSION}|$(TAG)|g'        \
        dev.Dockerfile | docker build -t $(IMAGE):$(TAG) -f- .

This target builds the image by first substituting the labels at the bottom of dev.Dockerfile with the image name and a tag created by running git describe, and then running docker build.

Next up - building for production with make build-prod VERSION=1.0.0:

build-prod:
    @echo "\n${BLUE}Building Production image with labels:\n"
    @echo "name: $(MODULE)"
    @echo "version: $(VERSION)${NC}\n"
    @sed                                     \
        -e 's|{NAME}|$(MODULE)|g'            \
        -e 's|{VERSION}|$(VERSION)|g'        \
        prod.Dockerfile | docker build -t $(IMAGE):$(VERSION) -f- .

This one is very similar to the previous target, but instead of using a git tag as the version, we use the version passed as an argument - in the example above, 1.0.0.

When you run everything in Docker, you will at some point need to debug it in Docker too; for that, there is the following target:

# Example: make shell CMD="-c 'date > datefile'"
shell: build-dev
    @echo "\n${BLUE}Launching a shell in the containerized build environment...${NC}\n"
    @docker run                                 \
        -ti                                     \
        --rm                                    \
        --entrypoint /bin/bash                  \
        -u $$(id -u):$$(id -g)                  \
        $(IMAGE):$(TAG)                         \
        $(CMD)
From the above we can see that the entrypoint gets overridden by bash and the container command gets overridden by the $(CMD) argument. This way we can either just enter the container and poke around, or run a one-off command, like in the example above.

When we are done with coding and want to push the image to the Docker registry, we can use make push VERSION=0.0.2. Let's see what the target does:


push: build-prod
    @echo "\n${BLUE}Pushing image to GitHub Docker Registry...${NC}\n"
    @docker push $(IMAGE):$(VERSION)

It first runs the build-prod target we looked at previously and then just runs docker push. This assumes that you are logged into the Docker registry, so before running this you will need to run docker login.

The last target is for cleaning up Docker artifacts. It uses the name label that was substituted into the Dockerfiles to filter and find the artifacts that need to be deleted (the target name below is illustrative):

docker-clean:
    @docker system prune -f --filter "label=name=$(MODULE)"

You can find the full code listing for this Makefile in my repository here.

CI/CD with GitHub Actions

Now, let’s use all these handy make targets to setup our CI/CD. We will be using GitHub Actions and GitHub Package Registry to build our pipelines (jobs) and to store our images. So, what exactly are those?

  • GitHub Actions are jobs/pipelines that help you automate your development workflows. You can use them to create individual tasks and then combine them into custom workflows, which are then executed - for example - on every push to the repository or when a release is created.

  • GitHub Package Registry is a package hosting service, fully integrated with GitHub. It allows you to store various types of packages, e.g. Ruby gems or npm packages. We will use it to store our Docker images. If you are not familiar with GitHub Package Registry and want more info on it, then you can check out my blog post here.

Now, to use GitHub Actions, we need to create workflows that are going to be executed based on triggers (e.g. push to repository) we choose. These workflows are YAML files that live in .github/workflows directory in our repository:

└── workflows
    ├── build-test.yml
    └── push.yml

In there, we will create 2 files: build-test.yml and push.yml. The first of them, build-test.yml, will contain 2 jobs which will be triggered on every push to the repository. Let's look at those:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Run Makefile build for Development
      run: make build-dev

The first job, called build, verifies that our application can be built by running our make build-dev target. Before it runs it, though, it first checks out our repository by executing the action called checkout, which is published on GitHub.

  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - uses: actions/setup-python@v1
      with:
        python-version: '3.8'
    - name: Install Dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Run Makefile test
      run: make test
    - name: Install Linters
      run: |
        pip install pylint
        pip install flake8
        pip install bandit
    - name: Run Linters
      run: make lint

The second job is a little more complicated. It runs tests against our application as well as 3 linters (code quality checkers). Same as in the previous job, we use the checkout action to get our source code. After that we run another published action called setup-python, which sets up the python environment for us (you can find details about it here). Now that we have a python environment, we also need the application dependencies from requirements.txt, which we install with pip. At this point we can proceed to run the make test target, which triggers our Pytest suite. If our test suite passes, we go on to install the linters mentioned previously - pylint, flake8 and bandit. Finally, we run the make lint target, which triggers each of these linters.

That’s all for the build/test job, but what about the pushing one? Let’s go over that too:

on:
  push:
    tags:
    - '*'

jobs:
  push:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v1
    - name: Set env
      run: echo ::set-env name=RELEASE_VERSION::$(echo ${GITHUB_REF:10})
    - name: Log into Registry
      run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
    - name: Push to GitHub Package Registry
      run: make push VERSION=${{ env.RELEASE_VERSION }}

The first 4 lines define when we want this job to be triggered. We specify that this job should start only when tags are pushed to the repository (* specifies the tag name pattern - in this case, anything). This is so that we don't push our Docker image to GitHub Package Registry on every push to the repository, but only when we push a tag that specifies a new version of our application.

Now for the body of this job - it starts by checking out the source code and setting the RELEASE_VERSION environment variable to the git tag we pushed. This is done using the built-in ::set-env feature of GitHub Actions (more info here). Next, it logs into the Docker registry using the REGISTRY_TOKEN secret stored in the repository and the login of the user who initiated the workflow (github.actor). Finally, on the last line, it runs the push target, which builds the prod image and pushes it to the registry with the previously pushed git tag as the image tag.
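The ${GITHUB_REF:10} expression works because the refs/tags/ prefix is exactly 10 characters long, so the bash substring expansion slices it off; you can verify this locally (the GITHUB_REF value below is made up):

```shell
# GitHub sets GITHUB_REF to e.g. refs/tags/v1.0.0 for tag pushes
GITHUB_REF="refs/tags/v1.0.0"
# "refs/tags/" is 10 characters, so offset 10 yields just the tag name
echo "${GITHUB_REF:10}"  # -> v1.0.0
```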

You can check out the complete code listing in the files in my repository here.

Code Quality Checks using CodeClimate

Last but not least, we will also add code quality checks using CodeClimate and SonarCloud. These will get triggered together with our test job shown above. So, let's add a few lines to it:

    # test, lint...
    - name: Send report to CodeClimate
      run: |
        export GIT_BRANCH="${GITHUB_REF/refs\/heads\//}"
        curl -L https://codeclimate.com/downloads/test-reporter/test-reporter-latest-linux-amd64 > ./cc-test-reporter
        chmod +x ./cc-test-reporter
        ./cc-test-reporter format-coverage -t coverage.py coverage.xml
        ./cc-test-reporter upload-coverage -r "${{ secrets.CC_TEST_REPORTER_ID }}"

    - name: SonarCloud scanner
      uses: sonarsource/sonarcloud-github-action@master
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

We start with CodeClimate, for which we first export the GIT_BRANCH variable, which we retrieve from the GITHUB_REF environment variable. Next, we download the CodeClimate test reporter and make it executable. Then we use it to format the coverage report generated by our test suite, and on the last line we send it to CodeClimate with the test reporter ID, which we store in repository secrets.
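The GIT_BRANCH line relies on bash pattern substitution to strip the refs/heads/ prefix from the ref; a quick local sketch (the GITHUB_REF value is made up):

```shell
# GITHUB_REF looks like refs/heads/master for branch pushes
GITHUB_REF="refs/heads/master"
# ${var/pattern/} deletes the first match of pattern - here the prefix
GIT_BRANCH="${GITHUB_REF/refs\/heads\//}"
echo "$GIT_BRANCH"  # -> master
```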

As for SonarCloud, we need to create a sonar-project.properties file in our repository, which looks like this (the values can be found on the SonarCloud dashboard in the bottom right; the keys below are placeholders you need to fill in):

sonar.organization=<your-organization-key>
sonar.projectKey=<your-project-key>
Other than that, we can just use the existing sonarcloud-github-action, which does all the work for us. All we have to do is supply 2 tokens: the GitHub one, which is available in the repository by default, and the SonarCloud token, which we can get from the SonarCloud website.

Note: Steps on how to get and set all the previously mentioned tokens and secrets are in the repository README here.


That’s it! With tools, configs and code from above, you are ready to build and automate all aspects of your next Python project! If you need more info about topics shown/discussed in this article, then go ahead and check out docs and code in my repository here: and if you have any suggestions/issues, please submit issue in the repository or just star it if you like this little project of mine. 🙂