Testing is a big part of making sure that the implementation is correct and robust, which is why we decided to spend so much time on it.
For testing we decided to use Python's unittest module, along with coverage to see which lines our tests hit and which they miss. We also run all of our tests every time we commit to a branch; to do this we use GitLab CI.
We start by writing test cases. Every Python file whose name starts with test will be run by
python3 -m unittest discover -v
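A minimal test file following this convention might look like the sketch below. The module and class names here are illustrative, not taken from the project:

```python
import unittest


class TestExample(unittest.TestCase):
    """Template-style test case: one class per module under test."""

    def setUp(self):
        # Shared fixtures for every test method go here.
        self.values = [1, 2, 3]

    def test_sum(self):
        # Every method whose name starts with "test" is discovered and run.
        self.assertEqual(sum(self.values), 6)
```

Because the file name starts with test, unittest discovery picks it up automatically; no extra registration is needed.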
We have a template for these in our test folder that keeps all the files similar and makes it easy to create new tests. Once we have a few tests we can run coverage to find out how many lines our unittests hit and how many they missed. This can be run with
coverage run --source=. -m unittest discover -v
and we can get our report back with
coverage report -m
You can also add status icons to your README to let people know whether your build is passing and how much coverage you have. You can find them in your GitLab project settings.
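Once the badges are enabled, they can be embedded in a README with image links like the ones below. The exact URL format depends on your GitLab version and project path, so treat this as an assumption; note also that the coverage badge only shows a number if GitLab is configured with a regular expression to parse the coverage percentage out of the job log:

```markdown
[![pipeline status](https://gitlab.com/<namespace>/<project>/badges/master/pipeline.svg)](https://gitlab.com/<namespace>/<project>/commits/master)
[![coverage report](https://gitlab.com/<namespace>/<project>/badges/master/coverage.svg)](https://gitlab.com/<namespace>/<project>/commits/master)
```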
We wanted to test our project under conditions as close to real as possible, which meant that we needed an image as close to the real one as possible. This is why we chose the Kali image to run our project on. In the .gitlab-ci.yml file at the top level of the repository we had to put this in.
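The relevant line in that file is something like the following. The exact image name is an assumption, since the published Kali Docker image names have changed over time:

```yaml
# Use a Kali Linux Docker image as the base for every CI job
image: kalilinux/kali-linux-docker
```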
This specifies that we want the CI container to be built from the Kali Docker image.
The next section is what we want to happen before the tests are run.
before_script:
  # Update kali and install python 3.6
  - apt update && apt install python3.6 -y && apt install python3-pip -y
  - python3.6 -V # Print out python version for debugging
Here we update the Kali image so that we can download up-to-date packages. We also install Python 3.6 and pip, and print out the Python version for debugging.
Next we run our tests and check the coverage.
test:
  script:
    - pip3 install coverage
    - cd code
    # Run install script to install panoptes
    - ./install
    # Override config to add slack token
    - python3.6 insert_token.py $slack_token
    #- echo -e "slack_token = \"$slack_token\"\nslack_channel = \"#random\"" > config.py # Build config with environment variable
    # Testing begins here
    - python3.6 -m unittest discover -v
    - coverage run --source=. -m unittest discover -v
    - coverage report -m
  artifacts:
    paths:
      - dist/*.whl
We install coverage in the first line. We then change into the code directory and run our install script, which installs our project. This is not a formal test, but it does give us an indication that our install script is working.
The next section is just for testing. We store our Slack token as a secret variable; this line inserts it into our config so that we can test the alerts.
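We do not reproduce insert_token.py here, but a minimal version of the idea (writing the secret into a config module, with the same keys as the commented-out echo line above) might look like this sketch; the function name and default path are assumptions:

```python
import sys


def write_config(token, path="config.py"):
    """Write the Slack token and a default channel into a config module."""
    with open(path, "w") as f:
        f.write('slack_token = "{}"\n'.format(token))
        f.write('slack_channel = "#random"\n')


if __name__ == "__main__" and len(sys.argv) > 1:
    # The CI job passes the secret variable as the first argument.
    write_config(sys.argv[1])
```

Because the secret only exists as a CI variable, the generated config.py never needs to be committed to the repository.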
Then we run our tests with unittest. This recurses through our directories and runs everything that begins with test; all of our tests happen to live in a folder called tests.
Then coverage runs through the tests and records which lines are run and which are missed. The coverage report command simply prints out the report that was generated in the previous line.
One problem with this testing format is that a container does not have a network interface card, so we cannot test any of the interface- or hardware-specific code, like scanning. These parts will instead be tested when the code is automatically deployed.
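One way to keep such tests in the suite without breaking the CI run is to mark them so they are skipped when no hardware is present and only execute on a real deployment. This is a sketch, not the project's actual code; the interface-detection check is illustrative and assumes a Linux-style /sys/class/net layout:

```python
import os
import unittest


def has_wireless_interface():
    """Illustrative check: look for a wireless interface under /sys/class/net."""
    net = "/sys/class/net"
    if not os.path.isdir(net):
        return False
    return any(
        os.path.isdir(os.path.join(net, iface, "wireless"))
        for iface in os.listdir(net)
    )


class TestScanning(unittest.TestCase):
    @unittest.skipUnless(has_wireless_interface(), "no wireless interface available")
    def test_scan(self):
        # Hardware-specific scanning logic would be exercised here.
        self.assertTrue(True)
```

Inside the container the test is reported as skipped rather than failed, so the pipeline still passes while the same suite exercises the hardware path on a real machine.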