Add Coverage Checks and Badge to CI #57

Merged: 7 commits, Feb 2, 2023
53 changes: 53 additions & 0 deletions .github/workflows/coverage-badge.yaml
@@ -0,0 +1,53 @@
# This workflow will generate and push an updated coverage badge

name: Coverage Badge

on:
push:
branches: [ main ]

jobs:
report:

runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest==6.2.4
pip install pytest-mock==3.6.1
pip install coverage
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Generate coverage report
run: |
coverage run -m --source=src pytest -v tests/unit_test.py

- name: Coverage Badge
uses: tj-actions/coverage-badge-py@v1.8

- name: Verify Changed files
uses: tj-actions/verify-changed-files@v12
id: changed_files
with:
files: coverage.svg

- name: Commit files
if: steps.changed_files.outputs.files_changed == 'true'
run: |
git config --local user.email "github-actions[bot]@users.noreply.github.com"
git config --local user.name "github-actions[bot]"
git add coverage.svg
git commit -m "Updated coverage.svg"

- name: Push changes
if: steps.changed_files.outputs.files_changed == 'true'
uses: ad-m/github-push-action@master
with:
github_token: ${{ secrets.CI_PUSH_TOKEN }}
branch: ${{ github.ref }}
11 changes: 7 additions & 4 deletions .github/workflows/python-app.yml
@@ -23,13 +23,16 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
pip install pytest-mock
pip install pytest==6.2.4
pip install pytest-mock==3.6.1
pip install coverage
pip install black==22.3.0
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Check formatting with black
run: |
black --check .
- name: Test with pytest
- name: Test with pytest and check coverage
run: |
pytest -v tests/unit_test.py
coverage run -m --source=src pytest -v tests/unit_test.py
coverage=$(coverage report -m | tail -1 | tail -c 4 | head -c 2)
if (( $coverage < 90 )); then exit 1; else echo "Coverage passed, ${coverage}%"; fi
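The `tail -c 4 | head -c 2` pipeline above slices the two digits before the `%` sign off the report's last line, which breaks down at exactly 100% or below 10% coverage. A sketch (not part of this PR) of a less fragile extraction that parses the TOTAL row of `coverage report` output instead:

```python
# Sketch: pull the total percentage out of the TOTAL row of a
# `coverage report` text dump, rather than slicing fixed byte offsets.
import re

def total_coverage(report_text: str) -> int:
    """Return the integer percentage from the TOTAL line of `coverage report` output."""
    for line in report_text.splitlines():
        if line.startswith("TOTAL"):
            match = re.search(r"(\d+)%", line)
            if match:
                return int(match.group(1))
    raise ValueError("no TOTAL line found in coverage report")

# Example tail of a report (column layout: Name, Stmts, Miss, Cover):
sample = "src/mod.py  120  10  92%\nTOTAL  480  24  95%"
assert total_coverage(sample) == 95
assert total_coverage("TOTAL 10 0 100%") == 100  # works at 100%, too
```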
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
dist/
.python-version
__pycache__/
.coverage
5 changes: 5 additions & 0 deletions README.md
@@ -20,6 +20,11 @@ For testing, make sure to have installed:

NOTE: Functional tests coming soon, will live in `tests/func_test.py`

For checking code coverage while testing:
- Start by installing `coverage` (can be done via `pip`)
- When testing, run `coverage run -m --source=src pytest tests/unit_test.py` instead of invoking `pytest` directly
- To view a coverage report with missing lines, run `coverage report -m`
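The missing-lines column that `coverage report -m` adds can also be consumed programmatically; a small sketch (the sample report text is illustrative) of pulling it out of the report's standard text layout:

```python
# Sketch: map each file in a `coverage report -m` dump to its Missing column.
# Standard text-report columns: Name, Stmts, Miss, Cover, Missing.
def missing_lines(report_text: str) -> dict[str, str]:
    """Return {filename: missing-lines string} for files with uncovered lines."""
    result = {}
    for line in report_text.splitlines():
        parts = line.split()
        # A data row has a .py name, a Cover column ending in %, and a Missing column.
        if len(parts) >= 5 and parts[0].endswith(".py") and parts[3].endswith("%"):
            result[parts[0]] = " ".join(parts[4:])
    return result

sample = (
    "Name         Stmts   Miss  Cover   Missing\n"
    "src/a.py        50      4    92%   10-12, 30\n"
    "src/b.py        20      0   100%\n"
    "TOTAL           70      4    94%\n"
)
assert missing_lines(sample) == {"src/a.py": "10-12, 30"}
```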

For formatting:
- Currently using black v22.3.0 for format checking
- To install, run `pip install black==22.3.0`
2 changes: 1 addition & 1 deletion requirements.txt
@@ -1,3 +1,3 @@
openshift-client==1.0.18
rich==12.5.1
ray==2.1.0
ray[default]==2.1.0
28 changes: 17 additions & 11 deletions src/codeflare_sdk/cluster/cluster.py
@@ -110,7 +110,7 @@ def down(self):
oc.invoke("delete", ["AppWrapper", self.app_wrapper_name])
self.config.auth.logout()

def status(self, print_to_console: bool = True):
def status(self, print_to_console: bool = True): # pragma: no cover
"""
TO BE UPDATED: Will soon return (and print by default) the cluster's
status, from AppWrapper submission to setup completion. All resource
@@ -151,7 +151,7 @@ def cluster_dashboard_uri(self, namespace: str = "default") -> str:
return "Dashboard route not available yet. Did you run cluster.up()?"

# checks whether the ray cluster is ready
def is_ready(self, print_to_console: bool = True):
def is_ready(self, print_to_console: bool = True): # pragma: no cover
"""
TO BE DEPRECATED: functionality will be added into cluster.status().
"""
@@ -228,15 +228,17 @@ def job_logs(self, job_id: str) -> str:
return client.get_job_logs(job_id)


def get_current_namespace() -> str:
def get_current_namespace() -> str: # pragma: no cover
"""
Returns the user's current working namespace.
"""
namespace = oc.invoke("project", ["-q"]).actions()[0].out.strip()
return namespace


def list_all_clusters(namespace: str, print_to_console: bool = True):
def list_all_clusters(
namespace: str, print_to_console: bool = True
): # pragma: no cover
"""
Returns (and prints by default) a list of all clusters in a given namespace.
"""
@@ -246,7 +248,7 @@ def list_all_clusters(namespace: str, print_to_console: bool = True):
return clusters


def list_all_queued(namespace: str, print_to_console: bool = True):
def list_all_queued(namespace: str, print_to_console: bool = True): # pragma: no cover
"""
Returns (and prints by default) a list of all currently queued-up AppWrappers
in a given namespace.
@@ -262,14 +264,18 @@ def list_all_queued(namespace: str, print_to_console: bool = True):
# private methods


def _app_wrapper_status(name, namespace="default") -> Optional[AppWrapper]:
def _app_wrapper_status(
name, namespace="default"
) -> Optional[AppWrapper]: # pragma: no cover
with oc.project(namespace), oc.timeout(10 * 60):
cluster = oc.selector(f"appwrapper/{name}").object()
if cluster:
return _map_to_app_wrapper(cluster)


def _ray_cluster_status(name, namespace="default") -> Optional[RayCluster]:
def _ray_cluster_status(
name, namespace="default"
) -> Optional[RayCluster]: # pragma: no cover
# FIXME should we check the appwrapper first
cluster = None
try:
@@ -283,7 +289,7 @@ def _ray_cluster_status(name, namespace="default") -> Optional[RayCluster]:
return cluster


def _get_ray_clusters(namespace="default") -> List[RayCluster]:
def _get_ray_clusters(namespace="default") -> List[RayCluster]: # pragma: no cover
list_of_clusters = []

with oc.project(namespace), oc.timeout(10 * 60):
@@ -296,7 +302,7 @@ def _get_ray_clusters(namespace="default") -> List[RayCluster]:

def _get_app_wrappers(
namespace="default", filter=List[AppWrapperStatus]
) -> List[AppWrapper]:
) -> List[AppWrapper]: # pragma: no cover
list_of_app_wrappers = []

with oc.project(namespace), oc.timeout(10 * 60):
@@ -311,7 +317,7 @@ def _get_app_wrappers(
return list_of_app_wrappers


def _map_to_ray_cluster(cluster) -> RayCluster:
def _map_to_ray_cluster(cluster) -> RayCluster: # pragma: no cover
cluster_model = cluster.model

with oc.project(cluster.namespace()), oc.timeout(10 * 60):
@@ -342,7 +348,7 @@ def _map_to_ray_cluster(cluster) -> RayCluster:
)


def _map_to_app_wrapper(cluster) -> AppWrapper:
def _map_to_app_wrapper(cluster) -> AppWrapper: # pragma: no cover
cluster_model = cluster.model
return AppWrapper(
name=cluster.name(),
4 changes: 2 additions & 2 deletions src/codeflare_sdk/utils/generate_yaml.py
@@ -240,7 +240,7 @@ def generate_appwrapper(
return outfile


def main():
def main(): # pragma: no cover
parser = argparse.ArgumentParser(description="Generate user AppWrapper")
parser.add_argument(
"--name",
@@ -348,5 +348,5 @@ def main():
return outfile


if __name__ == "__main__":
if __name__ == "__main__": # pragma: no cover
main()
6 changes: 3 additions & 3 deletions src/codeflare_sdk/utils/pretty_print.py
@@ -27,12 +27,12 @@
from ..cluster.model import RayCluster, AppWrapper, RayClusterStatus


def print_no_resources_found():
def print_no_resources_found(): # pragma: no cover
console = Console()
console.print(Panel("[red]No resources found"))


def print_app_wrappers_status(app_wrappers: List[AppWrapper]):
def print_app_wrappers_status(app_wrappers: List[AppWrapper]): # pragma: no cover
if not app_wrappers:
print_no_resources_found()
return # shortcircuit
@@ -53,7 +53,7 @@ def print_app_wrappers_status(app_wrappers: List[AppWrapper]):
console.print(Panel.fit(table))


def print_clusters(clusters: List[RayCluster], verbose=True):
def print_clusters(clusters: List[RayCluster], verbose=True): # pragma: no cover
if not clusters:
print_no_resources_found()
return # shortcircuit
150 changes: 150 additions & 0 deletions tests/test-case-cmd.yaml
@@ -0,0 +1,150 @@
apiVersion: mcad.ibm.com/v1beta1
kind: AppWrapper
metadata:
name: unit-cmd-cluster
namespace: default
spec:
priority: 9
resources:
GenericItems:
- custompodresources:
- limits:
cpu: 2
memory: 8G
nvidia.com/gpu: 0
replicas: 1
requests:
cpu: 2
memory: 8G
nvidia.com/gpu: 0
- limits:
cpu: 1
memory: 2G
nvidia.com/gpu: 1
replicas: 2
requests:
cpu: 1
memory: 2G
nvidia.com/gpu: 1
generictemplate:
apiVersion: ray.io/v1alpha1
kind: RayCluster
metadata:
labels:
appwrapper.mcad.ibm.com: unit-cmd-cluster
controller-tools.k8s.io: '1.0'
name: unit-cmd-cluster
namespace: default
spec:
autoscalerOptions:
idleTimeoutSeconds: 60
imagePullPolicy: Always
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 500m
memory: 512Mi
upscalingMode: Default
enableInTreeAutoscaling: false
headGroupSpec:
rayStartParams:
block: 'true'
dashboard-host: 0.0.0.0
num-gpus: '0'
serviceType: ClusterIP
template:
spec:
containers:
- image: rayproject/ray:latest
imagePullPolicy: Always
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- ray stop
name: ray-head
ports:
- containerPort: 6379
name: gcs
- containerPort: 8265
name: dashboard
- containerPort: 10001
name: client
resources:
limits:
cpu: 2
memory: 8G
nvidia.com/gpu: 0
requests:
cpu: 2
memory: 8G
nvidia.com/gpu: 0
rayVersion: 1.12.0
workerGroupSpecs:
- groupName: small-group-unit-cmd-cluster
maxReplicas: 2
minReplicas: 2
rayStartParams:
block: 'true'
num-gpus: '1'
replicas: 2
template:
metadata:
annotations:
key: value
labels:
key: value
spec:
containers:
- env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
image: rayproject/ray:latest
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- ray stop
name: machine-learning
resources:
limits:
cpu: 1
memory: 2G
nvidia.com/gpu: 1
requests:
cpu: 1
memory: 2G
nvidia.com/gpu: 1
initContainers:
- command:
- sh
- -c
- until nslookup $RAY_IP.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local;
do echo waiting for myservice; sleep 2; done
image: busybox:1.28
name: init-myservice
replicas: 1
- generictemplate:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
labels:
odh-ray-cluster-service: unit-cmd-cluster-head-svc
name: ray-dashboard-unit-cmd-cluster
namespace: default
spec:
port:
targetPort: dashboard
to:
kind: Service
name: unit-cmd-cluster-head-svc
replica: 1
Items: []