
Add benchmarks endpoint support #43

Merged (9 commits) on Aug 18, 2021
62 changes: 62 additions & 0 deletions README.rst
@@ -461,6 +461,68 @@ The attribute can be passed to the task payloads, in the ``attachment`` parameter
...
)

Evaluation tasks (For Scale Rapid projects only)
________________________________________________

Evaluation tasks are tasks to which we already know the answer; they are used internally to measure workers' performance and ensure quality.

Create Evaluation Task
^^^^^^^^^^^^^^^^^^^^^^

Create an evaluation task.

.. code-block:: python

client.create_evaluation_task(TaskType, ...task parameters...)

Pass the applicable values into the function. The applicable fields are the same as for ``create_task``; the fields for each task type can be found in `Scale's API documentation`__. Additionally, an ``expected_response`` is required, and an optional ``initial_response`` can be provided for review-phase evaluation tasks.

__ https://docs.scale.com/reference

.. code-block:: python

from scaleapi.tasks import TaskType

expected_response = {
"annotations": {
"answer_reasonable": {
"type": "category",
"field_id": "answer_reasonable",
"response": [
[
"no"
]
]
}
}
}

initial_response = {
"annotations": {
"answer_reasonable": {
"type": "category",
"field_id": "answer_reasonable",
"response": [
[
"yes"
]
]
}
}
}

attachments = [
{"type": "image", "content": "https://i.imgur.com/bGjrNzl.jpeg"}
]

payload = dict(
    project="test_project",
    attachments=attachments,
    initial_response=initial_response,
    expected_response=expected_response,
)

client.create_evaluation_task(TaskType.TextCollection, **payload)
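
If the evaluation task is not for a review phase, the ``initial_response`` can simply be omitted. A minimal sketch, reusing the ``attachments`` and ``expected_response`` from above (``test_project`` is a placeholder project name):

.. code-block:: python

    payload = dict(
        project="test_project",
        attachments=attachments,
        expected_response=expected_response,
    )

    evaluation_task = client.create_evaluation_task(
        TaskType.TextCollection, **payload
    )
    print(evaluation_task.id)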


Error handling
40 changes: 40 additions & 0 deletions scaleapi/__init__.py
@@ -1,6 +1,7 @@
from typing import IO, Dict, Generator, Generic, List, TypeVar, Union

from scaleapi.batches import Batch, BatchStatus
from scaleapi.evaluation_tasks import EvaluationTask
from scaleapi.exceptions import ScaleInvalidRequest
from scaleapi.files import File
from scaleapi.projects import Project
@@ -787,3 +788,42 @@ def import_file(self, file_url: str, **kwargs) -> File:
payload = dict(file_url=file_url, **kwargs)
filedata = self.api.post_request(endpoint, body=payload)
return File(filedata, self)

def create_evaluation_task(
self,
task_type: TaskType,
**kwargs,
) -> EvaluationTask:
"""This method can only be used for Self-Serve projects.
Supported Task Types: [
ImageAnnotation,
Categorization,
TextCollection,
NamedEntityRecognition
]
Parameters may differ based on the given task_type.

Args:
task_type (TaskType):
Task type to be created
        e.g. ``TaskType.ImageAnnotation``
    **kwargs:
        The same set of parameters expected by the
        create_task function, plus a required
        expected_response and an optional initial_response
        if you want to make it a review-phase evaluation
        task. The expected_response and initial_response
        should follow the format of any other task's
        response on your project. It's recommended to try
        a self_label batch first to get familiar with the
        response format. See Scale's API documentation:
        https://docs.scale.com/reference

Returns:
EvaluationTask:
Returns created evaluation task.
"""
endpoint = f"evaluation_tasks/{task_type.value}"

evaluation_task_data = self.api.post_request(endpoint, body=kwargs)
return EvaluationTask(evaluation_task_data, self)
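
# Usage sketch for the new method (illustrative only, not part of this
# diff): the API key, project name, and expected_response below are
# placeholders.
#
#     from scaleapi import ScaleClient
#     from scaleapi.tasks import TaskType
#
#     client = ScaleClient("YOUR_API_KEY")
#     task = client.create_evaluation_task(
#         TaskType.Categorization,
#         project="my_project",
#         expected_response={"annotations": {}},
#     )
#     print(task.id)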
2 changes: 1 addition & 1 deletion scaleapi/_version.py
@@ -1,2 +1,2 @@
__version__ = "2.4.0"
__version__ = "2.5.0"
__package_name__ = "scaleapi"
22 changes: 22 additions & 0 deletions scaleapi/evaluation_tasks.py
@@ -0,0 +1,22 @@
class EvaluationTask:
"""EvaluationTask class, containing EvaluationTask information."""

def __init__(self, json, client):
self._json = json
self.id = json["id"]
self.initial_response = json.get("initial_response", None)
self.expected_response = json["expected_response"]
self._client = client

def __hash__(self):
return hash(self.id)

def __str__(self):
return f"EvaluationTask(id={self.id})"

def __repr__(self):
return f"EvaluationTask({self._json})"

def as_dict(self):
"""Returns all attributes as a dictionary"""
return self._json
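
# Minimal usage sketch (illustrative; the sample payload below is an
# assumption, not a real API response):
#
#     task = EvaluationTask(
#         {"id": "eval_123", "expected_response": {"annotations": {}}},
#         client=None,
#     )
#     task.id                # "eval_123"
#     task.initial_response  # None (key absent from the payload)
#     task.as_dict()         # returns the original JSON dict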