
fix: assessments buttons #339


Merged: 2 commits from fix-grades-logic into master on Oct 15, 2021

Conversation

@orronai (Collaborator) commented on Oct 15, 2021:

  • Unfocus the assessment button after it is clicked
  • The assessment is now changed as soon as it is selected, rather than only after the solution is fully checked (see the sketch below)

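For context, here is a minimal sketch of the backend side of the second bullet (not the PR's actual diff: the route path, view name, and request shape are assumptions), built around the Solution.change_assessment method quoted in the review below:

from flask import Flask, jsonify, request

from lms.lmsdb.models import Solution

app = Flask(__name__)  # stand-in for the project's real app object


@app.route('/assessment/<int:solution_id>', methods=['POST'])
def set_assessment(solution_id: int):
    solution = Solution.get_or_none(Solution.id == solution_id)
    if solution is None:
        return jsonify({'success': False}), 404
    # Persist the assessment as soon as it is selected, instead of waiting
    # for the whole solution to be marked as checked.
    assessment_id = request.json.get('assessment')
    success = solution.change_assessment(assessment_id=assessment_id)
    return jsonify({'success': success})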
@codecov (bot) commented on Oct 15, 2021:

Codecov Report

Merging #339 (e10165d) into master (cf9039f) will increase coverage by 0.02%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #339      +/-   ##
==========================================
+ Coverage   83.94%   83.96%   +0.02%     
==========================================
  Files          63       63              
  Lines        2940     2950      +10     
==========================================
+ Hits         2468     2477       +9     
- Misses        472      473       +1     
Impacted Files           Coverage Δ
lms/lmsdb/models.py      91.08% <100.00%> (-0.13%) ⬇️
lms/lmsweb/views.py      93.11% <100.00%> (+0.06%) ⬆️
lms/models/solutions.py  98.97% <100.00%> (+0.03%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@sourcery-ai (bot) commented on Oct 15, 2021:

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.10%.

Quality metrics  Before    After     Change
Complexity       1.72 ⭐   1.68 ⭐   -0.04 👍
Method Length    48.14 ⭐  47.84 ⭐  -0.30 👍
Working memory   7.48 🙂   7.46 🙂   -0.02 👍
Quality          75.90%    76.00%    0.10% 👍

Other metrics    Before    After     Change
Lines            2518      2529      11
Changed files                Quality Before  Quality After  Quality Change
lms/lmsdb/models.py          84.30% ⭐       84.48% ⭐      0.18% 👍
lms/lmsweb/views.py          75.10% ⭐       75.31% ⭐      0.21% 👍
lms/models/solutions.py      74.06% 🙂       74.82% 🙂      0.76% 👍
tests/test_notifications.py  70.44% 🙂       70.46% 🙂      0.02% 👍
tests/test_solutions.py      69.11% 🙂       68.82% 🙂      -0.29% 👎

Here are some functions in these files that still need a tune-up (a generic illustration of the suggested refactors follows the table):

File                         Function                                           Complexity  Length  Working Memory  Quality    Recommendation
lms/lmsweb/views.py          comment                                            13 🙂       202 😞  8 🙂            50.15% 🙂  Try splitting into smaller methods
tests/test_solutions.py      TestSolutionBridge.test_staff_and_user_comments    0 ⭐        236 ⛔  12 😞           52.40% 🙂  Try splitting into smaller methods. Extract out complex expressions
tests/test_solutions.py      TestSolutionBridge.test_share_solution_function    0 ⭐        248 ⛔  11 😞           53.48% 🙂  Try splitting into smaller methods. Extract out complex expressions
tests/test_notifications.py  TestNotification.test_user_commented_after_check   0 ⭐        215 ⛔  12 😞           53.71% 🙂  Try splitting into smaller methods. Extract out complex expressions
lms/models/solutions.py      get_view_parameters                                5 ⭐        139 😞  12 😞           55.53% 🙂  Try splitting into smaller methods. Extract out complex expressions
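As a generic illustration of these two recommendations (not code from this repository; the function names and the 'Excellent' assessment are made up), a complex inline condition can be pulled into a named predicate, which also shortens the surrounding function:

# Before: one long expression doing the filtering and the ratio at once.
def excellent_ratio(solutions) -> float:
    return sum(
        1 for s in solutions
        if s.assessment is not None and s.assessment.name == 'Excellent'
    ) / max(len(solutions), 1)


# After: the complex condition becomes a small, named helper.
def is_excellent(solution) -> bool:
    return solution.assessment is not None and solution.assessment.name == 'Excellent'


def excellent_ratio(solutions) -> float:
    return sum(1 for s in solutions if is_excellent(s)) / max(len(solutions), 1)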

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


See the Sourcery documentation for details on how these metrics are calculated.


@yammesicka (Member) left a comment:

AAA Job, thanks!

Comment on lines +769 to +776
def change_assessment(self, assessment_id: Optional[int] = None) -> bool:
    # Look up the requested assessment; an unknown (or missing) id
    # resolves to None, which clears the solution's assessment.
    assessment = SolutionAssessment.get_or_none(
        SolutionAssessment.id == assessment_id,
    )
    requested_solution = (Solution.id == self.id)
    updates_dict = {Solution.assessment.name: assessment}
    changes = Solution.update(**updates_dict).where(requested_solution)
    # Report success only if exactly one row was updated.
    return changes.execute() == 1
@yammesicka (Member):
Awesome job! I like both the decoupling and the cleanness of this function
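As a quick usage sketch (the ids here are illustrative, not from the PR):

solution = Solution.get_by_id(1)  # an existing solution row
changed = solution.change_assessment(assessment_id=2)  # True if exactly one row was updated
solution.change_assessment()  # no id: the lookup misses and the assessment is cleared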

@orronai merged commit 8958433 into master on Oct 15, 2021
@orronai deleted the fix-grades-logic branch on Oct 15, 2021, 11:32