feat: Grade marks #320


Merged
merged 18 commits from grades-mark into master on Oct 8, 2021

Conversation

@orronai (Collaborator) commented Sep 30, 2021

  • Added grade marks to solutions
  • Added a menu for the admin to choose the grade mark on the solution view page
  • Added a test
  • Fixed some tests
  • Added the grade mark to the users table
  • Added migration

- Added grades table
- Added backend and frontend implementation
- Added the grade to the users table
@codecov
codecov bot commented Sep 30, 2021

Codecov Report

Merging #320 (8f7efff) into master (0e38fb9) will increase coverage by 0.06%.
The diff coverage is 79.34%.


@@            Coverage Diff             @@
##           master     #320      +/-   ##
==========================================
+ Coverage   84.02%   84.08%   +0.06%     
==========================================
  Files          62       63       +1     
  Lines        2823     2903      +80     
==========================================
+ Hits         2372     2441      +69     
- Misses        451      462      +11     
Impacted Files Coverage Δ
lms/lmsdb/bootstrap.py 16.07% <13.33%> (-0.20%) ⬇️
lms/lmsdb/models.py 91.27% <89.47%> (+1.19%) ⬆️
lms/lmsweb/views.py 92.99% <100.00%> (+0.05%) ⬆️
lms/models/solutions.py 98.94% <100.00%> (ø)
lms/utils/colors.py 100.00% <100.00%> (ø)
lms/utils/consts.py 100.00% <100.00%> (ø)

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 8a41e38...8f7efff. Read the comment docs.

@orronai orronai linked an issue Sep 30, 2021 that may be closed by this pull request
@yammesicka (Member) left a comment

An awesome start!

- added active color column
- changed the way the user chooses colors
Comment on lines 388 to 390
is_color_changed = not instance.color.startswith('#')
if created or is_color_changed:
    instance.color = COLORS.get(instance.color, '#0d6efd')
Member

Nice!

Another way that might prove useful:

hex_color = re.compile(r'#?(?P<hex>[a-f0-9]{6}|[a-f0-9]{3})')

def get_hex_color(number: str) -> str:
    if color := hex_color.fullmatch(number):
        return color.groupdict()['hex']
    raise ValueError('This is not a valid hex color.')


...
def evaluation_on_save_handler(...):
    try:
        instance.color = get_hex_color(...)
    except ValueError:
        instance.color = DEFAULT_PRIMARY_COLOR

Member

Also, to be really awesome, you might want to support input like "red", "blue", etc.
There's probably an easy way to convert these color names into their hex RGB representation.

Collaborator (Author)

I changed it to work the way you suggested, and added the COLORS dict in case the user enters a word like "red" or "blue".
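A minimal sketch of the combined approach described in this thread: a regex for raw hex values plus a name-to-hex dict, falling back to a default on invalid input. The COLORS entries, the default value, and the function name here are illustrative assumptions, not the PR's actual code.

```python
import re

# Hypothetical values -- the actual PR defines its own COLORS dict and default.
COLORS = {'red': '#ff0000', 'blue': '#0000ff', 'green': '#008000'}
DEFAULT_PRIMARY_COLOR = '0d6efd'

# Accepts 3- or 6-digit hex, with or without a leading '#'.
hex_color = re.compile(r'#?(?P<hex>[a-f0-9]{6}|[a-f0-9]{3})')


def resolve_color(value: str) -> str:
    """Resolve user input ('red', '#AB12CD', 'f00') to a bare hex string."""
    value = COLORS.get(value.lower(), value)  # map color names to hex first
    if match := hex_color.fullmatch(value.lower()):
        return match.group('hex')
    return DEFAULT_PRIMARY_COLOR  # invalid input falls back to the default
```

Note the 6-digit alternative is tried before the 3-digit one, so a full `#aabbcc` is never truncated to its first three digits.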

Comment on lines 370 to 376
class SolutionAssessment(BaseModel):
    name = CharField()
    icon = CharField(null=True)
    color = CharField()
    active_color = CharField()
    order = IntegerField(default=0, index=True, unique=True)

Member

  1. I think we should support different assessments per course. WDYT?
  2. Also: How do we deal with a manager removing an assessment? (Both 1. removing it completely from the course, and 2. de-selecting an assessment from a specific solution.)
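One common way to handle the removal question above, sketched here with plain sqlite3 (the table and column names are illustrative, not the PR's actual schema): declaring the solution's assessment foreign key with ON DELETE SET NULL clears the mark on affected solutions when an assessment is deleted, instead of orphaning or deleting the solutions themselves.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # SQLite enforces FKs per-connection

# Illustrative schema: an assessment can be removed without touching solutions.
conn.executescript('''
    CREATE TABLE assessment (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE solution (
        id INTEGER PRIMARY KEY,
        assessment_id INTEGER REFERENCES assessment(id) ON DELETE SET NULL
    );
    INSERT INTO assessment VALUES (1, 'Excellent');
    INSERT INTO solution VALUES (10, 1);
''')

conn.execute('DELETE FROM assessment WHERE id = 1')  # manager removes it
row = conn.execute(
    'SELECT assessment_id FROM solution WHERE id = 10'
).fetchone()
# The solution survives; its assessment reference is now NULL.
```

De-selecting an assessment from a single solution is then just an UPDATE setting that column back to NULL.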

- Changed the exercise number pre_save
- Removed unique from assessments order
- Assessments are now per course
@lgtm-com
lgtm-com bot commented Oct 4, 2021

This pull request introduces 1 alert when merging 5134179 into f18eeca - view on LGTM.com

new alerts:

  • 1 for Missing variable declaration

- Split a function
- Changed some logic of the assessment
@sourcery-ai
sourcery-ai bot commented Oct 8, 2021

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.06%.

Quality metrics Before After Change
Complexity 1.53 ⭐ 1.57 ⭐ 0.04 👎
Method Length 42.72 ⭐ 43.56 ⭐ 0.84 👎
Working memory 7.25 🙂 7.20 🙂 -0.05 👍
Quality 77.76% 77.82% 0.06% 👍
Other metrics Before After Change
Lines 3069 3245 176
Changed files Quality Before Quality After Quality Change
lms/lmsdb/bootstrap.py 79.33% ⭐ 79.29% ⭐ -0.04% 👎
lms/lmsdb/models.py 84.63% ⭐ 84.22% ⭐ -0.41% 👎
lms/lmsweb/views.py 74.72% 🙂 74.68% 🙂 -0.04% 👎
lms/models/solutions.py 74.27% 🙂 74.06% 🙂 -0.21% 👎
lms/utils/consts.py 99.00% ⭐ 90.75% ⭐ -8.25% 👎
tests/conftest.py 91.35% ⭐ 90.98% ⭐ -0.37% 👎
tests/test_git_solution.py 78.51% ⭐ 78.89% ⭐ 0.38% 👍
tests/test_notifications.py 70.46% 🙂 70.44% 🙂 -0.02% 👎
tests/test_solutions.py 67.67% 🙂 69.11% 🙂 1.44% 👍

Here are some functions in these files that still need a tune-up:

File Function Complexity Length Working Memory Quality Recommendation
lms/lmsweb/views.py comment 13 🙂 202 😞 8 🙂 50.15% 🙂 Try splitting into smaller methods
tests/test_solutions.py TestSolutionBridge.test_staff_and_user_comments 0 ⭐ 236 ⛔ 12 😞 52.40% 🙂 Try splitting into smaller methods. Extract out complex expressions
tests/test_solutions.py TestSolutionBridge.test_share_solution_function 0 ⭐ 248 ⛔ 11 😞 53.48% 🙂 Try splitting into smaller methods. Extract out complex expressions
tests/test_notifications.py TestNotification.test_user_commented_after_check 0 ⭐ 216 ⛔ 12 😞 53.64% 🙂 Try splitting into smaller methods. Extract out complex expressions
lms/models/solutions.py get_view_parameters 5 ⭐ 139 😞 12 😞 55.53% 🙂 Try splitting into smaller methods. Extract out complex expressions

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!

@orronai orronai merged commit f3ed156 into master Oct 8, 2021
@orronai orronai deleted the grades-mark branch October 8, 2021 12:27
Development

Successfully merging this pull request may close these issues.

Mark exercise as excellent/plagarism/fail
2 participants