feat: Issue pagination (#4109)
* dev: separate order by of issue queryset to separate utility function
* dev: pagination for spreadsheet and gantt
* dev: group pagination
* dev: paginate single entities
* dev: refactor pagination
* dev: paginating issue apis
* dev: grouped pagination for empty groups
* dev: ungrouped list
* dev: fix paginator for single groups
* dev: fix paginating true list
* dev: state__group pagination
* fix: imports
* dev: fix grouping on target date and project_id
* dev: remove unused imports
* dev: add ruff in dependencies
* make store changes for pagination
* fix some build errors due to type changes
* dev: add total pages key
* chore: paginator changes
* implement pagination for spreadsheet, list, kanban and calendar
* fix: order by grouped pagination
* dev: sub group paginator
* dev: grouped paginator
* dev: sub grouping paginator
* restructure gantt layout charts
* dev: fix pagination count
* dev: date filtering for issues
* dev: group by counts
* implement new logic for pagination layouts
* fix: label id and assignee id interchange
* dev: fix priority ordering
* fix group by bugs
* dev: grouping for priority
* fix reordering while update
* dev: fix order by for pagination
* fix: total results for sub group pagination
* dev: add comments and fix ordering
* fix order by priority for spreadsheet
* fix subGroupCount
* Fix logic for load more in Kanban
* fix issue quick add
* dev: fix issue creation
* dev: add sorting
* fix order by for modules and cycles
* fix non render of Issues
* fix subGroupKey generation when subGroupId is null
* dev: fix cycle and module issue
* dev: fix sub grouping
* fix: imports
* fix minor build errors
* fix major build errors
* fix priority order by
* grouped pagination cursor logic changes
* fix calendar pagination
* active cycle issues pagination
* dev: fix lint errors
* fix Kanban subgroup dnd
* fix empty subgroup kanbans
* fix updation from an empty field with groupBy
* fix issue count of groups
* fix issue sorting on first page fetch
* dev: remove pagination from list endpoint, add ordering for sub grouping and handle error for empty issues
* refactor module and cycle issues
* fix quick add refactor
* refactor gantt roots
* fix empty states
* fix filter params
* fix group by module
* minor UX changes
* fix sub grouping in Kanban
* remove unnecessary sorting logic in backend (Nikhil's changes)
* dev: add error handling when using without on results
* calendar layout loader improvement
* list per page count logic change
* spreadsheet loader improvement
* Added loader for issues load more pagination
* fix quick add in gantt
* dev: add profile issue pagination
* fix all issue and profile issues logic
* remove empty state from calendar layout
* use useEffect instead of swr to fetch issues to have quick switching between views, cycles etc
* dev: add aggregation for multi fields
* fix priority sorting for workspace issues
* fix move from draft for draft issues
* fix pagination loader for spreadsheet
* fetch project, module and cycle stats on update, create and delete of issues
* increase horizontal margin
* change load more pagination to on scroll pagination for active cycle issues
* fix linting error
* dev: fix ordering when order by m2m
* dev: fix null paginations
* dev: commenting
* add comments to the issue stores methods
* fix order by for array properties
* fix: priority ordering
* perform optimistic updates while adding or removing cycles or modules
* fix build errors
* dev: add default values when iterating through sub group
* Move code from EE to CE repo
* chore: folder structure updates
* Move sortable and radio input to packages/ui
* chore: updated empty and loading screens
* chore: delete an estimate point
* chore: estimate point response change
* chore: updated create estimate and handled the build error
* chore: migration fixes
* chore: updated create estimate
* [WEB-1322] dev: conflict free pages collaboration (#4463)
* chore: pages realtime
* chore: empty binary response
* chore: added a ypy package
* feat: pages collaboration
* chore: update fetching logic
* chore: degrade ypy version
* chore: replace useEffect fetch logic with useSWR
* chore: move all the update logic to the page store
* refactor: remove react-hook-form
* chore: save description_html as well
* chore: migrate old data logic
* fix: added description_binary as field name
* fix: code cleanup
* refactor: create separate hook to handle page description
* fix: build errors
* chore: combine updates instead of using the whole document
* chore: removed ypy package
* chore: added conflict resolving logic to the client side
* chore: add a save changes button
* chore: add read-only validation
* chore: remove saving state information
* chore: added permission class
* chore: removed the migration file
* chore: corrected the model field
* chore: rename pageStore to page
* chore: update collaboration provider
* chore: add try catch to handle error

---------

Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>

* chore: create estimate workflow update
* chore: editing and deleting the existing estimate updates
* chore: updating the new estimates in update modal
* chore: ui changed
* chore: response changes of get and post
* chore: new field added in estimates
* chore: individual endpoint for estimate points
* chore: typo changes
* chore: create estimate point
* chore: integrated new endpoints
* chore: update key value pair
* chore: update sorting in the estimates
* Add custom option in the estimate templates
* chore: handled current project active estimate
* chore: handle estimate update workflow
* chore: AIO docker images for preview deployments (#4605)
* fix: adding single docker base file
* action added
* fix action
* dockerfile.base modified
* action fix
* dockerfile
* fix: base aio dockerfile
* fix: dockerfile.base
* fix: dockerfile base
* fix: modified folder structure
* fix: action
* fix: dockerfile
* fix: dockerfile.base
* fix: supervisor file name changed
* fix: base dockerfile updated
* fix dockerfile base
* fix: base dockerfile
* fix: docker files
* fix: base dockerfile
* update base image
* modified docker aio base
* aio base modified to debian-12-slim
* fixes
* finalize the dockerfiles with volume exposure
* modified the aio build and dockerfile
* fix: codacy suggestions implemented
* fix: codacy fix
* update aio build action

---------

Co-authored-by: sriram veeraghanta <veeraghanta.sriram@gmail.com>

* chore: handled estimates switch
* chore: handled estimate edit
* chore: handled close button in estimate edit
* chore: updated create estimate workflow
* chore: updated switch estimate
* fix minor bugs in base issues store
* single column scroll pagination
* UI changes for load more button
* chore: UI and typos
* chore: resolved build error
* [WEB-1184] feat: issue bulk operations (#4530)
* chore: bulk operations
* chore: archive bulk issues
* chore: bulk ops keys changed
* chore: bulk delete and archive confirmation modals
* style: list layout spacing
* chore: create hoc for multi-select groups
* chore: update multiple select components
* chore: archive, target and start date error message
* chore: edge case handling
* chore: bulk ops in spreadsheet layout
* chore: update UI
* chore: scroll element into view
* fix: shift + arrow navigation
* chore: implement bulk ops in the gantt layout
* fix: ui bugs
* chore: move selection logic to store
* fix: group selection
* refactor: multiple select store
* style: dropdowns UI
* fix: bulk assignee and label update mutation
* chore: removed migrations
* refactor: entities grouping logic
* fix performance issue in selection of bulk ops
* fix: shift keyboard navigation
* fix: group click action
* chore: start and target date validation
* chore: remove optimistic updates, check archivability in frontend
* chore: code optimisation
* chore: add store comments
* refactor: component fragmentation
* style: issue active state

---------

Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>
Co-authored-by: rahulramesha <rahulramesham@gmail.com>

* fix a performance issue when there are too many groups
* chore: updated delete dropdown and handled the repeated values while creating and updating the estimate point
* [WEB-1424] chore: page and view logo implementation, and emoji/icon picker improvement (#4583)
* chore: added logo_props
* chore: logo props in cycles, views and modules
* chore: emoji icon picker types updated
* chore: info icon added to plane ui package
* chore: icon color adjust helper function added
* style: icon picker ui improvement and default color options updated
* chore: update page logo action added in store
* chore: emoji code to unicode helper function added
* chore: common logo renderer component added
* chore: app header project logo updated
* chore: project logo updated across platform
* chore: page logo picker added
* chore: control link component improvement
* chore: list item improvement
* chore: emoji picker component updated
* chore: space app and package logo prop type updated
* chore: migration
* chore: logo added to project view
* chore: page logo picker added in create modal and breadcrumbs
* chore: view logo picker added in create modal and updated breadcrumbs
* fix: build error
* chore: AIO docker images for preview deployments (#4605)
* fix: adding single docker base file
* action added
* fix action
* dockerfile.base modified
* action fix
* dockerfile
* fix: base aio dockerfile
* fix: dockerfile.base
* fix: dockerfile base
* fix: modified folder structure
* fix: action
* fix: dockerfile
* fix: dockerfile.base
* fix: supervisor file name changed
* fix: base dockerfile updated
* fix dockerfile base
* fix: base dockerfile
* fix: docker files
* fix: base dockerfile
* update base image
* modified docker aio base
* aio base modified to debian-12-slim
* fixes
* finalize the dockerfiles with volume exposure
* modified the aio build and dockerfile
* fix: codacy suggestions implemented
* fix: codacy fix
* update aio build action

---------

Co-authored-by: sriram veeraghanta <veeraghanta.sriram@gmail.com>

* fix: merge conflict
* chore: lucide react added to plane ui package
* chore: new emoji picker component added with lucide icon and code refactor
* chore: logo component updated
* chore: emoji picker updated for pages and views

---------

Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>
Co-authored-by: Manish Gupta <59428681+mguptahub@users.noreply.github.com>
Co-authored-by: sriram veeraghanta <veeraghanta.sriram@gmail.com>

* chore: handled inline errors in the estimate switch
* fix module and cycle drag and drop
* Fix issue count bug for accumulated actions
* chore: handled active and availability validation
* chore: handled create and update components in project estimates
* chore: added migration
* Add category specific values for custom template
* chore: estimate dropdown handled in issues
* chore: estimate alerts
* fix bulk updates
* chore: updated alerts
* add optional chaining
* Extract the list row actions
* change color of load more to match new Issues
* list group collapsible
* fix: updated and handled the estimate points
* fix: upgrader ee banner
* Fix issues with sortable
* Fix sortable spacing issue in create estimate modal
* fix: updated the issue create sorting
* chore: removed radio button from ui and updated in the estimates
* chore: resolved import error in packaged ui
* chore: handled props in create modal
* chore: removed ee files
* chore: changed default analytics
* fix: pagination ordering for grouped and subgrouped
* chore: removed the migration file
* chore: estimate point value in graph
* chore: estimate point key change
* chore: squashed migration (#4634)
* chore: squashed migration
* chore: removed instance migration
* chore: key changes
* chore: issue activity back migration
* dev: replaced estimate key with estimate id and replaced estimate type from number to string in issue
* chore: estimate point value field
* chore: estimate point activity
* chore: removed the unused function
* chore: resolved merge conflicts
* chore: deploy board keys changed
* chore: yarn lock file change
* chore: resolved frontend build

---------

Co-authored-by: guru_sainath <gurusainath007@gmail.com>

* [WEB-1516] refactor: space app routing and layouts (#4705)
* dev: change layout
* chore: replace workspace slug and project id with anchor
* chore: migration fixes
* chore: update filtering logic
* chore: endpoint changes
* chore: update endpoint
* chore: changed url patterns
* chore: use client side for layout and page
* chore: issue vote changes
* chore: project deploy board response change
* refactor: publish project store and components
* fix: update layout options after fetching settings
* chore: remove unnecessary types
* style: peek overview
* refactor: components folder structure
* fix: redirect from old path
* chore: make the whole issue block clickable
* chore: removed the migration file
* chore: add server side redirection for old routes
* chore: is enabled key change
* chore: update types
* chore: removed the migration file

---------

Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>

* Merge develop into revamp-estimates-ce
* chore: removed migration file and updated the estimate system order and removed ee banner
* chore: initial radio select in create estimate
* chore: space key changes
* Fix sortable component as the sort order was broken.
* fix: formatting and linting errors
* fix Alignment for load more
* add logic to approuter
* fix approuter changes and fix build
* chore: removed the linting issue

---------

Co-authored-by: pablohashescobar <nikhilschacko@gmail.com>
Co-authored-by: Satish Gandham <satish.iitg@gmail.com>
Co-authored-by: guru_sainath <gurusainath007@gmail.com>
Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>
Co-authored-by: Aaryan Khandelwal <65252264+aaryan610@users.noreply.github.com>
Co-authored-by: Manish Gupta <59428681+mguptahub@users.noreply.github.com>
Co-authored-by: sriram veeraghanta <veeraghanta.sriram@gmail.com>
Co-authored-by: Anmol Singh Bhatia <121005188+anmolsinghbhatia@users.noreply.github.com>
Co-authored-by: Bavisetti Narayan <72156168+NarayanBavisetti@users.noreply.github.com>
Co-authored-by: pushya22 <130810100+pushya22@users.noreply.github.com>
parent 7ac07b7b73
commit 666d35afb9
234 changed files with 9056 additions and 6188 deletions
@@ -1,5 +1,9 @@
 # Python imports
 import logging
+import traceback
+
+# Django imports
+from django.conf import settings

 # Third party imports
 from sentry_sdk import capture_exception

@@ -11,6 +15,10 @@ def log_exception(e):
     logger = logging.getLogger("plane")
     logger.error(e)

+    # Log traceback if running in Debug
+    if settings.DEBUG:
+        logger.error(traceback.format_exc(e))
+
+    # Capture in sentry if configured
     capture_exception(e)
     return
@@ -1,240 +1,191 @@
-def resolve_keys(group_keys, value):
-    """resolve keys to a key which will be used for
-    grouping
-
-    Args:
-        group_keys (string): key which will be used for grouping
-        value (obj): data value
-
-    Returns:
-        string: the key which will be used for
-    """
-    keys = group_keys.split(".")
-    for key in keys:
-        value = value.get(key, None)
-    return value
-
-
-def group_results(results_data, group_by, sub_group_by=False):
-    """group results data into certain group_by
-
-    Args:
-        results_data (obj): complete results data
-        group_by (key): string
-
-    Returns:
-        obj: grouped results
-    """
-    if sub_group_by:
-        main_responsive_dict = dict()
-
-        if sub_group_by == "priority":
-            main_responsive_dict = {
-                "urgent": {}, "high": {}, "medium": {}, "low": {}, "none": {},
-            }
-
-        for value in results_data:
-            main_group_attribute = resolve_keys(sub_group_by, value)
-            group_attribute = resolve_keys(group_by, value)
-            if isinstance(main_group_attribute, list) and not isinstance(group_attribute, list):
-                if len(main_group_attribute):
-                    for attrib in main_group_attribute:
-                        if str(attrib) not in main_responsive_dict:
-                            main_responsive_dict[str(attrib)] = {}
-                        if str(group_attribute) in main_responsive_dict[str(attrib)]:
-                            main_responsive_dict[str(attrib)][str(group_attribute)].append(value)
-                        else:
-                            main_responsive_dict[str(attrib)][str(group_attribute)] = []
-                            main_responsive_dict[str(attrib)][str(group_attribute)].append(value)
-                else:
-                    if str(None) not in main_responsive_dict:
-                        main_responsive_dict[str(None)] = {}
-                    if str(group_attribute) in main_responsive_dict[str(None)]:
-                        main_responsive_dict[str(None)][str(group_attribute)].append(value)
-                    else:
-                        main_responsive_dict[str(None)][str(group_attribute)] = []
-                        main_responsive_dict[str(None)][str(group_attribute)].append(value)
-
-            elif isinstance(group_attribute, list) and not isinstance(main_group_attribute, list):
-                if str(main_group_attribute) not in main_responsive_dict:
-                    main_responsive_dict[str(main_group_attribute)] = {}
-                if len(group_attribute):
-                    for attrib in group_attribute:
-                        if str(attrib) in main_responsive_dict[str(main_group_attribute)]:
-                            main_responsive_dict[str(main_group_attribute)][str(attrib)].append(value)
-                        else:
-                            main_responsive_dict[str(main_group_attribute)][str(attrib)] = []
-                            main_responsive_dict[str(main_group_attribute)][str(attrib)].append(value)
-                else:
-                    if str(None) in main_responsive_dict[str(main_group_attribute)]:
-                        main_responsive_dict[str(main_group_attribute)][str(None)].append(value)
-                    else:
-                        main_responsive_dict[str(main_group_attribute)][str(None)] = []
-                        main_responsive_dict[str(main_group_attribute)][str(None)].append(value)
-
-            elif isinstance(group_attribute, list) and isinstance(main_group_attribute, list):
-                if len(main_group_attribute):
-                    for main_attrib in main_group_attribute:
-                        if str(main_attrib) not in main_responsive_dict:
-                            main_responsive_dict[str(main_attrib)] = {}
-                        if len(group_attribute):
-                            for attrib in group_attribute:
-                                if str(attrib) in main_responsive_dict[str(main_attrib)]:
-                                    main_responsive_dict[str(main_attrib)][str(attrib)].append(value)
-                                else:
-                                    main_responsive_dict[str(main_attrib)][str(attrib)] = []
-                                    main_responsive_dict[str(main_attrib)][str(attrib)].append(value)
-                        else:
-                            if str(None) in main_responsive_dict[str(main_attrib)]:
-                                main_responsive_dict[str(main_attrib)][str(None)].append(value)
-                            else:
-                                main_responsive_dict[str(main_attrib)][str(None)] = []
-                                main_responsive_dict[str(main_attrib)][str(None)].append(value)
-                else:
-                    if str(None) not in main_responsive_dict:
-                        main_responsive_dict[str(None)] = {}
-                    if len(group_attribute):
-                        for attrib in group_attribute:
-                            if str(attrib) in main_responsive_dict[str(None)]:
-                                main_responsive_dict[str(None)][str(attrib)].append(value)
-                            else:
-                                main_responsive_dict[str(None)][str(attrib)] = []
-                                main_responsive_dict[str(None)][str(attrib)].append(value)
-                    else:
-                        if str(None) in main_responsive_dict[str(None)]:
-                            main_responsive_dict[str(None)][str(None)].append(value)
-                        else:
-                            main_responsive_dict[str(None)][str(None)] = []
-                            main_responsive_dict[str(None)][str(None)].append(value)
-
-            else:
-                main_group_attribute = resolve_keys(sub_group_by, value)
-                group_attribute = resolve_keys(group_by, value)
-                if str(main_group_attribute) not in main_responsive_dict:
-                    main_responsive_dict[str(main_group_attribute)] = {}
-                if str(group_attribute) in main_responsive_dict[str(main_group_attribute)]:
-                    main_responsive_dict[str(main_group_attribute)][str(group_attribute)].append(value)
-                else:
-                    main_responsive_dict[str(main_group_attribute)][str(group_attribute)] = []
-                    main_responsive_dict[str(main_group_attribute)][str(group_attribute)].append(value)
-
-        return main_responsive_dict
-
-    else:
-        response_dict = {}
-
-        if group_by == "priority":
-            response_dict = {
-                "urgent": [], "high": [], "medium": [], "low": [], "none": [],
-            }
-
-        for value in results_data:
-            group_attribute = resolve_keys(group_by, value)
-            if isinstance(group_attribute, list):
-                if len(group_attribute):
-                    for attrib in group_attribute:
-                        if str(attrib) in response_dict:
-                            response_dict[str(attrib)].append(value)
-                        else:
-                            response_dict[str(attrib)] = []
-                            response_dict[str(attrib)].append(value)
-                else:
-                    if str(None) in response_dict:
-                        response_dict[str(None)].append(value)
-                    else:
-                        response_dict[str(None)] = []
-                        response_dict[str(None)].append(value)
-            else:
-                if str(group_attribute) in response_dict:
-                    response_dict[str(group_attribute)].append(value)
-                else:
-                    response_dict[str(group_attribute)] = []
-                    response_dict[str(group_attribute)].append(value)
-
-        return response_dict
+# Django imports
+from django.contrib.postgres.aggregates import ArrayAgg
+from django.contrib.postgres.fields import ArrayField
+from django.db.models import Q, UUIDField, Value
+from django.db.models.functions import Coalesce
+
+# Module imports
+from plane.db.models import (
+    Cycle,
+    Issue,
+    Label,
+    Module,
+    Project,
+    ProjectMember,
+    State,
+    WorkspaceMember,
+)
+
+
+def issue_queryset_grouper(queryset, group_by, sub_group_by):
+    FIELD_MAPPER = {
+        "label_ids": "labels__id",
+        "assignee_ids": "assignees__id",
+        "module_ids": "issue_module__module_id",
+    }
+
+    annotations_map = {
+        "assignee_ids": ("assignees__id", ~Q(assignees__id__isnull=True)),
+        "label_ids": ("labels__id", ~Q(labels__id__isnull=True)),
+        "module_ids": (
+            "issue_module__module_id",
+            ~Q(issue_module__module_id__isnull=True),
+        ),
+    }
+    default_annotations = {
+        key: Coalesce(
+            ArrayAgg(field, distinct=True, filter=condition),
+            Value([], output_field=ArrayField(UUIDField())),
+        )
+        for key, (field, condition) in annotations_map.items()
+        if FIELD_MAPPER.get(key) != group_by
+        or FIELD_MAPPER.get(key) != sub_group_by
+    }
+
+    return queryset.annotate(**default_annotations)
+
+
+def issue_on_results(issues, group_by, sub_group_by):
+    FIELD_MAPPER = {
+        "labels__id": "label_ids",
+        "assignees__id": "assignee_ids",
+        "issue_module__module_id": "module_ids",
+    }
+
+    original_list = ["assignee_ids", "label_ids", "module_ids"]
+
+    required_fields = [
+        "id", "name", "state_id", "sort_order", "completed_at",
+        "estimate_point", "priority", "start_date", "target_date",
+        "sequence_id", "project_id", "parent_id", "cycle_id",
+        "sub_issues_count", "created_at", "updated_at", "created_by",
+        "updated_by", "attachment_count", "link_count", "is_draft",
+        "archived_at", "state__group",
+    ]
+
+    if group_by in FIELD_MAPPER:
+        original_list.remove(FIELD_MAPPER[group_by])
+        original_list.append(group_by)
+
+    if sub_group_by in FIELD_MAPPER:
+        original_list.remove(FIELD_MAPPER[sub_group_by])
+        original_list.append(sub_group_by)
+
+    required_fields.extend(original_list)
+    return issues.values(*required_fields)
+
+
+def issue_group_values(field, slug, project_id=None, filters=dict):
+    if field == "state_id":
+        queryset = State.objects.filter(
+            ~Q(name="Triage"),
+            workspace__slug=slug,
+        ).values_list("id", flat=True)
+        if project_id:
+            return list(queryset.filter(project_id=project_id))
+        else:
+            return list(queryset)
+    if field == "labels__id":
+        queryset = Label.objects.filter(workspace__slug=slug).values_list("id", flat=True)
+        if project_id:
+            return list(queryset.filter(project_id=project_id)) + ["None"]
+        else:
+            return list(queryset) + ["None"]
+    if field == "assignees__id":
+        if project_id:
+            return ProjectMember.objects.filter(
+                workspace__slug=slug,
+                project_id=project_id,
+                is_active=True,
+            ).values_list("member_id", flat=True)
+        else:
+            return list(
+                WorkspaceMember.objects.filter(
+                    workspace__slug=slug, is_active=True
+                ).values_list("member_id", flat=True)
+            )
+    if field == "issue_module__module_id":
+        queryset = Module.objects.filter(
+            workspace__slug=slug,
+        ).values_list("id", flat=True)
+        if project_id:
+            return list(queryset.filter(project_id=project_id)) + ["None"]
+        else:
+            return list(queryset) + ["None"]
+    if field == "cycle_id":
+        queryset = Cycle.objects.filter(
+            workspace__slug=slug,
+        ).values_list("id", flat=True)
+        if project_id:
+            return list(queryset.filter(project_id=project_id)) + ["None"]
+        else:
+            return list(queryset) + ["None"]
+    if field == "project_id":
+        queryset = Project.objects.filter(workspace__slug=slug).values_list("id", flat=True)
+        return list(queryset)
+    if field == "priority":
+        return ["low", "medium", "high", "urgent", "none"]
+    if field == "state__group":
+        return ["backlog", "unstarted", "started", "completed", "cancelled"]
+    if field == "target_date":
+        queryset = (
+            Issue.issue_objects.filter(workspace__slug=slug)
+            .filter(**filters)
+            .values_list("target_date", flat=True)
+            .distinct()
+        )
+        if project_id:
+            return list(queryset.filter(project_id=project_id))
+        else:
+            return list(queryset)
+    if field == "start_date":
+        queryset = (
+            Issue.issue_objects.filter(workspace__slug=slug)
+            .filter(**filters)
+            .values_list("start_date", flat=True)
+            .distinct()
+        )
+        if project_id:
+            return list(queryset.filter(project_id=project_id))
+        else:
+            return list(queryset)
+    return []
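Taken together, these helpers move grouping out of Python dictionaries and into the queryset itself. A minimal sketch of how they compose, assuming an already-filtered Issue queryset (the variable names and the view-level wiring here are illustrative, not from this diff):

    # Hypothetical wiring of the new grouper helpers.
    issues = Issue.issue_objects.filter(workspace__slug=slug, project_id=project_id)
    # Annotate label/assignee/module id arrays so grouped payloads stay flat.
    issues = issue_queryset_grouper(issues, group_by="state_id", sub_group_by=None)
    # Trim each row to the fields the client actually needs.
    rows = issue_on_results(issues, group_by="state_id", sub_group_by=None)
    # All possible group keys, so empty groups can still be rendered.
    group_keys = issue_group_values(field="state_id", slug=slug, project_id=project_id)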
@@ -1,6 +1,7 @@
 import re
+import uuid
 from datetime import timedelta

 from django.utils import timezone

 # The date from pattern
@@ -63,24 +64,27 @@ def date_filter(filter, date_term, queries):
     """
     for query in queries:
         date_query = query.split(";")
-        if len(date_query) >= 2:
-            match = pattern.match(date_query[0])
-            if match:
-                if len(date_query) == 3:
-                    digit, term = date_query[0].split("_")
-                    string_date_filter(
-                        filter=filter,
-                        duration=int(digit),
-                        subsequent=date_query[1],
-                        term=term,
-                        date_filter=date_term,
-                        offset=date_query[2],
-                    )
-            else:
-                if "after" in date_query:
-                    filter[f"{date_term}__gte"] = date_query[0]
-                else:
-                    filter[f"{date_term}__lte"] = date_query[0]
+        if date_query:
+            if len(date_query) >= 2:
+                match = pattern.match(date_query[0])
+                if match:
+                    if len(date_query) == 3:
+                        digit, term = date_query[0].split("_")
+                        string_date_filter(
+                            filter=filter,
+                            duration=int(digit),
+                            subsequent=date_query[1],
+                            term=term,
+                            date_filter=date_term,
+                            offset=date_query[2],
+                        )
+                else:
+                    if "after" in date_query:
+                        filter[f"{date_term}__gte"] = date_query[0]
+                    else:
+                        filter[f"{date_term}__lte"] = date_query[0]
+            else:
+                filter[f"{date_term}__contains"] = date_query[0]


 def filter_state(params, filter, method, prefix=""):
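Two illustrative query strings for the parser above. The relative-date vocabulary lives in string_date_filter and the surrounding pattern, which this diff does not show, so treat the second shape as an assumption inferred from the split logic:

    filters = {}
    # Absolute date with a direction: "after" maps to __gte, otherwise __lte.
    date_filter(filters, date_term="target_date", queries=["2024-06-01;after"])
    # filters == {"target_date__gte": "2024-06-01"}

    # Three-part form "<n>_<term>;<subsequent>;<offset>" is delegated to
    # string_date_filter() for relative ranges.
    date_filter(filters, date_term="target_date", queries=["2_weeks;after;fromnow"])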
apiserver/plane/utils/order_queryset.py (new file, 84 lines)
@@ -0,0 +1,84 @@
+from django.db.models import (
+    Case,
+    CharField,
+    Min,
+    Value,
+    When,
+)
+
+# Custom ordering for priority and state
+PRIORITY_ORDER = ["urgent", "high", "medium", "low", "none"]
+STATE_ORDER = [
+    "backlog",
+    "unstarted",
+    "started",
+    "completed",
+    "cancelled",
+]
+
+
+def order_issue_queryset(issue_queryset, order_by_param="-created_at"):
+    # Priority Ordering
+    if order_by_param == "priority" or order_by_param == "-priority":
+        issue_queryset = issue_queryset.annotate(
+            priority_order=Case(
+                *[When(priority=p, then=Value(i)) for i, p in enumerate(PRIORITY_ORDER)],
+                output_field=CharField(),
+            )
+        ).order_by("priority_order")
+        order_by_param = (
+            "-priority_order" if order_by_param.startswith("-") else "priority_order"
+        )
+    # State Ordering
+    elif order_by_param in [
+        "state__group",
+        "-state__group",
+    ]:
+        state_order = (
+            STATE_ORDER
+            if order_by_param in ["state__name", "state__group"]
+            else STATE_ORDER[::-1]
+        )
+        issue_queryset = issue_queryset.annotate(
+            state_order=Case(
+                *[
+                    When(state__group=state_group, then=Value(i))
+                    for i, state_group in enumerate(state_order)
+                ],
+                default=Value(len(state_order)),
+                output_field=CharField(),
+            )
+        ).order_by("state_order")
+        order_by_param = (
+            "-state_order" if order_by_param.startswith("-") else "state_order"
+        )
+    # assignee and label ordering
+    elif order_by_param in [
+        "labels__name",
+        "assignees__first_name",
+        "issue_module__module__name",
+        "-labels__name",
+        "-assignees__first_name",
+        "-issue_module__module__name",
+    ]:
+        issue_queryset = issue_queryset.annotate(
+            min_values=Min(
+                order_by_param[1::]
+                if order_by_param.startswith("-")
+                else order_by_param
+            )
+        ).order_by(
+            "-min_values" if order_by_param.startswith("-") else "min_values"
+        )
+        order_by_param = (
+            "-min_values" if order_by_param.startswith("-") else "min_values"
+        )
+    else:
+        issue_queryset = issue_queryset.order_by(order_by_param)
+        order_by_param = order_by_param
+    return issue_queryset, order_by_param
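The returned order_by_param is the annotation name the paginators below key on. A small usage sketch under that assumption (call site invented for illustration):

    issues = Issue.issue_objects.filter(workspace__slug=slug)
    # "-priority" is rewritten to "-priority_order": an annotation mapping
    # urgent..none onto 0..4, so a single column carries the custom order.
    issues, order_by_param = order_issue_queryset(issues, order_by_param="-priority")
    # order_by_param is now "-priority_order"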
@@ -1,33 +1,49 @@
-from rest_framework.response import Response
-from rest_framework.exceptions import ParseError
-from collections.abc import Sequence
+# Python imports
+import math
+from collections import defaultdict
+from collections.abc import Sequence
+
+# Django imports
+from django.db.models import Count, F, Window
+from django.db.models.functions import RowNumber
+
+# Third party imports
+from rest_framework.exceptions import ParseError
+from rest_framework.response import Response
+
+# Module imports


 class Cursor:
+    # The cursor value
     def __init__(self, value, offset=0, is_prev=False, has_results=None):
         self.value = value
         self.offset = int(offset)
         self.is_prev = bool(is_prev)
         self.has_results = has_results

+    # Return the cursor value in string format
     def __str__(self):
         return f"{self.value}:{self.offset}:{int(self.is_prev)}"

+    # Return the cursor value
     def __eq__(self, other):
         return all(
             getattr(self, attr) == getattr(other, attr)
             for attr in ("value", "offset", "is_prev", "has_results")
         )

+    # Return the representation of the cursor
     def __repr__(self):
         return f"{type(self).__name__,}: value={self.value} offset={self.offset}, is_prev={int(self.is_prev)}"

+    # Return if the cursor is true
     def __bool__(self):
         return bool(self.has_results)

     @classmethod
     def from_string(cls, value):
+        """Return the cursor value from string format"""
         try:
             bits = value.split(":")
             if len(bits) != 3:
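The cursor therefore serialises as "value:offset:is_prev" — the exact token the API hands back in next_cursor/prev_cursor. Assuming from_string() parses the three colon-separated fields shown above (its body is truncated in this hunk), a round trip looks like:

    cursor = Cursor.from_string("20:1:0")  # value=20, offset=1, is_prev=False
    str(cursor)                            # -> "20:1:0", echoed back by the client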
@@ -50,15 +66,19 @@ class CursorResult(Sequence):
         self.max_hits = max_hits

     def __len__(self):
+        # Return the length of the results
         return len(self.results)

     def __iter__(self):
+        # Return the iterator of the results
         return iter(self.results)

     def __getitem__(self, key):
+        # Return the results based on the key
         return self.results[key]

     def __repr__(self):
+        # Return the representation of the results
         return f"<{type(self).__name__}: results={len(self.results)}>"
@@ -85,11 +105,14 @@ class OffsetPaginator:
         max_offset=None,
         on_results=None,
     ):
+        # Key tuple and remove `-` if descending order by
         self.key = (
             order_by
             if order_by is None or isinstance(order_by, (list, tuple, set))
-            else (order_by,)
+            else (order_by[1::] if order_by.startswith("-") else order_by,)
         )
+        # Set desc to true when `-` exists in the order by
+        self.desc = True if order_by.startswith("-") else False
         self.queryset = queryset
         self.max_limit = max_limit
         self.max_offset = max_offset
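So an order_by of "-created_at" yields key=("created_at",) and desc=True. The same derivation restated outside the class, with an assumed input:

    order_by = "-created_at"
    key = (order_by[1:] if order_by.startswith("-") else order_by,)  # ("created_at",)
    desc = order_by.startswith("-")                                  # True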
@@ -101,11 +124,101 @@
         if cursor is None:
             cursor = Cursor(0, 0, 0)

+        # Get the min from limit and max limit
         limit = min(limit, self.max_limit)

+        # queryset
         queryset = self.queryset
         if self.key:
-            queryset = queryset.order_by(*self.key)
+            queryset = queryset.order_by(
+                (
+                    F(*self.key).desc(nulls_last=True)
+                    if self.desc
+                    else F(*self.key).asc(nulls_last=True)
+                ),
+                "-created_at",
+            )
+        # The current page
         page = cursor.offset
+        # The offset
         offset = cursor.offset * cursor.value
         stop = offset + (cursor.value or limit) + 1

         if self.max_offset is not None and offset >= self.max_offset:
             raise BadPaginationError("Pagination offset too large")
         if offset < 0:
             raise BadPaginationError("Pagination offset cannot be negative")

         results = queryset[offset:stop]

         if cursor.value != limit:
             results = results[-(limit + 1) :]

+        # Adjust cursors based on the results for pagination
+        next_cursor = Cursor(limit, page + 1, False, results.count() > limit)
+        # If the page is greater than 0, then set the previous cursor
+        prev_cursor = Cursor(limit, page - 1, True, page > 0)
+
+        # Process the results
+        results = results[:limit]
+
+        # Process the results
+        if self.on_results:
+            results = self.on_results(results)
+
+        # Count the queryset
+        count = queryset.count()
+
+        # Optionally, calculate the total count and max_hits if needed
+        max_hits = math.ceil(count / limit)
+
+        # Return the cursor results
+        return CursorResult(
+            results=results,
+            next=next_cursor,
+            prev=prev_cursor,
+            hits=count,
+            max_hits=max_hits,
+        )
+
+    def process_results(self, results):
+        raise NotImplementedError
+
+
+class GroupedOffsetPaginator(OffsetPaginator):
+
+    # Field mappers
+    FIELD_MAPPER = {
+        "labels__id": "label_ids",
+        "assignees__id": "assignee_ids",
+        "modules__id": "module_ids",
+    }
+
+    def __init__(
+        self,
+        queryset,
+        group_by_field_name,
+        group_by_fields,
+        count_filter,
+        *args,
+        **kwargs,
+    ):
+        # Initiate the parent class for all the parameters
+        super().__init__(queryset, *args, **kwargs)
+        self.group_by_field_name = group_by_field_name
+        self.group_by_fields = group_by_fields
+        self.count_filter = count_filter
+
+    def get_result(self, limit=50, cursor=None):
+        # offset is page #
+        # value is page limit
+        if cursor is None:
+            cursor = Cursor(0, 0, 0)
+
+        limit = min(limit, self.max_limit)
+
+        # Adjust the initial offset and stop based on the cursor and limit
+        queryset = self.queryset
+
+        page = cursor.offset
+        offset = cursor.offset * cursor.value
@@ -116,20 +229,73 @@ class OffsetPaginator:
         if offset < 0:
             raise BadPaginationError("Pagination offset cannot be negative")

-        results = list(queryset[offset:stop])
-        if cursor.value != limit:
-            results = results[-(limit + 1) :]
+        # Compute the results
+        results = {}
+        # Create window for all the groups
+        queryset = queryset.annotate(
+            row_number=Window(
+                expression=RowNumber(),
+                partition_by=[F(self.group_by_field_name)],
+                order_by=(
+                    (
+                        F(*self.key).desc(nulls_last=True)  # order by desc if desc is set
+                        if self.desc
+                        else F(*self.key).asc(nulls_last=True)  # Order by asc if set
+                    ),
+                    F("created_at").desc(),
+                ),
+            )
+        )
+        # Filter the results by row number
+        results = queryset.filter(
+            row_number__gt=offset, row_number__lt=stop
+        ).order_by(
+            (
+                F(*self.key).desc(nulls_last=True)
+                if self.desc
+                else F(*self.key).asc(nulls_last=True)
+            ),
+            F("created_at").desc(),
+        )

-        next_cursor = Cursor(limit, page + 1, False, len(results) > limit)
-        prev_cursor = Cursor(limit, page - 1, True, page > 0)
-
         results = list(results[:limit])
         if self.on_results:
             results = self.on_results(results)
+        # Adjust cursors based on the grouped results for pagination
+        next_cursor = Cursor(
+            limit,
+            page + 1,
+            False,
+            queryset.filter(row_number__gte=stop).exists(),
+        )
+        prev_cursor = Cursor(
+            limit,
+            page - 1,
+            True,
+            page > 0,
+        )

+        # Count the queryset
         count = queryset.count()
-        max_hits = math.ceil(count / limit)

+        # Optionally, calculate the total count and max_hits if needed
+        # This might require adjustments based on specific use cases
+        if results:
+            max_hits = math.ceil(
+                queryset.values(self.group_by_field_name)
+                .annotate(
+                    count=Count("id", filter=self.count_filter, distinct=True)
+                )
+                .order_by("-count")[0]["count"]
+                / limit
+            )
+        else:
+            max_hits = 0
         return CursorResult(
             results=results,
             next=next_cursor,
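The heart of the grouped paginator is that RowNumber() window: rows are numbered within each group, so a single filtered query slices the same page range out of every group at once. A worked sketch of the cursor arithmetic, with invented numbers:

    # Cursor "20:1:0" -> value=20, offset (page)=1
    # offset = 1 * 20 = 20
    # stop   = 20 + 20 + 1 = 41
    # row_number__gt=20, row_number__lt=41 keeps rows 21..40 of every group;
    # a group with 20 or fewer rows simply contributes nothing to this page.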
@@ -138,6 +304,393 @@
             max_hits=max_hits,
         )

+    def __get_total_queryset(self):
+        # Get total queryset
+        return (
+            self.queryset.values(self.group_by_field_name)
+            .annotate(
+                count=Count("id", filter=self.count_filter, distinct=True)
+            )
+            .order_by()
+        )
+
+    def __get_total_dict(self):
+        # Convert the total into dictionary of keys as group name and value as the total
+        total_group_dict = {}
+        for group in self.__get_total_queryset():
+            total_group_dict[str(group.get(self.group_by_field_name))] = (
+                total_group_dict.get(
+                    str(group.get(self.group_by_field_name)), 0
+                )
+                + (1 if group.get("count") == 0 else group.get("count"))
+            )
+
+        return total_group_dict
+
+    def __get_field_dict(self):
+        # Create a field dictionary
+        total_group_dict = self.__get_total_dict()
+        return {
+            str(field): {
+                "results": [],
+                "total_results": total_group_dict.get(str(field), 0),
+            }
+            for field in self.group_by_fields
+        }
+
+    def __result_already_added(self, result, group):
+        # Check if the result is already added then add it
+        for existing_issue in group:
+            if existing_issue["id"] == result["id"]:
+                return True
+        return False
+
+    def __query_multi_grouper(self, results):
+        # Grouping for m2m values
+        total_group_dict = self.__get_total_dict()
+
+        # Preparing a dict to keep track of group IDs associated with each label ID
+        result_group_mapping = defaultdict(set)
+        # Preparing a dict to group result by group ID
+        grouped_by_field_name = defaultdict(list)
+
+        # Iterate over results to fill the above dictionaries
+        for result in results:
+            result_id = result["id"]
+            group_id = result[self.group_by_field_name]
+            result_group_mapping[str(result_id)].add(str(group_id))
+
+        # Adding group_ids key to each issue and grouping by group_name
+        for result in results:
+            result_id = result["id"]
+            group_ids = list(result_group_mapping[str(result_id)])
+            result[self.FIELD_MAPPER.get(self.group_by_field_name)] = (
+                [] if "None" in group_ids else group_ids
+            )
+            # If a result belongs to multiple groups, add it to each group
+            for group_id in group_ids:
+                if not self.__result_already_added(
+                    result, grouped_by_field_name[group_id]
+                ):
+                    grouped_by_field_name[group_id].append(result)
+
+        # Convert grouped_by_field_name back to a list for each group
+        processed_results = {
+            str(group_id): {
+                "results": issues,
+                "total_results": total_group_dict.get(str(group_id)),
+            }
+            for group_id, issues in grouped_by_field_name.items()
+        }
+
+        return processed_results
+
+    def __query_grouper(self, results):
+        # Grouping for single values
+        processed_results = self.__get_field_dict()
+        for result in results:
+            (
+                print(result["created_at"].date(), result["priority"])
+                if str(result[self.group_by_field_name])
+                == "c88dfd3b-e97e-4948-851b-a5fe1e36ffd0"
+                else None
+            )
+            group_value = str(result.get(self.group_by_field_name))
+            if group_value in processed_results:
+                processed_results[str(group_value)]["results"].append(result)

+        return processed_results
+
+    def process_results(self, results):
+        # Process results
+        if results:
+            if self.group_by_field_name in self.FIELD_MAPPER:
+                processed_results = self.__query_multi_grouper(results=results)
+            else:
+                processed_results = self.__query_grouper(results=results)
+        else:
+            processed_results = {}
+        return processed_results
+
+
+class SubGroupedOffsetPaginator(OffsetPaginator):
+    FIELD_MAPPER = {
+        "labels__id": "label_ids",
+        "assignees__id": "assignee_ids",
+        "modules__id": "module_ids",
+    }
+
+    def __init__(
+        self,
+        queryset,
+        group_by_field_name,
+        sub_group_by_field_name,
+        group_by_fields,
+        sub_group_by_fields,
+        count_filter,
+        *args,
+        **kwargs,
+    ):
+        super().__init__(queryset, *args, **kwargs)
+        self.group_by_field_name = group_by_field_name
+        self.group_by_fields = group_by_fields
+        self.sub_group_by_field_name = sub_group_by_field_name
+        self.sub_group_by_fields = sub_group_by_fields
+        self.count_filter = count_filter
+
+    def get_result(self, limit=30, cursor=None):
+        # offset is page #
+        # value is page limit
+        if cursor is None:
+            cursor = Cursor(0, 0, 0)
+
+        limit = min(limit, self.max_limit)
+
+        # Adjust the initial offset and stop based on the cursor and limit
+        queryset = self.queryset
+
+        page = cursor.offset
+        offset = cursor.offset * cursor.value
+        stop = offset + (cursor.value or limit) + 1
+
+        if self.max_offset is not None and offset >= self.max_offset:
+            raise BadPaginationError("Pagination offset too large")
+        if offset < 0:
+            raise BadPaginationError("Pagination offset cannot be negative")
+
+        # Compute the results
+        results = {}
+
+        # Create windows for group and sub group field name
+        queryset = queryset.annotate(
+            row_number=Window(
+                expression=RowNumber(),
+                partition_by=[
+                    F(self.group_by_field_name),
+                    F(self.sub_group_by_field_name),
+                ],
+                order_by=(
+                    (
+                        F(*self.key).desc(nulls_last=True)
+                        if self.desc
+                        else F(*self.key).asc(nulls_last=True)
+                    ),
+                    "-created_at",
+                ),
+            )
+        )
+
+        # Filter the results
+        results = queryset.filter(
+            row_number__gt=offset, row_number__lt=stop
+        ).order_by(
+            (
+                F(*self.key).desc(nulls_last=True)
+                if self.desc
+                else F(*self.key).asc(nulls_last=True)
+            ),
+            F("created_at").desc(),
+        )
+
+        # Adjust cursors based on the grouped results for pagination
+        next_cursor = Cursor(
+            limit,
+            page + 1,
+            False,
+            queryset.filter(row_number__gte=stop).exists(),
+        )
+        prev_cursor = Cursor(
+            limit,
+            page - 1,
+            True,
+            page > 0,
+        )
+
+        # Count the queryset
+        count = queryset.count()
+
+        # Optionally, calculate the total count and max_hits if needed
+        # This might require adjustments based on specific use cases
+        if results:
+            max_hits = math.ceil(
+                queryset.values(self.group_by_field_name)
+                .annotate(
+                    count=Count("id", filter=self.count_filter, distinct=True)
+                )
+                .order_by("-count")[0]["count"]
+                / limit
+            )
+        else:
+            max_hits = 0
+        return CursorResult(
+            results=results,
+            next=next_cursor,
+            prev=prev_cursor,
+            hits=count,
+            max_hits=max_hits,
+        )
+
+    def __get_group_total_queryset(self):
+        # Get group totals
+        return (
+            self.queryset.order_by(self.group_by_field_name)
+            .values(self.group_by_field_name)
+            .annotate(
+                count=Count("id", filter=self.count_filter, distinct=True)
+            )
+            .distinct()
+        )
+
+    def __get_subgroup_total_queryset(self):
+        # Get subgroup totals
+        return (
+            self.queryset.values(
+                self.group_by_field_name, self.sub_group_by_field_name
+            )
+            .annotate(
+                count=Count("id", filter=self.count_filter, distinct=True)
+            )
+            .order_by()
+            .values(
+                self.group_by_field_name, self.sub_group_by_field_name, "count"
+            )
+        )
+
+    def __get_total_dict(self):
+        # Use the above to convert to dictionary of 2D objects
+        total_group_dict = {}
+        total_sub_group_dict = {}
+        for group in self.__get_group_total_queryset():
+            total_group_dict[str(group.get(self.group_by_field_name))] = (
+                total_group_dict.get(
+                    str(group.get(self.group_by_field_name)), 0
+                )
+                + (1 if group.get("count") == 0 else group.get("count"))
+            )
+
+        # Sub group total values
+        for item in self.__get_subgroup_total_queryset():
+            group = str(item[self.group_by_field_name])
+            subgroup = str(item[self.sub_group_by_field_name])
+            count = item["count"]
+
+            if group not in total_sub_group_dict:
+                total_sub_group_dict[str(group)] = {}
+
+            if subgroup not in total_sub_group_dict[group]:
+                total_sub_group_dict[str(group)][str(subgroup)] = {}
+
+            total_sub_group_dict[group][subgroup] = count
+
+        return total_group_dict, total_sub_group_dict
+
+    def __get_field_dict(self):
+        total_group_dict, total_sub_group_dict = self.__get_total_dict()
+
+        return {
+            str(group): {
+                "results": {
+                    str(sub_group): {
+                        "results": [],
+                        "total_results": total_sub_group_dict.get(
+                            str(group)
+                        ).get(str(sub_group), 0),
+                    }
+                    for sub_group in total_sub_group_dict.get(str(group), [])
+                },
+                "total_results": total_group_dict.get(str(group), 0),
+            }
+            for group in self.group_by_fields
+        }
+
+    def __query_multi_grouper(self, results):
+        # Multi grouper
+        processed_results = self.__get_field_dict()
+        # Preparing a dict to keep track of group IDs associated with each label ID
+        result_group_mapping = defaultdict(set)
+        result_sub_group_mapping = defaultdict(set)
+
+        # Iterate over results to fill the above dictionaries
+        if self.group_by_field_name in self.FIELD_MAPPER:
+            for result in results:
+                result_id = result["id"]
+                group_id = result[self.group_by_field_name]
+                result_group_mapping[str(result_id)].add(str(group_id))
+
+        # Use the same calculation for the sub group
+        if self.sub_group_by_field_name in self.FIELD_MAPPER:
+            for result in results:
+                result_id = result["id"]
+                sub_group_id = result[self.sub_group_by_field_name]
+                result_sub_group_mapping[str(result_id)].add(str(sub_group_id))
+
+        # Iterate over results
+        for result in results:
+            # Get the group value
+            group_value = str(result.get(self.group_by_field_name))
+            # Get the sub group value
+            sub_group_value = str(result.get(self.sub_group_by_field_name))
+            if (
+                group_value in processed_results
+                and sub_group_value
+                in processed_results[str(group_value)]["results"]
+            ):
+                if self.group_by_field_name in self.FIELD_MAPPER:
+                    # for multi grouper
+                    group_ids = list(result_group_mapping[str(result_id)])
+                    result[self.FIELD_MAPPER.get(self.group_by_field_name)] = (
+                        [] if "None" in group_ids else group_ids
+                    )
+                if self.sub_group_by_field_name in self.FIELD_MAPPER:
+                    sub_group_ids = list(result_group_mapping[str(result_id)])
+                    # for multi groups
+                    result[self.FIELD_MAPPER.get(self.group_by_field_name)] = (
+                        [] if "None" in sub_group_ids else sub_group_ids
+                    )
+
+                processed_results[str(group_value)]["results"][
+                    str(sub_group_value)
+                ]["results"].append(result)
+
+        return processed_results
+
+    def __query_grouper(self, results):
+        # Single grouper
+        processed_results = self.__get_field_dict()
+        for result in results:
+            group_value = str(result.get(self.group_by_field_name))
+            sub_group_value = str(result.get(self.sub_group_by_field_name))
+            processed_results[group_value]["results"][sub_group_value][
+                "results"
+            ].append(result)
+
+        return processed_results
+
+    def process_results(self, results):
+        if results:
+            if (
+                self.group_by_field_name in self.FIELD_MAPPER
+                or self.sub_group_by_field_name in self.FIELD_MAPPER
+            ):
+                processed_results = self.__query_multi_grouper(results=results)
+            else:
+                processed_results = self.__query_grouper(results=results)
+        else:
+            processed_results = {}
+        return processed_results
+
+
 class BasePaginator:
     """BasePaginator class can be inherited by any View to return a paginated view"""
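For orientation, __get_field_dict() above seeds a two-level envelope per group and sub-group, which __query_grouper() then fills. Shape only, with invented counts:

    {
        "<group_id>": {
            "total_results": 42,          # per-group count
            "results": {
                "<sub_group_id>": {
                    "results": [],        # issue dicts are appended here
                    "total_results": 7,   # per-sub-group count
                },
            },
        },
    }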
@@ -171,6 +724,11 @@
         cursor_cls=Cursor,
         extra_stats=None,
         controller=None,
+        group_by_field_name=None,
+        group_by_fields=None,
+        sub_group_by_field_name=None,
+        sub_group_by_fields=None,
+        count_filter=None,
         **paginator_kwargs,
     ):
         """Paginate the request"""
@@ -178,15 +736,27 @@

         # Convert the cursor value to integer and float from string
         input_cursor = None
-        if request.GET.get(self.cursor_name):
-            try:
-                input_cursor = cursor_cls.from_string(
-                    request.GET.get(self.cursor_name)
-                )
-            except ValueError:
-                raise ParseError(detail="Invalid cursor parameter.")
+        try:
+            input_cursor = cursor_cls.from_string(
+                request.GET.get(self.cursor_name, f"{per_page}:0:0"),
+            )
+        except ValueError:
+            raise ParseError(detail="Invalid cursor parameter.")

         if not paginator:
+            if group_by_field_name:
+                paginator_kwargs["group_by_field_name"] = group_by_field_name
+                paginator_kwargs["group_by_fields"] = group_by_fields
+                paginator_kwargs["count_filter"] = count_filter
+
+                if sub_group_by_field_name:
+                    paginator_kwargs["sub_group_by_field_name"] = (
+                        sub_group_by_field_name
+                    )
+                    paginator_kwargs["sub_group_by_fields"] = (
+                        sub_group_by_fields
+                    )
+
             paginator = paginator_cls(**paginator_kwargs)

         try:
@@ -196,12 +766,14 @@
         except BadPaginationError:
             raise ParseError(detail="Error in parsing")

         # Serialize result according to the on_result function
         if on_results:
             results = on_results(cursor_result.results)
         else:
             results = cursor_result.results

+        if group_by_field_name:
+            results = paginator.process_results(results=results)

         # Add Manipulation functions to the response
         if controller is not None:
             results = controller(results)
@@ -211,6 +783,9 @@
+        # Return the response
         response = Response(
             {
+                "grouped_by": group_by_field_name,
+                "sub_grouped_by": sub_group_by_field_name,
                 "total_count": (cursor_result.hits),
                 "next_cursor": str(cursor_result.next),
                 "prev_cursor": str(cursor_result.prev),
                 "next_page_results": cursor_result.next.has_results,