801 lines
26 KiB
Python
# Python imports
import math
from collections import defaultdict
from collections.abc import Sequence

# Django imports
from django.db.models import Count, F, Window
from django.db.models.functions import RowNumber

# Third party imports
from rest_framework.exceptions import ParseError
from rest_framework.response import Response

# Module imports


class Cursor:
    # Encapsulates the cursor value, offset and direction for pagination
    def __init__(self, value, offset=0, is_prev=False, has_results=None):
        self.value = value
        self.offset = int(offset)
        self.is_prev = bool(is_prev)
        self.has_results = has_results

    # Return the cursor in string format
    def __str__(self):
        return f"{self.value}:{self.offset}:{int(self.is_prev)}"

    # Compare two cursors by all of their attributes
    def __eq__(self, other):
        return all(
            getattr(self, attr) == getattr(other, attr)
            for attr in ("value", "offset", "is_prev", "has_results")
        )

    # Return the representation of the cursor
    def __repr__(self):
        return f"{type(self).__name__}: value={self.value} offset={self.offset}, is_prev={int(self.is_prev)}"

    # A cursor is truthy when it points at results
    def __bool__(self):
        return bool(self.has_results)

    @classmethod
    def from_string(cls, value):
        """Parse a cursor from its string format"""
        try:
            bits = value.split(":")
            if len(bits) != 3:
                raise ValueError(
                    "Cursor must be in the format 'value:offset:is_prev'"
                )

            value = float(bits[0]) if "." in bits[0] else int(bits[0])
            return cls(value, int(bits[1]), bool(int(bits[2])))
        except (TypeError, ValueError) as e:
            raise ValueError(f"Invalid cursor format: {e}")

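The `value:offset:is_prev` wire format can be exercised on its own with a pair of plain functions; `encode_cursor` and `decode_cursor` are hypothetical helpers mirroring `Cursor.__str__` and `Cursor.from_string`, not part of this module:

```python
def encode_cursor(value, offset, is_prev):
    # Same "value:offset:is_prev" wire format as Cursor.__str__
    return f"{value}:{offset}:{int(is_prev)}"


def decode_cursor(text):
    # Inverse of encode_cursor, mirroring Cursor.from_string
    bits = text.split(":")
    if len(bits) != 3:
        raise ValueError("Cursor must be in the format 'value:offset:is_prev'")
    value = float(bits[0]) if "." in bits[0] else int(bits[0])
    return value, int(bits[1]), bool(int(bits[2]))


print(decode_cursor("100:2:1"))     # (100, 2, True)
print(encode_cursor(100, 2, True))  # 100:2:1
```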
class CursorResult(Sequence):
    def __init__(self, results, next, prev, hits=None, max_hits=None):
        self.results = results
        self.next = next
        self.prev = prev
        self.hits = hits
        self.max_hits = max_hits

    def __len__(self):
        # Return the length of the results
        return len(self.results)

    def __iter__(self):
        # Return the iterator of the results
        return iter(self.results)

    def __getitem__(self, key):
        # Return the results based on the key
        return self.results[key]

    def __repr__(self):
        # Return the representation of the results
        return f"<{type(self).__name__}: results={len(self.results)}>"

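Because `CursorResult` subclasses `collections.abc.Sequence`, providing `__len__` and `__getitem__` is enough for the ABC to supply `in`, `reversed`, `index` and `count`. A minimal sketch of the same pattern (`Page` is a hypothetical wrapper, not part of this module):

```python
from collections.abc import Sequence


class Page(Sequence):
    # Read-only sequence wrapper, same pattern as CursorResult
    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def __getitem__(self, key):
        return self._items[key]


page = Page([10, 20, 30])
# The Sequence mixin derives membership and reversal from the two methods above
print(len(page), 20 in page, list(reversed(page)))  # 3 True [30, 20, 10]
```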
MAX_LIMIT = 100


class BadPaginationError(Exception):
    pass


class OffsetPaginator:
    """
    Offset-based paginator using limit and offset with cursor controls,
    e.g. http://example.com/api/users/?cursor=10:0:0&per_page=10
    where the cursor encodes the page limit (value) and page number (offset).
    """

    def __init__(
        self,
        queryset,
        order_by=None,
        max_limit=MAX_LIMIT,
        max_offset=None,
        on_results=None,
    ):
        # Build the key tuple, stripping a leading `-` (descending marker)
        self.key = (
            order_by
            if order_by is None or isinstance(order_by, (list, tuple, set))
            else (order_by[1:] if order_by.startswith("-") else order_by,)
        )
        # Set desc to true when the order by starts with `-`
        self.desc = True if order_by and order_by.startswith("-") else False
        self.queryset = queryset
        self.max_limit = max_limit
        self.max_offset = max_offset
        self.on_results = on_results

    def get_result(self, limit=100, cursor=None):
        # cursor.offset is the page number, cursor.value is the page limit
        if cursor is None:
            cursor = Cursor(0, 0, 0)

        # Clamp the requested limit to the maximum allowed
        limit = min(limit, self.max_limit)

        # Apply the ordering to the queryset
        queryset = self.queryset
        if self.key:
            queryset = queryset.order_by(
                (
                    F(*self.key).desc(nulls_last=True)
                    if self.desc
                    else F(*self.key).asc(nulls_last=True)
                ),
                "-created_at",
            )
        # The current page
        page = cursor.offset
        # The row offset for this page
        offset = cursor.offset * cursor.value
        # Fetch one extra row to detect whether a next page exists
        stop = offset + (cursor.value or limit) + 1

        if self.max_offset is not None and offset >= self.max_offset:
            raise BadPaginationError("Pagination offset too large")
        if offset < 0:
            raise BadPaginationError("Pagination offset cannot be negative")

        results = queryset[offset:stop]

        if cursor.value != limit:
            results = results[-(limit + 1) :]

        # Adjust cursors based on the results for pagination
        next_cursor = Cursor(limit, page + 1, False, results.count() > limit)
        # If the page is greater than 0, then set the previous cursor
        prev_cursor = Cursor(limit, page - 1, True, page > 0)

        # Trim the extra row used for next-page detection
        results = results[:limit]

        # Post-process the results if a callback is provided
        if self.on_results:
            results = self.on_results(results)

        # Count the queryset
        count = queryset.count()

        # Total number of pages at this limit
        max_hits = math.ceil(count / limit)

        # Return the cursor results
        return CursorResult(
            results=results,
            next=next_cursor,
            prev=prev_cursor,
            hits=count,
            max_hits=max_hits,
        )

    def process_results(self, results):
        raise NotImplementedError

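The slice arithmetic in `get_result` — fetch `limit + 1` rows and use the extra row only to decide whether a next page exists, avoiding a second count query — can be sketched on a plain list; `paginate` is a hypothetical helper, not part of this module:

```python
def paginate(items, limit, page):
    # Mirror OffsetPaginator: offset = page * limit, fetch one extra row
    offset = page * limit
    window = items[offset : offset + limit + 1]
    has_next = len(window) > limit  # the extra row signals another page
    return window[:limit], has_next


data = list(range(25))
print(paginate(data, 10, 0))  # first ten items, has_next=True
print(paginate(data, 10, 2))  # ([20, 21, 22, 23, 24], False)
```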
class GroupedOffsetPaginator(OffsetPaginator):

    # Mapping of m2m group-by fields to their id-list keys on the result
    FIELD_MAPPER = {
        "labels__id": "label_ids",
        "assignees__id": "assignee_ids",
        "modules__id": "module_ids",
    }

    def __init__(
        self,
        queryset,
        group_by_field_name,
        group_by_fields,
        count_filter,
        *args,
        **kwargs,
    ):
        # Initiate the parent class for all the parameters
        super().__init__(queryset, *args, **kwargs)
        self.group_by_field_name = group_by_field_name
        self.group_by_fields = group_by_fields
        self.count_filter = count_filter

    def get_result(self, limit=50, cursor=None):
        # cursor.offset is the page number, cursor.value is the page limit
        if cursor is None:
            cursor = Cursor(0, 0, 0)

        limit = min(limit, self.max_limit)

        # Adjust the initial offset and stop based on the cursor and limit
        queryset = self.queryset

        page = cursor.offset
        offset = cursor.offset * cursor.value
        stop = offset + (cursor.value or limit) + 1

        if self.max_offset is not None and offset >= self.max_offset:
            raise BadPaginationError("Pagination offset too large")
        if offset < 0:
            raise BadPaginationError("Pagination offset cannot be negative")

        # Assign a row number to every row within its group
        queryset = queryset.annotate(
            row_number=Window(
                expression=RowNumber(),
                partition_by=[F(self.group_by_field_name)],
                order_by=(
                    (
                        F(*self.key).desc(nulls_last=True)  # order by desc if set
                        if self.desc
                        else F(*self.key).asc(nulls_last=True)  # order by asc if set
                    ),
                    F("created_at").desc(),
                ),
            )
        )
        # Keep only the rows inside this page's window for each group
        results = queryset.filter(
            row_number__gt=offset, row_number__lt=stop
        ).order_by(
            (
                F(*self.key).desc(nulls_last=True)
                if self.desc
                else F(*self.key).asc(nulls_last=True)
            ),
            F("created_at").desc(),
        )

        # Adjust cursors based on the grouped results for pagination
        next_cursor = Cursor(
            limit,
            page + 1,
            False,
            queryset.filter(row_number__gte=stop).exists(),
        )
        prev_cursor = Cursor(
            limit,
            page - 1,
            True,
            page > 0,
        )

        # Count the queryset
        count = queryset.count()

        # max_hits is the page count of the largest group, since pagination
        # continues until the largest group is exhausted
        if results:
            max_hits = math.ceil(
                queryset.values(self.group_by_field_name)
                .annotate(
                    count=Count(
                        "id",
                        filter=self.count_filter,
                        distinct=True,
                    )
                )
                .order_by("-count")[0]["count"]
                / limit
            )
        else:
            max_hits = 0
        return CursorResult(
            results=results,
            next=next_cursor,
            prev=prev_cursor,
            hits=count,
            max_hits=max_hits,
        )

    def __get_total_queryset(self):
        # Annotate each group with its total count
        return (
            self.queryset.values(self.group_by_field_name)
            .annotate(
                count=Count(
                    "id",
                    filter=self.count_filter,
                    distinct=True,
                )
            )
            .order_by()
        )

    def __get_total_dict(self):
        # Convert the totals into a dictionary keyed by group name
        total_group_dict = {}
        for group in self.__get_total_queryset():
            total_group_dict[str(group.get(self.group_by_field_name))] = (
                total_group_dict.get(
                    str(group.get(self.group_by_field_name)), 0
                )
                + (1 if group.get("count") == 0 else group.get("count"))
            )

        return total_group_dict

    def __get_field_dict(self):
        # Seed a result bucket for every group with its total count
        total_group_dict = self.__get_total_dict()
        return {
            str(field): {
                "results": [],
                "total_results": total_group_dict.get(str(field), 0),
            }
            for field in self.group_by_fields
        }

    def __result_already_added(self, result, group):
        # Check whether the result is already present in the group
        for existing_issue in group:
            if existing_issue["id"] == result["id"]:
                return True
        return False

    def __query_multi_grouper(self, results):
        # Grouping for m2m values
        total_group_dict = self.__get_total_dict()

        # Track the group IDs associated with each result ID
        result_group_mapping = defaultdict(set)
        # Group results by group ID
        grouped_by_field_name = defaultdict(list)

        # Iterate over results to fill the above dictionaries
        for result in results:
            result_id = result["id"]
            group_id = result[self.group_by_field_name]
            result_group_mapping[str(result_id)].add(str(group_id))

        # Add the group_ids key to each issue and group by group name
        for result in results:
            result_id = result["id"]
            group_ids = list(result_group_mapping[str(result_id)])
            result[self.FIELD_MAPPER.get(self.group_by_field_name)] = (
                [] if "None" in group_ids else group_ids
            )
            # If a result belongs to multiple groups, add it to each group
            for group_id in group_ids:
                if not self.__result_already_added(
                    result, grouped_by_field_name[group_id]
                ):
                    grouped_by_field_name[group_id].append(result)

        # Convert grouped_by_field_name into the response shape
        processed_results = {
            str(group_id): {
                "results": issues,
                "total_results": total_group_dict.get(str(group_id)),
            }
            for group_id, issues in grouped_by_field_name.items()
        }

        return processed_results

    def __query_grouper(self, results):
        # Grouping for single values
        processed_results = self.__get_field_dict()
        for result in results:
            group_value = str(result.get(self.group_by_field_name))
            if group_value in processed_results:
                processed_results[group_value]["results"].append(result)

        return processed_results

    def process_results(self, results):
        # Route to the m2m or single-value grouper
        if results:
            if self.group_by_field_name in self.FIELD_MAPPER:
                processed_results = self.__query_multi_grouper(results=results)
            else:
                processed_results = self.__query_grouper(results=results)
        else:
            processed_results = {}
        return processed_results

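The per-group window in `get_result` can be imitated in plain Python: number the rows inside each group, then keep only the rows whose number falls inside the page window (`offset < row_number < offset + limit + 1`). A minimal sketch; `group_page` is a hypothetical helper, not part of this module:

```python
from collections import defaultdict


def group_page(rows, group_key, limit, page):
    # Partition rows by group, preserving order (the Window/RowNumber step)
    numbered = defaultdict(list)
    for row in rows:
        numbered[row[group_key]].append(row)
    offset = page * limit
    # Keep rows offset+1 .. offset+limit in each group, i.e. [offset:offset+limit]
    return {
        group: members[offset : offset + limit]
        for group, members in numbered.items()
    }


rows = [{"id": i, "state": "done" if i % 2 else "todo"} for i in range(6)]
# page 0 at limit 2: ids 0, 2 under "todo" and 1, 3 under "done"
print(group_page(rows, "state", 2, 0))
```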
class SubGroupedOffsetPaginator(OffsetPaginator):
|
|
FIELD_MAPPER = {
|
|
"labels__id": "label_ids",
|
|
"assignees__id": "assignee_ids",
|
|
"modules__id": "module_ids",
|
|
}
|
|
|
|
def __init__(
|
|
self,
|
|
queryset,
|
|
group_by_field_name,
|
|
sub_group_by_field_name,
|
|
group_by_fields,
|
|
sub_group_by_fields,
|
|
count_filter,
|
|
*args,
|
|
**kwargs,
|
|
):
|
|
super().__init__(queryset, *args, **kwargs)
|
|
self.group_by_field_name = group_by_field_name
|
|
self.group_by_fields = group_by_fields
|
|
self.sub_group_by_field_name = sub_group_by_field_name
|
|
self.sub_group_by_fields = sub_group_by_fields
|
|
self.count_filter = count_filter
|
|
|
|
def get_result(self, limit=30, cursor=None):
|
|
# offset is page #
|
|
# value is page limit
|
|
if cursor is None:
|
|
cursor = Cursor(0, 0, 0)
|
|
|
|
limit = min(limit, self.max_limit)
|
|
|
|
# Adjust the initial offset and stop based on the cursor and limit
|
|
queryset = self.queryset
|
|
|
|
page = cursor.offset
|
|
offset = cursor.offset * cursor.value
|
|
stop = offset + (cursor.value or limit) + 1
|
|
|
|
if self.max_offset is not None and offset >= self.max_offset:
|
|
raise BadPaginationError("Pagination offset too large")
|
|
if offset < 0:
|
|
raise BadPaginationError("Pagination offset cannot be negative")
|
|
|
|
# Compute the results
|
|
results = {}
|
|
|
|
# Create windows for group and sub group field name
|
|
queryset = queryset.annotate(
|
|
row_number=Window(
|
|
expression=RowNumber(),
|
|
partition_by=[
|
|
F(self.group_by_field_name),
|
|
F(self.sub_group_by_field_name),
|
|
],
|
|
order_by=(
|
|
(
|
|
F(*self.key).desc(nulls_last=True)
|
|
if self.desc
|
|
else F(*self.key).asc(nulls_last=True)
|
|
),
|
|
"-created_at",
|
|
),
|
|
)
|
|
)
|
|
|
|
# Filter the results
|
|
results = queryset.filter(
|
|
row_number__gt=offset, row_number__lt=stop
|
|
).order_by(
|
|
(
|
|
F(*self.key).desc(nulls_last=True)
|
|
if self.desc
|
|
else F(*self.key).asc(nulls_last=True)
|
|
),
|
|
F("created_at").desc(),
|
|
)
|
|
|
|
# Adjust cursors based on the grouped results for pagination
|
|
next_cursor = Cursor(
|
|
limit,
|
|
page + 1,
|
|
False,
|
|
queryset.filter(row_number__gte=stop).exists(),
|
|
)
|
|
prev_cursor = Cursor(
|
|
limit,
|
|
page - 1,
|
|
True,
|
|
page > 0,
|
|
)
|
|
|
|
# Count the queryset
|
|
count = queryset.count()
|
|
|
|
# Optionally, calculate the total count and max_hits if needed
|
|
# This might require adjustments based on specific use cases
|
|
if results:
|
|
max_hits = math.ceil(
|
|
queryset.values(self.group_by_field_name)
|
|
.annotate(
|
|
count=Count(
|
|
"id",
|
|
filter=self.count_filter,
|
|
distinct=True,
|
|
)
|
|
)
|
|
.order_by("-count")[0]["count"]
|
|
/ limit
|
|
)
|
|
else:
|
|
max_hits = 0
|
|
return CursorResult(
|
|
results=results,
|
|
next=next_cursor,
|
|
prev=prev_cursor,
|
|
hits=count,
|
|
max_hits=max_hits,
|
|
)
|
|
|
|
def __get_group_total_queryset(self):
|
|
# Get group totals
|
|
return (
|
|
self.queryset.order_by(self.group_by_field_name)
|
|
.values(self.group_by_field_name)
|
|
.annotate(
|
|
count=Count(
|
|
"id",
|
|
filter=self.count_filter,
|
|
distinct=True,
|
|
)
|
|
)
|
|
.distinct()
|
|
)
|
|
|
|
def __get_subgroup_total_queryset(self):
|
|
# Get subgroup totals
|
|
return (
|
|
self.queryset.values(
|
|
self.group_by_field_name, self.sub_group_by_field_name
|
|
)
|
|
.annotate(
|
|
count=Count("id", filter=self.count_filter, distinct=True)
|
|
)
|
|
.order_by()
|
|
.values(
|
|
self.group_by_field_name, self.sub_group_by_field_name, "count"
|
|
)
|
|
)
|
|
|
|
def __get_total_dict(self):
|
|
# Use the above to convert to dictionary of 2D objects
|
|
total_group_dict = {}
|
|
total_sub_group_dict = {}
|
|
for group in self.__get_group_total_queryset():
|
|
total_group_dict[str(group.get(self.group_by_field_name))] = (
|
|
total_group_dict.get(
|
|
str(group.get(self.group_by_field_name)), 0
|
|
)
|
|
+ (1 if group.get("count") == 0 else group.get("count"))
|
|
)
|
|
|
|
# Sub group total values
|
|
for item in self.__get_subgroup_total_queryset():
|
|
group = str(item[self.group_by_field_name])
|
|
subgroup = str(item[self.sub_group_by_field_name])
|
|
count = item["count"]
|
|
|
|
if group not in total_sub_group_dict:
|
|
total_sub_group_dict[str(group)] = {}
|
|
|
|
if subgroup not in total_sub_group_dict[group]:
|
|
total_sub_group_dict[str(group)][str(subgroup)] = {}
|
|
|
|
total_sub_group_dict[group][subgroup] = count
|
|
|
|
return total_group_dict, total_sub_group_dict
|
|
|
|
def __get_field_dict(self):
|
|
total_group_dict, total_sub_group_dict = self.__get_total_dict()
|
|
|
|
return {
|
|
str(group): {
|
|
"results": {
|
|
str(sub_group): {
|
|
"results": [],
|
|
"total_results": total_sub_group_dict.get(
|
|
str(group)
|
|
).get(str(sub_group), 0),
|
|
}
|
|
for sub_group in total_sub_group_dict.get(str(group), [])
|
|
},
|
|
"total_results": total_group_dict.get(str(group), 0),
|
|
}
|
|
for group in self.group_by_fields
|
|
}
|
|
|
|
    def __query_multi_grouper(self, results):
        # Multi grouper
        processed_results = self.__get_field_dict()

        # Preparing dicts to keep track of the group and sub group IDs
        # associated with each result ID
        result_group_mapping = defaultdict(set)
        result_sub_group_mapping = defaultdict(set)

        # Iterate over results to fill the above dictionaries
        if self.group_by_field_name in self.FIELD_MAPPER:
            for result in results:
                result_id = result["id"]
                group_id = result[self.group_by_field_name]
                result_group_mapping[str(result_id)].add(str(group_id))

        # Use the same calculation for the sub group
        if self.sub_group_by_field_name in self.FIELD_MAPPER:
            for result in results:
                result_id = result["id"]
                sub_group_id = result[self.sub_group_by_field_name]
                result_sub_group_mapping[str(result_id)].add(str(sub_group_id))

        # Iterate over results
        for result in results:
            result_id = result["id"]
            # Get the group value
            group_value = str(result.get(self.group_by_field_name))
            # Get the sub group value
            sub_group_value = str(result.get(self.sub_group_by_field_name))

            if (
                group_value in processed_results
                and sub_group_value
                in processed_results[group_value]["results"]
            ):
                if self.group_by_field_name in self.FIELD_MAPPER:
                    # For multi groupers, replace the scalar field with the
                    # full list of associated group IDs
                    group_ids = list(result_group_mapping[str(result_id)])
                    result[self.FIELD_MAPPER.get(self.group_by_field_name)] = (
                        [] if "None" in group_ids else group_ids
                    )
                if self.sub_group_by_field_name in self.FIELD_MAPPER:
                    # Same for the sub group field; note this reads from the
                    # sub group mapping, not the group mapping
                    sub_group_ids = list(
                        result_sub_group_mapping[str(result_id)]
                    )
                    result[
                        self.FIELD_MAPPER.get(self.sub_group_by_field_name)
                    ] = ([] if "None" in sub_group_ids else sub_group_ids)

                processed_results[group_value]["results"][sub_group_value][
                    "results"
                ].append(result)

        return processed_results

    def __query_grouper(self, results):
        # Single grouper
        processed_results = self.__get_field_dict()
        for result in results:
            group_value = str(result.get(self.group_by_field_name))
            sub_group_value = str(result.get(self.sub_group_by_field_name))
            processed_results[group_value]["results"][sub_group_value][
                "results"
            ].append(result)

        return processed_results

    def process_results(self, results):
        if results:
            if (
                self.group_by_field_name in self.FIELD_MAPPER
                or self.sub_group_by_field_name in self.FIELD_MAPPER
            ):
                processed_results = self.__query_multi_grouper(results=results)
            else:
                processed_results = self.__query_grouper(results=results)
        else:
            processed_results = {}
        return processed_results

|
|
class BasePaginator:
|
|
"""BasePaginator class can be inherited by any View to return a paginated view"""
|
|
|
|
# cursor query parameter name
|
|
cursor_name = "cursor"
|
|
|
|
# get the per page parameter from request
|
|
def get_per_page(self, request, default_per_page=100, max_per_page=100):
|
|
try:
|
|
per_page = int(request.GET.get("per_page", default_per_page))
|
|
except ValueError:
|
|
raise ParseError(detail="Invalid per_page parameter.")
|
|
|
|
max_per_page = max(max_per_page, default_per_page)
|
|
if per_page > max_per_page:
|
|
raise ParseError(
|
|
detail=f"Invalid per_page value. Cannot exceed {max_per_page}."
|
|
)
|
|
|
|
return per_page
|
|
|
|
    def paginate(
        self,
        request,
        on_results=None,
        paginator=None,
        paginator_cls=OffsetPaginator,
        default_per_page=100,
        max_per_page=100,
        cursor_cls=Cursor,
        extra_stats=None,
        controller=None,
        group_by_field_name=None,
        group_by_fields=None,
        sub_group_by_field_name=None,
        sub_group_by_fields=None,
        count_filter=None,
        **paginator_kwargs,
    ):
        """Paginate the request"""
        per_page = self.get_per_page(request, default_per_page, max_per_page)

        # Parse the cursor value from the request query string
        try:
            input_cursor = cursor_cls.from_string(
                request.GET.get(self.cursor_name, f"{per_page}:0:0"),
            )
        except ValueError:
            raise ParseError(detail="Invalid cursor parameter.")

        if not paginator:
            if group_by_field_name:
                paginator_kwargs["group_by_field_name"] = group_by_field_name
                paginator_kwargs["group_by_fields"] = group_by_fields
                paginator_kwargs["count_filter"] = count_filter

                if sub_group_by_field_name:
                    paginator_kwargs["sub_group_by_field_name"] = (
                        sub_group_by_field_name
                    )
                    paginator_kwargs["sub_group_by_fields"] = (
                        sub_group_by_fields
                    )

            paginator = paginator_cls(**paginator_kwargs)

        try:
            cursor_result = paginator.get_result(
                limit=per_page, cursor=input_cursor
            )
        except BadPaginationError:
            raise ParseError(detail="Error while parsing the pagination cursor.")

        if on_results:
            results = on_results(cursor_result.results)
        else:
            results = cursor_result.results

        if group_by_field_name:
            results = paginator.process_results(results=results)

        # Apply any post-processing controller to the results
        if controller is not None:
            results = controller(results)

        # Return the response
        return Response(
            {
                "grouped_by": group_by_field_name,
                "sub_grouped_by": sub_group_by_field_name,
                "total_count": cursor_result.hits,
                "next_cursor": str(cursor_result.next),
                "prev_cursor": str(cursor_result.prev),
                "next_page_results": cursor_result.next.has_results,
                "prev_page_results": cursor_result.prev.has_results,
                "count": len(cursor_result),
                "total_pages": cursor_result.max_hits,
                "total_results": cursor_result.hits,
                "extra_stats": extra_stats,
                "results": results,
            }
        )
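    # Illustrative usage sketch (hypothetical view and serializer names; not
    # part of this module): a DRF endpoint inheriting BasePaginator might call
    # `paginate` roughly like this, with the queryset forwarded to the
    # paginator class via **paginator_kwargs:
    #
    #   class IssueListEndpoint(BaseAPIView, BasePaginator):
    #       def get(self, request, slug):
    #           issue_queryset = Issue.objects.filter(workspace__slug=slug)
    #           return self.paginate(
    #               request=request,
    #               queryset=issue_queryset,
    #               on_results=lambda issues: IssueSerializer(
    #                   issues, many=True
    #               ).data,
    #           )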