* use common getIssues from issue service instead of multiple different services for modules and cycles
* Use SQLite to store issues locally and load issues from it.
* Fix incorrect total count and filtering on assignees.
* enable parallel API calls
* chore: deleted issue list
* - Handle local mutations - Implement getting the updates - Use SWR to update/sync data
* Wait for sync to complete in get issues
* Fix build errors
* Fix build issue
* - Sync updates to local-db - Fallback to server when the local data is loading - Wait when the updates are being fetched
* Add issues in batches
* Disable skeleton loaders for first 10 issues
* Load issues in bulk
* working version of SQLite with grouped issues
* Use window queries for group by
* - Fix sort by date fields - Fix the total count
* - Fix grouping by created by - Fix order by and limit
* fix pagination
* Fix sorting on issue priority
* - Add secondary sort order - Fix group by priority
* chore: added timestamp filter for deleted issues
* - Extract local DB into its own class - Implement sorting by label names
* Implement subgroup by
* sub group by changes
* Refactor query constructor
* Insert or update issues instead of directly adding them.
* Segregated queries. Not working though!!
* - Get filtered issues and then group them. - Cleanup code. - Implement order by labels.
* Fix build issues
* Remove debuggers
* remove loaders while changing sorting or applying filters
* fix loader while clearing all filters
* Fix issue with project being synced twice
* Improve project sync
* Optimize the queries
* Make create dummy data more realistic
* dev: added total pages in the global paginator
* chore: updated total_paged count
* chore: added state_group in the issues pagination
* chore: removed deleted_at from the issue pagination payload
* chore: replaced state_group with state__group
* Integrate new getIssues API, and fix sync issues bug.
* Fix issue with SWR running twice in workspace wrapper
* Fix DB initialization called when opening project for the first time.
* Add all the tables required for sorting
* Exclude description from getIssues
* Add getIssue function.
* Add only selected fields to get query.
* Fix the count query
* Minor query optimization when no joins are required.
* fetch issue description from local db
* clear local db on signout
* Correct dummy data creation
* Fix sort by assignee
* sync to local changes
* chore: added archived issues in the deleted endpoint
* Sync deletes to local db.
* - Add missing indexes for tables used in sorting in spreadsheet layout. - Add options table
* Make fallback optional in getOption
* Kanban column virtualization
* persist project sync readiness to sqlite and use that as the source of truth for the project issues to be ready
* fix build errors
* Fix calendar view
* fetch slimmed down version of modules in project wrapper
* fetch toned down modules and then fetch complete modules
* Fix multi value order by in spreadsheet layout
* Fix sort by
* Fix the query when ordering by multi field names
* Remove unused import
* Fix sort by multi value fields
* Format queries and fix order by
* fix order by for multi issue
* fix loaders for spreadsheet
* Fallback to manual order when moving away from spreadsheet layout
* fix minor bug
* Move fix for order_by when switching from spreadsheet layout to translateQueryParams
* fix default rendering of kanban groups
* Fix none priority being saved as null
* Remove debugger statement
* Fix issue load
* chore: updated issue paginated query from to
* Fix sub issues and start and target date filters
* Fix active and backlog filter
* Add default order by
* Update the Query param to match with backend.
* local sqlite db versioning
* When window is hidden, do not perform any db versioning
* fix error handling and fall back to server when database errors out
* Add ability to disable local db cache
* remove db version check from getIssues function
* change db version to number and remove workspaceInitPromise in storage.sqlite
* - Sync the entire workspace in the background - Add get sub issue method with distribution
* Make changes to get issues for sync to match backend.
* chore: handled workspace and project in v2 paginated issues
* disable issue description and title until fetched from server
* sync issues post bulk operations
* fix server error
* fix front end build
* Remove full workspace sync
* - Remove the toast message on sync. - Update the disable local message.
* Add hardcoded constant to disable the local db caching
* fix lint errors
* Fix order by in grouping
* update yarn lock
* fix build
* fix plane-web imports
* address review comments

---------

Co-authored-by: rahulramesha <rahulramesham@gmail.com>
Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>
Co-authored-by: gurusainath <gurusainath007@gmail.com>
85 lines
2.6 KiB
Python
# python imports
from math import ceil

# constants
PAGINATOR_MAX_LIMIT = 1000

class PaginateCursor:
    def __init__(self, current_page_size: int, current_page: int, offset: int):
        self.current_page_size = current_page_size
        self.current_page = current_page
        self.offset = offset

    def __str__(self):
        return f"{self.current_page_size}:{self.current_page}:{self.offset}"

    @classmethod
    def from_string(cls, value):
        """Build a cursor from its string form 'page_size:page:offset'"""
        try:
            bits = value.split(":")
            if len(bits) != 3:
                raise ValueError(
                    "Cursor must be in the format 'page_size:page:offset'"
                )
            return cls(int(bits[0]), int(bits[1]), int(bits[2]))
        except (TypeError, ValueError) as e:
            raise ValueError(f"Invalid cursor format: {e}")

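The cursor travels as three colon-separated integers, so parsing and serialising are inverses of each other. A minimal round-trip sketch (the `Cursor` class here is a standalone mirror of `PaginateCursor` above, redefined so the snippet runs on its own):

```python
class Cursor:
    # Standalone mirror of PaginateCursor above: page size, page number, offset
    def __init__(self, page_size, page, offset):
        self.page_size, self.page, self.offset = page_size, page, offset

    def __str__(self):
        return f"{self.page_size}:{self.page}:{self.offset}"

    @classmethod
    def from_string(cls, value):
        bits = value.split(":")
        if len(bits) != 3:
            raise ValueError("Cursor must be in the format 'page_size:page:offset'")
        return cls(*(int(b) for b in bits))


c = Cursor.from_string("100:2:0")
print(str(c))  # round-trips back to "100:2:0"
```

Because `__str__` emits exactly the format `from_string` consumes, a cursor can be handed to the client and parsed back on the next request without any extra state on the server.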
def paginate(base_queryset, queryset, cursor, on_result):
    # Parse the cursor, defaulting to the first page at the maximum page size
    if cursor is None:
        cursor_object = PaginateCursor(PAGINATOR_MAX_LIMIT, 0, 0)
    else:
        cursor_object = PaginateCursor.from_string(cursor)

    # Total record count and the effective page size
    total_results = base_queryset.count()
    page_size = min(cursor_object.current_page_size, PAGINATOR_MAX_LIMIT)

    # Total pages available for this page size
    total_pages = ceil(total_results / page_size)

    # Calculate the start and end index for the paginated data
    start_index = 0
    if cursor_object.current_page > 0:
        start_index = cursor_object.current_page * page_size
    end_index = min(start_index + page_size, total_results)

    # Slice out the requested page
    results = queryset[start_index:end_index]

    # Build the cursors for the previous, current, and next pages
    prev_cursor = f"{page_size}:{cursor_object.current_page - 1}:0"
    cursor = f"{page_size}:{cursor_object.current_page}:0"
    next_cursor = None
    if end_index < total_results:
        next_cursor = f"{page_size}:{cursor_object.current_page + 1}:0"

    prev_page_results = cursor_object.current_page > 0
    next_page_results = next_cursor is not None

    # Let the caller post-process the page of results
    if on_result:
        results = on_result(results)

    # Return the page together with its pagination metadata
    return {
        "prev_cursor": prev_cursor,
        "cursor": cursor,
        "next_cursor": next_cursor,
        "prev_page_results": prev_page_results,
        "next_page_results": next_page_results,
        "page_count": len(results),
        "total_results": total_results,
        "total_pages": total_pages,
        "results": results,
    }
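To see the contract end to end, here is a minimal sketch that drives the same offset-pagination logic against an in-memory stand-in for a Django queryset. `FakeQuerySet` and the sample data are invented for illustration, and the function body is a condensed mirror of `paginate` above (the previous-cursor bookkeeping is omitted for brevity):

```python
from math import ceil

PAGINATOR_MAX_LIMIT = 1000


class FakeQuerySet:
    """In-memory stand-in exposing the two operations paginate relies on:
    .count() and slicing."""

    def __init__(self, rows):
        self._rows = rows

    def count(self):
        return len(self._rows)

    def __getitem__(self, key):
        return self._rows[key]


def paginate(base_queryset, queryset, cursor, on_result):
    # Condensed mirror of the paginate above
    if cursor is None:
        page_size, page = PAGINATOR_MAX_LIMIT, 0
    else:
        bits = cursor.split(":")
        page_size, page = int(bits[0]), int(bits[1])
    total_results = base_queryset.count()
    page_size = min(page_size, PAGINATOR_MAX_LIMIT)
    start = page * page_size
    end = min(start + page_size, total_results)
    results = queryset[start:end]
    if on_result:
        results = on_result(results)
    return {
        "cursor": f"{page_size}:{page}:0",
        "next_cursor": f"{page_size}:{page + 1}:0" if end < total_results else None,
        "prev_page_results": page > 0,
        "next_page_results": end < total_results,
        "page_count": len(results),
        "total_results": total_results,
        "total_pages": ceil(total_results / page_size),
        "results": results,
    }


qs = FakeQuerySet(list(range(25)))
page = paginate(qs, qs, "10:1:0", on_result=None)
print(page["results"])      # rows 10..19
print(page["next_cursor"])  # "10:2:0"
print(page["total_pages"])  # 3
```

The client never computes offsets itself: it simply echoes back `next_cursor` (or `prev_cursor`) from the previous response, which is why the wire format only needs the page size and page number.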