API call time optimization in a data-rich iOS app
Introduction and problem statement
Recently, I was assigned a JIRA task to optimize the time taken by a series of API calls in a client-facing application. The app is driven by a Content Management System (CMS), and that data is fetched using GraphQL. Additionally, multiple REST APIs are used to fetch product data, which is also data-heavy. The app is localized in English and Arabic, and every time the user switches between languages, a series of API calls is made to store fresh copies of the data models. The issues identified were as follows:
- When the app is launched for the first time, the user is asked for their preferred app language. This triggers the language-change method, producing a significant delay for the user and potentially losing their interest.
- Switching languages takes over a minute, which is not acceptable even though it is not something the user would repeat often. The goal, therefore, was to minimize the delay caused by these API calls, especially on first launch, and to bring the language-switch time down to an acceptable level regardless of how infrequently the action is performed.
So, armed with my MacBook, the internet, and a really unhealthy habit of not being able to give up on a problem, I started my quest to reduce the language-switch time to something more acceptable.
GraphQL query complexity
GraphQL is a query language commonly used to fetch data from APIs. It allows clients to define the structure of the data they need, and the server returns only the requested data. This is in contrast to REST, where the response shape is fixed by the server and the client often has to make multiple requests to different endpoints to retrieve all the necessary information.
One of the benefits of using GraphQL is a more efficient data-retrieval process, since it reduces the number of API calls required. However, many GraphQL servers also enforce a query complexity limit: a measure of the computational resources required to execute a query. This limit is in place to prevent overly complex queries from overwhelming the server and causing performance issues.
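To make that concrete, here is an illustrative query (the type and field names are made up): many servers score a query roughly by the number of entities it could return, so nesting collections multiplies the cost.

# With a page size of 100 at each level, this query could touch up to
# 100 * 100 nested entries, so its complexity score balloons even though
# the query text itself is short. (Exact scoring varies by server.)
query fetchCatalogue($lang: String!) {
  categoriesCollection(locale: $lang, limit: 100) {
    items {
      name
      productsCollection(limit: 100) {
        items {
          title
          price
        }
      }
    }
  }
}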
In our case, the app is data-rich and uses GraphQL to fetch a lot of data at once from a single endpoint. Because there are many instances of one data model, each with several levels of nested attributes, the query complexity limit was quickly reached. To help with this, we used Apollo for Swift, a popular GraphQL client for iOS. Apollo has a built-in caching mechanism that reduces secondary API calls by storing the results of previous queries. This greatly improves the performance of the app by cutting the number of requests made to the server and allowing faster data retrieval.
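Here is a rough sketch of how that looks with Apollo iOS. The endpoint URL and `FetchAllItemsQuery` (the type Apollo's code generation would produce for the query shown later) are placeholders.

import Apollo
import Foundation

// Placeholder endpoint; the real app points this at its CMS's GraphQL API.
let client = ApolloClient(url: URL(string: "https://example.com/graphql")!)

func loadItems() {
    // `.returnCacheDataElseFetch` serves a previously stored result when one
    // exists and only goes to the network on a cache miss, which is what
    // trims the number of secondary API calls.
    client.fetch(query: FetchAllItemsQuery(lang: "en", limit: 20, skip: 0),
                 cachePolicy: .returnCacheDataElseFetch) { result in
        switch result {
        case .success(let graphQLResult):
            print("Loaded items: \(String(describing: graphQLResult.data))")
        case .failure(let error):
            print("Failed to load items: \(error)")
        }
    }
}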
Another technique we applied was reducing the query complexity itself.
- One technique for breaking down queries is to split them up by functionality. For example, if a query retrieves data for a user profile, it can be split into two smaller queries: one for the user’s basic information, and another for their more detailed, profile-specific data. This allows the server to handle the queries separately, reducing the complexity of each one.
- Another technique is to use fragments. Fragments allow you to reuse the same fields across multiple queries, reducing the amount of duplicated code. This can help to simplify complex queries by breaking them down into smaller, reusable chunks (see the fragment sketch after the pagination query below).
- Finally, we used pagination to break the large data set into smaller chunks. By using limit and skip variables, as shown in the query below, we retrieved only a fixed number of elements at a time and skipped over those already fetched, reducing the complexity of each query.
query fetchAllItems($lang: String!, $limit: Int!, $skip: Int!) {
  itemsCollections(locale: $lang, limit: $limit, skip: $skip) {
    # Illustrative selection set; the real query lists only the fields the screen needs
    items {
      title
    }
  }
}
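To illustrate the fragment technique mentioned above, a shared fragment lets the split-up queries reuse the same field selection. The type and field names below are made up for illustration.

fragment ItemSummary on Item {
  title
  imageUrl
  price
}

query fetchFavourites($lang: String!) {
  favouriteItemsCollection(locale: $lang, limit: 10) {
    items {
      ...ItemSummary
    }
  }
}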
Strategically using GCD QoS
`DispatchQueue` is a powerful tool that lets developers interact with Grand Central Dispatch (GCD) to manage the execution of their code. GCD is a technology for managing the execution of code across multiple threads, and `DispatchQueue` is the primary way to interact with it. It’s important to note that `DispatchQueue` is not the only way to use GCD, but it is the most common and easiest.
One of the key features of `DispatchQueue` is the ability to specify a quality of service (QoS) for a given piece of work. QoS indicates the importance and priority of a task, and there are four main levels you will typically choose from, listed below in order of priority:
- User Interactive: Tasks that are associated with providing a responsive and smooth user interface, such as animations and user input handling.
- User Initiated: Tasks that are initiated by the user and are important to the user, such as saving a document or sending an email.
- Utility: Tasks that are not directly related to the user experience but are still important, such as data compression or encryption.
- Background: Tasks that are not directly related to the user experience and can be done at a lower priority, such as indexing or backups.
When you specify the QoS for a piece of work, the system uses this information to determine the appropriate threads and resources to allocate to it. For example, a task with a QoS of User Interactive will be given a higher priority and allocated more resources than a task with a QoS of Background.
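As a small sketch of how that priority is expressed in code (the queue label and the work inside the closures are illustrative):

import Foundation

// A dedicated queue whose work the system schedules as user-initiated.
let syncQueue = DispatchQueue(label: "com.example.app.sync", qos: .userInitiated)

syncQueue.async {
    // Work the user explicitly asked for, e.g. refreshing after a pull-to-refresh.
}

// One-off housekeeping can also go straight to a global queue with an explicit QoS.
DispatchQueue.global(qos: .background).async {
    // e.g. cleaning up stale cache files.
}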
We use a `CacheManager` for managing network requests and caching data in the app. The `CacheManager` adds a layer on top of the network requests module, which is responsible for fetching data from a REST API. The primary purpose of the `CacheManager` is to improve the performance of the app by reducing the number of API calls and avoiding delays caused by waiting for data to be fetched from the API.
When the `CacheManager` is used to fetch data from the API, it first checks if a cached copy of the data is available. If a cached copy is found, the `CacheManager` will return the cached data to the client. This allows the app to continue functioning even when a network connection is not available or is slow. However, the `CacheManager` still makes an API request in the background to fetch the most recent data. This is done to ensure that the app always has the most up-to-date information, even if the user is not actively using the app.
It’s important to note that this API call is less critical from a user-experience standpoint, because the user will not be staring at a blank screen while waiting for new data; they will see the cached version instead. Therefore, we can comfortably use the `utility` QoS for this secondary request.
However, when no cache is available, it is important to execute this API call as soon as possible, because the user will see a blank screen until fresh data is fetched from the API. So we make this call with the `userInteractive` QoS to give it the highest priority.
if cache.isEmpty {
    // If the cache is empty it is important that we call the API immediately
    // to preserve the user experience. We call self.fetchData on a global
    // queue with the `userInteractive` QoS, the highest-priority
    // quality of service.
    DispatchQueue.global(qos: .userInteractive).async {
        self.fetchData { result in
            switch result {
            case .success(let data):
                completion(.success(data))
            case .failure(let error):
                completion(.failure(error))
            }
        }
    }
} else {
    // The cached copy is returned straight away, so the user sees content immediately.
    completion(.success(cache))
    // Since the user already sees something, there is no need to hurry the refresh.
    // We use the `utility` QoS, which has a lower priority.
    DispatchQueue.global(qos: .utility).async {
        self.fetchData { result in
            // Quietly refresh the stored copy when new data arrives (assuming
            // `cache` is the manager's stored property); failures are ignored
            // here because cached data has already been delivered.
            if case .success(let data) = result {
                self.cache = data
            }
        }
    }
}
Conclusion
By applying proper thread management, prioritizing queues appropriately, and breaking down complex GraphQL queries, we were able to achieve significant performance improvements across the board. As a result, the app is now the fastest it has ever been, even though it is still data-rich, and it feels snappy and responsive. I never thought the seemingly boring CS classes on computer architecture and operating systems would come in handy, but they turned out to be extremely useful in optimizing this app’s performance.