This article describes how to use the API to perform health checks. Throughout the article, a demo application implementing this series of actions - the API flow - is used as an example. The purpose of each call needed to perform a typical health check is clarified. Front-end actions and UI elements that can facilitate the process for the end user are also discussed; these can serve as a guide for integrating the API into your platform.
The figure below, which shows the flow to complete a health check in the demo app, will be referenced in each of the steps.
The IntelliProve product is an API-first solution and is thus offered as an API or through one of our SDKs. The mobile application mentioned here only serves as an example/demo of how you, as a customer, can integrate the API into your own mobile app, web app or software. If you've been in touch with us before and want to easily test out our product before starting an actual integration, you can get temporary access to the mobile app. Don't hesitate to reach out to your point-of-contact to request access.
When the app is launched, the user first receives a series of instructions on how to perform the health check correctly. Refer to the Product instructions for a comprehensive overview. When integrating our product into your own platform or app, we recommend also showing these instructions to the user, in order to increase the chance of them passing the conditions check. We come back to this in the next step.
After pressing continue, the banner goes down and the health scan screen becomes visible. Here, the user can start the health scan by pressing the red measurement button.
When pressing the red button, the recording setup is first reviewed. This check, also referred to as the 'quality check', ensures that the recording setup meets a few requirements for a high-quality measurement. This includes, for example, checking whether there is sufficient lighting, checking the face-camera distance, or analyzing the user's motion. The check is performed based on a single frame of the user's face.
To perform the check, the check conditions endpoint of the API can be used. In the app, a loading sign indicates the quality check is in progress.
The response of the request can be used to show relevant instructions to the user when the conditions are not met. In that case, the UI element that outlines the face will turn red in our demo app - as shown in the figures below. You can use a similar visual clue in your own platform or app.
The quality check is blocking: in other words, the measurement can only be started when the check is positive, i.e. the quality conditions are met. In that case, the response from the API will contain a signature, which can be used in the next step.
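The handling of the check-conditions response described above can be sketched in Python. Note that the exact field names (`conditions_met`, `signature`, `hints`) are assumptions for illustration, not the official API contract; consult the check conditions endpoint documentation for the real response schema.

```python
def parse_quality_check(body):
    """Interpret a (hypothetical) check-conditions response body.

    Returns (ready, payload): ready=True together with the signature
    when the conditions are met, otherwise False together with a list
    of hints the UI can show (e.g. "increase lighting").
    NOTE: field names here are illustrative assumptions.
    """
    if body.get("conditions_met"):
        # The signature unlocks the next step (requesting an upload URL).
        return True, body["signature"]
    # Conditions failed: surface hints so the UI can instruct the user,
    # e.g. turn the face outline red and show the relevant instruction.
    return False, body.get("hints", [])
```

In the demo app, a failed check drives the red face outline and the instruction overlay; a passed check stores the signature and starts the recording.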
When the quality check succeeds, the UI element outlining the face turns green and the app automatically starts recording. The recommended recording time is 30 seconds (also see the Technical requirements). A timer indicates the remaining recording time to the user. During the measurement, the user should keep their face still and look straight into the camera. More information can be found in the Product instructions.
Once the recording has been finished, the video needs to be processed in the cloud.
For the user, this is a seamless process. The app shows a loading sign while uploading the video and automatically sends a request to process it once the upload is finished, as shown below. As soon as the processing is done, the app shows the results, which is outlined in the next section.
Behind the scenes, this part consists of three steps in the API.
In order to register a new measurement with the cloud and have the video processed, it first needs to be uploaded to our cloud. Keep in mind that you need the signature from the previous step (check conditions) to be able to request an upload URL. The Get Upload URL endpoint can be used to do this. The response of this request includes the URL, which will be used to upload the video; the unique identifier (UUID) for this measurement, which will be needed for the subsequent requests; and authentication information needed in the next step.
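A minimal sketch of this request, assuming a `requests`-style session object. The path, the signature header name, and the response field names are illustrative assumptions; refer to the Get Upload URL endpoint in the API docs for the actual contract.

```python
def get_upload_url(session, base_url, signature):
    """Exchange the quality-check signature for an upload location.

    Returns (upload_url, measurement_uuid, auth_info). The UUID is
    reused in every subsequent request for this measurement.
    NOTE: the path, header name, and field names below are assumptions.
    """
    resp = session.post(
        base_url + "/measurements/upload-url",  # assumed path
        headers={"X-Signature": signature},     # assumed header name
    )
    body = resp.json()
    return body["upload_url"], body["uuid"], body["auth"]
```

Persisting the returned UUID client-side is worthwhile: it identifies the measurement in the process and results calls that follow.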
With the obtained URL and authentication info, the video can now be uploaded using the Upload video endpoint.
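The upload itself might look like the sketch below. The use of an HTTP PUT and of header-based authentication are assumptions here; the Upload video endpoint documentation specifies the exact method and how the auth info from the previous response must be attached.

```python
def upload_video(session, upload_url, auth_headers, video_bytes):
    """Upload the recorded video to the URL obtained in the previous step.

    NOTE: PUT and header-based auth are assumptions for this sketch.
    Returns True when the server reports success (2xx status).
    """
    resp = session.put(upload_url, headers=auth_headers, data=video_bytes)
    return 200 <= resp.status_code < 300
```

In the demo app this runs while the loading sign is shown, and the process request is only sent once this call reports success.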
With the video now uploaded, a request can be sent to start the actual processing of the video. To specify which video to process, the UUID obtained before is sent as a path parameter with the request. Refer to the Process video endpoint in the API docs for more detailed information.
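Starting the processing is then a single request carrying the UUID as a path parameter, as in this sketch. The path segment names are assumptions; only the UUID-as-path-parameter shape comes from the description above.

```python
def start_processing(session, base_url, measurement_uuid):
    """Kick off cloud processing for an uploaded video.

    The measurement UUID travels as a path parameter; the surrounding
    path segments here are illustrative assumptions.
    Returns True when the request was accepted (2xx status).
    """
    resp = session.post(f"{base_url}/measurements/{measurement_uuid}/process")
    return 200 <= resp.status_code < 300
```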
With the processing now finished, the results can be fetched from the API and displayed in the app. Use the Get results endpoint and pass the UUID as the request path parameter. The results are returned in JSON format. The demo app displays the results as shown below. You can find information about the values that are returned in the response on the Insights & biomarkers page.
Keep in mind that if the UUID that was passed refers to a video that is still being processed, the API will poll the database for up to 20 seconds while waiting for the results to become available, and will send the response as soon as the results are available. This means that the Get results request can be sent immediately after the Process video request; the API will take care of the polling for results. Only when the results are still not available after 20 seconds - for example in case the video just hasn't been processed yet - does the API respond with an HTTP 204, indicating that no results were found.
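A client can therefore treat HTTP 204 as "not ready yet, retry" rather than as an error. A minimal sketch, again assuming a `requests`-style session and an illustrative path:

```python
def get_results(session, base_url, measurement_uuid):
    """Fetch results for a measurement; the API itself waits up to ~20 s.

    Returns the parsed JSON results, or None on HTTP 204 (results not
    yet available), in which case the caller can simply call again.
    NOTE: the path segments here are assumptions for this sketch.
    """
    resp = session.get(f"{base_url}/measurements/{measurement_uuid}/results")
    if resp.status_code == 204:
        # No results yet: the video likely hasn't finished processing.
        return None
    return resp.json()
```

Because the server already blocks for up to 20 seconds, the client-side retry loop can be slow and simple; there is no need for aggressive polling.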
The video linked here shows all the steps covered above as they happen in the demo app.