Changelog

Follow up on the latest improvements and updates.


The "Resend of Rejected Information" project focuses on improving the delivery of information to customers by implementing a system that allows storing events and summaries, which have not been received by our customers, in specific buckets so they can request and retrieve any pending documents proactively. In this way, our aim with the project is to mitigate information loss and ensure effective data delivery to customers. The project also includes implementing a retry process before storing the documents in the bucket, which consists of notifications to the customer to make them aware of pending events and webhooks awaiting acceptance.
The operation of the "Resend of Rejected Information" project can be described in the following steps:
  • Initial document sending: When events and summaries are sent to the customer via webhook, a retry process is initiated if the document is not received correctly on the first attempt.
  • Retry process: A retry flow is established, including three delivery attempts of the document to the customer after 2 hours, 24 hours, and 48 hours. If the customer does not receive the document after these retries, the document is moved to the storage bucket of the corresponding environment (sandbox or production).
  • Storage in buckets: Events and summaries that could not be delivered to the customer are stored in designated buckets for each environment, allowing the customer to request the resend of the stored information if necessary. This information will be available for proactive customer requests for 3 days for the sandbox bucket and 10 days for the production bucket.
  • Customer notifications: Notifications will be sent via webhook to inform customers about the existence of pending events and summaries awaiting acceptance in the buckets.
  • Preference management: Customers can manage their notification preferences and may request not to receive these retry notifications. However, unreceived events and summaries will still be stored in the bucket under the same time rules.
This project ensures that events and summaries not sent to customers are securely stored in production or sandbox buckets, allowing customers to request pending information in case of initial delivery failures. Additionally, notification mechanisms are established to ensure effective data delivery to customers.
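The retry-then-store flow above can be pictured as a small scheduler. The sketch below is only an illustration, assuming a hypothetical deliver() callable and the intervals and retention windows listed above; it does not describe ROOK's internal implementation.

```python
import time
from datetime import timedelta

# Hypothetical policy values mirroring the flow described above.
RETRY_DELAYS = [timedelta(hours=2), timedelta(hours=24), timedelta(hours=48)]
BUCKET_RETENTION = {"sandbox": timedelta(days=3), "production": timedelta(days=10)}

def store_in_bucket(document: dict, environment: str, retention: timedelta) -> None:
    # Placeholder for moving the document to the environment's storage bucket.
    print(f"Stored in {environment} bucket for {retention.days} days: {document.get('id')}")

def handle_undelivered(document: dict, environment: str, deliver) -> None:
    """Retry delivery at the documented intervals, then park the document in a bucket."""
    for delay in RETRY_DELAYS:
        time.sleep(delay.total_seconds())  # illustrative; a real system would schedule jobs instead
        if deliver(document):              # deliver() returns True once the customer accepts the document
            return
    store_in_bucket(document, environment, retention=BUCKET_RETENTION[environment])
```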
Frequently Asked Questions
What is the Resend of Rejected Information project about?
The goal of Resend of Rejected Information is to improve the delivery of information to customers by ensuring they receive the maximum amount of data about their users; the aim is to maintain communication even when there is no response from the customer's side.
How is information loss prevented?
The objective is to establish a system where information sent to customers is stored in buckets to allow them to request and retrieve any pending documents, thus mitigating the risk of information loss.
How long are documents that the customer did not receive stored?
Events and summaries not received by customers will be stored in their respective buckets within their environment. For the production bucket, events and summaries will be stored for 10 days, and for the Sandbox bucket, they will be stored for 3 days. After this period, they will be deleted from the buckets, but the customer can still request the events and summaries. This has to be requested directly from the account manager and will incur an additional cost.
What is the process for handling resend attempts before storing documents in the bucket?
Before storing documents in the specific bucket of the environment, a retry process is initiated, which involves two attempts to send the document to the customer at intervals of 2 hours and 24 hours after the first notification. If the customer does not receive the document after the retries, then it is moved to the respective bucket.
How does the project ensure effective communication with customers regarding pending documents?
The system will use webhooks to notify customers when events and summaries are queued for delivery and are not successfully received. Customers can also request the resend of pending documents through the designated endpoint.
How does the project differentiate between sandbox and production environments in terms of document delivery?
Separate buckets and endpoints will be established for sandbox and production environments to ensure different handling of documents for each environment.
How can I subscribe to this parallel notification webhook?
You will need to send us a URL where you want the notifications to be sent; this process will be manual for the time being.
A changelog is a tool for timely and clear communication about new changes, in this case, regarding our SDKs. It serves as a historical record of updates, improvements, bug fixes, and new features implemented in the product.
The importance of the changelog lies in several aspects:
  • It provides transparency to users, clients, and collaborators about what changes are being made in the respective SDK.
  • It makes updating to new versions easier and provides important information about them.
  • With each SDK update, there may be changes in the interface and functionality that could affect compatibility with previous versions.
How to identify the changes in our SDKs?
To manage the changes in our SDKs, we have defined update rules to maintain semantic versioning for an optimal experience.
This guide will help you quickly interpret and understand what types of changes have been made in our SDKs through the Changelog:
  • Version changes (X.1.1): Compatibility changes or changes in data communication or transmission.
  • Major changes (1.X.1): Primarily includes changes to address significant product errors or security improvements.
  • Minor changes (1.1.X): Includes minor changes to enhance the customer experience.
To view the Changelog of our SDKs, please refer to the documentation of each SDK at the following link.
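As a quick illustration of how these positions map to change types, here is a minimal sketch; the classification strings simply mirror the list above and the helper name is illustrative.

```python
def classify_change(old_version: str, new_version: str) -> str:
    """Classify an SDK update according to which version position changed."""
    old_major, old_mid, old_minor = (int(p) for p in old_version.split("."))
    new_major, new_mid, new_minor = (int(p) for p in new_version.split("."))
    if new_major != old_major:
        return "version change: compatibility or data-transmission changes"
    if new_mid != old_mid:
        return "major change: significant error fixes or security improvements"
    if new_minor != old_minor:
        return "minor change: customer-experience improvements"
    return "no change"

print(classify_change("1.1.1", "2.1.1"))  # version change
print(classify_change("1.1.1", "1.2.1"))  # major change
print(classify_change("1.1.1", "1.1.2"))  # minor change
```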
HTTP response status codes indicate whether a specific HTTP request has been completed successfully. Responses are grouped into five classes (a minimal handling sketch follows the list):
  • 100: Informational responses
  • 200: Successful responses
  • 300: Redirects
  • 400: Client errors
  • 500: Server errors
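When consuming an API, it is often enough to branch on the class of a response rather than on every individual code. The snippet below is a minimal sketch using the requests library and a placeholder URL; it is not a documented ROOK endpoint.

```python
import requests

def describe_status(status_code: int) -> str:
    """Map a status code to its class, following the grouping above."""
    classes = {
        1: "informational",
        2: "successful",
        3: "redirect",
        4: "client error",
        5: "server error",
    }
    return classes.get(status_code // 100, "unknown")

# Illustrative call; the URL is a placeholder, not a real ROOK endpoint.
response = requests.get("https://api.example.com/v2/summaries", timeout=10)
print(response.status_code, describe_status(response.status_code))
```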
Informational responses
  • 100 - Continue:
    This tentative response indicates that everything so far is fine and that the client should continue with the request or ignore it if it is already finished.
  • 101 - Switching Protocol:
    Indicates that the server accepts the protocol change proposed by the user agent.
  • 102 - Processing:
    This code indicates that the server has received the request and is still processing it, so no response is available.
  • 103 - Early Hints:
    Allows the user agent to start preloading resources while the server prepares a response.
Successful responses
  • 200 - OK:
    The request has succeeded. The meaning of success depends on the HTTP method.
  • 201 - Created:
    The request was successful and a new resource was created as a result. This is typically the response sent after a POST request, or after some PUT requests.
  • 202 - Accepted:
    The request has been received, but has not yet been acted upon. It is an "uncommitted" request, meaning that there is no way in HTTP to allow an asynchronous response to be sent indicating the result of processing the request. It is intended for cases where another process or server handles the request, or for the batch processing.
  • 203 - Non-Authoritative Information:
    The request has been completed successfully, but its content has not been obtained from the originally requested source; instead, it comes from a local copy or a third party. Except for this condition, a 200 OK response should be preferred.
  • 204 - No Content:
    The request completed successfully but its response has no content, although headers may be useful. The user agent can update its cached headers for this resource with the new values.
  • 205 - Reset Content:
    The request has been completed successfully, but its response has no content; in addition, the user agent must reset the view from which the request was made. This code is useful, for example, for form pages whose content must be cleared after the user submits them.
  • 206 - Partial Content:
    The server is delivering only part of the requested content. This feature is used by download tools such as wget to resume previously interrupted transfers, or to split a download and process the parts simultaneously.
  • 207 - Multi-Status:
    A Multi-Status response conveys information about multiple resources in situations where multiple status codes might be appropriate. The body of the response is an XML message.
  • 208 - Already Reported:
    The members of a DAV binding have already been listed in a previous part of the response, so they will not be listed again.
  • 226 - IM Used:
    The server has fulfilled a GET request for the resource and the response is a representation of the result of one or more instance manipulations applied to the current instance.
Redirects
  • 300 - Multiple Choice:
    This request has more than one possible response. The user agent or the user must choose one of them. There is no standardized way to select one of the responses.
  • 301 - Moved Permanently:
    This response code means that the URI of the requested resource has been changed. A new URI will probably be returned in the response.
  • 302 - Found:
    This response code means that the URI of the requested resource has been changed temporarily. Further changes to the URI might be made in the future. Therefore, the same URI should be used by the client in future requests.
  • 303 - See Other:
    The server sends this response to direct the client to a new resource requested at another address using a GET request.
  • 304 - Not Modified:
    This is used for "caching" purposes. It tells the client that the response has not been modified. The client can then continue using the same version stored in its cache.
  • 305 - Use Proxy:
    It was defined in a previous version of the HTTP protocol specification to indicate that a requested response must be accessed from a proxy. It has been deprecated due to security concerns associated with configuring a proxy.
  • 307 - Temporary Redirect:
    The server sends this response to direct the client to obtain the requested resource at another URI with the same method used in the previous request. It has the same semantics as the HTTP 302 Found response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, another POST must be used in the second request.
  • 308 - Permanent Redirect:
    It means that the resource is now permanently located at another URI, specified by the Location: HTTP header response. It has the same semantics as the 301 Moved Permanently HTTP response code, with the exception that the user agent must not change the HTTP method used: if a POST was used in the first request, another POST must be used in the second request.
Client errors
  • 400 - Bad Request:
    This response means that the server could not interpret the request due to invalid syntax.
  • 401 - Unauthorized:
    Authentication is required to obtain the requested response. This is similar to 403, but in this case, authentication is possible.
  • 402 - Payment Required:
    This response code is reserved for future use. The initial objective of creating this code was to be used in digital payment systems. However, it is not currently being used.
  • 403 - Forbidden:
    The client does not have the necessary permissions for certain content, so the server is refusing to provide an appropriate response.
  • 404 - Not Found:
    The server could not find the requested content. This response code is one of the most famous given its high occurrence on the web.
  • 405 - Method Not Allowed:
    The requested method is known to the server but has been disabled and cannot be used. The two required methods, GET and HEAD, should never be disabled and should not return this error code.
  • 406 - Not Acceptable:
    This response is sent when the server, after applying server-driven content negotiation, does not find any content matching the criteria given by the user agent.
  • 407 - Proxy Authentication Required:
    This is similar to the 401 code, but the authentication must be done through a proxy.
  • 408 - Request Timeout:
    This response is sent on an idle connection on some servers, even without any prior request from the client. It means that the server wants to disconnect this unused connection. This response is widely used since some browsers, such as Chrome, Firefox 27+, or IE9, use HTTP pre-connection mechanisms to speed up browsing. Also keep in mind that some servers simply disconnect the connection without sending this message.
  • 409 - Conflict:
    This response can be sent when a request conflicts with the current state of the server.
  • 410 - Gone:
    This response can be sent when the requested content has been deleted from the server.
  • 411 - Length Required:
    The server rejects the request because the Content-Length header field is not defined and the server requires it.
  • 412 - Precondition Failed:
    The client has indicated pre-conditions in its headers which the server does not meet.
  • 413 - Payload Too Large:
    The request entity is longer than the limits defined by the server; the server can close the connection or return a Retry-After header field.
  • 414 - URI Too Long:
    The URI requested by the client is longer than the server is willing to interpret.
  • 415 - Unsupported Media Type:
    The multimedia format of the requested data is not supported by the server, so the server rejects the request.
  • 416 - Range Not Satisfiable:
    The range specified by the Range header field in the request cannot be satisfied; the range may be outside the size of the target URI's data.
  • 417 - Expectation Failed:
    It means that the expectation indicated by the requested Expect header field cannot be met by the server.
  • 421 - Misdirected Request:
    The request was directed to a server that is not able to produce a response. This can be sent by a server that is not configured to produce responses for the combination of scheme and authority included in the request URI.
  • 422 - Unprocessable Entity:
    The request was well formed but could not be followed due to semantic errors.
  • 423 - Locked:
    The resource being accessed is locked.
  • 424 - Failed Dependency:
    The request failed due to the failure of a previous request.
  • 426 - Upgrade Required:
    The server refuses to implement the request using the current protocol but may be willing to do so after the client upgrades to a different protocol. The server sends an Upgrade header in a response to indicate the required protocols.
  • 428 - Precondition Required:
    The origin server requires the request to be conditional. It is intended to prevent 'lost update' problems, where a client GETs a resource's state, modifies it, and PUTs it back to the server, while a third party has modified the state on the server, leading to a conflict.
  • 429 - Too Many Requests:
    The user has submitted too many requests in a given period of time.
  • 431 - Request Header Fields Too Large:
    The server is unwilling to process the request because the header fields are too large. The request may be resubmitted after reducing the size of the request header fields.
Server errors
  • 500 - Internal Server Error:
    The server has encountered a situation that it doesn't know how to handle.
  • 501 - Not Implemented:
    The requested method is not supported by the server and cannot be handled. The only methods that servers require to support (and therefore should not return this code) are GET and HEAD.
  • 502 - Bad Gateway:
    This error response means that the server, while working as a gateway to obtain a response necessary to handle the request, got an invalid response.
  • 503 - Service Unavailable:
    The server is not ready to handle the request. Common causes could be that the server is down for maintenance or is overloaded.
  • 504 - Gateway Timeout:
    This error response is given when the server is acting as a gateway and cannot get a response in time.
  • 505 - HTTP Version Not Supported:
    The HTTP version used in the request is not supported by the server.
  • 506 - Variant Also Negotiates:
    The server has an internal configuration error: Transparent content negotiation for the request results in a circular reference.
  • 507 - Insufficient Storage:
    The method could not be performed on the resource because the server is unable to store the representation needed to complete the request successfully.
  • 508 - Loop Detected:
    The server detected an infinite loop while processing the request.
  • 510 - Not Extended:
    Additional extensions to the request are required for the server to fulfill.
  • 511 - Network Authentication Required:
    Status code 511 indicates that the client needs to authenticate to gain access to the network.
Granular data refers to specific details about health metrics, such as heart rate, blood pressure, and blood oxygen levels. This information is essential for in-depth analysis, but payloads that include granular data can be very large.
This project aims to optimize the structure of health data delivered through webhooks by removing granular data. This benefits clients who, due to their technology, cannot process large JSON files. In this release, we will focus on the files delivered through our webhooks.
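Conceptually, the optimization removes the granular blocks from the webhook payload before delivery. The sketch below illustrates the idea only; the key names are hypothetical and may not match ROOK's actual JSON structure.

```python
# Hypothetical key names; the real ROOK payload fields may differ.
GRANULAR_KEYS = {"heart_rate_granular_data", "blood_oxygen_granular_data", "speed_granular_data"}

def strip_granular(payload: dict) -> dict:
    """Return a copy of the payload without granular data blocks, recursing into nested objects."""
    cleaned = {}
    for key, value in payload.items():
        if key in GRANULAR_KEYS:
            continue  # drop the large granular arrays
        cleaned[key] = strip_granular(value) if isinstance(value, dict) else value
    return cleaned
```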
Frequently Asked Questions:
Do these changes apply to all ROOKConnect modules?
No, the changes will apply to version 2 of the data structure, only in webhooks.
What is granular data?
Granular data refers to specific details about health metrics.
Example: Cycling speed is represented using granular data. A graph of this data shows the speed in km/h that the user maintained throughout the workout.
Can it be configured individually per environment?
Yes, it can be configured if granular data will be received.
Will I still be able to receive granular data?
Yes. The removal of granular data is applied on request, so if you wish to continue receiving granular data, you don't need to request anything.
Can I select only certain granular data that I want to receive and discard the rest?
No, currently we do not have available customization of the granular data structure.
If I enable this functionality, do I lose granular data?
No. If you make the query via API, you will receive the entire JSON in full, including the granular data.
This project will focus on providing customers with real-time logs of their queries to track their integration with ROOK. These logs include information about event queries and user summaries. Implementing these logs efficiently can facilitate issue resolution and enhance the overall quality and independence of our product.
The project will feature a new module, "Logs," in the ROOK Portal. There, you can view, directly and in real time, the history of queries made through the integration of our APIs and Webhooks with ROOKConnect. You will be able to review the following data:
  • Date and time of the query
  • Log type
  • Linked User ID
  • Query status
  • Source of the consulted data
  • Access to the JSON of each successfully delivered query of events and summaries.
The benefits of this project include:
  • Facilitating integration issue resolution.
  • Enhancing the overall quality of the product.
  • Making information easier to find.
  • Providing users with greater independence by granting access to detailed information.
  • Accessing JSON data from past events and summaries queries.
Frequently Asked Questions
What is the main goal of the implementation project for the logs module in the ROOK Portal?
The main goal of this project is to provide our clients with the ability to access real-time logs that track integration with ROOK. These logs contain detailed information about event queries, as well as user summaries.
What data will be visible through the new Logs module in the ROOK Portal?
Through the new Logs module, users will be able to view key information, including the date and time of the query, the log type, the associated User ID, the query status, and the consulted data source. Additionally, they will have the ability to review specific details of each successful query delivered in the form of events and summaries, with access to the corresponding JSON files.
How will this project facilitate integration issue resolution?
The efficient implementation of real-time logs will allow quick and accurate identification of any integration issues.
What are the key benefits for clients in accessing the Logs module?
Clients will benefit from increased independence by easily accessing detailed information about their queries. This will provide them with the ability to autonomously resolve issues, improve the quality of their integration, and quickly find the information they need, contributing to a more efficient and autonomous experience.
What advantages does the ability to query past JSON events and summaries offer?
The ability to query past JSON events and summaries provides clients with complete access to historical information about their users. This not only facilitates the review of previous queries but also allows for retrospective analysis, serving to improve processes and optimize future integrations.
What types of logs are there?
Currently, we work with two types of logs, which vary according to the pillar being queried. These log types are events and summaries.
What are the statuses handled by the module?
We work with HTTP response status codes, which are:
  • 200 - Successful
  • 201 - Created
  • 202 - Accepted
  • 203 - Non-Authoritative Information
  • 204 - No content
  • 205 - Reset content
  • 206 - Partial content
  • 300 - Multiple choice
  • 301 - Moved Permanently
  • 302 - Found
  • 303 - See Other
  • 304 - Not modified
  • 400 - Bad Request
  • 401 - Unauthorized
  • 404 - Not Found
Can I export the logs?
Yes, the queried logs can be exported in a CSV file.
Can I check the JSON of the logs?
Currently, you can only check the logs for events and summaries with a 200 - Successful response. If it doesn't meet any of these conditions, the logs will not have a detailed view or a JSON.
To use this feature, a "status" system is introduced.
  • Status 0 (default) requires the client to manually link their devices.
  • Status 1 enables automatic device linking.
In Status 1, six specific data sources are automatically linked: Garmin, Polar, Withings, Fitbit, Oura, and Whoop. Device unlinking is possible, but re-linking must be done manually.
This approach aims to address the Demo_User functionality and provide clients with an automated and efficient solution for testing with real data from real users.
FAQs
What is the proposed solution?
When a client creates their credentials and starts the integration, they will have the option to allow our team to automatically link to their "connection page". This will allow the sending of real information from a real user, facilitating the visualization of behavior.
What are the requirements to use this feature?
This feature must be activated by a status, since not all clients will want to use it. "Status 0" (default) means that we will not automatically link to the client's "connection page", and they will be responsible for linking their devices. By changing to "Status 1", automatic device linking is activated. To have this alternative, contact our team so they can enable it.
How are these links reflected in the customer portal?
If they have Status 1, the client will see in their portal (dashboard) that they have 6 linked data sources and will start to see the information that arrives from each of them.


Health Score

Duplicity

The goal of the Duplicity Project is to facilitate an uninterrupted flow of data from users who are connected to multiple data sources. This capability empowers our clients to make informed decisions based on their users' data, assured of its superior quality. The project's architecture is built around critical elements such as data prioritization, event generation, summary creation, and the cleansing of data within these summaries. This approach ensures that our clients have access to the most relevant and accurate information, enabling them to take proactive and effective actions based on user data.
Data Prioritization
Rook's method for managing multiple data sources begins with the crucial step of data prioritization. The accompanying table illustrates our ranking of data sources across our 3 health pillars. This ranking is based on the quality and comprehensiveness of the data each source provides in relation to the specific metrics constituting each health pillar.
The table's arrangement, from top to bottom, reveals a clear preference hierarchy. Data sources directly linked to biometric devices take precedence over health kits. Additionally, the hierarchy within these data sources is evident. For instance, Garmin is the top choice for data related to Physical Health, whereas Oura is the foremost source for sleep-related data.
Event Generation
ROOK provides refined, processed data from our health categories in the form of events and summaries. The Data Duplicity feature in ROOK has a defined method for managing duplicate data points related to a particular event. Priority is assigned to the first recorded event. Later events from different data sources, occurring within a close time range, are disregarded. We permit a time frame of plus or minus 10 minutes to capture a maximum of two successive events within this interval. In this procedure, priority is given to data sources that are directly connected to wearable devices over health kits and SDK extractions.
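A minimal sketch of that rule is shown below, assuming each event is a dictionary with a timestamp and a source-priority rank (lower rank = preferred, i.e. wearable-connected sources first). It keeps a single event per window for brevity and is not ROOK's internal implementation.

```python
from datetime import timedelta

WINDOW = timedelta(minutes=10)  # +/- 10 minutes, as described above

def deduplicate_events(events: list[dict]) -> list[dict]:
    """Keep the first recorded event and drop later events from other sources
    that fall within the +/- 10 minute window. Each event needs a 'timestamp'
    (datetime) and a 'source_rank' (int, lower = higher priority)."""
    ordered = sorted(events, key=lambda e: (e["timestamp"], e["source_rank"]))
    kept: list[dict] = []
    for event in ordered:
        if any(abs(event["timestamp"] - k["timestamp"]) <= WINDOW for k in kept):
            continue  # a nearby event was already recorded
        kept.append(event)
    return kept
```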
Summary Generation
The creation of summaries relies on the data available at the time of calculation. Initially, a summary is formed using the first data source received. This summary is then enhanced with extra data from other sources. Once the initial summary is dispatched, there is a 15-minute interval before an updated summary, incorporating any new data, is sent. These subsequent summaries are marked as updated versions.
Combine data into summaries
When confronted with two distinct summaries originating from the same date, both are attributed to the corresponding day. Herein, the process of data cleaning is invoked: data originating from the source deemed most relevant within the specific pillar takes precedence. This primary data is then supplemented with additional information sourced from other sources, in accordance with their position in the prioritization table. This meticulous methodology is designed to furnish comprehensive information without manipulating the original data, thereby ensuring accuracy and fidelity to the primary data sources.
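In effect, the highest-priority source wins for each metric and lower-priority sources only fill the gaps. A minimal sketch, assuming summaries are flat dictionaries already ordered by the prioritization table:

```python
def merge_summaries(summaries_by_priority: list[dict]) -> dict:
    """Combine same-day summaries: the highest-priority source wins per field,
    and lower-priority sources only fill fields that are still missing."""
    merged: dict = {}
    for summary in summaries_by_priority:  # ordered highest priority first
        for field, value in summary.items():
            if merged.get(field) is None:
                merged[field] = value
    return merged

# Illustrative example: a sleep summary where Oura outranks a health kit.
oura = {"date": "2024-01-15", "sleep_duration_minutes": 432, "hrv_avg": None}
health_kit = {"date": "2024-01-15", "sleep_duration_minutes": 410, "hrv_avg": 55}
print(merge_summaries([oura, health_kit]))  # keeps Oura's duration, fills hrv_avg from the health kit
```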
Frequently Asked Questions
What is the primary objective of the Duplicity project?
The fundamental aim of the Duplicity project revolves around interconnecting two or more data sources to mutually enhance their capacities. This comprehensive integration endeavors to furnish our clients and users with a more enriched and thorough oversight of the information provided.
How are data source priorities established within the Duplicity project?
Prioritizing data sources hinges upon specific pillars defined within the framework of ROOK. We rely on a ranking table that assesses the scientific substantiation endorsing each data source's specialization within a given pillar. For instance, Garmin holds the highest priority in physical health, while Oura assumes precedence in sleep health.
How is the aggregation of summaries from multiple data sources managed?
Summaries initiate with data from the primary received source and are augmented with supplementary information from other sources based on their priority levels. A 15-minute interval ensues between the receipt of the initial summary and any subsequent summaries before disseminating consolidated information to the client.
Is there a process to ensure data integrity and precision in the summaries?
Indeed, a meticulous data cleaning strategy is implemented. When two summaries from different sources with identical dates are received, precedence is granted to information sourced from the most relevant data source concerning the specific pillar. This information is then complemented by data from other sources based on their placement in the prioritization table.
What transpires when summaries from distinct sources present conflicting data?
In cases where summaries from diverse sources present contradictory data but share the same date, the Duplicity project prioritizes information from the data source deemed more pertinent to the specific pillar. This data is amalgamated with additional insights from other sources based on their standing in the prioritization table, aiming to provide a comprehensive and coherent perspective without altering the original data.
What unfolds if multiple summaries arrive post the initial delivery to the client?
Upon reception of additional summaries subsequent to the initial delivery to the client, an update notification is dispatched. The client then has the discretion to review and implement this update, facilitating the generation of an updated version of the summary. This process keeps the client informed of any supplemental information received post the initial dissemination.
What occurs in the scenario of receiving multiple events from various data sources?
A clear protocol is established: precedence is given to the first recorded event, and subsequent events from other sources sharing a similar timestamp are disregarded. A window of +/- 10 minutes is allowed for continuous events, permitting the recording of up to two events if they fall within this timeframe.
🔒 Enhanced Security Update: Blocking Connection Page Access
Exciting news! Our clients can now bolster their security by blocking access to the connection page. This page has been an invaluable tool for clients to validate and observe our solution's functionality, present in both the Sandbox and production environments.
To prioritize everyone's safety, we've introduced a new internal feature. Clients now have the flexibility to disable the connection page specifically in the production environment. Once disabled, attempting to access the page will result in an error, rendering it non-functional.
When the page is locked, anyone trying to open it will see an error instead of the connection page.
We recommend using the Connections Page for quick data flow tests. However, in the production environment, we strongly advise clients to create their own customized connections space. This not only enhances security but also allows them to personalize it with their preferred colors and designs.
FAQs
1. If I disable the production connections page, will this affect the sandbox page?
  • No, sandbox will work as usual, only the production environment connections page will be blocked.
2. If I block the connection page in production, can I turn it back on?
  • Yes, this is controlled by a status change on our side; you just have to request it from support.
🛡️ For Assistance:
Should you require further assistance or have questions about this feature, please don't hesitate to reach out to our dedicated support team.
To ensure that the information received in your webhook is authentic and comes from ROOK, we have implemented a new security measure: an HMAC code is sent as a header called X-ROOK-HASH as part of the information sent.
This new header will only be available for version 2 of our webhooks.
What does the new header contain?
To use this functionality, our team will give you a secret hash key to create the HMAC. This key is unique to you and therefore you must store it correctly.
The HMAC code sent in the X-ROOK-HASH header is the result of joining the following values:
  • Client UUID (client_uuid)
  • User ID (user_id)
  • Datetime (datetime)
The values are joined without separators between them, forming a single continuous string.
The secret hash key that we have previously given you is used as the key, and the hashing algorithm is SHA-256.
That will be the value you receive in the X-ROOK-HASH header in your webhooks.
It is important to keep in mind that your secret hash key is partially formed by your secret key, which means that if you change your secret key, your secret hash key will also change.
How can I validate it?
You can validate quickly using the following online tool.
To know if the value sent by ROOK is genuine, you will need to repeat the above process internally in your system. A Python example that recomputes and validates the hash is shown below.
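A minimal sketch of that validation in Python, assuming the three values are concatenated as plain strings and the header carries a lowercase hex digest (the function names are illustrative):

```python
import hashlib
import hmac

def compute_rook_hash(client_uuid: str, user_id: str, datetime_str: str, secret_hash_key: str) -> str:
    """Rebuild the X-ROOK-HASH value: HMAC-SHA256 over the values joined without separators."""
    message = f"{client_uuid}{user_id}{datetime_str}"
    return hmac.new(secret_hash_key.encode("utf-8"), message.encode("utf-8"), hashlib.sha256).hexdigest()

def is_authentic(received_hash: str, client_uuid: str, user_id: str, datetime_str: str, secret_hash_key: str) -> bool:
    """Compare the recomputed hash with the X-ROOK-HASH header value in constant time."""
    expected = compute_rook_hash(client_uuid, user_id, datetime_str, secret_hash_key)
    return hmac.compare_digest(expected, received_hash)
```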
To validate that the process is correct, you can try the following (a short snippet reproducing these steps appears after the list):
  1. Hash this text: ROOK
  2. Use this as the secret hash key: ROOK_Secret_Key
  3. Use SHA-256 as the digest algorithm
  4. This should be the result: 382da6381717a0fdfeba9ef922041df3ea8db97bbd2d20af21e216bdd1e6096b
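Using the same primitives, these steps can be reproduced directly; the printed digest should match the result listed in step 4.

```python
import hashlib
import hmac

digest = hmac.new(b"ROOK_Secret_Key", b"ROOK", hashlib.sha256).hexdigest()
print(digest)  # compare with the expected value from step 4
```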
Our project, focused on optimizing Webhooks, is geared towards efficiently delivering user information to our clients via these tools, ensuring swift and robust responses to their queries. Moreover, we've fine-tuned requests and queries to operate with heightened efficiency within the limitations of our data sources, ultimately aiming to elevate the overall customer experience.
Webhooks serve as a mechanism for applications to promptly dispatch real-time data or notifications to other applications upon the occurrence of specific events. For instance, in the context of ROOK, when an event is registered or a summary is generated within one of our data sources, the webhook promptly triggers an HTTP request to a client's designated URL. This transmission includes comprehensive details about the event or summary, providing the client with the option to accept or decline the information.
This seamless integration between data sources, ROOK, and our clients eliminates the need for continuous client queries on ROOKConnect or ROOKConnect's continual monitoring of linked users' data sources for updates. Webhooks play a foundational role in APIs by enabling real-time integration and swift synchronization among all involved stakeholders.
Throughout this project, our primary emphasis has been on optimizing incoming webhooks from diverse data sources. This optimization ensures that upon data acquisition, we promptly inform our clients through our established data channels.
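As an illustration of the client side of this flow, here is a minimal receiver sketch using Flask; the endpoint path, payload field names, and acknowledgement behavior are assumptions for the example, not the documented ROOK contract.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/rook/webhook", methods=["POST"])  # hypothetical path registered in the ROOK portal
def receive_rook_webhook():
    payload = request.get_json(force=True, silent=True) or {}
    # Field names below are illustrative; inspect your real payloads for the exact structure.
    print("Received", payload.get("data_structure"), "for user", payload.get("user_id"))
    # Returning 200 signals that the document was accepted; other codes may trigger the retry flow.
    return jsonify({"received": True}), 200

if __name__ == "__main__":
    app.run(port=8080)
```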
Within the framework of this project, we've successfully optimized the following data sources:
Webhook to Polar:
The response time of Polar's webhooks is under 1 minute when the user synchronizes the data source (mobile application) with their device. In cases where direct synchronization doesn't occur, it can take between 10 to 60 minutes. The available pillars with this webhook are:
  • sleep_summary
  • physical_event
  • physical_summary: This doesn't have a direct webhook but is accessed through a bucket.
Note: Integration of a webhook for Body Health hasn't been achieved because Polar doesn't have a webhook for this. Hence, it needs to be requested through endpoints.
Webhook to Fitbit:
The response time of Fitbit's webhooks ranges from 5 to 15 seconds when the user syncs the data source (mobile application) with their device. In cases where direct synchronization doesn't occur, it may take between 7 to 15 minutes. The available pillars with this Webhook are:
  • Sleep_summary
  • Body_summary
  • Physical_event
Note: Integration of a webhook for Physical_summary has not been achieved because Fitbit lacks a webhook for this. Hence, it needs to be requested via endpoints.
Webhook to Oura:
The response time of Oura's webhooks ranges from 1 to 5 minutes when the user syncs the data source (mobile application) with their device. The available pillars with this Webhook are:
  • Sleep_summary
  • Activity_event
  • Activity_summary
  • Daily_spo2
Webhook to Garmin:
Garmin's webhook response time is 1 minute only when the user synchronizes the data source (mobile application) with their device. Hence, it won't synchronize until the user opens their data source (mobile application). The available pillars with this Webhook are:
  • Physical_summary
  • Sleep_summary
  • Activity_event
Note: Garmin does not allow obtaining information via webhook for Body Health, so this process will continue to rely on endpoints.
Webhook to Withings:
Withings' webhook response time is 1 minute only when the user synchronizes the data source (mobile application) with their device. Hence, it won't synchronize until the user opens their data source (mobile application). The available pillars with this Webhook are:
  • Weight_body_event
  • Temperature_event
  • Blood_pressure_event
  • Heart_rate_event
  • Physical_event
  • Sleep_summary
Note: Withings does not allow obtaining information via webhook for Physical Summary, so this process will continue to rely on endpoints.
Frequently Asked Questions
What has changed?
ROOK's Webhooks have been enhanced to optimize the delivery of information from multiple data sources. We've focused our efforts on improving efficiency and speed in responding to queries, enabling a smoother and more effective user experience.
What benefits does this optimization bring?
This enhancement not only ensures greater efficiency in data delivery but also allows for better utilization of available data sources. Clients will experience faster and more robust responses to their queries, significantly enhancing their experience.
Which webhooks from which data sources have been optimized?
The data sources improved in this project are:
  • Polar
  • Fitbit
  • Oura
  • Garmin
  • Withings
What limitations does optimizing the webhooks have?
The limitations of optimizing webhooks are as follows:
  • For certain data sources, webhooks have not been implemented for all available data pillars. Therefore, endpoints can be used for manual queries.
  • The response speed of webhooks depends on how frequently the user synchronizes the data source with their device.
Why are there pillars that cannot be obtained via webhooks?
Some pillars, such as physical summaries or body health, cannot be obtained via webhooks because the data source lacks webhook implementations for them. Hence, they can be acquired through endpoints. This is due to the policies set by these data sources.
Will the webhooks of other data sources be optimized?
We are working on Whoop's Webhooks. Currently, Google Fit does not work with webhooks, so they will not be optimized.
If I have the pillars via webhooks, can I no longer make queries via endpoints for those same pillars?
Queries can still be made via endpoints; webhooks are merely a tool to automate notifications of incoming events and summaries.
Do I need to set up a separate URL for each data source to use their webhooks?
No, internally, we integrate the webhooks for each data source, meaning we centralize them and notify you through our ROOK webhook using a single URL. This URL should be entered by the client in the ROOK portal, in the notifications section of the configuration module.