According to a study, 88% of mHealth applications contain code that potentially exposes confidential patient information. In other words, the server side of such applications puts Protected Health Information (PHI) at additional risk and can jeopardize users' security and confidentiality.
Any such breach carries significant fines: for each category of violation within a calendar year, penalties of up to $25,000 can be imposed.
So, without delving into a mountain of legal detail, how can you create a compliant health app? To answer this, we studied the HIPAA (US), GDPR (Europe), and PIPEDA (Canada) regulations, as well as community-driven recommendations and standards that are often referenced by authorities in different countries.
As examples of such recommendations, we examined the well-known FHIR standard along with guidance from OWASP, NIST, and ISO.
How are mobile health apps governed?
We then organized these norms/standards/recommendations and explained them in simple terms, allowing you to understand which tools and methods need to be implemented for your Health app to be secure:
– without erroneous interpretation and representation of data;
– without data leaks;
– with user control (ability to delete, move, or modify data);
– and with a complete understanding of the process – how a compliant health app is created and monitored.
We have also used these methods in real practice when creating digital health products; you can get acquainted with them here.
So let’s begin with interpretation and representation.
I. Follow FHIR guidelines for mobile health apps: Resource Mapping and Data Representation
This is only indirectly related to security: it is an unofficial rule for organizing and presenting data. If data is improperly structured, your application may make errors, losing data or displaying it incorrectly.
To ensure consistency and accuracy in data exchange, you can use FHIR (Fast Healthcare Interoperability Resources, a popular standard for exchanging medical information) resources. These resources are a standardized way to represent data. What can they look like? Let's consider one such resource.
Imagine you’re at the doctor’s office, and the doctor checks your temperature, blood pressure, and heart rate. Each of these is an observation – something the doctor measures or notices. In the world of FHIR, an “Observation” resource is like a digital container for these observations.
This container holds information like:
– What was observed (e.g., heart rate)
– The value of the observation (e.g., 80 beats per minute)
– When the observation was made (e.g., today at 2:00 PM)
– Who made the observation (e.g., Nurse Jane)
– Any additional details or notes
And this “Observation” resource is super helpful for sharing health data between different computer systems. When apps or systems need to exchange information about observations, they use the FHIR “Observation” resource to make sure everyone understands the data in the same way.
For example, if your heart rate was recorded in one healthcare app, it can be shared with another app using this “Observation” resource.
Or if your basic personal and health information (stored as a “Patient” resource) was initially entered into one healthcare app during a medical visit, it can be securely shared with another specialized app or healthcare provider.
This way, your health information can be used to provide better care across different parts of the healthcare system and reduce the risk of data loss.
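In JSON, such an "Observation" resource might look like the sketch below. The field names follow the FHIR R4 Observation structure, but the LOINC code and all values here are illustrative assumptions, not data from a real system:

```python
import json

# A minimal, hypothetical FHIR "Observation" resource for a heart-rate reading.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {  # what was observed
        "coding": [
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
        ]
    },
    "subject": {"reference": "Patient/example"},   # whose observation it is
    "effectiveDateTime": "2024-01-15T14:00:00Z",   # when it was made
    "performer": [{"display": "Nurse Jane"}],      # who made it
    "valueQuantity": {                             # the value of the observation
        "value": 80,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```

Because both the sending and the receiving app agree on this structure, "80 beats per minute at 2:00 PM" means exactly the same thing on each side of the exchange.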
II. Secure Data Transfer: Encryption and Secure Protocols
This chapter covers several basic rules of data protection and data security in health apps.
1. Implement Authentication and Authorization mechanisms for secure API access
Require all API requests to be authenticated using secure tokens, such as OAuth 2.0, and implement role-based authorization to control access to resources based on user roles and permissions.
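As an illustrative sketch (not a full OAuth 2.0 implementation), the following shows the general shape of bearer-token verification plus role-based authorization. The token format, secret handling, and permission table are simplified assumptions; in production, use an established OAuth 2.0 / JWT library and provider:

```python
import base64
import hashlib
import hmac

SECRET = b"server-side-secret"  # placeholder; real tokens are issued by your OAuth 2.0 provider

# Hypothetical role -> resources that role may access
PERMISSIONS = {"doctor": {"patient_records"}, "patient": {"own_records"}}

def issue_token(user_id: str, role: str) -> str:
    """Create a signed token (a stand-in for a real OAuth 2.0 access token)."""
    payload = f"{user_id}:{role}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def authorize(token: str, resource: str) -> bool:
    """Verify the token signature, then apply role-based authorization."""
    try:
        encoded, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded.encode())
    except ValueError:
        return False  # malformed token
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # token was forged or corrupted
    _, role = payload.decode().split(":")
    return resource in PERMISSIONS.get(role, set())

token = issue_token("user-42", "patient")
print(authorize(token, "own_records"))      # True
print(authorize(token, "patient_records"))  # False: patients can't read others' records
```

The key idea is that every request carries a token the server can verify, and access is then decided by the caller's role, never by trusting the client.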
2. Use secure Communication Protocols (e.g., HTTPS) when exchanging data
Implement HTTPS as the communication protocol to encrypt data during transmission between the app and servers, ensuring data confidentiality and integrity.
3. Ensure that data remains Confidential during transit
Implement Transport Layer Security (TLS), the modern successor to the now-deprecated Secure Sockets Layer (SSL), to establish a secure and encrypted connection between the health app and the receiving system. This prevents unauthorized parties from intercepting the data during the transmission process.
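As a minimal sketch, here is how a Python client could enforce modern TLS using the standard `ssl` module, refusing the old, broken protocol versions:

```python
import ssl

# Build a client-side TLS context that refuses deprecated protocol versions.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # require TLS 1.2 or newer

# create_default_context() already enables certificate and hostname checks:
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# This context can then be passed to HTTP clients, e.g.
# urllib.request.urlopen("https://api.example.org/...", context=context)
```

The same principle applies on the server side and on mobile platforms, where the HTTP stack should be configured to reject plain HTTP and legacy TLS/SSL versions.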
4. Create Data Encryption
When your app stores data in databases, employ encryption mechanisms to protect the data at rest. For example, you can use encryption libraries (like CryptoJS for React Native) to encrypt sensitive fields within the database, such as patient health records or lab results. This way, even if unauthorized access occurs, the encrypted data remains unreadable without the decryption keys.
What are encryption libraries? They are software tools or sets of functions that provide developers with ready-made implementations of encryption algorithms and techniques. These libraries make it easier to incorporate encryption into an application without implementing the low-level details of the algorithms yourself.
Here’s what encryption libraries typically offer:
– Encryption Algorithms: libraries include popular encryption algorithms (e.g., AES, RSA) for securing data.
– APIs: libraries provide easy-to-use APIs for encrypting and decrypting data.
– Key Management: many libraries handle the management of encryption keys, making it more convenient for developers.
– Hashing: some libraries also include hashing algorithms for data integrity checks.
By implementing end-to-end encryption for sensitive data, your app ensures that patient information remains confidential both during transmission and while stored in databases. This decision safeguards against potential data breaches and unauthorized access, aligning with best practices for security and privacy in healthcare applications.
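As a hedged illustration of field-level encryption at rest, here is a Python sketch using the third-party `cryptography` package (a Python analogue of the CryptoJS approach mentioned above). The record fields are hypothetical, and the key handling is deliberately simplified:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# In production the key must come from a secrets manager, never from source code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical patient record with one sensitive field.
record = {"patient_id": "p-001", "diagnosis": "hypertension"}

# Encrypt the sensitive field before writing it to the database.
record["diagnosis"] = cipher.encrypt(record["diagnosis"].encode())

# Without the key, the stored bytes are unreadable ciphertext.
# With the key, the app can decrypt on read:
print(cipher.decrypt(record["diagnosis"]).decode())  # hypertension
```

The design choice here is field-level encryption: only sensitive columns are encrypted, so the database remains queryable on non-sensitive fields while PHI stays unreadable to anyone who obtains the raw storage.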
III. Audit Trail: Implement Mechanisms to Track and Record Any Changes
1. Here is an example – Security Events Logging: log any security-related events, such as failed login attempts, unauthorized access attempts, or changes to access permissions. These logs help detect and respond to potential security breaches.
We will show the other examples of data protection and data security in health apps in the context of a mental health app.
2. User Access Logging: whenever a patient accesses a mental health assessment questionnaire through your app, log relevant details such as the patient’s identifier, the time of access, and the specific assessment conducted.
3. Assessment Responses Logging: log each assessment response submitted by the patient, including the timestamp, the assessment questions answered, and the patient’s provided responses. This creates a record of the patient’s self-reported mental health status.
4. Modifications and Corrections Logging: if a patient makes changes or corrections to a previously submitted assessment, log these modifications along with the original data and the updated data. This maintains an audit trail of data changes.
5. Clinician Interactions Logging: if a clinician reviews or accesses a patient’s mental health assessment data, log the clinician’s identity, the date of access, and any notes or observations they add to the data.
By logging these interactions with data, your mental health app establishes an audit trail that offers transparency and accountability.
If patients, clinicians, or administrators need to review past interactions or assess the history of a patient’s mental health assessments, this logging system provides a reliable and tamper-evident record. So, what are the following rules of safety in health apps?
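The logging steps above can be sketched as a small structured audit-trail helper. The event names, actor identifiers, and in-memory store are illustrative assumptions; a real system would write to append-only, tamper-evident storage:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")
entries = []  # stand-in for append-only, tamper-evident storage

def log_event(event_type: str, actor: str, **details) -> dict:
    """Record one structured audit-trail entry (field names are illustrative)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "actor": actor,
        **details,
    }
    entries.append(entry)
    audit_log.info(json.dumps(entry))
    return entry

# The event types mirror the list above:
log_event("user_access", actor="patient-17", assessment="PHQ-9")
log_event("assessment_response", actor="patient-17", question="q1", answer=2)
log_event("clinician_review", actor="dr-jones", patient="patient-17",
          note="follow up in 2 weeks")

print(len(entries))  # 3
```

Because every entry records who did what and when, the log can later answer questions like "which clinicians viewed this patient's assessments last month?"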
IV. Data Integrity: mechanisms to detect and prevent Data Tampering
Here you can use digital signatures or checksums to verify the integrity of data during transmission and storage, detecting any unauthorized modifications or tampering attempts.
Digital signatures are the more involved of the two, so let's explain the checksum in simple terms.
A checksum is a small piece of data derived from a larger set of data, such as a file or a message. It’s like a unique fingerprint that’s generated based on the content of the data. The purpose of a checksum is to help ensure the integrity of the data – in other words, to make sure the data hasn’t been changed or tampered with.
If the recalculated checksum matches the stored checksum, it indicates that the health data hasn’t been tampered with or altered.
If the checksums don’t match, it suggests that something might have gone wrong with the data, such as a transmission error, unauthorized changes, or data corruption.
Here’s how it might look in practice:
1. The user enters their heart rate as 75 bpm and blood pressure as 120/80 into the app.
The app generates a checksum (a unique value) based on this specific health data.
2. The app stores the health data (heart rate and blood pressure) along with the associated checksum in the database.
3. When the user wants to review their health data later, the app recalculates the checksum based on the stored health data.
4. The app compares the recalculated checksum with the stored checksum:
– If they match, the app can confirm that the user’s health data is intact and hasn’t been tampered with.
– If they don’t match, it indicates a potential issue, and the app can take appropriate measures (e.g., notifying the user, checking for transmission errors).
So as you can see this process helps ensure the data’s integrity, particularly important for health-related information, where accuracy is crucial, and unauthorized changes could have serious consequences.
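The checksum workflow above can be sketched in a few lines. The choice of SHA-256 and the field names are illustrative:

```python
import hashlib
import json

def checksum(data: dict) -> str:
    """Derive a SHA-256 fingerprint of a record; sorting keys keeps it stable."""
    canonical = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# 1-2. The user enters readings; the app stores a checksum alongside them.
reading = {"heart_rate_bpm": 75, "blood_pressure": "120/80"}
stored_checksum = checksum(reading)

# 3-4. On a later read, the app recomputes and compares.
print(checksum(reading) == stored_checksum)  # True: data intact

# If anything is altered, the checksums no longer match:
reading["heart_rate_bpm"] = 95
print(checksum(reading) == stored_checksum)  # False: data was tampered with
```

Note that a plain checksum detects accidental corruption; to also detect deliberate tampering by someone who can rewrite the checksum, you would use a keyed variant (an HMAC) or a digital signature.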
V. Error Handling: mechanisms to handle Unexpected Situations
Provide informative error messages to users while avoiding exposing sensitive data. What methods can be used for safety in health apps?
1. User-Friendly Messages: if a user tries to access patient data that they don’t have permission to view, the app should display a message like “Access Denied: You don’t have permission to view this patient’s records.” This informs the user about the issue without revealing any specific details about the patient or the system.
2. Avoiding Sensitive Information: if there’s a technical error on the server side while retrieving data, the error message should not contain detailed system information or database errors. Instead, it should say something like “An error occurred while processing your request. Please try again later.” This message doesn’t reveal system specifics, maintaining security.
3. Logging Detailed Errors Securely: while you want to avoid exposing sensitive information to users, it’s essential to log detailed error information on the server side for debugging purposes. However, these detailed logs should be securely stored and accessible only to authorized administrators for troubleshooting.
4. Implementing Different Error Codes: use HTTP status codes to indicate the type of error that occurred. For example, a “403 Forbidden” status code indicates access denial, while a “500 Internal Server Error” indicates a general server issue. This helps users and developers understand the nature of the error without revealing specifics.
By following this strategy, you ensure that users receive meaningful error messages when interacting with data in your app, helping them understand what went wrong. At the same time, you protect the security and privacy of sensitive health information by avoiding the exposure of detailed system or patient data in error messages.
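A minimal sketch of this error-handling strategy, using the hypothetical error kinds and the user-facing messages from the examples above:

```python
import logging

logging.basicConfig(level=logging.ERROR)
server_log = logging.getLogger("errors")

# Safe, user-facing responses keyed by error kind (HTTP status, message).
SAFE_RESPONSES = {
    "permission_denied": (403, "Access Denied: You don't have permission "
                               "to view this patient's records."),
    "server_error": (500, "An error occurred while processing your request. "
                          "Please try again later."),
}

def handle_error(kind: str, internal_detail: str) -> tuple[int, str]:
    """Return (status_code, user_message); keep the detail in server logs only."""
    server_log.error("kind=%s detail=%s", kind, internal_detail)  # admins only
    return SAFE_RESPONSES.get(kind, SAFE_RESPONSES["server_error"])

status, message = handle_error(
    "permission_denied",
    "user 12 requested record 99 owned by user 7",  # never shown to the user
)
print(status)                   # 403
print("record 99" in message)   # False: nothing sensitive leaks to the user
```

The split is the point: the user sees a generic, actionable message and a standard HTTP status code, while the detailed cause lives only in access-controlled server logs.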
VI. Manage Patient Consent for Data Exchange as per Relevant Regulations
This point is important under the laws of many countries: give patients control over who has access to their sensitive information and under what conditions.
For example, OAuth 2.0 is a widely used authorization framework for allowing third-party applications to access user data without exposing sensitive credentials.
Moreover, GDPR explicitly requires mobile health apps to:
– Obtain explicit and informed User Consent before processing any personal or health data;
– Notify users promptly in case of a Data Breach that poses a risk to their rights and freedoms.
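A minimal sketch of consent management under these requirements. The registry, purpose strings, and function names are illustrative assumptions, not a GDPR-certified implementation:

```python
from datetime import datetime, timezone

# Hypothetical in-memory consent registry; a real app would persist this.
consents: dict[tuple[str, str], dict] = {}

def record_consent(patient_id: str, purpose: str) -> None:
    """Store explicit, informed consent with a timestamp (it must be demonstrable)."""
    consents[(patient_id, purpose)] = {
        "granted_at": datetime.now(timezone.utc).isoformat()
    }

def withdraw_consent(patient_id: str, purpose: str) -> None:
    """Consent must be as easy to withdraw as it was to give."""
    consents.pop((patient_id, purpose), None)

def may_process(patient_id: str, purpose: str) -> bool:
    """Check consent before any processing of personal or health data."""
    return (patient_id, purpose) in consents

record_consent("p-001", "share_with_clinic")
print(may_process("p-001", "share_with_clinic"))  # True
withdraw_consent("p-001", "share_with_clinic")
print(may_process("p-001", "share_with_clinic"))  # False
```

Keying consent by purpose matters: agreeing to share data with a clinic does not imply agreeing to, say, analytics or marketing use.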
VII. Access Control: Ensure Proper Data Segregation and Isolation
This item is especially prominent in HIPAA, the US health privacy act. How do you make a HIPAA-compliant healthcare app? For example, enforce role-based access control (RBAC) to grant permissions to users and systems based on their roles and responsibilities, preventing unauthorized access.
What is Role-Based Access Control (RBAC)?
In your health app, there are different types of users, each with specific roles and responsibilities. You want to ensure that users can only access the parts of the app that are relevant to their roles. For instance:
1. Patient Role: patients are the primary users of the app, and they should be able to access their own health records, schedule appointments, and track their medication. They should not have access to other patients’ data or administrative features.
Example: A patient logs into the app and can see their upcoming appointments, view their results, and manage their medication list. However, they can’t view other patients’ records or access any administrative functions.
Important! Under GDPR, mobile health apps must allow users to delete their data.
2. Doctor Role: doctors need access to patient records for diagnosis, treatment planning, and updating medical information. They should not have access to administrative or billing features.
Example: A doctor logs into the app and can view patient records assigned to them, make medical notes, prescribe medications, and schedule follow-up appointments. They can’t access billing information or modify administrative settings.
3. Administrator Role: administrators manage the app, add new users, and oversee overall operations. They have access to administrative functions and user management, but not to specific patient data.
Example: An administrator logs into the app and can add new doctors, nurses, and staff. They can manage user accounts and view system logs, but they can’t see specific patient records unless explicitly authorized.
By enforcing RBAC in your health app, you ensure that users only have access to the features and data that align with their roles and responsibilities. This prevents unauthorized access to sensitive information and helps maintain the privacy and security of health data.
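The three roles above can be sketched as a simple RBAC permission table. The role and action names are illustrative assumptions:

```python
# Hypothetical role -> permission mapping for the three roles described above.
ROLE_PERMISSIONS = {
    "patient": {"view_own_records", "schedule_appointment",
                "manage_medications", "delete_own_data"},
    "doctor": {"view_assigned_records", "write_notes",
               "prescribe", "schedule_followup"},
    "administrator": {"manage_users", "view_system_logs"},
}

def is_allowed(role: str, action: str) -> bool:
    """RBAC check: a user may perform an action only if their role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("patient", "view_own_records"))            # True
print(is_allowed("patient", "view_assigned_records"))       # False
print(is_allowed("administrator", "view_assigned_records")) # False: no patient data
print(is_allowed("doctor", "prescribe"))                    # True
```

Keeping permissions in one table, rather than scattered `if` checks, makes the policy auditable, which is exactly what a HIPAA review will ask for.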
VIII. Secure Development Lifecycle: Security Quality Assurance (SQA)
And here we’ve reached the most crucial point of safety in health apps.
Earlier, we provided a more detailed explanation of standards and recommendations. Now, we’ll demonstrate how to implement these standards step-by-step during the application development process.
In your healthcare app development process, you want to ensure that security is a top priority. Here’s how you can integrate various security assessments and testing techniques at different stages of development:
1. Requirements Phase: conduct a security requirements analysis to identify potential security risks and define security requirements for the app.
Example: define that user authentication must follow industry standards (e.g., OAuth 2.0), sensitive data must be encrypted both at rest and in transit (point 2 in the article), and that user roles and access control are implemented based on FHIR standards (point 1 in the article).
2. Design Phase: perform threat modeling sessions to identify potential threats and vulnerabilities specific to your app’s architecture.
Example: identify that patient records must be accessible only to authorized healthcare providers, and any third-party integrations must follow secure API practices (point 3).
3. Code Development Phase: implement secure coding practices.
Example: review that user input is properly validated and sanitized to prevent injection attacks, and that sensitive data is not hard-coded in the source code.
– Identify user input points: start by identifying all the areas in your code where user input is accepted. This could be through forms, text inputs, file uploads, API requests, etc.
– Validate data types: check that the user input matches the expected data type. For example, if you’re expecting a numeric value, make sure the input can be safely converted to a number.
– Validate data length and format: ensure that user input meets length and format requirements. For instance, if you’re collecting email addresses, ensure the input follows the standard email format.
We talked about this in part in point 4, when we explained the essence of checksums. And of course, don’t forget about error messages for explanations and access rights for different types of users (points 5-6).
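The validation steps above can be sketched as small helper functions. The numeric bounds, the simplified email regex, and the function names are illustrative assumptions (a production app should use a vetted validation library):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simplified format check

def validate_heart_rate(raw: str) -> int:
    """Data-type and range validation for a numeric user input."""
    try:
        value = int(raw)
    except ValueError:
        raise ValueError("Heart rate must be a whole number")
    if not 20 <= value <= 300:  # illustrative physiological bounds
        raise ValueError("Heart rate out of plausible range")
    return value

def validate_email(raw: str) -> str:
    """Length and format validation for an email field."""
    if len(raw) > 254 or not EMAIL_RE.match(raw):
        raise ValueError("Invalid email address")
    return raw

print(validate_heart_rate("75"))           # 75
print(validate_email("jane@example.org"))  # jane@example.org
```

Rejecting anything that fails type, length, or format checks at the boundary is what stops injection payloads from ever reaching your database or templates.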
4. Integration Testing Phase: perform integration tests to ensure that different components of the app work together securely.
Use automated tools to simulate various attack scenarios (e.g., OWASP ZAP) to identify vulnerabilities.
Example: test that data is transmitted securely using HTTPS and that user access to FHIR resources is properly controlled through RBAC (point 7).
5. Dynamic Testing Phase: once all the rules above are implemented, perform dynamic testing to simulate real-world attacks on the running app.
Example: test for vulnerabilities like injection attacks, data exposure, and session management flaws. Validate that the app resists common security threats.
6. Continuous Monitoring: finally, implement continuous monitoring of the app in production to detect and respond to security incidents promptly.
Example: set up security logging, intrusion detection systems, and regularly review logs for unusual activities.
So by integrating these proactive security measures throughout the app’s development lifecycle, you can identify and address security issues early, reducing the risk of vulnerabilities in your healthcare app.
Remember that these rules are not exhaustive, and the specific requirements for your digital health app may vary based on its functionality, intended use, and the regions it operates in. For instance, the California Consumer Privacy Act (CCPA) may also apply in the U.S.
Furthermore, your health application may accept payment cards. For this, you need an understanding of Payment Card Industry (PCI DSS) standards: strong antivirus protection, up-to-date software, encryption, documented policies, and so on.
So, these were the basic rules of safety in health apps.
We hope our article was useful for you!
Who are we? We are a Digital Health Product Studio that transforms healthcare digital experiences and sets new standards for delivering digital healthcare in a way that positively impacts people’s lives.
We assist healthcare startups in designing and developing digital products, while also helping healthcare organizations undergo transformative changes.
Write to us at firstname.lastname@example.org and we will discuss how we can help ensure that your product brings real benefits.