INTRODUCTION
The Mental Healthcare Act 2017 in India is a landmark legislation that advocates for the rights, dignity, and autonomy of persons with mental illness and aims to transform the delivery of mental health care across the country.1,2 The new law mentions digital data privacy, yet few studies have focused on this provision to date,3 which has contributed to its low prioritization in emerging digital mental health programs in India.
The Government of India has made a systematic effort to ensure that every health service client has a unique health identity (UHID), a digital identity issued by health care providers to track patients and secure relevant health documents, and to link the UHID to the unique identification number assigned to every Indian resident, called the "Aadhaar" number.4,5 This linkage raises critical questions about how well the health system and the community at large are prepared for such large-scale data linkage and its implications for privacy. The implications are especially important for individuals living with mental illness, for whom safeguarding data privacy is essential to reduce the risk of being judged or facing stigma, hostility, or adversities in personal or workplace relationships.
In this commentary, we discuss the challenges in protecting mental health data privacy, guidelines to protect the personal data privacy of individuals with mental health disorders in India, and implications for digital mental health services in other low-resource settings.
BENEFITS AND RISKS OF DIGITAL DATA SHARING
Internet adoption in India has grown consistently in both urban and rural areas,6 bringing increasing interest in digital tools for many aspects of health care. These include mobile-based services for providing health information7,8 and mobile phone reminders for offering education or counseling in the context of HIV,9,10 diabetes,11 tuberculosis (TB),12 and cardiovascular diseases.13,14 There is also mounting interest in digital resources for mental health care, reflected in recent analyses of short message service (SMS)-based services,15,16 with SMS and voice reminders used to reduce missed appointments and improve follow-up at an urban community mental health clinic15 and tele-helplines used for crisis resolution and follow-up.16 Use of artificial intelligence (AI) has also been reported in commercial smartphone applications in India that are freely accessible to users.17,18 One example is "Wysa," an AI-enabled, empathetic, text-based conversational mobile mental well-being app that has been shown to improve the mood of users with self-reported symptoms of depression.17
Importantly, the Government of India has emphasized the need to scale up digital mental health solutions because of the significant gap between those in need of mental health care and those who receive it, referred to as the "treatment gap."19,20 An estimated 90% of the roughly 200 million people in India who live with mental health disorders21 lack access to necessary services,22 yet many of these individuals own smartphones: mobile phone users represent 88.5% of the population, and the country has more than 625 million internet subscribers.23 The National Mental Health Survey (2016) also recommended expanded use of smartphone-based applications, digital decision-support tools (given the scarcity of psychiatrists), and electronic databases for follow-up of individuals with mental health disorders.24,25 This would enable large-scale sharing of mental health data among heterogeneous providers (i.e., specialists, primary care doctors, frontline workers, informal healers), patients, and other stakeholders.26 Among the existing studies that have evaluated digital mental health services in India,3,15,16,27 there has been limited focus on data privacy.27 The resulting increase in digital sharing of clinical, demographic, occupational, and social data raises individual privacy concerns.
Furthermore, there is significant social stigma surrounding mental health conditions, which impedes individual care seeking, social participation, and access to treatment.28 Given the widespread challenges in overcoming stigma and negative attitudes toward mental health conditions,29,30 it is critical to safeguard the privacy and confidentiality of users' mental health data, especially as they interface with digital health systems. In the Indian context, stigma is negatively correlated with help seeking for allopathic or modern medical treatment, while it is positively associated with previous informal help seeking.28,31 Stigma, driven by shame, motivates families to conceal the affected person, hiding both the condition and its perceived causes (such as previous sins or bad acts), which can substantially delay or inhibit timely access to treatment.28,31 Protecting the data privacy of individuals with potentially stigmatizing mental health disorders is therefore critical: unintended disclosure could impede their access to care, lead to denial of additional services, or invite discrimination by employers or by agencies providing financial aid for treatment.
THE AADHAAR SYSTEM AND ITS LINKAGES
Aadhaar, administered under the Government of India's Ministry of Electronics and Information Technology, is a 12-digit unique number assigned to every Indian resident that records demographic data (name, address, date of birth, and sex) and biometric data (fingerprints, iris scans, and a photograph). Aadhaar identification helps deliver subsidies, cash benefits, and incentives to intended beneficiaries, but the number has been increasingly linked to bank and income tax accounts, mobile phone numbers, and social welfare programs such as disability and elderly pension schemes.32,33 This is pertinent to seeking treatment for stigmatizing mental health conditions, where access to care may become tied to compulsory linking of personal identification information (i.e., Aadhaar).
Health Consequences of Linking Data
The consequences of poorly regulated data linkage have begun to show. In 2017, the Government issued a notification mandating that patients with TB link their Aadhaar number with their patient identification number to receive cash assistance under the Revised National TB Control Program. This led to treatment interruptions, particularly among patients from lower socioeconomic segments, because of the documents and procedures required to obtain an Aadhaar number.34 There have also been instances of patients with HIV and AIDS dropping out of antiretroviral therapy for fear of a privacy breach when inclusion of Aadhaar numbers in their treatment reports was made compulsory.35 Notably, like mental health disorders, TB and AIDS carry considerable social stigma in India.
A breach of privacy leading to the denial of a health service also entails a loss of autonomy (when benefits are denied and no alternative mode of identification is permitted) and a loss of dignity (the individual's right to physical or mental integrity is compromised as confidential data are leaked without consent). Both losses can worsen the situation for individuals with a mental illness and their families.36 Moreover, being identified as having a mental health problem in India can lead to institutionalized discrimination and loss of civil rights: for example, loss of a job,19,37 denial of the right to vote,38 divorce on grounds of mental illness (under the Hindu Marriage Act),39 and automatic questioning of an individual's capacity to make a will.40
Unique Challenges of the Aadhaar Data Linkage
Poor regulation of data linkage has other grave consequences, such as systemic leakages: about 200 government websites inadvertently displayed the Aadhaar numbers of individuals,41 and technologists previously involved in formulating the Aadhaar system now work for for-profit companies, in the absence of strict regulations to prevent conflicts of interest.42 It is not uncommon for health systems to adopt more integrated digital infrastructures, which requires implementing new protections for users' privacy. In the case of the Aadhaar system, however, there are unique challenges and serious threats to privacy,43 described in the following points.
Other laws in India, such as the Registration Act (concerning the mandatory registration of documents of Indian citizens), also provide for the collection of biometric information, as the Aadhaar system does. However, such use of biometric data comes with stringent legal restrictions specified in the Act,43 adhering to the principle of "purpose limitation" (personal data may be processed only for specified, explicit, and legitimate purposes, and further processing must not be incompatible with those initial purposes). The Aadhaar Act of 2016 mentions no such restrictions.33
Under the Aadhaar system, biometric and demographic data are stored in a centralized database and associated with the individual's unique Aadhaar number. This number is then "seeded" (added as a new data field) into other public and private databases in the country.43 Normally, our data sit in separate "buckets" (e.g., details on air travel, bank accounts, mobile phones, employment histories, or health records), and only we can construct our full "profile" across these separate buckets. But once the Aadhaar number is seeded into databases, which has already begun with the linkage of Aadhaar numbers to bank accounts and mobile phone numbers, these data buckets become integrated, and individuals lose control over who can reconstruct their profile. There is serious concern that potentially unauthorized persons in the government could then "profile" an individual by pulling information from various databases using the Aadhaar number.43 This has further implications, such as self-censorship and the likely suppression of dissent or public opinion sharing in democratic systems of governance.44
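To make the mechanics concrete, the following is a minimal Python sketch, using entirely invented data and field names, of why seeding one identifier across otherwise separate databases is consequential: any holder of the shared key can join the buckets into a composite profile that no single database reveals on its own.

```python
# Toy illustration (all data and field names invented): three independent
# "buckets," each seeded with the same hypothetical 12-digit identifier.
bank_db = {"123412341234": {"bank": "ABC Bank", "balance_tier": "low"}}
telecom_db = {"123412341234": {"mobile_circle": "Mumbai", "sim_count": 2}}
health_db = {"123412341234": {"clinic": "district hospital", "dx_code": "F32"}}

def reconstruct_profile(uid: str) -> dict:
    """Join the separate data buckets on the shared identifier."""
    profile = {"uid": uid}
    for bucket in (bank_db, telecom_db, health_db):
        profile.update(bucket.get(uid, {}))
    return profile

# One key now unlocks a combined financial + telecom + mental health profile.
print(reconstruct_profile("123412341234"))
```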
Aadhaar proponents claim that the system allows us to "see individual lives in different spheres"43 and conduct big data analyses, such as econometric and epidemiological studies, thereby discovering hidden data patterns to establish predictive and/or causal relationships across multiple domains of the economy. However, this very "personal data economy"43,45 could monetize information about individuals' private lives well before sufficient digital literacy or safeguards are in place.
While we have become aware that smartphones, social media platforms, and Internet search engines may violate our privacy, technologies such as encryption and virtual private networks can protect user privacy to an extent. Aadhaar's centralized system of data integration lacks such safeguards.43
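As a point of contrast, the following is a minimal sketch of what one such safeguard, field-level encryption at rest, looks like, using the open-source Python cryptography package and invented record contents: without the key, a leaked record reveals nothing.

```python
# Minimal sketch of field-level encryption at rest (pip install cryptography).
# The record contents are invented for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a separate key management service
box = Fernet(key)

# Encrypt the sensitive field before writing it to any shared store.
token = box.encrypt(b"dx_code=F32; note=reports persistent low mood")

print(token[:32], b"...")  # the stored value is unreadable ciphertext
print(box.decrypt(token))  # only a key holder can recover the plaintext
```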
The safeguards against data breaches in the 2016 Aadhaar Act warrant greater scrutiny and strengthening. For example, if data are "leaked," only the Unique Identification Authority of India,32,33 not the affected person, is authorized to file a First Information Report, which vests the power to prosecute in the government agency rather than in the individual whose privacy has been violated.
Broken Consent Mechanism
The Aadhaar system suffers from a "broken consent mechanism," best illustrated by the recent registration of Indian citizens on the Government's CoWin portal for COVID-19 vaccination.46 While the government has reiterated that Aadhaar is not mandatory for vaccine registration and that any identity proof is accepted, the reality is playing out differently. The Government's operational guidelines encourage vaccine officers to verify the recipient's identity with the Aadhaar ID in preference to other forms of identification. In other words, Aadhaar is the "preferred mode" of authentication, and although described as "voluntary," it is becoming "mandatory" for all practical purposes, as with other services such as linkage with bank accounts or registration of mobile phones.
Data Erasure
Finally, the Aadhaar system offers the data subject no facility for data erasure,33 a right enshrined in data protection and privacy laws in other regions, such as the General Data Protection Regulation (GDPR) in the European Union (detailed further in the next section). This means the absence of the user's "right to be forgotten," under which the data subject may have personal data concerning themselves erased without undue delay on certain grounds, as set out in Article 17 of the GDPR.47 Such grounds include the subject withdrawing consent or objecting to the processing of their personal data, unlawful processing of the data, or the personal data no longer being necessary for the purposes for which they were originally collected or processed.
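As a hypothetical sketch, assuming simple in-memory storage and invented ground names, honoring such an erasure request might look like the following; a production system would also have to purge backups, caches, and downstream copies.

```python
# Hypothetical sketch of an Article 17-style erasure request handler.
# Storage, field names, and the recognized grounds are all assumptions.
RECOGNIZED_GROUNDS = {
    "consent_withdrawn",    # subject withdraws consent
    "processing_unlawful",  # data were processed unlawfully
    "purpose_exhausted",    # data no longer needed for the original purpose
}

records = {"user-17": {"dx_code": "F41.1", "notes": "follow-up due"}}

def erase(user_id: str, ground: str) -> bool:
    """Erase a subject's personal data without undue delay if the stated
    ground is recognized; otherwise route to manual legal review."""
    if ground not in RECOGNIZED_GROUNDS:
        return False
    records.pop(user_id, None)
    return True

assert erase("user-17", "consent_withdrawn")
assert "user-17" not in records
```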
FRAMEWORKS FOR PROTECTING PERSONAL DATA PRIVACY
Key international frameworks and methodologies aim to protect personal data privacy. These can inform the development of similar frameworks for the Indian context, or their key features can be incorporated into existing Indian policy, legal, and/or ethical frameworks.
General Data Protection Regulation
The GDPR, which came into force in May 2018, is a case in point. Although the GDPR applies to organizations in the European Union, it sets out privacy considerations that are broadly generalizable and encourages the development of digital systems that are less privacy invasive. The GDPR defines data pertaining to health as48:
Personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status.
The GDPR also defines "genetic data" as characteristics of a natural person that give unique information about their physiology or health, and "biometric data" as information obtained from specific technical processing of the physical, physiological, or behavioral characteristics of a natural person that allows or confirms their unique identification.
Processing of these kinds of personal health data is prohibited unless the subject has given "explicit consent," the processing is necessary for preventive or occupational medical care (e.g., assessing an employee's working capacity, medical diagnosis, provision of health care, or social benefits), or the processing serves a public health interest, such as protecting against serious cross-border threats to health or ensuring high standards of quality and safety of health care products and services. Table 1 details the principles of the GDPR.49
Under the GDPR, organizations must conduct a Data Protection Impact Assessment (DPIA)50 that documents the need for the assessment; explains the data processing in detail; records the data controller's (e.g., the project head's) consultation with relevant stakeholders; describes the compliance and proportionality measures undertaken in the project; and identifies likely data privacy risks, their potential impact on individuals, and the steps taken to mitigate or eliminate them. Table 2 summarizes the DPIA requirements.
Global Initiative on Ethics of Autonomous and Intelligent Systems
The Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems addresses ethical issues raised by the development and dissemination of new digital systems, which is especially relevant to emerging ways of obtaining digital health data.51 The initiative offers guiding principles of "ethically aligned design" (Table 3),52 has identified more than 120 key ethical issues, and has provided recommendations to address them. The IEEE Standards Association is currently developing "standardization projects" to guide technologists and organizations in mitigating the chances of ethical violations of personal data privacy.51
Applying Frameworks to Protect Mental Health-Related Data
Given the rising interest in digital mental health technologies in India53 and the resulting personal data sharing at scale, there is an immediate need to consider the data protections outlined in the GDPR, the DPIA, and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Moreover, the absence of an existing Indian framework on mental health data privacy (beyond the clauses in the Mental Healthcare Act) has left limited knowledge of the data privacy risks facing individuals living with mental health conditions, risks that are compounded by the comprehensive Aadhaar linkage spanning individuals' personal data domains.
ARTIFICIAL INTELLIGENCE AND PRIVACY IN MENTAL HEALTH
Artificial intelligence has begun to penetrate digital mental health solutions, driven in part by the National Strategy on Artificial Intelligence released by the Government of India.54 Digital interventions allow immense data collection, and AI systems using mathematical algorithms55 can seek to make sense of these complex and vast datasets.56 The use of AI has been reported in certain algorithm-based mental health applications17,18; however, such an intervention ecosystem stands in fundamental tension with the principles of consent and data minimization articulated in Indian data protection frameworks such as the Srikrishna Committee report.57 Linking Aadhaar can make such systems more invasive by drawing far greater amounts of personal data from individuals. Mental health data points vary with the context and characteristics of the individual and the disorder, which can complicate the correlations made by AI systems. In addition, meaningful consent is already hard to achieve in most clinical settings in India because of low awareness, literacy, and agency to exercise the right to informed choice; consent becomes even more complicated when clinical data are automatically fed into an AI system. In these situations, it will be difficult for individuals living with mental health conditions, or their family members, to interpret or exercise consent, because data are often correlated in ways that are not identifiable or whose impacts are not immediately known.56
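One practical mitigation is to gate any automatic flow of clinical data into an AI system behind an explicit, purpose-specific consent flag. The following minimal sketch illustrates the idea; the records, field names, and default-deny rule are all assumptions, not an existing system.

```python
# Hypothetical consent gate: records reach the AI pipeline only when an
# explicit, purpose-specific consent flag is present; absence means denial.
records = [
    {"id": "p1", "note": "low mood for 3 weeks", "consent_ai_analysis": True},
    {"id": "p2", "note": "panic episodes at work", "consent_ai_analysis": False},
    {"id": "p3", "note": "sleep disturbance"},  # flag missing: default deny
]

def has_consent(record: dict, purpose: str) -> bool:
    """Default-deny: a missing or False flag keeps the record out."""
    return record.get(f"consent_{purpose}", False)

training_set = [r for r in records if has_consent(r, "ai_analysis")]
print([r["id"] for r in training_set])  # only 'p1' reaches the model
```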
AI algorithms have several other complex applications, notably predictive modeling.58 Broadly, predictive modeling leverages large quantities of personal data to uncover patterns that predict future health outcomes, which could inform treatment selection and personalization.59 However, this approach often fails to recognize the central role of patients, especially when their personal data are used to develop such algorithms.58 Consequently, the mental health patient is seldom positioned as a central collaborator or as the final beneficiary to whom both clinicians and data scientists are accountable.60 These challenges in the use of AI in mental health research and practice demand far greater scrutiny and effort from regulators and policy makers to safeguard the personal data privacy of individuals with mental health conditions.
RECOMMENDATIONS TO SAFEGUARD MENTAL HEALTH PRIVACY
The Government of India's policy think tank, NITI Aayog, published a discussion paper on the National Strategy on Artificial Intelligence that includes guidelines on privacy issues in India.54 In the absence of specific guidelines for the mental health context, we draw on NITI Aayog's guidelines to offer customized recommendations for safeguarding the data privacy of individuals in India with mental health conditions. Mental health policy makers, professionals, technologists, and related health system stakeholders can consider the following 10 measures to protect individuals' data privacy in the context of increasing access to and use of digital interventions for mental health.
1. Organizations working in the mental health space should adhere to core data protection principles such as informed consent and "data minimization" (i.e., personal mental health data should be adequate, relevant, and limited to the purpose of data collection); a brief code sketch of data minimization follows this list. These principles should be supported by data protection laws that are flexible enough to accommodate changing technologies ("technology agnosticism"),53 which is particularly relevant in mental health, where a wide range of digital interventions is being piloted in low- and middle-income countries.
2. Provision of the Aadhaar number by an individual with a mental health condition, or by a family member, should be made completely voluntary and should not be encouraged by the care provider, staff member, or anyone else in the health system interfacing with the individual. The number should be de-linked from the provision of services and from any information related to those services. We base this recommendation on the Supreme Court of India's 2018 decree that Aadhaar is not mandatory61 and the preceding 2017 Supreme Court judgment protecting the Right to Privacy as an intrinsic part of the Right to Life and Personal Liberty guaranteed under the Indian Constitution. The 2017 judgment defined 3 distinct connotations of individual privacy62: (1) "spatial control," or the creation of private spaces; (2) "decisional autonomy," or intimate choices such as those governing reproduction, faith, or modes of dress; and (3) "informational control," or the use of privacy as a shield to retain control over personal information.
3. Organizations in digital and traditional mental health systems that seek personal data (including passwords, financial data, and biometric information) should maintain reasonable security to protect sensitive personal data and should be held liable for damages when their negligence results in wrongful loss or harm to any person. In India, this aligns with Section 43A of the Information Technology (IT) Act 2000.63,64 Rules framed under the act in 2011 (the "IT Rules")63 (Table 4) should be upheld at all levels of a mental health system. Rule 3 of these 2011 IT Rules classifies as "sensitive personal data": information relating to passwords; credit or debit cards; biometric information (DNA, fingerprints, voice patterns, etc.); physical, physiological, and mental health condition; medical records and history; and sexual orientation.
4. Apart from a centrally enacted law, sectoral regulatory frameworks for mental health are equally important to establish. Mental health professionals in India are already accountable to the central and state mental health authorities under the Mental Healthcare Act 2017.65 These sectoral authorities can therefore supervise the kinds of data obtained by digital interventionists and evaluate the extent of privacy protection.
5. India's health laws should cover mental health, define privacy protection frameworks, and continually update them to reflect new and evolving risks, with reference to established international standards.48–50,54
6. AI systems developers working in mental health should conduct a DPIA50 and refer to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.51,52,54
7. When considering the role of AI algorithms in supporting symptom monitoring or informing the diagnosis or care of mental health conditions, attention is needed to avoid harm to patients and to account for the risk of bias. Developers and researchers should be made aware of the possibility of such biases arising from the subjective and expressive nature of clinical data reported in text form by mental health patients, and of the inherent risks of associating mental disorders with certain patient groups or ethnicities.66 AI systems may reproduce biases present in existing data,67 with potentially detrimental consequences for individuals. Poor quality data can also adversely affect AI systems,68 a problem compounded in resource-constrained settings such as India, where there may be additional gaps, errors, or delays in data collection. Accepted ethical principles such as autonomy, beneficence, and justice should be prioritized, particularly when using data collected from vulnerable groups susceptible to stigma and discrimination, such as many individuals seeking care for mental health challenges.69 Further, clinicians and therapists, lacking formal training in this space, may not know how to manage the granular data reported by an AI-based system or app, or may not feel fully confident in the clinical insights gathered through these systems.70 To that end, efforts are equally needed from AI researchers to bridge the gaps in data and technology literacy for both patients and clinicians. The challenge is that guidelines for the use of AI in health care settings remain insufficient,71,72 especially in lower-resource countries such as India. Even NITI Aayog's recommendations need strengthening through dedicated guidelines on deploying AI research for patients with mental health disorders and other potentially stigmatizing conditions, in connection with point 6.
8. Caution is also needed regarding the risk that AI algorithms perpetuate existing racial or ethnic biases or stigma. A prominent study from the United States73 showed that an algorithm assigned the same level of chronic disease risk (i.e., hypertension, diabetes, renal failure, high cholesterol) to Black patients as to white patients, even though the Black patients presented more risk factors and comorbidities. This racial bias reduced the number of Black patients identified as requiring additional medical care by more than 50%. The bias arose because the algorithm used health costs as a proxy indicator for health needs: since less money was spent on Black patients reporting the same level of need, the algorithm falsely inferred that Black patients were healthier than white patients with the same medical problems (a simulation of this proxy-measure failure follows this list). In the Indian context, there is a similar risk of excluding stigmatized groups. As part of the National Digital Health Mission, the Government of India has begun assigning a digital health ID to every citizen, which is voluntary "until all health data are mandatorily digitized."74 Because the digital health ID would expose an individual's entire health data across providers and treatments (i.e., a digital health profile), and given the risk of its potential linkage to the Aadhaar ID, unauthorized or unintended disclosure of an individual's mental illness or other stigmatizing conditions (e.g., HIV, TB) could result in the denial of access to crucial services or the perpetuation of stigma. For example, a transgender individual may face discrimination by an insurer or financial institution after having to reveal their gender and any prescription drugs or treatments taken.74 Linking data across health care providers may inadvertently worsen pre-existing social, cultural, and/or institutional stigma. Developers of algorithms under the National Digital Health Mission should be educated about these threats to users' rights to access services. One way to improve algorithms is to avoid convenient and seemingly effective proxy measures (e.g., health costs in the aforementioned U.S. study73) as ground truth, which can introduce bias.
9. We encourage investment and collaboration by mental health researchers and their technology partners to study and co-develop mathematical models that preserve privacy by limiting the information that can be obtained from released data, regardless of the extent of associated information.54 One example is "multi-party computation," a "toolbox" of cryptographic techniques that allows partnering organizations on a digital project to compute jointly over their data as if they shared a database. The cryptography protects the data so that the involved parties can view the relevant results without seeing individuals' underlying sensitive data. This enables secure analysis of data from different sources, which is pertinent to digital mental health interventions75 (a minimal secret-sharing sketch follows this list).
10. Increasing awareness of data privacy among individuals with mental health conditions and their families is of paramount importance. People often consent to sharing data, especially when interfacing with technology, that they would not have shared had they known the purpose for which the information would be used. There is an urgent need to include privacy rights and advisories in all digital mental health program material disseminated among beneficiaries and, at a deeper level, in medical and technological training curricula, to instill the fundamentals of privacy in medical and engineering graduates.54
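To illustrate recommendation 1, the following is a minimal sketch of data minimization at the point of intake, with an invented purpose and field list: anything not needed for the stated purpose is dropped before storage.

```python
# Hypothetical follow-up scheduling intake: only the fields needed for the
# stated purpose are retained; everything else is dropped before storage.
FOLLOW_UP_FIELDS = {"patient_code", "next_visit_date", "preferred_language"}

def minimize(submitted: dict) -> dict:
    """Keep personal data adequate, relevant, and limited to the purpose."""
    return {k: v for k, v in submitted.items() if k in FOLLOW_UP_FIELDS}

raw_submission = {
    "patient_code": "MH-0042",
    "next_visit_date": "2021-10-05",
    "preferred_language": "Hindi",
    "aadhaar_number": "123412341234",  # not needed for follow-up: dropped
    "employer": "ABC Ltd",             # not needed: dropped
}
print(minimize(raw_submission))
```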
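To illustrate the proxy-measure failure behind recommendation 8, the following toy simulation (all numbers invented) gives two groups identical true need but gives one group historically lower health spending; an algorithm that ranks patients by cost then almost entirely excludes that group from the "high-risk" list.

```python
# Toy simulation of proxy-label bias: equal true need, unequal spending.
import random
random.seed(0)

patients = []
for group, spend_scale in (("A", 1.0), ("B", 0.5)):  # group B: half the spending
    for _ in range(1000):
        need = random.gauss(50, 10)  # true clinical need, same distribution
        cost = max(0.0, need * spend_scale + random.gauss(0, 5))
        patients.append({"group": group, "need": need, "cost": cost})

# An algorithm that uses cost as a proxy for need flags the top spenders.
flagged = sorted(patients, key=lambda p: p["cost"], reverse=True)[:200]
share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B share of flagged patients: {share_b:.0%} (true need share: 50%)")
```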
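To illustrate one building block of the multi-party computation mentioned in recommendation 9, the following minimal sketch uses additive secret sharing, one of the simplest such cryptographic techniques, with invented counts: three organizations compute a joint total without any party revealing its own count.

```python
# Additive secret sharing: each organization splits its private count into
# random shares; parties sum the shares they hold; only the total is revealed.
import random
random.seed(1)
MOD = 2**61 - 1  # arithmetic modulo a large prime keeps each share uninformative

def share(secret: int, n_parties: int = 3) -> list:
    """Split a secret into n random shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

org_counts = [120, 45, 310]                  # each organization's private count
all_shares = [share(c) for c in org_counts]  # each org distributes its shares

# Each party sums one share from every organization; no raw count is exposed.
partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
print(sum(partial_sums) % MOD)  # 475: the joint total, computed securely
```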
While these 10 measures are not exhaustive, they can inform efforts to strengthen data protection frameworks and laws, including the existing draft of the Digital Information Security in Healthcare Act 2018 (DISHA) in India, which the Government of India plans to implement.76 DISHA includes provisions regulating the generation, collection, access, storage, transmission, and usage of digital health data (DHD) and the related personally identifiable information. At present, DISHA details its regulated entities, the affirmative rights of the individual providing sensitive data, guidelines on the collection and processing of DHD, the types of breach of DHD, and adjudication and enforcement in case of such offenses. Table 5 summarizes the rights of the data subject under DISHA.76
IMPLICATIONS FOR OTHER COUNTRIES IN SOUTH ASIA
While the examples presented draw extensively from the case of data linkage with Aadhaar in India, these recommendations are relevant to many other settings globally. Considering data safety in the context of emerging digital mental health interventions and the expanding delivery of care to those living with stigmatizing mental health conditions is relevant for many other lower-income countries, particularly in the South Asian region, where data safety policies are not yet well established. Table 6 illustrates the data protection contexts in Bangladesh, Bhutan, Nepal, Pakistan, and Sri Lanka.
These South Asian countries account for more than 30% of adolescents globally85 while also bearing a disproportionately large share of the global burden of mental disorders.86 These challenges are compounded by scarce mental health resources,87,88 highlighting the potential for digital interventions89–91 to bridge the care gap in the region. It should be noted that digital mental health interventions, particularly those involving online platforms and social media, could expose young users to hurtful content and hostile interactions with other users,92,93 threats to their data privacy,94,95 stigmatizing experiences that could affect their personal relationships, and unintended effects of online disclosure of personal information.96 Regulatory, systemic, and governmental efforts will be essential, with the participation of specialist and non-specialist health providers, technologists, and mental health interventionists, to prioritize the protection of the personal data and privacy of all individuals who receive these emerging interventions.
CONCLUSIONS
In India, digital mental health practitioners and interventionists can refer to the guidelines outlined in this commentary and exercise substantial privacy protection when obtaining, storing, and using the personal data of individuals seeking care for mental health concerns. Regulatory agencies in this space should also consider the GDPR, the DPIA, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and NITI Aayog's guidelines to further develop and refine their data protection efforts. Interventionists, obligated to adhere to these regulations, would then be enabled to conceive and develop privacy-sensitive intervention models. Data privacy policies are often complex and difficult to navigate, particularly for users with low literacy or those experiencing mental health symptoms; interventionists should therefore communicate clearly and succinctly the kinds of data they will obtain from users.
Obtaining informed consent should follow the privacy policy statement to ensure transparency, rather than involve a mere checkbox indicating "agreement," giving the user ample opportunity to make an informed decision about participation (which is often difficult given the fast-paced nature of installing and using digital applications). Individuals who refuse consent should still be allowed to use the intervention, with their data excluded from outcome analysis. Provision of services should be de-linked from the receipt of individual personal data. A brief, clear, and comprehensive statement on the protection of personal data privacy, fully exercising "data minimization" and dissociation from Aadhaar, would build greater trust and confidence in the digital intervention. This is particularly important as the digital mental health field continues to advance rapidly, and the implications of Aadhaar will require continued scrutiny to ensure the protection of the privacy, rights, and dignity of those living with mental health disorders.
Funding
Dr. Pathare reports receiving funding from the National Institute of Mental Health (5U19MH113174). Dr. Naslund reports receiving funding from the National Institute of Mental Health (5U19MH113211) and the Brain & Behavior Research Foundation and is supported by the Burke Global Health Fellowship at the Harvard Global Health Institute. The funders played no role in the study design; collection, analysis, or interpretation of data; writing of the manuscript; or decision to submit the manuscript for publication.
Author contributions
AB and JN conceived the idea; AB conducted the literature search and wrote the first draft; JN provided edits and revisions to multiple drafts; SP provided policy expertise and revisions to multiple drafts; and AB incorporated feedback and revisions from co-authors for the final submission.
Competing interests
None declared.
Notes
Peer Reviewed
First published online: September 8, 2021.
Cite this article as: Bondre A, Pathare S, Naslund JA. Protecting mental health data privacy in India: the case of data linkage with Aadhaar. Glob Health Sci Pract. 2021;9(3):467-480. https://doi.org/10.9745/GHSP-D-20-00346
- Received: June 23, 2020.
- Accepted: July 20, 2021.
- Published: September 30, 2021.
- © Bondre et al.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly cited. To view a copy of the license, visit https://creativecommons.org/licenses/by/4.0/. When linking to this article, please use the following permanent link: https://doi.org/10.9745/GHSP-D-20-00346