Original Paper
Abstract
Background: Artificial intelligence (AI) is increasingly used in medical care, particularly in the areas of image recognition and processing. While its practical use in other areas is still limited, an understanding of patients’ needs is essential for the practical and sustainable implementation of AI and could promote the acceptance of new innovations.
Objective: The objective of this study was to explore patients’ perceptions of the acceptance, implementation challenges, and potential applications of AI in medical care.
Methods: The study used a qualitative research design. To capture a broad range of patient perspectives, we conducted semistructured focus groups (FGs). As a stimulus for the FGs and as an introduction to the topic, we presented a video defining AI and showing 3 potential AI applications in health care. Participants were recruited from different locations in the regions of Halle (Saale) and Erlangen, Germany; all but one group were from outpatient settings. We analyzed the data using a content analysis approach.
Results: A total of 35 patients (13 female and 22 male; age: range 23-92 years, median 50 years) participated in 6 focus groups. They highlighted that AI acceptance in medical care could be improved through user-friendly applications, clear instructions, feedback mechanisms, and a patient-centered approach. Perceived key barriers included data protection concerns, lack of human oversight, and profit-driven motives. Perceived challenges and requirements for AI implementation involved compatibility, training of end users, environmental sustainability, and adherence to quality standards. Potential AI application areas identified were diagnostics, image and data processing, and administrative tasks, though participants stressed that AI should remain a support tool, not an autonomous system. Psychology was an area in which participants opposed the use of AI because of the need for human interaction.
Conclusions: Patients were generally open to the use of AI in medical care as a support tool rather than as an independent decision-making system. Acceptance and successful use of AI in medical care could be achieved if it is easy to use, adapted to individual characteristics of the users, and accessible to everyone, with the primary aim of enhancing patient well-being. AI in health care requires a regulatory framework, quality standards, and monitoring to ensure socially fair and environmentally sustainable development. However, the successful implementation of AI in medical practice depends on overcoming the mentioned challenges and addressing user needs.
doi:10.2196/70487
Introduction
In the absence of a common definition of artificial intelligence (AI), we used a broad approach in our study: AI involves using machines to simulate human reasoning and intelligent behavior, including thinking, learning, and reasoning, with the aim of solving complex problems that could otherwise only be solved by human experts [ ]. AI comprises machine learning (ML), deep learning, natural language processing, and computer vision [ ].

The presence of AI has been growing steadily over the past few decades, particularly in developed countries, and has also expanded in health care in recent years [ , ]. In 2017, Esteva et al [ ] published a study in which a neural network-based AI system outperformed dermatologists in the accuracy of diagnosing and classifying benign and malignant skin conditions. Several smartphone apps for patients promise accuracy in diagnosing skin lesions but should rather be used for self-examination or in teledermatology [ ]. By 2024, 692 AI/ML medical devices or algorithms had been authorized and listed by the US Food and Drug Administration (FDA), mainly in radiology, followed by cardiology [ ]. No such database exists in Germany [ ]. The German Federal Institute for Drugs and Medical Devices lists 65 digital health applications for patients that meet European standards for medical devices (CE-marked) and are reimbursed by public health insurance [ ]. For most of these applications, however, it is not stated whether they use AI. Approved medical devices in Germany that use AI are applied, for example, for the detection of diabetic retinopathy [ ] or for checking symptoms [ , ].

Although there are numerous approaches to AI-supported systems in medical care, their practical use seems limited [ ]. This is partly due to a lack of adequate datasets and challenges in transferring developed systems into real-world applications [ ]. If the discrepancy between theoretical development and practical implementation is not addressed, it may result in adverse outcomes and potential risks [ , ]. In this context, involving end users becomes crucial. Patient participation in health care and research has become increasingly important in recent years and has been highlighted as essential in numerous studies [ - ], as it not only enhances treatment outcomes [ ] but also improves the quality of research [ ]. Involving end users leads to a better understanding of their needs and greater acceptance of new innovations; otherwise, there is a risk of underuse, circumvention, or resistance to use [ , ]. The early engagement of patients and end users in AI research is essential for understanding their needs and identifying key points for education and practical applications [ ], thereby enabling the development of practical and sustainable AI applications [ ]. A qualitative methodology allows us to gather participants’ perceptions in an unbiased way and, in particular, to recognize the reasons behind these perceptions [ , ]. The exchange in focus groups and the stimulus given can trigger responses and allow participants to build on ideas that might not have come up in individual interviews [ ]. Previous research has primarily used quantitative methods [ ] or examined patients’ perceptions of specific medical applications or specializations [ , ]. This is also evident in Germany, where only a few studies exist [ , - ]. Therefore, a qualitative methodology is needed, in addition to quantitative work in this field [ ], to capture patients’ perspectives [ ]. In addition, specific patient populations, including outpatients, older or chronically ill patients, and those from lower socioeconomic backgrounds, have been understudied [ , , ]. To our knowledge, no study has included these patient groups and used focus groups to explore general perceptions of AI in medical care.

There are several approaches to measuring the acceptance of technical innovations, including the Technology Acceptance Model (TAM) [ , ] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [ ], which have been predominantly used in previous studies, including in the health care sector [ - ]. UTAUT combines the TAM with 7 other models and identifies 4 main factors that influence behavioral intention and usage behavior: performance expectancy, effort expectancy, social influence, and facilitating conditions; gender, age, voluntariness, and experience act as moderating factors [ ].

The overarching aim of our research is to gain insight into a practical and reasonable implementation of AI in medical care by involving potential end users. We therefore aimed to examine patients’ attitudes and perceptions toward AI regarding AI acceptance, challenges to AI implementation, and potential use in medical care, addressing the patient populations mentioned above.
Methods
Ethical Considerations
This qualitative study was conducted by a team of researchers from the universities of Halle-Wittenberg and Erlangen-Nürnberg, Germany, as part of the project “Perspectives on the Use and Acceptance of AI in Medical Care (PEAK)” and was approved by the medical faculty’s ethics committee of Martin Luther University Halle-Wittenberg (protocol 2021-229). After being informed of their rights, all participants provided written informed consent. We informed them that they could withdraw their consent at any time without providing a reason or facing any negative consequences. Participants were provided with snacks and drinks during the focus groups (FGs), though no financial compensation was offered. During transcription of the FGs, we used pseudonyms to maintain participants’ anonymity. We report our findings according to the Consolidated Criteria for Reporting Qualitative Research [ ].

Study Design, Participants, and Recruitment
We conducted semistructured FGs to capture patients’ perspectives on the acceptance, challenges, and use of AI in medical care. Participants were primarily selected through convenience sampling and further through purposive and snowball sampling. We contacted participants directly and through study information leaflets and facility staff. We recruited our participants mainly in outpatient settings in and around Halle (Saale) and Erlangen, including university areas as well as family medicine and physiotherapy practices. We also included one FG of clinical patients in a psychiatric hospital in Erlangen to increase heterogeneity. Inclusion criteria were first-hand experience of the German health care system, that is, at least one visit to a general practitioner (primary care), another outpatient medical specialist (secondary care), or a hospital (tertiary care) during adult life (including current and previous visits). Patients younger than 18 years, lacking proficiency in German, or unable to consent were excluded.
Examples and Focus Group Topic Guide
As a stimulus for the FGs, we (TA and JG) created a video defining AI and showing 3 potential health care applications: (1) diagnosis and symptom check (Ada app [Ada Health GmbH]) [ , ], (2) treatment (alternative medication plan) [ ], and (3) process optimization in patient care (voice assistance) [ ]. To reduce bias, we varied the order in which these examples were presented across the FGs.

We (JG, SN, and CB) created a topic guide to generate open responses. We developed our guide using Krueger and Casey’s [ ] approach to FG guides, Helfferich’s methods [ ], the TAM [ , ], and the UTAUT [ ]. Participants were asked the following questions:
- What factors would make you more or less likely to accept an AI system in medical care?
- What challenges do you see for a successful use of AI in medical care?
- Where do you see potential applications for AI in medical care, and where do you see none?
We pretested the topic guide and examples with colleagues and with the first FG of patients.
Data Collection
Before the FGs, we collected participants’ sociodemographic data and health-related information. To assess technology affinity, we used the Perceived Technology Competence scale [ ]. From June 2022 to March 2023, we conducted 6 FGs at university or medical practice locations, with 5 to 8 participants each, until thematic saturation was reached. FGs began with a video introduction to the topic, followed by discussions stimulated by our topic guide questions. The interviewer (JG) was a medical doctor, working as a researcher and doctoral candidate, and unknown to most participants. We informed participants about JG’s background and the aim of the study and made sure that all participants had the opportunity to express their own opinions. We audio-recorded all FGs, which lasted between 86 and 134 minutes, and took field notes (Carsten Fluck [CF, research assistant in PEAK] and SN).

Data Analysis
We systematically analyzed the textual material and categorized it using a content analysis approach [ ]. To develop a category system, we (SN, CB, CF, and JG) independently coded one exemplary focus group, discussing and refining assigned text segments and categories until consensus was reached. JG and CF applied the category system to all FGs and debated changes after each FG until they reached consensus. Based on the FG topic guide, we created the main themes deductively, while we developed the subthemes inductively from the data. JG grouped coded segments together to identify key issues and to further structure the material. To ensure the validity and reliability of the content analysis [ ], we compared our analysis tool with similar constructs from the literature (construct validity) and assumed that the category system and category definitions were appropriate (semantic validity) because the coded text passages were homogeneous within their respective categories. All researchers used MAXQDA 2022 (VERBI Software GmbH) for coding and analysis.

Results
Participants’ Characteristics and Experience With AI
Thirty-five patients participated, with a median age of 50 years (range 23-92 years); a majority had chronic diseases. Thirteen identified as female and 22 as male. Participants’ socioeconomic status (SES) varied, with a trend toward higher SES. Participants frequently showed high or medium affinity for new technology (see the table below).

Characteristic | Participants
Age (years), median (range) | 50 (23-92)
Gender, n (%) |
Female | 13 (37)
Male | 22 (63)
Highest education level, n (%) |
General qualification for university entrance (12-13 y) | 19 (54)
General certificate of secondary education (9-10 y)a | 16 (46)
Vocational qualificationb, n (%) |
Completed vocational training | 21 (60)
In vocational training | 3 (9)
Advanced technical college certificate or university degree | 15 (43)
No vocational qualification | 2 (6)
Other vocational qualification | 2 (6)
Employment status, n (%)c |
Employedd | 19 (54)
Not employed | 15 (43)
Thereof pensioners | 10 (29)
Thereof students | 2 (6)
Chronic disease(s)e, n (%) |
Yes | 20 (57)
No | 14 (40)
Frequency of GPf consultatione, n (%) |
Less than once every 3 months | 15 (43)
Once every 3 months | 13 (37)
Two to three times in 3 months | 4 (11)
Four times or more in 3 months | 2 (6)
Relationship to GP, n (%) |
Very good | 18 (51)
Rather good | 13 (37)
Neutral | 1 (3)
Rather poor | 2 (6)
Very poor | 1 (3)
Affinity for new technologyg, n (%) |
Low | 7 (20)
Medium | 12 (34)
High | 16 (46)
aIncludes the German “Hauptschulabschluss.”
bSome participants had more than 1 vocational qualification.
cOne participant in vocational training only.
dTwo participants were simultaneously in vocational training, and 1 participant was simultaneously in retirement.
eOne participant did not respond.
fGP: general practitioner.
gScale: Mean value of 3 items ≤2 low, >2 to <4 medium, and ≥4 high.
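As a minimal illustration of the classification rule in footnote g, the following sketch applies the stated cut-offs to the mean of the 3 scale items; the 1-5 response format used in the example is an assumption for illustration, as the items’ exact response options are not reproduced here.

```python
def affinity_category(item_scores):
    """Classify affinity for new technology from the mean of the 3 scale items.

    Cut-offs follow footnote g: mean <= 2 -> low, >2 to <4 -> medium,
    mean >= 4 -> high. A 1-5 response format is assumed for illustration.
    """
    if len(item_scores) != 3:
        raise ValueError("the scale uses exactly 3 items")
    mean = sum(item_scores) / 3
    if mean <= 2:
        return "low"
    if mean < 4:
        return "medium"
    return "high"


# Example: answering 4, 5, and 4 gives a mean of 4.33 and is classified as high.
print(affinity_category([4, 5, 4]))  # high
print(affinity_category([2, 3, 3]))  # medium (mean 2.67)
```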
Most participants reported no experience with AI in medical care. A few stated they had had contact with or heard of AI in image recognition and processing (eg, computed tomography and magnetic resonance imaging), dental technology, pill reminder apps, care robots assisting with transport and entertainment, or surgical robots. However, surgical robots were more likely to be perceived as assistance systems that require human guidance.
The following sections present four main themes: (1) factors that promote the acceptance of AI systems in medical care, (2) factors that hinder the acceptance of AI systems in medical care, (3) patients’ perceived challenges and requirements for implementation, and (4) use of AI in medical care. The multimedia appendix shows the participants’ illustrative quotes for each subtheme.

Theme 1: Factors Promoting Patients’ Acceptance of AI
Participants highlighted comprehensible instructions and explanations of the purpose of AI as beneficial, particularly for users with limited technical knowledge. Participants stated that AI should be easy to use and should improve personal life and patient care without imposing restrictions. Perceived enhancements included reduced costs, shorter waiting times, and accelerated treatment. To guide system development, participants emphasized the importance of giving users a permanent opportunity to provide feedback to developers. Furthermore, participants indicated that AI should be based on a sufficiently large and representative database.
Ensuring transparency to understand the objectives of the development, data processing, and use of AI systems was identified as essential. Participants often lacked awareness or information about whether AI was running in the background, making it challenging to differentiate it from other technologies. Particularly when AI is used by other institutions, such as health insurance companies, transparency would allow patients to object to its use.
Participants feared that high costs could disadvantage patients who could not afford to use AI. Furthermore, they highlighted that the primary objective of AI should be to improve patient well-being, rather than focusing on commercial optimization. They suggested that financiers’ goals could influence AI development, leading to lobbying for more profitable options and potentially subpar medical care. Therefore, participants argued that nonprofit development and funding, for example, by an independent government body, would be crucial to ensure acceptance.
Well, I think the trick is, it has to be like, AI has to be geared towards patients. …And not commercially optimized. … The goal must be like: does it benefit the patient? If not, then it won't happen. That would actually be a good ethical approach, I think.
[FG Participant 3]
Participants felt the need for AI systems to be tested similarly to medical devices; testing with external validation and long-term use would be encouraging. In contrast, most participants did not want to be the first to have an AI system used on them.
A minor point raised was that the media and scientific institutes should provide objective information about AI systems, including details of testing and features, as well as positive patient and physician testimonials.
Theme 2: Factors Hindering Patients’ Acceptance of AI
Participants considered human supervision and decision-making authority as a prerequisite for AI implementation, decreasing their acceptance if AI systems made decisions without human intervention. Human supervision was perceived as essential since the AI’s decision-making process is not comprehensible, and physicians could better assess whether the results are appropriate for individual patients.
In my opinion, a human should always make the decision.
[FG Participant 5]
Definitely, someone who checks what the AI has done.
[FG Participant 2]
Participants identified a possible lack of data protection and the misuse of personal data as relevant barriers to the acceptance of AI systems. They emphasized the importance of protecting privacy and personal data in medical applications from external access or trade, as misuse could also have negative effects on patients.
Participants stressed the importance of medical professionals’ attitudes toward AI systems, stating that if physicians were unconvinced or opposed to AI systems, participants' acceptance of these applications would decrease.
The subthemes of themes 1 and 2, and therefore the factors that promote or hinder participants’ acceptance, are summarized above; the multimedia appendix shows the participants’ quotes.
Theme 3: Challenges and Requirements for the Successful Use of AI in Medical Care
The challenges mentioned were often also perceived as requirements for successful implementation and are therefore presented together here (in addition, see the multimedia appendix). We categorized 8 subthemes for this topic.
Resource Consumption and Lack of Compatibility
Participants pointed out that the widespread use of AI requires energy and human resources. Furthermore, AI requires substantial technical resources, which are currently not available ubiquitously, as the technical infrastructure, for example, the availability of software and hardware, varies between health care facilities, making widespread use and networking difficult. Participants supported regional and global networking for physicians and health care facilities to build databases and use AI systems, for example, to share information or coordinate treatment, but perceived the technical implementation as challenging. The lack of standardized software and incompatibility between existing systems were also regarded as challenges to the successful use of AI and networking. They emphasized the importance of compatibility between newly developed systems and those already in use.
So I think something like that could perhaps arise as a problem. With more providers and the more diversity, it might be more difficult to pick out a good system, and if they [doctors] want to exchange something with each other, that it's not all compatible.
[FG Participant 1]
Participants therefore considered the provision of technology, and in particular its practical functionality and maintenance, a major challenge. Ensuring energy security and rapid technical service in the event of system failure was perceived as challenging but crucial, especially for medical applications on patients. Participants noted the need for human resources to develop and test AI, considering it difficult if additional staff were required to analyze and operate the AI, given the current staff shortage in the health sector. Participants also highlighted the importance of environmental sustainability in the context of digitalization and internet use, particularly for AI, which requires large servers for data collection and storage.
But you always have to consider sustainability. ...And if artificial intelligence is not sustainable, we'll have problems.
[FG2 Participant 4]
Comprehensibility and Access for Everyone
Participants stated that AI systems should have an intuitive interface, which is understandable to and usable for people without technical or medical expertise. Adaptable explanations, different languages, and support staff were suggested to ensure comprehensibility for all ages and levels of education. Participants also emphasized that AI systems should be accessible to all population groups.
I’d say accessibility above all. So that someone who is not tech-savvy can simply still go there. And I have a menu or some kind of interface here that I can just use instinctively.
[FG5 Participant 6]
Education and Training
Participants emphasized the importance of user training and education for the successful application of AI, recommending short instructions and learning platforms for patients who use AI systems independently. Some argued that physicians using AI would need training or qualifications, which they should arrange themselves, while others suggested that instructions for new AI systems should be made available to physicians and that integration into medical studies should be considered.
…[and for] the instruction to be so that it can be done, let’s say, in maybe ten minutes. So, the patient doesn't have to attend a course for a week to grasp the technology, because I think that would put most people off.
[FG Participant 3]
Financing of AI
According to participants, the development and use of AI could pose a noteworthy financial challenge. Participants questioned whether AI could be made available to everyone for free, and whether health insurance companies would subsidize AI applications.
And the other question is whether I could even pay for it. Whether I could pay for it at all.
[FG2 Participant 4]
Database
Participants identified the provision of sufficiently large amounts of recent and representative AI learning and working data as important. They expressed that a large number of training datasets would be necessary for adequate development and would therefore increase acceptance. In some cases, however, data provision was seen as difficult, for example, due to legal or time constraints from authorities or physicians.
The problem is learning data. I first need a huge amount of data to train it. Otherwise, I don’t get the precision I want.
[FG3 Participant 4]
Building Trust or Acceptance
Another challenge identified by participants was the need to build trust or acceptance of AI among patients and physicians, through time or positive experiences (see factors promoting acceptance). Participants stated that, as patients, they always had to trust the people treating them first, which would be no different with AI.
Acceptance from the users. Either doctors or patients. There has to be a certain level of acceptance. And it has to be built. Perhaps really through publications, information, education.
[FG4 Participant 7]
Integration Into Everyday Work and Practicability
A minor aspect mentioned was the challenge health care professionals face in integrating AI into their daily work, including the need for additional skills and time to explain and operate the technology. Participants suggested developing AI with a practical focus, involving users in the process, and gathering feedback to ensure easy integration into daily workflows.
Well, there has to be a benefit. Nobody is going to develop something that isn’t going to be useful or effective in practice in the end. ...So it has to be useful in practice somehow.
[FG1 Participant 2]
Institutional Surveillance and Certification
Participants preferred treating AI as a medical device, requiring approval before implementation. They believed that current evaluations lacked oversight and independence and suggested alternatives that would assess AI systems based on medical benefits, applicability, and ethical considerations rather than just economic factors. They advocated for transparent quality standards with defined goals and guidelines and discussed the need for regulatory agencies or legal systems to set these guidelines. They also emphasized the need for oversight institutions to monitor AI operations, ensuring compliance and consequences for noncompliance. Minor aspects mentioned were legislation requiring humans rather than AI to make treatment decisions, as well as the role of government regulation and a legal framework in promoting the fair use of AI without stifling development.
Well, that I have at least one institution that certifies or monitors the whole thing.
[FG5 Participant 6]
Theme 4: Use of AI in Medical Care
Overview
The use of AI in medical care was a topic of debate for participants, with opinions ranging from its potential use everywhere, to relevance only in clinical settings and not for individuals, to being entirely inconceivable. The dominant opinions were that AI was imaginable as a support tool and information source, predominantly for physicians, and that skepticism remained about AI making independent decisions, particularly in medicine, as this could involve health-related decisions and ethical considerations.
I’m a little tech-savvy. …But when it comes to medicine, my skepticism grows, to be honest. And there are a lot more ethical problems with medicine. Start small. Don’t immediately think of the doctor-AI doing everything.
[FG4 Participant 7]
Potential Future Areas of Application
Participants mentioned potential future areas of application throughout the treatment process (see the subthemes in the table below). They considered AI to be beneficial in communicating with individuals who are unable (eg, by suggesting appropriate sentences) or afraid to speak with physicians (eg, due to anxiety or shame-inducing issues) and for adapting language levels or using different languages. Furthermore, AI could be used as a documentation support tool, for instance, during anamnesis or in the creation of drafts for physicians’ letters.

Themes and subthemes | Description of thinkable or unthinkable tasks provided by AIa
Potential future areas of application |
Communication and documentation assistance | To assist in communicating medical history and structuring documentation.
Research, data collection, and networking | For a (long-term) structured collection of patient data, findings, and diagnoses and an accelerated transfer of these data.
Diagnostics | As analysis tools for recording, processing, and monitoring patient values and alerting in case of disease development; as support in imaging procedures.
Therapy support and invasive interventions | As support in medication planning, physiotherapy, rehabilitation exercises, and (remote) operations (mainly in surgery or orthopedics); for certain interventions with partly autonomous acting systems.
Care and everyday support for people in need of care | As robots to assist with manual tasks, care, and household activities; to record vital signs; for entertainment, including robotic animals.
Process management | To support processes in hospitals or practices.
No potential future areas of application |
Invasive interventions | Operating AI or robots that control themselves (especially in operations on vital organs or neurosurgery), or interventions directly on the human body (eg, taking blood samples).
Care and direct patient contact | Activities requiring physical proximity or interpersonal relations, and entertainment or animal robots.
Empathic conversation (eg, in psychology) | Delivery and disclosure of serious illnesses or news; understanding, analyzing, assessing, and supporting the human psyche; psychotherapy; however, conceivable as support for conversations and medication regarding psychological illnesses.
Other specialties and therapy | Gynecology and urology; general medicine; sole therapeutic use, regardless of the specialty; nonquantifiable examinations such as visual assessment or palpation.
aAI: artificial intelligence.
Participants suggested that AI could be used in data collection and networking, building a database for physicians, and enabling early intervention through preventive monitoring of patient data. According to the participants, consolidating patient data and facilitating the exchange of expert knowledge could speed up treatment processes, identify correlations, and gain new insights. Furthermore, AI could support patient-physician coordination. For example, in research, AI could overcome language barriers to compile results for global studies.
Diagnostics was identified as a major field, as AI could accelerate the process through objective data analysis and collection, providing a basis for decision-making for physicians. Participants suggested including patient values in symptom checkers to improve the trustworthiness of diagnostic suggestions and reduce uncertainty. AI’s potential in image recognition, processing, and editing was expected to lead in the near future (eg, in radiology; in neurology, for the evaluation of EEG data; and in dermatology, for the screening of naevi), due to its already powerful capabilities in this area.
According to participants, AI could be used in therapy to help ensure that side effects, drug interactions, and current knowledge are taken into account. AI could be useful in providing rehabilitation and physiotherapy support by creating appropriate exercise plans with motivational elements, individual adaptation, and monitoring of exercises. Another possible area was operations, where a personal or verbal component seemed less important.
Participants discussed the use of AI in care and entertainment to address the shortage of skilled workers. Robots (featuring AI) could simplify work for carers and provide companionship for those in need of care. Beneficially, robotic animals would not cause allergies and would not have any physical needs. However, it was argued that AI alone should not be the answer; rather, a change in the approach to care would be needed to ensure enough carers.
Another area of application identified by participants was process management. They found AI useful in medication provision, operating theatre and bed management, and patient triage. They identified great potential in administrative tasks, such as optimizing ordering systems with faster appointment allocation, assisting patients with follow-up by providing necessary information and reminders, or providing physical assistance with luggage robots.
No Potential Future Areas of Application
Participants’ opinions differed regarding the potential use of AI in care and invasive interventions, ranging from possible (see above) to unthinkable. They expressed a lack of confidence in AI’s ability to perform operations reliably, as there would be no error tolerance. They could not imagine AI having the flexibility and short adaptation times that would be required, for example, due to individual anatomy or the occurrence of errors. Participants opposed AI in care, arguing that human interaction in care is crucial and should not be replaced by AI. They feared that vulnerable people in need of care (such as people with disabilities, children, and older adults) could be further excluded from society through the lack of human contact. Some commented that they found the idea of using care robots or animal robots sad and questioned how society would deal with people who need support in the future. In addition, operating these devices could also cost carers more time, potentially further reducing human contact.
Other areas that respondents felt were unsuitable for AI were tasks requiring empathic conversations, certain specialties, and sole AI use in therapy, regardless of specialty. Participants expressed distrust in AI’s ability to possess empathy and understanding of the human psyche (and its illnesses), which was mentioned as especially important in conversations. They also questioned whether AI could make appropriate therapy recommendations and provide support during difficult times. Participants highlighted that they would not want to be informed by AI about serious illnesses, or would be unsure how to deal with such a situation, because of the need for human contact in these settings. Participants opposed AI in gynecology or urology, either due to the sensitivity of health matters shared or for other unspecified reasons. AI in general medicine was also perceived as inappropriate, as patients often seek personal contact. A minor aspect was that research could not be imagined as a potential area for AI, as it requires human foresight. Subthemes with descriptions of thinkable and unthinkable tasks provided by AI are presented in
the table above, while participants’ quotes are presented in the multimedia appendix.

Discussion
Principal Findings
This study set out to examine patients’ perceptions of AI in medical care regarding acceptance, challenges, and use. According to participants, factors such as practicality, environmental sustainability, comprehensibility, accessibility for all, adherence to quality standards with proper monitoring, and a focus on patient well-being rather than profit should be considered in the development and implementation process. Though participants were not generally opposed to AI, there was some skepticism about its use, particularly in medicine. Participants could imagine AI as a support tool, but not as an autonomous system, indicating a desire for human control. Opinions diverged particularly on its use in care and operations. While diagnostics, including image recognition and processing, was seen as a dominant potential area of AI support, its use in areas where human interaction and conversation are essential was rejected.
Comparison With Previous Work
Acceptance of AI
The UTAUT can help identify factors influencing user acceptance, especially among those hesitant to adopt new technologies, and can be applied in the development of technical innovations [ ]. The results of our study regarding patients’ acceptance partly align with the main determinants of UTAUT (performance and effort expectancy, social influence, and facilitating conditions).

Participants expected AI to improve care or living conditions (performance expectancy). Important acceptance criteria included simple functionality and handling and easy-to-understand explanations (effort expectancy), in line with previous studies [ , ]. Knowledge about AI can increase its acceptance [ , ], and both our participants and other stakeholders, including the European Commission, consider the transfer of knowledge and the training of medical staff and users to be challenging but indispensable prerequisites [ - ].

Participants valued recommendations from peers and physicians (social influence), stating that negative attitudes from physicians would reduce their acceptance [ ]. In line with the literature [ ], participants’ acceptance would increase if AI was tested in studies and in practice, but no one wanted to be the first to test the system.

Factors promoting participants’ acceptance (facilitating conditions) were in line with previous studies and included data protection, patient-oriented and nonprofit development and implementation [ , ], and a large and representative database for AI [ - ]. Previous studies have demonstrated that data protection and transparency in data use are essential for the development, trust, and acceptance of AI [ , ], which is consistent with our findings. To prevent data leaks, robust security measures must be implemented, but this can hinder the acquisition of a sufficiently expansive and representative database [ ]. In accordance with the literature, our participants viewed attaining such a database as a challenge, but an imperative requirement for AI to function adequately [ - ]. Furthermore, participants and the current literature also note that AI systems should be trained on diverse data [ ] to avoid inheriting existing inequalities from models or the training dataset [ ]. Due to the sensitivity of health care data, participants emphasized its protection [ ] and the trustworthiness of entities receiving their data, which would be essential for data sharing [ ]. Trustworthy AI should comply with ethical and legal regulations and maintain technical and social functioning throughout its lifecycle [ ]. The US FDA, which approves medical AI devices, recommends cybersecurity, risk management, and postimplementation monitoring and evaluation, and calls for adaptive, science-based regulations that protect against risks without limiting benefits [ ]. The European AI Act identifies AI in health care as a high-risk application, as it deals with personal data, and likewise emphasizes transparency, cybersecurity, and risk management throughout the AI lifecycle, as well as data governance to ensure representative and error-free training data [ ]. Patients strongly opposed selling health data to private companies for AI research, although some argued it could be justified if the product benefits patients [ ]. This distrust toward private health care companies was also evident in previous studies [ - ]. One of the most important findings was that participants desire independent AI development and funding aimed at patient benefit; this concern is justified, as AI may be used to increase profits [ ]. The FDA also sees this risk, although it acknowledges that the relationship between financial optimization and improved health outcomes can be complex and result in financial disadvantages for provider organizations, insurance companies, or health systems. However, sponsors should be transparent and focus on health outcomes, and a comprehensive and regular approach across the health system is needed to counter the negative risks of financing and keep pace with the development of AI [ ].

The UTAUT model provides a good orientation for measuring the acceptance and intended use of AI. Due to the models’ criticized lack of complexity [ , ], UTAUT (and TAM) have been adapted and extended for use in the health care sector [ , ]. Nevertheless, they are only partially applicable to all health care issues and their stakeholders, including patients, physicians, and carers. Thus, as our results also show, sociocultural aspects and factors such as training or integration into everyday working life are important in health care applications [ , ] and need to be integrated into these models to reflect the complexity of digital applications in health care [ ].

Challenges and Requirements
Participants emphasized that AI should be accessible to and usable by all people, not exacerbating inequalities, as could occur through biased training data, as mentioned above, or through a lack of technical or financial possibilities [ ].

In the context of resource use, participants mentioned that environmental sustainability should be considered when developing new AI systems. In a Swedish survey of health care managers, the climate aspect was mentioned in connection with the successful implementation of AI [ ], and the European Commission identifies environmental sustainability throughout the lifecycle of AI as a requirement for trustworthy AI [ ]. There are also efforts to assess the environmental compatibility of AI [ ] and to identify the environmental impact of medical digitalization [ ]. Participants considered environmental sustainability necessary, as the global digital infrastructure already consumes many resources and has a high carbon footprint. The participants’ assumption is not unfounded: in 2019, 3.8% of greenhouse gas emissions were attributed to the digital sector [ , ], and the trend is rising. AI has great potential to promote sustainable development and reduce the environmental footprint [ , ]. Yet it also has negative environmental impacts that require careful use and a balanced approach involving regulation and all stakeholders [ - ].

Comparison of our findings with the existing literature confirms that patients prefer AI applications to be certified by external, independent institutions [ ]. Participants and other health care stakeholders agreed that AI should meet quality standards, similar to medical devices [ , ]. As noted above, a regulatory framework was seen as crucial to the development and implementation of AI [ , , ], guiding the development process without hindering progress and requiring oversight during application [ ]. If AI could be fed with a global dataset in the future, globally applicable regulations would be a logical consequence, although their implementation would undoubtedly prove challenging. Although recent developments, such as the European AI Act [ ] and American regulatory approaches to AI in medical applications [ , , ], attempt to introduce compatible standards, there seems to be a lack of global guidelines.

AI Use in Medical Care
Despite participants expressing a general openness toward AI, there is considerable skepticism, even among tech-savvy participants, about AI-based decision-making and conversational guidance [ ]. Particularly for patients with chronic or terminal illnesses, the human factor appears to be of paramount importance [ , ]. Thus, some participants who could imagine using AI for surgery were against its independent use but in favor of AI support [ ]. In accordance with the literature [ ], opinions ranged from the independent use of AI for minor interventions to no use at all in surgical procedures.

As a key finding of our study, and in line with previous studies, participants preferred AI as a support system with human supervision rather than as an autonomous system [ , , ]. As the lack of human involvement is a relevant barrier to the acceptance of AI systems [ , ], potential applications should focus on support rather than independent functioning.

Our study indicates that most participants can imagine AI being used in diagnostics or data processing [ ]. In line with the literature [ , ], participants attributed a more objective diagnostic capability to AI and felt that AI could quicken the diagnostic process [ , ] and assist physicians, including in the identification of rare diseases [ ].

AI-based systems already exist for outpatient and inpatient care [ ]. Strikingly, there were substantial differences in participants’ opinions regarding the use of AI in care. People in need of care may no longer be able to advocate for themselves, which increases the need for protection and raises ethical questions about the use of AI in care [ ]. Furthermore, care is an intimate setting that involves both physical and emotional aspects, where maintaining communication and fostering a trusting relationship are of great importance [ ]. Many participants felt that these aspects could not be provided by AI and therefore considered its use in care undesirable for activities beyond the manual relief of staff. In line with Deckert et al [ ], they concluded that AI could only be integrated into care to a limited extent and could not replace nursing staff. Maintaining the interpersonal dimension would be a challenge [ ] that skeptical participants felt AI could not meet. Other participants expressed the opposite view, arguing for the integration of AI in care, often stating that AI would be better than no contact at all. The diversity of opinions and concerns underscores the importance of a balanced approach to implementation in care that combines AI and humanity [ ]. Furthermore, participants could not envisage AI in psychology, where conversational interactions are central. Nevertheless, some stated that certain patients might find it easier to confide in an AI than in medical staff concerning shameful issues or fears. Szalai [ ] describes this aspect in the context of borderline therapy. Furthermore, conversational AI is advancing, as it is capable of engaging in moderate conversation through language processing, using psychotherapeutic techniques [ ]. Despite the existence of numerous potential applications for AI in mental health treatment, its actual use in clinical practice remains limited [ , ]. Most machine learning solutions for mental health are developed without involving end users and their individual needs, which can create barriers to using existing options [ ]. Furthermore, there are concerns that human characteristics such as imperfection [ ] or empathy [ ] are necessary for psychological treatment, a perception shared by our participants. Exclusively technically generated treatment plans do not include the complete assessment by, and emotional awareness of, physicians [ ]. This can critically reduce treatment success and discourage patients from continuing treatment [ ]. AI has the potential to support mental health care, which seems particularly beneficial in light of the growing need for it [ ]. However, in this specialist area, it seems particularly important to maintain basic ethical principles and physician involvement [ ].

As we examined patients’ perceptions in two regions of Germany (southwest and central-east), and existing German studies in other health care settings have reported similar results [ - ], our findings are applicable to Germany as a whole. Patients’ perceptions of AI are also similar across other European countries [ , ] and industrialized nations [ , ].
The qualitative design enabled insight into patients’ perceptions and attitudes toward AI in medical care, and the focus groups contributed to a deeper discussion of the topic. Our questions, examples, and definition of AI merely provided a stimulus, allowing for open-ended responses and making this rather abstract topic more relatable. These opinions should be taken into account in the development of new tools, as potential applications will be used on and by patients. A further strength of our study is the diversity of participants in terms of age, health history, technical affinity, and SES. In addition, the majority were outpatients, who have been underrepresented in previous studies. The wide range of participants made it possible to realistically reflect patients’ perceptions, helping to shape AI development in a practical way. In particular, the challenges and requirements highlighted will contribute to expanding the current state of knowledge and enable the sustainable development of new AI systems.
It is important to note that the sampling and recruitment process may have introduced some selection bias into the results. As only patients who were interested in the topic and who tended to consider themselves tech-savvy participated, this may have strongly influenced their perceptions of AI and, therefore, the results. Although we tried to achieve a high level of diversity in terms of SES and affinity for technology, the majority of participants had a higher SES and a medium to high affinity for technology. The results may have limited applicability to populations with low SES and low affinity for technology, and these groups should continue to be addressed in future studies. As fewer people with a lower SES participated, it is possible that their opinions, especially their needs regarding AI systems in medical care, are underrepresented. Requirement priorities may differ for this group, for example, regarding secure funding, information pathways to reach this population, or the design of AI systems. Although participants stated that AI systems should be understandable and usable by people of all educational levels, this can only be ensured by explicitly asking all groups. In countries with substantial differences from German medical care or in the use of AI in health care, the applicability of our results is limited. In addition, it is possible that the topic guide questions and the examples provided influenced the patients’ responses and the perceived importance of topics. The medical background of the interviewer may have influenced the participants’ responses, for example, by making them less open in their criticism of physicians or medical care, or by prompting them to articulate more desirable topics, such as support for medical staff. It is also possible that, because of this medical background, participants assumed the interviewer had greater knowledge of AI in medical care and therefore saw themselves as inadequate participants, which may have led to more restraint. Most of the scenarios discussed were hypothetical, and the majority of participants were AI laypersons with no experience of AI in medicine and perhaps also a limited understanding of AI’s capabilities. Thus, the results should be interpreted accordingly. However, the perceptions of patients as laypersons are particularly interesting and relevant, as it is important to include end user views in the early development process.
Future Work
The findings of this qualitative study, including the themes that patients identified as important, were used to develop a questionnaire for a subsequent quantitative study. It may thereby be possible to reach, in greater numbers, the aforementioned underrepresented patient groups, which we could address only to a limited extent in this study.
The perspectives of care recipients and patients from different ethnic backgrounds would be of particular interest for future research, as they have not been well studied. In addition, future studies need to examine the actual implementation of AI in health care settings and how the aforementioned requirements could be realized to ensure sustainable and fair development.
Conclusions
Based on the results of our study, recommendations for developing patient-centered AI systems in medical care can be derived. Recommendations for developers include practical development with user involvement and feedback, constant human control and final decision-making, compatibility of newly developed systems with each other and with existing systems, sufficiently large and representative training data, ensured transparency and data protection, comprehensible instructions, an intuitive interface and usability, and adaptation to all age and education groups. Using AI as a supportive tool rather than a replacement, and ensuring final human control, was identified as crucial for implementing AI in medicine. Health care providers should learn about the capabilities of new systems before using them so that they can evaluate the results of AI and explain the applications in use to patients. They should also maintain a human approach and be aware that their assessments of new AI systems will influence patients’ perceptions of AI. For their part, legislators should introduce clear quality standards and certifications to assess the trustworthiness of new AI systems as medical devices. These standards should include measures for socially fair and environmentally sustainable development and use, as well as education and training for users on system functions and practical use. At the same time, potential systems should be tested and verified for compliance with the standards before use, and compliance should be monitored. The most important guiding principle should always be patient welfare, not profit.
The successful implementation of AI systems in medical care faces many challenges, in part due to the prevailing caution and skepticism in this area. Nevertheless, patients were generally open to the use of AI systems, provided that their development is not primarily driven by profit, and recognized their potential to support medical care. The extent to which such systems can be integrated into everyday medical practice will depend on whether the identified requirements and the needs of users are taken seriously and whether the aforementioned challenges can be overcome.
Acknowledgments
We thank all the focus group participants for their participation. The study was part of the PEAK project (Perspectives on the Use and Acceptance of Artificial Intelligence in Medical Care). The PEAK consortium contributed to conceptualization, funding acquisition, project administration, supervision, data collection, and development of mock-ups and the video. Other PEAK consortium members involved in the research were Hans-Ulrich Prokosch (Professor), who contributed to conceptualization; Jan Schildmann (Professor), who contributed to guidance and methodology; Carsten Fluck, who supported investigation and formal analysis; Nadja Kartschmit (PhD), who contributed to conceptualization and funding acquisition; and Iryna Manuilova, who managed the website. The PEAK project is funded by the Innovation Fund of the Federal Joint Committee (G-BA [01VSF20017]). The G-BA had no role in the design and conduct of the study, including data collection, data analysis, interpretation of data, and preparation of the manuscript for submission and publication. The opinions expressed in the submitted article are those of the participants (interpreted and summarized by the authors) and do not represent the official position of the institutions or the funders. The examples of AI in medical care are provided for illustrative purposes only and do not constitute an advertisement or a recommendation by the authors.
Data Availability
The datasets generated and analyzed during this study are not publicly available as the participants’ anonymity and privacy may be at risk if all collected qualitative data is shared. The datasets are available from the corresponding author on reasonable request.
Authors' Contributions
RM and JC contributed to conceptualization. JG, SN, and CB contributed to formal analysis. RM contributed to funding acquisition. JG, SN, and TA contributed to the investigation. JG, SN, CB, CT, and KD contributed to methodology. SN contributed to project administration. TF managed resources. RM, JC, SU, and TF performed supervision. CT and KD contributed to validation. JG contributed to the visualization. JG and CT contributed to writing the original draft. All authors contributed to reviewing and editing. All authors read and approved the final manuscript.
Conflicts of Interest
None declared.
Multimedia Appendix
Illustrative quotes organized by themes and subthemes.
DOCX File, 28 KB

References
- Shen TL, Fu XL. Application and prospect of artificial intelligence in cancer diagnosis and treatment. Zhonghua Zhong Liu Za Zhi. 2018;40(12):881-884. [CrossRef] [Medline]
- Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92(4):807-812. [CrossRef] [Medline]
- Fast E, Horvitz E. Long-term trends in the public perception of artificial intelligence. AAAI Conf Artif Intell. 2017;31(1). [FREE Full text] [CrossRef]
- Wang Q, Sun T, Li R. Does artificial intelligence (AI) reduce ecological footprint? The role of globalization. Environ Sci Pollut Res Int. 2023;30(59):123948-123965. [CrossRef] [Medline]
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115-118. [FREE Full text] [CrossRef] [Medline]
- Freeman K, Dinnes J, Chuchu N, Takwoingi Y, Bayliss SE, Matin RN, et al. Algorithm based smartphone apps to assess risk of skin cancer in adults: Systematic review of diagnostic accuracy studies. BMJ. 2020;368:m127. [FREE Full text] [CrossRef] [Medline]
- Artificial intelligence and machine learning (AI/ML)-enabled medical devices. US FDA. URL: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices [accessed 2024-04-25]
- Wehkamp K, Krawczak M, Schreiber S. The quality and utility of artificial intelligence in patient care. Dtsch Arztebl Int. 2023;120(27-28):463-469. [FREE Full text] [CrossRef] [Medline]
- Digital health applications. Federal Institute for Drugs and Medical Devices (BfArM), Germany. URL: https://diga.bfarm.de/de/verzeichnis [accessed 2024-10-29]
- Ipp E, Liljenquist D, Bode B, Shah VN, Silverstein S, Regillo CD, et al. EyeArt Study Group. Pivotal evaluation of an artificial intelligence system for autonomous detection of referrable and vision-threatening diabetic retinopathy. JAMA Netw Open. 2021;4(11):e2134254. [FREE Full text] [CrossRef] [Medline]
- Ada: Gesundheit, powered by Ada. URL: https://ada.com/de/ [accessed 2024-04-09]
- Morse KE, Ostberg NP, Jones VG, Chan AS. Use characteristics and triage acuity of a digital symptom checker in a large integrated health system: Population-based descriptive study. J Med Internet Res. 2020;22(11):e20549. [FREE Full text] [CrossRef] [Medline]
- Cabitza F, Campagner A, Balsano C. Bridging the "last mile" gap between AI implementation and operation: "data awareness" that matters. Ann Transl Med. 2020;8(7):501. [FREE Full text] [CrossRef] [Medline]
- Obermeyer Z, Topol EJ. Artificial intelligence, bias, and patients' perspectives. Lancet. 2021;397(10289):2038. [CrossRef] [Medline]
- Barello S, Graffigna G, Vegni E. Patient engagement as an emerging challenge for healthcare services: Mapping the literature. Nurs Res Pract. 2012;2012:905934. [FREE Full text] [CrossRef] [Medline]
- Forbat L, Cayless S, Knighting K, Cornwell J, Kearney N. Engaging patients in health care: An empirical study of the role of engagement on attitudes and action. Patient Educ Couns. 2009;74(1):84-90. [CrossRef] [Medline]
- Anderson M, McCleary KK. From passengers to co-pilots: Patient roles expand. Sci Transl Med. 2015;7(291):291fs25. [CrossRef] [Medline]
- Harrington RL, Hanna ML, Oehrlein EM, Camp R, Wheeler R, Cooblall C, et al. Defining patient engagement in research: Results of a systematic review and analysis: Report of the ISPOR patient-centered special interest group. Value Health. 2020;23(6):677-688. [FREE Full text] [CrossRef] [Medline]
- Selby JV, Beal AC, Frank L. The Patient-Centered Outcomes Research Institute (PCORI) national priorities for research and initial research agenda. JAMA. 2012;307(15):1583-1584. [CrossRef] [Medline]
- Park M, Giap T, Lee M, Jeong H, Jeong M, Go Y. Patient- and family-centered care interventions for improving the quality of health care: A review of systematic reviews. Int J Nurs Stud. 2018;87:69-83. [CrossRef] [Medline]
- Crocker JC, Ricci-Cabello I, Parker A, Hirst JA, Chant A, Petit-Zeman S, et al. Impact of patient and public involvement on enrolment and retention in clinical trials: Systematic review and meta-analysis. BMJ. 2018;363:k4738. [FREE Full text] [CrossRef] [Medline]
- Koppel R, Wetterneck T, Telles JL, Karsh B. Workarounds to barcode medication administration systems: Their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15(4):408-423. [FREE Full text] [CrossRef] [Medline]
- Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians' decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163(21):2625-2631. [CrossRef] [Medline]
- Kovarik CL. Patient perspectives on the use of artificial intelligence. JAMA Dermatol. 2020;156(5):493-494. [CrossRef] [Medline]
- Lau AYS, Staccini P, Section Editors for the IMIA Yearbook Section on Education and Consumer Health Informatics. Artificial intelligence in health: New opportunities, challenges, and practical implications. Yearb Med Inform. 2019;28(1):174-178. [FREE Full text] [CrossRef] [Medline]
- Baur N, Blasius J, editors. Handbuch Methoden der empirischen Sozialforschung, 2nd ed. Wiesbaden. Springer VS; 2019.
- Przyborski A, Wohlrab-Sahr M. Qualitative Sozialforschung: Ein Arbeitsbuch, 5th ed. Berlin, Boston. De Gruyter Oldenbourg; 2021.
- Stewart DW, Shamdasani PN. Focus groups: Theory and practice, 3rd ed. Los Angeles. Sage; 2015.
- Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients' perceptions toward human-artificial intelligence interaction in health care: Experimental study. J Med Internet Res. 2021;23(11):e25856. [FREE Full text] [CrossRef] [Medline]
- Nelson CA, Pérez-Chada LM, Creadore A, Li SJ, Lo K, Manjaly P, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: A qualitative study. JAMA Dermatol. 2020;156(5):501-512. [FREE Full text] [CrossRef] [Medline]
- Palmisciano P, Jamjoom AA, Taylor D, Stoyanov D, Marcus HJ. Attitudes of patients and their relatives toward artificial intelligence in neurosurgery. World Neurosurg. 2020;138:e627-e633. [CrossRef] [Medline]
- Lennartz S, Dratsch T, Zopfs D, Persigehl T, Maintz D, Große Hokamp N, et al. Use and control of artificial intelligence in patients across the medical workflow: Single-center questionnaire study of patient perspectives. J Med Internet Res. 2021;23(2):e24221. [FREE Full text] [CrossRef] [Medline]
- Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: A cross-sectional survey among patients. Digit Health. 2022;8:20552076221116772. [FREE Full text] [CrossRef] [Medline]
- Riedl R, Hogeterp SA, Reuter M. Do patients prefer a human doctor, artificial intelligence, or a blend, and is this preference dependent on medical discipline? Empirical evidence and implications for medical practice. Front Psychol. 2024;15:1422177. [FREE Full text] [CrossRef] [Medline]
- Knitza J, Muehlensiepen F, Ignatyev Y, Fuchs F, Mohn J, Simon D, et al. Patient's perception of digital symptom assessment technologies in rheumatology: Results from a multicentre study. Front Public Health. 2022;10:844669. [FREE Full text] [CrossRef] [Medline]
- Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial intelligence and the future of primary care: Exploratory qualitative study of UK general practitioners' views. J Med Internet Res. 2019;21(3):e12802. [FREE Full text] [CrossRef] [Medline]
- Rehman M, Dean AM, Pires GD. A research framework for examining customer participation in value co-creation: Applying the service dominant logic to the provision of living support services to oncology day-care patients. Int J Behav Med. 2012;3(3/4):226. [CrossRef]
- Antes AL, Burrous S, Sisk BA, Schuelke MJ, Keune JD, DuBois JM. Exploring perceptions of healthcare technologies enabled by artificial intelligence: An online, scenario-based survey. BMC Med Inform Decis Mak. 2021;21(1):221. [FREE Full text] [CrossRef] [Medline]
- Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: A mixed methods systematic review. Lancet Digit Health. 2021;3(9):e599-e611. [FREE Full text] [CrossRef] [Medline]
- Davis FD. A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Cambridge, MA, USA. Massachusetts Institute of Technology; 1985.
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. 1989;13(3):319. [FREE Full text] [CrossRef]
- Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quarterly. 2003;27(3):425-478. [FREE Full text] [CrossRef]
- Ammenwerth E. Technology acceptance models in health informatics: TAM and UTAUT. Stud Health Technol Inform. 2019;263:64-71. [CrossRef] [Medline]
- Rahimi B, Nadri H, Lotfnezhad Afshar H, Timpka T. A systematic review of the technology acceptance model in health informatics. Appl Clin Inform. 2018;9(3):604-634. [FREE Full text] [CrossRef] [Medline]
- Heinsch M, Wyllie J, Carlson J, Wells H, Tickner C, Kay-Lambkin F. Theories informing eHealth implementation: Systematic review and typology classification. J Med Internet Res. 2021;23(5):e18500. [FREE Full text] [CrossRef] [Medline]
- Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349-357. [CrossRef] [Medline]
- P³ Personalisierte Pharmakotherapie in der Psychiatrie. Department of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg. URL: https://www.imi.med.fau.de/projekte/abgeschlossene-projekte/p³-personalisierte-pharmakotherapie-in-der-psychiatrie/ [accessed 2024-04-15]
- Whitepaper »Künstliche Intelligenz im Krankenhaus«. Fraunhofer-Institut für Intelligente Analyse und Informationssysteme IAIS. 2020. URL: https://www.iais.fraunhofer.de/de/publikationen/studien/2020/lotte.html [accessed 2025-04-19]
- Krueger RA, Casey MA. Focus groups: A practical guide for applied research, 5th ed. Los Angeles. Sage; 2015.
- Helfferich C. Die Qualität qualitativer Daten: Manual für die Durchführung qualitativer Interviews, 4th ed. Wiesbaden. Springer VS; 2010.
- Kamin ST, Lang FR. The Subjective Technology Adaptivity Inventory (STAI): A motivational measure of technology usage in old age. Gerontechnology. 2013;12(1):16-25. [FREE Full text] [CrossRef]
- Mayring P. Qualitative Inhaltsanalyse: Grundlagen und Techniken, 13th ed. Weinheim, Basel. Beltz; 2022.
- Siau K, Wang W. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal. 2018;31(2). [FREE Full text]
- Omar A, Ellenius J, Lindemalm S. Evaluation of electronic prescribing decision support system at a tertiary care pediatric hospital: The user acceptance perspective. Stud Health Technol Inform. 2017;234:256-261. [CrossRef] [Medline]
- Laï M-C, Brian M, Mamzer M-F. Perceptions of artificial intelligence in healthcare: Findings from a qualitative survey study among actors in France. J Transl Med. 2020;18(1):14. [FREE Full text] [CrossRef] [Medline]
- High-Level Expert Group on Artificial Intelligence. Ethics guidelines for trustworthy AI. European Commission, Brussels. 2019. URL: https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1/language-de [accessed 2024-05-22]
- Dumić-Čule I, Orešković T, Brkljačić B, Kujundžić Tiljak M, Orešković S. The importance of introducing artificial intelligence to the medical curriculum - assessing practitioners' perspectives. Croat Med J. 2020;61(5):457-464. [FREE Full text] [CrossRef] [Medline]
- Petersson L, Larsson I, Nygren JM, Nilsen P, Neher M, Reed JE, et al. Challenges to implementing artificial intelligence in healthcare: A qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res. 2022;22(1):850. [FREE Full text] [CrossRef] [Medline]
- Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: Some problems and solutions. BMC Med Inform Decis Mak. 2023;23(1):73. [FREE Full text] [CrossRef] [Medline]
- Adams SJ, Tang R, Babyn P. Patient perspectives and priorities regarding artificial intelligence in radiology: Opportunities for patient-centered radiology. J Am Coll Radiol. 2020;17(8):1034-1036. [CrossRef] [Medline]
- Haan M, Ongena YP, Hommes S, Kwee TC, Yakar D. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16(10):1416-1419. [CrossRef] [Medline]
- McCradden MD, Sarker T, Paprica PA. Conditionally positive: A qualitative study of public perceptions about using health data for artificial intelligence research. BMJ Open. 2020;10(10):e039798. [FREE Full text] [CrossRef] [Medline]
- Johnson SLJ. AI, machine learning, and ethics in health care. J Leg Med. 2019;39(4):427-441. [CrossRef] [Medline]
- He M, Li Z, Liu C, Shi D, Tan Z. Deployment of artificial intelligence in real-world practice: Opportunity and challenge. Asia Pac J Ophthalmol (Phila). 2020;9(4):299-307. [FREE Full text] [CrossRef] [Medline]
- Jiang L, Wu Z, Xu X, Zhan Y, Jin X, Wang L, et al. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies. J Int Med Res. 2021;49(3):3000605211000157. [FREE Full text] [CrossRef] [Medline]
- Koohi-Moghadam M, Bae KT. Generative AI in medical imaging: Applications, challenges, and ethics. J Med Syst. 2023;47(1):94. [CrossRef] [Medline]
- Potočnik J, Foley S, Thomas E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J Med Imaging Radiat Sci. 2023;54(2):376-385. [FREE Full text] [CrossRef] [Medline]
- Sunarti S, Fadzlul Rahman F, Naufal M, Risky M, Febriyanto K, Masnina R. Artificial intelligence in healthcare: Opportunities and risk for future. Gac Sanit. 2021;35 Suppl 1:S67-S70. [FREE Full text] [CrossRef] [Medline]
- Beets B, Newman TP, Howell EL, Bao L, Yang S. Surveying public perceptions of artificial intelligence in health care in the United States: Systematic review. J Med Internet Res. 2023;25:e40337. [FREE Full text] [CrossRef] [Medline]
- Bærøe K, Miyata-Sturm A, Henden E. How to achieve trustworthy artificial intelligence for health. Bull World Health Organ. 2020;98(4):257-262. [FREE Full text] [CrossRef] [Medline]
- Warraich HJ, Tazbaz T, Califf RM. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. JAMA. 2025;333(3):241-247. [CrossRef] [Medline]
- The European Parliament and the Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). EUR-Lex. 2024. URL: http://data.europa.eu/eli/reg/2024/1689/oj [accessed 2025-05-01]
- McCradden MD, Baba A, Saha A, Ahmad S, Boparai K, Fadaiefard P, et al. Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: A qualitative study. CMAJ Open. 2020;8(1):E90-E95. [FREE Full text] [CrossRef] [Medline]
- Kim J, Kim H, Bell E, Bath T, Paul P, Pham A, et al. Patient perspectives about decisions to share medical data and biospecimens for research. JAMA Netw Open. 2019;2(8):e199550. [FREE Full text] [CrossRef] [Medline]
- Aitken M, de St Jorre J, Pagliari C, Jepson R, Cunningham-Burley S. Public responses to the sharing and linkage of health data for research purposes: A systematic review and thematic synthesis of qualitative studies. BMC Med Ethics. 2016;17(1):73. [FREE Full text] [CrossRef] [Medline]
- Paprica PA, de Melo MN, Schull MJ. Social licence and the general public's attitudes toward research based on linked administrative health data: A qualitative study. CMAJ Open. 2019;7(1):E40-E46. [FREE Full text] [CrossRef] [Medline]
- King TC, Aggarwal N, Taddeo M, Floridi L. Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Sci Eng Ethics. 2020;26(1):89-120. [FREE Full text] [CrossRef] [Medline]
- Shachak A, Kuziemsky C, Petersen C. Beyond TAM and UTAUT: Future directions for HIT implementation research. J Biomed Inform. 2019;100:103315. [FREE Full text] [CrossRef] [Medline]
- Venkatesh V, Thong JYL. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly. 2012;36(1):157-178. [FREE Full text] [CrossRef]
- Holden RJ, Karsh B. The technology acceptance model: its past and its future in health care. J Biomed Inform. 2010;43(1):159-172. [FREE Full text] [CrossRef] [Medline]
- Ward R. The application of technology acceptance and diffusion of innovation models in healthcare informatics. Health Policy Technol. 2013;2(4):222-228. [FREE Full text] [CrossRef]
- Morgenstern JD, Rosella LC, Daley MJ, Goel V, Schünemann HJ, Piggott T. "AI's gonna have an impact on everything in society, so it has to have an impact on public health": A fundamental qualitative descriptive study of the implications of artificial intelligence for public health. BMC Public Health. 2021;21(1):40. [FREE Full text] [CrossRef] [Medline]
- Vafaei Sadr A, Bülow R, von Stillfried S, Schmitz NEJ, Pilva P, Hölscher DL, et al. Operational greenhouse-gas emissions of deep learning in digital pathology: A modelling study. Lancet Digit Health. 2024;6(1):e58-e69. [FREE Full text] [CrossRef] [Medline]
- Guillory T, Tilmant C, Trécourt A, Gaillot-Durand L. Impacts environnementaux du numérique et de l’intelligence artificielle, à l’heure de la pathologie digitale. Annales de Pathologie. 2024;44(5):353-360. [CrossRef]
- The carbon footprint of the digital sector. European Parliament. URL: https://www.europarl.europa.eu/doceo/document/E-9-2020-001324_EN.html [accessed 2024-08-27]
- Green digital sector. European Commission. URL: https://digital-strategy.ec.europa.eu/en/policies/green-digital [accessed 2024-08-27]
- Kar AK, Choudhary SK, Singh VK. How can artificial intelligence impact sustainability: A systematic literature review. J Clean Prod. 2022;376:134120. [CrossRef]
- Richie C. Environmentally sustainable development and use of artificial intelligence in health care. Bioethics. 2022;36(5):547-555. [FREE Full text] [CrossRef] [Medline]
- Khajeh Naeeni S, Nouhi N. The environmental impacts of AI and digital technologies. AI Tech Behav Soc Sci. 2023;1(4):11-18. [FREE Full text] [CrossRef]
- Vinuesa R, Azizpour H, Leite I, Balaam M, Dignum V, Domisch S, et al. The role of artificial intelligence in achieving the sustainable development goals. Nat Commun. 2020;11(1):233. [FREE Full text] [CrossRef] [Medline]
- Artificial intelligence and machine learning in software as a medical device. US FDA. 2021. URL: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device [accessed 2025-04-19]
- Marketing submission recommendations for a predetermined change control plan for artificial intelligence/machine learning (AI/ML)-enabled device software functions: Draft guidance for industry and Food and Drug Administration staff. US FDA. 2023. URL: https://www.fda.gov/media/166704/download [accessed 2025-04-19]
- Karches KE. Against the iDoctor: Why artificial intelligence should not replace physician judgment. Theor Med Bioeth. 2018;39(2):91-110. [CrossRef] [Medline]
- Stai B, Heller N, McSweeney S, Rickman J, Blake P, Vasdev R, et al. Public perceptions of artificial intelligence and robotics in medicine. J Endourol. 2020;34(10):1041-1048. [FREE Full text] [CrossRef] [Medline]
- Tran V, Riveros C, Ravaud P. Patients' views of wearable devices and AI in healthcare: Findings from the ComPaRe e-cohort. NPJ Digit Med. 2019;2:53. [FREE Full text] [CrossRef] [Medline]
- Deckert R, Rascher I, Recken H. Digitalisierung in der Altenpflege: Analyse und Handlungsempfehlungen. Wiesbaden, Heidelberg. Springer Gabler; 2022.
- Klein B, Rägle S, Klüber S. Künstliche Intelligenz im Healthcare-Sektor. Frankfurt, Germany. Frankfurt University of Applied Sciences; 2024.
- Szalai J. The potential use of artificial intelligence in the therapy of borderline personality disorder. J Eval Clin Pract. 2021;27(3):491-496. [CrossRef] [Medline]
- D'Alfonso S, Santesteban-Echarri O, Rice S, Wadley G, Lederman R, Miles C, et al. Artificial intelligence-assisted online social therapy for youth mental health. Front Psychol. 2017;8:796. [FREE Full text] [CrossRef] [Medline]
- Koutsouleris N, Hauser TU, Skvortsova V, De Choudhury M. From promise to practice: towards the realisation of AI-informed mental health care. Lancet Digit Health. 2022;4(11):e829-e840. [FREE Full text] [CrossRef] [Medline]
- Luxton DD. Recommendations for the ethical use and design of artificial intelligent care providers. Artif Intell Med. 2014;62(1):1-10. [CrossRef] [Medline]
- Fakhoury M. Artificial intelligence in psychiatry. Adv Exp Med Biol. 2019;1192:119-125. [CrossRef] [Medline]
- Carroll KM, Rounsaville BJ. Computer-assisted therapy in psychiatry: be brave-it's a new world. Curr Psychiatry Rep. 2010;12(5):426-432. [FREE Full text] [CrossRef] [Medline]
- Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim H, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21(11):116. [FREE Full text] [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
FDA: US Food and Drug Administration
FG: focus group
ML: machine learning
PEAK: Perspectives on the Use and Acceptance of AI in Medical Care
SES: socioeconomic status
TAM: Technology Acceptance Model
UTAUT: Unified Theory of Acceptance and Use of Technology
Edited by J Sarvestan; submitted 23.12.24; peer-reviewed by M Nayak, B Meskó; comments to author 14.02.25; revised version received 06.03.25; accepted 03.04.25; published 15.05.25.
Copyright © Jana Gundlack, Carolin Thiel, Sarah Negash, Charlotte Buch, Timo Apfelbacher, Kathleen Denny, Jan Christoph, Rafael Mikolajczyk, Susanne Unverzagt, Thomas Frese. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.05.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.