Abstract
Healthcare professionals currently lack guidance on their use of AI, and therefore lack clear counsel to help them navigate the novel issues that will arise from their use of these systems. This pilot study gathered and analysed cross-sectional attitudinal and qualitative data to address the question: what should be included in professional ethical guidance (PEG) to support healthcare practitioners in their use of AI? Our survey asked respondents (n = 42) to review 6 themes and 15 items of guidance content for our proposed PEG-AI. The attitudinal data are presented as a simple numerical analysis, and the accompanying qualitative data were subjected to conventional content analysis; the findings of both are presented in this report. The study data allowed us to identify further items that could be added to the PEG-AI and to test the survey instrument for content and face validity prior to wider deployment. Subject to further funding, we plan to extend this work to a wider study involving the next iteration of this survey, interviews with interested parties regarding PEG-AI, and a Delphi process (comprising an initial co-creation workshop followed by iterative consensus building) to enable experts to reach consensus on recommendations for the content of PEG for AI use in healthcare. We aim for this work to inform healthcare regulators as they develop regulatory strategies in this area.