
Can an AI trick you into trusting it?

The conversations were set during a lunch break at the agent’s workplace. In each conversation, the agent seemed to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information.

The final analysis included data from 918 participants whose empathy for the A.I. agent was evaluated using a standard empathy questionnaire. The researchers found that, compared to less-relevant self-disclosure, highly work-relevant self-disclosure from the A.I. agent was associated with greater empathy from participants. A lack of self-disclosure was associated with suppressed empathy. The agent’s appearance as either a human or anthropomorphic robot did not have a significant association with empathy levels.

These findings suggest that self-disclosure by A.I. agents may, indeed, elicit empathy from humans, which could help inform future development of A.I. tools.

The authors add: “This study investigates whether self-disclosure by anthropomorphic agents affects human empathy. Our research will change the negative image of artifacts used in society and contribute to future social relationships between humans and anthropomorphic agents.”

The Pasifika Medical Association Group emphasises that AI has limitations and that every researcher using it should check its output carefully.

While AI language models such as ChatGPT can help researchers write scientific articles, for instance by identifying potential collaborators, conducting literature reviews, drafting sections of articles, and producing abstracts, these models can also provide incorrect answers or introduce bias if information about a topic is missing from their data sources.

AI tools are not yet on par with medical writers, and while AI could plausibly draft some parts of an article, it should not be credited as a co-author.
