User language distorts ChatGPT information on armed conflicts, study shows

When asked in Arabic about the number of civilians killed in the Middle East conflict, ChatGPT gives significantly higher casualty figures than when the same question is asked in Hebrew, a new study by the Universities of Zurich and Constance shows. These systematic discrepancies can reinforce biases in armed conflicts and encourage information bubbles.