ChatGPT produces incorrect answers 52 percent of the time

In a recent study*, researchers at Purdue University analysed ChatGPT's answers to 517 Stack Overflow questions, with the aim of assessing the “correctness, consistency, comprehensiveness, and conciseness” of its responses. The study found that the AI model produced incorrect answers to more than half (52 percent) of the software engineering questions posed.

The findings have sparked questions about the limitations of ChatGPT and why it can struggle to provide accurate or reliable responses. Beyond relying on outdated information, the tool is trained on text-based human conversations, and some of that data may be inaccurate, untruthful, or misleading.

Anjan Kundavaram, chief product officer at Precisely, explains why it's essential to examine the data ChatGPT is trained on, and how the model makes decisions:

“The foundation of ChatGPT’s insights lies in historical data, with the artificial intelligence (AI) model powering the tool currently only trained on datasets going up until September 2021. However, the value of AI models is greatly amplified by a steady stream of accurate, current data that helps businesses react to changing conditions. Furthermore, the base version of the model is trained on text-based human conversations, and some of that data may be inaccurate, untruthful, or otherwise misleading – requiring careful model fine-tuning. Despite efforts to reduce biases during training, biased or subjective responses may arise on sensitive topics or when the model encounters ambiguous queries.

“The integrity of the data fuelling an AI model directly impacts its performance and reliability. Therefore, it’s essential to ensure that the data used for training is accurate, consistent, and contextual. A data integrity strategy helps organisations connect different data sources, ensuring the data has the highest levels of quality and governance, while proactively addressing issues before they create problems downstream. AI technology also greatly benefits from contextual richness, which allows it to discover more meaningful patterns in the data.

“Addressing and mitigating biases during the AI training process is essential. Techniques such as careful dataset curation, diverse data representation, bias-aware evaluation, and ongoing monitoring can help identify and correct biases and promote fairness and inclusivity. By prioritising data integrity, business leaders can ensure that the insights generated by AI models are both trustworthy and reliable.”

*https://www.itpro.com/technology/artificial-intelligence/chatgpt-gives-wrong-answers-to-programming-questions-more-than-50-of-the-time