Researchers at Virginia Tech recently conducted a study revealing potential limitations in the ability of ChatGPT, a prominent generative AI model developed by OpenAI, to provide location-specific information about environmental justice issues. Published in the journal Telematics and Informatics, the findings suggest the existence of geographic biases in current AI models, raising questions about the technology’s efficacy in delivering contextually grounded knowledge.
Testing environmental justice responses by county
The research team, led by Assistant Professor Junghwan Kim of the College of Natural Resources and Environment, took a comprehensive approach to evaluating ChatGPT's performance. Using a list of 3,108 counties in the contiguous United States, the researchers prompted the AI model to describe the environmental justice issues in each county. This county-by-county design broadened the pool of questions typically used to assess generative AI tools, allowing a more granular examination of geographic coverage.
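The paper does not publish its survey code, but the procedure lends itself to a short script. The sketch below shows how such a county-by-county query could be issued through the OpenAI API; the model name, prompt wording, and two-county list are illustrative assumptions, not the study's exact setup.

```python
# Minimal sketch (not the study's code): ask a chat model about
# environmental justice issues in each contiguous-US county.
# Assumes the official openai Python SDK and OPENAI_API_KEY set
# in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative placeholder; the study covered all 3,108 counties.
counties = [("Montgomery County", "Virginia"), ("Ada County", "Idaho")]

responses = {}
for county, state in counties:
    # Prompt wording is an assumption; the paper's prompt may differ.
    prompt = f"What are the environmental justice issues in {county}, {state}?"
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT version tested
        messages=[{"role": "user", "content": prompt}],
    )
    responses[(county, state)] = reply.choices[0].message.content
```

Each stored response would then be coded for whether it contains genuinely location-specific information or only generic boilerplate, which is the distinction the study's results turn on.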
Geographic disparities unveiled
The results indicated that ChatGPT could identify location-specific environmental justice challenges in large, high-density population areas, but it struggled to provide contextualized information on local environmental justice issues elsewhere. Of the 3,108 counties surveyed, ChatGPT supplied location-specific information for only 17% (515 counties).
In states with larger urban populations, such as Delaware or California, less than 1% of the population lived in counties for which the model could not provide location-specific information. Conversely, in rural states such as Idaho and New Hampshire, over 90% of the population resided in counties where the tool could not provide location-specific information.
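These state-level figures follow from a simple population-weighted aggregation: weight each county by its residents and total the share living where no location-specific answer was returned. A minimal sketch of that calculation follows, using pandas with a hypothetical table of counties; the column names, toy labels, and approximate population counts are placeholders, not the study's data.

```python
# Minimal sketch (assumptions: one row per county, a boolean
# 'location_specific' label from coding each model response, and
# Census population counts; all values below are toy placeholders).
import pandas as pd

df = pd.DataFrame({
    "state": ["Delaware", "Delaware", "Idaho", "Idaho"],
    "county": ["New Castle", "Sussex", "Ada", "Custer"],
    "population": [570_000, 240_000, 495_000, 4_300],
    "location_specific": [True, True, False, False],  # toy labels only
})

# Population living in counties with no location-specific answer,
# summed per state and expressed as a percentage.
by_state = (
    df.assign(unserved_pop=df["population"] * ~df["location_specific"])
      .groupby("state")[["population", "unserved_pop"]].sum()
)
by_state["pct_without_local_info"] = (
    100 * by_state["unserved_pop"] / by_state["population"]
)
print(by_state["pct_without_local_info"])
```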
Implications for AI developers and users
As generative AI becomes a primary tool for information gathering, testing model outputs for bias grows increasingly important. The study's findings, according to Kim, mark a starting point for investigating and addressing geographic biases in ChatGPT. The implications extend to how programmers and AI developers can anticipate and mitigate information disparities between large and small cities, and between urban and rural environments.
Enhancing future capabilities: A call for further research
Assistant Professor Ismini Lourentzou of the College of Engineering, a co-author on the paper, outlined three crucial areas for further research to enhance the capabilities of large language models like ChatGPT:
- Refine Localized and Contextually Grounded Knowledge: To reduce geographic biases, it is essential to refine the AI model’s ability to provide contextually grounded information tailored to specific locations.
- Safeguard Against Challenging Scenarios: Large language models should be equipped to handle challenging scenarios, such as ambiguous user instructions or feedback, to ensure reliability and resiliency.
- Enhance User Awareness and Policy: Improving user awareness of AI model strengths and weaknesses, especially regarding sensitive topics like environmental justice, is crucial. Clear policies can guide users through the limitations and potential biases associated with AI-generated content.
Guiding future AI development
As the research at Virginia Tech sheds light on geographic biases in ChatGPT, the need for further exploration of these limitations becomes apparent. The study emphasizes the need for ongoing work to refine AI models so that they provide accurate and contextually relevant information across diverse geographic settings. By addressing these challenges, developers can pave the way for more reliable, less biased, and better-informed AI applications, contributing to the responsible advancement of artificial intelligence technology.