Howard University is leaning boldly into the future of innovation by embracing artificial intelligence as a tool for equity, access, and real-world impact. At the center of this effort is Dr. Gloria Washington, director of Howard’s Human-Centered Artificial Intelligence Institute (HCAI), a research hub funded by the Office of Naval Research. She and her team are working to ensure that AI is not only cutting-edge but actually useful for everyone—from HBCU students and faculty to government agencies and a wide range of industry sectors—while keeping human needs, ethics, and inclusion at the forefront of technological advancement.
This past summer, Washington led Project Elevate Black Voices, an initiative sponsored by Google that aims to make emerging technologies more inclusive. The project has built a large database containing more than 600 hours of speech in African American English dialects, recorded in communities across the United States.
Launched to address persistent disparities in automated speech recognition systems, Project Elevate Black Voices seeks to reduce the errors that often occur when technology encounters diverse linguistic patterns. By improving how these systems recognize and interpret different dialects, the initiative works to ensure that speech-based technologies are more accurate, equitable, and representative of the people who use them.
According to Washington, the project also serves a broader purpose beyond technical improvement. It lays the groundwork for building a group of HBCUs dedicated to examining how this data can be responsibly preserved, safeguarded, and ethically applied in future technologies, including artificial intelligence.
“That is exciting because the HBCUs that we have already worked with in the past have been excited about using the data set and also about developing these fair usage guidelines around how AI is impacting the larger community that speaks dialects of African American English,” said Washington. “It’s really interesting. This is a project that is ongoing, and we believe it will take us into a new realm.”
Unlike traditional AI research centers that concentrate mainly on technical development, HCAI emphasizes enhancing tactical decision-making in high-pressure environments. Under Washington’s leadership, the institute is developing advanced tools, including chatbots powered by large language models (LLMs)—AI systems trained on vast amounts of text—and extended reality (XR) applications designed to support naval officers in making more informed, timely decisions in the field. These initiatives tackle highly complex challenges, bridging cutting-edge technology with practical, real-world applications to improve performance and outcomes in critical situations.
“What this tool is intended to do is to help [make] decision-making less burdensome,” explained third-year doctoral student, software engineer, and former educator Christopher Watson. “So, it’s going to be a large language model paired with an augmented reality component that can interface with the model.” Simply put, it will turn text output into interactive augmented reality displays showing the importance of decisions using colors, icons, and other graphic information.
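The team has not published the interface code, but a minimal sketch can illustrate the general idea of translating model text into graphical urgency cues. Everything below, including the DecisionCue structure, the urgency categories, and the classify_urgency heuristic, is a hypothetical illustration rather than the institute's actual implementation.

```python
# Hypothetical sketch of turning language-model text into AR display cues.
# DecisionCue, URGENCY_STYLES, and classify_urgency are illustrative names,
# not part of the institute's actual system.

from dataclasses import dataclass

URGENCY_STYLES = {
    "high":   {"color": "red",    "icon": "alert-triangle"},
    "medium": {"color": "yellow", "icon": "exclamation"},
    "low":    {"color": "green",  "icon": "check-circle"},
}

@dataclass
class DecisionCue:
    summary: str  # short caption drawn from the model's response
    urgency: str  # "high", "medium", or "low"
    color: str    # display color chosen for this urgency level
    icon: str     # icon name the AR layer would render

def classify_urgency(model_text: str) -> str:
    """Toy keyword heuristic standing in for whatever scoring the real tool uses."""
    text = model_text.lower()
    if any(word in text for word in ("critical", "immediate", "threat")):
        return "high"
    if any(word in text for word in ("caution", "monitor")):
        return "medium"
    return "low"

def to_ar_cue(model_text: str) -> DecisionCue:
    """Wrap model output in the visual attributes an AR headset could display."""
    urgency = classify_urgency(model_text)
    style = URGENCY_STYLES[urgency]
    return DecisionCue(summary=model_text[:120], urgency=urgency,
                       color=style["color"], icon=style["icon"])

print(to_ar_cue("Critical: unidentified contact closing fast; recommend immediate response."))
```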
Watson focuses on the large language model side of the project, fine-tuning the Tactical Decision-Making Under Stress (TADMUS) model, which incorporates a technique known as retrieval-augmented generation. Yet, as Washington emphasizes, the technical work is only meaningful if it delivers tangible benefits for real people. To ensure this, HCAI employs a multidisciplinary approach, bringing together experts in computer science and engineering with scholars in social sciences, education, health, and public policy. This collaborative framework allows researchers not only to design advanced AI systems but also to assess how these technologies impact individuals, communities, and institutions, ensuring that innovation remains grounded in human needs.
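Retrieval-augmented generation pairs a language model with a store of reference material: for each query, the most relevant passages are retrieved and prepended to the prompt so the model answers from grounded documents rather than memory alone. The sketch below shows that pattern in miniature; the snippet corpus, the bag-of-words embed function, and the generate placeholder are assumptions for illustration and do not reflect the TADMUS model or its actual pipeline.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus, embed(), and generate() are illustrative placeholders,
# not the TADMUS model or its real retrieval stack.

import math
from collections import Counter

DOCTRINE_SNIPPETS = [
    "When an unidentified contact closes within range, issue warnings before escalating.",
    "Under degraded communications, defer to the most recent standing orders.",
    "Report all sensor anomalies to the tactical action officer immediately.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank stored snippets by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(DOCTRINE_SNIPPETS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a fine-tuned language model."""
    return f"[model response conditioned on]\n{prompt}"

def answer(query: str) -> str:
    """Build a prompt from retrieved context, then hand it to the model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

print(answer("An unidentified contact is closing fast. What should I do?"))
```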
Dr. Lucretia Williams specializes in human-computer interaction, with a particular focus on applications in health and education. For the ONR project, her lab is examining how stress influences decision-making, providing critical insights that inform the development of AI tools designed to support high-pressure, real-world scenarios.
“Specifically, I created two simulated environments, one calm and one stressful,” she said. “The calm environment includes light ambient noise, similar to what you’d hear in a coffee shop, and ample time to read and make a decision. But the stressful environment includes loud background noise, people may be yelling commands, and a super tight time constraint.”
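One rough way to picture those two conditions is as a pair of parameter sets the simulation switches between. The structure and values below are illustrative guesses for the sake of the sketch, not the lab's actual experimental settings.

```python
# Hypothetical sketch of the two study conditions described above.
# Parameter names and values are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class SimulatedEnvironment:
    name: str
    ambient_noise_db: int       # rough loudness of background audio
    spoken_commands: bool       # whether voices shout competing instructions
    decision_time_seconds: int  # how long participants have to respond

CALM = SimulatedEnvironment(
    name="calm",
    ambient_noise_db=45,        # coffee-shop-level murmur
    spoken_commands=False,
    decision_time_seconds=120,  # ample time to read and decide
)

STRESSFUL = SimulatedEnvironment(
    name="stressful",
    ambient_noise_db=85,        # loud background noise
    spoken_commands=True,       # people yelling commands
    decision_time_seconds=15,   # tight time constraint
)

for env in (CALM, STRESSFUL):
    print(env)
```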
The institute’s initiatives extend across multiple sectors, including healthcare, education, workforce development, and public policy. Researchers at HCAI prioritize developing AI tools that are transparent, reliable, and practical, ensuring these technologies can be effectively adopted by organizations and decision-makers.
Through HCAI, Howard University is showcasing the vital role HBCUs can play in shaping the future of artificial intelligence. By placing humanity, equity, and interdisciplinary collaboration at the center of its work, the university is redefining what responsible and impactful innovation looks like in the AI era.