AQai uses a chatbot in its assessment for several reasons, all supported by practical and research-based evidence.
Back in 2014, DARPA (the US Defense Advanced Research Projects Agency) funded a study of a virtual therapist named Ellie, an embodied avatar developed at the University of Southern California's Institute for Creative Technologies. A total of 239 participants talked to Ellie: one group thought they were talking to a fully automated bot, while the other was told there was a real person behind the machine.
In reality, all participants were randomly assigned a fully or semi-automated virtual human, and the study showed that participants who thought they were talking to a robot were far more likely to open up and reveal their deepest, darkest secrets. Removing even the idea of a human in the room led to more productive, open, and valuable sessions.
This is why we made a conscious choice to build this technology into AQai, our assessments, and the user interface and experience of our solutions. As part of our strategy to unlock authentic results, the approach helps the end user during the assessment and strengthens future coaching impact: our view is that this technology will augment human coaching, enabling us all to extend and deepen our value.
The advantage for you is access to data you might not otherwise obtain. Clients may reveal information through the chatbot that they would not share in conventional ways, giving you a unique advantage in helping and supporting them.
Later studies, such as those led by Lucas et al. (2017), reinforced these findings, showing that virtual human interviewers (chatbots) can reduce barriers to care and encourage veterans to disclose mental health symptoms more openly. Both studies found that the anonymity and lack of perceived judgment in chatbot interactions led to more honest and detailed reporting of symptoms than traditional methods.
Additional research shows that chatbot-based assessments offer several further benefits:
Interactivity and Engagement:
- Enhanced User Engagement: Research indicates that interactive systems, such as chatbots, significantly improve user engagement compared to traditional surveys. A study by Følstad and Brandtzæg (2017) highlights that chatbots can create a more engaging and enjoyable user experience, leading to higher completion rates and better-quality data.
- Reduced Survey Fatigue: Chatbots can mitigate survey fatigue by making the process feel more conversational and less like a traditional questionnaire. This approach can keep users more engaged throughout the assessment, resulting in more thoughtful and accurate responses. Research by Goodyear-Smith et al. (2019) supports the idea that conversational agents can help reduce the cognitive load on users, making it easier for them to stay focused and provide reliable answers.
Personalized Interaction:
- Adaptive Questioning: Chatbots can personalize the assessment experience by adapting questions based on previous responses. This dynamic interaction helps maintain user interest and provides a more tailored assessment; a minimal sketch of this branching logic appears after this list. Studies on adaptive testing, such as those by Weiss (2011), demonstrate that personalized assessments can lead to more precise and relevant data collection.
- User-Centered Design: Personalization in chatbot interactions aligns with principles of user-centered design, which emphasize creating experiences that cater to the needs and preferences of users. This approach can increase user satisfaction and the perceived relevance of the assessment, as discussed in the work of Norman and Draper (1986) on human-computer interaction.
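To make the adaptive-questioning idea concrete, here is a minimal sketch of response-driven branching in Python. It is purely illustrative: the question text, item keys, and thresholds are invented for this example and do not describe AQai's actual engine.

```python
# Hypothetical adaptive-questioning sketch: the next item is chosen
# based on the previous answer. All questions and thresholds are invented.

QUESTIONS = {
    "start": {
        "text": "How comfortable are you with sudden change at work? (1-5)",
        # Low scores branch to a follow-up probe about difficulty;
        # higher scores branch to a probe about successful adaptation.
        "next": lambda a: "probe_low" if isinstance(a, int) and a <= 2 else "probe_high",
    },
    "probe_low": {
        "text": "What usually makes change feel difficult for you?",
        "next": lambda a: None,  # end of this branch
    },
    "probe_high": {
        "text": "Describe a recent change you adapted to quickly.",
        "next": lambda a: None,
    },
}

def run_assessment() -> dict:
    """Walk the question graph, branching on each response."""
    key, responses = "start", {}
    while key is not None:
        item = QUESTIONS[key]
        raw = input(item["text"] + " ")
        # Coerce numeric answers so the branching rules can compare them.
        answer = int(raw) if raw.strip().isdigit() else raw
        responses[key] = answer
        key = item["next"](answer)
    return responses

if __name__ == "__main__":
    print(run_assessment())
```

In a production system, hand-written branch rules like these would typically be replaced by a psychometric item-selection algorithm of the kind described in the computerized adaptive testing literature that Weiss (2011) surveys.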
Reliable Measurement:
- Consistency and Reliability: The use of chatbots in assessments ensures that questions are presented uniformly, reducing variability in how different users interpret them. This consistency enhances the reliability of the data collected, a finding supported by studies such as Luxton et al. (2016), which found that automated systems can collect data as consistently and reliably as human-administered surveys.
- Cronbach's Alpha Analysis: The AQai assessment uses Cronbach's alpha to measure the internal consistency of its scales, ensuring that the items within each scale reliably measure the same construct. High reliability scores (e.g., 0.9 for Mental Flexibility and Work Stress) indicate that the chatbot-mediated assessment produces consistent results across different users and contexts; a short sketch of the calculation follows this list.
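For readers who want the detail: Cronbach's alpha for a scale of k items is alpha = k/(k-1) × (1 − (sum of item variances) / (variance of total scores)), with values above roughly 0.8 conventionally read as strong internal consistency. The Python sketch below computes it on an invented toy matrix; the data are illustrative only, not AQai scores.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of totals)
    """
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering a 4-item scale on a 1-5 Likert range.
demo = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(demo), 2))  # ≈ 0.96 for this highly consistent toy matrix
```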
Academic Backing:
- Foundations in Established Research: The AQai assessment is built on well-researched constructs and validated scales, such as the Grit Scale by Duckworth et al. (2007) and the Brief Resilience Scale by Smith et al. (2008). These foundational elements ensure that the data collected through the chatbot is both reliable and valid.
- Multidimensional Assessments: Research by Savickas and Porfeli (2012) on the Career Adapt-Abilities Scale demonstrates the importance of using multidimensional assessments to capture the complexity of adaptability. The chatbot's ability to guide users through various dimensions of AQ, such as grit, resilience, and mindset, ensures a comprehensive evaluation of adaptability.
Immediate Feedback:
- Real-Time Insights: Providing immediate feedback through chatbots can enhance the user experience by offering instant insights into their adaptability profile. This immediacy can help users understand their strengths and areas for improvement right away, increasing the perceived value of the assessment. Research by Stone and Stone (1984) on feedback interventions highlights the positive impact of timely feedback on user motivation and performance.
- Actionable Insights: Immediate feedback can also provide users with actionable insights, helping them to make informed decisions about how to develop their adaptability skills. Studies by Kluger and DeNisi (1996) on feedback effectiveness suggest that actionable feedback can significantly enhance learning and development outcomes.
References
1. Denecke, K., Abd-Alrazaq, A., & Househ, M. (2021). Artificial intelligence for chatbots in mental health: Opportunities and challenges. In M. Househ, E. Borycki, & A. Kushniruk (Eds.), *Multiple Perspectives on Artificial Intelligence in Healthcare* (Lecture Notes in Bioengineering). Springer, Cham.
2. Lucas, G. M., Rizzo, A., Gratch, J., Scherer, S., Stratou, G., Boberg, J., & Morency, L.-P. (2017). Reporting mental health symptoms: Breaking down barriers to care with virtual human interviewers. *Frontiers in Robotics and AI*, 4, 51.
3. Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. *Interactions*, 24(4), 38-42.
4. Goodyear-Smith, F., Lobb, B., Davies, G., Nachson, M., & Seelau, S. M. (2019). International variation in questionnaires for measuring health state preferences: Cultural and linguistic differences or measurement bias? *Patient Preference and Adherence*, 13, 1479-1488.
5. Weiss, D. J. (2011). *Better data from better measurements: The evolution of adaptive testing*. Center for Adaptive Testing.
6. Norman, D. A., & Draper, S. W. (1986). *User Centered System Design: New Perspectives on Human-Computer Interaction*. CRC Press.
7. Luxton, D. D., Kayl, R. A., & Mishkind, M. C. (2016). mHealth data security: The need for HIPAA-compliant standardization. *Telemedicine and e-Health*, 22(5), 348-354.
8. Duckworth, A. L., Peterson, C., Matthews, M. D., & Kelly, D. R. (2007). Grit: Perseverance and passion for long-term goals. *Journal of Personality and Social Psychology*, 92(6), 1087-1101.
9. Smith, B. W., Dalen, J., Wiggins, K. J., Tooley, E. M., Christopher, P. J., & Bernard, J. F. (2008). The Brief Resilience Scale: Assessing the ability to bounce back. *International Journal of Behavioral Medicine*, 15(3), 194-200.
10. Savickas, M. L., & Porfeli, E. J. (2012). Career Adapt-Abilities Scale: Construction, reliability, and measurement equivalence across 13 countries. *Journal of Vocational Behavior*, 80(3), 661-673.
11. Stone, D. L., & Stone, E. F. (1984). The effects of feedback consistency and feedback favorability on self-perceived task competence and perceived feedback accuracy. *Organizational Behavior and Human Performance*, 34(3), 304-319.
12. Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. *Psychological Bulletin*, 119(2), 254-284.