Secure Messaging Protocols for Intelligent Chatbots: Enhancing User Trust and Data Privacy
DOI: https://doi.org/10.56472/ICCSAIML25-130
Keywords: Adaptive Trust Modeling, AI-Powered Messaging Systems, Chatbot Intelligence, Chatbot Security, Compliance and AI Ethics
Abstract
Interaction with chatbots has become the norm in today's world: we converse with many types of bots, often without realizing it. Given this context, how safe are chatbots, and can we trust them with our personal details? The industry is advancing rapidly in AI/ML technologies and their seamless integration with chatbots, yet these AI models often lack contextual awareness and strong security. This paper proposes embedding a secure messaging protocol within the chatbot design and examines the related challenges of end-user privacy and trust. It catalogs common vulnerabilities, discusses the limitations of current approaches to enhancing security, and shows how this gap can be bridged through integration with real-time anomaly detection frameworks. The paper introduces a trust-centric model that dynamically performs risk assessment and bad-actor detection. Preliminary experimental data collected in controlled test environments indicate that security breach attempts were mitigated significantly compared with traditional chatbots. Trust-adjusted interactions safeguarded conversations while improving user confidence and reliance on the system. The proposed framework focuses on enhanced security, improved privacy, and contextual awareness.
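The trust-adjusted interaction described above can be illustrated with a minimal sketch: risk signals (an anomaly score, failed authentication attempts, requests for personal data) are folded into a trust score that gates how the chatbot responds. All signal names, weights, and thresholds here are illustrative assumptions for exposition, not values taken from the paper's model.

```python
# Minimal sketch of a trust-adjusted interaction gate.
# Weights and thresholds are hypothetical, chosen only to illustrate the idea.

def trust_score(anomaly_score: float,
                failed_auth_attempts: int,
                pii_requested: bool) -> float:
    """Combine risk signals into a trust score in [0, 1] (higher = more trusted)."""
    score = 1.0
    score -= 0.5 * anomaly_score          # real-time anomaly detection signal
    score -= 0.1 * failed_auth_attempts   # possible bad-actor indicator
    if pii_requested:
        score -= 0.2                      # requests for personal data lower trust
    return max(0.0, min(1.0, score))

def gate_interaction(score: float) -> str:
    """Map the trust score to an interaction policy."""
    if score >= 0.7:
        return "allow"
    if score >= 0.4:
        return "challenge"                # e.g., re-authenticate before continuing
    return "block"

if __name__ == "__main__":
    # A low-risk session: mild anomaly score, one failed login, no PII request.
    s = trust_score(anomaly_score=0.2, failed_auth_attempts=1, pii_requested=False)
    print(gate_interaction(s))  # prints "allow"
```

In a full implementation, the anomaly score would come from the real-time detection framework the paper proposes, and the policy would be re-evaluated on every message rather than once per session.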