Google placed an engineer on leave last week for allegedly disclosing private information about an artificial-intelligence chatbot, saying he had violated the company's strict confidentiality policies. Google's AI division has run into trouble with staff members before; this time, the employee is senior software engineer Blake Lemoine, who has been placed on administrative leave. Lemoine was tasked with determining whether the AI ever used hate speech or discriminatory language. In the course of that work, he came to believe the chatbot was sentient, that is, capable of expressing feelings and thoughts, and he published transcripts of his conversations with LaMDA. With that context, let's look at what this AI chatbot actually is.
What is LaMDA?
Google has been developing artificial intelligence (AI) chatbot technology, and Lemoine contends the system is so far along that it functions much like a human brain. In a Medium post, he claimed that he would soon lose his job as a result of his work on AI ethics. He also made an odd and startling assertion about Google's servers: a "sentient" AI, he said, was running on them, one with human-like thought processes.
LaMDA is the name of the AI causing all of the commotion. The Washington Post quoted Lemoine as saying that when he struck up a conversation with the LaMDA (Language Model for Dialogue Applications) interface, he felt he was speaking to a person. Google presented LaMDA as a significant breakthrough in conversation technology last year.
This conversational AI is designed to carry on open-ended, human-like dialogue: you can change the subject mid-conversation and it follows along, much as a real person would. According to Google, the technology could be integrated into products such as Search and Google Assistant, though the company said it was still under research and testing.
Associated concerns:
Google spokesperson Brian Gabriel added that while companies in the AI field are weighing the long-term possibility of sentient AI, it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient. Systems like LaMDA work by imitating the kinds of exchanges found in millions of sentences of human conversation, which allows them to riff on hypothetical topics as well.
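To make that mechanism concrete, here is a minimal sketch of a dialogue model producing a reply. LaMDA itself is not publicly available, so the example uses an open stand-in conversational model (microsoft/DialoGPT-medium, loaded through the Hugging Face transformers library); the model choice and the prompt are illustrative assumptions, but the underlying idea, predicting a plausible next turn from patterns learned in large volumes of human conversation, is the one Gabriel describes.

```python
# A minimal sketch: a dialogue model imitating human conversational exchanges.
# Assumption: LaMDA is not public, so microsoft/DialoGPT-medium (an open
# conversational model on Hugging Face) stands in purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's message, appending the end-of-sequence token that the
# model was trained to treat as a turn separator.
user_input = "Do you ever think about what it means to be alive?"  # hypothetical prompt
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# The model "replies" by predicting the most plausible continuation,
# token by token, based on patterns learned from human dialogue.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the model's turn.
reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

However fluent the output, the program is only continuing a text pattern; that gap between fluency and understanding is the distinction Google draws when it says such models are not sentient.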