Updated ChatGPT promises improved factuality and a stop function, taking it a step closer to sentience
ChatGPT is the application of the future, a tipping point in technology, according to Harvard Business Review. Ever since its debut, it has been garnering tremendous attention for its ability to perform a variety of tasks, from answering queries to writing code. At a basic level, ChatGPT works like any other chatbot, but the quality of the output it generates puts it in a class apart. Oh, wait! What about the bias and prejudice it promotes? The second of OpenAI's ChatGPT updates addresses exactly this issue: the updated ChatGPT promises to retain factual correctness in the information it provides.
When a user opens the ChatGPT interface, a pop-up message appears with a list of changes that OpenAI has introduced. OpenAI calls it the "Jan 9 version" update. It reads:
“Here’s what’s new:
* We made more improvements to the ChatGPT model! It should be generally better across a wide range of topics and has improved factuality.
* Stop generating: Based on your feedback, we’ve added the ability to stop generating ChatGPT’s response”
Though these sound like routine updates for an AI chatbot, they have large implications. The first part of the update addresses ChatGPT's lack of accuracy. Generative AI faces two major hurdles on its path to becoming sentient: one, bias and prejudice, and two, a lack of enough data for the application to gain the trust of all stakeholders. While the applications of ChatGPT are immense and profound, it has been caught generating seemingly authentic but factually wrong output. This matters because corporations increasingly value it as a replacement for human labor, including expensive, skilled labor such as programmers, managers, and HR executives. It can do almost everything from drafting e-mail responses to writing business plans, yet it cannot prevent itself from discriminating against people based on race and political orientation. For example, according to a tweet by Richard Hanania, president of CSPI, the Center for the Study of Partisanship and Ideology, the bot is not neutral on questions of race. The tweet reads, "If you ask AI whether men commit more crime than women, it'll give you a straightforward yes-or-no answer. If you ask it whether black people commit more crime than white people, it says no, actually maybe, but no." And according to a Substack post by David Rozado, ChatGPT does not appear to have a neutral political orientation either, showing hints of leftist ideology in the dialogues it generated when subjected to the Political Compass Test. Like all other chatbots, it can prove dangerous when lower ethical standards end up prejudicing business interests. Now that OpenAI is shipping these updates, this might prove to be a defining update for generative AI in general.
The second update addresses the issue of pausing ChatGPT. This is particularly relevant when the chatbot goes rogue and rambles, delivering long and inappropriate responses. The critical role a chatbot plays in user engagement and in enhancing a company's brand value makes this aspect too important to ignore. For example, sports and news bots rely heavily on broadcast messaging, yet most of them lack manage, pause, or stop functionality. One study found that only 20% of bots offer a stop command, and of those only 40% actually respond to it; that is, only 8% of chatbots can be paused (0.20 × 0.40 = 0.08). This is a big deal because when a bot throws unwanted conversations at users, they can block it, and the company may lose those users.
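OpenAI has not published how its "Stop generating" control is implemented, but the general pattern behind such a feature is straightforward: the response is streamed token by token, and a shared stop flag is checked between tokens. A minimal sketch of that pattern (the `stream_response` generator and the token list are illustrative, not OpenAI's actual code):

```python
import threading

def stream_response(tokens, stop_event):
    """Yield tokens one at a time, checking a shared stop flag
    between tokens -- the mechanism a 'Stop generating' button
    would typically rely on in a streaming chatbot UI."""
    for token in tokens:
        if stop_event.is_set():
            break  # user asked to stop; end the stream early
        yield token

# Usage: simulate a user clicking "Stop generating" mid-response.
stop = threading.Event()
stream = stream_response(["Paris", "is", "the", "capital", "of", "France"], stop)

received = []
for token in stream:
    received.append(token)
    if len(received) == 3:
        stop.set()  # the "Stop generating" click

print(received)  # only the tokens produced before the stop
```

Because the flag is checked between tokens rather than killing the worker outright, the partial response already shown to the user stays intact, which matches the behavior users see in the ChatGPT interface.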
Does this mean ChatGPT is now infallible when it comes to factually correct information? Tech enthusiasts who ran a few experiments have a different opinion. Search Engine Journal's Matt G. Southern says the ChatGPT app is still far from getting its answers right, and it still depends on data from 2021.