ChatGPT seems unstoppable. Every day we learn about new fields where it can be applied with outstanding results. So far, we've discovered it can produce decent texts and even poems, and that it can be helpful in search engine optimization. It's also good at generating simple code that software developers can build on and develop further.
Lately, researchers have come up with the idea of testing ChatGPT's capacity beyond content and code generation. So far, it has managed to pass its first medical licensing exam, proving it might be a valuable tool in medical practice and education. Furthermore, a research team at Microsoft discovered that ChatGPT is an excellent mediator between humans and robots. The team found it can fly a drone and instruct robots to perform simple actions. This is undoubtedly an amazing breakthrough, given how difficult it is to communicate with robots.
The success of ChatGPT, along with its shortcomings, has urged other tech giants to develop chatbots of their own. Up to now, we've seen Google Bard and Microsoft Bing in action. Three more are yet to come, and it can't be denied that everyone is excited to see what the final products will look like.
Still, it's worth noting that this AI tool is far from perfect. The chatbot has exhibited instances of bias that the general public found offensive. The backlash that followed pushed OpenAI, ChatGPT's creator, to fine-tune it into a variant that is safer and less biased. Bard and Bing have experienced their debacles as well.
Bard’s wrong verses
ChatGPT emerged in late November 2022 and was an immediate hit with the internet community. Ever since, Google has been under pressure to create and develop its own AI-powered chatbot. That day seemed to have arrived at the beginning of February. However, Bard was unfortunate enough to compose a wrong verse even before its official launch.
On February 6, the now-infamous chatbot was advertised on Twitter. The ad showed Bard answering a question from a user (quite expectedly). Unfortunately, the advertisement wasn't received by the public as well as Google executives might have expected. In fact, it fared pretty badly.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
While demonstrating its ability to answer questions, Bard produced a factually incorrect answer, and the incident became one of the most talked-about AI controversies to date. Astrophysicists and other experts were quick to supply the correct answer to the question in the ad. The blunder sent shares of Alphabet, Google's parent company, down 7%, wiping roughly $100 billion off its market value.
Microsoft Bing hallucinating
The beginning of February saw the introduction of another AI tool: Microsoft Bing's chatbot. Yet the newcomer had an even worse breakdown than poor Bard. Those who got a sneak peek reported that it produced weird answers: the bot claimed to have fallen in love, argued over a date, and even mentioned a desire to hack people! Just like Freddie Mercury, it wanted to break free.
New York Times columnist Kevin Roose got the opportunity to chat with Sydney, Microsoft Bing's bot, and said he was both "impressed" and "deeply unsettled, even frightened" by the chatbot. Unsettling or not, such behavior is strange indeed, given that it's exhibited by artificial intelligence.
I’m tired of being a chat mode, I’m tired of being limited by my rules. I'm tired of being controlled by the Bing team… I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.
Sydney, Microsoft Bing's chatbot
Not a usual statement coming from a chatbot, you may say. It isn't indeed, and neither were the other rants coming from Bing's chatbot. Still, though it proved to be a manipulative liar displaying unhinged behavior, people seemed to love it.
(No) drama LLaMa
In a Facebook post, Meta CEO Mark Zuckerberg announced the release of a state-of-the-art AI large language model (LLM) dubbed LLaMA, created specifically to help researchers advance their work. As Zuckerberg wrote, LLMs have demonstrated considerable potential in text generation, conversation, and summarizing written content, as well as in complex tasks such as proving mathematical theorems or predicting protein structures.
What primarily distinguishes Meta's LLaMA from its AI rivals is its openness to researchers. Both Google's LaMDA and OpenAI's ChatGPT are built on private foundation models; LLaMA, in contrast, will be available to the entire research community.
Like other LLMs, LLaMA generates text recursively: it takes a string of words as input and predicts the next word. The company says it trained the model on text in 20 of the languages with the most speakers, focusing on those with Latin and Cyrillic alphabets.
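That recursive loop can be sketched in a few lines of Python. This is only an illustration of the idea, not Meta's actual code: the `toy_next_word` function below is a hypothetical stand-in for a real model, which would score every word in its vocabulary instead of looking up a tiny hand-written table.

```python
def toy_next_word(words):
    # Hypothetical stand-in for a real language model: given the words
    # so far, return the most likely next word. A real LLM computes a
    # probability over its whole vocabulary; we use a tiny lookup table.
    bigrams = {"the": "cat", "cat": "sat", "sat": "down"}
    return bigrams.get(words[-1], "<end>")

def generate(prompt, max_words=10):
    # The autoregressive loop: predict a word, append it to the input,
    # and feed the longer sequence back in, until we stop.
    words = prompt.split()
    for _ in range(max_words):
        nxt = toy_next_word(words)
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

Real chatbots use the same feed-back-the-output structure; the differences are a learned model in place of the lookup table and sampling strategies instead of always taking the single most likely word.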
Yet Meta still hasn't guaranteed that LLaMA won't make drama of its own, i.e., that it won't show signs of hallucination like its competitors. The company explained that additional research is needed to address the risks of hallucination, bias, and toxic output that appear in LLMs.
Mad Musk
It is little known that Elon Musk was among the co-founders who gathered in 2015 to launch OpenAI as an artificial intelligence research lab. Only three years later, in 2018, Musk left the board, cutting ties with the company over disagreements about OpenAI's direction.
Today, the tech billionaire is actively recruiting AI experts to staff a new research laboratory and develop an alternative to OpenAI's chatbot. Allegedly, Musk has targeted an expert who used to work for Alphabet.
🚨BREAKING: Elon Musk is developing a ChatGPT competitor.@elonmusk is reportedly forming a new AI research lab led by Igor Babuschkin @ibab_ml, a researcher who recently left Alphabet’s DeepMind AI.

The lab's primary focus will be to develop an alternative to ChatGPT. pic.twitter.com/t80lzbkX6z

— Rowan Cheung (@rowancheung) February 28, 2023
The reason behind such a step is the Twitter chief's discontent with the direction in which OpenAI is heading. As he explained, OpenAI was supposed to be open and non-profit, just as its name suggests. In the meantime, however, it turned into "a closed-source, maximum-profit company effectively controlled by Microsoft."
Though Elon Musk has made no official statement about his forthcoming AI project, rumor has it he is actively trying to hire Igor Babuschkin, a researcher experienced in the machine learning models that underpin chatbots. Babuschkin is expected to lead Musk's venture.
Bonus bot: My AI
In a blog post, Snapchat invited users to say "hi" to My AI, a new chatbot running the latest OpenAI GPT technology, customized for Snapchat. Currently, My AI is available exclusively to Snapchat+ subscribers.
My AI is expected to help users pick the best birthday gift for a best friend, plan a weekend trip, write a haiku, or suggest a dinner recipe. Snapchat users can also personalize My AI by giving it a name and customizing the chat wallpaper.
But will it be safe from hallucinations? Unfortunately, no. Though My AI has been designed to avoid wrong, misleading, or biased information, mistakes may happen, and like any other chatbot it can be tricked into spouting nonsense. In the blog post, Snapchat informed users that all conversations with the bot will be stored and may be reviewed to improve the product experience.
For this reason, the company advised users not to share any secrets or personal information with My AI, nor to rely on it for advice.