



Jolted by the immense global popularity of ChatGPT, big tech companies around the world are scrambling to up their game as an artificial intelligence showdown heats up.
On Monday, Google announced an experimental chatbot called Bard, which it will roll out to the public in the coming weeks. According to Google, the name was chosen because a bard is a storyteller. Bard is built on the company’s experimental large language model LaMDA, an acronym for Language Model for Dialogue Applications, which has been in development for several years. Bard will be built into Google's existing search function, working in the background of search queries to generate a short text summary of your results rather than simply an index of links.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
— Google (@Google) February 6, 2023
Just a day after Bard was announced, Microsoft responded by laying out its plan to sharpen its focus on artificial intelligence, including a boost in funding for products and the incorporation of AI chat features, even more advanced than ChatGPT, into its Bing search engine and Edge web browser. Only recently, Microsoft announced it would invest US$10 billion in OpenAI, the company that developed ChatGPT, on top of a previous investment of a billion or more in 2022.
During his keynote on Tuesday, Microsoft CEO Satya Nadella declared that “a new race starts today.” Microsoft is calling the new AI-powered Bing and Edge internet “copilots.” With the new Bing, people can ask their search engine questions and get humanlike responses pulled from sources across the internet. Bing will also let you chat with your search engine to refine or elaborate on your search.
Microsoft’s move is by far the biggest threat to Google, which has spent the last two decades as the world’s most popular search engine.
Prior to launching Bard this week, Google also invested US$300 million in OpenAI’s rival Anthropic, founded by former OpenAI researchers. Anthropic conducts research into large language models and has built its own ChatGPT rival, a chatbot named Claude. The company also recently selected Google Cloud as its preferred cloud provider, much as Microsoft is OpenAI’s exclusive cloud provider.
We're excited to use Google Cloud to train our AI systems, including Claude! https://t.co/IaqQ5lpJrP https://t.co/vOn5Cj4sPt
— Anthropic (@AnthropicAI) February 3, 2023
In Silicon Valley, the race for AI supremacy is all the rage. Recently, BuzzFeed's stock price soared 19 percent after a Wall Street Journal report revealed that the company would use OpenAI’s technology to help create AI-generated listicles and quizzes.
And let’s not forget about GPT-4, OpenAI’s most advanced and most anticipated AI model, which is still scheduled to come out this year and is expected to dwarf the abilities of ChatGPT (the current ChatGPT is an upgraded version of GPT-3, the company’s previous language model, which came out in 2020).
Across the Pacific Ocean, Chinese tech giant Baidu is gearing up to launch a chatbot similar to ChatGPT, called Ernie, in March. And Alibaba, China’s e-commerce megacorp, told CNBC on Wednesday that it is also working on a ChatGPT rival, jumping on board the chatbot hype.
Since its debut in November 2022, ChatGPT has become a global phenomenon, garnering more than 30 million users and roughly five million visits a day, making it one of the fastest-growing software products in modern history. By contrast, Instagram, which launched in 2010, took nearly a year to reach its first 10 million users.
Built on more than a decade of research at tech companies like Google, OpenAI and Meta, these chatbots are poised to disrupt the way computer software is built, used and operated, and will certainly change the way we, as users, navigate the web.
As people turn to Google for deeper insights and understanding, we’re working on bringing AI-powered features to Search to distill complex information and help you quickly see the big picture and learn more from the web → https://t.co/CYlA0tIZ1K pic.twitter.com/rhJFvnTPQk
— Google (@Google) February 8, 2023
But the technology has flaws. Because the chatbots learn their skills by analyzing vast amounts of data from the internet, they are prone to factual inaccuracies.
Last month, CNET was forced to issue corrections on a number of articles after it was revealed that the news outlet had used an AI-powered tool to write dozens of stories.
Google itself stumbled when Bard made a factual error about the James Webb Space Telescope in its demo video. The market value of Alphabet Inc., Google’s parent company, plunged US$100 billion on Wednesday following the revelation.
How will the new Bing inspire you? pic.twitter.com/mEk0uVC0dH
— Bing (@bing) February 9, 2023
The ongoing AI rat race also raises concerns beyond functionality. Beena Ammanath, who leads Trustworthy Tech Ethics at Deloitte and is the executive director of the Global Deloitte AI Institute, warned in an interview with CNN of AI’s “unintended consequences.” Ammanath likened the swift deployment of AI to “building Jurassic Park, putting some danger signs on the fences, but leaving all the gates open.”
The world’s biggest names in tech are well aware of AI’s power to reshape the world as we know it. Yet building the digital equivalent of a human brain is an arduous mission, and there is a whole host of risks and ethical concerns that need to be carefully assessed before rushing in to cement the future of AI. If this battle of the bots continues to follow Silicon Valley’s “move fast and break things” ethos, which has proven rather problematic in the past (looking at you, Meta), there is a worrying risk of overlooking the safety, social and ethical implications of the nascent technology.