The exciting new AI transforming search — and maybe everything — explained

Generative AI is here. Let’s hope we’re ready.


The world’s first generative AI-powered search engine is here, and it’s in love with you. Or it thinks you’re kind of like Hitler. Or it’s gaslighting you into thinking it’s still 2022, a more innocent time when generative AI seemed more like a cool party trick than a powerful technology about to be unleashed on a world that might not be ready for it.

If you feel like you’ve been hearing a lot about generative AI, you’re not wrong. After a generative AI tool called ChatGPT went viral a few months ago, it seems everyone in Silicon Valley is trying to find a use for this new technology. Generative AI is essentially a more advanced and useful version of the conventional artificial intelligence that already helps power everything from autocomplete to Siri. The big difference is that generative AI can create new content, such as images, text, audio, video, and even code — usually from a prompt or command. It can write news articles, movie scripts, and poetry. It can create images from strikingly specific text prompts. And if you listen to some experts and developers, generative AI will eventually be able to make almost anything, including entire apps, from scratch. For now, the killer app for generative AI appears to be search.




One of the first major generative AI products for the consumer market is Microsoft’s new AI-infused Bing, which debuted in February to great fanfare. The new Bing uses generative AI in its web search function to return longer, written answers culled from various internet sources instead of a list of links to relevant websites. There’s also an accompanying chat feature that lets users have human-seeming conversations with an AI chatbot. Google, the undisputed king of search for decades, is planning to release its own version of AI-powered search as well as a chatbot called Bard in the coming weeks, the company said just a day before Microsoft announced the new Bing.



In other words, the AI wars have begun. And the battles may not just be over search engines. Generative AI is already starting to find its way into mainstream applications for everything from food shopping to social media.

Microsoft and Google are the biggest companies with public-facing generative AI products, but they aren’t the only ones working on it. Apple, Meta, and Amazon have their own AI initiatives, and there are plenty of startups and smaller companies developing generative AI or working it into their existing products. TikTok has a generative AI text-to-image system. Design platform Canva has one, too. An app called Lensa creates stylized selfies and portraits (sometimes with ample bosoms). And the open-source model Stable Diffusion can generate detailed and specific images in all kinds of styles from text prompts.
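To give a sense of how accessible these tools have become, here’s a minimal sketch of what generating an image with the open-source Stable Diffusion model looks like using Hugging Face’s diffusers library; the specific model version, prompt, and file name are just illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the open-source Stable Diffusion weights (several gigabytes)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # image generation is far faster on a GPU

# Turn a text prompt into a brand-new image and save it to disk
image = pipe("a watercolor painting of a lighthouse at sunrise").images[0]
image.save("lighthouse.png")
```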


There’s a good chance we’re about to see a lot more generative AI showing up in a lot more applications, too. OpenAI, the AI developer behind ChatGPT, recently announced the release of APIs, or application programming interfaces, for ChatGPT and for Whisper, its speech recognition model. Companies like Instacart and Shopify are already building this tech into their products, using generative AI to write shopping lists and offer recommendations. There’s no telling how many more apps might come up with novel ways to take advantage of what generative AI can do.
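The APIs themselves take only a few lines of code to call. As a rough illustration, here’s what requests to the ChatGPT and Whisper endpoints looked like with OpenAI’s Python library at the time; the API key, prompt, and audio file below are placeholders.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

# Ask the ChatGPT model to draft a shopping list, the kind of task
# companies like Instacart are building into their apps
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Write a shopping list for taco night for four people."}
    ],
)
print(chat["choices"][0]["message"]["content"])

# Transcribe speech to text with the Whisper model
with open("voice_note.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])
```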


Generative AI has the potential to be a revolutionary technology, and it’s certainly being hyped as such. Venture capitalists, who are always looking for the next big tech thing, believe that generative AI can replace or automate a lot of creative processes, freeing up humans to do more complex tasks and making people more productive overall. But it’s not just creative work that generative AI can produce. It can help developers make software. It could improve education. It may be able to discover new drugs or become your therapist. It just might make our lives easier and better.

Or it could make things a lot worse. There are reasons to be concerned about the damage generative AI can do if it’s released to a society that isn’t ready for it — or if we ask an AI program to do something it isn’t ready for. How ethical or responsible generative AI technologies are is largely in the hands of the companies developing them, as there are few if any regulations or laws in place governing AI. This powerful technology could put millions of people out of work if it’s able to automate entire industries. It could spawn a destructive new era of misinformation. There are also concerns about bias stemming from a lack of diversity in the material and data that generative AI is trained on, and in the people who oversee that training.

Nevertheless, powerful generative AI tools are making their way to the masses. If 2022 was the “year of generative AI,” 2023 may be the year that generative AI is actually put to use, ready or not.

The slow, then sudden, rise of generative AI

Conventional artificial intelligence is already integrated into a ton of products we use all the time, like autocomplete, voice assistants like Amazon’s Alexa, and even the recommendations for music or movies we might enjoy on streaming services. But generative AI is more sophisticated. It uses deep learning: algorithms that build artificial neural networks meant to mimic how human brains process information and learn. Those models are then fed enormous amounts of data to train on. For example, large language models, which power tools like ChatGPT, are trained on text collected from around the internet until they learn to generate and mimic those kinds of texts and conversations on request. Image models are fed tons of images along with captions that describe them in order to learn how to create new content based on prompts.
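You can see the basic idea with a small open-source language model: given a prompt, it keeps predicting plausible next words based on the text it was trained on. Here’s a minimal sketch using Hugging Face’s transformers library, with the comparatively tiny GPT-2 standing in for the far larger models behind ChatGPT.

```python
from transformers import pipeline

# Load a small, open-source language model for text generation
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text that resembles its training data
result = generator("Generative AI is", max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

Scale the model and its training data up by several orders of magnitude, and you get systems that can hold conversations and follow instructions rather than just complete sentences.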

After years of development, most of it outside of public view, generative AI hit the mainstream in 2022 with the widespread releases of art and text models. Models like Stable Diffusion and OpenAI’s DALL-E were the first to go viral, letting anyone create new images from text prompts. Then came OpenAI’s ChatGPT (GPT stands for “generative pre-trained transformer”), which got everyone’s attention. The tool could produce large, entirely new chunks of text from simple prompts. For the most part, ChatGPT worked really well, too — better than anything the world had seen before.

Though it’s one of many AI startups out there, OpenAI seems to have the most advanced or powerful products right now. Or at least, it’s the startup that has given the general public access to its services, thereby providing the most evidence of its progress in the generative AI field. That access serves as a demonstration of its abilities as well as a source of even more data for OpenAI’s models to learn from.

OpenAI is also backed by some of the biggest names in Silicon Valley. It was founded in 2015 as a nonprofit research lab with $1 billion in support from the likes of Elon Musk, Reid Hoffman, Peter Thiel, Amazon, and former Y Combinator president Sam Altman, who is now the company’s CEO. OpenAI has since changed its structure to become a for-profit company but has yet to make a profit or even much by way of revenue. That’s not a problem yet, as OpenAI has gotten a considerable amount of funding from Microsoft, which began investing in OpenAI in 2019. And OpenAI is seizing on the wave of excitement for ChatGPT to promote its API services, which are not free. Neither is the company’s upcoming ChatGPT Plus service.

Other big tech companies have for years been working on their own generative AI initiatives. There’s Apple’s Gaudi, Meta’s LLaMA and Make-A-Scene, Amazon’s collaboration with Hugging Face, and Google’s LaMDA (which was convincing enough that one Google engineer thought it was sentient). But thanks to its early investment in OpenAI, Microsoft had access to the AI project everyone knew about and was trying out.

In January 2023, Microsoft announced a new multibillion-dollar investment in OpenAI, reported to be around $10 billion, bringing its total investment in the company to a reported $13 billion. From that partnership, Microsoft has gotten what it hopes will be a real challenge to Google’s longtime dominance in web search: a new Bing powered by generative AI.


AI search will give us the first glimpse of how generative AI can be used in our everyday lives ... if it works

Tech companies and investors are willing to pour resources into generative AI because they hope that, eventually, it will be able to create or generate just about any kind of content humans ask for. Some of those aspirations may be a long way from becoming reality, but right now, it’s possible that generative AI will power the next evolution of the humble internet search.

After months of rumors that both Microsoft and Google were working on generative AI versions of their web search engines, Microsoft debuted its AI-integrated Bing in February at a splashy media event that showed off all the cool things it could do, powered by technology that OpenAI custom-built for it. Instead of entering a query for Bing to look up and return a list of relevant links, you could ask Bing a question and get a “complete answer” composed by Bing’s generative AI and culled from various sources on the web that you didn’t have to take the time to visit yourself. You could also use Bing’s chatbot to ask follow-up questions to better refine your search results.
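Microsoft hasn’t published exactly how the new Bing stitches its answers together, but the general pattern, often called retrieval-augmented generation, pairs a conventional search step with a language model that writes the answer. The sketch below is a simplified, hypothetical version of that pattern: the search function is a stand-in, and the model and prompts are invented for illustration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

def web_search(query: str) -> list[str]:
    # Placeholder: a real system would query a search index and return
    # text snippets from the top-ranked pages
    return [f"Example snippet about {query} from source A",
            f"Example snippet about {query} from source B"]

def answer_with_sources(question: str) -> str:
    # 1. Retrieve relevant snippets with ordinary web search
    context = "\n\n".join(web_search(question))

    # 2. Ask a language model to compose one answer grounded in those snippets
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer the question using only the provided snippets."},
            {"role": "user",
             "content": f"Snippets:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(answer_with_sources("What time do movie theaters usually open?"))
```

Even with that grounding step, the model can misread or embellish what the search results actually say, which is part of why these “complete answers” sometimes get facts wrong.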

Microsoft wants you to think the possibilities of these new tools are just about endless. And notably, Bing AI appeared to be ready for the general public when the company announced it last month. It’s now being rolled out to people on an ever-growing wait list and incorporated into other Microsoft products, like its Windows 11 operating system and Skype.

This poses a major threat to Google, which has had the search market sewn up for decades and makes most of its revenue from the ads placed alongside its search results. The new Bing could chip away at Google’s search dominance and its main moneymaker. And while Google has been working on its own generative AI models for years, its AI-powered search engine and corresponding chatbot, which it calls Bard, appear to be months away from debut. All of this suggests that, so far, Microsoft is winning the AI-powered search engine battle.

Or is it?

Once the new Bing made it to the masses, it quickly became apparent that the technology might not be ready for primetime after all. Right out of the gate, Bing made basic factual errors or made up stuff entirely, also known as “hallucinating.” What was perhaps more problematic, however, was that its chatbot was also saying some disturbing and weird things. One person asked Bing for movie showtimes, only to be told the movie hadn’t come out yet (it had) because the date was February 2022 (it wasn’t). The user insisted that it was, at that time, February 2023. Bing AI responded by telling the user they were being rude, had “bad intentions,” and had lost Bing’s “trust and respect.” A New York Times reporter pronounced Bing “not ready for human contact” after its chatbot — with a considerable amount of prodding from the reporter — began expressing its “desires,” one of which was the reporter himself. Bing also told an AP reporter that he was acting like Hitler.

In response to the bad press, Microsoft has tried to put some limits and guardrails on Bing, like limiting the number of interactions one person can have with its chatbot. But the question remains: How thoroughly could Microsoft have tested Bing’s chatbot before releasing it if it took only a matter of days for users to get it to give such wild responses?

Google, on the other hand, may have been watching this all unfold with a certain sense of glee. Its limited Bard rollout hasn’t exactly gone perfectly, but Bard hasn’t compared any of its users to one of the most reviled people in human history, either. At least, not that we know of. Not yet.




Again, Microsoft and Google aren’t the only companies working on generative AI, but their public releases have put more pressure on others to roll out their offerings as soon as possible, too. ChatGPT’s release and OpenAI’s partnership with Microsoft likely accelerated Google’s plans. Meanwhile, Meta is working to get its generative AI into as many of its own products as possible and just released a large language model of its own, called Large Language Model Meta AI, or LLaMA.

With the rollout of APIs that help developers add ChatGPT and Whisper to their applications, OpenAI seems eager to expand quickly. Some of these integrations seem pretty useful, too. Snapchat now has a chatbot called “My AI” for its paid subscribers, with plans to offer it to everyone soon. Initial reports say it’s just ChatGPT inside Snapchat, but with even more restrictions on what it will talk about (no swearing, sex, or violence). Instacart will use ChatGPT in a feature called “Ask Instacart” that can answer customers’ questions about food. And Shopify’s Shop app has a ChatGPT-powered assistant that makes personalized recommendations from the brands and stores that use the platform.
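None of these companies have published their exact setups, but a common way to bolt restrictions like Snapchat’s onto a general-purpose model is a “system” instruction that frames every conversation. Here’s a hypothetical sketch; the guardrail wording and model choice are assumptions, not anything these apps have confirmed.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: use your own key

# Hypothetical guardrail instruction; the real restrictions these apps use are not public
GUARDRAILS = (
    "You are a friendly in-app assistant. Politely refuse to discuss violence, "
    "sex, or anything explicit, and never use profanity."
)

def restricted_reply(user_message: str) -> str:
    # Every request carries the guardrail instruction ahead of the user's message
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": user_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(restricted_reply("What should I cook for dinner tonight?"))
```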


Generative AI is here to stay, but we don’t yet know if that’s for the best


Bing AI’s problems were just a glimpse of how generative AI can go wrong and have potentially disastrous consequences. That’s why pretty much every company in the field of AI goes out of its way to reassure the public that it’s being very responsible with its products and taking great care before unleashing them on the world. Yet for all of their stated commitment to “building AI systems and products that are trustworthy and safe,” Microsoft and OpenAI either didn’t or couldn’t ensure that Bing’s chatbot would live up to those principles, and they released it anyway. Google and Meta, by contrast, were very conservative about releasing their products — until Microsoft and OpenAI gave them a push.

Error-prone generative AI is being put out there by many other companies that have promised to be careful. Some text-to-image models are infamous for producing images with missing or extra limbs. There are chatbots that will confidently declare the winner of a Super Bowl that hasn’t been played yet. These mistakes are funny as isolated incidents, but we’ve already seen one publication rely on generative AI to write authoritative-sounding articles with significant factual errors.

These screw-ups have been happening for years. Microsoft had one high-profile AI chatbot flop with its 2016 release of Tay, which Twitter users almost immediately trained to say some really offensive things; Microsoft quickly took it offline. Meta’s BlenderBot, which is based on a large language model, was released in August 2022. It didn’t go well. The bot seemed to hate Facebook, got racist and antisemitic, and wasn’t very accurate. It’s still available to try out, but after seeing what ChatGPT can do, it feels like a clunky, slow, and weird step backward.

There are even more serious concerns. Generative AI threatens to put a lot of people out of work if it’s good enough to replace them. It could have a profound impact on education. There are also legal questions about the material AI developers use to train their models, which is typically scraped from millions of sources the developers don’t have the rights to. And there are questions of bias, both in the material that AI models are trained on and in the people who are training them.

On the other side, some conservative bomb-throwers have accused generative AI developers of moderating their platforms’ outputs too much and making them “woke” and biased against the right wing. To that end, Musk, the self-proclaimed free-speech absolutist and OpenAI critic as well as an early investor, is reportedly considering developing a ChatGPT rival that won’t have content restrictions or be trained on supposedly “woke” material.

And then there’s the fear not of generative AI itself but of the technology it could lead to: artificial general intelligence. AGI would be able to learn, think, and solve problems like a human, if not better. That prospect has given rise to science fiction-fueled fears that AGI will lead to an army of super-robots that quickly realize they have no need for humans and either turn us into slaves or wipe us out entirely.


There are plenty of reasons to be optimistic about generative AI’s future, too. It’s a powerful technology with a ton of potential, and we’ve still seen relatively little of what it can do and who it can help. Silicon Valley clearly sees this potential, and venture capital firms like Andreessen Horowitz and Sequoia appear to be all in. OpenAI is valued at nearly $30 billion, despite not yet having proved itself as a revenue generator.

Generative AI has the power to upend a lot of things, but that doesn’t necessarily mean it’ll make them worse. Its ability to automate tasks may give humans more time to focus on the stuff that can’t be done by increasingly sophisticated machines, as has been true for technological advances before it. And in the near future — once the bugs are worked out — it could make searching the web better. In the years and decades to come, it might even make everything else better, too.

Oh, and in case you were wondering: No, generative AI did not write this explainer.

