Big tech and generative AI: What the incumbents are up to
To stay on top of the latest news on GenAI and other emerging technologies, subscribe to my Substack newsletter (it’s free!).
AI has already revolutionized industries from healthcare to finance to entertainment. However, the integration of generative AI into some of the world’s most popular tech products and services takes things to a whole new level.
Generative AI, or GenAI, is an exciting development in the tech world. With GenAI, machines use artificial intelligence to create original content or data, including art, music, content marketing campaigns and even entire novels.
It seems like the internet is abuzz with chatter about GenAI tools like ChatGPT and DALL-E, and the world’s biggest tech companies are paying attention.
At their most recent earnings calls, several Big Tech companies revealed plans for integrating generative AI into their product lines in the next few years. Others were strangely mute on the subject of AI.
Let's explore what these tech giants revealed, and what their plans tell us about the development and use of GenAI.
Microsoft

Microsoft has arguably been the leader in the latest wave of large language models (LLMs) and generative AI, investing over $10 billion into OpenAI.
At the earnings call, Microsoft CEO Satya Nadella said, “The age of AI is upon us and Microsoft is powering it.”
Let’s take a more holistic look at what Microsoft is up to.
A. AI for developers
Microsoft made the Azure OpenAI service available broadly in January 2023, and Azure will benefit from this wave of foundation models.
“As customers select their cloud providers and invest in new workloads, we are well-positioned to capture that opportunity as a leader in AI,” Nadella said. “We have the most powerful AI supercomputing infrastructure in the cloud. It's being used by customers and partners like OpenAI to train state-of-the-art models and services, including ChatGPT.”
Microsoft’s cloud computing platform will also be the exclusive cloud provider for OpenAI.
Microsoft is busy preparing its infrastructure for the new world of GenAI. For the past few years, they’ve been working on building and training supercomputers and inference systems so developers can use ChatGPT within their own applications.
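To make the developer story concrete, here is a minimal sketch of what calling a hosted chat model from your own application might look like. The helper function, deployment name and prompt are all hypothetical, and the commented-out call at the end assumes the openai Python package configured for an Azure OpenAI endpoint.

```python
def build_chat_request(deployment: str, user_message: str) -> dict:
    """Assemble keyword arguments for a chat-completion call.

    `deployment` is the name given to the model when it was deployed
    in the Azure OpenAI service (the name used below is hypothetical).
    """
    return {
        "engine": deployment,  # Azure identifies models by deployment name
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # lower temperature -> more deterministic replies
    }

request = build_chat_request("gpt-35-turbo", "Summarize this meeting transcript: ...")

# With the openai Python package configured for Azure (api_type = "azure",
# plus your resource endpoint, API version and key), the request would be
# sent with: openai.ChatCompletion.create(**request)
```

The point of the partnership, from a developer's perspective, is that this kind of call runs against Microsoft's infrastructure rather than hardware you manage yourself.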
B. AI for end users
Microsoft has made it clear it wants to bring LLM-based AI capabilities into all of its apps.
Nadella cited GitHub Copilot as one of the most at-scale LLM-based products in the marketplace, stating that Microsoft intends to incorporate AI into a wide variety of its products, from productivity tools to consumer services.
Microsoft Teams gets AI
We’ve started to see previews of some of Microsoft’s new features for its Teams Premium platform, including AI-generated chapters, markers, notes and action items.
On the Microsoft Teams recap page, you can see the “Weekly Teams Review” meeting recap, including the session recording with different chapters and individual speaker timeline markers. On the right side of the screen, you can see “AI Notes,” which show suggested notes and tasks.
AI in Microsoft 365 is coming
Nadella didn’t state it outright, but he all but confirmed that AI features would come to programs like Excel, PowerPoint, Outlook and Word. There were rumors in January that Microsoft was experimenting with integrating GenAI into these tools, and the hints in the earnings call seem to confirm those murmurings.
“Microsoft 365 is rapidly evolving into an AI-first platform that enables every individual to amplify their creativity and productivity with both our established applications, as well as new applications like Designer, Stream and Loop,” Nadella said.
For example, users can take advantage of a new AI feature in Microsoft Editor that summarizes long text with the click of a button.
C. OpenAI partnership
Microsoft first partnered with OpenAI approximately three years ago, investing $1 billion initially. Over the last few months, they deepened that partnership, reportedly with another $10 billion. In their current investment structure, Microsoft gets 49% of profits back until they’ve been repaid (after other early investors are repaid).
But of course, this isn’t just a financial investment. Here are some other impacts:
● OpenAI uses Azure exclusively for all its training and inference, including its released products. ChatGPT is also run on Azure.
● Azure makes OpenAI available to enterprise customers via the Azure OpenAI service, capturing the cloud computing from these workloads.
● Microsoft integrates OpenAI’s models into its applications and products — which technically any company can do.
Satya sums it up well:
“We look to both, there's an investment part to it and there's a commercial partnership. But fundamentally, it's going to be something that's going to drive, I think, innovation and competitive differentiation in every one of the Microsoft solutions by leading in AI.”
Google

Google has been talking about itself as an “AI-first” company for over six years, so it’s no surprise that the company continues to be an important part of the discussion. But with many people suggesting that products like ChatGPT could be what finally disrupts Google, AI dominated this call.
Google’s CEO, Sundar Pichai, said, “AI is the most profound technology we are working on today. Our talented researchers, infrastructure and technology make us extremely well-positioned as AI reaches an inflection point.”
A. AI for end users
Bard, a ChatGPT alternative, is coming
In the past, Google has talked about its AI research projects, like the 540-billion-parameter Pathways Language Model (PaLM) and the Language Model for Dialogue Applications (LaMDA).
In its earnings call, Google made it clear the company will make these models available so people can start engaging with them. Pichai said:
“We’ve published extensively about LaMDA and PaLM, the industry’s largest, most sophisticated model, plus extensive work at DeepMind. In the coming weeks and months, we’ll make these language models available, starting with LaMDA, so that people can engage directly with them. This will help us continue to get feedback, test and safely improve them. These models are particularly amazing for composing, constructing and summarizing.”
Pichai even hinted that Google’s models might be more factual and up-to-date than the current version of ChatGPT.
In early February 2023, Google announced Bard, its own conversational AI service powered by LaMDA, as an alternative to ChatGPT. Company spokespeople again highlighted Bard’s potential accuracy as a big selling point, saying, “We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.”
Improved Google search
Pichai also hinted that Google’s most powerful language models will be integrated into search in some experimental formats, saying:
“Language models like BERT and MUM have improved search results for four years now, enabling significant ranking improvements and multimodal search like Google Lens. Very soon, people will be able to interact directly with our newest, most powerful language models as a companion to search in experimental and innovative ways.”
AI in Docs, Mail and Workspace
Google also confirmed they’ll be bringing LLMs to Gmail, Docs and other tools in Workspace.
Users can already use features like Smart Compose for writing and Smart Canvas for collaboration. The team at Google has stated it will also make other generative capabilities, like coding and design, available soon.
It should be interesting to see how these integrations will impact existing text and email-based generative AI startups.
B. AI for developers
Google also has plans to make its own language and multimodal AI models available to developers. The company will also strengthen its AI cloud platform. Developers will be able to innovate, build their own applications and discover new AI possibilities.
Pichai said, “Google Cloud is making our technological leadership in AI available to customers via our Cloud AI Platform, including infrastructure and tools for developers and data scientists, like Vertex AI.”
C. AI for advertisers
Google’s matching algorithms already use a lot of artificial intelligence. LLMs are used to match advertisers to user queries, which can improve performance by up to 35% on certain campaigns.
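Google hasn't published how this matching works internally, but the general idea behind language-model matching can be sketched with embedding similarity: represent each ad and the incoming query as vectors, then pick the ad whose vector is most similar to the query's. Everything below (the ads, the query, the tiny three-number vectors) is a toy stand-in for real model embeddings, which would have hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a real system would get these from a language model.
ad_embeddings = {
    "running shoes sale": [0.9, 0.1, 0.0],
    "trail hiking boots": [0.6, 0.7, 0.1],
}
query_embedding = [0.8, 0.3, 0.0]  # e.g. the query "sneakers for jogging"

# Serve the ad whose embedding points in the most similar direction.
best_ad = max(ad_embeddings, key=lambda ad: cosine(ad_embeddings[ad], query_embedding))
```

The advantage over keyword matching is that "sneakers for jogging" can match an ad for running shoes even though the two share no words.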
Now Google will start using GenAI more heavily on the advertising side of its business. To start, Google will be building it into its ad creative products. Advertisers will be able to use AI to generate headlines, descriptions and more.
Philipp Schindler, CBO of Google, said, “We’re excited to start testing our Automatically Created Assets Beta, which uses AI to generate headlines and descriptions for search creatives seamlessly once advertisers opt in.”
Meta

Amidst all the talk about the Metaverse, it’s easy to forget that Meta has also been investing heavily in AI, though so far mostly to power recommendations across its apps and advertising products. But Meta intends to push further into generative AI, as CEO Mark Zuckerberg made clear on the earnings call.
Zuckerberg said, “...one of my goals for Meta is to build on our research to become a leader in generative AI, in addition to our leading work in recommendation AI.”
On February 27, Zuckerberg also announced that Meta is rolling out a new product team responsible for integrating generative AI technology into its social platforms.
Zuckerberg highlighted Meta’s two biggest themes for 2023: 1) efficiency and 2) generative AI work.
A. AI for end users
Zuckerberg hinted that Meta is working on integrating LLMs and diffusion models into all of its apps for generating images, videos, avatars and 3D assets. Once the tech advances, users will be able to do things like image and video editing via prompts, and image and video generation using people’s avatars (or faces).
However, Zuckerberg was tight-lipped on the exact implementation in the apps, saying:
“We have a bunch of different work streams across almost every single one of our products to use the new technologies, especially the large language models and diffusion models for generating images and videos and avatars and 3D assets and all kinds of different stuff across all of the different work streams that we’re working on.”
Meta also highlighted the scaling challenges and its experimental, iterative approach. The company will be launching a number of different initiatives soon, and the company’s leadership team is aware that the space is moving quickly. Zuckerberg said, “I think we’ll learn a lot about what works and what doesn’t.”
Zuckerberg noted that reducing the cost of inference is an important part of bringing GenAI to the billions of users on Meta’s apps. It’s expensive to generate images, videos or chat interaction, so Meta needs to figure out how to scale and make the work more efficient so they can bring features to a larger user base.
B. AI for internal efficiency
Somewhat in passing, Zuckerberg mentioned that Meta will be “deploying AI tools to help our engineers be more productive.”
Here’s my interpretation of that comment: Meta could potentially roll out a Copilot-like product that is fine-tuned or trained on its internal codebase to increase the efficiency of development. They also might add AI to Meta’s code review and other tools.
C. AI for advertisers
A lot of the core business uses AI extensively, but what about GenAI? While Meta wasn’t specific, Zuckerberg did indicate that the company is heavily investing in AI to build tools to make it easier for advertisers to create and manage profitable ad campaigns.
My prediction is that Meta is working to bring some generative AI capabilities to advertisers, to help them create images and other assets.
“We’re investing heavily in AI to develop and deploy privacy-enhancing technologies and continue building new tools that will make it easier for advertisers to create and deliver more relevant and engaging ads,” Zuckerberg said.
Amazon

Surprisingly, there were no mentions of AI on Amazon’s recent earnings call. Given the emphasis Azure and GCP are placing on AI workloads, it’s interesting that Amazon CEO Andy Jassy didn’t address the topic, and he also wasn’t asked about the company’s plans.
On the customer front, Jassy mentioned that Stability AI has chosen AWS as its preferred cloud provider. Stability AI is the company that is funding the development of open-source image- and music-generation systems like Stable Diffusion and Dance Diffusion. Jassy said:
“Stability AI selected AWS as its preferred cloud provider to build and train artificial intelligence models for the best performance at the lowest cost.”
Similarly, Jassy called out Inf2 instances for low-latency, low-cost ML inference. “Inf2 instances, powered by AWS Inferentia2 chips, which deliver the lowest latency at the lowest cost for ML inference on Amazon EC2,” Jassy said.
Apple

Apple was fairly reticent about its AI plans on its earnings call. When prompted, CEO Tim Cook did note that AI is a major focus for Apple and that it would affect all of the company’s products and services.
Judging by the fact that Cook needed to be asked before even mentioning AI, I wouldn’t expect major changes anytime soon. He hinted at their strategic direction, however.
“[AI] is a major focus of ours…look no further than some of the things that we announced in the fall, with crash detection and fall detection or a while back, with ECG. These things have literally saved people's lives, and so we see enormous potential in this space to affect virtually everything we do.”
One feature Apple did launch recently is AI-narrated audiobooks, which could indicate some of the ways it intends to integrate AI into apps.
AI is a big focus for Big Tech
The recent earnings calls of tech giants Microsoft, Google, Meta, Apple and Amazon reveal ambitious plans to integrate generative AI into products and services over the coming years.
From Microsoft's focus on AI for developers, to Meta’s integration of GenAI creation tools, these companies are pushing the boundaries of what artificial intelligence can do to enhance user experience and drive innovation.
While there are still challenges to be overcome in the development and implementation of generative AI, the potential benefits are enormous, and it's clear the tech giants are investing heavily in this area. As we move forward, it will be interesting to see how plans unfold and how big tech will continue to shape the future of technology.