Google I/O 2022: Advancing knowledge and computing

A message from our CEO

May 11, 2022



Sundar Pichai

CEO of Google and Alphabet



Nearly 24 years ago, Google started with two graduate students, one product, and a big mission: to organize the world’s information and make it universally accessible and useful. In the decades since, we’ve been developing our technology to deliver on that mission.

The progress we've made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all. And once a year — on my favorite day of the year :) — we share an update on how it’s going at Google I/O.

Today, I talked about how we’re advancing two fundamental aspects of our mission — knowledge and computing — to create products that are built to help. It’s exciting to build these products; it’s even more exciting to see what people do with them.

Thank you to everyone who helps us do this work, and most especially our Googlers. We are grateful for the opportunity.

- Sundar

Editor’s note: Below is an edited transcript of Sundar Pichai's keynote address during the opening of today's Google I/O Developers Conference.

Hi, everyone, and welcome. Actually, let’s make that welcome back! It’s great to return to Shoreline Amphitheatre after three years away. To the thousands of developers, partners and Googlers here with us, it’s great to see all of you. And to the millions more joining us around the world — we’re so happy you’re here, too.

Last year, we shared how new breakthroughs in some of the most technically challenging areas of computer science are making Google products more helpful in the moments that matter. All this work is in service of our timeless mission: to organize the world's information and make it universally accessible and useful.

I'm excited to show you how we’re driving that mission forward in two key ways: by deepening our understanding of information so that we can turn it into knowledge; and advancing the state of computing, so that knowledge is easier to access, no matter who or where you are.

Today, you'll see how progress on these two parts of our mission ensures Google products are built to help. I’ll start with a few quick examples. Throughout the pandemic, Google has focused on delivering accurate information to help people stay healthy. Over the last year, people used Google Search and Maps to find where they could get a COVID vaccine nearly two billion times.


We’ve also expanded our flood forecasting technology to help people stay safe in the face of natural disasters. During last year’s monsoon season, our flood alerts notified more than 23 million people in India and Bangladesh. And we estimate this supported the timely evacuation of hundreds of thousands of people.

In Ukraine, we worked with the government to rapidly deploy air raid alerts. To date, we’ve delivered hundreds of millions of alerts to help people get to safety. In March I was in Poland, where millions of Ukrainians have sought refuge. Warsaw’s population has increased by nearly 20% as families host refugees in their homes, and schools welcome thousands of new students. Nearly every Google employee I spoke with there was hosting someone.

Adding 24 more languages to Google Translate

In countries around the world, Google Translate has been a crucial tool for newcomers and residents trying to communicate with one another. We’re proud of how it’s helping Ukrainians find a bit of hope and connection until they are able to return home again.


(Video) Google Keynote (Google I/O ‘22)

Real-time translation is a testament to how knowledge and computing come together to make people's lives better. More people are using Google Translate than ever before, but we still have work to do to make it universally accessible. There’s a long tail of languages that are underrepresented on the web today, and translating them is a hard technical problem. That’s because translation models are usually trained with bilingual text — for example, the same phrase in both English and Spanish. However, there's not enough publicly available bilingual text for every language.

So with advances in machine learning, we’ve developed a monolingual approach where the model learns to translate a new language without ever seeing a direct translation of it. By collaborating with native speakers and institutions, we found these translations were of sufficient quality to be useful, and we'll continue to improve them.


Today, I’m excited to announce that we’re adding 24 new languages to Google Translate, including the first indigenous languages of the Americas. Together, these languages are spoken by more than 300 million people. Breakthroughs like this are powering a radical shift in how we access knowledge and use computers.

Taking Google Maps to the next level

So much of what’s knowable about our world goes beyond language — it’s in the physical and geospatial information all around us. For more than 15 years, Google Maps has worked to create rich and useful representations of this information to help us navigate. Advances in AI are taking this work to the next level, whether it’s expanding our coverage to remote areas, or reimagining how to explore the world in more intuitive ways.


Around the world, we’ve mapped around 1.6 billion buildings and over 60 million kilometers of roads to date. Some remote and rural areas have previously been difficult to map, due to scarcity of high-quality imagery and distinct building types and terrain. To address this, we’re using computer vision and neural networks to detect buildings at scale from satellite images. As a result, we have increased the number of buildings on Google Maps in Africa by 5X since July 2020, from 60 million to nearly 300 million.

We’ve also doubled the number of buildings mapped in India and Indonesia this year. Globally, over 20% of the buildings on Google Maps have been detected using these new techniques. We’ve gone a step further, and made the dataset of buildings in Africa publicly available. International organizations like the United Nations and the World Bank are already using it to better understand population density, and to provide support and emergency assistance.

We’re also bringing new capabilities into Maps. Using advances in 3D mapping and machine learning, we’re fusing billions of aerial and street level images to create a new, high-fidelity representation of a place. These breakthrough technologies are coming together to power a new experience in Maps called immersive view: it allows you to explore a place like never before.

Let’s go to London and take a look. Say you’re planning to visit Westminster with your family. You can get into this immersive view straight from Maps on your phone, and you can pan around the sights… here’s Westminster Abbey. If you’re thinking of heading to Big Ben, you can check if there's traffic, how busy it is, and even see the weather forecast. And if you’re looking to grab a bite during your visit, you can check out restaurants nearby and get a glimpse inside.

What's amazing is that this isn't a drone flying in the restaurant — we use neural rendering to create the experience from images alone. And Google Cloud Immersive Stream allows this experience to run on just about any smartphone. This feature will start rolling out in Google Maps for select cities globally later this year.

Another big improvement to Maps is eco-friendly routing. Launched last year, it shows you the most fuel-efficient route, giving you the choice to save money on gas and reduce carbon emissions. Eco-friendly routes have already rolled out in the U.S. and Canada — and people have used them to travel approximately 86 billion miles, helping save an estimated half million metric tons of carbon emissions, the equivalent of taking 100,000 cars off the road.


I’m happy to share that we’re expanding this feature to more places, including Europe later this year. In this Berlin example, you could reduce your fuel consumption by 18% taking a route that’s just three minutes slower. These small decisions have a big impact at scale. With the expansion into Europe and beyond, we estimate carbon emission savings will double by the end of the year.

And we’ve added a similar feature to Google Flights. When you search for flights between two cities, we also show you carbon emission estimates alongside other information like price and schedule, making it easy to choose a greener option. These eco-friendly features in Maps and Flights are part of our goal to empower 1 billion people to make more sustainable choices through our products, and we’re excited about the progress here.

New YouTube features to help people easily access video content

Beyond Maps, video is becoming an even more fundamental part of how we share information, communicate, and learn. Often when you come to YouTube, you are looking for a specific moment in a video and we want to help you get there faster.

Last year we launched auto-generated chapters to make it easier to jump to the part you’re most interested in.

This is also great for creators because it saves them time making chapters. We're now applying multimodal technology from DeepMind, which simultaneously uses text, audio and video to auto-generate chapters with greater accuracy and speed. With this, we now have a goal to 10X the number of videos with auto-generated chapters, from eight million today to 80 million over the next year.

Often the fastest way to get a sense of a video’s content is to read its transcript, so we’re also using speech recognition models to transcribe videos. Video transcripts are now available to all Android and iOS users.

(Video) Developer Keynote (Google I/O '22)


Next up, we’re bringing auto-translated captions on YouTube to mobile, which means viewers can now auto-translate video captions in 16 languages, and creators can grow their global audience. We’ll also be expanding auto-translated captions to Ukrainian YouTube content next month, part of our larger effort to increase access to accurate information about the war.

Helping people be more efficient with Google Workspace

Just as we’re using AI to improve features in YouTube, we’re building it into our Workspace products to help people be more efficient. Whether you work for a small business or a large institution, chances are you spend a lot of time reading documents. Maybe you’ve felt that wave of panic when you realize you have a 25-page document to read ahead of a meeting that starts in five minutes.

At Google, whenever I get a long document or email, I look for a TL;DR at the top — TL;DR is short for “Too Long, Didn’t Read.” And it got us thinking, wouldn’t life be better if more things had a TL;DR?

That’s why we’ve introduced automated summarization for Google Docs. Using one of our machine learning models for text summarization, Google Docs will automatically parse the words and pull out the main points.
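Docs uses a learned abstractive model for this, which generates new wording rather than copying sentences. As a rough illustration of the simpler extractive flavor of the same task — not Google's approach, and with a purely toy scoring heuristic — a summarizer can rank each sentence by how frequent its words are across the whole document:

```python
import re
from collections import Counter

def toy_extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Score each sentence by the average document-wide frequency of its
    words, then return the top-scoring sentences in their original order.
    A crude extractive stand-in for a learned abstractive summarizer."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in reading order, not score order.
    return " ".join(s for s in sentences if s in top)
```

A real system has to go well beyond this — compressing information and generating fluent new text — which is exactly the capability described above.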

This marks a big leap forward for natural language processing. Summarization requires understanding of long passages, information compression and language generation, which used to be outside of the capabilities of even the best machine learning models.

And docs are only the beginning. We’re launching summarization for other products in Workspace. It will come to Google Chat in the next few months, providing a helpful digest of chat conversations, so you can jump right into a group chat or look back at the key highlights.


And we’re working to bring transcription and summarization to Google Meet as well so you can catch up on some important meetings you missed.

Visual improvements on Google Meet

Of course there are many moments where you really want to be in a virtual room with someone. And that’s why we continue to improve audio and video quality, inspired by Project Starline. We introduced Project Starline at I/O last year. And we’ve been testing it across Google offices to get feedback and improve the technology for the future. And in the process, we’ve learned some things that we can apply right now to Google Meet.

Starline inspired machine learning-powered image processing to automatically improve your image quality in Google Meet. And it works on all types of devices so you look your best wherever you are.


We’re also bringing studio quality virtual lighting to Meet. You can adjust the light position and brightness, so you’ll still be visible in a dark room or sitting in front of a window. We’re testing this feature to ensure everyone looks like their true selves, continuing the work we’ve done with Real Tone on Pixel phones and the Monk Scale.

Read article: Improving skin tone representation across Google. We're introducing a next step in our commitment to image equity and improving representation across our products.

These are just some of the ways AI is improving our products: making them more helpful, more accessible, and delivering innovative new features for everyone.


Making knowledge accessible through computing

We’ve talked about how we’re advancing access to knowledge as part of our mission: from better language translation to improved Search experiences across images and video, to richer explorations of the world using Maps.

Now we’re going to focus on how we make that knowledge even more accessible through computing. The journey we’ve been on with computing is an exciting one. Every shift, from desktop to the web to mobile to wearables and ambient computing has made knowledge more useful in our daily lives.

As helpful as our devices are, we’ve had to work pretty hard to adapt to them. I’ve always thought computers should be adapting to people, not the other way around. We continue to push ourselves to make progress here.

(Video) Google I/O ‘22 in Under 12 Minutes

Here’s how we’re making computing more natural and intuitive with the Google Assistant.

Read article: Have more natural conversations with Google Assistant. Google Assistant announces more natural and conversational ways to interact with devices.

Introducing LaMDA 2 and AI Test Kitchen


We're continually working to advance our conversational capabilities. Conversation and natural language processing are powerful ways to make computers more accessible to everyone. And large language models are key to this.

Last year, we introduced LaMDA, our generative language model for dialogue applications that can converse on any topic. Today, we are excited to announce LaMDA 2, our most advanced conversational AI yet.

We are at the beginning of a journey to make models like these useful to people, and we feel a deep responsibility to get it right. To make progress, we need people to experience the technology and provide feedback. We opened LaMDA up to thousands of Googlers, who enjoyed testing it and seeing its capabilities. This yielded significant quality improvements, and led to a reduction in inaccurate or offensive responses.

That’s why we’ve made AI Test Kitchen. It’s a new way to explore AI features with a broader audience. Inside the AI Test Kitchen, there are a few different experiences. Each is meant to give you a sense of what it might be like to have LaMDA in your hands and use it for things you care about.

The first is called “Imagine it.” This demo tests if the model can take a creative idea you give it, and generate imaginative and relevant descriptions. These are not products, they are quick sketches that allow us to explore what LaMDA can do with you. The user interfaces are very simple.

Say you’re writing a story and need some inspirational ideas. Maybe one of your characters is exploring the deep ocean. You can ask what that might feel like. Here LaMDA describes a scene in the Mariana Trench. It even generates follow-up questions on the fly. You can ask LaMDA to imagine what kinds of creatures might live there. Remember, we didn’t hand-program the model for specific topics like submarines or bioluminescence. It synthesized these concepts from its training data. That’s why you can ask about almost any topic: Saturn’s rings or even being on a planet made of ice cream.

Staying on topic is a challenge for language models. Say you’re building a learning experience — you want it to be open-ended enough to allow people to explore where curiosity takes them, but stay safely on topic. Our second demo tests how LaMDA does with that.

In this demo, we’ve primed the model to focus on the topic of dogs. It starts by generating a question to spark conversation, “Have you ever wondered why dogs love to play fetch so much?” And if you ask a follow-up question, you get an answer with some relevant details: interestingly, the model thinks it might have something to do with dogs’ sense of smell and love of treasure hunting.

You can take the conversation anywhere you want. Maybe you’re curious about how smell works and you want to dive deeper. You’ll get a unique response for that too. No matter what you ask, it will try to keep the conversation on the topic of dogs. If I start asking about cricket, which I probably would, the model brings the topic back to dogs in a fun way.

This challenge of staying on-topic is a tricky one, and it’s an important area of research for building useful applications with language models.

These experiences show the potential of language models to one day help us with things like planning, learning about the world, and more.

Of course, there are significant challenges to solve before these models can truly be useful. While we have improved safety, the model might still generate inaccurate, inappropriate, or offensive responses. That’s why we are inviting feedback in the app, so people can help report problems.

We will be doing all of this work in accordance with our AI Principles. Our process will be iterative, opening up access over the coming months, and carefully assessing feedback with a broad range of stakeholders — from AI researchers and social scientists to human rights experts. We’ll incorporate this feedback into future versions of LaMDA, and share our findings as we go.

Over time, we intend to continue adding other emerging areas of AI into AI Test Kitchen. You can learn more on the AI Test Kitchen website.

Advancing AI language models

LaMDA 2 has incredible conversational capabilities. To explore other aspects of natural language processing and AI, we recently announced a new model. It’s called Pathways Language Model, or PaLM for short. It’s our largest model to date, with 540 billion parameters.

PaLM demonstrates breakthrough performance on many natural language processing tasks, such as generating code from text, answering a math word problem, or even explaining a joke.

It achieves this through greater scale. And when we combine that scale with a new technique called chain-of-thought prompting, the results are promising. Chain-of-thought prompting allows us to describe multi-step problems as a series of intermediate steps.

Let’s take an example of a math word problem that requires reasoning. Normally, you use a model by prompting it with an example question and answer, and then you start asking your own questions. In this case: How many hours are in the month of May? So you can see, the model didn’t quite get it right.

In chain-of-thought prompting, we give the model a question-answer pair, but this time, an explanation of how the answer was derived. Kind of like when your teacher gives you a step-by-step example to help you understand how to solve a problem. Now, if we ask the model again — how many hours are in the month of May — or other related questions, it actually answers correctly and even shows its work.
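The contrast between the two prompting styles can be sketched as plain prompt strings. The exemplars below are illustrative, not the exact prompts from the demo, and the model call itself is omitted since any large language model API would do:

```python
# Standard few-shot prompt: the exemplar gives only the final answer.
standard_prompt = """Q: How many days are there in February 2023?
A: The answer is 28.

Q: How many hours are in the month of May?
A:"""

# Chain-of-thought prompt: the exemplar spells out its intermediate
# reasoning, nudging the model to derive its own answer step by step.
cot_prompt = """Q: How many days are there in February 2023?
A: 2023 is not a leap year, so February has 28 days. The answer is 28.

Q: How many hours are in the month of May?
A:"""

# The reasoning the second prompt is meant to elicit, worked by hand:
days_in_may = 31
hours_in_may = days_in_may * 24  # 31 days * 24 hours = 744 hours
```

Note that nothing about the model or its training changes between the two prompts; only the exemplar's answer format differs, which is what makes the technique so cheap to apply.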


Chain-of-thought prompting increases accuracy by a large margin. This leads to state-of-the-art performance across several reasoning benchmarks, including math word problems. And we can do it all without ever changing how the model is trained.

PaLM is highly capable and can do so much more. For example, you might be someone who speaks a language that’s not well-represented on the web today — which makes it hard to find information. It’s even more frustrating because the answer you are looking for is probably out there. PaLM offers a new approach that holds enormous promise for making knowledge more accessible for everyone.

Let me show you an example in which we can help answer questions in a language like Bengali — spoken by a quarter billion people. Just like before, we prompt the model with two example questions in Bengali, each with both a Bengali and an English answer.

That’s it, now we can start asking questions in Bengali: “What is the national song of Bangladesh?” The answer, by the way, is “Amar Sonar Bangla” — and PaLM got it right, too. This is not that surprising because you would expect that content to exist in Bengali.

You can also try something that is less likely to have related information in Bengali such as: “What are popular pizza toppings in New York City?” The model again answers correctly in Bengali. Though it probably just stirred up a debate amongst New Yorkers about how “correct” that answer really is.

What’s so impressive is that PaLM has never seen parallel sentences between Bengali and English. Nor was it ever explicitly taught to answer questions or translate at all! The model brought all of its capabilities together to answer questions correctly in Bengali. And we can extend the techniques to more languages and other complex tasks.

We're so optimistic about the potential for language models. One day, we hope we can answer questions on more topics in any language you speak, making knowledge even more accessible, in Search and across all of Google.

Introducing the world’s largest, publicly available machine learning hub

The advances we’ve shared today are possible only because of our continued innovation in our infrastructure. Recently we announced plans to invest $9.5 billion in data centers and offices across the U.S.

(Video) Knowledge and Information | Google I/O 2022

One of our state-of-the-art data centers is in Mayes County, Oklahoma. I’m excited to announce that this is where we are launching the world’s largest publicly available machine learning hub for our Google Cloud customers.


This machine learning hub has eight Cloud TPU v4 pods, custom-built on the same networking infrastructure that powers Google’s largest neural models. They provide nearly nine exaflops of computing power in aggregate — bringing our customers an unprecedented ability to run complex models and workloads. We hope this will fuel innovation across many fields, from medicine to logistics, sustainability and more.
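As a back-of-the-envelope check, the aggregate figure is consistent with TPU v4's publicly stated pod size of 4,096 chips at roughly 275 teraflops per chip in bfloat16. Both numbers come from Google Cloud's public specs rather than this keynote, so treat them as approximations:

```python
chips_per_pod = 4096       # TPU v4 pod size (publicly stated spec, not from the keynote)
flops_per_chip = 275e12    # ~275 TFLOPS per chip in bfloat16 (approximate)
pods = 8                   # pods in the Mayes County machine learning hub

aggregate_flops = pods * chips_per_pod * flops_per_chip
print(f"{aggregate_flops / 1e18:.1f} exaflops")  # ~9.0 exaflops
```

That works out to just over nine exaflops in aggregate, matching the "nearly nine exaflops" figure above.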

And speaking of sustainability, this machine learning hub is already operating at 90% carbon-free energy. This is helping us make progress on our goal to become the first major company to operate all of our data centers and campuses globally on 24/7 carbon-free energy by 2030.

Even as we invest in our data centers, we are working to innovate on our mobile platforms so more processing can happen locally on device. Google Tensor, our custom system on a chip, was an important step in this direction. It’s already running on Pixel 6 and Pixel 6 Pro, and it brings our AI capabilities — including the best speech recognition we’ve ever deployed — right to your phone. It’s also a big step forward in making those devices more secure. Combined with Android’s Private Compute Core, it can run data-powered features directly on device so that it’s private to you.

People turn to our products every day for help in moments big and small. Core to making this possible is protecting your private information each step of the way. Even as technology grows increasingly complex, we keep more people safe online than anyone else in the world, with products that are secure by default, private by design and that put you in control.

Read article: How we make every day safer with Google. An update on how Google keeps more people safe online than anyone else in the world.

We also spent time today sharing updates to platforms like Android. They’re delivering access, connectivity, and information to billions of people through their smartphones and other connected devices like TVs, cars and watches.

Read article: Living in a multi-device world with Android. At I/O, Android announced updates for your phone, watch, and tablet, and new ways to help all your devices work better together.

And we shared our new Pixel portfolio, including the Pixel 6a, Pixel Buds Pro, Google Pixel Watch, Pixel 7, and Pixel tablet, all built with ambient computing in mind. We’re excited to share a family of devices that work better together — for you.

Read article: Take a look at our new Pixel portfolio, made to be helpful. The new Pixel portfolio furthers our work in ambient computing, making your hardware work better together for you.

The next frontier of computing: augmented reality

Today we talked about all the technologies that are changing how we use computers and access knowledge. We see devices working seamlessly together, exactly when and where you need them and with conversational interfaces that make it easier to get things done.

Looking ahead, there's a new frontier of computing, which has the potential to extend all of this even further, and that is augmented reality. At Google, we have been heavily invested in this area. We’ve been building augmented reality into many Google products, from Google Lens to multisearch, scene exploration, and Live and immersive views in Maps.

These AR capabilities are already useful on phones and the magic will really come alive when you can use them in the real world without the technology getting in the way.

That potential is what gets us most excited about AR: the ability to spend time focusing on what matters in the real world, in our real lives. Because the real world is pretty amazing!

It’s important we design in a way that is built for the real world — and doesn’t take you away from it. And AR gives us new ways to accomplish this.

Let’s take language as an example. Language is just so fundamental to connecting with one another. And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing, can be a real challenge. Let's see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.

You can see it in their faces: the joy that comes with speaking naturally to someone. That moment of connection. To understand and be understood. That’s what our focus on knowledge and computing is all about. And it’s what we strive for every day, with products that are built to help.

Each year we get a little closer to delivering on our timeless mission. And we still have so much further to go. At Google, we genuinely feel a sense of excitement about that. And we are optimistic that the breakthroughs you just saw will help us get there. Thank you to all of the developers, partners and customers who joined us today. We look forward to building the future with all of you.


How do I get Google IO swag? ›

To claim this offer, redeem through get.dev/domainsfordevs and use your unique code XXXXXXXXXXXX. Google Domains is one of the four registrar partners, and I/O 2022 attendees have until June 30, 2022, to redeem this swag. The codes are unique to each attendee/email and can presumably only be used once.

What does Google I O stand for? ›

Google I/O (or simply I/O) is an annual developer conference held by Google in Mountain View, California. "I/O" stands for Input/Output, as well as the slogan "Innovation in the Open".

Where is Google IO? ›

Google I/O is held at the Shoreline Amphitheatre in Mountain View, California.

What is the name of the new AI powered conversation technology launched at Google I O 2021? ›

After launching LaMDA (Language Model for Dialog Applications) last year, which allowed Google Assistant to have more natural conversations, Google has announced LaMDA 2 and the AI Test Kitchen, which is an app that will bring access to this model to users.

How do I get my free Google shirt? ›

  1. Participants who complete at least 1 task get a digital certificate!
  2. Participants who complete 3 or more tasks receive a t-shirt too!
  3. At the end of the contest, each organization will choose six finalists to receive limited edition Google Code-in jackets!

Who can attend Google I O? ›

By registering and accepting any discounts, gifts, or items of value related to Google I/O, you certify that you are able to do so in compliance with applicable laws and the internal rules of your organization. Attendees must be at least 18 years of age to attend Google I/O.

How much does Google IO cost? ›

How Much Is Google I/O? In 2022 and 2021, the event was free. Tickets for past events have ranged from $375 (for academics) to $1,150 (general admission).

What can I expect from Google IO? ›

It will have a 6.1-inch display, 6GB of RAM, 128GB of storage, and a 4,306mAh battery. Inside, it will have the same Google Tensor processor that's available on the Pixel 6 phones. However, its camera will have the same hardware as last year's Pixel 5a smartphone.

What will be announced at Google IO? ›

Pixel 7 and Pixel 7 Pro

This was the most surprising announcement for Google IO 2022, with a sneak preview of Pixel 7 and 7 Pro being showcased with an updated, stainless-steel design, and an updated look to the back cameras.

What happened to Google in 2022? ›

With that announcement, the initial vision of Android Things to support any hardware device has narrowed down to smartphone-class devices. However, a year later in 2020, Google announced that the platform would stop taking on new projects from January 2021 and would be completely shut down in January 2022.

What is Google one and do I need it? ›

Google One is a subscription plan that gives you more storage to use across Google Drive, Gmail, and Google Photos. Your Google One membership will replace your current plan, not add to it.

How do I watch Google IO 2022? ›

Where can I watch Google I/O 2022? You can stream the Google I/O keynote on Google's YouTube channel and Google I/O's website. We'll also be embedding the stream above to watch live and after the event. Other events after the keynote will be available to watch on Google I/O's website.

What language is used in machine learning? ›

Python leads the pack, with 57% of data scientists and machine learning developers using it and 33% prioritising it for development.

What is the difference between AI and machine learning? ›

Put in context, artificial intelligence refers to the general ability of computers to emulate human thought and perform tasks in real-world environments, while machine learning refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience ...

What are types of AI? ›

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness.

What do you get for winning google kickstart? ›

$15,000 for the winner, smaller prizes for runners-up. Top competitors may be contacted by Google for a chance to interview for a career at Google.

How do you earn goodies on Google? ›

How to earn a skill badge
  1. Pick your quest.
  2. Complete the challenge lab at the end of the quest to prove your skill.
  3. Earn a Google Cloud digital skill badge.
  4. Share your skill badge on your social profiles or resume.

What is Google swag? ›

Google's swag offerings for its conferences typically range from cute collectable pins to new Google hardware. This year's event definitely leans more on the side of the former, albeit with a twist.

How many days is Google io? ›

The tech giant's annual event for developers will be held virtually this year. The two-day event starts today, May 11, and will run through May 12.

Is Google IO Invite only? ›

Join us for this year's Google IO Extended event, where we'll bring together tech enthusiasts and the developer community to be a part of the Google IO 2022 experience. This is an invite-only event.

How long is Google IO? ›

(Pocket-lint) - The Google I/O 2022 keynote is now done and dusted. The two-hour-long event was filmed in front of a small audience of developers and live-streamed globally, and it was jam-packed with announcements on AI, Android, and even new Pixel hardware.

How do you pronounce Google IO? ›

I/O is pronounced "eye-oh," as in input/output.

What is IO Extended? ›

I/O Extended events help developers from around the world take part in the I/O experience. In 2018, developers hosted more than 500 I/O Extended viewing parties around the world and many more joined online. Find an Extended event. I/O Extended events span the world, connecting the global developer community.

How long is Google keynote? ›

Expect the keynote presentation to last about two hours. You can stream it on the I/O website or watch live on Google's YouTube page.

Will Google make a smart watch? ›

The first smartwatch built by Google, inside and out. A beautiful circular, domed design works smoothly with the new experience of Wear OS by Google. Get the best of Google helpfulness and live healthier with Fitbit.

Will there be a Pixel 6? ›

And yes, the Pixel 6 launched with a bigger sibling, but a larger screen and boosted 5,000mAh battery capacity aren't the only things that set the Pixel 6 Pro apart. Its 6.7-inch OLED display has QHD Plus (3120 x 1440) resolution and 120Hz refresh rate, and it comes with 12GB of RAM and up to 512GB of storage.

What was Google IO 2022 announced? ›

Google announced various new products and features at its I/O 2022 conference on May 11, 2022. Among these were the new Pixel 6a smartphone, Pixel Buds Pro earbuds, the Android 13 operating system, and an upgraded Google Wallet. Later in 2022 will come the Pixel 7 smartphone and the Pixel Watch.

What did Google announce for developers at Google I/O 2022? ›

After months of leaks, Google finally confirmed the Pixel Watch is real. Arriving this fall, the wearable features a nearly bezel-less watch face flanked by a “tactile crown.” It runs Wear OS 3 and includes deep integration with Fitbit software for its health and fitness-tracking features.

Has Google changed its format in 2022? ›

The new layout is expected to become the default option by the end of Q2 2022. Google says that there will be a prompt at some point, encouraging users to switch to the new layout. The new interface looks like it will give you easy access to other tools without having them always on the screen.

How can I improve my Google ranking 2022? ›

How to Rank Higher On Google In 2022
  1. Step #1: Improve Your On-Site SEO.
  2. Step #2: Add LSI Keywords To Your Page.
  3. Step #3: Monitor Your Technical SEO.
  4. Step #4: Match Your Content to Search Intent.
  5. Step #5: Reduce Your Bounce Rate.
  6. Step #6: Find Even Keywords to Target.
  7. Step #7: Publish Insanely High-Quality Content.
Nov 8, 2021

What are the latest Google updates? ›

Google Updates since 2010

Update Name                       | Date First Rolled Out | Confirmed by Google
Google June/July 2021 Core Update | June 2, 2021          | Yes
Google December 2020 Core Update  | December 3, 2020      | Yes
Google May 2020 Core Update       | May 4, 2020           | Yes
Google Update February 2020       | February 7, 2020      | Yes and no
(79 more rows)

What happens if I don't renew Google One? ›

You'll stop future Google One payments. You and your family members will lose access to extra member benefits and Google experts via the Google One app and website. You and your family members will lose access to your additional storage. Each person will keep their default 15 GB of storage at no charge.

Do you have to pay for Google One? ›

If you want to sign up for Google One to get more than the standard free 15GB of service, here's a look at the different storage tiers and prices: 100GB: $2 a month or $20 annually. 200GB: $3 a month or $30 annually. 2TB: $10 a month or $100 annually.
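Taking those figures at face value (plans and prices can change), a quick back-of-the-envelope comparison shows what annual billing saves:

```python
# Google One storage tiers as listed above: (label, monthly USD, annual USD).
tiers = [("100GB", 2, 20), ("200GB", 3, 30), ("2TB", 10, 100)]

# Paying monthly for a full year vs. paying the annual price up front.
savings = {label: monthly * 12 - annual for label, monthly, annual in tiers}
# savings -> {"100GB": 4, "200GB": 6, "2TB": 20}
```

At every tier listed, the annual price works out to ten months of the monthly price, i.e. two months free.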

What is the difference between Google and Google One? ›

Google Drive is a storage service. Google One is a subscription plan that gives you more storage to use across Google Drive, Gmail, and Google Photos. Plus, with Google One, you get extra benefits and can share your membership with your family.

How do I create a Google event? ›

Create an event from a Gmail message
  1. On your computer, go to Gmail.
  2. Open the message.
  3. At the top, click More > Create event. Google Calendar creates an event, copying the Gmail message title and text.
  4. You can change the event time, date, and location.
  5. When you're done, click Save.

How do I submit a Google event? ›

Steps to Add an Event to Google Search
  1. Enter all of the required information into each field.
  2. Upload your event logo.
  3. Double-check your information.
  4. Click "Submit."
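The required fields above correspond to schema.org Event structured data, which Google Search reads from a page, typically as JSON-LD. Below is a minimal sketch in Python; the event name, address, and logo URL are invented for illustration, and Google's Rich Results documentation defines the authoritative list of required properties:

```python
import json

# Hypothetical event marked up with schema.org's Event type.
event = {
    "@context": "https://schema.org",
    "@type": "Event",
    "name": "Example Developer Meetup",        # event name
    "startDate": "2022-05-11T10:00:00-07:00",  # ISO 8601 start time
    "location": {
        "@type": "Place",
        "name": "Shoreline Amphitheatre",
        "address": "Mountain View, CA",
    },
    "image": "https://example.com/logo.png",   # the "event logo" step above
}

# On a real page this JSON-LD would sit inside a
# <script type="application/ld+json"> tag in the HTML.
json_ld = json.dumps(event, indent=2)
```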

How do I search Google events? ›

How to search
  1. On your computer, open Google Calendar.
  2. On the top right, select Search .
  3. Enter your search terms.
  4. Results appear as you enter text, including ones from other Google products you use, like Gmail and Google Drive.
  5. Click on a result to see the details for that event.

Can I learn AI without coding? ›

No-code AI platforms make machine learning accessible to everyone: some are simply plug and play, and some let you train advanced models for your specific needs. These SaaS tools offer the computing power of AI giants like Google and Apple, but with no coding skills required.

Which language is best for AI? ›

Best Programming Languages for AI Development in 2022
  1. Python. Python tends to top the list of best AI programming languages, no matter how you slice it up.
  2. Java.
  3. R.
  4. C++
  5. Julia.
  6. Haskell.
  7. Prolog.
  8. LISP.
Apr 7, 2022

What should I study for artificial intelligence? ›

A computer science degree is a common choice for students who want to work in artificial intelligence. Many schools offer computer science programs with a track in AI or machine learning. This specialization allows students to take various classes in AI to help prepare them for careers in this field.

Is artificial intelligence worth studying? ›

The field of artificial intelligence has a tremendous career outlook: the Bureau of Labor Statistics predicts a 31.4 percent increase by 2030 in jobs for data scientists and mathematical science professionals, which are crucial to AI.

How do I get started with AI? ›

How to Get Started with AI
  1. Pick a topic that genuinely interests you.
  2. Find a quick solution.
  3. Improve your simple solution.
  4. Share your solution.
  5. Repeat steps 1-4 for different problems.
  6. Complete a Kaggle competition.
  7. Use machine learning professionally.
May 21, 2019

Is Alexa AI or machine learning? ›

Alexa and Siri, Amazon and Apple's digital voice assistants, are much more than a convenient tool—they are very real applications of artificial intelligence that is increasingly integral to our daily life.

How do I get swag on GitHub? ›

The swag bag includes GitHub educational materials like Git cheat sheets, GitHub Flow guides, and GitHub Flavored Markdown guides – plus motivators including Octocat stickers. Because you are a student, you must ask a teacher to make the request. To qualify, the teacher must be a GitHub Education member.

What is Google's 30 day cloud? ›

The 30 Days of Google Cloud program provides an opportunity to kickstart your career in the cloud and get hands-on practice with Google Cloud, the platform that powers apps like Google Search, Gmail, and YouTube.

Does Google have hardware? ›

Google is still best known for its search engine and Android operating system. However, the company is swiftly becoming a major hardware player too. Between smartphones, smart home, and IoT ventures, Google is building hardware in the biggest and fastest-growing market segments.


