
At the Google I/O developer conference today, the company introduced several changes designed to make it easier for Android app developers to generate revenue via subscriptions, particularly when trying to reach users in emerging markets. Most notably, the company said it will now allow developers to offer users the ability to subscribe via prepaid plans, which essentially provide access to an app and its services for a fixed amount of time the developer sets.

Users can then buy top-ups in the app when their subscription runs out and they have the funds to continue. Google said the feature makes sense in regions where pay-as-you-go cellular plans are standard. In those markets, consumers are already used to the prepaid model, so extending it to apps could help boost developers’ subscription revenues. Prepaid subscriptions could also serve other use cases, such as subscription-averse customers who are hesitant to get locked into ongoing charges and who want more control over when and how much they’re spending on their mobile apps.
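
The top-up mechanic can be sketched in plain Kotlin. This is an illustrative model of the entitlement math only, not the Play Billing Library API; the class and method names are invented:

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical model of a prepaid subscription: access lasts for a fixed
// window set by the developer, and a top-up extends it instead of auto-renewing.
data class PrepaidEntitlement(val expiresAt: Instant) {
    fun isActive(now: Instant): Boolean = now.isBefore(expiresAt)

    // Extend from the current expiry while still active; after a lapse,
    // the new window starts at the moment of purchase.
    fun topUp(amount: Duration, now: Instant): PrepaidEntitlement {
        val base = if (isActive(now)) expiresAt else now
        return PrepaidEntitlement(base.plus(amount))
    }
}
```

Note the design choice in the sketch: a lapsed entitlement restarts from the purchase time rather than the old expiry, so a user who returns weeks later isn’t charged for the gap, which matches what pay-as-you-go users expect.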

Google also announced expanded pricing options with the launch of “ultra-low” price points to reach users in emerging markets.

Last March, Google reduced the minimum price limit for products in more than 20 markets across Latin America, Europe, the Middle East, Africa, and Asia-Pacific, allowing developers to drop prices down to as low as 10 to 30 cents (USD). At the time, the company explained these “sub-dollar prices” would allow developers to reach new potential buyers by adjusting pricing to “better reflect local purchasing power and demand.”

Now, Google says developers can drop prices to as low as 5 US cents. This would allow developers to also run local sales and promotions and support various micro-transactions, like in-app tipping.

While these changes will help developers better reach Android app users in emerging markets, Google made other improvements to app subscriptions as well.

The company said it’s making it easier to sell subscriptions on Google Play by allowing developers to configure multiple base plans and special offers, reducing the overhead of managing an ever-growing number of SKUs as developers tweak how they want to sell subscriptions.

In this setup, a developer can establish multiple base plans each with its own billing period and renewal type — like monthly or annual auto-renewing plans or monthly prepaid plans. Then, for each base plan, they can create multiple special offers across the subscription lifecycle. For example, they could create an acquisition offer for a limited time free trial, an upgrade offer to move from a prepaid plan to an auto-renewing tier, or even a downgrade offer to help retain a subscriber who may be looking to cancel as they’re not using their full subscription benefits.
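
The structure described above (one subscription product, several base plans, offers per plan) can be modeled roughly like this. The types are illustrative only, not the actual Play Billing Library classes:

```kotlin
// Illustrative data model of the new Play subscription structure:
// one product carries every base plan and offer, replacing a pile of SKUs.
enum class RenewalType { AUTO_RENEWING, PREPAID }

// "phase" marks where in the subscription lifecycle the offer applies.
data class Offer(val id: String, val phase: String)

data class BasePlan(
    val id: String,
    val billingPeriod: String, // ISO-8601 period, e.g. "P1M" or "P1Y"
    val renewalType: RenewalType,
    val offers: List<Offer> = emptyList(),
)

data class SubscriptionProduct(val id: String, val basePlans: List<BasePlan>)

// One hypothetical product covering both renewal styles described above.
val premium = SubscriptionProduct(
    id = "premium",
    basePlans = listOf(
        BasePlan(
            "monthly-auto", "P1M", RenewalType.AUTO_RENEWING,
            listOf(
                Offer("trial-7d", "acquisition"),
                Offer("winback-50", "retention"),
            ),
        ),
        BasePlan("monthly-prepaid", "P1M", RenewalType.PREPAID),
    ),
)
```

The point of the shape is that everything hangs off a single product ID, so adding a new offer no longer means minting a new SKU.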

Google also offers an In-App Messaging API that can be used to remind users to update their payment information when their payment method is declined, it noted.

The company last year announced it would begin to support other payment methods, including both cash and prepaid plans. In the time since, it’s expanded its payment method library to include over 300 local payment methods in 70 countries, and added eWallet payment methods such as MerPay in Japan, KCP in Korea, and Mercado Pago in Mexico, Google said.


At Google’s I/O developer conference, the company rolled out a series of updates for Android app developers who publish to Google Play. Among these were two high-profile changes to its Google Play app marketplace, custom store listings and in-app events, which follow updates Apple made to its own App Store just last year.

Google had been offering A/B testing for product page enhancements since 2015 — a feature that allows developers to see which text and graphics would best convert users.

Apple later adopted a similar feature when, at last year’s Worldwide Developer Conference, it introduced Product Page Optimization designed to help developers to try out different app screenshots, videos, and even app icons to try to appeal to different types of users. Developers could segment a certain percentage of App Store traffic to these cohorts to see which product pages performed better before deciding which page should be their default.

Apple last year also announced a related feature called Custom Product Pages that lets developers create different product pages to highlight different app features, each with its own unique URL to be used in external marketing channels.

Today, Google is following suit and essentially launching the same thing with Custom Store Listings.

Instead of simply testing different product pages, Android app developers will be able to make up to 50 custom store listings for their apps. Each page will have its own analytics and deep links available. Notably, this is more listings than Apple’s solution offers, which is currently set at 35 per app. Google explains developers can use this feature to display different listings to users based on where they’re coming from. For example, a developer with a recipe-finding app could target ad campaigns to U.S. users based on U.S. holidays, by showcasing recipes for Thanksgiving or July 4th. But it could target users from other markets at different times with recipes related to their own cultural traditions.
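
The recipe-app example amounts to a lookup from a user’s storefront country to a listing. Purely a sketch; the listing IDs and fallback behavior here are invented, not part of the Play Console API:

```kotlin
// Hypothetical routing from storefront country to one of an app's
// custom store listings (each listing has its own deep link and analytics).
val listingByCountry = mapOf(
    "US" to "thanksgiving-recipes",
    "JP" to "oshogatsu-recipes",
)

// Fall back to the default listing for countries without a custom page.
fun listingFor(countryCode: String): String =
    listingByCountry[countryCode] ?: "default-listing"
```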

Apple last year also introduced an App Store feature, in-app events, to allow developers to promote real-time happenings going on inside their apps — like special events or even just seasonal deals.

Google Play is now rolling out its own take on this feature, as well.

With the launch of what it’s calling “LiveOps,” developers will be able to submit content for featuring on the Play Store, including major updates for their app or game, in-app events, and limited-time offers.

Google says LiveOps can drive 5% more 28-day active users to apps and deliver 4% higher revenue for those using the feature compared with those who don’t. The feature is in an invite-only beta testing phase for the time being.

While these changes were the highlights among those designed to help developers target, acquire and re-engage their users, Google also announced a few other notable Google Play updates.

The company said the Play Store would be updated to help people find the best tablet-optimized apps with new large-screen focused editorial content and a separate review and rating system for large-screen applications. Google Play will also later this year be updated to look better on tablets and foldable devices.

Image Credits: Google

For developers, Google also launched the Google Play SDK Index which lists over 100 popular SDKs and which app permissions they use, so developers can determine if they adhere to Google Play policies and help fill out their app’s privacy labels.

The company said it will soon launch a new Play Console page dedicated to deep links to put all the information and tools for deep links in one place. It also improved its Store Listing Experiments feature (aka A/B testing) to allow developers to see their results more quickly, with more transparency and control so they can better understand how long each experiment may need to run.

And beyond this, it rolled out features focused on improving app quality, including a new Developer Reporting API for accessing Android vitals metrics and issues data outside the Play Console, plus support for viewing vitals data at the country level; Google also said it’s making it possible to view vitals alongside Firebase Crashlytics. It updated the Play Console by adding revenue and revenue growth metrics to Reach and devices and overhauling its device catalog to include install data and filters for new device attributes like shared libraries. And it said it’s now easier to test apps on different form factors, including Android Auto and, soon, Wear OS.

Play App Signing was updated to use Google Cloud Key Management, and any app can now perform an app signing key rotation from the Play Console, whether in response to an incident or as a security best practice.

And finally, Google’s In-app Updates API will now let users know if there’s an update available within 15 minutes instead of up to 24 hours.


At its I/O developer conference, Google today announced a number of updates to Firebase, Google’s popular Backend-as-a-Service platform. The focus here is mostly on deeper integrations with the rest of Google’s developer tools and platforms and with the overall developer ecosystem, as well as a number of updates that will help developers better secure their applications.

One of the first major announcements is that Android Studio will now feature a new App Quality Insights window that gives developers direct access to Firebase’s Crashlytics crash data, which allows developers to see their stack traces and identify the specific lines of code that triggered a crash. “Now, developers can be in the flow as they are building features. They can also see, ‘oh, this line of code in my last release had a bunch of errors.’ They can click into that, see the Crashlytics data in terms of the severity of crashes, which devices they might have happened on, etc., so that they can really quickly address those issues and reproduce them,” explained Firebase product lead Francis Ma.

Flutter developers, too, will get better Crashlytics support. They’ll now be able to set up Crashlytics for their apps with just a few clicks and get improved crash reports, as well as the ability to log fatal errors in a Flutter app and receive crash alerts from Crashlytics.

Image Credits: Google

For web developers, Firebase is making it easier to use modern web development frameworks like Angular and Next.js and to deploy the resulting web apps. Modern web frameworks may be very powerful, but they have also introduced a lot of complexity when it comes to deploying apps. Now, developers can simply use the “firebase deploy” command and Firebase will automatically figure out which part of an application to deploy where, without having to worry about dependencies. Currently, this works for Angular and Next.js, but the team plans to add support for more frameworks in the future.

Across platforms, Firebase is also making it easier for developers to use third-party APIs by allowing them to customize Firebase extensions to use services like Stripe and Twilio. The existing pre-packaged extensions make it easier for developers to tap into third-party APIs, but as is so often the case, developers regularly hit edge cases or want to do something slightly different. “We recognize that developers may use twenty to even forty APIs in their apps — and while extensions have been working really well for developers to quickly deploy these solutions, we’ve heard from them that they would like more customizations to be able to take this baseline deployment and really make it their own,” Ma said.

Image Credits: Google

The team is also now adding third-party extensions for Snap, to allow users to log in with Snap, for example, as well as new Stream extensions to help developers implement chat in their apps and a new RevenueCat extension for managing in-app subscriptions.

On the security side, Firebase is now integrated with the new Play Integrity API, which allows developers to trust that a given Android app that is communicating with its backend hasn’t been manipulated (something that often happens with games).

For Apple developers, the Firebase team is improving its support for Apple’s Swift language. Swift support isn’t entirely new, but as Ma noted, the team has now reached a milestone where it has full coverage of Swift. “Apple developers that are Swift-only can expect the sort of intuitive, more native support in using the Firebase SDKs in Swift,” said Ma.


At its I/O developer conference, Google today announced the launch of Flutter 3, the latest version of its open-source, multiplatform UI development framework for building natively compiled applications. It’s been about four years since the company first launched a beta of Flutter 1.0. At the time, the team’s focus was mostly on helping developers build cross-platform mobile apps. Since then, it started adding web and desktop support, too, and now, with version 3, the team is closing the loop here by making Linux and macOS desktop support generally available, as well as adding support for Apple Silicon, among many other new features.

Image Credits: Google

“We’re announcing Flutter 3, which is the culmination of our journey to delivering multi-platform UI development across phone and desktop and web,” Tim Sneath, the director of product and UX for Flutter and the Dart language, told me. “This really comes all the way back from when we first launched Flutter a couple of years ago. With the Flutter 1 launch, we were fairly clear, at least in terms of a vision, even at that point, that we didn’t intend to be a mobile toolkit. We wanted to be thought of as being broader than just phones.”

With the Flutter 3 release, the platform now supports iOS, Android and web apps, as well as Windows, macOS and Linux desktop apps, all as part of Flutter’s stable release. On macOS, this includes support for Universal Binaries so apps can run natively on Intel and Apple Silicon chips, while for the Linux release, Google partnered with Ubuntu’s Canonical to “offer a highly-integrated, best-of-breed option for development.”

Despite the desktop support, most developers probably still think of Flutter as a framework for building mobile apps. But a number of developers are actively using it for building desktop apps as well, including the former Wunderlist founders who are launching their new productivity app, Superlist, into beta today as a Flutter app on the desktop.

Image Credits: Google/Superlist

On the mobile side, companies like WeChat, ByteDance, Betterment, SHEIN and BMW are now betting on Flutter, as is Google itself. Indeed, as Google announced today, over 500,000 Flutter apps have now been published, twice as many as a year ago.

As Sneath noted, a number of developers are also now using Flutter to write casual games, in part because of its built-in hardware acceleration support. Some games, like PUBG Mobile, also use Flutter for their non-game user interface. That’s something the team did not expect, but to help those developers, Google is now releasing the Flutter Casual Games Toolkit, using the open-source Flame game engine.

“We’ve released this toolkit at I/O that helps people through all the bits that are shared logic for those games,” Sneath explained. “Things like, how do I integrate with Apple Game Center or the Play Services equivalent? How do I do leaderboards or splash screens? How do I accept in-app payments for microtransactions? How do I do ads so that I can monetize? We’ve got this toolkit, which includes best practices, source code, videos, and a sample app that puts it all together. We think that’ll help developers that are interested in making games with Flutter be successful.”

Image Credits: Google

The sample game, a Flutter-themed pinball simulator, is available here.

Also new in Flutter 3 are deeper integrations with Firebase, Google’s backend platform for building mobile and web applications. That doesn’t take away from Flutter’s integrations with third-party services, including the likes of Firebase competitor AWS Amplify (which itself will happily let you build Flutter apps in its no-code Amplify Studio, too). But as the Flutter team notes, the Flutter/Firebase integration is now a fully-supported core part of Firebase and the two teams plan to evolve “Firebase support for Flutter in lockstep with Android and iOS.”

Image Credits: Google

Also new here is better support for Flutter apps in Crashlytics, Firebase’s crash reporting service, which can now track fatal crashes in real time, among other things.

In addition, the Flutter team has now also mostly completed its move to Material Design 3, Google’s in-house design language.

Image Credits: Google


It’s Google I/O keynote day! Once a year, Google kicks off its developer conference with a rapid-fire stream of announcements, finally unveiling many of the things they’ve been working on behind the scenes.

Didn’t have time to tune into the whole two-hour presentation? We get it — that’s why we’ve packaged the biggest news up in an easy-to-digest, easy-to-skim list. Let’s dive in!

Google finally makes a smart watch

“Here’s something to wrap your brain around: Google has never made its own smartwatch,” writes Brian Heater — but “all of that will change this fall.”

Details are still very light (much of what’s out there now actually leaked ahead of time), but Google has shown off the first official images of its first Pixel Watch, which is expected to launch later this fall. Find all the details here.

Pixel 7 and Pixel 7 Pro

Image Credits: Google

Last year, Google tried something new at I/O: they showed a bit of info about their new flagship Pixel phone — then the Pixel 6 — but saved most of the information for another announcement a few months later.

They’re doing the same thing this year with the announcement of the Pixel 7 and Pixel 7 Pro. Details like price and release date are still under wraps, but here’s what we do know: it’ll run Android 13, and use the next generation of Tensor chip. It borrows much of its design from the Pixel 6, including the raised “camera bar” that runs across the back. Want more? Check out the full post here.

Pixel 6a

Image Credits: Google

Google is taking much of what made the already-very-good Pixel 6, shaving down some of the specs to get the price down to $449 (from $599), and releasing it as the Pixel 6a. It’s got a smaller screen (6.1″ vs. 6.4″), less RAM and downgraded cameras, but it still has things like Google’s custom Tensor chip, the Titan M2 security chip and 5G support. Find the full breakdown here.

Pixel Buds Pro

Image Credits: Google

Pixel Buds with noise cancellation! At last! These $199 earbuds are also IPX2 sweat-resistant, and they use things like beamforming mics, mesh wind blockers and bone conduction to improve how you sound on calls. Find Brian’s overview here.

The Next Google Glass?

Like many of the other products today, Google was light on specifics — but just as they wrapped the keynote, Sundar Pichai played a quick demo reel of what appears to be an ongoing AR glasses project in the same spiritual vein as Google Glass (albeit in a much less jarring form factor). One particularly wild thing hinted at was live transcription/translation — think subtitles for real life, with a speaker’s words rendered in your view. Check out Brian’s notes here.

Pixel Tablet

Google is making Android tablets again! Eventually.

Though it was teased today, Google won’t actually ship the new “Pixel tablet” until sometime in 2023. Beyond the fact that it’s in the works, Google is saying pretty much nothing about it.

Google Wallet

Google has long offered Google Pay, an Android app where you could store digital credit cards for contactless payments. They’re now expanding the concept with a new app called Google Wallet, which will also “allow users to store things like credit cards, loyalty cards, digital IDs, transit passes, concert tickets, vaccination cards and more.” The rollout process varies a bit depending on what country you’re in; find all those details here.

Google Assistant Improvements

Image Credits: Google

Google Assistant is getting quite a bit smarter! Frederic has the full breakdown here.

More natural communication: Google Assistant will now be able to better understand when you’ve flubbed a command, or when you need a second to figure out what you’re trying to say. The example given on stage had the speaker say “Can you play that new song frommmm….”, with Google Assistant responding by saying “mmhmm?” and waiting for them to finish their thought.

Look and Talk: On Google Assistant devices with a camera built in (like the Nest Hub Max), you’ll no longer have to say “Hey Google” before asking a question — just look at the device, and it’ll use things like proximity/head direction/gaze direction to understand that you’re asking it a question.

Quick Phrases: Nest Hub Max will now let you use certain often-used commands, as picked by you, without first saying the hot word. So you can just shout “What time is it?” or “Turn off the lights” into the room and Google Assistant will act accordingly.

Google Maps Improvements

Image Credits: Google

Google Maps is picking up a few new tricks — check out Sarah’s full post here.

“Immersive” view: Google Maps is getting a new 3D exploration mode, starting in select major cities, allowing you to zoom around a 3D model of a city to get a better sense of where everything is. As the data set expands, that 3D model will grow to include the interiors of popular restaurants and locations.

Eco-friendly routing expansion: Late last year Google launched a feature that let you choose your route to optimize for vehicle efficiency, rather than just whatever’s fastest. It’ll expand that feature to Europe later this year.

Live View for third parties: Back in 2019, Google started rolling out a feature that used your phone’s camera and the buildings/landmarks around you to determine exactly where you are in the world for more accurate navigation — most commonly, to figure out which direction to walk when you’ve just started a new route. Google says it’s opening this tech up to third parties, showing off examples like helping concertgoers find their seats or helping commuters find where to park their rented e-bikes.

New languages for Google Translate

Google Translate is learning dozens of new languages, with a focus on “languages with very large but underserved populations”. Additions include Quechua, Guarani, Aymara, Sanskrit, and Tsonga. Google says that many of the languages they’re adding today would’ve been technically impossible to support even just a few years ago, enabled today only by advances in machine learning. Find all the details here.

Virtual credit cards in Chrome

Google Chrome will now be able to generate a “virtual” credit card number meant to keep your real credit card number safe. Should the virtual number ever get stolen, you just revoke it and generate a new one without the hassle of getting a whole new card. Here are all the details.

A better understanding of skin tone


Image Credits: Google

In addition to its work around Real Tone for more accurately capturing all skin tones in photos, Google is aiming to improve its understanding of skin tone within search results. “For example,” writes Aisha, “if you’re looking for ‘bridal makeup looks,’ you’ll have the option to find results that work best” for a specific skin tone.


At the Google I/O developer conference, Google and Samsung today announced Health Connect, a new initiative that will simplify the connectivity between health and fitness apps to allow users to share their data across apps.

Currently, accessing and syncing this data is relatively difficult for developers, so Health Connect will give them a series of services and APIs to make it easier.

“Health Connect lets you store and access health-related information across devices with user consent, taking out all the boilerplate code, taking care of the security issues, but also allowing you to mash up that information,” Sean McBreen, who leads developer experience for Android at Google, told me in a briefing ahead of today’s announcement. He stressed that like with all things Wear OS, Google worked closely with Samsung on this project, so going forward, apps like Google Fit, Samsung Health, MyFitnessPal, Leap Fitness, Withings and others will also start using this new API, which will give these services a new sync surface on the device.

Image Credits: Google

Essentially, Health Connect will function as the on-device clearinghouse for health and fitness data. Indeed, all of the data is on the device and encrypted to ensure privacy. “Users will have full control over their privacy settings, with granular controls to see which apps are requesting access to data at any given time,” Google explains. This also means users can easily shut off access or delete data as they see fit.

One neat feature: if multiple apps provide the same data, users can choose which one to prioritize.
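
That prioritization could work something like the following sketch. This is not the actual Health Connect API; the types and the ranking rule are invented to illustrate the idea that the user decides which provider wins:

```kotlin
// Hypothetical conflict resolution: the user ranks provider apps, and when
// several report the same data type, the highest-ranked one's reading is used.
data class StepsReading(val providerPackage: String, val steps: Long)

fun pickPrioritized(
    readings: List<StepsReading>,
    priority: List<String>, // highest priority first, as chosen by the user
): StepsReading? =
    readings.minByOrNull { reading ->
        val rank = priority.indexOf(reading.providerPackage)
        if (rank == -1) Int.MAX_VALUE else rank // unranked apps lose
    }
```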

The beta of Health Connect is now available to developers on Google Play so they can start building on top of it and testing their apps.


It’s Google I/O today and as is tradition, the company is using the event to introduce the latest releases of its Android Studio development environment. Launching today are a new beta of Android Studio Dolphin with features like View Compose animations and Wear OS emulators, as well as a preview of Android Studio Electric Eel, which is now in the early access canary channel.

For the most part, all of the interesting announcements are part of Electric Eel, which doesn’t come as a shock. The marquee feature, I think, is Live Edit, which allows developers to make code changes and immediately see the results in the Compose Preview in Android Studio and in the running app on the built-in emulator or a physical device.

Image Credits: Google

Sean McBreen, who leads developer experience for Android at Google, noted that this is something developers have been requesting for a long time, given that it will allow developers to speed up their cycle times without having to wait to test their changes. He noted that Android Studio already featured Live Literals, which allowed developers to change small individual values. Now, however, developers will be able to do things like add new functions and tweak the user interface and see the effect of those changes in real-time.

Also new in Android Studio is built-in support for Firebase’s Crashlytics, Google’s crash reporting service. Using the new App Quality Insights window, developers can now see their stack traces right in their IDE and even see which lines of code are likely to cause a crash. The idea here, of course, is to allow developers to do more of their work in a single application without having to switch contexts.

“A pain point for a developer today is that while they’re getting really good insights from Crashlytics, to see what errors are happening and the events that led up to them, in order for them to debug and reproduce some of the errors and fix them, they often need to switch to a different tool,” Firebase product lead Francis Ma explained. “This is a first big step where we’re bringing the Firebase experience into Android Studio.”

Image Credits: Google

Given that Google today announced both its first in-house smartwatch and a new tablet, it’s maybe also no surprise that Android Studio is adding additional support for large-screen devices, including tablets and foldables, as well as wearables. The idea behind Google’s “modern Android development” is, after all, that developers can learn how to write applications for one form factor and then apply that to all of the other ones. But foldables and wearables introduce their own UI challenges and developers may not always have access to them, so Android Studio now offers developers a single resizable emulator that allows them to quickly test an app on different form factors.

For Wear OS specifically, the Android team is introducing a new way to create declarative user interfaces with the beta launch of the Compose UI SDK. A couple of other updates related to Wear OS include the ability to see Wear devices in the Device Manager and to pair multiple watch emulators to a single phone. Android Studio will now also remember these pairings after being closed.

In related news, Jetpack, Google’s suite of reusable Android libraries that aims to help developers avoid (re-)writing boilerplate code, is also getting a few interesting updates, especially on the user-interface side of Jetpack Compose, the company’s toolkit for building native Android user interfaces. The updated WindowManager library now supports multi-window environments for large-screen devices, as well as the ability to adapt apps to the physical state of a foldable device. There is also a new DragAndDrop library that allows developers to accept drag-and-drop data from both inside and outside of their application. That’s obviously an increasingly common scenario on large-screen devices. Both of these libraries have now hit their 1.0 stable milestones.

And speaking of optimizing apps for large screens, Google itself is currently going through a process of updating its own apps, like Photos, Gmail, YouTube Music and YouTube, for large-screen devices. As McBreen told me, the company is working on getting all of its 50 most-used apps ready for large-screen devices by the end of the year. “The idea here is that we’ve got to role-model to the industry what we want to do, but we also have to make sure our guidance makes sense, and so we’re working through those apps,” he said.


Google today announced the launch of AlloyDB, a new fully-managed PostgreSQL-compatible database service that the company claims to be twice as fast for transactional workloads as AWS’s comparable Aurora PostgreSQL (and four times faster than standard PostgreSQL for the same workloads and up to 100 times faster for analytical queries).

If you’re deep into the Google Cloud ecosystem, then a fully managed PostgreSQL database service may sound familiar. The company, after all, already offers Cloud SQL for PostgreSQL, and Spanner, Google Cloud’s fully managed relational database service, also offers a PostgreSQL interface. But those services merely provide a PostgreSQL-compatible interface so that developers with those skills can use them. AlloyDB has the standard PostgreSQL database at its core, though the team did modify the kernel to let it take full advantage of Google’s infrastructure, all while allowing the team to stay up to date with new PostgreSQL versions as they launch.

Image Credits: Google

Andi Gutmans, who joined Google as its GM and VP of Engineering for its database products in 2020 after a long stint at AWS, told me that one of the reasons the company is launching this new product is that while Google has done well in helping enterprise customers move their MySQL and PostgreSQL servers to the cloud with the help of services like CloudSQL, the company didn’t necessarily have the right offerings for those customers who wanted to move their legacy databases (Gutmans didn’t explicitly say so, but I think you can safely insert ‘Oracle’ here) to an open-source service.

“There are different reasons for that,” he told me. “First, they are actually using more than one cloud provider, so they want to have the flexibility to run everywhere. There are a lot of unfriendly licensing gimmicks, traditionally. Customers really, really hate that and, I would say, whereas probably two to three years ago, customers were just complaining about it, what I notice now is customers are really willing to invest resources to just get off these legacy databases. They are sick of being strapped and locked in.”

Add to that Postgres’ rise to becoming somewhat of a de facto standard for relational open-source databases (and MySQL’s decline) and it becomes clear why Google decided that it wanted to be able to offer a dedicated high-performance PostgreSQL service.

Image Credits: Google

Gutmans also noted that a lot of Google’s customers now want to use their relational databases for analytics use cases, so the team spent quite a lot of effort on making Postgres perform better for these users. Given Gutmans’ background at AWS, where he was the engineering owner for a number of AWS analytics services, that’s probably no surprise.

“When I joined AWS, it was an opportunity to stay in the developer space but really work on databases,” he explained.  “That’s when I worked on things like graph databases and [Amazon] ElastiCache and, of course, got the opportunity to see how important and critical data is to customers. […] That kind of audience was really a developer audience primarily, because that’s developers using databases to build their apps. Then I went into the analytics space at AWS, and I kind of discovered the other side of it. On one hand, the folks I was talking to were not necessarily developers anymore — a lot of them were on the business side or analysts — but I also then saw that these worlds are really converging.” These users wanted to get real-time insights from their data, run fraud detection algorithms over it or do real-time personalization or inventory management at scale.

Image Credits: Google

On the technical side, the AlloyDB team built on top of Google’s existing infrastructure, which disaggregates compute and storage. That’s the same infrastructure layer that runs Spanner, BigQuery and essentially all of Google’s services. This, Gutmans argued, already gives the service a leg up over its competition, in addition to the fact that AlloyDB specifically focuses on PostgreSQL and nothing else. “You don’t always get to optimize as much when you have to support more than one [database engine and query language]. We decided that what enterprises are asking us for [is] Postgres for these legacy database migrations, so let’s just do the best in Postgres.”

The changes the team made to the Postgres kernel, for example, now allow the system to scale linearly to over 64 virtual cores. On the analytical side, the team built a custom machine learning-based caching service that learns a customer’s access patterns and then converts Postgres’ row format into an in-memory columnar format that can be analyzed significantly faster.
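AlloyDB’s actual row-to-columnar conversion is proprietary, but the general technique it describes is easy to illustrate. The sketch below (all names are hypothetical, not AlloyDB’s API) pivots row-oriented tuples into per-column lists, so an analytical scan such as a sum touches one contiguous column instead of every field of every row:

```python
# Illustrative sketch only -- not AlloyDB's implementation. Shows the
# general idea of pivoting a row-oriented table into an in-memory
# columnar layout for faster analytical scans.

def rows_to_columns(rows, column_names):
    """Pivot row-oriented tuples into a dict of per-column lists."""
    columns = {name: [] for name in column_names}
    for row in rows:
        for name, value in zip(column_names, row):
            columns[name].append(value)
    return columns

rows = [
    (1, "widget", 9.99),
    (2, "gadget", 24.50),
    (3, "widget", 9.99),
]
cols = rows_to_columns(rows, ["id", "product", "price"])

# An aggregate now scans a single contiguous list rather than
# dereferencing the price field out of every row.
total = sum(cols["price"])
```

A real columnar engine would add compression and vectorized execution on top of this layout; the payoff is the same locality benefit shown here.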


At Google’s I/O developer conference, the company introduced a new tool that, later this year, will allow users more control and visibility over how their ads are personalized across Google’s apps and sites, including Google Search, YouTube, and the Discover feed in the Google app.

From a new three-dot menu that will appear on all the ads across the different sites, users will be able to engage with the ad in a number of ways. They’ll be able to like it or share it, block it or report it, see who’s paid for it, and find out why they were targeted with it.

And if users don’t want to see ads of that kind, they can use embedded tools from this menu or visit the new My Ad Center hub to inform Google of that preference. To get to the hub, users just have to click the menu option that says “customize more of the ads you see” to be directed to the new experience.

Image Credits: Google

Within the new My Ad Center hub, users will be able to learn more about how ads are personalized and gain control over how their data is used, says Google. It’s specifically meant to address ads appearing on Google’s owned and operated sites, like Search, YouTube, and Discover, but doesn’t extend to the Google Display Network.

Image Credits: Google

From the hub’s home screen, users will be able to turn on or off various categories of ads by clicking plus and minus buttons across a variety of categories — like fitness, vacation rentals, skincare, and many others. For example, if you wanted to see fewer beauty ads, you could just click to remove them from your lineup.

You can also browse a screen featuring brands you like and then click to either add or remove them from your personalized ad round-up.

Image Credits: Google

Another screen lets users limit ads on more sensitive topics, like ads about alcohol, gambling, and, as of April’s expansion, dating, pregnancy, parenting, and weight loss. These are the types of ads that could be welcomed by some users but could feel harmful to others.

Image Credits: Google

For instance, if someone was struggling to conceive, they may not want to see any ads related to pregnancy or parenting. Previously, Google allowed users to adjust those ad preferences in the Ads Settings section on their Google Account dashboard. But now these toggles are consolidated into the new My Ad Center tool.

Most notably, the new My Ad Center hub includes a big button at the top of the screen where users can choose to turn off personalized ads altogether.

But Google believes most users won’t take that more extreme step.

“We see personalized ads as valuable and useful — just like personalized movie recommendations, personalized news recommendations, personalized commerce recommendations,” says Google’s Director of Ads Privacy and Trust, David Temkin.

He adds that this feature, for the first time, gives users the ability to control the content of the ads they see beyond just sensitive categories, and makes that process easy to navigate.

The tool, of course, follows a larger set of changes impacting the ads industry. Google earlier this year introduced the idea of Topics, a way for the browser to learn a user’s interests as they move around the web. The system came about after complaints from EU antitrust regulators over Google’s plan to deprecate cookies using a different method that they said would entrench Google’s market power. With Topics, Google assigns the sites a user visits to one of 300 topics. This Topics-based system began testing in March, alongside other related privacy tools.
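The mechanics of Topics can be sketched in a few lines. This is a deliberately simplified model, not Chrome’s implementation: the site-to-topic map and function names here are invented stand-ins for the real taxonomy the browser keeps locally, with only coarse topics ever surfaced to ad callers:

```python
# Simplified model of the Topics idea (not Chrome's implementation):
# the browser locally maps visited sites into a small fixed taxonomy
# and exposes only the coarse topic labels, never the sites themselves.

SITE_TOPICS = {  # hypothetical stand-in for the ~300-entry taxonomy
    "running-shoes.example": "Fitness",
    "flight-deals.example": "Travel",
}

def observed_topics(visited_sites):
    """Return the coarse, de-duplicated topics inferred from browsing."""
    return sorted({SITE_TOPICS[s] for s in visited_sites if s in SITE_TOPICS})

# Unrecognized sites contribute nothing, so the output leaks no URLs.
topics = observed_topics(["running-shoes.example", "unknown.example"])
```

The design choice worth noting is the direction of data flow: interest inference stays on the device, and only the bucketed label crosses the boundary to advertisers.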

As a part of those trials, Google had said it would offer tools where users could remove interests assigned to them by this Topics-based surveillance of their browsing activities. This new My Ad Center tool combines Google’s existing tools with the ability to customize the types of ads you’re shown more specifically.

The My Ad Center hub is still in development, so the preview offered at Google I/O today could change between now and when the product ships to the public later this year.


At its I/O developer conference, Google today launched Google Wallet, a new Android and Wear OS app that will allow users to store things like credit cards, loyalty cards, digital IDs, transit passes, concert tickets, vaccination cards and more.

That’s pretty straightforward, but from here on out, it gets a bit confusing. Google, after all, has long offered the Google Pay app (and yes — a Google Wallet app, too), where you could store your credit cards for online and contactless payments. Back in 2020, Google made some major changes to Google Pay to refocus it more on tracking your spending and sending and receiving money between friends and family members. At that point, Google even wanted to launch its own bank account, in partnership with financial institutions like Citi, that users would manage in Google Pay. That project, dubbed Plex, never saw the light of day and was quickly shelved after the executive behind the project left Google barely six months after the announcement.

Image Credits: Google

Currently, Google Pay is available in 42 markets, Google says. In 39 of those markets, Google Pay is still primarily a wallet, so those users will simply see the Google Pay app update to the new Google Wallet app. In the U.S. and Singapore, however, Google Pay will remain the payments-focused app, while the Wallet app will exist in parallel to store your digital cards. Meanwhile, in India, Google says that “people will continue to use their Google Pay app they are familiar with today.”

Image Credits: Google

“The Google Pay app will be a companion app to the Wallet,” said Arnold Goldberg, the VP and GM of Payments at Google, who joined the company earlier this year after a long stint at PayPal. “Think of [the Google Pay app] as this higher value app that will be a place for you to make payments and manage money, whereas the wallet will really be this container for you to store your payment assets and your non-payment assets.”

Goldberg noted that Google decided to go this route because of the rapid digitization we’ve been seeing during the last two years of the pandemic. “We talk about ten years of change in two years from just a behavior perspective and people almost demanding now digitization versus it being a nice-to-have pre-COVID,” he said. “It’s clarified our focus on what we need to do, as a payments organization — what we need to do as a company — to reimagine not just what we’re doing from a payments perspective online and in-store, but also thinking about what we can enable people to do with their digital wallets.”


Google today announced that its Chrome browser will now offer users the ability to use a virtual credit card number in online payment forms on the web. These virtual card numbers allow you to keep your ‘real’ credit card number safe when you buy something online, since they can be easily revoked if a merchant’s systems get hacked. A number of credit card issuers already offer virtual credit card numbers, but they are probably far less mainstream than they should be.
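At its core, a virtual card is a revocable alias that the card network maps back to the real account number. The sketch below is purely conceptual (the class, method names, and alias format are invented for illustration, nothing like Google’s or any issuer’s system), but it shows why revoking the alias leaves the real card untouched:

```python
# Conceptual sketch only -- not Google's or any card issuer's system.
# Models a virtual card as a revocable alias for the real card number:
# revoking the alias kills charges against it without touching the
# underlying card.

import secrets

class VirtualCardVault:
    def __init__(self):
        self._aliases = {}  # virtual number -> real number

    def issue(self, real_number):
        """Mint a fresh random alias for a real card number."""
        virtual = "9" + "".join(secrets.choice("0123456789") for _ in range(15))
        self._aliases[virtual] = real_number
        return virtual

    def charge(self, virtual_number):
        """Resolve an alias to the real card; returns None if revoked."""
        return self._aliases.get(virtual_number)

    def revoke(self, virtual_number):
        """Invalidate one alias, e.g. after a merchant breach."""
        self._aliases.pop(virtual_number, None)

vault = VirtualCardVault()
v = vault.issue("4111111111111111")
charge_before = vault.charge(v)   # resolves to the real card
vault.revoke(v)                   # merchant got breached
charge_after = vault.charge(v)    # alias is now dead
```

Real network tokenization adds per-merchant binding and cryptographic validation, but the security property is the same: the breached credential is the disposable one.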

Image Credits: Google

Google says these virtual cards will roll out in the U.S. later this summer. Google is working both with card issuers like Capital One, the launch partner for this feature, and with the major networks: Visa and American Express will be supported at launch, with Mastercard support coming later this year. Having support from the networks is a big deal here, because getting every individual card issuer on board would be a difficult task.

The new feature will be available on Chrome on desktop and Android first, with iOS support rolling out later.

“This is a landmark step in bringing the security of virtual cards to as many consumers as possible,” said Arnold Goldberg, the Vice President and General Manager of Payments at Google. “Shoppers using Chrome on desktop and Android can enjoy a fast checkout experience when shopping online while having the peace of mind knowing that their payment information is protected.”

From the user perspective, this new autofill option will simply enter the virtual card’s details for you, including the CVV that you can never remember for your physical cards, and then you can manage the virtual cards and see your transactions at pay.google.com. While these virtual cards are typically used for one-time purchases, you will be able to use them for subscriptions, too.

Since this is Google, some users will inevitably worry about what the company might do with this additional data about their purchase habits, but Google says it will not use any of this information for ad targeting.


Among the privacy and security-related updates announced today at Google’s I/O conference, the company says it’s bringing phishing protection to its suite of productivity apps, including Docs, Sheets, and Slides. It will also newly alert users to other possible security issues with their accounts directly on their account profile and offer a new tool that makes it easier to request the removal of your personal information from Google Search.

The company has already developed technology to protect users against phishing scams elsewhere across its products and services, including in Gmail and Chrome. Those protections have detected and blocked billions of threats to date, says Google, which has helped to further strengthen its A.I.-powered protections. That’s why it’s now able to extend this protection to other apps that are often used in the workplace.

Image Credits: Google

Soon, if you’re working in a document where Google spots a suspicious link, it will alert you to the issue and take you back to safety, much as it does on the web. The addition should help increase user safety amid a growing number of phishing scams, which the company notes are now responsible for over 90% of recent cyberattacks. (The company pre-announced this feature ahead of I/O in April.)
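Google’s actual detection relies on machine learning and its Safe Browsing infrastructure, but the basic shape of a link check is simple to sketch. Everything here (the deny-list, the hostnames, the function name) is a toy assumption for illustration:

```python
# Toy illustration -- nothing like Google's ML-based detection. Shows
# the basic shape of a Safe Browsing-style lookup: extract link hosts
# from document text and flag any that appear on a deny-list.

import re

SUSPICIOUS_HOSTS = {"evil.example", "phish.test"}  # hypothetical deny-list

def find_suspicious_links(text):
    """Return hosts of links in `text` that match the deny-list."""
    hosts = re.findall(r"https?://([^/\s]+)", text)
    return [host for host in hosts if host in SUSPICIOUS_HOSTS]

doc = "See https://docs.example/report and http://evil.example/login"
flagged = find_suspicious_links(doc)
```

In practice the lookup runs against hashed URL prefixes rather than a plain set, and unknown-but-suspicious pages are scored by a classifier, but the flag-and-warn flow Docs surfaces to the user is the same.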

Along with this release, Google Apps users will also be warned about other security issues right on their profiles.

“We were the first consumer tech company to offer two-step verification over 10 years ago. And last year, we were the first to turn it on by default…We don’t ever want people to worry about the safety of their accounts, so at I/O we’re also launching a new alert on the profile picture across all Google Apps, letting users know that if there’s a security issue that needs their attention,” said Guemmy Kim, Director, Account Security at Google.

The company at I/O announced it enrolled an additional 150 million accounts in two-step verification in the last year alone.

Image Credits: Google

When there’s an issue, a yellow alert will pop up on top of the account profile picture. When clicked, users will be taken to a page with a set of recommended actions to stay safe online. This isn’t necessarily new functionality in terms of the protections offered to users, but it highlights potential risks in a more obvious way that users may be less inclined to ignore.

Google also introduced Protected Computing, a toolkit of technologies designed to minimize users’ data footprint, de-identify data, and restrict access to data. The feature powers Smart Reply in Messages by Google and Live Translation on Pixel.

Another new feature is also an iteration related to an existing protection.

In April, Google announced it would allow users to request the removal of their personal contact information from Google Search, including a phone number, email address, or physical address. The change followed the E.U.’s General Data Protection Regulation, which went into effect in 2018 and included a section giving individuals the right to have information about themselves removed from search engines, also known as the “right to be forgotten.”

Previously, this process involved filling out and signing a form.

But now, Google says it will roll out a new tool to streamline the request process.

When it launches, if you come across Google Search results that contain your phone number, home address, or email, you’ll be able to quickly request the removal from Google Search right where you found them.

Image Credits: Google

Instead of filling out a form, you can use Google’s user interface to click on the type of result you want removed and submit the request directly to Google. You’ll also be able to track your requests in a single place to see which ones you’ve submitted, which are pending and which have been approved.

Google says this feature will be available in the coming months in the Google app and will be accessible in individual Google search results on the web.
