Steve Thomas - IT Consultant

Spotify is today opening up access to Podcast Subscriptions to all podcast creators in the U.S., after first launching the service for testing with a smaller number of creators back in April. Through Spotify’s podcast creation tool Anchor, podcasters of all sizes will now be able to mark select episodes as subscriber-only content, then publish them to Spotify and other platforms. Since launch, over 100 podcasts have adopted subscriptions, Spotify says. Based on early feedback from these creators, the company is now making a couple of key changes to both pricing and functionality as the service becomes more broadly available.

Before, creators could choose one of three price points: $2.99, $4.99 or $7.99 per month, whichever made the most sense for their audience.

But the company learned that creators wanted even more flexibility in pricing, which is why it’s now expanding the number of price points to 20 options, starting as low as $0.49 and then increasing all the way up to $150.

Image Credits: Spotify

 

Spotify explained that its research found creators wanted some sense of where to start with pricing, rather than a completely open-ended system. That’s why creators don’t manually enter a price today. Going forward, Spotify will show the three price points that tested well — $0.99, $4.99 and $9.99 — before listing the other 17 options. Of those three, the company told us $4.99 was the best performing.

In addition to the ability to set pricing and gain access to a private RSS feed that can be used by listeners who prefer a different podcast app, Spotify will now offer podcast creators the ability to download a list of contact addresses for their subscribers. This allows them to further engage with their subscriber base and offer more benefits, the company notes. It could also be a selling point for creators who would otherwise not get on board with a paid subscription offering like this, if it meant losing a more direct relationship with their customers.

Image Credits: Spotify

 

Spotify is not the only service offering paid podcasts. Apple recently announced its own podcast subscription platform. But Spotify’s is currently the more affordable of the two. Apple will take a 30% cut from podcast revenue in year one, dropping to 15% in year two — similar to other subscription apps. Spotify, meanwhile, is keeping its program free for the next two years, meaning that creators keep 100% of revenues until 2023. After that, Spotify plans to take just a 5% cut of subscription revenues.

With this first step into a marketplace model, it’s notable to see Spotify — a staunch Apple critic in the antitrust fight — taking such a small percentage of creator revenues. Spotify has argued for years that Apple’s cut of Spotify’s own subscription business is an anticompetitive practice, especially since Apple is a business rival via its subscription-based Apple Music service, and now, its podcast subscriptions, too.

Today, Spotify hosts a number of subscription-based podcasts, including bigger names like NPR (which is on Apple’s paid podcasts service, too), as well as independent creators like Betches U Up?, Cultivating H.E.R. Space, and Mindful in Minutes. Creators who choose to work with Spotify aren’t locked in — they can share private RSS feeds with their customers and publish to other platforms, like Apple Podcasts.

The news of Spotify’s broader launch follows a growing chorus of complaints from podcasters that Apple’s own subscriptions service is off to a rough start. A report from The Verge documented creators’ complaints about bugs, confusing user interfaces, interoperability issues, and more. In the meantime, Spotify claims its waitlist for creators interested in its podcast subscriptions had “thousands” of sign-ups.

The company says it will expand access to international customers soon. Starting on September 15, international listeners will gain access to subscriber-only content. And shortly after, creators will gain access to Podcast Subscriptions, too.

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV. Currently, the average American watches 3.7 hours of live TV per day, but now spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This Week in Apps offers a way to keep up with this fast-moving industry in one place with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and suggestions about new apps and games to try, too.

Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters

Top Stories

OnlyFans to ban sexually explicit content

OnlyFans logo displayed on a phone screen and a website

(Photo Illustration by Jakub Porzycki/NurPhoto via Getty Images)

Creator platform OnlyFans is getting out of the porn business. The company announced this week it will begin to prohibit any “sexually explicit” content starting on October 1, 2021 — a decision it claimed would ensure the long-term sustainability of the platform. The news angered a number of impacted creators who weren’t notified ahead of time and who’ve come to rely on OnlyFans as their main source of income.

However, word is that OnlyFans was struggling to find outside investors, despite its sizable user base, due to the adult content it hosts. Some VC firms are prohibited from investing in adult content businesses, while others may be concerned over other matters — like how NSFW content could have limited interest from advertisers and brand partners. They may have also worried about OnlyFans’ ability to successfully restrict minors from using the app, in light of increased regulation that appears to be coming for online businesses. Plus, porn companies face a number of other issues. They have to continually ensure they’re not hosting illegal content like child sex abuse material, revenge porn or content from sex trafficking victims — the latter of which has led to lawsuits at other large porn companies.

The news followed a big marketing push for OnlyFans’ porn-free (SFW) app, OFTV, which circulated alongside reports that the company was looking to raise funds at a $1 billion+ valuation. OnlyFans may not have technically needed the funding to operate its current business — it handled more than $2 billion in sales in 2020 and keeps 20%. Rather, the company may have seen there’s more opportunity to cater to the “SFW” creator community, now that it has big names like Bella Thorne, Cardi B, Tyga, Tyler Posey, Blac Chyna, Bhad Bhabie and others on board.

U.S. lawmakers demand info on TikTok’s plans for biometric data collection

The TikTok logo is seen on an iPhone 11 Pro max

The TikTok logo is seen on an iPhone 11 Pro max. Image Credits: Nur Photo/Getty Images

U.S. lawmakers are challenging TikTok on its plans to collect biometric data from its users. TechCrunch first reported on TikTok’s updated privacy policy in June, in which the company gave itself permission to collect biometric data in the U.S., including users’ “faceprints and voiceprints.” When reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the event such data collection practices began.

Earlier this month, Senators Amy Klobuchar (D-MN) and John Thune (R-SD) sent a letter to TikTok CEO Shou Zi Chew, which said they were “alarmed” by the change, and demanded to know what information TikTok will be collecting and what it plans to do with the data. This wouldn’t be the first time TikTok got in trouble for excessive data collection. Earlier this year, the company paid out $92 million to settle a class-action lawsuit that claimed TikTok had unlawfully collected users’ biometric data and shared it with third parties.

Weekly News

Platforms: Apple

Image Credits: Apple

  • ⭐ Apple told developers that some of the features it announced as coming in iOS 15 won’t be available at launch. This includes one of the highlights of the new OS, SharePlay, a feature that lets people share music, videos and their screen over FaceTime calls. Other features that will come in later releases include Wallet’s support for ID cards, the App Privacy report and others that have yet to make it to beta releases.
  • Apple walked back its controversial Safari changes with the iOS 15 beta 6 update. Apple’s original redesign had shown the address bar at the bottom of the screen, floating atop the page’s content. Now the tab bar will appear below the page’s content, offering access to its usual set of buttons as when it was at the top. Users can also turn off the bottom tab bar now and revert to the old, Single Tab option that puts the address bar back at the top as before.
  • In response to criticism over its new CSAM detection technology, Apple said the version of NeuralHash that was reverse-engineered by a developer, Asuhariet Ygvar, was a generic version, and not the complete version that will roll out later this year.
  • The Verge dug through over 800 documents from the Apple-Epic trial to find the best emails, which included dirt on a number of other companies like Netflix, Hulu, Sony, Google, Nintendo, Valve, Microsoft, Amazon and more. These offered details on things like Netflix’s secret arrangement to pay only 15% of revenue, how Microsoft also quietly offers a way for some companies to bypass its full cut, how Apple initially saw the Amazon Appstore as a threat and more.

Platforms: Google

  • A beta version of the Android Accessibility Suite app (12.0.0) which rolled out with the fourth Android beta release added something called “Camera Switches” to Switch Access, a toolset that lets you interact with your device without using the touchscreen. Camera Switches allows users to navigate their phone and use its features by making face gestures, like a smile, open mouth, raised eyebrows and more.
  • Google announced its Pixel 5a with 5G, the latest A-series Pixel phone, will arrive on August 27, offering IP67 water resistance, a long-lasting Adaptive Battery, Pixel’s dual-camera system and more, for $449. The phone makes Google’s default Android experience available at a lower price point than the soon-to-arrive Pixel 6.
  • An unredacted complaint from the Apple-Epic trial revealed that Google had quietly paid developers hundreds of millions of dollars via a program known as “Project Hug,” (later “Apps and Games Velocity Program”) to keep their games on the Play Store. Epic alleges Google launched the program to keep developers from following its lead by moving their games outside the store.

Augmented Reality

  • Snap on Thursday announced it hired its first VP of Platform Partnerships to lead AR, Konstantinos Papamiltiadis (“KP”). The new exec will lead Snap’s efforts to onboard partners, including individual AR creators building via Lens Studio as well as large companies that incorporate Snapchat’s camera and AR technology (Camera Kit) into their apps. KP will join in September, and report to Ben Schwerin, SVP of Content and Partnerships.

Fintech

  • Crypto exchange Coinbase will enter the Japanese market through a new partnership with Japanese financial giant Mitsubishi UFJ Financial Group (MUFG). The company said it plans to launch other localized versions of its existing global services in the future.

Social

Image Credits: Facebook

  • Facebook launched a “test” of Facebook Reels in the U.S. on iOS and Android. The new feature brings the Reels experience to Facebook, allowing users to create and share short-form video content directly within the News Feed or within Facebook Groups. Instagram Reels creators can also now opt in to have their Reels featured on users’ News Feeds. The company is heavily investing in its battle with TikTok, even pledging that some portion of its $1 billion creator fund will go toward Facebook Reels.
  • Twitter’s redesign of its website and app was met with backlash from users and accessibility experts alike. The company’s changes add more visual contrast between various elements, which may help those with low vision. But for others, the added contrast is causing eye strain and headaches. Experts believe accessibility isn’t a one-size-fits-all situation, and that Twitter should have introduced tools allowing people to adjust the settings to their own needs.
  • The pro-Trump Twitter alternative Gettr’s lack of moderation has allowed users to share child exploitation images, according to research from the Stanford Internet Observatory’s Cyber Policy Center.
  • Pinterest rolled out a new set of more inclusive search filters that allow people to find styles for different types of hair textures — like coily, curly, wavy, straight, as well as shaved or bald and protective styles. 

Photos

  • Photoshop for iPad gained new image correction tools, including the Healing Brush and Magic Wand, and added support for connecting an iPad to external monitors via HDMI or USB-C. The company also launched a Photoshop Beta program on the desktop.

Messaging

  • WhatsApp is being adopted by the Taliban to spread its message across Afghanistan, despite being on Facebook’s list of banned organizations. The company says it’s proactively removing Taliban content — but that may be difficult to do since WhatsApp’s E2E encryption means it can’t read people’s texts. This week, Facebook shut down a Taliban helpline in Kabul, which allowed civilians to report violence and looting, but some critics said this wasn’t actually helping local Afghans, as the group was now in effect governing the region.
  • WhatsApp is also testing a new feature that will show a large preview when sharing links, which some suspect may launch around the time when the app adds the ability to have the same account running on multiple devices.

Streaming & Entertainment

  • Netflix announced it’s adding spatial audio support on iPhone and iPad on iOS 14, joining other streamers like HBO Max, Disney+ and Peacock that have already pledged to support the new technology. The feature will be available to toggle on and off in the Control Center, when it arrives.
  • Blockchain-powered streaming music service Audius partnered with TikTok to allow artists to upload their songs using TikTok’s new SoundKit in just one click.
  • YouTube’s mobile app added new functionality that allows users to browse a video’s chapters, and jump into the chapter they want directly from the search page.
  • Spotify’s Anchor app now allows users in global markets to record “Music + Talk” podcasts, where users can combine spoken word recordings with any track from Spotify’s library of 70 million songs for a radio DJ-like experience.
  • Podcasters are complaining that Apple’s revamped Podcasts platform is not working well, reports The Verge. Podcasts Connect has been buggy, and sports a confusing interface that has led to serious user errors (like entire shows being archived). And listeners have complained about syncing problems and podcasts they already heard flooding their libraries.

Dating

  • Tinder announced a new feature that will allow users to voluntarily verify their identity on the platform, which will allow the company to cross-reference sex offender registry data. Previously, Tinder would only check this database when a user signed up for a paid subscription with a credit card.

Gaming

Image Source: The Pokémon Company

  • Pokémon Unite will come to iOS and Android on September 22, The Pokémon Company announced during a livestream this week. The strategic battle game first launched on Nintendo Switch in late July.
  • Developer Konami announced a new game, Castlevania: Grimoire of Souls, which will come exclusively to Apple Arcade. The game is described as a “full-fledged side-scrolling action game,” featuring a roster of iconic characters from the classic game series. The company last year released another version of Castlevania on the App Store and Google Play.
  • Dragon Ball Z: Dokkan Battle has now surpassed $3 billion in player spending since its 2015 debut, reported Sensor Tower. The game from Bandai Namco took 20 months to reach the figure after hitting the $2 billion milestone in 2019. The new landmark sees the game joining other top-grossers, including Clash Royale, Lineage M and others.
  • Sensor Tower’s mobile gaming advertising report revealed data on the top ad networks in the mobile gaming market and their market share. It also found puzzle games were among the top advertisers on gaming-focused networks like Chartboost, Unity, IronSource and Vungle. On less game-focused networks, the top titles were mid-core games like Call of Duty: Mobile and Top War.

Image Credits: Sensor Tower

Health & Fitness

  • Apple is reportedly scaling back HealthHabit, an internal app for Apple employees that allowed them to track fitness goals, talk to clinicians and coaches at AC Wellness (a doctors’ group Apple works with) and manage hypertension. According to Insider, 50 employees had been tasked to work on the project.
  • Samsung launched a new product for Galaxy smartphones in partnership with healthcare nonprofit The Commons Project that allows U.S. users to save a verifiable copy of their vaccination card in the Samsung Pay digital wallet.

Image Credits: Samsung

Adtech

Government & Policy

  • China cited 43 apps, including Tencent’s WeChat and an e-reader from Alibaba, for illegally transferring user data. The regulator said the apps had transferred users’ location data and contact lists and harassed them with pop-up windows. The apps have until August 25 to make changes before being punished.

Security & Privacy

  • A VICE report reveals a fascinating story about a jailbreaking community member who had served as a double agent by spying for Apple’s security team. Andrey Shumeyko, whose online handles included JVHResearch and YRH04E, would advertise leaked apps, manuals and stolen devices on Twitter and Discord. He would then tell Apple things like which Apple employees were leaking confidential info, which reporters would talk to leakers, who sold stolen iPhone prototypes and more. Shumeyko decided to share his story because he felt Apple took advantage of him and didn’t compensate him for the work.

Funding and M&A

💰 South Korea’s GS Retail Co. Ltd will buy Delivery Hero’s food delivery app Yogiyo in a deal valued at 800 billion won ($685 million USD). Yogiyo is the second-largest food delivery app in South Korea, with a 25% market share.

💰 Gaming platform Roblox acquired a Discord rival, Guilded, which allows users to have text and voice conversations, organize communities around events and calendars and more. Deal terms were not disclosed. Guilded raised $10.2 million in venture funding. Roblox’s stock fell by 7% after the company reported earnings this week, after failing to meet Wall Street expectations.

💰 Travel app Hopper raised $175 million in a Series G round of funding led by GPI Capital, valuing the business at over $3.5 billion. The company raised a similar amount just last year, but is now benefiting from renewed growth in travel following COVID-19 vaccinations and lifting restrictions.

💰 Indian quiz app maker Zupee raised $30 million in a Series B round of funding led by Silicon Valley-based WestCap Group and Tomales Bay Capital. The round values the company at $500 million, up 5x from last year.

💰 Danggeun Market, the publisher of South Korea’s hyperlocal community app Karrot, raised $162 million in a Series D round of funding led by DST Global. The round values the business at $2.7 billion and will be used to help the company launch its own payments platform, Karrot Pay.

💰 Bangalore-based fintech app Smallcase raised $40 million in Series C funding round led by Faering Capital and Premji Invest, with participation from existing investors, as well as Amazon. The Robinhood-like app has over 3 million users who are transacting about $2.5 billion per year.

💰 Social listening app Earbuds raised $3 million in Series A funding led by Ecliptic Capital. Founded by NFL star Jason Fox, the app lets anyone share their favorite playlists, livestream music like a DJ or comment on others’ music picks.

💰 U.S. neobank app One raised $40 million in Series B funding led by Progressive Investment Company (the insurance giant’s investment arm), bringing its total raise to date to $66 million. The app offers all-in-one banking services and budgeting tools aimed at middle-income households who manage their finances on a weekly basis.

Public Markets

📈 Indian travel booking app ixigo is looking to raise Rs 1,600 crore in its initial public offering, The Economic Times reported this week.

📉 Trading app Robinhood disappointed in its first quarterly earnings as a publicly traded company, when it posted a net loss of $502 million, or $2.16 per share, larger than Wall Street forecasts. This overshadowed its beat on revenue ($565 million versus $521.8 million expected) and its more than doubling of MAUs to 21.3 million in Q2.  Also of note, the company said dogecoin made up 62% of its crypto revenue in Q2.

Downloads

Polycam (update)

Image Credits: Polycam

3D scanning software maker Polycam launched a new 3D capture tool, Photo Mode, that allows iPhone and iPad users to capture professional-quality 3D models with just an iPhone. While the app’s scanner previously required the lidar sensor built into newer devices like the iPhone 12 Pro and iPad Pro models, the new Photo Mode feature uses just an iPhone’s camera. The resulting 3D assets are ready to use in a variety of applications, including 3D art, gaming, AR/VR and e-commerce. Data export is available in over a dozen file formats, including .obj, .gltf, .usdz and others. The app is a free download on the App Store, with in-app purchases available.

Jiobit (update)

Jiobit, the tracking dongle acquired by family safety and communication app Life360, this week partnered with emergency response service Noonlight to offer Jiobit Protect, a premium add-on that offers Jiobit users access to an SOS Mode and Alert Button that work with the Jiobit mobile app. SOS Mode can be triggered by a child’s caregiver when they detect — through notifications from the Jiobit app — that a loved one may be in danger. They can then reach Noonlight’s dispatcher who can facilitate a call to 911 and provide the exact location of the person wearing the Jiobit device, as well as share other details, like allergies or special needs, for example.

Tweets

When your app redesign goes wrong…

Image Credits: Twitter.com

Prominent App Store critic Kosta Eleftheriou shut down his FlickType iOS app this week after too many frustrations with App Review. He cited rejections that incorrectly argued that his app required more access than it did — something he had successfully appealed and overturned years ago. Attempted follow-ups with Apple were ignored, he said. 

Image Credits: Twitter.com

Anyone have app ideas?

Twitter is rolling out changes to its newly rebuilt API that will allow third-party developers to build tools and other solutions specifically for its audio chatroom product, Twitter Spaces. The company today announced it’s shipping new endpoints to support Spaces on the Twitter API v2, with the initial focus on enabling discovery of live or scheduled Spaces. This may later be followed by an API update that will make it possible for developers to build out more tools for Spaces’ hosts.

The company first introduced its fully rebuilt API last year, with the goal of modernizing its developer platform while also making it easier to add support for Twitter’s newer features at a faster pace. The new support for Twitter Spaces in the API is one example of that plan now being put into action.

With the current API update, Twitter hopes developers will build new products that enable users — both on and off Twitter — to find Twitter Spaces more easily, the company says. This could potentially broaden the reach of Spaces and introduce its audio chats to more people, which could give Twitter a leg up in the increasingly competitive landscape for audio-based social networking. Today, Twitter Spaces isn’t only taking on Clubhouse, but also the audio chat experiences being offered by Facebook, Discord, Reddit, Public.com, Spotify, and smaller social apps.

According to Twitter, developers will gain access to two new endpoints, Spaces lookup and Spaces search, which allow them to look up live and scheduled Spaces using specific criteria — like the Space ID, user ID or keywords. The Spaces lookup endpoint also offers a way to begin to understand the public metadata and metrics associated with a Space — like the participant count, speaker count, host profile information, detected language, start time, scheduled start time, creation time, status and whether the Space is ticketed — Twitter tells us.
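
To make the two endpoints concrete, here's a minimal sketch of how such requests might be assembled. The field list mirrors the metadata described above, but the exact parameter values and the token handling are illustrative, not taken from Twitter's documentation:

```python
# Sketch of building requests for the Twitter API v2 Spaces endpoints.
# The bearer token and query values below are placeholders.

def build_spaces_search(query, state="live"):
    """Build the URL and query params for the Spaces search endpoint."""
    return (
        "https://api.twitter.com/2/spaces/search",
        {
            "query": query,
            "state": state,  # "live" or "scheduled"
            # Request the public metadata fields described above.
            "space.fields": "title,participant_count,host_ids,lang,"
                            "started_at,scheduled_start,created_at,"
                            "state,is_ticketed",
        },
    )

def build_spaces_lookup(space_id):
    """Build the URL for looking up a single Space by its ID."""
    return f"https://api.twitter.com/2/spaces/{space_id}"

# A real request would attach an OAuth 2.0 bearer token, e.g.:
#   requests.get(url, params=params,
#                headers={"Authorization": f"Bearer {BEARER_TOKEN}"})
url, params = build_spaces_search("podcasting")
print(url)  # https://api.twitter.com/2/spaces/search
```

Since the lookup endpoint returns only public metadata, a tool built on it could surface a Space's title, participant count and scheduled start time without the developer ever joining the audio session itself.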

To choose what Spaces functionality to build into its API first, Twitter says it spoke to developers, who told the company they wanted functionality that could help people discover Spaces they may find interesting and set reminders to attend. Developers also said they want to build tools that would allow Spaces hosts to better understand how well their audio chats are performing. But most of these options aren’t yet available with today’s API update. Twitter only said it’s “exploring” other functionality — like tools that would allow developers to integrate reminders into their products, surface certain metrics fields available in the API or build analytics dashboards.

These ideas for other endpoints haven’t yet gained a spot on Twitter’s Developer Platform Roadmap, either.

Twitter also told us it’s not working on any API endpoints that would allow developers to build standalone client apps for Twitter Spaces, as that’s not something it heard interest in from its developer community.

Several developers have been participating in a weekly Spaces hosted by Daniele Bernardi from Twitter’s Spaces team, and were already clued in to coming updates. Developers with access to the v2 API will be able to begin building with the new endpoints starting today, but none have new experiences ready to launch at this time. Twitter notes Bernardi will also host another Spaces event today at 12 PM PT to talk in more detail about the API update and what’s still to come.

Apple is slowly walking back its controversial decision to redesign mobile Safari in iOS 15 to show the address bar at the bottom of the screen, floating atop the page’s content. The revamp, which was largely meant to make it easier to reach Safari’s controls with one hand, had been met with criticism as Apple’s other design choices actually made the new experience less usable than before. With the latest release of iOS 15 beta 6, Apple is responding to user feedback and complaints with the introduction of yet another design that now shows the bottom tab bar below the page content, offering a more standardized experience for those who would have otherwise liked the update. More importantly, perhaps, Apple is no longer forcing the bottom tab bar on users.

With the new release, there’s now an option to show the address bar at the top of the page, as before. For all those who truly hated the update, this means they can set things back to “normal.”

Image Credits: Screenshots, tab bar before and after

One significant complaint with the floating tab bar was that it made some websites nearly unusable, as the bar would block out elements you needed to click. (To get to these unreachable parts of the page, you’d have to swipe the bar down — a less-than-ideal experience).

In iOS 15 beta 6, these and other issues are addressed. Essentially, the tab bar looks much like it used to — with a familiar row of buttons, like it had before when it had been available at the top of the screen. And the bar will no longer get in the way of website content.

Testers had also pointed out that Apple’s original decision to hide often-used features — like the reload button or Reader Mode — under the three-dot “more” menu made Safari more difficult to use than in the past. With the release of iOS 15 beta 4, Apple had tried to solve this problem by bringing back the reload and share buttons, and making Reader Mode appear when available. But the buttons were still small and harder to tap than before.

The new tab bar and the return to normal it offers — regardless of its placement at the top or bottom of the screen — is an admission from Apple that users’ complaints on this matter were, in fact, valid. And it’s a demonstration of what beta testing is meant to be about: trying out new ideas and fixing what doesn’t work.

Separately, beta 6 users can now restore the tab bar to the top of the page, if that’s their preference. There’s now an option under Settings –> Safari to choose between the default Tab Bar and the Single Tab option — the latter of which relocates the address bar to the top of the screen. (Doing so means you’ll lose the option to swipe through your open tabs, as you can with the Tab Bar, however.)

It’s fairly common for Apple to offer alternatives to its default settings — like how it allows users to configure how gestures and clicks work on the Mac’s trackpad, or to turn off the once-debated “natural” scroll direction. But adding the option to return the tab bar to the top is an admission that a good portion of Safari users didn’t want to relearn how to use one of the iPhone’s most frequently accessed apps. And if forced to do so, they may have switched browsers instead.

As Apple typically releases the latest version of its iOS software in the fall, this update may represent one of the final changes to Safari we’ll see ahead of the public release.

Last fall, Spotify introduced a new format that combined spoken word commentary with music, allowing creators to reproduce the radio-like experience of listening to a DJ or music journalist who shared their perspective on the tracks they would then play. Today, the company is making the format, which it calls “Music + Talk,” available to global creators through its podcasting software Anchor.

Creators who want to offer this sort of blended audio experience can now do so by using the new “Music” tool in Anchor, which provides access to Spotify’s full catalog of 70 million tracks that they can insert into their spoken-word audio programs. Spotify has said this new type of show will continue to compensate the artist when the track is streamed, the same as it would elsewhere on Spotify’s platform. In addition, users can also interact with the music content within the shows as they would otherwise — by liking the song, viewing more information about the track, saving the song, or sharing it, for example.

The shows themselves, meanwhile, will be available to both free and Premium Spotify listeners. Paying subscribers will hear the full tracks when listening to these shows, but free users will only hear a 30-second preview of the songs, due to licensing rights.

The format is somewhat reminiscent of Pandora’s Stories, which was also a combination of music and podcasting, introduced in 2019. However, in Pandora’s case, the focus had been on allowing artists to add their own commentary to music — like talking about the inspiration for a song — while Spotify is making it possible for anyone to annotate their favorite playlists with audio commentary.

Since launching last year, the product has been tweaked somewhat in response to user feedback, Spotify says. The shows now offer clearer visual distinction between the music and talk segments during an episode, and they include music previews on episode pages.

The ability to create Music + Talk shows was previously available in select markets ahead of this global rollout, including in the U.S., Canada, the U.K., Ireland, Australia, and New Zealand.

With the expansion, creators in a number of other major markets are now gaining access, including Japan, India, the Philippines, Indonesia, France, Germany, Spain, Italy, the Netherlands, Sweden, Mexico, Brazil, Chile, Argentina, and Colombia. Alongside the expansion, Spotify’s catalog of Music + Talk original programs will also grow today, as new shows from Argentina, Brazil, Colombia, Chile, India, Japan, and the Philippines will be added.

Spotify will also begin to more heavily market the feature with the launch of its own Spotify Original called “Music + Talk: Unlocked,” which will offer tips and ideas for creators interested in trying out the format.

Google has now taken another step towards the public release of the latest version of the Android operating system, Android 12. The company today released the fourth beta of Android 12, whose most notable new feature is that it has achieved the Platform Stability milestone — meaning the changes impacting Android app developers are now finalized, allowing them to test their apps without worrying about breaking changes in subsequent releases.

While the updated version of Android brings a number of new capabilities for developers to tap into, Google urges its developers to first focus on releasing an Android 12-compatible update. If users find their app doesn’t work properly when they upgrade to the new version of Android, they may stop using the app entirely or even uninstall it, the company warns.

Among the flagship consumer-facing features in Android 12 is the new and more adaptive design system called “Material You,” which lets users apply themes that span across the OS to personalize their Android experience. It also brings new privacy tools, like microphone and camera indicators that show when an app is using those features, as well as a clipboard read notification, similar to iOS, which alerts users when an app reads their clipboard history. In addition, Android 12 lets users play games as soon as they download them, through a Google Play Instant feature. Other key Android features and tools, like Quick Settings, Google Pay, Home Controls, and Android widgets, among others, have been improved, too.

Google has continued to roll out smaller consumer-facing updates in previous Android 12 beta releases, but beta 4 is focused on developers getting their apps ready for the public release of Android, which is expected in the fall.

Image Credits: Google

The company suggested developers look out for changes that include the new Privacy Dashboard in Settings, which lets users see which apps are accessing what type of data and when, and other privacy features like the indicator lights for the mic and camera, clipboard read tools, and new toggles that let users turn off mic and camera access across all apps.

There’s also a new “stretch” overscroll effect that replaces the older “glow” overscroll effect systemwide, new splash screen animations for apps, and keygen changes to be aware of. And there are a number of SDKs and libraries that developers use that will need to be tested for compatibility, including those from Google and third parties.

The new Android 12 beta 4 release is available on supported Pixel devices, and on devices from select partners including ASUS, OnePlus, Oppo, Realme, Sharp, and ZTE. Android TV developers can access beta 4 as well, via the ADT-3 developer kit.

WhatsApp users will finally be able to move their entire chat history between mobile operating systems — something that’s been one of users’ biggest requests to date. The company today introduced a feature that will soon become available to users of both iOS and Android devices, allowing them to move their WhatsApp voice notes, photos, and conversations securely between devices when they switch between mobile operating systems.

The company had been rumored to be working on such functionality for some time, but the details of which devices would be initially supported or when it would be released weren’t yet known.

In product leaks, WhatsApp had appeared to be working on an integration into Android’s built-in transfer app, the Google Data Transfer Tool, which lets users move their files from one Android device to another, or switch from iOS to Android.

The feature WhatsApp introduced today, however, works with Samsung devices and Samsung’s own transfer tool, known as Smart Switch. Today, Smart Switch helps users transfer contacts, photos, music, messages, notes, calendars, and more to Samsung Galaxy devices. Now, it will transfer WhatsApp chat history, too.

WhatsApp showed off the new tool at Samsung’s Galaxy Unpacked event, and announced Samsung’s newest Galaxy foldable devices would get the feature first in the weeks to come. The feature will later roll out to Android more broadly. WhatsApp didn’t say when iOS users would gain access.

To use the feature, WhatsApp users will connect their old and new device together via a USB-C to Lightning cable, and launch Smart Switch. The new phone will then prompt you to scan a QR code using your old phone and export your WhatsApp history. To complete the transfer, you’ll sign into WhatsApp on the new device and import the messages.
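In rough terms, the pairing-and-export flow described above can be sketched as follows. This is purely illustrative: every function name here is hypothetical, and the real transfer uses WhatsApp’s own encryption rather than the simple keyed-checksum stand-in below.

```python
import hashlib
import json
import secrets

# Illustrative sketch of the QR-pairing transfer flow; all names are
# hypothetical and the "encryption" here is only a keyed checksum stand-in.

def new_phone_generate_qr() -> str:
    """The new device creates a one-time pairing secret, displayed as a QR code."""
    return secrets.token_hex(16)

def old_phone_export(chat_history: list, pairing_secret: str) -> dict:
    """The old device scans the QR code and packages the history for transfer."""
    key = hashlib.sha256(pairing_secret.encode()).hexdigest()
    return {"key": key, "payload": json.dumps(chat_history).encode()}

def new_phone_import(bundle: dict, pairing_secret: str) -> list:
    """The new device checks the bundle matches its pairing secret, then imports."""
    expected = hashlib.sha256(pairing_secret.encode()).hexdigest()
    if bundle["key"] != expected:
        raise ValueError("pairing mismatch: bundle was not made for this device")
    return json.loads(bundle["payload"])

secret = new_phone_generate_qr()                            # shown as a QR code
bundle = old_phone_export(["voice note", "photo"], secret)  # old phone exports
restored = new_phone_import(bundle, secret)                 # new phone imports
```

The one-time secret shown as a QR code is what ties the export to a specific new device; a bundle produced under a different pairing simply fails to import.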

Building such a feature was non-trivial, the company also explained, as messages across its service are end-to-end encrypted by default and stored on users’ devices. That meant the creation of a tool to move chat history between operating systems required additional work from both WhatsApp as well as operating system and device manufacturers in order to build it in a secure way, the company said.

“Your WhatsApp messages belong to you. That’s why they are stored on your phone by default, and not accessible in the cloud like many other messaging services,” noted Sandeep Paruchuri, product manager at WhatsApp, in a statement about the launch. “We’re excited for the first time to make it easy for people to securely transfer their WhatsApp history from one operating system to another. This has been one of our most requested features from users for years and we worked together with operating systems and device manufacturers to solve it,” he added.

 

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features, protecting minors and limiting the spread of Child Sexual Abuse Material (CSAM), are universally accepted to be good ones, there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length to some of the tactical and strategic issues that could come up once this system rolls out. 

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are or could be about to be viewing explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.
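A heavily simplified sketch of that gating logic follows; it is an assumption-laden illustration, not Apple’s implementation. A real perceptual hash like NeuralHash tolerates small image changes, which the ordinary cryptographic hash used as a stand-in here does not, and every name below is hypothetical.

```python
import hashlib

# Stand-in for the "known CSAM" hash database that, in Apple's design, ships
# inside the operating system image. Real entries are NeuralHash perceptual
# hashes; sha256 is only a placeholder for illustration.
KNOWN_HASHES = {hashlib.sha256(b"known-flagged-image").hexdigest()}

def image_hash(image_bytes: bytes) -> str:
    """Placeholder for the on-device perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_before_upload(image_bytes: bytes, icloud_photos_enabled: bool):
    """Hashing runs only as part of the iCloud Photos upload pipeline.

    Returns None when iCloud Photos is off (no hashing happens at all),
    otherwise whether this image matched the on-device database.
    """
    if not icloud_photos_enabled:
        return None  # per Apple: the feature stops working completely
    return image_hash(image_bytes) in KNOWN_HASHES
```

The key property being illustrated is the gate at the top: with iCloud Photos disabled, the matching code path is never reached.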

Though this interview won’t answer every possible question related to these new features, this is the most extensive on-the-record discussion by Apple’s senior privacy member. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels that it has a good solution here. 

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services. That — as you point out — isn’t something that we’ve ever done: looking through users’ iCloud Photos. This system doesn’t change that either; it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead, what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages feature and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place. Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why do it? Because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device-side component — and it has the device-side component, by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users’ data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will simply refuse action if it is asked by a government or other agency to compromise by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, this is launching only for U.S. iCloud accounts, and so the hypotheticals seem to bring up generic countries, or countries other than the U.S., when they speak in that way. And therefore it seems to be the case that people agree U.S. law doesn’t offer these kinds of capabilities to our government. 

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system; we have one global operating system and don’t have the ability to target updates to individual users, and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires a threshold of images to be exceeded, so trying to seek out even a single image from a person’s device, or set of people’s devices, won’t work, because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM, and we don’t believe that there’s a basis on which people will be able to make that request in the U.S. And the last point that I would just add is that it does still preserve user choice: if a user does not like this kind of functionality, they can choose not to use iCloud Photos, and if iCloud Photos is not enabled, no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts, including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos, is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance: I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect, in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans on its servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system provides already across so many other features: the one global operating system that’s the same for all users who download it and install it. And so, as one property, it is much more challenging to target it to an individual user. On the server side that’s actually quite easy — trivial. Being able to have some of those properties, building it into the device, and ensuring it’s the same for all users with the feature enabled gives a strong privacy property. 

Secondly, you point out how use of on-device technology is privacy preserving, and in this case, that’s a representation that I would make to you again: it’s really the alternative to where users’ libraries have to be processed on a server, which is less private.

The thing that we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not engaged in this illegal behavior. Apple gains no additional knowledge about any user’s cloud library. No user’s cloud library has to be processed as a result of this feature. Instead, what we’re able to do is create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents, or learn anything about the images and users, specifically for those that collect photos that match illegal, known CSAM hashes. That’s just not something anyone can say about a cloud processing scanning service, where every single image has to be processed in clear, decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, where the only thing determined is those images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations. 

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users, though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing to an attacker who’s undergone such an extremely difficult action as breaching someone’s device would be to trigger a manual review of an account doesn’t make much sense. 

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to only occur in cases where it’s a legitimate high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome to an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built into whether or not that image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in 1 trillion accounts per year. So, working against the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that those accounts that we review are ones that, when we refer to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute, and convict.
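The threshold mechanism Neuenschwander describes can be modeled as a simple binomial calculation: if each innocent image carries some tiny independent chance of a false match, requiring many matches before anything becomes decryptable drives the per-account false-report rate down sharply. This is an illustrative model only; the threshold value below is hypothetical, not a figure Apple has stated here.

```python
from math import comb

THRESHOLD = 30  # hypothetical value, for illustration only

def should_flag_for_review(matched_vouchers: int) -> bool:
    # Below the threshold, Apple can decrypt nothing and learns nothing.
    return matched_vouchers >= THRESHOLD

def false_account_probability(n_photos: int, p_false: float, threshold: int) -> float:
    """P(an innocent library of n_photos accumulates >= threshold false matches),
    assuming each image is an independent Bernoulli trial with rate p_false.

    Computed as 1 minus the probability of fewer-than-threshold matches.
    """
    below = sum(comb(n_photos, k) * p_false**k * (1 - p_false)**(n_photos - k)
                for k in range(threshold))
    return 1.0 - below
```

Even a modest threshold collapses the odds under this model: with 10,000 photos and a one-in-a-million per-image false-match rate, requiring just two matches instead of one already cuts the false-account probability by roughly two orders of magnitude.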

Weeks after Instagram rolled out increased protections for minors using its app, Google is now doing the same for its suite of services, including Google search, YouTube, YouTube Kids, Google Assistant, and others. The company this morning announced a series of product and policy changes that will allow younger people to stay more private and protected online and others that will limit ad targeting.

The changes in Google’s case are even more expansive than those Instagram announced, as they span across an array of Google’s products, instead of being limited to a single app.

Though Congress has been pressing Google and other tech companies on the negative impacts their services may have on children, not all of the changes being made are required by law, Google says.

“While some of these updates directly address upcoming regulations, we’ve gone beyond what’s required by law to protect teens on Google and YouTube,” a Google spokesperson told TechCrunch. “Many of these changes also extend beyond any single current or upcoming regulation. We’re looking at ways to develop consistent product experiences and user controls for kids and teens globally,” they added.

In other words, Google is building in some changes based on where it believes the industry is going, rather than where it is right now.

On YouTube, Google says it will “gradually” start adjusting the default upload setting to the most private option for users ages 13 to 17, which will limit the visibility of videos only to the users and those they directly share with, not the wider public. These younger teen users won’t be prevented from changing the setting back to “public,” necessarily, but they will now have to make an explicit and intentional choice when doing so. YouTube will then provide reminders indicating who can see their video, the company notes.

YouTube will also turn on its “take a break” and bedtime reminders by default for all users ages 13 to 17 and will turn off autoplay. Again, these changes are related to the default settings — users can disable the digital well-being features if they choose.

On YouTube’s platform for younger children, YouTube Kids, the company will also add an autoplay option, which will be turned off by default, so parents will have to decide whether or not they want to use autoplay with their children. The change puts the choice directly in parents’ hands, after complaints from child safety advocates and some members of Congress suggested such an algorithmic feature was problematic. Later, parents will also be able to “lock” their default selection.

YouTube will also remove “overly commercial content” from YouTube Kids, in a move that also follows increased pressure from consumer advocacy groups and childhood experts, who have long argued that YouTube encourages kids to spend money (or rather, beg their parents to do so). How YouTube will draw the line between acceptable and “overly commercial” content is less clear, but the company says it will, for example, remove videos that focus on product packaging — like the popular “unboxing” videos. This could impact some of YouTube’s larger creators of videos for kids, like multi-millionaire Ryan’s Toy Review.


Image Credits: YouTube

Elsewhere on Google, other changes impacting minors will also begin rolling out.

In the weeks ahead, Google will introduce a new policy that will allow anyone under the age of 18, or a parent or guardian, to request the removal of their images from Google Image search results. This expands upon the existing “right to be forgotten” privacy policies already live in the E.U., but will introduce new products and controls for both kids and teenagers globally.

The company will make a number of adjustments to user accounts for people under the age of 18, as well.

In addition to the changes to YouTube, Google will restrict access to adult content by enabling its SafeSearch filtering technology by default for all users under 13 managed by its Google Family Link service. It will also enable SafeSearch for all users under 18 and make this the new default for teens who set up new accounts. Google Assistant will enable SafeSearch protections by default on shared devices, like smart screens and their web browsers. In school settings where Google Workspace for Education is used, SafeSearch will be the default, and switching to Guest Mode and Incognito Mode web browsing will be turned off by default, too, as was recently announced.
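Taken together, those defaults read like a small decision table. A hypothetical restatement in code follows; this is not a Google API, just a summary of the rules described above, with argument names invented for illustration.

```python
def safesearch_on_by_default(age: int, *, family_link_managed: bool = False,
                             workspace_for_education: bool = False) -> bool:
    """Hypothetical summary of the SafeSearch defaults described above."""
    if workspace_for_education:
        return True   # school settings: SafeSearch is the default
    if family_link_managed and age < 13:
        return True   # supervised under-13 accounts get it enabled
    if age < 18:
        return True   # new default for teen accounts
    return False      # adults keep the existing default (off unless chosen)
```

Note that in every branch users or administrators can still change the setting afterward; only the starting value differs.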

Meanwhile, location history is already off by default on all Google accounts, but children with supervised accounts now won’t be able to enable it. This change will be extended to all users under 18 globally, meaning location history can’t be enabled at all until those users are legal adults.

On Google Play, the company will launch a new section that will inform parents about which apps follow its Families policies, and app developers will have to disclose how their apps collect and use data. These features — which were partially inspired by Apple’s App Store Privacy Labels — had already been detailed for Android developers before today.

Google’s parental control tools are also being expanded. Parents and guardians who are Family Link users will gain new abilities to filter and block news, podcasts, and access to webpages on Assistant-enabled smart devices.

For advertisers, there are significant changes in store, too.

Google says it will expand safeguards to prevent age-sensitive ad categories from being shown to teens, and it will block ad targeting based on factors like age, gender, or interests for users under 18. This is somewhat similar to the advertising changes Instagram introduced, as ads will no longer leverage “interests” data for targeting young teens and kids, but Instagram was still allowing targeting by age and gender. Google will not. The advertising changes will roll out globally in the “coming months,” the company says.

All the changes across Google and YouTube will roll out globally.

 

Apple later this year will roll out new tools that will warn children and parents if the child sends or receives sexually explicit photos through the Messages app. The feature is part of a handful of new technologies Apple is introducing that aim to limit the spread of Child Sexual Abuse Material (CSAM) across Apple’s platforms and services.

As part of these developments, Apple will be able to detect known CSAM images on its mobile devices, like iPhone and iPad, and in photos uploaded to iCloud, while still respecting consumer privacy.

The new Messages feature, meanwhile, is meant to enable parents to play a more active and informed role when it comes to helping their children learn to navigate online communication. Through a software update rolling out later this year, Messages will be able to use on-device machine learning to analyze image attachments and determine if a photo being shared is sexually explicit. This technology does not require Apple to access or read the child’s private communications, as all the processing happens on the device. Nothing is passed back to Apple’s servers in the cloud.
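Apple hasn’t published the classifier itself, but the on-device gating flow it describes can be sketched roughly like this. Everything here is an illustrative stand-in: the `is_sexually_explicit` model call, the threshold, and the return labels are hypothetical, not Apple’s API.

```python
from dataclasses import dataclass


@dataclass
class Attachment:
    image_bytes: bytes


def is_sexually_explicit(image: bytes) -> float:
    """Hypothetical on-device ML model returning a probability.

    In Apple's design this inference runs entirely on the device;
    the image is never uploaded for analysis.
    """
    # Placeholder: a real implementation would run an on-device model.
    return 0.0


THRESHOLD = 0.9  # assumed cutoff, not a published Apple value


def handle_incoming(attachment: Attachment) -> str:
    """Decide how an incoming photo is displayed in Messages."""
    score = is_sexually_explicit(attachment.image_bytes)
    if score >= THRESHOLD:
        # Blur the image and show the "this may be sensitive" label;
        # nothing is sent back to Apple's servers.
        return "blurred_with_warning"
    return "shown_normally"
```

The key property the sketch illustrates is that the decision happens entirely on the device: the classifier's verdict gates the UI, and the photo never leaves the phone for analysis.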

If a sensitive photo is discovered in a message thread, the image will be blocked and a label will appear below the photo that states, “this may be sensitive” with a link to click to view the photo. If the child chooses to view the photo, another screen appears with more information. Here, a message informs the child that sensitive photos and videos “show the private body parts that you cover with bathing suits” and “it’s not your fault, but sensitive photos and videos can be used to harm you.”

It also suggests that the person in the photo or video may not want it to be seen and that it could have been shared without their knowledge.

Image Credits: Apple

These warnings aim to help guide the child to make the right decision by choosing not to view the content.

However, if the child clicks through to view the photo anyway, they’ll then be shown an additional screen that informs them that if they choose to view the photo, their parents will be notified. The screen also explains that their parents want them to be safe and suggests that the child talk to someone if they feel pressured. It offers a link to more resources for getting help, as well.

There’s still an option at the bottom of the screen to view the photo, but again, it’s not the default choice. Instead, the screen is designed in a way where the option to not view the photo is highlighted.

These types of features could help protect children from sexual predators, not only by introducing technology that interrupts the communications and offers advice and resources, but also because the system will alert parents. In many cases where a child is hurt by a predator, parents didn’t even realize the child had begun to talk to that person online or by phone. This is because child predators are very manipulative and will attempt to gain the child’s trust, then isolate the child from their parents so they’ll keep the communications a secret. In other cases, the predators have groomed the parents, too.

Apple’s technology could help in both cases by intervening, identifying and alerting to explicit materials being shared.

However, a growing amount of CSAM is what’s known as self-generated CSAM, or imagery that is taken by the child, which may then be shared consensually with the child’s partner or peers. In other words, sexting or sharing “nudes.” According to a 2019 survey from Thorn, a company developing technology to fight the sexual exploitation of children, this practice has become so common that 1 in 5 girls ages 13 to 17 said they have shared their own nudes, and 1 in 10 boys have done the same. But the child may not fully understand how sharing that imagery puts them at risk of sexual abuse and exploitation.

The new Messages feature will offer a similar set of protections here, too. In this case, if a child attempts to send an explicit photo, they’ll be warned before the photo is sent. Parents can also receive a message if the child chooses to send the photo anyway.

Apple says the new technology will arrive as part of a software update later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey in the U.S.

This update will also include updates to Siri and Search that will offer expanded guidance and resources to help children and parents stay safe online and get help in unsafe situations. For example, users will be able to ask Siri how to report CSAM or child exploitation. Siri and Search will also intervene when users search for queries related to CSAM to explain that the topic is harmful and provide resources to get help.

Spotify announced this morning a new partnership with online GIF database GIPHY to enable discovery of new music through GIFs. No, the GIFs themselves won’t play song clips, if that’s what you’re thinking. Instead, through a series of new Spotify-linked GIFs, there will be an option to click a button to be taken to Spotify directly to hear the artist’s music. At launch, artists including Doja Cat, The Weeknd, Post Malone, Nicki Minaj, The Kid LAROI, Conan Gray, and others will have Spotify-linked GIFs available on their official GIPHY profile page. More artists will be added over time.

The idea behind the new integration is to help connect users with Spotify music from their everyday communications, like texts, group chats, and other places where GIFs are used. This is similar to Spotify’s existing integrations with social media apps like Snapchat and Instagram, where users can share music through the Stories and messages they post. Essentially, it’s a user acquisition strategy that leverages online social activities — in this case, sharing GIFs — while also benefiting the artists through the exposure they receive.

You can find the new Spotify-linked GIFs on the artist’s page on GIPHY.com or through GIPHY’s mobile app. The supported GIFs will include a new “Listen on Spotify” button at the bottom which will appear alongside the GIF when it’s shared. When clicked, users are redirected from the GIF to the artist’s page on Spotify where they can stream their music or browse to discover more songs they want to hear.

Image Credits: Spotify/GIPHY

Spotify says the feature is part of a broader partnership it has with GIPHY, which will later focus on bringing a more interactive listening experience to users.

The move to partner with GIPHY follows a recent expansion of the existing partnership between Spotify and GIPHY’s parent company, Facebook. The social networking giant bought the popular GIF platform in a deal worth a reported $400 million back in 2020, a couple of years after Google snatched up GIPHY rival Tenor. Since then, Facebook has worked to better integrate GIPHY with its apps, like Facebook and Instagram.

Earlier this year, Facebook and Spotify had also teamed up on a new “Boombox” project that allows Facebook users to listen to music hosted on Spotify while browsing through the Facebook app. This is powered by a “miniplayer” that allows anyone who comes across the shared music to click to play the content while they scroll their feed.

Spotify says the new feature will be available to users globally from verified GIPHY artists’ pages.

At its Game Developer Summit, Google today announced a new feature for Android game developers that will speed up the time from starting a download in the Google Play store to the game launching by almost 2x — at least on Android 12 devices. The name of the new feature, ‘play as you download,’ pretty much gives away what this is all about. Even before all the game’s assets have been downloaded, players will be able to get going.

On average, modern games are likely the largest apps you’ll ever download, and when that download takes a couple of minutes, you may have long since moved on to the next TikTok session before the game is ever ready to play. With this new feature, Google promises that it’ll take only half the time to jump into a game that weighs in at 400MB or so. If you’re a console gamer, this whole concept will also feel familiar, given that Sony does pretty much the same thing for PlayStation games.
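The underlying idea — launch once a small core of assets is local, and stream the rest while the player is already in the game — can be sketched in a few lines. The asset names, split, and timings below are invented for illustration; Google’s actual mechanism works at the level of Android App Bundle asset packs, not individual files like this.

```python
import threading
import time

# Made-up split between launch-critical and deferrable assets.
CORE_ASSETS = ["engine", "menu", "level_1"]
DEFERRED_ASSETS = ["level_2", "level_3", "cutscenes"]

downloaded: set[str] = set()


def download(asset: str) -> None:
    """Stand-in for a network fetch of one asset."""
    time.sleep(0.01)
    downloaded.add(asset)


def download_rest() -> None:
    """Fetch the remaining assets while the player is already in-game."""
    for asset in DEFERRED_ASSETS:
        download(asset)


def launch_game() -> str:
    # Block only on what's needed to start playing...
    for asset in CORE_ASSETS:
        download(asset)
    # ...then stream everything else in the background.
    threading.Thread(target=download_rest, daemon=True).start()
    return "playing"
```

The win is the same one the article describes: the time-to-play is bounded by the core subset rather than the full package, so a 400MB title can start roughly when its launch-critical slice has arrived.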

Now, this isn’t Google’s first attempt at making games load faster. With ‘Google Play Instant,’ the company already offers a related feature that allows gamers to immediately start a game from the Play Store. The idea there, though, is to completely do away with the install process and give potential players an opportunity to try out a new game right away.

Like Play Instant, the new ‘play as you download’ feature is powered by Google’s Android App Bundle format, which is, for the most part, replacing the old APK standard.

Image Credits: Google