Steve Thomas - IT Consultant

The iPhone XS proves one thing definitively: that the iPhone X was probably one of the most ambitious product bets of all time.

When Apple told me in 2017 that they put aside plans for the iterative upgrade that they were going to ship and went all in on the iPhone X because they thought they could jump ahead a year, they were not blustering. That the iPhone XS feels, at least on the surface, like one of Apple’s most “S” models ever is a testament to how aggressive the iPhone X timeline was.

I think there will be plenty of people who will see this as a weakness of the iPhone XS, and I can understand their point of view. There are about a half-dozen definitive improvements in the XS over the iPhone X, but none of them has quite the buzzword-worthy effectiveness of a marquee upgrade like 64-bit, 3D Touch or wireless charging — all benefits delivered in previous “S” years.

That weakness, however, is only really present if you view it through the eyes of the year-over-year upgrader. As an upgrade over an iPhone X, I’d say you’re going to have to love what they’ve done with the camera to want to make the jump. As a move from any other device, it’s a huge win: you’re going head-first into sculpted OLED screens, face recognition, durable gesture-first interfaces and a bunch of other genre-defining moves that Apple made in 2017, thinking about 2030, while you were sitting back there in 2016.

Since I do not have an iPhone XR, I can’t really make a call for you on that comparison, but from what I saw at the event and from what I know about the tech in the iPhone XS and XS Max from using them over the past week, I have some basic theories about how it will stack up.

For those with interest in the edge of the envelope, however, there is a lot to absorb in these two new phones, separated only by size. Once you begin to unpack the technological advancements behind each of the upgrades in the XS, you begin to understand the real competitive edge and competence of Apple’s silicon team, and how well they listen to what the software side needs now and in the future.

Whether that makes any difference for you day to day is another question, one that, as I mentioned above, really lands on how much you like the camera.

But first, let’s walk through some other interesting new stuff.

Notes on durability

As is always true with my testing methodology, I treat this as anyone would who got a new iPhone and loaded an iCloud backup onto it. Plenty of other sites will do clean room testing if you like comparison porn, but I really don’t think that does most folks much good. By and large most people aren’t making choices between ecosystems based on one spec or another. Instead, I try to take them along on prototypical daily carries, whether to work for TechCrunch, on vacation or doing family stuff. A foot injury precluded any theme parks this year (plus, I don’t like to be predictable) so I did some office work, road travel in the center of California and some family outings to the park and zoo. A mix of use cases that involves CarPlay, navigation, photos and general use in a suburban environment.

In terms of testing locale, Fresno may not be the most metropolitan city, but it’s got some interesting conditions that set it apart from the cities where most of the iPhones are going to end up being tested. Network conditions are pretty adverse in a lot of places, for one. There’s a lot of farmland and undeveloped acreage and not all of it is covered well by wireless carriers. Then there’s the heat. Most of the year it’s above 90 degrees Fahrenheit and a good chunk of that is spent above 100. That means that batteries take an absolute beating here and often perform worse than other, more temperate, places like San Francisco. I think that’s true of a lot of places where iPhones get used, but not so much the places where they get reviewed.

That said, battery life has been hard to judge. In my rundown tests, the iPhone XS Max clearly went beast mode, outlasting my iPhone X and iPhone XS. Between those two, though, it was tougher to tell. I try to wait until the end of the period I have to test the phones to do battery stuff so that background indexing doesn’t affect the numbers. In my ‘real world’ testing in the 90+ degree heat around here, the iPhone XS did best my iPhone X by a few percentage points, which is in line with Apple’s claims, but my X is also a year old. I never failed to get through a pretty intense day of testing with the XS, though.

In terms of storage I’m tapping at the door of 256GB, so the addition of a 512GB option is really nice. As always, the easiest way to determine what size you should buy is to check your existing free space. If you’re using around 50% of what your phone currently has, buy the same size. If you’re using more, consider upgrading, because these phones are only getting faster at taking better pictures and video and that will eat up more space.
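The rule of thumb above can be sketched as a tiny helper. The tier list here is an assumption based on the capacities mentioned in this review, not an official Apple lineup:

```python
# Rough sketch of the storage rule of thumb described above:
# if you're using more than half your current capacity, step up a tier.
# TIERS_GB is an assumption (the capacities discussed in this review).

TIERS_GB = [64, 256, 512]

def recommended_tier(current_gb, used_gb):
    """Return a suggested capacity for the next phone."""
    if used_gb <= current_gb * 0.5:
        return current_gb  # plenty of headroom: same size is fine
    # Otherwise pick the next tier up (or the largest available).
    for tier in TIERS_GB:
        if tier > current_gb:
            return tier
    return TIERS_GB[-1]

print(recommended_tier(256, 100))  # under half used -> stay at 256
print(recommended_tier(256, 200))  # over half used  -> step up to 512
```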

The review units I was given both had the new gold finish. As I mentioned on the day, this is a much deeper, brassier gold than the Apple Watch Edition. It’s less ‘pawn shop gold’ and more ‘this is very expensive’ gold. I like it a lot, though it is hard to photograph accurately — if you’re skeptical, try to see it in person. It has a touch of pink added in, especially as you look at the back glass along with the metal bands around the edges. The back glass has a pearlescent look now as well, and we were told that this is a new formulation that Apple created specifically with Corning. Apple says that this is the most durable glass ever in a smartphone.

My current iPhone has held up to multiple falls over 3 feet over the past year, one of which resulted in a broken screen and replacement under warranty. Doubtless multiple YouTubers will be hitting this thing with hammers and dropping it from buildings in beautiful Phantom Flex slo-mo soon enough. I didn’t test it. One thing I am interested in seeing develop, however, is how the glass holds up to fine abrasions and scratches over time.

My iPhone X is riddled with scratches both front and back, something having to do with the glass formulation being harder, but more brittle. Less likely to break on impact but more prone to abrasion. I’m a dedicated no-caser, which is why my phone looks like it does, but there’s no way for me to tell how the iPhone XS and XS Max will hold up without giving them more time on the clock. So I’ll return to this in a few weeks.

Both the gold and space grey iPhone XS models have been subjected to a coating process called physical vapor deposition, or PVD. Basically, metal particles get vaporized and bonded to the surface to coat and color the band. PVD is a process, not a material, so I’m not sure what they’re actually coating these with, but one suggestion has been titanium nitride. I don’t mind the weathering that has happened on my iPhone X band, but I think it would look a lot worse on the gold, so I’m hoping that this process (which is known to be incredibly durable and is used in machine tooling) will improve the durability of the band. That said, I know most people are not no-casers like me, so it’s likely a moot point.

Now let’s get to the nut of it: the camera.

Bokeh let’s do it

I’m (still) not going to be comparing the iPhone XS to an interchangeable lens camera because portrait mode is not a replacement for those, it’s about pulling them out less. That said, this is the closest it’s ever been.

One of the major hurdles that smartphone cameras have had to overcome in their comparisons to cameras with beautiful glass attached is their inherent depth of focus. Without getting too into the weeds (feel free to read this for more), because they’re so small, smartphone cameras produce an image with an incredibly deep depth of field that renders everything sharp. This doesn’t feel like a portrait or well-composed shot from a larger camera because it doesn’t produce background blur. That blur was added a couple of years ago with Apple’s portrait mode and has been duplicated since by every manufacturer that matters — to varying levels of success or failure.

By and large, most manufacturers do it in software. They figure out what the subject probably is, use image recognition to find the eyes/nose/mouth triangle, build a quick matte and blur everything else. Apple does more, adding either the parallax of two lenses or the IR projector of the TrueDepth array that enables Face ID to gather a 9-layer depth map.
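The software-only approach described above can be sketched in a few lines: build a matte over the subject, blur everything outside it, then composite. This is a pure-Python toy on a tiny grayscale "image"; real pipelines work on full frames with learned mattes, but the structure is the same:

```python
# Toy sketch of software-only portrait mode: matte + blur + composite.

def box_blur(img, radius=1):
    """Naive box blur: average each pixel with its neighbors."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def fake_portrait(img, matte):
    """matte[y][x] == 1 keeps the pixel sharp; 0 gets the blurred background."""
    blurred = box_blur(img)
    return [[img[y][x] if matte[y][x] else blurred[y][x]
             for x in range(len(img[0]))] for y in range(len(img))]

image = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]   # one bright "subject" pixel
matte = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]             # subject mask
result = fake_portrait(image, matte)
print(result[1][1])  # subject pixel stays 200; surrounding pixels get blurred
```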

As a note, the iPhone XR works differently, and with fewer tools, to enable portrait mode. Because it only has one lens, it uses focus pixels and segmentation masking to ‘fake’ the parallax of two lenses.

With the iPhone XS, Apple is continuing to push ahead with the complexity of its modeling for the portrait mode. The relatively straightforward disc blur of the past is being replaced by a true bokeh effect.

Background blur in an image is related directly to lens compression, subject-to-camera distance and aperture. Bokeh is the character of that blur. It’s more than just ‘how blurry’, it’s the shapes produced from light sources, the way they change throughout the frame from center to edges, how they diffuse color and how they interact with the sharp portions of the image.

Bokeh is to blur what seasoning is to a good meal. Unless you’re the chef, you probably don’t care what they did; you just care that it tastes great.

Well, Apple chef-ed it the hell up with this. Unwilling to settle for a templatized bokeh that felt good and leave it at that, the camera team went the extra mile and created an algorithmic model that contains virtual ‘characteristics’ of the iPhone XS’s lens. Just as a photographer might pick one lens or another for a particular effect, the camera team built out the bokeh model after testing a multitude of lenses from all of the classic camera systems.

I keep saying model because it’s important to emphasize that this is a living construct. The blur you get will look different from image to image, at different distances and in different lighting conditions, but it will stay true to the nature of the virtual lens. Apple’s bokeh has a medium-sized penumbra, spreading out light sources but not blowing them out. It maintains color nicely, making sure that the quality of light isn’t obscured as it is with so many other portrait applications in other phones that just pick a spot and create a circle of standard Gaussian or disc blur.

Check out these two images, for instance. Note that when the light is circular, it retains its shape, as does the rectangular light. It is softened and blurred, as it would be when diffused through the widened aperture of a regular lens. The same goes for other shapes in reflected light scenarios.

Now here’s the same shot from an iPhone X; note the indiscriminate blur of the light. This modeling effort is why I’m glad that the adjustment slider proudly carries f-stop or aperture measurements. This is what this image would look like at a given aperture, rather than on a 0-100 scale. It’s very well done and, because it’s modeled, it can be improved over time. My hope is that eventually, developers will be able to plug in their own numbers to “add lenses” to a user’s kit.

And an adjustable depth of focus isn’t just good for blurring, it’s also good for un-blurring. This portrait mode selfie placed my son in the blurry zone because it focused on my face. Sure, I could turn portrait mode off on an iPhone X and get everything sharp, but now I can choose to “add” him to the in-focus area while still leaving the background blurry. A super cool feature that I think is going to get a lot of use.
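The "un-blurring" idea above boils down to a per-pixel depth map deciding what stays sharp: widening the in-focus band "adds" a subject back into focus without sharpening the distant background. A minimal sketch, with made-up depth values:

```python
# Toy sketch of depth-based refocusing: a depth map plus an adjustable
# in-focus band. Values and labels are illustrative, not real camera data.

def refocus(pixels, depths, focus_near, focus_far):
    """Keep pixels whose depth falls inside [focus_near, focus_far] sharp;
    mark everything else as blurred (a stand-in for an actual blur pass)."""
    return ["sharp" if focus_near <= d <= focus_far else "blurred"
            for p, d in zip(pixels, depths)]

# depths in arbitrary units: me (1.0), my son (1.8), background (6.0)
pixels = ["me", "son", "trees"]
depths = [1.0, 1.8, 6.0]

print(refocus(pixels, depths, 0.5, 1.2))  # narrow band: only "me" is sharp
print(refocus(pixels, depths, 0.5, 2.0))  # widened band: "son" sharp too, trees still blurred
```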

It’s also great for removing unwanted people or things from the background by cranking up the blur.

And yes, it works on non-humans.

If you end up with an iPhone XS, I’d play with the feature a bunch to get used to what a super wide aperture lens feels like. When it’s open all the way to f/1.4 (not the actual widest aperture of the lens, btw; this is the virtual model we’re controlling), pretty much only the eyes should be in focus. Ears, shoulders, maybe even the nose could be out of the focus area. It takes some getting used to but can produce dramatic results.

Developers do have access to one new feature though, the segmentation mask. This is a more precise mask that aids in edge detailing, improving hair and fine line detail around the edges of a portrait subject. In my testing it has led to better handling of these transition areas and less clumsiness. It’s still not perfect, but it’s better. And third-party apps like Halide are already utilizing it. Halide’s co-creator, Sebastiaan de With, says they’re already seeing improvements in Halide with the segmentation map.

“Segmentation is the ability to classify sets of pixels into different categories,” says de With. “This is different than a “Hot dog, not a hot dog” problem, which just tells you whether a hot dog exists anywhere in the image. With segmentation, the goal is drawing an outline over just the hot dog. It’s an important topic with self driving cars, because it isn’t enough to tell you there’s a person somewhere in the image. It needs to know that person is directly in front of you. On devices that support it, we use PEM as the authority for what should stay in focus. We still use the classic method on old devices (anything earlier than iPhone 8), but the quality difference is huge.”

The above is an example shot in Halide that shows the image, the depth map and the segmentation map.

In the example below, the middle black-and-white image is what was possible before iOS 12. Using a handful of rules, like “Where did the user tap in the image?”, we constructed this matte to apply our blur effect. It’s not bad by any means, but compare it to the image on the right. For starters, it’s much higher resolution, which means the edges look natural.
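The distinction de With draws between classification and segmentation can be made concrete with a toy example: classification answers "is a subject present anywhere?", while segmentation labels exactly which pixels belong to it. A threshold stands in here for the learned models real apps use:

```python
# Toy contrast between classification and segmentation.
# A brightness threshold stands in for a trained model.

def classify(img, thresh=128):
    """Whole-image answer: does any pixel look like the (bright) subject?"""
    return any(p > thresh for row in img for p in row)

def segment(img, thresh=128):
    """Per-pixel answer: a matte marking exactly which pixels are subject."""
    return [[1 if p > thresh else 0 for p in row] for row in img]

img = [[10, 200], [10, 10]]
print(classify(img))   # True: a subject exists somewhere in the frame
print(segment(img))    # [[0, 1], [0, 0]]: and here is exactly where
```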

My testing of portrait mode on the iPhone XS says that it is massively improved, but that there are still some very evident quirks that will lead to weirdness in some shots, like the wrong things being made blurry and halos of light appearing around subjects. It’s also not quite aggressive enough on foreground objects — those should blur too but only sometimes do. But the quirks are overshadowed by the super cool addition of the adjustable background blur.

Live preview of the depth control in the camera view is not in iOS 12 at the launch of the iPhone XS, but it will be coming in a future version of iOS 12 this fall.

I also shoot a huge amount of photos with the telephoto lens. It’s closer to what you’d consider a standard lens on a camera. The normal lens is really wide, and once you acclimate to the telephoto you’re left wondering why you have a bunch of pictures of people in the middle of a ton of foreground and sky. If you haven’t already, I’d say try defaulting to 2x for a couple of weeks and see how you like your photos. For those tight conditions or really broad landscapes you can always drop it back to the wide. Because of this, any iPhone that doesn’t have a telephoto is a basic non-starter for me, which is going to be one of the limiters on people moving to the iPhone XR from the iPhone X, I believe. Even iPhone 8 Plus users who rely on the telephoto will, I believe, miss it if they don’t go to the XS.

But, man, Smart HDR is where it’s at

I’m going to say something now that is surely going to cause some Apple followers to snort, but it’s true. Here it is:

For a company as prone to hyperbole and Maximum Force Enthusiasm about its products, I think that they have dramatically undersold how much improved photos are from the iPhone X to the iPhone XS. It’s extreme, and it has to do with a technique Apple calls Smart HDR.

Smart HDR on the iPhone XS encompasses a bundle of techniques and technology, including highlight recovery, rapid-firing the sensor, an OLED screen with much improved dynamic range and the Neural Engine/image signal processor combo. The camera now runs faster sensors and offloads some of the work to the processor, which lets it fire off nearly two images for every one it used to, so that motion does not create ghosting in HDR images. It picks the sharpest image, merges the other frames into it in a smarter way and applies tone mapping that produces more even exposure and color in the roughest of lighting conditions.
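The merge strategy described above can be sketched in miniature: from a rapid-fire burst, pick the sharpest frame as the reference, then blend in the other exposures to recover tonal detail. Real Smart HDR does this per-region on dedicated silicon; this pure-Python toy (with made-up 1-D "frames" and an assumed blend weight) only shows the shape of the idea:

```python
# Hedged sketch of burst merging: sharpest frame wins, others fill in tones.

def sharpness(frame):
    """Crude sharpness score: total difference between adjacent pixels."""
    return sum(abs(a - b) for a, b in zip(frame, frame[1:]))

def merge_burst(frames, reference_weight=0.7):
    """Weighted blend: the sharpest frame dominates, the rest contribute."""
    ref = max(frames, key=sharpness)
    others = [f for f in frames if f is not ref]
    merged = []
    for i, ref_px in enumerate(ref):
        other_avg = sum(f[i] for f in others) / len(others)
        merged.append(reference_weight * ref_px + (1 - reference_weight) * other_avg)
    return ref, merged

# Three 1-D "frames": one sharp (high contrast), two softer exposures.
burst = [[0, 255, 0, 255], [100, 140, 100, 140], [90, 150, 90, 150]]
ref, merged = merge_burst(burst)
print(ref)  # the high-contrast frame is chosen as the reference
```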

iPhone XS shot: better range of tones, skin tone and black point

iPhone X shot: not a bad image at all, but blocking up of shadow detail, flatter skin tone and blue shift

Nearly every image you shoot on an iPhone XS or iPhone XS Max will have HDR applied to it. It does it so much that Apple has stopped labeling most images with HDR at all. There’s still a toggle to turn Smart HDR off if you wish, but by default it will trigger any time it feels it’s needed.

And that includes more types of shots that could not benefit from HDR before. Panoramic shots, burst shots, low-light photos and every frame of Live Photos are now processed.

The results for me have been massively improved quick snaps with no thought given to exposure or adjustments due to poor lighting. Your camera roll as a whole will just suddenly start looking like you’re a better picture taker, with no intervention from you. All of this is capped off by the fact that the OLED screens in the iPhone XS and XS Max have a significantly improved ability to display a range of color and brightness. So images will just plain look better on the wider gamut screen, which can display more of the P3 color space.

Under the hood

As far as Face ID goes, there has been no perceptible difference for me in speed or recognition rate, but my facial model has been training on my iPhone X for a year; it’s starting fresh on the iPhone XS. And I’ve always been lucky that Face ID has just worked for me most of the time. The gist of the improvements here is jumps in acquisition times and in confirmation of the map to pattern match. There are also supposed to be improvements in off-angle recognition of your face, say when lying down or when your phone is flat on a desk. I tried a lot of different positions here and could never really definitively say that the iPhone XS was better in this regard, though, as I said above, it very likely takes training time to get it near the confidence levels that my iPhone X has stored away.

In terms of CPU performance the world’s first at-scale 7nm architecture has paid dividends. You can see from the iPhone XS benchmarks that it compares favorably to fast laptops and easily exceeds iPhone X performance.

The Neural Engine and faster A12 chip have meant better frame rates in intense games and AR, faster image searches and some small improvements in app launches. One easy way to demonstrate this is the video from the iScape app, captured on an iPhone X and an iPhone XS. You can see how jerky and FPS-challenged the iPhone X is in a similar AR scenario. There is so much more overhead for AR experiences that I know developers are going to be salivating over what they can do here.

The stereo sound is impressive: surprisingly decent separation for a phone, and definitely louder. The tradeoff is that you get asymmetrical speaker grilles, so if that kind of thing annoys you, you’re welcome.

Upgrade or no

Every other year for the iPhone I see and hear the same things — that the middle years are unimpressive and not worthy of upgrading. And I get it, money matters, phones are our primary computer and we want the best bang for our buck. This year, as I mentioned at the outset, the iPhone X has created its own little pocket of uncertainty by still feeling a bit ahead of its time.

I don’t kid myself into thinking that we’re going to have an honest discussion about whether you want to upgrade from the iPhone X to iPhone XS or not. You’re either going to do it because you want to or you’re not going to do it because you don’t feel it’s a big enough improvement.

And I think Apple is completely fine with that because iPhone XS really isn’t targeted at iPhone X users at all, it’s targeted at the millions of people who are not on a gesture-first device that has Face ID. I’ve never been one to recommend someone upgrade every year anyway. Every two years is more than fine for most folks — unless you want the best camera, then do it.

And, given Apple’s fairly bold talk about making sure that iPhones last as long as they can, I think it is well into the era where it is planning on having a massive installed user base that rents iPhones from it on a monthly, yearly or biennial basis. Because that user base will need for-pay services that Apple can provide. And it seems to be moving in that direction already, with phones as old as the five-year-old iPhone 5s still getting iOS updates.

With the iPhone XS, we might just be seeing the true beginning of the iPhone-as-a-service era.

YouTube Kids’ latest update is giving parents more control over what their kids watch. Following a change earlier this year that allowed parents to limit viewing options to human-reviewed channels, YouTube today is adding another feature that will give parents the ability to explicitly whitelist every channel or video they want to be available to their children through the app.

Additionally, YouTube Kids is launching an updated experience to serve the needs of a slightly older demographic: tween viewers ages 8 through 12. This mode adds new content, like popular music and gaming videos.

The company had promised in April these changes were in the works, but didn’t note when they’d be going live.

With the manual whitelisting feature, parents can visit the app’s Settings, go to their child’s profile, and toggle on an “Approved Content Only” option. They can then handpick the videos they want their kids to have access to watch through the YouTube Kids app.

Parents can opt to add any video, channel, or collection of channels they like by tapping the “+” button, or they can search for a specific creator or video through this interface.

Once this mode is enabled, kids will no longer be able to search for content on their own.

While this is a lot of manual labor on parents’ part, it does serve the needs of those with very young children who aren’t comfortable with YouTube Kids’ newer “human-reviewed channels” filtering option, as mistakes could still slip through.

A “human-reviewed” channel means that a YouTube moderator has watched several videos on the channel, to determine if the content is generally appropriate and kid-friendly, but it doesn’t mean every single video that is later added to the channel will be human-reviewed.

Instead, future uploads to the channel will only go through YouTube’s algorithmic layers of security, the company has said.

Unfortunately, while there is now a whitelisting option, there’s still no option to blacklist videos or channels to block them from the app.

That’s a problem because there are videos that are perfectly “kid-safe” that parents just want to limit for other reasons. “How to make slime” videos come to mind – something that parents everywhere likely want to block at scale after having their houses destroyed by the goo. (Thanks YouTube. Thanks Katrina Garcia.)

YouTube Kids expands to tweens

The other new feature now arriving will update YouTube Kids for an older audience that’s beginning to outgrow the preschool-ish look-and-feel of the app, and the way it sometimes pushes content that’s “for babies,” as my 8-year-old would put it.

Instead, parents will be able to turn on the “Older” content level setting that opens up YouTube Kids to include less restricted content for kids ages 8 to 12.

According to the company, this includes music and gaming videos – which is basically something like 90% of kids’ YouTube watching at this age. (Not an official stat. Just what it feels like over here.)

The “Younger” option will continue to feature things like sing-alongs and other age-appropriate educational videos, but YouTube Kids’ “Older” mode will let kids watch different kinds of videos, like music videos, gaming videos, shows, nature and wildlife videos, and more.

YouTube stresses to parents that its ability to filter content isn’t perfect – inappropriate content could still slip through. It needs parents to participate by blocking and flagging videos, as that comes up.

It’s best if kids continue to watch YouTube while in parents’ presence, of course, and without headphones, or on the big screen in the living room where you can moderate kids’ viewing yourself.

But there are times when you need to use YouTube as the babysitter or a distraction so you can get things done. The new whitelisting option could help parents feel more comfortable letting their kids loose on the app.

Meanwhile, older kids will appreciate the expanded freedom. (And you won’t be constantly begged for your own phone where “regular YouTube” is installed, as a result.)

YouTube says the parental controls are rolling out today globally on Android and coming soon to iOS. The “Older” option is rolling out now in the U.S. and will expand globally in the future.

Instagram is testing a way to allow users to tag their friends in their video posts, not just in photos, TechCrunch has learned and the company confirmed. The option works similarly to tagging photos, but instead of pressing the small icon at the bottom left to see the list of tagged names appear over top of the content – something that would be more difficult with videos – the button links to a list of tagged people.

When you tap this button, you’re directed to a new page titled “People in this Video” with all the Instagram users who have either appeared in the video, or who the original poster wants to alert in some way.

As far as we can tell, these videos don’t also copy over to the tagged users’ profiles, where tagged photos typically appear today. But that could come further down the road.

Video tagging is also not appearing on the web version of Instagram at present, only on mobile.

Instagram didn’t want to share much information about the test, nor discuss its plans for a larger rollout of the feature – which is currently unsupported for anyone who hasn’t been opted in to the test by the company.

However, it would say that the experiment is underway right now with a “small percentage” of Instagram’s users.

“We’re always testing ways to improve the experience on Instagram and bring you closer to the people and things you love,” a spokesperson confirmed, in a statement.

Above: video tagging spotted on Instagram account @cablegirlsrd

Instagram has offered photo tagging since 2013, and later rolled out support for things like tagging products and tagging friends in Stories. But although video sharing arrived on the platform in June 2013, Instagram has yet to introduce a way for users to properly tag their friends. Rather, its FAQ suggests that users should mention friends in a comment so they’ll get a notification.

That may have been sufficient for some time, but video is a more critical aspect to Instagram’s platform these days, especially as it explores how to enable better video discovery through its user interface, direct people to its newest product, IGTV, and connect larger groups together in video chat sessions.

Tagging videos, then, is an obvious, if long past due, next step – and one that can drive increased engagement as tagged users relaunch the app following their notifications.

The feature could also make way for shoppable videos, not just photos, and allow Instagram influencers to post videos of their favorite products and places, while pointing fans to those brands’ own Instagram accounts in a more structured fashion than is possible today.

Priscilla Chan is so much more than Mark Zuckerberg’s wife. A teacher, doctor, and now one of the world’s top philanthropists, she’s a dexterous empath determined to help. We’ve all heard Facebook’s dorm-room origin story, but Chan’s epiphany of impact came on a playground.

In this touching interview this week at TechCrunch Disrupt SF, Chan reveals how a child too embarrassed to go to class because of their broken front teeth inspired her to tackle healthcare. “How could I have prevented it? Who hurt her? And has she gotten healthcare, has she gotten the right dental care to prevent infection and treat pain? That moment compelled me, like, ‘I need more skills to fight these problems.'”

That’s led to a $3 billion pledge towards curing all disease from the Chan Zuckerberg Initiative’s $45 billion-plus charitable foundation. Constantly expressing gratitude for being lifted out of the struggle of her refugee parents, she says, “I knew there were so many more deserving children and I got lucky.”

Here, Chan shares her vision for cause-based philanthropy designed to bring equity of opportunity to the underserved, especially in Facebook’s backyard in The Bay. She defends CZI’s apolitical approach, making allies across the aisle despite the looming spectre of the Oval Office. And she reveals how she handles digital well-being and distinguishes between good and bad screen time for her young daughters Max and August. Rather than fielding questions about Mark, this was Priscilla’s time to open up about her own motivations.

Most importantly, Chan calls on us all to contribute in whatever way feels authentic. Not everyone can sign the Giving Pledge or dedicate their full-time work to worthy causes. But it’s time for tech’s rank-and-file rich to dig a little deeper. Sometimes that means applying their engineering and product skills to develop sustainable answers to big problems. Sometimes that means challenging the power structures that led to the concentration of wealth in their own hands. She concludes, “You can only try to break the rules so many times before you realize the whole system’s broken.”

[gallery ids="1706485,1706479,1706480,1706481,1706483,1706484,1706492,1706491,1706487,1706486,1706478,1706477"]

YouTube today announced a suite of new features designed to offer creators and their fans new ways to contribute to charitable causes. This includes beta versions of new fundraising and campaign matching tools, as well as a variation of YouTube’s Super Chat service, called “Super Chat for Good.”

Explains the company, YouTube creators have already been using its video platform to raise awareness about causes they care about, and bring their communities together. The launch of YouTube Giving, as this combined toolset is called, will now allow them to do more by making it easier for fans to donate to over 1 million nonprofits.

With Fundraisers, YouTube creators and qualifying U.S. nonprofits (registered 501(c)(3) nonprofits) will be able to create fundraising campaigns that are embedded next to their YouTube videos.

Directly beneath the video, viewers will see a “Donate” button that will allow them contribute to the campaign. YouTube says it will handle the logistics and payment processing.

This is rolling out now to a small group of U.S. and Canadian creators during this beta. One example, live now, is a Hope for Paws Fundraiser that’s raising funds towards animal rescue and recovery.

During the beta, YouTube will cover all transaction fees, allowing 100% of donations to reach the nonprofits.

Community Fundraisers, now launching in beta to U.S. creators, will allow YouTubers to team up together to co-host the same fundraiser. The feature set here is similar to regular fundraisers, but is designed so the fundraiser appears at the same time across all participants’ videos. It will also display how much money all communities have raised together.

This is being kicked off with a group fundraiser by a dozen gaming creators who will raise money from their 37 million subscribers for St. Jude’s Children’s Hospital.

Campaign Matching hasn’t yet launched, but will soon allow creators to organize fundraisers where they can receive matching pledges from other creators, brands and businesses to increase how much they’re able to raise.

The matching pledges and who they’re from will also be displayed as part of this feature. This is expected to arrive in the weeks ahead, says YouTube.

Another new addition leverages YouTube’s existing Super Chat system, which allows fans to pay to have their comments highlighted. In Super Chat for Good, 100% of viewers’ Super Chat purchases will go towards the nonprofit the creator is supporting.

YouTube says it will take in feedback from the community and expand the features to more creators over the next few months.

Online fundraising is a popular activity today across sites like GoFundMe, Kickstarter, Indiegogo, and Patreon. Facebook also entered the market a couple of years ago. In mid-2016 it rolled out the ability for its users to raise funds for nonprofits they support, before later expanding this fundraising toolset to live video, and broadening the types of fundraisers people could host.

Facebook charges platform fees on some of these fundraisers, except for those for charitable organizations.

YouTube says it also won’t charge fees during the beta, but declined to tell us what its plans for fees are when the beta period wraps.

The company this year has been expanding the types of things creators can do with their videos, in the face of increased competition from Facebook Watch and Amazon’s Twitch. Earlier this summer, YouTube introduced a suite of other features like channel memberships, merchandise shelves, marketing partnerships via FameBit and the launch of “Premieres,” to offer creators a middle ground between live streaming and pre-recorded video.

Facebook Watch, the social network’s home to original video content and answer to YouTube, is now becoming available worldwide. The Watch tab had first launched last August, only in the U.S., and now touts over 50 million monthly viewers who watch at least a minute of video within Watch. Since the beginning of the year, total time spent viewing videos in Watch is up by 14x, says Facebook.

The company has continued to add more social features to Watch over the past year, including participatory viewing experiences like Watch Parties, Premieres, and those with audience involvement, like an HQ Trivia competitor, Confetti, built on the new gameshow platform.

Watch also offers basic tools for discovery, saving videos for later viewing, and lets users customize a feed of videos from Facebook Pages they follow.

Along with international availability, Facebook is introducing “Ad Breaks” to more publishers. These can be either mid-roll or pre-roll ads, or images below the video. Publishers can either insert the ads themselves or use Facebook’s automated ad insertion features. Facebook says 70+ percent of mid-roll ads are viewed to completion.

Ad Breaks are now offered to creators who publish 3-minute videos that have generated more than 30,000 1-minute views in total over the past two months; who have 10,000 or more Facebook followers; who are in a supported country; and who meet other eligibility criteria.
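Those stacked criteria amount to a simple boolean check. As a purely illustrative sketch — the thresholds come from the article, but the function name, parameters, and country codes are my own assumptions, not anything Facebook publishes:

```python
# Hypothetical model of the Ad Breaks eligibility rules described above.
# Thresholds are as reported; everything else is illustrative.
SUPPORTED_COUNTRIES = {"US", "GB", "IE", "AU", "NZ"}

def eligible_for_ad_breaks(minute_views_last_2_months, followers, country,
                           publishes_3_minute_videos=True):
    """Return True if a Page meets the stated Ad Breaks thresholds."""
    return (publishes_3_minute_videos
            and minute_views_last_2_months > 30_000
            and followers >= 10_000
            and country in SUPPORTED_COUNTRIES)

print(eligible_for_ad_breaks(45_000, 12_000, "US"))  # True
print(eligible_for_ad_breaks(45_000, 9_000, "US"))   # False: too few followers
```

Note that even a Page passing all of these checks would still be subject to the unspecified “other eligibility criteria,” so a check like this could only ever be a first-pass filter.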

Supported countries today include the U.S., UK, Ireland, Australia, and New Zealand. Next month, that list will expand to include Argentina, Belgium, Bolivia, Chile, Colombia, Denmark, The Dominican Republic, Ecuador, El Salvador, France, Germany, Guatemala, Honduras, Mexico, Netherlands, Norway, Peru, Portugal, Spain, Sweden, and Thailand, supporting English content and other local languages. More countries and languages will then follow.

Also new today is the global launch of Creator Studio, where Pages can manage their entire content library and business. This includes the ability to search across their library to view post-level details and insights, as well as manage interactions across Pages, Facebook Messages, comments, and Instagram. Other tools here focus on using Ad Breaks, viewing monetization and payments, and publishing the videos.

The Creator Studio is also gaining a new audience retention metric, allowing publishers to better program their content.

YouTube also launched an updated version of its own Creator Studio this year, now called YouTube Studio, offering similar analytics for its network.

Facebook isn’t the only one making a play for YouTube’s creators – Amazon’s Twitch has been offering deals to woo creators to its game-streaming site, a recent report claimed.

“Our goal is to provide publishers and creators with the tools they need to build a business on Facebook,” the company said in an announcement. “Fostering an active, engaged community and sharing longer content that viewers seek out and regularly come back to are key to finding success,” it noted.

 

Spotify today is announcing a new way for students to access its Premium service, along with Hulu and Showtime, for a discounted price of $4.99 per month for all three. The new deal is an expansion of the existing Hulu and Spotify bundle for students, which launched around a year ago at the same price. Now those existing subscribers as well as new ones will be able to stream from all three services when they sign up.

The new bundle consists of Spotify Premium for Students, Hulu with Limited Commercials, and Showtime. Students will need to be attending a Title IV accredited institution in the U.S. to qualify for the discounted pricing.

When Spotify teamed up with Hulu back in September 2017, it was the first time it had ever partnered with a streaming video service on a bundle deal. The deal had arrived just as Spotify’s own efforts into original video were failing, and its head of video Tom Calderone was departing amid a shift in content strategy.

For both Spotify and Hulu, a bundle of music and video allows them to steel themselves against the looming threat from Apple, and its expected launch of its own streaming video service, which itself could be bundled with an Apple Music subscription. Because of Apple’s built-in advantage that comes with the iPhone, Apple Music has already outpaced Spotify in the U.S. – and clearly, the streaming services are concerned about its video plans.

According to Spotify, the reasoning behind a bundle has to do with the fact that college students are streaming entertainment more than any other age group. It wanted to reach them with better pricing, it says.

“We’ve been really pleased about the uptake of the original Hulu bundle, so are happy to be expanding the offering,” a spokesperson told TechCrunch.

The company, however, declined to share the number of students who had taken advantage of the bundle discount so far. Spotify had also expanded this same bundle to all customers in April, at $12.99 per month for both, instead of $7.99 per month for Hulu and $9.99 per month for Spotify, when sold separately.

Spotify has added subscribers since those launches, but it’s unclear how many were from bundles. Today, it has 83 million paying subscribers out of 180 million monthly users. That’s up from the 60 million paying subscribers it had when the student bundle was first announced, when it was then twice as big as Apple Music.

With the addition of Showtime, students will be able to watch series like “Shameless,” “Who Is America?,” “The Chi,” “Billions,” “Ray Donovan,” “Smilf,” “The Affair,” “Homeland,” “Twin Peaks,” the upcoming Jim Carrey comedy “Kidding,” and upcoming “Escape at Dannemora,” among others, plus movies, documentaries, sports and comedy specials.

Showtime currently costs $10.99 per month over-the-top, when purchased directly from the network itself, though it’s possible to find it for less elsewhere. For example, Amazon Channels sells the subscription a la carte for $8.99 per month, at present.
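For a quick sanity check on the discount, the quoted standalone prices can be totaled against the $4.99 bundle. The prices are as stated above; the variable names are purely illustrative:

```python
# Standalone monthly prices quoted in the article, vs. the student bundle.
spotify = 9.99      # Spotify Premium
hulu = 7.99         # Hulu with Limited Commercials
showtime = 10.99    # Showtime, direct from the network
bundle = 4.99       # student triple-play bundle

separate_total = spotify + hulu + showtime
savings = separate_total - bundle
print(f"Separately: ${separate_total:.2f}/mo; the bundle saves ${savings:.2f}/mo")
# Separately: $28.97/mo; the bundle saves $23.98/mo
```

In other words, the bundle prices all three services at roughly a sixth of what they cost separately, which underlines the point below about how heavily Spotify must be subsidizing the deal.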

To get all three services for $4.99 per month is an almost ridiculous price at this point, and one that’s intended to serve as a way to addict students at a time when their media consumption is heavy, so they’ll become avid users.

Once students have created their playlists, downloaded their songs, followed their favorite bands, networks, and shows, they will benefit from the personalization these services offer. After a few years’ time, it will be difficult for the students to abandon the services when the price increases after graduation – or, at least, that’s the thinking on the streamers’ part.

Spotify won’t discuss the partnership particulars, but it’s obviously subsidizing the services here.

To sign up for the triple-play bundle, students can go to spotify.com/us/student. During the first three months, Spotify will only be $0.99, bringing the cost down even further.

Netflix is testing video promos that play in between episodes of shows a viewer is streaming, the company confirmed to TechCrunch. The promos are full-screen videos, personalized to the user, featuring content Netflix would have otherwise suggested elsewhere in its user interface – like on a row of recommendations, for example. The promos also displace the preview information for the next episode being binged, like the title, description, and thumbnail that previously appeared on the right side of the screen.

The test was first spotted by Cord Cutters News, following a Reddit thread filled with complaints. A number of Twitter users are angrily tweeting about the change, too.

We understand the introduction of promos in between the episodes is not a feature Netflix is rolling out to its subscribers at this time.

Instead, it’s one of the hundreds of tests Netflix runs every year, many of which are focused on how to better promote Netflix’s original programming to its customers.

This test is currently live for a small percentage of Netflix’s global audience.

And unlike some prior tests, the promos may feature any content in Netflix’s catalog – not just its original programming.

There is some misinformation about the way the test works out there because of what may be user error on the part of the original Reddit user, or an undocumented bug.

Image credit: Reddit user WhyAllTheTrains

The original Reddit post said these new video promos are “unskippable,” noting there’s a Continue button with a countdown timer on it that looks similar to the one you’d see on a YouTube ad.

But we understand that the test in question does allow users to push that Continue button at any time to move forward to the next episode.

The promos, in other words, are interruptive, but they are not unskippable.
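That “interruptive but skippable” behavior comes down to a countdown that user input can cut short at any moment. A minimal, purely hypothetical sketch — none of these names reflect Netflix’s actual player code:

```python
def play_promo(duration_ticks, is_continue_pressed):
    """Run a promo countdown the viewer can cut short.

    `is_continue_pressed` is a callable polled once per tick; returning
    True simulates the viewer hitting the Continue button early.
    """
    for remaining in range(duration_ticks, 0, -1):
        if is_continue_pressed():
            return "skipped"
        # A real player would render `remaining` on the Continue button here.
    return "completed"

print(play_promo(5, lambda: True))   # viewer skips immediately -> skipped
print(play_promo(5, lambda: False))  # viewer waits it out -> completed
```

An unskippable ad, by contrast, would simply ignore the button until the countdown hit zero — which is what the original Reddit post mistakenly described.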

Needless to say, consumer reaction to these promos – which consumers perceive as advertisements – has been fairly critical so far. Netflix is a paid subscription service, not an ad-supported one like Hulu with Limited Commercials. That means customers expect on-demand viewing with no ads. And they think of anything that disrupts their viewing as an advertisement, as a result.

But Netflix is always trying to figure out how to better showcase its content for subscribers, in order to help them discover new shows and keep them engaged.

It has run many experiments like this over the years, not all of which pan out. For example, last year it toyed with pre-roll video previews, and more recently it began a test that promotes its shows on the background of the login screen.

Only when Netflix sees data that proves a test increases user engagement or another metric it cares about will it roll out the feature to all subscribers. That’s been the case with those auto-playing trailers, for instance. While not necessarily beloved, they seem to be doing the job.

The company’s longer-term goal is to make its user interface more video-rich and personalized, so it’s not surprising that it’s finding new ways to insert video into that experience.

Netflix, reached for comment about the new test, offered the following statement:

At Netflix, we conduct hundreds of tests every year so we can better understand what helps members more easily find something great to watch. A couple of years ago, we introduced video previews to the TV experience, because we saw that it significantly cut the time members spend browsing and helped them find something they would enjoy watching even faster. Since then, we have been experimenting even more with video based on personalized recommendations for shows and movies on the service or coming shortly, and continue to learn from our members.

In this particular case, we are testing whether surfacing recommendations between episodes helps members discover stories they will enjoy faster. It is important to note that a member is able to skip a video preview at any time if they are not interested.

Google has launched a new video-based Q&A app called Cameos on the App Store, which allows people to answer questions about themselves, then share those answers directly on Google. The app appears to be aimed at celebrities and other public figures, who are often the subject of people’s Google searches. With the Cameos app, they can address fans’ questions in their own voice, instead of leaving the answers up to other websites.

The feature is an extension of the company’s “Posts on Google” platform which has been slowly rolling out over the past couple of years, giving some people and organizations the ability to post directly to Google’s search result pages.

Initially, “Posts on Google” was open only to a small number of celebrities, sports teams and leagues, movie studios and museums. But last year, it expanded to local businesses who could then publish their events, products and services. This spring, it opened up to musicians.

Those invited to use the service have been able to post updates to Google which include text, images, video, GIFs, events, and links to other sites. In a way, it’s like Google’s version of Twitter – but with the goal of helping web searchers find answers to questions.

The new Cameos app is focused specifically on video posts.

As the App Store description explains: “Record video answers to the most asked questions on Google and then post them right to Google. Now, when people search for you, they’ll get answers directly from you.”

The app also allows celebrities using Cameos to see the top questions the internet wants answers to, so they can pick and choose which of those they want to answer. Their answers, recorded with their iPhone’s camera, will be published directly to Google search and in the Google app.

The service brings to mind Instagram’s new Q&A feature, launched this July. Via a Questions widget that’s added to an Instagram Story, users can solicit questions from their followers. The recipient can then select the questions they want to respond to, and post their replies publicly to their Instagram Story.

The feature became so popular, so quickly, that it began to dominate people’s Stories feeds. There was even a bit of backlash.

Google’s Cameo video answers could be more useful, as they’d only appear when that question was searched on Google. It would also give Google a social platform of sorts – a market it has tried to compete in for years, and is now littered with failures like Orkut, Dodgeball, Latitude, Lively, Google Wave, Google Buzz, and of course, Google+. At least with Posts, Google is focusing on what it does best: Search.

Google has been asked to comment. We’ll update if a response is provided.

The Cameos app description also notes that it will add more questions for celebs to answer on a regular basis.

Access to use Cameos is only available upon invitation. Those interested can download the iOS app to request access.

 

Jeffrey Katzenberg’s new mobile video startup NewTV, now headed by CEO Meg Whitman, has closed on $1 billion in new funding, the company has confirmed. WndrCo, Katzenberg’s tech and media holding company, officially announced the round’s close on Tuesday, following last month’s report from CNN, which had first leaked the news of the billion-dollar investment.

CNN’s report had attributed the funding to investors like Disney, 21st Century Fox, Warner Bros, Entertainment One and other media companies, noting they had put in a combined $200 million.

The company has now confirmed the investor lineup includes Hollywood studios 21st Century Fox, Disney, Entertainment One, ITV, Lionsgate, Metro Goldwyn Mayer, NBCUniversal, Sony Pictures Entertainment, Viacom, and Warner Media. On the technology side, it says Alibaba is an investor.

In addition, the round was led by strategic partners The Goldman Sachs Group, Inc., JPMorgan Chase & Co., Liberty Global, and VC firm Madrone Capital.

“More so than ever, people want easy access to the highest quality entertainment that fits perfectly into their busy, on-the-go lifestyles,” said Meg Whitman, CEO of NewTV, in a statement. “With NewTV, we’ll give consumers a user-friendly platform, built for mobile, that delivers the best stories, created by the world’s top talent, allowing users to make the most of every moment of their day.”

NewTV had not shared much detail about its ambitions ahead of this fundraise, beyond its bigger goal of reinventing TV for the mobile era. Specifically, it’s interested in taking the sort of quality programming you’d find on a service like Netflix, broken up into smaller, bite-sized videos of 10 minutes or less – designed specifically for mobile viewing.

In an interview with Variety, the company has now disclosed that NewTV will launch later in 2019 with a premium lineup of original, short-form series where each episode is 10 minutes long. The service will include both an ad-supported tier and a commercial-free plan, similar to Hulu.

Its original content will include both scripted and unscripted shows, like sitcoms, dramas, reality shows, and documentaries, but not live TV like you’d find on Sling TV or YouTube TV, for example. NewTV will partner with producers to license their programming, but it won’t own or produce shows itself.

Katzenberg also positioned NewTV – which the company says is only the “working title” for now – as something that’s not a direct competitor with Netflix, Hulu, or HBO, but is rather “a different use case.”

As he told Variety, the difference isn’t just the length of the content, but that the NewTV platform itself will be built from scratch for the mobile viewing experience.

In terms of distribution, NewTV will look to telco partnerships.

This could be attractive to some players, who are concerned by the implications of the AT&T / Time Warner merger – after all, AT&T is already leveraging its new asset to run not one, but two streaming TV services. Meanwhile, Verizon, TC’s parent company by way of Oath, could also be looking for a better entry into the market following the closure of its own new-fangled mobile video service, go90, whose failure cost it $658 million.

That being said, NewTV – however clever the format or the app it runs in – will still have to compete for viewers’ time – and a lot of that time today is spent watching streaming services’ programming, even if NewTV doesn’t think of them as rivals. In addition, younger people also stream YouTube videos, which are often short-form, original programs, too. And while they may not be of “HBO quality,” that doesn’t seem to matter to the audience.

WndrCo has raised $750 million prior to this round, much of which had also been invested in NewTV. The company has additionally backed other tech and media startups, including Mixcloud, Axios, Node, Flowspace, Whistle Sports, and TYT Network.