Steve Thomas - IT Consultant

Google is playing catch-up in the cloud, and as such it wants to provide flexibility to differentiate itself from AWS and Microsoft. Today, the company announced a couple of new options to help separate it from the cloud storage pack.

Storage may seem stodgy, but it’s a primary building block for many cloud applications. Before you can build an application you need the data that will drive it, and that’s where the storage component comes into play.

One of the issues companies have as they move data to the cloud is making sure it stays close to the application when it’s needed to reduce latency. Customers also require redundancy in the event of a catastrophic failure, but still need access with low latency. The latter has been a hard problem to solve until today when Google introduced a new dual-regional storage option.

As Google described it in the blog post announcing the new feature, “With this new option, you write to a single dual-regional bucket without having to manually copy data between primary and secondary locations. No replication tool is needed to do this and there are no network charges associated with replicating the data, which means less overhead for you storage administrators out there. In the event of a region failure, we transparently handle the failover and ensure continuity for your users and applications accessing data in Cloud Storage.”

This gives companies redundancy with low latency and control over where their data lives, without having to move it manually should the need arise.

Knowing what you’re paying

Companies don’t always require instant access to their data, and Google (like other cloud vendors) offers a variety of storage options that make it cheaper to store and retrieve archived data. As of today, Google is offering a clear way to determine costs based on the storage types customers choose. While it might not seem revolutionary to let customers know what they are paying, Dominic Preuss, Google’s director of product management, says it hasn’t always been a simple matter to calculate these kinds of costs in the cloud. Google decided to simplify it by clearly outlining the costs for medium-term (Nearline) and long-term (Coldline) storage across multiple regions.

As Google describes it, “With multi-regional Nearline and Coldline storage, you can access your data with millisecond latency, it’s distributed redundantly across a multi-region (U.S., EU or Asia), and you pay archival prices. This is helpful when you have data that won’t be accessed very often, but still needs to be protected with geographically dispersed copies, like media archives or regulated content. It also simplifies management.”

Under the new plan, you can select the type of storage you need, the kind of regional coverage you want and you can see exactly what you are paying.

Google Cloud storage pricing options. Chart: Google

Each of these new storage services has been designed to provide additional options for Google Cloud customers, giving them more transparency around pricing and flexibility and control over storage types, regions and the way they deal with redundancy across data stores.

Egnyte launched in 2007 just two years after Box, but unlike its enterprise counterpart, which went all-cloud and raised hundreds of millions of dollars, Egnyte saw a different path with a slow and steady growth strategy and a hybrid niche, recognizing that companies were going to keep some content in the cloud and some on prem. Up until today it had raised a rather modest $62.5 million, and hadn’t taken a dime since 2013, but that all changed when the company announced a whopping $75 million investment.

The entire round came from a single investor, Goldman Sachs’ Private Capital Investing arm, a part of Goldman’s Special Situations group. Holger Staude, vice president of Goldman Sachs Private Capital Investing, will join Egnyte’s board under the terms of the deal. He says Goldman liked what it saw: a steady company poised for bigger growth with the right influx of capital. In fact, the company has had more than eight straight quarters of growth and has been cash flow positive since Q4 2016.

“We were impressed by the strong management team and the company’s fiscal discipline, having grown their top line rapidly without requiring significant outside capital for the past several years. They have created a strong business model that we believe can be replicated with success at a much larger scale,” Staude explained.

Company CEO Vineet Jain helped start the company as a way to store and share files in a business context, but over the years he has built that into a platform that includes security and governance components. Jain also saw a market poised for growth as companies move increasing amounts of data to the cloud, and he felt the time was right to take on more significant outside investment. His first step, he said, was to build a list of investors, but Goldman stood out.

“Goldman had reached out to us before we even started the fundraising process. There was inbound interest. They were more aggressive compared to others. Given there was prior conversations, the path to closing was shorter,” he said.

He wouldn’t discuss a specific valuation, but did say they have grown 6x since the 2013 round and he got what he described as “a decent valuation.” As for an IPO, he predicted this would be the final round before the company eventually goes public. “This is our last fund raise. At this level of funding, we have more than enough funding to support a growth trajectory to IPO,” he said.

Philosophically, Jain has always believed that it wasn’t necessary to hit the gas until he felt the market was really there. “I started off from a point of view to say, keep building a phenomenal product. Keep focusing on a post sales experience, which is phenomenal to the end user. Everything else will happen. So this is where we are,” he said.

Jain indicated the round isn’t about taking on money for money’s sake. He believes it is going to fuel a huge growth stage for the company. He doesn’t plan to focus these new resources strictly on the sales and marketing department, as you might expect. He wants to scale every department in the company, including engineering, post-sales and customer success.

Today the company has 450 employees and more than 14,000 customers across a range of sizes and sectors including Nasdaq, Thoma Bravo, AppDynamics and Red Bull. The deal closed at the end of last month.

Over the last several months, Dropbox has been undertaking an overhaul of its internal search engine for the first time since 2015. Today, the company announced that the new version, dubbed Nautilus, is ready for the world. The latest search tool takes advantage of a new architecture powered by machine learning to help pinpoint the exact piece of content a user is looking for.

While an individual user may have a much smaller body of documents to search across than the World Wide Web, the paradox of enterprise search says that the fewer documents you have, the harder it is to locate the correct one. Yet Dropbox faces a host of additional challenges when it comes to search. It has more than 500 million users and hundreds of billions of documents, making finding the correct piece for a particular user even more difficult. The company had to take all of this into consideration when it was rebuilding its internal search engine.

One way for the search team to attack a problem of this scale was to bring machine learning to bear on it, but it took more than an underlying level of intelligence to make this work. It also required completely rethinking the entire search tool at an architectural level.

That meant separating the two main pieces of the system: indexing and serving. The indexing piece is, of course, crucial in any search engine. A system of this size and scope needs a fast indexing engine to keep up with a vast and constantly changing body of documents. This is the piece that’s hidden behind the scenes. The serving side of the equation is what end users see when they query the search engine and the system generates a set of results.

Nautilus Architecture Diagram: Dropbox

Dropbox described the indexing system in a blog post announcing the new search engine: “The role of the indexing pipeline is to process file and user activity, extract content and metadata out of it, and create a search index.” They added that the easiest way to index a corpus of documents would be to just keep checking and iterating, but that couldn’t keep up with a system this large and complex, especially one that is focused on a unique set of content for each user (or group of users in the business tool).

They account for that in a couple of ways. They create offline builds every few days, but they also watch as users interact with their content and try to learn from that. As that happens, Dropbox creates what it calls “index mutations,” which they merge with the running indexes from the offline builds to help provide ever more accurate results.
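To make the mechanics concrete, here is a toy sketch of that overlay step, with invented names (Dropbox’s actual pipeline is, of course, far more elaborate):

```python
# Toy sketch: overlay live "index mutations" on a periodic offline index build.
# All names here are illustrative, not Dropbox's actual API.

def merge_index(offline_index, mutations):
    """Apply adds, updates and deletes recorded since the last offline build."""
    index = dict(offline_index)          # start from the offline build
    for doc_id, content in mutations:
        if content is None:
            index.pop(doc_id, None)      # a delete mutation removes the doc
        else:
            index[doc_id] = content      # an add/update mutation overwrites it
    return index

offline = {"doc1": "quarterly report", "doc2": "meeting notes"}
mutations = [("doc2", "meeting notes v2"), ("doc3", "design doc"), ("doc1", None)]
merged = merge_index(offline, mutations)
print(merged)  # doc1 deleted, doc2 updated, doc3 added
```

The point of the split is that the expensive full rebuild happens rarely, while the cheap mutation log keeps results fresh in between.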

The indexing process has to take into account the textual content, assuming it’s a document, but it also has to look at the underlying metadata as a clue to the content. They use this information to feed a retrieval engine, whose job is to find as many documents as it can, as fast as it can, and worry about accuracy later.

It has to make sure it checks all of the repositories. For instance, Dropbox Paper is a separate repository, so the answer could be found there. It also has to take into account the access-level security, only displaying content that the person querying has the right to access.

Once it has a set of possible results, it uses machine learning to pinpoint the correct content. “The ranking engine is powered by a [machine learning] model that outputs a score for each document based on a variety of signals. Some signals measure the relevance of the document to the query (e.g., BM25), while others measure the relevance of the document to the user at the current moment in time,” they explained in the blog post.

After the system has a list of potential candidates, it ranks them and displays the results for the end user in the search interface, but a lot of work goes into that from the moment the user types the query until it displays a set of potential files. This new system is designed to make that process as fast and accurate as possible.
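That two-stage shape, a recall-focused retrieval pass followed by a scored ranking pass, can be sketched in a few lines of illustrative Python (the signal names and weights below are invented, not Dropbox’s):

```python
# Toy retrieve-then-rank sketch mirroring the two-stage design described above.

def retrieve(index, query):
    """Recall stage: grab every document matching any query term; accuracy comes later."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in index.items()
            if any(t in text.lower() for t in terms)]

def rank(candidates, signals, weights):
    """Precision stage: score each candidate as a weighted sum of its signals."""
    def score(doc_id):
        return sum(w * signals[doc_id].get(name, 0.0)
                   for name, w in weights.items())
    return sorted(candidates, key=score, reverse=True)

index = {"a": "tax report 2018", "b": "report draft", "c": "vacation photos"}
signals = {"a": {"text_match": 0.9, "recency": 0.2},
           "b": {"text_match": 0.7, "recency": 0.9},
           "c": {"text_match": 0.0, "recency": 0.5}}
weights = {"text_match": 1.0, "recency": 0.5}

results = rank(retrieve(index, "report"), signals, weights)
print(results)  # the recently edited draft outranks the older exact match
```

In a real system the weights would come from a trained model rather than being hand-set, but the division of labor is the same: retrieval casts a wide net, ranking sorts it.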

Microsoft Azure is getting a number of new storage options today that mostly focus on use cases where disk performance matters.

The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters.

Earlier this year, Microsoft launched its Premium and Standard SSD Managed Disks offerings for Azure into preview. As far as we can tell, these ‘ultra’ SSDs represent the next tier up from the Premium SSDs with even lower latency and higher throughput.

And talking about Standard SSD Managed Disks, this service is now generally available after only three months in preview. To top things off, all of Azure’s storage tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB storage capacity.

Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new, though. It offers users access to cloud storage using the standard SMB protocol. This new premium offering promises higher throughput and lower latency for these kinds of SMB operations.

more Microsoft Ignite 2018 coverage

AWS has its Snowball (and Snowmobile truck), Google Cloud has its data transfer appliance and Microsoft has its Azure Data Box. All of these are physical appliances that allow enterprises to ship lots of data to the cloud by uploading it into these machines and then shipping them to the cloud. Microsoft’s Azure Data Box launched into preview about a year ago and today, the company is announcing a number of updates and adding a few new boxes, too.

First of all, the standard 50-pound, 100-terabyte Data Box is now generally available. If you’ve got a lot of data to transfer to the cloud — or maybe collect a lot of offline data — then FedEx will happily pick this one up and Microsoft will upload the data to Azure and charge you for your storage allotment.

If you’ve got a lot more data, though, then Microsoft now also offers the Azure Data Box Heavy. This new box, which is now in preview, can hold up to one petabyte of data. Microsoft did not say how heavy the Data Box Heavy is, though.

Also new is the Azure Data Box Edge, which is now also in preview. In many ways, this is the most interesting of the additions since it goes well beyond transporting data. As the name implies, Data Box Edge is meant for edge deployments where a company collects data. What makes this version stand out is that it’s basically a small data center rack that lets you process data as it comes in. It even includes an FPGA to run AI algorithms at the edge.

Using this box, enterprises can collect the data, transform and analyze it on the box, and then send it to Azure over the network (and not in a truck). Using this, users can cut back on bandwidth cost and don’t have to send all of their data to the cloud for processing.

Also part of the same Data Box family is the Data Box Gateway. This is a virtual appliance, however, that runs on Hyper-V and VMware and lets users create a data transfer gateway for importing data into Azure. That’s not quite as interesting as a hardware appliance, but useful nonetheless.

Chances are you see a story about cloud storage, and you yawn and move on, but Wasabi, a startup from the folks who brought you Carbonite backup, might make you pause. That’s because they claim to have found a cheaper, faster way to store data, and apparently investors like what they are seeing, forking over $68 million for a Series B investment.

Yes, that’s a hefty amount for an early round, but with founders who have multiple successful exits, investors may have seen less risk than you might think. The company didn’t go with the usual Sand Hill Road suspects here, instead opting for an unconventional set of industry veterans and family offices, along with Forestay Capital, the technology fund of Swiss entrepreneur Ernesto Bertarelli.

Much like Packet, a startup that scored $25 million the other day, they are hoping to take on cloud giants by finding a seam in the market they can exploit. While Packet was looking at customized compute, Wasabi is concentrating squarely on storage, an area they understand well from their Carbonite days.

CEO David Friend reports they are offering a terabyte of storage for just $5 a month, and says they have been growing 30-40 percent month over month since they launched in May 2017. In fact, he says they already have 3,500 customers.

They took their time building their own custom storage solution, which he claims is faster and more efficient than anything out there, allowing them to undercut Amazon S3 storage prices. Amazon charges $0.023 per gigabyte for the first 50 terabytes. That works out to $23 a terabyte, substantially more than Wasabi’s asking price.
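The arithmetic behind that comparison is simple enough to check (using the vendors’ billing convention of 1 TB = 1,000 GB):

```python
# Back-of-envelope check of the per-terabyte prices quoted above.

S3_PER_GB = 0.023     # dollars per GB per month, S3's first-50-TB tier
WASABI_PER_TB = 5.00  # dollars per TB per month, Wasabi's flat rate

s3_per_tb = S3_PER_GB * 1000
multiple = s3_per_tb / WASABI_PER_TB
print(f"S3 works out to ${s3_per_tb:.2f}/TB, {multiple:.1f}x Wasabi's rate")
```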

That raises the question, though, of how they can afford to keep scaling such a solution. For starters, they use co-location facilities like Digital Realty and Equinix for their storage instead of building out their own data centers. Friend says that as they scale, they won’t be using their investment capital to add more capacity. Instead, they will be borrowing from banks in an apartment-building kind of model: you build the building, rent out the apartments and break even after a certain amount of time. He says Wasabi can continue to grow this way.

They are going after fat data targets like media and entertainment and genomics, where they believe companies looking for the best price possible will bypass the big three — Amazon, Google and Microsoft — to build a more cost-effective storage solution.

The road is littered with failed cloud storage plays, but these folks have an experienced team and plenty of money behind them. Time will tell whether they can buck the odds and take on the world’s biggest cloud companies by competing on price and performance, and whether they can keep prices this low as they grow and must add ever more capacity without the benefit of being web-scale.

When you’re primarily a storage company with enterprise aspirations, as Dropbox is, you need a layer to help people use the content in your system beyond simple file sharing. That’s why Dropbox created Paper, to act as that missing collaboration layer. Today the company announced some enhancements to Paper designed to keep people working in the collaboration tool without having to switch programs.

“Paper is Dropbox’s collaborative workspace for teams. It includes features where users can work together, assign owners to tasks with due dates and embed rich content like video, sound, photos from Youtube, SoundCloud, Pinterest and others,” a Dropbox spokesperson told TechCrunch.

With today’s enhancements you can paste a number of elements into Paper and get live previews. For starters, they are letting you link to a Dropbox folder in Paper, where you can view the files inside the folder, even navigating any sub-folders. When the documents in the folder change, Paper updates the preview automatically because the folder is actually a live link to the Dropbox folder. This one seems like a table stakes feature for a company like Dropbox.

Gif: Dropbox

In addition, Dropbox now supports Airtable, a kind of souped-up spreadsheet. With the new enhancement, you just grab an Airtable embed code and drop it into Paper. From there, you can see a preview in whatever view you saved the table in Airtable.

Finally, Paper now supports Lucidchart. As with Airtable and folders, you simply paste the link and you can see a live preview inside Paper. If the original chart changes, updates are reflected automatically in the Paper preview.

By now, it’s clear that workers want to maintain focus and not be constantly switching between programs. It’s why Box created the recently announced Activity Stream and Recommended Apps. It’s why Slack has become so popular inside enterprises. These tools provide a way to share content from different enterprise apps without having to open a bunch of tabs or separate apps.

Dropbox Paper is also about giving workers a central place to do their work where you can pull live content previews from different apps without having to work in a bunch of content silos. Dropbox is trying to push that idea along for its enterprise customers with today’s enhancements.

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, no matter whether that’s from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”

Microsoft today announced a couple of AI-centric updates for OneDrive and SharePoint users with an Office 365 subscription that bring more of the company’s machine learning smarts to its file storage services.

All of these features will launch at some point later this year. With the company’s Ignite conference in Orlando coming up next month, it’s probably a fair guess that we’ll see some of these updates make a reappearance there.

The highlight of these announcements is that starting later this year, both services will get automated transcription services for video and audio files. While video is great, it’s virtually impossible to find any information in these files without spending a lot of time. And once you’ve found it, you still have to transcribe it. Microsoft says this new service will handle the transcription automatically and then display the transcript as you’re watching the video. The service can handle over 320 file types, so chances are it’ll work with your files, too.

Other updates the company announced today include a new file view for OneDrive and Office.com that will recommend files to you by looking at what you’ve been working on lately across Microsoft 365 and making an educated guess as to what you’ll likely want to work on now. Microsoft will also soon use a similar set of algorithms to prompt you to share files with your colleagues after you’ve just presented them in a meeting with PowerPoint, for example.

Power users will also soon see access statistics for any file in OneDrive and SharePoint.

We have long known that the prices of cloud storage services like Dropbox, Google Drive and Microsoft OneDrive have been dropping over time. Yesterday’s launch of Google One in the U.S. pushed the price of Google storage down even further, cutting the cost per terabyte per month in half and driving this point home even more clearly.

As Frederic Lardinois pointed out in his post, 2 terabytes of storage now costs $9.99 a month. Consider that before Google One, that was the price for 1 terabyte of storage. By signing up for Google One, you could double your storage without paying one penny more, and, let’s face it, a terabyte was already a ton of storage before the change.

Let’s compare that with some of the other players out there. Each one is a little different, but the storage costs tell a story.

Google One’s shift to 2 TB for $9.99 a month puts it in line with Apple’s pricing, which surprisingly had given you the most storage bang for your buck out of these four companies before Google One came along. Who would have thought that Apple was giving its users the best price on anything? Of course, you get access to Office 365, including Word and PowerPoint, with your terabyte of Microsoft OneDrive storage, which is going to add a fair bit of value for many users over and above the pure storage being offered.

Regardless, if you consider Apple and Google’s pricing, the price of a terabyte of cloud storage has dropped to $5.00 a month. That’s pretty darn cheap and it shows just how commoditized online storage has become and how much scale you require to make money.
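Using the plan prices cited in this piece, the effective per-terabyte cost works out as follows (a quick illustrative calculation, not an exhaustive market survey):

```python
# Effective monthly cost per terabyte for the plans discussed in this piece.
plans = {
    "Google One (2 TB)": (9.99, 2),
    "Apple iCloud (2 TB)": (9.99, 2),
    "Dropbox (1 TB)": (8.25, 1),
}

per_tb = {name: price / tb for name, (price, tb) in plans.items()}
for name, cost in sorted(per_tb.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f} per TB per month")
```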

Alan Pelz-Sharpe, principal analyst at Deep Analysis, who has been watching this space for years, says consumer cloud storage pricing has always been a race to the bottom. “You can only make a margin with mass scale. That’s why firms who are not Microsoft, Amazon or Google are pushing hard for business and enterprise customers. Google One just brings that message home,” he said.

If you get enough scale, as Dropbox has with an estimated 500 million users, and you can get a percentage of them to pay $8.25 a month for a terabyte of storage, it can add up to real money. When Dropbox filed its S-1 before it went public earlier this year, it reported more than $1 billion in consumer revenue. It would be difficult, if not impossible, for a startup launching today to compete with the existing players, but the ones out there continue to compete with one another, driving the cost down even further.

Today’s announcement is just another step in that downward price pressure on consumer cloud storage, and when you get double the storage from one day to the next for the exact same price, it shows just how true that is.

The official launch promo video for Samsung’s next flagship smartphone in the long-running Galaxy Note line — the Note 9 — appears to have leaked, with links to the video now cropping up on YouTube.

And via Twitter…

The forthcoming phablet has been pretty comprehensively leaked already. And clearly hasn’t had a radical (cosmetic or form factor) makeover. (This is not the fabled folding phone Samsung is slated to be working on for next year.)

The Note 9 will also be officially unveiled on August 9. So Samsung fans don’t have long left to wait for any last minute details they were keen to nail down.

But, in the few days remaining, the Samsung-branded video offers a more polished look at what’s going to be up for pre-order next week…

Samsung kicks off by touting the power of the Note 9 — telling us it’s not just powerful but “super powerful” (leaked benchmarks have previously suggested a big performance boost); and with a bottom-up pan across the ports and rear that shows a 3.5mm headphone jack sitting in the frame — confirming my TC colleague Brian Heater’s eagle eye.

Also of note: A repositioned fingerprint sensor (now in a less stupid location below the dual lens camera housing).

Next, the video flips focus to a snazzy yellow (or is that gold?) S Pen stylus, which Samsung describes as “all new powerful”, before showing its physical button being pressed by an invisible force (human, we hope) which then does a spot of aimless doodling.

After this, Samsung moves to brag about the Note 9’s “all day battery” (which it’s confidently teased before — so the company looks to have put the Note 7 battery fiasco well and truly behind it), although the usual small print disclaimers warn about variable battery performance.

On the storage front, there’s a big bold claim of the device being “1 terabyte ready” — although this is on account of a 512GB SD card shown being pulled out of the expandable memory slot.

And in the small print displayed on the video at that point the company caveats that the 1TB claim is for 512GB models equipped with another 512GB in expandable memory (at the owner’s separate expense).

“The power to store more” [photos] “Delete less” [photos] is what the company’s marketing team has come up with to try to excite people over the utility of owning a smartphone that can have 1TB in storage capacity. i.e. if you stump up extra for the extra storage.

The video shows a camera roll chock-full of stock photos of pets, snacks and people. Hopefully Note 9 owners will find more creative things to do with 1TB storage.