Steve Thomas - IT Consultant

Dropbox started shifting workloads away from AWS to its own data centers several years ago because it needed more control over how files were stored and accessed. It developed a storage architecture called Magic Pocket to help, but over time it recognized that most people moved files to Dropbox for backup purposes, then rarely accessed them again.

Engineers realized it made little sense to store everything the same way when many files weren’t accessed much after the first day they were put on the service. The company decided to create two tiers of storage: warm storage (the existing Magic Pocket layer) and a new, longer-term tier called Cold Storage, which lets Dropbox store these files less expensively yet still deliver them in a timely manner should a customer need one.

Dropbox customers obviously don’t care about the engineering challenges the company faces with such an approach. They only know that when they click a file, they expect it to open without a significant amount of latency, regardless of how old it is. But Dropbox saw an opportunity to store these files in a separate layer.

“When one is talking about cold storage, we are thinking of files that are accessed less often. And for those files, we can make some trade-offs between storage, performance and network bandwidth,” Preslav Le, a software engineer in charge of the cold storage project, told TechCrunch.

So it was up to the engineers to design a system that could retrieve files from the cold layer with an acceptable level of latency, without so much delay that customers would notice. It involved walking a design tightrope and weighing all of the trade-offs such an approach requires.
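
Dropbox hasn’t published the exact rules it uses to route a file to one tier or the other, but conceptually the decision comes down to a policy on access recency. The sketch below is a hypothetical illustration in Python; the 30-day threshold and function names are assumptions for the example, not Dropbox’s actual policy.

```python
from datetime import datetime, timedelta

# Illustrative threshold only; Dropbox has not disclosed its real cutoff.
COLD_AFTER = timedelta(days=30)

def choose_tier(last_accessed, now=None):
    """Route a file to the warm or cold tier based on how recently it was read."""
    now = now or datetime.utcnow()
    return "cold" if now - last_accessed > COLD_AFTER else "warm"

# Example: a file untouched since January would land in the cold tier.
print(choose_tier(datetime(2019, 1, 1), now=datetime(2019, 3, 1)))  # -> "cold"
```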

“Our cold tier runs on the same hardware and network but saves costs through innovatively reducing disk usage by 25%, without compromising durability or availability. The end experience for users is almost indistinguishable between the two tiers,” Dropbox wrote in a blog post announcing the new feature.

The company needed to ensure durability and reliability while creating a new storage layer to reduce its overall costs, and while the project wasn’t easy, it expects the dual-tier system to cut storage costs by 10-15% over time.

Updating Windows 10 is absolutely necessary for your business, mainly because of the security patches that protect it. Updating does put your computers on hold for a while, but that is much better than letting hackers exploit gaps in unpatched systems. Need to speed up the waiting process? You can with these tips!

Why do updates take so long to install?

Windows 10 updates take a while to complete because Microsoft is constantly adding larger files and features. What’s more, internet speed can significantly affect installation times, especially if your network is overburdened by multiple people downloading the update at the same time.

If multiple downloads aren’t being attempted and you still experience slowness, then either some broken software components are preventing the installation from running smoothly, or apps and drivers that run upon startup are likely to blame.

When you experience any of these issues, try the following:

Free up storage space and defragment your hard drive

Because many Windows 10 updates take up a lot of space on your hard drive, you need to leave enough room for them. First, try deleting files and uninstalling software you no longer need.
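
If you want to check before kicking off an update, Python’s standard library can report how much free space is left on the system drive. This is just a quick sketch; the 20 GB figure below is an illustrative rule of thumb, not an official Microsoft requirement.

```python
import shutil

# Report free space on the system drive (Windows path assumed).
total, used, free = shutil.disk_usage("C:\\")
free_gb = free / (1024 ** 3)
print(f"Free space on C: drive: {free_gb:.1f} GB")

if free_gb < 20:  # illustrative threshold only
    print("Consider deleting files or uninstalling unused software before updating.")
```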

Then, defragment your hard drive, a process that reorganizes data so the drive can read and write files faster. It’s quite easy: press the Windows key and type “defragment and optimize drives”. Select the hard drive, click Analyze, and if the drive is more than 10% fragmented, click Optimize.

Run Windows Update Troubleshooter

Broken software components can also cause installation problems. Running the Windows Update Troubleshooter may resolve them and shorten download and installation times.

Disable startup software

Before your update begins, disable third-party applications, as they can potentially cause disruptions. To do this, press the Windows key again and type “msconfig”. In the System Configuration window, go to the Services tab, check Hide all Microsoft services, then click Disable all. Afterwards, open Task Manager (press Ctrl + Alt + Delete and select Task Manager) and disable any startup program that might interfere with updates, such as an Adobe app or printer software.
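
If you’d prefer to audit those startup entries with a script before touching anything, the small sketch below uses Python’s built-in winreg module to list what is registered under the usual Run keys. It only lists entries; disabling them is still done through Task Manager or msconfig, and the key paths shown are the standard per-user and per-machine locations.

```python
import winreg

# Standard registry locations for programs that launch at startup.
RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_startup_programs():
    """Return (name, command) pairs for registered startup programs."""
    entries = []
    for hive, path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, path) as key:
                i = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, i)
                        entries.append((name, value))
                        i += 1
                    except OSError:  # no more values under this key
                        break
        except FileNotFoundError:
            continue
    return entries

for name, command in list_startup_programs():
    print(f"{name}: {command}")
```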

Optimize your network

Sometimes a faster connection is all you need. Consider switching to fiber optic cables or purchasing more bandwidth from your internet service provider. It’s also a good idea to use bandwidth management tools to make sure enough network resources are reserved for things like Windows 10 updates, not bandwidth hogs like Skype or YouTube.

Schedule updates for low-traffic periods

In some cases, however, you may have to accept that certain updates do take a substantial amount of time. So schedule them for after hours when you’re not using your computers. Simply go to the Windows 10 update settings and specify when you prefer updates to be installed.

If you need help with any of the tips above, we’re always here to help. Call us today to meet with our Windows specialists!

Omni simply couldn’t scale storing stuff in giant warehouses while dropping it off and picking it up from people on demand. Storage was designed to bootstrap Omni into peer-to-peer rentals of the goods in its care. But now it’s found a better way: partnering with retailers that will host and rent out goods for Omni, which users will pick up themselves.

With that strategy, Omni is now formally pivoting away from storage alongside its expansion from San Francisco and Portland into Los Angeles and New York. In SF and its new markets starting today, users can rent GoPros, strollers, drills, guitars, and more for pickup and drop-off at 100 local storefronts, which will receive 80 percent of the revenue while Omni keeps 20 percent.

“Storage was always meant to supply a rentals marketplace. We launched storage in an Uber-for-everything era and now it’s no secret that physical operations are tough to scale,” Omni’s COO Ryan Delk tells me. “This new model gives our users more supply, local entrepreneurs a new revenue stream, and us the ability to launch new markets much more quickly than the old model of building rentals on top of the storage business.”

LA Omni users will be able to rent surf equipment for pickup and dropoff from local surf shop Jay’s

To that end, storage won’t come to any more markets, though storage services with delivery will continue in San Francisco. Users there and in Portland will also be able to pick up and drop off rental items from a few Omni-owned locations, including its SF headquarters. Omni will add retailer pickups in Portland and more in San Francisco soon. At least that’s one way to make Omni’s investors like Highland, Founders Fund, Shrug.vc, and Dream Machine feel better about SF real estate prices.

“Ownership has a bit of a burden associated with it,” Delk tells me, referencing the shifting attitudes highlighted by Marie Kondo and the tidiness movement. Ownership requires you to pay up front for tons of use down the line that may never happen. “Paying for access when you need it unlocks all these amazing experiences.”

Omni’s COO Ryan Delk (left) and CEO Thomas McLeod (right)

Omni discovered the potential for the model when it ran an experiment. “What if we could pick up items directly from Omni?” Delk explains. Omni learned that many people “can’t afford to pay for transit both ways. It was pricing out a lot of people.” But pick-ups unlocked a new price demographic.

Meanwhile, Omni noticed some semi-pro renters had cropped up on its platform who were buying tons of a popular item like chairs on Amazon, shipping them to its warehouse, then renting them out and quickly recouping their costs. It saw an opportunity to partner with local retailers who could give it an instant supply of items in new markets while handling all the pickup and drop-off logistics.

Omni’s retail partners like Adventure 16 Outdoor & Travel Outfitters, Blazing Saddles and Sierra Surf School can choose their own prices and adjust them for demand, set blackout dates, pause for vacations, and sell items as normal, letting Omni know to restock them so rentals don’t cannibalize their sales. Rentals are covered by up to $10,000 in insurance, so neither the retailers nor the people who rent from them have to worry. Omni users just show their ID at pickup to verify their identity, though that will soon be handled in the app. Last fall, Omni hired Uber’s head of sales strategy and operations, who oversaw UberEats’ growth from zero to 200,000 restaurants, to run its retail partnerships as VP of special projects.

Delk says Omni is “all-in on the rentals” which he sees as a “pure play marketplace vs a recurring ARR business” that “democratizes access to Omni to people who aren’t the 1% in major markets.” Now someone who couldn’t afford to buy a drill for a quick home improvement project or pay to have a rental delivered and picked up can drop by their local retailer to grab it and return it later for $6 per day with no extra fees.

That in-store experience of actually being able to go same-day, hold an item, and ask questions about it could allow Omni’s rental model to compete with Amazon’s prices and delivery logistics. The one thing Amazon can’t do right now is let you try before you buy. Omni could win by letting you try without ever having to.

When the former CTOs of YouTube, Facebook, and Dropbox seed fund a database startup, you know there’s something special going on under the hood. Jiten Vaidya and Sugu Sougoumarane saved YouTube from a scalability nightmare by inventing and open sourcing Vitess, a brilliant relational data storage system. But in the decade since working there, the pair have been inundated with requests from tech companies desperate for help building the operational scaffolding needed to actually integrate Vitess.

So today the pair are revealing their new startup PlanetScale that makes it easy to build multi-cloud databases that handle enormous amounts of information without locking customers into Amazon, Google, or Microsoft’s infrastructure. Battletested at YouTube, the technology could allow startups to fret less about their backend and focus more on their unique value proposition. “Now they don’t have to reinvent the wheel” Vaidya tells me. “A lot of companies facing this scaling problem end up solving it badly in-house and now there’s a way to solve that problem by using us to help.”

PlanetScale quietly raised a $3 million seed round in April, led by SignalFire and joined by a who’s who of engineering luminaries. They include YouTube co-founder and CTO Steve Chen, Quora CEO and former Facebook CTO Adam D’Angelo, former Dropbox CTO Aditya Agarwal, PayPal and Affirm co-founder Max Levchin, MuleSoft co-founder and CTO Ross Mason, Google director of engineering Parisa Tabriz, and Facebook’s first female engineer and South Park Commons founder Ruchi Sanghvi. If anyone could foresee the need for Vitess implementation services, it’s these leaders who’ve dealt with scaling headaches at tech’s top companies.

But how can a scrappy startup challenge the tech juggernauts for cloud supremacy? First, by actually working with them. The PlanetScale beta that’s now launching lets companies spin up Vitess clusters on its database-as-a-service, on their own infrastructure through a licensing deal, or on AWS, with Google Cloud and Microsoft Azure support coming shortly. Once these integrations with the tech giants are established, PlanetScale clients can use it as an interface for a multi-cloud setup where they could keep their master data copies on AWS US-West with replicas on Google Cloud in Ireland and elsewhere. That protects companies from becoming dependent on one provider and then getting stuck with price hikes or service problems.

PlanetScale also promises to uphold the principles that undergirded Vitess. “It’s our value that we will keep everything in the query pack completely open source so none of our customers ever have to worry about lock-in” Vaidya says.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Battletested, YouTube Approved

He and Sougoumarane met 25 years ago at the Indian Institute of Technology Bombay. Back in 1993 they worked together at pioneering database company Informix before it flamed out. Sougoumarane was eventually hired by Elon Musk as an early engineer for X.com before it became part of PayPal, and then left for YouTube. Vaidya was working at Google, and the pair were reunited when it bought YouTube and Sougoumarane pulled him onto the team.

“YouTube was growing really quickly and the relational database they were using, MySQL, was sort of falling apart at the seams,” Vaidya recalls. Adding more CPU and memory to the database infrastructure wasn’t cutting it, so the team created Vitess. The horizontally scaling sharding middleware for MySQL lets users segment their database to reduce memory usage while still being able to run operations rapidly. YouTube has smoothly ridden that infrastructure to 1.8 billion users ever since.
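
To make the sharding idea concrete, here is a tiny conceptual sketch, not the Vitess API, of range-based sharding: each shard owns a slice of a hashed 64-bit keyspace, so a busy shard can later be split by moving a boundary rather than rehashing every row. The shard count and boundaries below are illustrative.

```python
import hashlib
from bisect import bisect_right

# Each shard owns a contiguous range of the 64-bit keyspace; splitting a shard
# just adds a boundary instead of redistributing all rows (conceptual sketch only).
SHARD_UPPER_BOUNDS = [
    0x4000_0000_0000_0000,
    0x8000_0000_0000_0000,
    0xC000_0000_0000_0000,
    0x1_0000_0000_0000_0000,
]

def keyspace_id(sharding_key: str) -> int:
    """Hash the sharding key (e.g. a user ID) into the 64-bit keyspace."""
    return int.from_bytes(hashlib.sha256(sharding_key.encode()).digest()[:8], "big")

def shard_for(sharding_key: str) -> int:
    """Return the index of the shard whose range contains this key."""
    return bisect_right(SHARD_UPPER_BOUNDS, keyspace_id(sharding_key))

print(shard_for("user:42"))  # deterministic shard index between 0 and 3
```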

“Sugu and Mike Solomon invented Vitess and made it open source right from the beginning, back in 2010, because they knew the scaling problem wasn’t just for YouTube, and they’d be at other companies 5 or 10 years later trying to solve the same problem,” Vaidya explains. That proved true, and now top apps like Square and HubSpot run entirely on Vitess, with Slack now 30 percent onboard.

Vaidya left YouTube in 2012 and became the lead engineer at Endorse, which got acquired by Dropbox, where he worked for four years. But in the meantime, the engineering community strayed towards MongoDB-style key-value store databases, which Vaidya considers inferior. He sees indexing issues and says that if the system hiccups during an operation, data can become inconsistent — a big problem for banking and commerce apps. “We think horizontally scaled relational databases are more elegant and are something enterprises really need.”

Database Legends Reunite

Fed up with the engineering heresy, a year ago Vaidya committed to creating PlanetScale. It’s composed of four core offerings: professional training in Vitess, on-demand support for open source Vitess users, a Vitess database-as-a-service on PlanetScale’s servers, and software licensing for clients that want to run Vitess on premises or through other cloud providers. It lets companies re-shard their databases on the fly to relocate user data to comply with regulations like GDPR, safely migrate from other systems without major codebase changes, make on-demand changes, and run on Kubernetes.

The PlanetScale team

PlanetScale’s customers now include Indonesian ecommerce giant Bukalapak, and it’s helping Booking.com, GitHub, and New Relic migrate to open source Vitess. Growth is suddenly ramping up due to inbound inquiries. Last month, around the time Square Cash became the number one app, its engineering team published a blog post extolling the virtues of Vitess. Now everyone’s seeking help with Vitess sharding, and PlanetScale is waiting with open arms. “Jiten and Sugu are legends and know firsthand what companies require to be successful in this booming data landscape,” says Ilya Kirnos, founding partner and CTO of SignalFire.

The big cloud providers are trying to adapt to the relational database trend, with Google’s Cloud Spanner and Cloud SQL, and Amazon’s AWS SQL and AWS Aurora. Their huge networks and marketing war chests could pose a threat. But Vaidya insists that while it might be easy to get data into these systems, it can be a pain to get it out. PlanetScale is designed to give them freedom of optionality through its multi-cloud functionality so their eggs aren’t all in one basket.

Finding product market fit is tough enough. Trying to suddenly scale a popular app while also dealing with all the other challenges of growing a company can drive founders crazy. But if it’s good enough for YouTube, startups can trust PlanetScale to make databases one less thing they have to worry about.

Amazon has had storage options for Linux file servers for some time, but it recognizes that a number of companies still use Windows file servers, and it is not content to cede that market to Microsoft. Today the company announced Amazon FSx for Windows File Server to provide a fully compatible Windows option.

“You get a native Windows file system backed by fully-managed Windows file servers, accessible via the widely adopted SMB (Server Message Block) protocol. Built on SSD storage, Amazon FSx for Windows File Server delivers the throughput, IOPS, and consistent sub-millisecond performance that you (and your Windows applications) expect,” AWS’s Jeff Barr wrote in a blog post introducing the new feature.

That means if you use this service, you have a first-class Windows system with all of the compatibility with Windows services that you would expect, such as Active Directory and Windows Explorer.
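
For teams that script their infrastructure, the AWS SDKs expose FSx alongside the console. Below is a minimal sketch using Python’s boto3 client; the subnet, security group and directory IDs are placeholders, and the exact parameters and minimum sizes should be checked against the current FSx documentation.

```python
import boto3

# Placeholder region and IDs; substitute values from your own VPC and directory.
fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                          # size in GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",      # AWS Managed Microsoft AD
        "ThroughputCapacity": 8,                  # MB/s
    },
)

# The new file system's ID can then be used to mount the SMB share from Windows clients.
print(response["FileSystem"]["FileSystemId"])
```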

AWS CEO Andy Jassy introduced the new feature today at AWS re:Invent, the company’s customer conference going on in Las Vegas this week. He said that even though Windows File Server usage is diminishing as more IT pros turn to Linux, there are still a fair number of customers who want a Windows-compatible system and they wanted to provide a service for them to move their Windows files to the cloud.

Of course, it doesn’t hurt that it provides a path for Microsoft customers to use AWS instead of turning to Azure for these workloads. Companies undertaking a multi-cloud strategy should like having a fully compatible option.


Dropbox has had APIs for years that enable companies to tap into content stored in its repositories, and it has had partnerships with large vendors like Adobe, Google, Autodesk and Microsoft. Today, the company announced Dropbox Extensions to enhance the ability to build workflows and integrations with third-party partners.

Quentin Clark, SVP of engineering, product and design at Dropbox, says the company has long recognized the need to take the content stored in its repositories and provide ways to integrate it with the other tools people are using. “We are on this journey to help this broader ecosystem get the most value possible. Extensions is another way to remove friction and allow better engagement,” Clark said.

He said that while the APIs could pick up content, do something with it and put it back into Dropbox, Extensions lets users take action directly in Dropbox. This is part of a broader trend we are seeing in enterprise tools: keeping users where they are rather than forcing them to explicitly open another app to complete a task.

It also introduces a level of automation to certain processes that was missing. As an example, in a Dropbox Extensions integration with eSignature services Adobe Sign, DocuSign or HelloSign, you could have a contract stored in Dropbox, send it to various parties for signature and the signed document gets returned to Dropbox automatically once all of the signatures have been collected. What’s more, the person who initiated the process gets a notification that the process is complete.
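
Extensions itself is a partner-facing integration surfaced in the Dropbox UI rather than a public API call. As a rough approximation, though, the “signed copy lands back in Dropbox” part of that workflow could be reproduced today with the existing Dropbox Python SDK; the folder path, token and polling interval below are illustrative assumptions, not part of Extensions.

```python
import time
import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")  # placeholder token
SIGNED_FOLDER = "/contracts/signed"         # hypothetical folder the eSignature tool writes to

def wait_for_signed_copy(filename, timeout_s=3600, poll_s=60):
    """Poll a Dropbox folder until the signed copy of a contract shows up."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        # Pagination is ignored here for brevity.
        for entry in dbx.files_list_folder(SIGNED_FOLDER).entries:
            if entry.name == filename:
                return entry  # e.g. notify the person who initiated the signature request
        time.sleep(poll_s)
    return None
```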

The integrations in today’s release include the ability to edit video in Vimeo, edit images in Pixlr, edit PDFs in Nitro, airSlate and Smallpdf, and send faxes with HelloFax (for people who still fax stuff). Clark says these initial integrations were not random; they were chosen because customers said these were the tools they wanted to see integrated more deeply with Dropbox.

Clark says the partnership team at Dropbox will continue to look for other uses for Extensions, but that it takes a concerted effort on the part of the engineering team to build in meaningful integrations. “We prioritize based on common users,” he said.

While Dropbox is announcing Extensions today, it will become generally available later this month, on November 27th. It’s worth noting that it will be available to all users, not just Dropbox’s business customers. Clark says the company decided to expose it to everyone to show how to make broader use of Dropbox content beyond pure storage. Dropbox hopes that in doing so, it can drive more users to the business products as they see the value of this integrated approach.

You may recall my tale of woe from last year when I recounted how I was locked out of my Google account for a month. It was a tough time, made all the more frustrating because there wasn’t any customer support to contact. That is changing for Google One users though, and it’s about time.

I received an email this week from Google informing me that my paid Google storage had been upgraded to Google One, Google’s freshly designed storage options announced last May. It comes with twice the storage, giving me two terabytes for the same $9.99 per month I was paying for one. It allows me to share my generous storage allotment with my family members, but the thing that really caught my eye was actual customer support.

With Google One, which is available for as little as $1.99 per month for 100 gigs of storage, everyone has access to actual customer support where they can talk to someone, who can (presumably) help them with issues like password recovery.

Brandon Badger, Google One’s product manager, says this is a critical component of the new storage package. “Support is important to us, we want people using our products to have a great experience and get questions or issues addressed in a timely manner,” Badger told TechCrunch. He added that users with paid storage plans often use many other Google products and services, and this provides a way for customers to get answers to problems they have across the Google cloud ecosystem.


Obviously, this is long overdue and something that G Suite customers, the business side of Google’s tools, have had for some time. This ability to contact a customer service organization shows a maturation of consumer cloud products that had been missing previously.

As a journalist, when I got locked out I was forced to use my contacts at Google PR to give me that help. After many attempts I was able to get my account credentials back, but since I wrote that article I have received dozens of emails from other unfortunate souls who faced the same predicament, but lacked the connections I had. Unfortunately, as much as I could empathize with their plight (how could I not?), there wasn’t much I could do other than refer them to Google. I wrote about my level of frustration in my post:

Once you have gone through the recovery protocol, what is a person supposed to do to get Google’s attention? They don’t have customer service, yet I’m paying for storage. They don’t have a reasonable system for navigating this kind of problem and they don’t have a sensible appeals process.

While I hope I never get locked out of my Google account again, I’m happy to know that if I do, I and so many others like me at least have someone to contact about it. That’s no guarantee our problems will be resolved, of course, but it’s at least a path to getting something done that hadn’t previously existed.

Dropbox has been building out Paper, its document-driven collaboration tool, since it was first announced in 2015, slowly but surely layering on more functionality. Today, it added a timeline feature, pushing beyond collaboration into lightweight project planning.

Dropbox has been hearing from customers that Paper lacked a way to plan projects. “That pain—the pain of coordinating all those moving pieces—is one we’re taking on today with our new timelines feature in Dropbox Paper,” the company wrote in a blog post announcing the new feature.

As you would expect with such a tool, it enables you to build a timeline with milestones, but being built into Paper, you can assign team members to each milestone and add notes with additional information including links to related documents.

You can also embed a to-do list for the person assigned to a task right in the timeline to help them complete it, giving everyone assigned to a project a single point of access.


“Features like to-dos, @mentions, and due dates give team members easy ways to coordinate projects with each other. Timelines take these capabilities one step further, letting any team member create a clean visual representation of what’s happening when—and who’s responsible,” Dropbox wrote in the blog post announcement.

Dropbox has recognized it cannot live as simply a content storage tool. It needs to expand beyond that into collaboration and coordination around that content, and that’s what Dropbox Paper has been about. By adding timelines, the company is looking to expand that capability even further.

Alan Lepofsky, who covers the “future of work” for Constellation Research sees Paper as part of the changing face of collaboration tools. “I refer to the new breed of content creation tools as digital canvases. These apps simplify the user experience of integrating content from multiple sources. They are evolving the word-processor paradigm,” Lepofsky told TechCrunch.

It’s probably not going to replace a project manager’s full-blown planning tools any time soon, but it at least has the potential to be a useful addition to the Paper arsenal, allowing customers to keep finding ways to extract value from the content they store in Dropbox.

It’s been over five years since NSA whistleblower Edward Snowden lifted the lid on government mass surveillance programs, revealing, in unprecedented detail, quite how deep the rabbit hole goes thanks to the spread of commercial software and connectivity enabling a bottomless intelligence-gathering philosophy of ‘bag it all’.

Yet technology’s onward march has hardly broken its stride.

Government spying practices are perhaps more scrutinized, as a result of awkward questions about out-of-date legal oversight regimes. Though whether the resulting legislative updates, putting an official stamp of approval on bulk and/or warrantless collection as a state spying tool, have put Snowden’s ethical concerns to bed seems doubtful — albeit, it depends on who you ask.

The UK’s post-Snowden Investigatory Powers Act continues to face legal challenges. And the government has been forced by the courts to unpick some of the powers it helped itself to vis-à-vis people’s data. But bulk collection, as an official modus operandi, has been both avowed and embraced by the state.

In the US, too, lawmakers elected to push aside controversy over a legal loophole that provides intelligence agencies with a means for the warrantless surveillance of American citizens — re-stamping Section 702 of FISA for another six years. So of course they haven’t cared a fig for non-US citizens’ privacy either.

Increasingly powerful state surveillance is seemingly here to stay, with or without adequately robust oversight. And commercial use of strong encryption remains under attack from governments.

But there’s another end to the surveillance telescope. As I wrote five years ago, those who watch us can expect to be — and indeed are being — increasingly closely watched themselves as the lens gets turned on them:

“Just as our digital interactions and online behaviour can be tracked, parsed and analysed for problematic patterns, pertinent keywords and suspicious connections, so too can the behaviour of governments. Technology is a double-edged sword – which means it’s also capable of lifting the lid on the machinery of power-holding institutions like never before.”

We’re now seeing some of the impacts of this surveillance technology cutting both ways.

With attention to detail, good connections (in all senses) and the application of digital forensics all sorts of discrete data dots can be linked — enabling official narratives to be interrogated and unpicked with technology-fuelled speed.

Witness, for example, how quickly the Kremlin’s official line on the Skripal poisonings unravelled.

After the UK released CCTV of two Russian suspects of the Novichok attack in Salisbury, last month, the speedy counter-claim from Russia, presented most obviously via an ‘interview’ with the two ‘citizens’ conducted by state mouthpiece broadcaster RT, was that the men were just tourists with a special interest in the cultural heritage of the small English town.

Nothing to see here, claimed the Russian state, even though the two unlikely tourists didn’t appear to have done much actual sightseeing on their flying visit to the UK during the tail end of a British winter (unless you count vicarious viewing of Salisbury’s wikipedia page).

But digital forensics outfit Bellingcat, partnering with investigative journalists at The Insider Russia, quickly found plenty to dig up online, and with the help of data-providing tips. (We can only speculate who those whistleblowers might be.)

Their investigation made use of a leaked database of Russian passport documents; passport scans provided by sources; publicly available online videos and selfies of the suspects; and even visual computing expertise to academically cross-match photos taken 15 years apart — to, within a few weeks, credibly unmask the ‘tourists’ as two decorated GRU agents: Anatoliy Chepiga and Dr Alexander Yevgeniyevich Mishkin.

When public opinion is faced with an official narrative already lacking credibility that’s soon set against external investigation able to closely show workings and sources (where possible), and thus demonstrate how reasonably constructed and plausible is the counter narrative, there’s little doubt where the real authority is being shown to lie.

And who the real liars are.

That the Kremlin lies is hardly news, of course. But when its lies are so painstakingly and publicly unpicked, and its veneer of untruth ripped away, there is undoubtedly reputational damage to the authority of Vladimir Putin.

The sheer depth and availability of data in the digital era supports faster-than-ever evidence-based debunking of official fictions, threatening to erode rogue regimes built on lies by pulling away the curtain that invests their leaders with power in the first place — by implying the scope and range of their capacity and competency is unknowable, and letting other players on the world stage accept such a ‘leader’ at face value.

The truth about power is often far more stupid and sordid than the fiction. So a powerful abuser, with their workings revealed, can be reduced to their baser parts — and shown for the thuggish and brutal operator they really are, as well as proved a liar.

On the stupidity front, in another recent and impressive bit of cross-referencing, Bellingcat was able to turn passport data pertaining to another four GRU agents — whose identities had been made public by Dutch and UK intelligence agencies (after they had been caught trying to hack into the network of the Organisation for the Prohibition of Chemical Weapons) — into a long list of 305 suggestively linked individuals also affiliated with the same GRU military unit, and whose personal data had been sitting in a publicly available automobile registration database… Oops.

There’s no doubt certain governments have wised up to the power of public data and are actively releasing key info into the public domain where it can be pored over by journalists and interested citizen investigators — be that CCTV imagery of suspects or actual passport scans of known agents.

A cynic might call this selective leaking. But while the choice of what to release may well be self-serving, the veracity of the data itself is far harder to dispute. Exactly because it can be cross-referenced with so many other publicly available sources and so made to speak for itself.

Right now, we’re in the midst of another fast-unfolding example of surveillance apparatus and public data standing in the way of dubious state claims — in the case of the disappearance of Washington Post journalist Jamal Khashoggi, who went into the Saudi consulate in Istanbul on October 2 for a pre-arranged appointment to collect papers for his wedding and never came out.

Saudi authorities first tried to claim Khashoggi left the consulate the same day, though did not provide any evidence to back up their claim. And CCTV clearly showed him going in.

Yesterday they finally admitted he was dead — but are now trying to claim he died quarrelling in a fistfight, attempting to spin another after-the-fact narrative to cover up and blame-shift the targeted slaying of a journalist who had written critically about the Saudi regime.

Since Khashoggi went missing, CCTV and publicly available data has also been pulled and compared to identify a group of Saudi men who flew into Istanbul just prior to his appointment at the consulate; were caught on camera outside it; and left Turkey immediately after he had vanished.

Including naming a leading Saudi forensics doctor, Dr Salah Muhammed al-Tubaigy, as being among the party that Turkish government sources also told journalists had been carrying a bone saw in their luggage.

Men in the group have also been linked to Saudi crown prince Mohammed bin Salman, via cross-referencing travel records and social media data.

“In a 2017 video published by the Saudi-owned Al Ekhbariya on YouTube, a man wearing a uniform name tag bearing the same name can be seen standing next to the crown prince. A user with the same name on the Saudi app Menom3ay is listed as a member of the royal guard,” writes the Guardian, joining the dots on another suspected henchman.

A marked element of the Khashoggi case has been the explicit descriptions of his fate leaked to journalists by Turkish government sources, who have said they have recordings of his interrogation, torture and killing inside the building — presumably via bugs either installed in the consulate itself or via intercepts placed on devices held by the individuals inside.

This surveillance material has reportedly been shared with US officials, where it must be shaping the geopolitical response — making it harder for President Trump to do what he really wants to do, and stick like glue to a regional US ally with which he has his own personal financial ties, because the arms of that state have been recorded in the literal act of cutting off the fingers and head of a critical journalist, and then sawing up and disposing of the rest of his body.

Attempts by the Saudis to construct a plausible narrative to explain what happened to Khashoggi when he stepped over its consulate threshold to pick up papers for his forthcoming wedding have failed in the face of all the contrary data.

Meanwhile, the search for a body goes on.

And attempts by the Saudis to shift blame for the heinous act away from the crown prince himself are also being discredited by the weight of data…

And while it remains to be seen what sanctions, if any, the Saudis will face from Trump’s conflicted administration, the crown prince is already being hit where it hurts by the global business community withdrawing in horror from the prospect of being tainted by bloody association.

The idea that a company as reputation-sensitive as Apple would be just fine investing billions more alongside the Saudi regime, in SoftBank’s massive Vision Fund vehicle, seems unlikely, to say the least.

Thanks to technology’s surveillance creep the world has been given a close-up view of how horrifyingly brutal the Saudi regime can be — and through the lens of an individual it can empathize with and understand.

Safe to say, supporting second acts for regimes that cut off fingers and sever heads isn’t something any CEO would want to become famous for.

The power of technology to erode privacy is clearer than ever. Down to the very teeth of the bone saw. But what’s also increasingly clear is that powerful and at times terrible capability can be turned around to debase power itself — when authorities themselves become abusers.

So the flip-side of the surveillance state can be seen in the public airing of the bloody colors of abusive regimes.

Turns out, microscopic details can make all the difference to geopolitics.

RIP Jamal Khashoggi