Steve Thomas - IT Consultant

AWS today announced the preview of Amazon Location, a new service for developers who want to add location-based features to their web-based and mobile applications.

Based on mapping data from Esri and HERE Technologies, the service provides all of the basic mapping and point-of-interest data you would expect from a mapping service, including built-in tracking and geofencing features. It does not offer a routing feature, though.

“We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications,” AWS’s Jeff Barr writes in today’s announcement. “Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.”


At its core, Amazon Location provides the ability to create maps, based on the data and styles available from its partners (with more partners in the works) and access to their points of interest. Those are obviously the two core features for any mapping service. On top of this, Location also offers built-in support for trackers, so that apps can receive location updates from devices and plot them on a map. This feature can also be linked to Amazon Location’s geofencing tool so apps can send alerts when a device (or the dog that wears it) leaves a particular area.
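Amazon Location evaluates geofences inside the service itself, so the snippet below is only an illustration of the underlying idea rather than the AWS API: a ray-casting point-in-polygon test of the kind a geofencing engine runs each time a tracker reports a position. The fence and coordinates are invented.

```python
# Illustration of the geofence check a tracking service performs internally.
# Not the Amazon Location API; the fence and positions are invented.

def point_in_fence(point, fence):
    """Ray-casting test: is the (lon, lat) `point` inside the polygon `fence`?"""
    x, y = point
    inside = False
    for i in range(len(fence)):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % len(fence)]
        # Toggle on each polygon edge a rightward ray from the point crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A square "dog park" fence and two reported device positions.
fence = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
print(point_in_fence((0.5, 0.5), fence))  # True: inside, no alert
print(point_in_fence((1.5, 0.5), fence))  # False: the dog has left the area
```

A production service also has to debounce enter/exit events and smooth over GPS noise, which is part of what a managed tracker-plus-geofence offering takes off your hands.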

It may not be as fully-featured as the Google Maps Platform, for example, but AWS promises that Location will be more affordable, with a variety of pricing plans (and a free three-month trial) that start at $0.04 for retrieving 1,000 map tiles. As with all things AWS, the pricing gets more complicated from there but seems quite reasonable overall.

While you can’t directly compare AWS’s tile-based pricing with Google’s plans, it’s worth noting that after you go beyond Google Map Platform’s $200 of free usage per month, static maps cost $2 per 1,000 requests.
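Taking the two published numbers at face value (and ignoring that a map tile and a static-map request are not strictly comparable units), a quick back-of-the-envelope comparison looks like this:

```python
# Back-of-the-envelope comparison from the figures quoted in the article.
# Caveat: a Google static-map request and an Amazon Location map tile are
# not equivalent units, so this only illustrates the order of magnitude.

def aws_location_cost(tiles):
    """$0.04 per 1,000 map tiles retrieved."""
    return tiles / 1000 * 0.04

def google_static_maps_cost(requests, free_credit=200.0):
    """$2 per 1,000 static map requests, after $200/month of free usage."""
    gross = requests / 1000 * 2.0
    return max(0.0, gross - free_credit)

requests = 1_000_000  # one million tiles/requests per month
print(f"AWS:    ${aws_location_cost(requests):,.2f}")        # $40.00
print(f"Google: ${google_static_maps_cost(requests):,.2f}")  # $1,800.00
```

At a million requests a month the headline rates differ by more than an order of magnitude, which is presumably the comparison AWS wants you to make.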

After a number of pricing changes, Google’s mapping services lost a lot of goodwill from developers. AWS may be able to capitalize on this with this new platform, especially if it continues to build out its feature set to fill in some of the current gaps in the service.

 

As businesses gather, store and analyze an ever-increasing amount of data, tools for helping them discover, catalog, track and manage how that data is shared are also becoming increasingly important. With Azure Purview, Microsoft is launching a new data governance service into public preview today that brings together all of these capabilities in a new data catalog with discovery and data governance features.

As Rohan Kumar, Microsoft’s corporate VP for Azure Data, told me, this has become a major pain point for enterprises. While they may be very excited about getting started with data-heavy technologies like predictive analytics, those companies’ data- and privacy-focused executives want to make sure that the way the data is used is compliant and that the company has received the right permissions to use its customers’ data, for example.

In addition, companies also want to make sure that they can trust their data and know who has access to it and who made changes to it.

“[Purview] is a unified data governance platform which automates the discovery of data, cataloging of data, mapping of data, lineage tracking — with the intention of giving our customers a very good understanding of the breadth of the data estate that exists to begin with, and also to ensure that all these regulations that are there for compliance, like GDPR, CCPA, etc, are managed across an entire data estate in ways which enable you to make sure that they don’t violate any regulation,” Kumar explained.

At the core of Purview is its catalog that can pull in data from the usual suspects like Azure’s various data and storage services but also third-party data stores including Amazon’s S3 storage service and on-premises SQL Server. Over time, the company will add support for more data sources.

Kumar described this process as a ‘multi-semester investment,’ so the capabilities the company is rolling out today are only a small part of what’s on the overall roadmap already. With this first release today, the focus is on mapping a company’s data estate.


“Next [on the roadmap] is more of the governance policies,” Kumar said. “Imagine if you want to set things like ‘if there’s any PII data across any of my data stores, only this group of users has access to it.’ Today, setting up something like that is extremely complex and most likely you’ll get it wrong. That’ll be as simple as setting a policy inside of Purview.”
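Microsoft hasn’t published what such a policy will look like, so purely as a sketch of the idea Kumar describes (classify columns as PII, then gate access on group membership), here is some invented Python; none of these names come from Purview.

```python
# Hypothetical sketch of an "only this group may read PII" policy check.
# None of this is the Purview API; all names and structures are invented.

PII_COLUMNS = {"email", "ssn", "phone", "date_of_birth"}

def is_pii(column_name):
    """Stand-in for automated classification of a column as PII."""
    return column_name.lower() in PII_COLUMNS

def may_read(user_groups, column_name, pii_group="pii-readers"):
    """Policy: anyone may read non-PII columns; PII columns require
    membership in the designated group, whichever data store holds them."""
    if not is_pii(column_name):
        return True
    return pii_group in user_groups

print(may_read({"analysts"}, "order_total"))         # True
print(may_read({"analysts"}, "email"))               # False
print(may_read({"analysts", "pii-readers"}, "ssn"))  # True
```

The point of a governance layer is that this one rule applies across every registered data store, instead of being re-implemented per database.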

In addition to launching Purview, the Azure team also today launched Azure Synapse, Microsoft’s next-generation data warehousing and analytics service, into general availability. The idea behind Synapse is to give enterprises — and their engineers and data scientists — a single platform that brings together data integration, warehousing and big data analytics.

“With Synapse, we have this one product that gives a completely no code experience for data engineers, as an example, to build out these [data] pipelines and collaborate very seamlessly with the data scientists who are building out machine learning models, or the business analysts who build out reports for things like Power BI.”

Among Microsoft’s marquee customers for the service, which Kumar described as one of the fastest-growing Azure services right now, are FedEx, Walgreens, Myntra and P&G.

“The insights we gain from continuous analysis help us optimize our network,” said Sriram Krishnasamy, senior vice president, strategic programs at FedEx Services. “So as FedEx moves critical high value shipments across the globe, we can often predict whether that delivery will be disrupted by weather or traffic and remediate that disruption by routing the delivery from another location.”


AI and data analytics company Databricks today announced the launch of SQL Analytics, a new service that makes it easier for data analysts to run their standard SQL queries directly on data lakes. And with that, enterprises can now easily connect their business intelligence tools like Tableau and Microsoft’s Power BI to these data repositories as well.

SQL Analytics will be available in public preview on November 18.

In many ways, SQL Analytics is the product Databricks has long been looking to build, the one that brings its concept of a ‘lake house’ to life. It combines the performance of a data warehouse, where you store data after it has already been transformed and cleaned, with a data lake, where you store all of your data in its raw form. The data in the data lake, a concept that Databricks’ co-founder and CEO Ali Ghodsi has long championed, is typically only transformed when it gets used. That makes data lakes cheaper, but also a bit harder to handle for users.
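To make that trade-off concrete, here is a toy contrast between the two models, with invented records: a warehouse transforms data once at load time, while a lake stores raw records cheaply and pays the transform cost at query time.

```python
# Toy contrast of the two storage models; the records are invented.
import json

raw_events = [
    '{"user": "a", "amount": "19.99"}',
    '{"user": "b", "amount": "5.00"}',
    '{"user": "a", "amount": "7.50"}',
]

def transform(record):
    """The cleaning step: parse the JSON and cast the amount to a number."""
    parsed = json.loads(record)
    return {"user": parsed["user"], "amount": float(parsed["amount"])}

# Warehouse model: pay the transform cost once, up front, and store the result.
warehouse = [transform(r) for r in raw_events]

# Lake model: keep raw_events as-is and transform lazily, at query time.
def total_spend(user):
    return sum(row["amount"] for row in map(transform, raw_events)
               if row["user"] == user)

print(total_spend("a"))
```

The lake keeps storage cheap and the raw data intact; the price is that every query repeats the transform, which is exactly the performance gap a ‘lake house’ tries to close.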


“We’ve been saying Unified Data Analytics, which means unify the data with the analytics. So data processing and analytics, those two should be merged. But no one picked that up,” Ghodsi told me. But ‘lake house’ caught on as a term.

“Databricks has always offered data science, machine learning. We’ve talked about that for years. And with Spark, we provide the data processing capability. You can do [extract, transform, load]. That’s always been possible. SQL Analytics enables you to now do the data warehousing workloads directly, and concretely, the business intelligence and reporting workloads, directly on the data lake.”

The general idea here is that with just one copy of the data, you can enable both traditional data analyst use cases (think BI) and the data science workloads (think AI) Databricks was already known for. Ideally, that makes both use cases cheaper and simpler.

The service sits on top of an optimized version of Databricks’ open-source Delta Lake storage layer to enable the service to quickly complete queries. In addition, Delta Lake also provides auto-scaling endpoints to keep the query latency consistent, even under high loads.

While data analysts can query these data sets directly, using standard SQL, the company also built a set of connectors to BI tools. Its BI partners include Tableau, Qlik, Looker and ThoughtSpot, as well as ingest partners like Fivetran, Fishtown Analytics, Talend and Matillion.


“Now more than ever, organizations need a data strategy that enables speed and agility to be adaptable,” said Francois Ajenstat, Chief Product Officer at Tableau. “As organizations are rapidly moving their data to the cloud, we’re seeing growing interest in doing analytics on the data lake. The introduction of SQL Analytics delivers an entirely new experience for customers to tap into insights from massive volumes of data with the performance, reliability and scale they need.”

In a demo, Ghodsi showed me what the new SQL Analytics workspace looks like. It’s essentially a stripped-down version of the standard code-heavy experience that Databricks users are familiar with. Unsurprisingly, SQL Analytics provides a more graphical experience that focuses more on visualizations and not Python code.

While there are already some data analysts on the Databricks platform, this obviously opens up a large new market for the company — something that would surely bolster its plans for an IPO next year.

For several years, blockchain technology has been touted as a way to verify the sale of property. Any kind of property. And so entrepreneurs busily began the process of trying to create a startup that could complete a property deal on the blockchain.

One that stood out from the start was Propy, which was started by Natalie Karayaneva, an experienced real-world property developer who subsequently joined the blockchain world. Propy’s other co-founder is Denitza Tyufekchieva.

Propy has now raised an undisclosed funding round from venture capital investor Tim Draper, best known for his early investments in Tesla, Skype, Twitter, Coindesk and Robinhood. TechCrunch understands this is part of a wider, ongoing fund-raise.

Propy’s platform uses blockchain technology to, it says, simplify the home purchasing experience and eliminate fraudulent transactions. The idea is to close a traditional real estate deal entirely online: the offer, signed purchase agreements via DocuSign, secure wire payments, and title deeds are all taken care of. Propy claims its platform saves 10 hours of paperwork per transaction.

“My vision for Propy is to bring self-driving real estate transactions to the world, with all of the logistics seamlessly executed on the back-end,” Karayaneva said in a statement. “Our platform offers a terminal to observe transactions in real-time, making the process transparent for real estate executives, title companies, homebuilders, buyers, and REITs. With this new investment we are excited to bring much-needed change to the industry, satisfy consumers and empower real estate professionals all over the world.”

But this is not some out-there, wacky crypto-play. Most of the transactions are done in dollars on Propy, meaning it could be used by mainstream users from day one, as it’s able to process wire transfers via integration with a money transmitter connected to 70 banks.

Speaking to TechCrunch, Karayaneva added: “We do not replace lawyers, but rather help them: closing attorneys share documents with consumers and agents via Propy. With DocuSign integrated, they can sign the documents on Propy and all parties get notified. In the US, agents have ready forms in Propy to fill out and they don’t need lawyers in a transaction at all.”

Crucially, Propy has an enterprise play going on here as well. Its platform can provide the back-office system to real estate enterprises with real-time transaction reports and automated compliance.

Draper said: “Propy has the potential to transform Real Estate, making transactions and titles simpler, more secure, and less expensive through innovative use of blockchain technology. [It] eliminates fraud and makes the closing process more secure, effective, and streamlined.”

According to one survey, almost one-fifth of millennials have now thought about buying a home because of the lockdowns induced by the Covid-19 pandemic, meaning that many will be looking for an easy way to transact, especially one with the ease of use Propy offers.

Propy has some fellow travelers in the blockchain prop-tech space. ShelterZoom is a blockchain platform used for virtual and remote collaboration with offices and clients, while StreetWire is a blockchain-based data service for the real estate industry.

Varada, a Tel Aviv-based startup that focuses on making it easier for businesses to query data across services, today announced that it has raised a $12 million Series A round led by Israeli early-stage fund MizMaa Ventures, with participation by Gefen Capital.

“If you look at the storage aspect for big data, there’s always innovation, but we can put a lot of data in one place,” Varada CEO and co-founder Eran Vanounou told me. “But translating data into insight? It’s so hard. It’s costly. It’s slow. It’s complicated.”

That’s a lesson he learned during his time as CTO of LivePerson, which he described as a classic big data company. And just like at LivePerson, where the team had to reinvent the wheel to solve its data problems, again and again, every company — and not just the large enterprises — now struggles with managing their data and getting insights out of it, Vanounou argued.


The rest of the founding team, David Krakov, Roman Vainbrand and Tal Ben-Moshe, already had a lot of experience in dealing with these problems, too, with Ben-Moshe having served as the Chief Software Architect of Dell EMC’s XtremIO flash array unit, for example. They built the system for indexing big data that’s at the core of Varada’s platform (with the open-source Presto SQL query engine being one of the other cornerstones).


Essentially, Varada embraces the idea of data lakes and enriches that with its indexing capabilities. And those indexing capabilities are where Varada’s smarts can be found. As Vanounou explained, the company is using a machine learning system to understand when users tend to run certain workloads and then caches the data ahead of time, making the system far faster than its competitors.

“If you think about big organizations and think about the workloads and the queries, what happens during the morning time is different from evening time. What happened yesterday is not what happened today. What happened on a rainy day is not what happened on a shiny day. […] We listen to what’s going on and we optimize. We leverage the indexing technology. We index what is needed when it is needed.”
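Varada hasn’t published how this works internally, so the following is only an invented sketch of the behavior Vanounou describes: record which datasets are queried at which hour, then warm the index for the historically hot ones ahead of time.

```python
# Invented sketch of time-aware predictive warming: record which datasets
# are queried at which hour of day, then pre-index the historically hot ones.
# This illustrates the idea only; it is not Varada's actual system.
from collections import defaultdict

class WorkloadPredictor:
    def __init__(self):
        # counts[hour][dataset] -> how often it was queried at that hour
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, hour, dataset):
        self.counts[hour][dataset] += 1

    def datasets_to_warm(self, hour, top_n=2):
        """Datasets to index ahead of the given hour, hottest first."""
        by_freq = self.counts[hour]
        return sorted(by_freq, key=by_freq.get, reverse=True)[:top_n]

p = WorkloadPredictor()
for _ in range(5):
    p.record(9, "sales")        # morning reporting queries
for _ in range(3):
    p.record(9, "inventory")
p.record(21, "clickstream")     # evening batch job
print(p.datasets_to_warm(9))    # ['sales', 'inventory']
print(p.datasets_to_warm(21))   # ['clickstream']
```

A real system would fold in more signals than hour of day (day of week, seasonality, even weather, per the quote above), but the shape is the same: learn the recurring pattern, then index what is needed when it is needed.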

That helps speed up queries, but it also means less data has to be replicated, which also brings down the cost. As MizMaa’s Aaron Applebaum noted, since Varada is not a SaaS solution, the buyers still get all of the discounts from their cloud providers, too.

In addition, the system can allocate resources intelligently so that different users can tap into different amounts of bandwidth. You can tell it to give customers more bandwidth than your financial analysts, for example.
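As a minimal sketch of that idea (the groups, weights and numbers are all invented), proportional allocation by configured weight might look like:

```python
# Invented sketch of the resource-allocation idea: split available
# bandwidth across user groups in proportion to a configured weight.

def allocate(total_bandwidth, weights):
    """Give each group a share of bandwidth proportional to its weight."""
    total_weight = sum(weights.values())
    return {group: total_bandwidth * w / total_weight
            for group, w in weights.items()}

# Customers get three times the share of internal financial analysts.
shares = allocate(1000, {"customers": 3, "financial-analysts": 1})
print(shares)  # {'customers': 750.0, 'financial-analysts': 250.0}
```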

“Data is growing like crazy: in volume, in scale, in complexity, in who requires it and what the business intelligence uses are, what the API uses are,” Applebaum said when I asked him why he decided to invest. “And compute is getting slightly cheaper, but not really, and storage is getting cheaper. So if you can make the trade-off to store more stuff, and access things more intelligently, more quickly, more agile — that was the basis of our thesis, as long as you can do it without compromising performance.”

Varada, with its team of experienced executives, architects and engineers, ticked a lot of the company’s boxes in this regard, but he also noted that unlike some other Israeli startups, the team understood that it had to listen to customers and understand their needs, too.

“In Israel, you have a history — and it’s become less and less the case — but historically, there’s a joke that it’s ‘ready, fire, aim.’ You build a technology, you’ve got this beautiful thing and you’re like, ‘alright, we did it,’ but without listening to the needs of the customer,” he explained.

The Varada team is not afraid to compare itself to Snowflake, which at least at first glance seems to make similar promises. Vanounou praised the company for opening up the data warehousing market and proving that people are willing to pay for good analytics. But he argues that Varada’s approach is fundamentally different.

“We embrace the data lake. So if you are Mr. Customer, your data is your data. We’re not going to take it, move it, copy it. This is your single source of truth,” he said. And in addition, the data can stay in the company’s virtual private cloud. He also argues that Varada isn’t so much focused on the business users but the technologists inside a company.

 

Early-stage companies often have trouble dealing with the amount of data that runs through the organization, especially as it grows. Large sums are spent on data software to wrangle dislocated data and build data pipelines, all of which involves data warehousing, cleaning tools and a visualization platform.

Count is a startup attempting to create an all-in-one data platform to deal with this problem, providing early-stage teams with tools to build data pipelines more cheaply.

It’s also coming out of stealth mode and announcing a $2.4m fund-raise led by LocalGlobe, with participation from Global Founders Capital. Its angel investors include Charlie Songhurst, the former head of corporate strategy at Microsoft.

The company was founded in 2016 by former management consultant Oliver Hughes and Imperial College physicist Oliver Pike, who identified that companies weren’t able to make data-driven decisions because of the complexity of standard data software and the technical and design constraints accepted by the industry. 

In a statement, Hughes described the problem they are addressing: “The teams making the most progress were having to invest hundreds of thousands of dollars a year, across separate solutions, to help them get their data under control and it was taking them up to 12-18 months to purchase and implement it all. So many startups get locked into long term contracts with tools that are no longer suitable for them. Count has a simple pay-as-you-go model so teams can start using the platform for free and only pay more as their team and data grow.”

Remus Brett, Partner at LocalGlobe, said: “Most people know that data is incredibly important but the ability to take it and tell stories with it still remains difficult. Now more than ever, we see the value in being able to process and analyze data at speed, to help us make critical decisions. Count makes it possible for even very early stage companies to begin making decisions based on analysis of their data.”

Edd Read, CTO at Tiney.co, which uses Count, said: “Count has given us a way to pull all our data together and build reports for the whole team. Notebooks are a powerful way for us to share insights in context and give the team the ability to query data without having to learn SQL.”

Count competes with a number of different solutions, including data warehouses such as Snowflake, data cleaning tools like dbt, and analytics platforms like Looker.

Google says the coronavirus has become its biggest search topic by a country mile this year, and to continue its efforts to harness that attention in the best possible way, late on Friday the company launched a new information portal dedicated to the pandemic, as well as an improved search experience for desktop and mobile.

The search experience, Google says, was updated in response to “people’s information needs expanding,” while the new information portal also provides the basic, most useful information (for example around symptoms), plus a lot of links and on-site options to explore further.

Notably absent from Google’s page and search experience are any links to conversation forums or places to hear from and talk to other ordinary people. Google has never been particularly successful in its many efforts to break into social media, and this underscores that, while also letting it steer clear of forums that are not always well managed. I would imagine that more tools for direct communication, such as Google Hangouts, and possibly others in that same category, might well be added or linked to over time.

Let’s dive into some more details.

The new search experience now not only includes search results but also a number of additional links to “authoritative information” from health authorities and updated data and visualisations.

“This new format organizes the search results page to help people easily navigate information and resources, and it will also make it possible to add more information over time as it becomes available,” Emily Moxley, Google’s product manager for search, writes in a blog post.

The search experience now also includes links to a Twitter carousel featuring accounts from civic organizations local to you, and also a new “most common questions” section related to the pandemic from the World Health Organization and the Centers for Disease Control and Prevention.

This is rolling out first in the US in English and Google said it would be adding more languages and regions soon.

Meanwhile, the portal — also available first for the US — features tips on staying healthy and advice for those who are concerned; links to further official resources; links to more localised resources; links to fundraising efforts; the latest statistics; and an overview of all of Google’s own work (for example, the specific efforts it’s making for educators). We have asked the company when and if it plans to cover other regions beyond the US, and we’ll update this as we learn more.

This is an important move for Google. The internet has figured as a critical platform from the earliest days of the novel coronavirus emerging out of China, but it hasn’t all been positive.

On one hand, there has been a ton of misinformation spread about the virus, and the internet overall (plus specific sites like Google’s search and social media platforms like Facebook and Twitter) has played a huge role in disseminating that bad information. (Not all those searches and clicks lead to the right information, or good data, unfortunately.)

On the other hand, it’s also been an indispensable resource: in countries where health services have already become overwhelmed by the influx of people seeking help, official online portals (like this one) are serving a very important role in triaging inbound requests before people resort to physically getting themselves into the system (if they need to). And the internet is the main place people will turn in the days and weeks ahead as they are asked to socially isolate themselves to slow down the spread of the pandemic, serving its role in providing information, but hopefully also some diversion and enrichment.

Google’s site is bringing together as many of the positive and legitimate strands of information as it can.

The main page focuses on the most important basics: a brief overview of the virus, a list of the most common symptoms, a list of the most common things you can do to prevent getting infected or spreading the infection, and a (very brief, for now) section on treatments.

From this, it goes on to more detailed links to videos and other resources for specific interests, such as advice for the elderly; a map-based data overview to monitor what is going on elsewhere; and then resources for further help on topics that are coming up a lot, such as advice for people working from home, how to set up self-isolation, online education advice, cooking resources and more. The relief-efforts section so far has only one link, to the Solidarity Response Fund started by the UN Foundation, which has had a donation of $50 million from Google.

There are a number of other relief and fundraising efforts underway, including those to help fund the race for research to improve the medical tools and medicine we have to fight this. I think the idea is that all of these sections will grow and evolve as the situation evolves.

Databricks today announced the launch of its new Data Ingestion Network of partners and of its Databricks Ingest service. The idea here is to make it easier for businesses to combine the best of data warehouses and data lakes into a single platform — a concept Databricks likes to call ‘lakehouse.’

At the core of the company’s lakehouse is Delta Lake, Databricks’ Linux Foundation-managed open-source project that brings a new storage layer to data lakes that helps users manage the lifecycle of their data and ensures data quality through schema enforcement, log records and more. Databricks users can now work with the first five partners in the Ingestion Network — Fivetran, Qlik, Infoworks, StreamSets, Syncsort — to automatically load their data into Delta Lake. To ingest data from these partners, Databricks customers don’t have to set up any triggers or schedules — instead, data automatically flows into Delta Lake.

“Until now, companies have been forced to split up their data into traditional structured data and big data, and use them separately for BI and ML use cases. This results in siloed data in data lakes and data warehouses, slow processing and partial results that are too delayed or too incomplete to be effectively utilized,” says Ali Ghodsi, co-founder and CEO of Databricks. “This is one of the many drivers behind the shift to a Lakehouse paradigm, which aspires to combine the reliability of data warehouses with the scale of data lakes to support every kind of use case. In order for this architecture to work well, it needs to be easy for every type of data to be pulled in. Databricks Ingest is an important step in making that possible.”

Databricks VP of Product Marketing Bharath Gowda also tells me that this will make it easier for businesses to perform analytics on their most recent data and hence be more responsive when new information comes in. He also noted that users will be able to better leverage their structured and unstructured data for building better machine learning models, as well as to perform more traditional analytics on all of their data instead of just a small slice that’s available in their data warehouse.

 

Dating app Tinder is the latest tech service to find itself under formal investigation in Europe over how it handles user data.

Ireland’s Data Protection Commission (DPC) has today announced a formal probe of how Tinder processes users’ personal data, the transparency surrounding its ongoing processing, and its compliance with its obligations with regard to data subject rights requests.

Under Europe’s General Data Protection Regulation (GDPR) EU citizens have a number of rights over their personal data — such as the right to request deletion or a copy of their data.

Entities processing people’s personal data, meanwhile, must have a valid legal basis to do so.

Data security is another key consideration baked into the data protection regulation.

The DPC said complaints about the dating app have been made from individuals in multiple EU countries, not just in Ireland — with the Irish regulator taking the lead under a GDPR mechanism to manage cross-border investigations.

It said the Tinder probe came about as a result of active monitoring of complaints received from individuals “both in Ireland and across the EU” — in order to identify “thematic and possible systemic data protection issues”.

“The Inquiry of the DPC will set out to establish whether the company has a legal basis for the ongoing processing of its users’ personal data and whether it meets its obligations as a data controller with regard to transparency and its compliance with data subject rights requests,” the DPC added.

It’s not clear exactly which GDPR rights have been complained about by Tinder users at this stage.

We’ve reached out to Tinder for a response.

Also today, the DPC finally responded to long-standing complaints by consumer rights groups about Google’s handling of location data — announcing a formal investigation of that, too.

The amount of data that most companies now store — and the places they store it — continues to increase rapidly. With that, the risk of the wrong people managing to get access to this data also increases, so it’s no surprise that we’re now seeing a number of startups that focus on protecting this data and how it flows between clouds and on-premises servers. Satori Cyber, which focuses on data protection and governance, today announced that it has raised a $5.25 million seed round led by YL Ventures.

“We believe in the transformative power of data to drive innovation and competitive advantage for businesses,” the company says. “We are also aware of the security, privacy and operational challenges data-driven organizations face in their journey to enable broad and optimized data access for their teams, partners and customers. This is especially true for companies leveraging cloud data technologies.”

Satori is officially coming out of stealth mode today and launching its first product, the Satori Cyber Secure Data Access Cloud. This service provides enterprises with the tools to provide access controls for their data, but maybe just as importantly, it also offers these companies and their security teams visibility into their data flows across cloud and hybrid environments. The company argues that data is “a moving target” because it’s often hard to know how exactly it moves between services and who actually has access to it. With most companies now splitting their data between lots of different data stores, that problem only becomes more prevalent over time and continuous visibility becomes harder to come by.

“Until now, security teams have relied on a combination of highly segregated and restrictive data access and one-off technology-specific access controls within each data store, which has only slowed enterprises down,” said Satori Cyber CEO and Co-founder Eldad Chai. “The Satori Cyber platform streamlines this process, accelerates data access and provides a holistic view across all organizational data flows, data stores and access, as well as granular access controls, to accelerate an organization’s data strategy without those constraints.”

Both co-founders previously spent nine years building security solutions at Imperva and Incapsula (which Imperva acquired in 2014). Based on this experience, they understood that onboarding had to be as easy as possible and that operations would have to be transparent to the users. “We built Satori’s Secure Data Access Cloud with that in mind, and have designed the onboarding process to be just as quick, easy and painless. On-boarding Satori involves a simple host name change and does not require any changes in how your organizational data is accessed or used,” they explain.

 

 

If you’ve managed to convince yourself that only large enterprises have the money to take advantage of Business Intelligence (BI), then think again. In the past, companies needed to hire expensive experts to really delve into BI. But these days, there is a range of affordable self-service tools that will allow small- and medium-sized businesses (SMBs) to make use of BI. What’s more, your SMB creates and holds much more data than you realize, which means you can start using BI for your business.

You’ve already got the data you need

It’s easy to underestimate the amount of data your SMB already has at its disposal. In every area of your business, from finance and sales to customer relations and website management, the software packages you use to simplify your everyday operations are packed with reams of information that most of us don’t even think twice about. By talking to key stakeholders in your organization’s various departments, you can get an idea of the kind of data you already have, how it’s generated, and where it’s stored. You can then start to think about using BI tools to transform that information into meaningful business insights that will inform your decision-making. No need for you to invest in time-consuming data generation from scratch!

Self-service BI tools are plentiful — and affordable

The emergence of self-service BI puts useful business analytics within reach of smaller business owners who lack the deep pockets of larger corporations. In fact, there are numerous self-service BI tools that you can use to get started in this area without even spending a dime. Microsoft Power BI is a powerful application that’s pleasingly user-friendly, and most businesses will find the functions they need in the free version. Zoho Analytics has a low entry-level cost, too, and the slightly pricier yet still affordable Tableau is another option worth exploring.

It’s easy to get started

BI is an intimidating term, especially for the average business owner. But by taking small steps, it’s easy for anyone to get started — and before you know it, you’ll be enjoying the benefits of having data-driven, intelligence-based insights that will enable you to make better business decisions.

Most self-service BI tools come with built-in suggestions for reports that businesses commonly run and find useful. Other worthwhile statistics to explore include the percentage of your clients who cancel within a set period, website landing pages that generate the longest visits, your most profitable individual products or services, the days or months in which you generate your highest revenues, and which of your clients bring in the most revenue and profit.
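To make the metrics above concrete, here is a minimal sketch of the kind of questions a BI tool answers, computed by hand over a small, entirely hypothetical sales table (the client names and figures are invented for illustration):

```python
from collections import defaultdict
from datetime import date

# Hypothetical sales records of the sort an SMB's invoicing
# software already holds.
sales = [
    {"client": "Acme Co", "amount": 1200.0, "date": date(2024, 1, 15)},
    {"client": "Bolt Ltd", "amount": 450.0, "date": date(2024, 1, 28)},
    {"client": "Acme Co", "amount": 980.0, "date": date(2024, 2, 3)},
    {"client": "Cog Inc", "amount": 2100.0, "date": date(2024, 2, 17)},
]

# Which months generate the highest revenues?
revenue_by_month = defaultdict(float)
for sale in sales:
    revenue_by_month[sale["date"].strftime("%Y-%m")] += sale["amount"]

# Which clients bring in the most revenue?
revenue_by_client = defaultdict(float)
for sale in sales:
    revenue_by_client[sale["client"]] += sale["amount"]

top_client = max(revenue_by_client, key=revenue_by_client.get)

print(dict(revenue_by_month))  # monthly revenue totals
print(top_client)              # highest-revenue client
```

A self-service BI tool does exactly this kind of grouping and ranking for you, across far larger datasets, with charts and dashboards instead of print statements.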

Truly harnessing data is the future of the business world; it’s how companies like yours can make smarter decisions that increase efficiency and profitability. And with self-service tools available, SMBs no longer need a massive budget to afford the benefits of BI. To find out more about putting in place the tools that can help you do smarter business, just give us a call.