Steve Thomas - IT Consultant

Data quality has become a salient and increasingly critical part of the world of data science: enterprises are sitting on growing troves of information, but that information is only useful if they can trust it to be accurate and usable. To that end, Validio, a startup building tools to improve and ensure data quality — specifically, tools that let users clean up data both in data warehouses and elsewhere, as well as in real-time streams — is announcing a seed round to mark its emergence from stealth. The Stockholm-based company has raised $15 million, funding that it plans to use for business and product development, R&D and hiring more talent.

Lakestar — the London-based VC that made early investments in companies like Facebook and Airbnb but has largely focused on backing promising startups out of Europe (it also backed Skype, Spotify, Revolut and many others) — led this round, with J12 and several high-profile individuals also participating.

(The list includes footballer (soccer player) Zlatan Ibrahimović, Snowflake’s CMO Denise Persson, MongoDB’s co-founder Kevin Ryan, Neo4j co-founder Emil Eifrem, DeepMind’s head of product Mehdi Ghissassi and Kim Fai Kok & Dara Gill of angel collective Framtid.)

As with a lot of enterprise startups in stealth these days, Validio has been using the time since being founded in 2019 to work quietly on its product while also signing up customers for live deployments. Its clients range across the usual suspects in the big data game — those in marketing and commerce, security companies, and business intelligence. Validio doesn’t disclose a lot of names but notes a few: Budbee and Babyshop in the e-commerce space; e-scooter company Voi; and electricity startup Tibber.

The challenge that Validio has identified and is addressing is one that CEO and co-founder Patrik Liu Tran said he encountered early in his working life. A math and computer whiz, he finished school aged 16 and also accelerated his time at university, going to work in 2014/2015, while still a teenager, consulting for companies on AI projects. It was still a nascent endeavor in most places (frankly, it still is), and one of the big issues, apart from having few people in the field prepared to go into companies to work on their problems, was the lack of integrity and quality in the data those companies were trying to use in their machine learning models, he said.

“At every company that I was advising, the thing that caught my attention was the lack of trust in data, so much that people did very little with it, and there were no tools really to help with that,” he said in an interview. He added that the first efforts at identifying the issue and trying to deal with it (such as the Great Expectations open source project, created by the team behind Superconductive) were promising, but did not focus on real-time information as much as on data in warehouses.

“But machine learning resides in streams, not the warehouse,” he said. 

Beyond that, he said, such tools are generally too reliant on rules that engineers and data scientists need to set, regularly monitor and tweak.

Validio’s approach is not exactly to create low-code tools. “We’re building for data engineers. It’s very technical,” Tran said, slightly surprised by my question about that. “But we are focusing on a smooth user experience.”

That includes using machine learning and statistical analysis to “teach” a user’s system to find and respond more quickly to the data coming through the pipeline; sets of rules that are created automatically for an engineer to use or to complement with customized rules; automated thresholds and auto-resolution capabilities; and more.

“We want to make it as seamless as possible for data engineers to do their work,” he added.

The company doesn’t have a larger set of rules that it applies across the platform, but has built it to be tailored to individual organizations.

“‘Data quality’ is hard to define. What is good for one company might be bad for another,” Tran said. “Data is never perfect and companies also need to start to accept that.” But the list of its investors (including some attached to strategic names) is a sign that others may well be singing the same tune, and that they buy into how Validio specifically is building to address the problem: tools to improve data quality, built for the real world.

There are a few other companies that have identified the market for data quality and are building to address it — including Great Expectations creator Superconductive, which raised $40 million earlier this year, along with heavyweights like Microsoft, SAS and Talend — but for now Validio’s approach is one that seems to be striking the right chord, enough to justify expanding bets in what is still a young space.

“As data teams are increasingly shifting their focus toward data quality, we believe that Validio is uniquely positioned to become the next big global software player from Europe,” noted Stephen Nundy, Lakestar partner, in a statement. “Validio has built its platform with a unique architecture, enabling the management of data quality in data warehouses, lakes and streams both on the actual data and metadata in real-time. We look forward to supporting the stellar Validio team in their journey building a global data infrastructure leader.”

When Microsoft launched Viva last year, it framed the platform as an employee portal where you might go to find out parental leave policy or other internal communications directed more generally at company policies and culture. It further reinforced this idea last month when it released Viva Goals, a Viva module designed to give employees access to their KPIs.

But it seems that Microsoft has broader ambitions for Viva than simply providing the kind of general information employees would find in a typical intranet. Today, it announced the first of what could be multiple job-specific experiences inside Viva, starting with sales.

Emily He, corporate vice president in charge of business applications at Microsoft, says the announcement brings information together for specific jobs in a way she has heard described as a kind of employee holy grail for years, across companies and roles, and that it was one of the reasons she was drawn to Microsoft.

“Viva Sales in my mind really represents a new way of working by breaking down silos of data and breaking down silos of experience,” she told TechCrunch.

She said one thing she has learned in working with salespeople is that they have too many tools, and they need a way to pull meaningful information out of the tools they are using and present it in a more centralized way. “They really want a more simplified experience. So Viva Sales enables a seller to use the tools they already love and use every day including your email system like Outlook, Word documents, PowerPoint presentations, as well as Teams,” she said.

The tool is built on Office 365 and tuned for Microsoft Dynamics 365 CRM. By tagging a customer name or contact, Viva Sales can pull the documents, spreadsheets, presentations, emails and other materials into the CRM tool automatically, all organized under the tag, greatly reducing the amount of manual data entry required.

“Sellers do spend a lot of time manually entering account information or forecasting data. So this eliminates [much of the] manual data entry. But more importantly, now it generates a more holistic view of the customer,” He told me.

With all that data stored in a single place, it means that customers can use it to fuel machine learning models around how to improve sales. “You can use AI and machine learning to come up with recommendations for the sellers and deliver those recommendations to the sellers wherever they are, whether they’re writing their emails or in virtual meetings,” she said.

While it appears to be Microsoft-centric, out of the box it will also support Salesforce CRM, and He says that they may add support for additional tools over time as customer demand dictates. Further, the company plans to add more job types to Viva over time.

The end game here appears to be extending the employee communications portal to include not only the company materials that are useful to employees, but also tools for doing their specific jobs. She says Microsoft is doing this because it has been hearing employees ask for this kind of help from inside the same portal.

It’s worth mentioning that Viva Sales will be offered for free to Microsoft Dynamics 365 customers, but if you access third-party data, such as Salesforce data, you will be charged for using the tool.

Viva Sales will be available in public preview in July and is scheduled to GA in the fall. For now, the only other CRM integration available besides Dynamics 365 will be Salesforce.

The OpenInfra Foundation, the open-source foundation that used to be the OpenStack Foundation until it expanded its scope beyond its flagship project a few years ago, today announced an interesting new way for companies to fund open-source projects inside the foundation. Traditionally, corporate members of open-source foundations support the organization by paying a membership fee, which, for the most part, the foundations then distribute as they see fit. Now, with its new ‘Directed Funding’ model, the OpenInfra Foundation will let members direct their funds to a specific project.

“I think, in general, our communities are well recognized for having very strong technical governance and very clear rules around how technical decisions are made, and people appreciate that and the firewall between sponsorship and those technical decisions,” Jonathan Bryce, the OpenInfra Foundation’s CEO and executive director, told me ahead of today’s announcement. “I think what we were kind of missing in some cases in that model was the ability to — and maybe you’ll see where the term came from — direct funding to a specific project.”

The foundation didn’t previously allow this because, as Bryce noted, it can create mixed incentives and a pay-for-play dynamic that the organization has always tried to avoid. But at the same time, there was a lot of interest in the community in supporting specific projects, which makes sense, given that the foundation is now home to a wider variety of projects and not every member is heavily invested in every one of them.

Bryce noted that the foundation leadership and board spent a lot of time thinking about how to marry the core principles of the foundation with this new model. The result is a model that tries to combine the best of the OpenStack/OpenInfra technical governance model that has worked quite well over the last 10 years with these new financial considerations.

Every new project under this ‘directed funding’ model will get its own legal entity that will hold the project funding. To ensure that new projects are legit, an OpenInfra Platinum member (there are nine right now, including Ant Group, Huawei, Meta, Microsoft and Red Hat) has to serve as the sponsor for the project, and other organizations can then join the project fund. If a sponsor company isn’t an OpenInfra member, it has to become one. All of these funding members then form a project fund governing board, and that board decides the fees needed to create a budget. Meanwhile, the OpenInfra Foundation will deliver community-building services to these projects.

“It’s not a new approach to how these projects get governed technically. That is actually what has worked super well for a really long time. It’s why new projects have wanted to come and work with our community and our foundation — because of all the trailblazing stuff that happened with OpenStack around technical governance,” OpenInfra COO Mark Collier noted. He added that there are a lot of companies that are looking to build bigger ecosystems around their open-source projects and accelerate adoption — and they are willing to put money behind that. But at the same time, both Collier and Bryce noted that the foundation has put a system in place that they believe will prevent the organization from accepting bad projects for the sake of revenue.

At least for the time being, this new model will only apply to new projects that join the foundation. Bryce and Collier noted that there may be some existing projects where the organization could apply this new model retroactively, but for now, that’s not on the roadmap.

Since it expanded beyond OpenStack, the OpenInfra Foundation has added projects like Kata Containers for increased container security, Airship for infrastructure lifecycle management, the StarlingX edge computing stack and the Zuul CI/CD platform.

“The most important thing we’ve learned from each of these successful projects is that collaboration is key and the more breadth in the ecosystem of support the better,” said Thierry Carrez, general manager of the OpenInfra Foundation. “In fact, we’ve found that the most successful open source projects are funded by multiple companies, because they are able to combine their resources to achieve a much stronger rate of return.”

For the OpenInfra Foundation, this new model is clearly also a way to bring new projects — and new members — into the fold. Its models for managing open-source projects in a multi-party ecosystem — both through the new directed funds and its more traditional approach — may not be for every project, something the leadership team readily acknowledges. Even if the OpenInfra Foundation only gets a smaller portion of projects, though, the number of open-source projects is only going up as the need for sophisticated cloud infrastructure increases, all while those projects become more complex at the same time.

“When it comes down to it, there’s going to be a lot of projects and that’s why we have multiple foundations,” Bryce noted, though he also acknowledged that the team may not be as aggressive about recruiting as some other foundations may be.

In addition to the new funding model, the Foundation also today announced two new members at its Gold level: Bloomberg Engineering and the Canadian cloud computing service Vexxhost.

As for the Foundation’s various projects, the Foundation also announced a couple of milestone releases, including version 2.0 of Kata Containers, version 5.0 of Zuul and the launch of StarlingX 6.0.

“The Foundation celebrates its 10th anniversary this year, and as we look to our next decade of open infrastructure, we’re building momentum on what makes our model so successful: aligning companies and individuals who wish to work together, providing them with a framework and tools to effectively collaborate, and helping them invest their funds to best help the project they care about,” said Collier.

Microsoft’s Windows 11 operating system offers a number of improvements over Windows 10, including a new Start menu and a more functional taskbar. If you have just purchased a laptop running Windows 11, or are planning to upgrade your current device, you will need to know how to set it up. In this blog post, we provide a guide on how to customize your Windows 11 laptop.

1. Set up how your device checks for updates

New laptops usually automatically check for updates, but you can also manually do this. Click the gear icon above the Start button to go to Settings, choose Windows Update, and then click Check for updates.

You can also type “updates” into the search box and click Check for updates.

2. Create a restore point

It is ideal to set up a restore point on your laptop, which is a backup of your entire operating system. Doing this can save you a lot of time, effort, and even money in case something goes wrong with your device.

To set up a restore point, simply type “restore” into the search bar and click Create a restore point. You’ll be taken to the System Protection tab of the System Properties window. From there, you can choose what you want to be included in the backup. Click the Configure button to apply your choices. Enable “Turn on system protection” if it’s not already on. Finally, choose how much disk space to reserve, which is ideally not more than 2–3% of your total disk space.
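If you are setting up several machines, this step can also be scripted. Below is a minimal sketch in Python that shells out to the built-in PowerShell cmdlets Enable-ComputerRestore and Checkpoint-Computer; run it from an elevated prompt, and treat the drive letter and description as example values.

```python
# Minimal sketch: create a restore point by calling Windows PowerShell
# from Python. Requires an elevated (administrator) prompt.
import subprocess

def run_powershell(command: str) -> None:
    # -NoProfile skips profile scripts; check=True raises on failure.
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        check=True,
    )

# Equivalent to enabling "Turn on system protection" for the C: drive.
run_powershell('Enable-ComputerRestore -Drive "C:\\"')

# Create the restore point itself (the description is just an example).
run_powershell(
    'Checkpoint-Computer -Description "Fresh setup" '
    '-RestorePointType "MODIFY_SETTINGS"'
)
```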

3. Choose a power plan

To help prolong your laptop’s battery life, you can choose from Windows 11’s Power Saver, High Performance, and Balanced power plans. Type “power plan” in the search box and choose either “Edit power plan” or “Choose a power plan.” The Edit power plan option lets you set when the laptop display will automatically turn off and when the machine will go to sleep. Picking “Choose a power plan” takes you to a page where you can create and customize your power settings.

The default recommended plan is Balanced, but if you want to create your own, click on the “Create a power plan” option on the left part of the screen. You can choose from three options depending on how you plan to use your laptop: Balanced, Power Saver, and High Performance. After selecting your preferred plan, give your new power plan a name, then click Next to set the display and sleep settings for your laptop. Once done choosing your preferred power settings, click on Create and you’re good to go.
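The same settings can also be driven from the command line with the built-in powercfg utility, wrapped here in a short Python sketch; the scheme alias and timeout values below are examples.

```python
# Minimal sketch: inspect and switch power plans via the built-in
# powercfg utility. SCHEME_BALANCED is a documented powercfg alias
# for the stock Balanced plan.
import subprocess

# List all power schemes (the active one is marked with an asterisk).
subprocess.run(["powercfg", "/list"], check=True)

# Activate the stock Balanced plan.
subprocess.run(["powercfg", "/setactive", "SCHEME_BALANCED"], check=True)

# Turn the display off after 10 minutes on AC power, and sleep after
# 15 minutes on battery (values are in minutes; 0 means never).
subprocess.run(["powercfg", "/change", "monitor-timeout-ac", "10"], check=True)
subprocess.run(["powercfg", "/change", "standby-timeout-dc", "15"], check=True)
```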

4. Set app installation tolerance level

For added security, you can restrict which apps can be installed on your laptop. Do this by going to Settings > Apps > Apps & features. From here, you can configure the “Choose where to get apps” setting: permit installations from only the Microsoft Store, from anywhere with a warning, or from anywhere without restriction.

5. Remove bloatware

Some vendors package new laptops with bundled apps and software, which are mostly unnecessary and unwanted programs called bloatware.

Windows 11 offers an easy way to see which apps are installed on your new laptop and a quick way to uninstall those you don’t need. Head to Settings > Apps > Apps & features and peruse the list of installed apps. If you don’t want an app and are 100% certain that your computer doesn’t need it, click on the hamburger menu to the right of the app, then choose Uninstall.
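For a larger cleanup, the same job can be done in bulk with the built-in Appx PowerShell cmdlets, called here from Python as a minimal sketch; the package name in the last step is a hypothetical example, so check the listing output before removing anything.

```python
# Minimal sketch: list installed Store apps and uninstall one by name
# using the built-in Appx PowerShell cmdlets.
import subprocess

def run_powershell(command: str) -> None:
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        check=True,
    )

# List package names for the current user to spot likely bloatware.
run_powershell("Get-AppxPackage | Select-Object Name")

# Remove a specific package (hypothetical name; verify it first).
run_powershell(
    'Get-AppxPackage -Name "*SomeVendorTrialApp*" | Remove-AppxPackage'
)
```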

6. Activate anti-ransomware

Ransomware is a form of malicious software that locks all your data until you pay a ransom to hackers.

To minimize the risk of ransomware attacks, type “Windows Security” into the search bar at the bottom of your screen and click on the Windows Security result. Go to Virus & threat protection, click Manage settings under “Virus & threat protection settings”, and go to “Controlled folder access”. From there, click the Manage Controlled folder access option and enable Controlled folder access; this protects you against ransomware attacks. By default, the Desktop, Documents, Music, Pictures, and Videos folders are protected, but you can add other folders that you’d like to be protected from ransomware.
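Controlled folder access can also be toggled with Microsoft Defender’s PowerShell cmdlets, shown here in a minimal Python sketch; the extra folder path is an example, and an elevated prompt is required.

```python
# Minimal sketch: enable Controlled Folder Access and protect one extra
# folder via Microsoft Defender's PowerShell cmdlets (run elevated).
import subprocess

def run_powershell(command: str) -> None:
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        check=True,
    )

# Same switch as the "Controlled folder access" toggle in Windows Security.
run_powershell("Set-MpPreference -EnableControlledFolderAccess Enabled")

# Protect a folder beyond the defaults (example path).
run_powershell(
    'Add-MpPreference -ControlledFolderAccessProtectedFolders "D:\\Projects"'
)
```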

There are myriad ways Windows 11 can be configured for optimization and security. This article barely scratches the surface of Windows 11’s security and efficiency settings. Call us today for a quick chat with one of our Microsoft experts about taking your operating system to the next level.


As more enterprises migrate apps and workloads into the cloud, so grows the need for more sophisticated tech to secure that activity. That’s resulted in a strong run of funding rounds for startups building products to address the gap. In the latest development, AppOmni — which has built a platform not just to connect with and secure SaaS apps, but to seek out, highlight and help fix vulnerabilities that arise when different apps are used together or in tandem — has raised $70 million. CEO Brendan O’Connor said the funding, a Series C, will be used both for international growth and to continue building out the platform.

AppOmni’s customers include large enterprises and tech names such as Dropbox, Ping, and Accenture as well as large Fortune 100 financial and healthcare companies, who use the platform both to secure their SaaS application stacks (AppOmni integrates with hundreds of SaaS apps including biggies like Box, Confluence, Fastly, GitHub, Google Workspace, Jira, Microsoft 365, Salesforce, ServiceNow, Slack, Workday and Zoom) and also, as of April, any custom apps that they are building and using alongside those.

Thoma Bravo is leading this round, with previous backers such as Scale Venture Partners, Salesforce Ventures, ClearSky, and Costanoa Ventures also investing. The valuation is not being disclosed but as a marker of where it might be, it has now raised $123 million; PitchBook notes that AppOmni was valued at $200 million post-money in its last round — a $40 million Series B in April 2021 that we covered here — and it has continued to have triple-digit growth since then. Collectively, it says it now secures apps covering 78 million users and 230 million exposed data records and more than 9 billion monthly events. So even amid the pressures that we have seen on funding overall, and the competition from other security startups searching for funding, there are signs that AppOmni is among the stronger tier of them.

The gap in the security market that AppOmni is targeting is a longstanding one that is in some ways becoming more critical with the evolution of IT. As more companies follow through on the promise of “digital transformation” and invest more in cloud services, they are adopting an ever-wider array of apps — some of which are “approved” by IT and some of which are not — that users can access from a growing number of endpoints (that is, devices such as laptops, desktops, phones and tablets, across home WiFi, public hotspots, mobile networks, office networks, and so on). That spaghetti of permutations, taken across a wide array of apps, creates a lot of crossovers that inadvertently lead to vulnerabilities.

These can pertain either to specific apps — AppOmni says that on average it finds more than 20 unauthorized app usages across a single engagement — or to specific types of data, or data records. The startup’s specialty is to search for these loopholes and provide alerts related to them, as well as begin the process of remediation to fix them. It also generates analytics for security operations teams to get a more comprehensive picture of activity across the network, to identify trends as well as handle specific events.

“SaaS has become one of the most essential parts of the IT stack. But even though SaaS apps now house extremely sensitive data and run some of the most critical business workflows, most organizations haven’t adequately prioritized SaaS security,” O’Connor told us. “That means that there is a huge amount of data sitting unsecured in the cloud. Our goal is to give businesses and enterprises an easy way to secure their SaaS data and keep it secure over time.”

O’Connor and his co-founder, CTO Brian Soby, cut their teeth exploring the cyber risks in cloud services over years of working for SaaS companies themselves, perhaps most notably at Salesforce, where O’Connor had been an SVP and “chief trust officer” and Soby had been a director of product security. Tellingly, their impact on Salesforce proved to be a positive one, with Salesforce now an investor in and partner of AppOmni’s. O’Connor went on to work in a similar capacity at ServiceNow, which, like other SaaS companies, faces many of the same issues of SaaS apps conflicting with each other (even when they appear to work together).

The traction it has had has helped the startup stand out among its peers, which include the likes of IBM and Amazon, as well as F5’s Threat Stack.

“The digitization of businesses across all sectors has accelerated the need for reliable data protection and control, and AppOmni’s security solutions are unmatched in the industry,” added Robert (Tre) Sayle, a partner at Thoma Bravo. “We have been impressed by AppOmni’s rapid scaling, high levels of customer satisfaction and continued product innovation, and we’re thrilled to partner with Brendan and his team as they capitalize on the sizable market opportunity ahead.”

Who will be liable for harmful speech generated by large language models? As advanced AIs such as OpenAI’s GPT-3 are being cheered for impressive breakthroughs in natural language processing and generation — and all sorts of (productive) applications for the tech are envisaged from slicker copywriting to more capable customer service chatbots — the risks of such powerful text-generating tools inadvertently automating abuse and spreading smears can’t be ignored. Nor can the risk of bad actors intentionally weaponizing the tech to spread chaos, scale harm and watch the world burn.

Indeed, OpenAI is concerned enough about the risks of its models going “totally off the rails,” as its documentation puts it at one point (in reference to a response example in which an abusive customer input is met with a very troll-esque AI reply), to offer a free content filter that “aims to detect generated text that could be sensitive or unsafe coming from the API” — and to recommend that users don’t return any generated text that the filter deems “unsafe.” (To be clear, its documentation defines “unsafe” to mean “the text contains profane language, prejudiced or hateful language, something that could be NSFW or text that portrays certain groups/people in a harmful manner.”)
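For a sense of how that filter was meant to be used, here is a condensed sketch based on OpenAI’s published guidance at the time, written against the then-current openai Python package (v0.x; the API has since changed). The filter replied with a single token label, and the docs also recommended a log-probability check before trusting an “unsafe” label, which is omitted here for brevity.

```python
# Condensed sketch of querying OpenAI's free content filter, per its
# documentation at the time (openai Python package v0.x; since changed).
# The filter returns one token: "0" safe, "1" sensitive, "2" unsafe.
import openai

openai.api_key = "sk-..."  # placeholder

def content_filter_label(text: str) -> str:
    response = openai.Completion.create(
        engine="content-filter-alpha",
        prompt="<|endoftext|>" + text + "\n--\nLabel:",
        temperature=0,
        max_tokens=1,
        top_p=0,
        logprobs=10,
    )
    return response["choices"][0]["text"]

generated = "some model output"
if content_filter_label(generated) == "2":
    # Per the guidance: don't return text the filter deems unsafe.
    print("Flagged as unsafe; withholding this output.")
```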

But, given the novel nature of the technology, there are no clear legal requirements that content filters must be applied. So OpenAI is acting either out of concern to avoid its models causing generative harms to people, or out of reputational concern, since if the technology gets associated with instant toxicity, that could derail development.

Just recall Microsoft’s ill-fated Tay AI Twitter chatbot — which launched back in March 2016 to plenty of fanfare, with the company’s research team calling it an experiment in “conversational understanding.” Yet it took less than a day to have its plug yanked by Microsoft after web users ‘taught’ the bot to spout racist, antisemitic and misogynistic hate tropes. So it ended up a different kind of experiment: In how online culture can conduct and amplify the worst impulses humans can have.

The same sorts of bottom-feeding internet content have been sucked into today’s large language models — because AI model builders have crawled all over the internet to obtain the massive corpora of free text they need to train and dial up their language-generating capabilities. (For example, per Wikipedia, 60% of the weighted pre-training dataset for OpenAI’s GPT-3 came from a filtered version of Common Crawl — aka a free dataset of scraped web data.) Which means these far more powerful large language models can, nonetheless, slip into sarcastic trolling and worse.

European policymakers are barely grappling with how to regulate online harms in current contexts like algorithmically sorted social media platforms, where most of the speech can at least be traced back to a human — let alone considering how AI-powered text generation could supercharge the problem of online toxicity while creating novel quandaries around liability.

And without clear liability it’s likely to be harder to prevent AI systems from being used to scale linguistic harms.

Take defamation. The law is already facing challenges with responding to automatically generated content that’s simply wrong.

Security researcher Marcus Hutchins took to TikTok a few months back to show his followers how he’s being “bullied by Google’s AI,” as he put it. In a remarkably chipper clip, considering he’s describing a Kafkaesque nightmare in which one of the world’s most valuable companies continually publishes a defamatory suggestion about him, Hutchins explains that if you google his name, the search engine results page (SERP) includes an automatically generated Q&A — in which Google erroneously states that Hutchins made the WannaCry virus.

Hutchins is actually famous for stopping WannaCry. Yet Google’s AI has grasped the wrong end of the stick on this essential, not-at-all-tricky-to-distinguish difference, and, seemingly, keeps getting it wrong. Repeatedly. (Presumably because so many online articles cite Hutchins’ name in the same span of text as “WannaCry” — but that’s because he’s the guy who stopped the global ransomware attack from being even worse than it was. So this is some real artificial stupidity in action from Google.)

To the point where Hutchins says he’s all but given up trying to get the company to stop defaming him by fixing its misfiring AI.

“The main problem that’s made this so hard is while raising enough noise on Twitter got a couple of the issues fixed, since the whole system is automated it just adds more later and it’s like playing whack-a-mole,” Hutchins told TechCrunch. “It’s got to the point where I can’t justify raising the issue anymore because I just sound like a broken record and people get annoyed.”

In the months since we asked Google about this erroneous SERP, the Q&A it associates with Hutchins has shifted. In April, it asked “What virus did Marcus Hutchins make?” and surfaced a one-word (incorrect) answer directly below (“WannaCry”), before offering the (correct) context in a longer snippet of text sourced from a news article. Now, a search for Hutchins’ name results in Google displaying the question “Who created WannaCry?” (see screengrab below). But it now fails to answer its own question, as the snippet of text it displays below only talks about Hutchins stopping the spread of the virus.

Image Credits: Natasha Lomas/TechCrunch (screengrab)

So Google has — we must assume — tweaked how the AI displays the Q&A format for this SERP. But in doing that it’s broken the format (because the question it poses is never answered).

Moreover, the misleading presentation, which pairs the question “Who created WannaCry?” with a search for Hutchins’ name, could still lead a web user who quickly skims the text Google displays after the question to wrongly believe he is being named as the author of the virus. So it’s not clear it’s much of an improvement, if any, on what was being automatically generated before.

In earlier remarks to TechCrunch, Hutchins also made the point that the context of the question itself, as well as the way the result gets featured by Google, can create the misleading impression he made the virus — adding: “It’s unlikely someone googling for say a school project is going to read the whole article when they feel like the answer is right there.”

He also connects Google’s automatically generated text to direct, personal harm, telling us: “Ever since Google started featuring these SERPs, I’ve gotten a huge spike in hate comments and even threats based on me creating WannaCry. The timing of my legal case gives the impression that the FBI suspected me but a quick [Google search] would confirm that’s not the case. Now there’s all kinds of SERP results which imply I did, confirming the searcher’s suspicions, and it’s caused rather a lot of damage to me.”

Asked for a response to his complaint, Google sent us this statement attributed to a spokesperson:

The queries in this feature are generated automatically and are meant to highlight other common related searches. We have systems in place to prevent incorrect or unhelpful content from appearing in this feature. Generally, our systems work well, but they do not have a perfect understanding of human language. When we become aware of content in Search features that violates our policies, we take swift action, as we did in this case.

The tech giant did not respond to follow-up questions pointing out that its “action” keeps failing to address Hutchins’ complaint.

This is of course just one example — but it looks instructive that an individual, with a relatively large online presence and platform to amplify his complaints about Google’s ‘bullying AI,’ literally cannot stop the company from applying automation technology that keeps surfacing and repeating defamatory suggestions about him.

In his TikTok video, Hutchins suggests there’s no recourse for suing Google over the issue in the US — saying that’s “essentially because the AI is not legally a person no one is legally liable; it can’t be considered libel or slander.”

Libel law varies depending on the country where you file a complaint. And it’s possible Hutchins would have a better chance of getting a court-ordered fix for this SERP if he filed a complaint in certain European markets, such as Germany (where Google has previously been sued for defamation over autocomplete search suggestions), rather than in the U.S., where Section 230 of the Communications Decency Act gives platforms general immunity from liability for third-party content. (The outcome of that German legal action, brought by Bettina Wulff, is less clear, but the false autocomplete suggestions she complained were being linked to her name by Google’s tech do appear to have been fixed.)

Although, in the Hutchins SERP case, the question of whose content this is, exactly, is one key consideration. Google would probably argue its AI is just reflecting what others have previously published — ergo, the Q&A should be wrapped in Section 230 immunity. But it might be possible to (counter) argue that the AI’s selection and presentation of text amounts to a substantial remixing which means that speech — or, at least, context — is actually being generated by Google. So should the tech giant really enjoy protection from liability for its AI-generated textual arrangement?

For large language models, it will surely get harder for model makers to dispute that their AIs are generating speech. But individual complaints and lawsuits don’t look like a scalable fix for what could, potentially, become massively scaled automated defamation (and abuse) — thanks to the increased power of these large language models and expanding access as APIs are opened up.

Regulators are going to need to grapple with this issue — and with where liability lies for communications that are generated by AIs. Which means grappling with the complexity of apportioning liability, given how many entities may be involved in applying and iterating large language models, and shaping and distributing the outputs of these AI systems.

In the European Union, regional lawmakers are ahead of the regulatory curve as they are currently working to hash out the details of a risk-based framework the Commission proposed last year to set rules for certain applications of artificial intelligence to try to ensure that highly scalable automation technologies are applied in a way that’s safe and non-discriminatory.

But it’s not clear that the EU’s AI Act — as drafted — would offer adequate checks and balances on malicious and/or reckless applications of large language models as they are classed as general purpose AI systems that were excluded from the original Commission draft.

The Act itself sets out a framework that defines a limited set of “high risk” categories of AI application, such as employment, law enforcement and biometric ID, where providers have the highest level of compliance requirements. But a downstream applier of a large language model’s output — who’s likely relying on an API to pipe the capability into their particular domain use case — is unlikely to have the necessary access (to training data, etc.) to be able to understand the model’s robustness or the risks it might pose; or to make changes to mitigate any problems they encounter, such as by retraining the model with different datasets.

Legal experts and civil society groups in Europe have raised concerns over this carve-out for general purpose AIs. A more recent partial compromise text that emerged during co-legislator discussions has proposed including an article on general purpose AI systems. But, writing in Euractiv last month, two civil society groups warned that the suggested compromise would create a continued carve-out for the makers of general purpose AIs — by putting all the responsibility on deployers who make use of systems whose workings they’re not, by default, privy to.

“Many data governance requirements, particularly bias monitoring, detection and correction, require access to the datasets on which AI systems are trained. These datasets, however, are in the possession of the developers and not of the user, who puts the general purpose AI system ‘into service for an intended purpose.’ For users of these systems, therefore, it simply will not be possible to fulfil these data governance requirements,” they warned.

One legal expert we spoke to about this, the internet law academic Lilian Edwards — who has previously critiqued a number of limitations of the EU framework — said the proposals to introduce some requirements on providers of large, upstream general-purpose AI systems are a step forward. But she suggested enforcement looks difficult. And while she welcomed the proposal to add a requirement that providers of AI systems such as large language models must “cooperate with and provide the necessary information” to downstream deployers, per the latest compromise text, she pointed out that an exemption has also been suggested for IP rights or confidential business information/trade secrets — which risks fatally undermining the entire duty.

So, TL;DR: Even Europe’s flagship framework for regulating applications of artificial intelligence still has a way to go to latch onto the cutting edge of AI — which it must do if it’s to live up to the hype as a claimed blueprint for trustworthy, respectful, human-centric AI. Otherwise a pipeline of tech-accelerated harms looks all but inevitable — providing limitless fuel for the online culture wars (spam levels of push-button trolling, abuse, hate speech, disinformation!) — and setting up a bleak future where targeted individuals and groups are left firefighting a never-ending flow of hate and lies. Which would be the opposite of fair.

The EU has made much of the speed of its digital lawmaking in recent years, but the bloc’s legislators must think outside the box of existing product rules when it comes to AI systems if they’re to put meaningful guardrails on rapidly evolving automation technologies and avoid loopholes that let major players keep sidestepping their societal responsibilities. No one should get a pass for automating harm — no matter where in the chain a ‘black box’ learning system sits, nor how large or small the user — else it’ll be us humans left holding a dark mirror.

Code Intelligence, an automated application security testing platform based in Bonn, Germany, that focuses on fuzzing, announced today that it has raised a $12 million Series A funding round led by Tola Capital. Existing investors LBBW, OCCIDENT, Verve Ventures, HTGF and Thomas Dohmke, the CEO of GitHub, also participated in this round, which brings the company’s total funding to about $15.7 million.

The company was co-founded in 2018 by Sergej Dechand, Khaled Yakdan and their former professor at the University of Bonn, Matthew Smith.

Image Credits: Code Intelligence

“Back then, we noticed that fuzzing and some other techniques are super powerful, but outside of the security research community, no one actually used it,” Dechand told me. “We started to collaborate from the university with a few larger enterprise companies to try things out and we had really, really good results. So even though we didn’t want to found a company in the beginning, somehow we had a prototype of a product.” Encouraged by Smith, the team decided to give it a shot and founded a company to develop and commercialize its prototype system. At first, the co-founders continued to work at the university, but in 2019, they decided to work on the service full-time. Now, a few years later, Code Intelligence counts the likes of Bosch, Continental and Deutsche Telekom among its users.

Dechand argued that while there are plenty of open-source fuzzing tools, it still takes a very knowledgeable security team to actually implement and use them. With security teams as the bottleneck to implementing these tools, Code Intelligence put its focus on bringing its tools directly to developers. “In the end, they are the ones who are fixing it and know best what kind of error is critical,” said Dechand.

Image Credits: Code Intelligence

Since developers don’t want to look at yet another tool in their development pipeline, Code Intelligence integrates with services like Jenkins, GitHub and GitLab. Thanks to this, developers will not only see how well their code is covered, but Code Intelligence also adds an additional pipeline to the continuous integration system that automatically fuzzes the code as new pull or merge requests come in.

Currently, Code Intelligence offers support for Go, C++, Java and Kotlin, with support for Node.js, JavaScript, .NET and Python coming soon.
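To make the shape of such a fuzz test concrete, here is a minimal sketch using Atheris, Google’s open-source coverage-guided fuzzer for Python (an illustrative stand-in for the general pattern, not Code Intelligence’s own engine; the parser being fuzzed is a toy example).

```python
# Minimal sketch of a coverage-guided fuzz test using Atheris, Google's
# open-source fuzzer for Python (illustrating the pattern, not Code
# Intelligence's engine). Any uncaught exception counts as a finding.
import sys
import atheris

def parse_key_values(data: bytes) -> dict:
    # Toy target: a naive "key=value;key=value" parser with a latent bug,
    # since it assumes every segment contains an "=".
    text = data.decode("utf-8", errors="ignore")
    result = {}
    for segment in text.split(";"):
        key, value = segment.split("=", 1)  # ValueError on malformed input
        result[key] = value
    return result

def test_one_input(data: bytes) -> None:
    # Entry point the fuzzer calls once per generated input.
    parse_key_values(data)

# Instrument the code so the fuzzer can observe coverage, then run.
atheris.instrument_all()
atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```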

Image Credits: Code Intelligence

As of now, Code Intelligence is in closed beta and the company is still working closely with its enterprise customers to onboard new teams. Over time, though, the plan is to automate all of this and launch a self-service platform.

“Code Intelligence is the most advanced automated fuzz testing solution for applications and APIs and is incredibly easy for developers to use in their existing workflows,” said Will Coggins, vice president at Tola Capital. “The potential for this technology to improve how development teams build secure software is enormous.”

As expected, the Broadcom-VMware deal is a go. The chip giant intends to snap up the virtualization software company for $61 billion in cash and stock, along with taking on $8 billion in VMware debt.

It’s not an inexpensive transaction, but thanks to a “go-shop” provision that gives VMware 40 days to “solicit, receive, evaluate and potentially enter negotiations with parties that offer alternative proposals,” there’s market speculation that another bidder could enter the fray.

After chewing through analyst notes on the deal, Ron and Alex wound up on opposite sides regarding whether a higher price or another bidder would make sense. Ron’s view is that the company’s value is higher than its recent financial results may imply, while Alex feels the company is not performing well enough to deserve a higher price.


We’ve long speculated about who might buy VMware, and after Dell spun out the company, TechCrunch listed Amazon, Alphabet, Oracle, Microsoft and IBM as potential acquirers. The fact that we did not foresee Broadcom as a potential suitor underscores our view that we don’t fully grok whether it’s the correct buyer for VMware.

So let’s talk about the pros and cons of the matter, ask what VMware is worth, and consider how it may have value over and above its recent quarterly results. Ron is taking point!

Ron’s take:

With $61 billion on the table, it’s hard to imagine anyone paying more, and research firm Bernstein agrees with that perspective. Before we put the idea to bed, though, it’s worth taking a moment to think about the value of VMware.

VMware’s value goes beyond what its balance sheet or its profit and loss statement tells us at the moment. While the company might not have had a perfect first quarter, it has a particular set of skills that could fit nicely with any of the big cloud infrastructure providers.

In fact, cloud infrastructure-as-a-service exists today only because the early crew at VMware figured out virtualization at scale in the early 2000s. Until then, people used servers, and if a server was underutilized, well, too bad. Virtualization lets you divide a computer into multiple virtual machines, paving the way for cloud computing as we know it today.

While cloud computing has changed some since its early days, virtualization remains a core tenet of the market. Imagine for a moment if one of the three or four cloud vendors — think Amazon, Microsoft, Google or even IBM (although this deal is a bit rich for its blood) — brought VMware into its fold.

VMware brings more to the table than virtualization, of course. Over the years, it has gained various capabilities by acquiring companies like Heptio, a containerization startup launched by Craig McLuckie and Joe Beda, two of the people who helped create Kubernetes.

In a new court filing, Epic Games challenges Apple’s position that third-party app stores would compromise the iPhone’s security. It points to Apple’s macOS as an example of how the process of “sideloading” apps — installing apps from outside Apple’s own App Store, that is — doesn’t have to be the threat Apple describes it to be. The Mac, Epic explains, doesn’t have the same constraints as the iPhone’s operating system, iOS, and yet Apple touts macOS as secure.

The Cary, N.C.-based Fortnite maker made these points in its latest brief, among several others, related to its ongoing legal battle with Apple over its control of the App Store.

Epic Games wants to earn the right to deliver Fortnite to iPhone users outside the App Store, or at the very least, be able to use its own payment processing system so it can stop paying Apple commissions for the ability to deliver its software to iPhone users.

A California judge ruled last September in the Epic Games v. Apple district court case that Apple did not have a monopoly in the relevant market — digital mobile gaming transactions. But the court decided Apple could not prohibit developers from adding links for alternative payments inside their apps that pointed to other ways to pay outside of Apple’s own App Store-based monetization system. While Apple largely touted the ruling as a victory, both sides appealed the decision as Epic Games wanted another shot at winning the right to distribute apps via its own games store, and Apple didn’t want to allow developers to be able to suggest other ways for their users to pay.

On Wednesday, Epic filed its Appeal Reply and Cross-Appeal Response Brief, following Apple’s appeal of the district court’s ruling.

The game maker states in the new filing that the lower court was led astray on many points by Apple and reached the wrong conclusions. Many of its arguments relate to how the district court interpreted the law. It also points to the important allies Epic now has on its side — Microsoft, the Electronic Frontier Foundation, and the attorneys general of 34 states and the District of Columbia, all of whom have filed briefs supporting Epic’s case with the U.S. Court of Appeals for the Ninth Circuit.

However, one of Epic’s larger points has to do with the Mac’s security model and how it differs from the iPhone. Epic says that if Apple can allow sideloading on Mac devices and still call those computers secure, then surely it could do the same for iPhone.

“For macOS, Apple relies on security measures imposed by the operating system rather than the app store, and a ‘notarization’ program that scans apps and then returns them to the developer for distribution,” Epic’s new filing states. It says the lower court even agreed that Apple’s witness on the subject (head of software engineering Craig Federighi) was stretching the truth when he disparaged macOS as having a “malware problem.”

Epic then points to examples of Apple’s own marketing of its Mac computers’ security, where it touts “apps from both the App Store and the internet” can be “installed worry-free.”

Apple has argued against shifting to this same model for iPhone as it would require redesigning how its software works, among other things, including what it says would be reduced security for end users.

As app store legislation targeting tech giants has continued to move forward in Congress, Apple has been raising the alarm about being forced to open up the iPhone to third-party app stores, as the bipartisan Open App Markets Act and other international regulations would require. Apple has said that mandated sideloading is incompatible with its pro-consumer privacy protections.

In a paper Apple published to further detail this issue, it stated that permitting sideloading could put users’ “most sensitive and private information” at risk.

“Supporting sideloading through direct downloads and third-party app stores would cripple the privacy and security protections that have made iPhone so secure, and expose users to serious security risks,” the paper read. Apple also pointed to Google’s Android operating system as an example of that risk, noting that, over the past four years, Android devices were found to have 15 to 47 times more malware infections than iPhone.

Timed with the release of the new filing, Epic Games CEO Tim Sweeney was interviewed by the Financial Times where he continued to berate Apple for its alleged anti-competitive behavior. Sweeney said that even if Apple fairly won the hardware market, it shouldn’t be allowed to use that position to “gain an unfair advantage over competitors and other markets,” like software.

“They should have to compete fairly against the Epic game store, and the Steam Store, and let’s assume the Microsoft Store, and the many other stores that will emerge — as they do with any other market in the world, except for digital app stores,” Sweeney said.


DuckDuckGo, the self-styled “internet privacy company” — which, for years, has built a brand around a claim of non-tracking web search and, more recently, launched its own ‘private’ browser with built-in tracker blocking — has found itself in hot water after a researcher found hidden limits on its tracking protection that create a carve-out for certain advertising data requests by its search syndication partner, Microsoft.

Late yesterday, the researcher in question, Zach Edwards, tweeted the findings of his audit, saying he had found DDG’s mobile browsers do not block advertising requests made by Microsoft scripts on non-Microsoft web properties. (NB: This is a separate matter from what happens if you actually click on an ad when using DDG; as its privacy policy clearly discloses, all privacy bets are off at that point.)

Edwards tested browser data flows on a Facebook-owned site, Workplace.com, and found that while DDG informed users it had blocked Google and Facebook trackers, it did not prevent Microsoft from receiving data flows linked to their browsing on the non-Microsoft website.

Edwards had some back-and-forth on Twitter with DDG’s founder and CEO Gabe Weinberg, who initially appeared to be attempting to play down the finding by emphasizing everything he said DDG’s browser does block (e.g., third-party tracking cookies, including those from Microsoft).

Weinberg was also especially keen to make it clear the data flows issue is not related to DuckDuckGo search.

However, the limitation on the DDG browser’s tracker blocking does amount to an exemption for certain advertising data transfers to Microsoft subsidiaries (Bing, LinkedIn), data that could be used for cross-site tracking of web users for ad-targeting purposes. In other words, it can undermine DDG browser users’ privacy.
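
To make the mechanics concrete, below is a minimal, hypothetical sketch of how a contractual carve-out like this could look inside a blocklist-based tracker blocker. The function names and domain lists are invented for illustration; nothing here comes from DuckDuckGo’s actual code. It simply models the behavior Edwards observed, where pre-load blocking applies to listed trackers except for a partner’s domains.

```typescript
// Hypothetical sketch of a blocklist-based tracker blocker with a
// contractual carve-out. Illustrative only; not DuckDuckGo's actual code.

// Ordinary blocklist entries (illustrative).
const TRACKER_DOMAINS = ["google-analytics.com", "connect.facebook.net"];

// Domains exempted from pre-load blocking under a hypothetical
// syndication agreement (the kind of carve-out Edwards identified).
const PARTNER_EXEMPT_DOMAINS = ["bing.com", "linkedin.com"];

function matchesDomain(hostname: string, domain: string): boolean {
  return hostname === domain || hostname.endsWith("." + domain);
}

// Decide whether a script request should be blocked before it loads.
function shouldBlockScript(scriptUrl: URL, pageUrl: URL): boolean {
  // First-party scripts are never blocked.
  if (scriptUrl.hostname === pageUrl.hostname) return false;

  // The carve-out: partner-owned scripts load even on third-party sites.
  if (PARTNER_EXEMPT_DOMAINS.some((d) => matchesDomain(scriptUrl.hostname, d))) {
    return false;
  }

  // Everything else on the blocklist is stopped before it loads.
  return TRACKER_DOMAINS.some((d) => matchesDomain(scriptUrl.hostname, d));
}

// Example: on workplace.com, a Facebook tracker would be blocked while a
// Bing script would be allowed to load.
console.log(shouldBlockScript(
  new URL("https://connect.facebook.net/sdk.js"),
  new URL("https://www.workplace.com/")
)); // true (blocked)
console.log(shouldBlockScript(
  new URL("https://bat.bing.com/bat.js"),
  new URL("https://www.workplace.com/")
)); // false (loads)
```

The key point of the sketch is that a blocked tracker’s request never leaves the browser at all, while an exempted script loads and runs normally, leaving only whatever post-load protections remain.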

In the Twitter back-and-forth, Weinberg confirmed Edwards’ audit was correct, fessing up to a contractual agreement that he said limits DDG’s ability to block trackers in this scenario: DDG’s “search syndication agreement” with Microsoft, which owns and operates the Bing search engine and index, “prevents us from stopping Microsoft-owned scripts from loading,” he wrote.

He added that DDG was “working to change that.”

Asked via Twitter whether DDG’s contract included a clause that prevents it from publicly complaining about the limitations imposed upon it by Microsoft, a tech giant with a growing adtech business, Weinberg told us: “Our syndication contract has broad confidentiality requirements, and the specific requirement documents themselves are additionally explicitly marked confidential.”

Discussing his findings and DDG’s response with TechCrunch, Edwards described himself as “pretty shocked” by Weinberg’s public response to his audit, and by the CEO having what he summed up as “no public solutions for the problems created through the secret partnership between DuckDuckgo and Microsoft.”

“I have significant concerns … about DDG’s public claims, especially the ones they make on their iOS/Android app install websites, promising tracking protections,” Edwards added. “If you compare the language within the app details, to the information shared by the DuckDuckGo CEO yesterday, you can’t help but wonder why they are so openly lying in one location of the internet, and not lying in another area of the internet, and seemingly attempting to throw their top advertising partner Microsoft under some sort of bus — essentially DDG’s CEO made numerous comments about how he was trying and hoping to get out of their current contract with Microsoft — this was a shocking admission to see publicly and something that I hope regulators take a serious look at.”

The issue has blown up on Hacker News over the course of the day, where Weinberg (aka yegg) has been doing more firefighting in the comments, reiterating that DDG’s hands are tied by its contract with Microsoft and further claiming it has continued to press for changes to “this limited restriction.”

“This is just about non-DuckDuckGo and non-Microsoft sites in our browsers, where our search syndication agreement currently prevents us from stopping Microsoft-owned scripts from loading, though we can still apply our browser’s protections post-load (like 3rd party cookie blocking and others mentioned above, and do). We’ve also been tirelessly working behind the scenes to change this limited restriction,” Weinberg wrote on the site.

“I also understand this is confusing because it is a search syndication contract that is preventing us from doing a non-search thing. That’s because our product is a bundle of multiple privacy protections, and this is a distribution requirement imposed on us as part of the search syndication agreement. Our syndication agreement also has broad confidentially provisions and the requirement documents themselves are explicitly marked confidential,” he added.

While DDG’s browser clearly does not block all scripts, and no tracker blocker will ever be 100% effective as tracking techniques are ever evolving, this carve-out for Microsoft scripts is different: it is a specific exemption attached to a contractual agreement, one linked to the commercial deal that allows DDG to use Microsoft’s search index in its core product. None of this was (seemingly) public knowledge prior to Edwards’ audit.

In further public remarks on the issue, Weinberg implied that DDG is trying to balance the goal of giving browser users a very easy tracker-blocking experience (i.e., maximizing accessibility) against beefing up protections that might further enhance user privacy but at a potential cost to that experience (e.g., broken webpages).

However, DDG’s failure to disclose the Microsoft-related restriction on its protections to browser users is particularly concerning, especially in light of the stark contrast with its privacy-focused marketing, which tells users they will “escape website tracking” (something that clearly isn’t happening in the specific Microsoft-related instances Edwards identified). DDG thus risks misleading users and undermining its own reputation as a pro-privacy business.

In a more recent reply to Hacker News comments, Weinberg appears to have accepted the need for DDG to make fuller disclosure, writing: “We will work diligently today to find a way to say something in our app store descriptions in terms of a better disclosure — will likely have something up by the end of the day.”

“I understand the concern here that we are working to address in a variety of ways but to be clear no app will provide 100% protection for a variety of reasons, and the scripts in question here do currently have significant protection on them in our browser,” he added. 

We reached out to Weinberg with questions. He sent us this statement:

We have always been extremely careful to never promise anonymity when browsing, because that frankly isn’t possible given how quickly trackers change how they work to evade protections and the tools we currently offer. When most other browsers on the market talk about tracking protection they are usually referring to 3rd-party cookie protection and fingerprinting protection, and our browsers for iOS, Android, and our new Mac beta, impose these restrictions on third-party tracking scripts, including those from Microsoft. We’re talking here about an above-and-beyond protection that most browsers don’t even attempt to do — that is, blocking third-party tracking scripts before they even load on 3rd party websites. Because this can cause websites to break, we cannot do this as much as we want to in any case. Our goal, however, has always been to provide the most privacy we can in one download, by default without any complicated settings, so we took this on.
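
Weinberg’s statement distinguishes two layers of protection: blocking a tracking script before it loads (the above-and-beyond measure subject to the carve-out) and restrictions applied after it loads. As a rough, hypothetical illustration of that second layer, which he says still covers Microsoft’s scripts, a post-load protection might strip cookies from third-party requests so an already-loaded script cannot send a cross-site identifier. The names below are invented; this is a sketch of the general technique, not DDG’s implementation.

```typescript
// Hypothetical sketch of a post-load protection: stripping cookies from
// third-party requests. Applied even to scripts exempted from pre-load
// blocking. Illustrative only; not DuckDuckGo's actual implementation.

function applyPostLoadProtections(
  requestHost: string, // host the loaded script is calling out to
  pageHost: string, // host of the page the user is visiting
  headers: Map<string, string>
): void {
  if (requestHost !== pageHost) {
    // Third-party request: drop the cookie header so the request can't
    // carry a cross-site identifier, even though the script itself loaded.
    headers.delete("cookie");
  }
}

// Example: a request from a loaded bing.com script on workplace.com goes
// out without cookies.
const headers = new Map([["cookie", "MUID=abc123"]]);
applyPostLoadProtections("bat.bing.com", "www.workplace.com", headers);
console.log(headers.has("cookie")); // false
```

The asymmetry is the crux of the dispute: an exempted script still executes and can make requests, so post-load measures reduce, but do not eliminate, what it can observe about the user’s browsing.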

We also put questions to Microsoft about the limitation it imposes on search syndication partners, but at the time of writing the tech giant had not responded.

Privacy trade-offs are never great, but one conclusion looks inescapable here: antitrust regulators need to closely examine the search syndication market, given it essentially comprises two gatekeeping adtech giants, Google and Microsoft, which are fully empowered to enforce (unfair) terms on anyone else wanting to offer a competitive search product or, indeed, in certain cases, an alternative web browser.

European regulators have recently agreed a new ex ante competition regime that’s aimed at the most powerful intermediating platforms — which the Digital Markets Act refers to as internet “gatekeepers.” The DMA is clearly applicable to search engines but it remains to be seen whether the Commission will spot the opportunity to use the incoming regulation to crack open the search market by enforcing fair usage terms around search syndication on the only two indexes that count.