Steve Thomas - IT Consultant


  • 'Fix problems using Windows Update' is a handy tool for solving issues
  • It lets you quickly reinstall Windows 11
  • It's only available in Windows 11 22H2 or newer

Windows 11 has been having a rough time of it recently, with Microsoft releasing a series of controversial and sometimes faulty updates – but to be fair, it’s also been releasing tools to help its users who encounter problems, and one of the most promising is called, with a refreshingly straightforward name, ‘Fix problems using Windows Update.’

As Neowin reports, while the tool first appeared for testing back in 2023, it’s now officially been added to Windows 11, and Microsoft has released support documentation explaining what the tool does, saying it “will reinstall the current version of Windows on your device.”

Over my many years of helping friends, family, and - most importantly of all - TechRadar readers fix their PCs, one sure-fire way of getting things running normally again is to reinstall Windows. In the past, this was usually left as a last resort due to how time consuming reinstalling the entire operating system was.

To Microsoft’s credit, reinstalling Windows 11 is now a much easier process, as you don’t need to dig out a DVD or product key, and there are options to ‘reset’ your PC while keeping your personal files (rather than having to back them up to external storage).

It looks like ‘Fix problems using Windows Update’ will be another easy way to reinstall Windows 11 with (hopefully) minimal disruption to users. Windows 11's Settings app says the tool will “Reinstall your current version of Windows (your apps, files, and settings will be preserved).”

I’ve not tried it yet (thankfully I’ve not needed to), but when it mentions preserving your apps, hopefully that means your applications remain installed – rather than what currently happens when you reset Windows 11, where all programs are removed and you’re given shortcuts to redownload apps from the Microsoft Store.

Not for everyone

The tool can be found by going to Settings > System > Recovery, and will also appear if an update fails to install.
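For anyone who wants to jump straight to that page, Windows exposes its Settings screens through documented ms-settings: URIs. As a small sketch (Windows-only, and assuming the standard Recovery page URI), you could open it programmatically like this:

```python
import os
import platform

def open_recovery_settings():
    """Open Settings > System > Recovery, where the
    'Fix problems using Windows Update' tool appears (Windows 11)."""
    if platform.system() != "Windows":
        raise RuntimeError("ms-settings: URIs only work on Windows")
    # "ms-settings:recovery" is the deep link to the Recovery page
    os.startfile("ms-settings:recovery")
```

The same URI works from the Run dialog (Win+R) or a command prompt, so the Python wrapper is just one convenient way to reach it.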

This does seem to be a genuinely useful tool that is sadly increasingly necessary as more Windows 11 problems emerge. The issue Microsoft has, which Apple doesn’t face with its macOS operating system, is that there are essentially an infinite range of PCs it needs to support with a mixture of hardware from different companies, and this means that issue-free releases for Windows can be rare. Making it easier to reinstall important files and fix problems is a good step in the right direction.

However, not everyone with Windows 11 will be able to use the tool – you’ll need Windows 11 22H2 or newer with the February 2024 optional update installed. People on older versions of Windows 11, or those still on Windows 10, are out of luck.

You might also like...

  • Former PlayStation chief Shawn Layden thinks hardware innovation "is starting to plateau"
  • Layden doesn't think there will be another PS1 to PS2 jump in performance
  • He says the real competition is "content"

Former PlayStation boss Shawn Layden has said there likely won't be another major jump in hardware performance as it's already been 'maxed out'.

Speaking in a recent interview with Eurogamer, Layden discussed the future of PlayStation amid the company's 30th anniversary, as well as the technological advancement of today's current consoles compared to that of the PS1.

"I think we're at a point where the console becomes irrelevant in the next... if not the next generation then the next next generation definitely," Layden said.

When asked if he thinks consoles could see another major leap in performance ever again, like the recent release of the PS5 Pro, the ex-PlayStation chief said he isn't sure what that would look like.

"I don't think so. I mean, what would that leap look like? It would be perfectly-realized human actors in a game that you completely control. That could happen one day. I don't think it's going to happen in my lifetime," he said.

"We're at a point now where the innovation curve on the hardware is starting to plateau, or top out. At the same time, the commoditization of the silicon means that when you open up an Xbox or PlayStation, it's really pretty much the same chipset. It's all built by AMD. Each company has their own OS and proprietary secret sauce, but in essence [it's the same]. I think we're pretty much close to final spec for what a console could be."

Layden went on to discuss the release of PlayStation's consoles over the years and how each improved upon the last. However, he doesn't think the market will see something as significant as the jump from PS1 to PS2 again.

"If you look at it from my lens, which is of course the PlayStation lens, the leap from PS1 to PS2 was dramatic..." Layden said, before touching on the following generations.

He explained that the jump from PS2 to PS3 was "remarkable" with HD standard and the introduction of 60 FPS gameplay and network capability.

"Then PS3 to PS4 was just, like, getting the network thing done right. Then to PS5, which is a fantastic piece of kit, but the actual difference in performance... we're getting to the realm, frankly, where only dogs can hear the difference now," Layden added.

"You're not going to see another PS1 to PS2 jump in performance - we have sort of maxed out there. If we're talking about teraflops and ray-tracing, we're already off the sheet that most people begin to understand."

Layden concluded by saying that the "real competition" will be "content", which "should be the competition for publishers, not which hardware you get behind."

You might also like...


  • MirrorFace pivoted to spear phishing to target high-profile Japanese individuals
  • The group is looking for information regarding China-US relations
  • It is using backdoors not seen in years

MirrorFace, a Chinese state-sponsored threat actor also known as Earth Kasha, has been observed departing from its usual playbook to target specific individuals – with even more specific backdoors.

Cybersecurity researchers from Trend Micro recently observed MirrorFace engaging in spear phishing attacks, targeting individuals in Japan.

Previously, the group was focused on business entities, and abused vulnerabilities in endpoint devices such as Array Networks and Fortinet for initial access.

Targeting individuals

This time around, MirrorFace seems to be particularly interested in topics around Japan’s national security and international relations, the researchers stressed. They came to this conclusion after analyzing the victims and the lures used in the spear phishing emails. The lures were mostly fake documents discussing Japan's economic security from the perspective of current US-China relations.

"Many of the targets are individuals, such as researchers, who may have different levels of security measures in place compared to enterprise organizations, making these attacks more difficult to detect," Trend Micro said. "It is essential to maintain basic countermeasures, such as avoiding opening files attached to suspicious emails."

Those who failed to spot the attack ended up with two backdoors – NOOPDOOR (also known as HiddenFace) and ANEL (also known as UPPERCUT). Trend Micro said the latter was particularly interesting, since it had been basically nonexistent for years.

"An interesting aspect of this campaign is the comeback of a backdoor dubbed ANEL, which was used in campaigns targeting Japan by APT10 until around 2018 and had not been observed since then," they said. APT10 is likely MirrorFace’s umbrella organization.

Earth Kasha is quite an active group these days. In late November, researchers saw the group targeting organizations in Japan, Taiwan, India, and even Europe, through holes in Array AG, Proself, and Fortinet products. They were also seen using SoftEther VPN, a legitimate open-source VPN tool, to bypass a target’s firewall and blend into legitimate traffic.

Via The Hacker News

You might also like

Artificial intelligence is introducing a new wave of technological capabilities, and businesses are increasingly looking for ways to integrate it into their products and day-to-day operations. As they race to unlock its potential, they increasingly recognize that cloud infrastructure is essential. It may come as a surprise, then, that although 67% of companies report having advanced cloud infrastructure, only 8% have fully integrated AI into their business processes (Infosys & MIT, 2024). This figure highlights a clear disconnect: despite cloud maturity, businesses are lagging behind on AI implementation.

This article will explore the reasons behind this lag and outline the key strategies for businesses to align their cloud infrastructure with the specific demands of AI to unlock its full potential.

Why is there a disconnect between cloud and AI readiness?

There are many factors to consider when implementing AI, the most important of which is cost. Currently, the biggest challenge in adopting AI is the significant upfront investment required to create an AI-ready environment. The hardware costs alone are hard to reconcile with the technology's short lifecycles: it evolves rapidly, and organizations need to continually upgrade their systems to keep up. As a result, it can be difficult to justify the long-term ROI of AI.

Many organizations are rushing to integrate AI tools into their operations without fully considering the infrastructure implications. Despite widespread recognition of AI's potential, as evidenced by 98% of executives expecting increased AI spending on the cloud, businesses often neglect the specific technical requirements of AI.

To effectively support AI workloads, organizations must prioritize compatibility, scalability, security, and cost-effectiveness. However, performance remains a critical factor, and striking the right balance between Graphics Processing Units (GPU) requirements and costs is essential. AI's demanding nature necessitates cloud environments capable of handling intensive data processing, low-latency response times, and specialized hardware like GPUs or custom accelerators.

Another factor influencing the disconnect is the IT industry’s ongoing skills gap, with 84% of UK businesses currently struggling to source the talent they need to address their IT challenges. Since there are a limited number of skilled professionals who can manage AI workloads, even businesses that have prepared their cloud infrastructure may lack the expertise needed to fully embrace AI’s capabilities.

Key considerations for AI

1. AI workloads

The specific AI requirements of different companies can vary significantly. For example, a company developing an advanced image recognition system may have different infrastructure needs than one building a sophisticated chatbot. To address these unique demands, bespoke cloud optimization strategies are essential for businesses to consider.

Each AI project has unique resource and high-performance computing requirements. For example, one of our customers is developing an alternative to neuro-symbolic architecture, combining neural and symbolic learning, which acts similarly to the human brain. The company needed a hosting provider for training one of their products – the Expert Verbal Agent (EVA) model, an LLM designed for thoughtful queries and problem-solving. Unlike many AI models, which run only on GPUs, EVA can use CPU, GPU, or both. Consequently, they required a CPU-powered server for software development and testing.

2. Scalability

Scalability is vital for AI, but it must be balanced with cost-effectiveness. An AI environment should be able to adapt to changing demands, providing additional processing power when needed – but this can be expensive.

AI workloads can be unpredictable and fluctuate in size. It's common for AI workloads to be needed only for short, intensive periods of time, for example to regenerate a model. This often means that the hardware involved sits idle for long periods of time and therefore does not generate a return on investment. This is an important consideration for companies looking to build AI-enabled platforms, who should consider leasing time on pre-built environments as an alternative to ensure the best and most resource-efficient outcome. While public cloud models offer flexibility, they tend to be more expensive for such projects, especially during peak usage periods. Organizations need to carefully consider their scalability demands and choose the infrastructure that is right for them.
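The own-versus-lease trade-off above comes down to simple arithmetic. As a rough sketch (all figures here are made up for illustration, not real market rates), you can estimate what each productive hour of owned hardware actually costs at a given utilization, and where the crossover with leasing sits:

```python
def owned_cost_per_used_hour(purchase_cost: float,
                             lifetime_hours: float,
                             utilization: float) -> float:
    """Effective cost of each *productive* hour on owned hardware.

    Idle time doesn't disappear: the purchase price is spread only
    over the hours the hardware actually does useful work.
    """
    return purchase_cost / (lifetime_hours * utilization)

# Illustrative (hypothetical) numbers: a $30,000 GPU server with a
# 3-year life (~26,000 hours), used intensively only 10% of the time.
owned = owned_cost_per_used_hour(30_000, 26_000, 0.10)
leased = 4.00  # hypothetical hourly lease rate for comparable capacity

# Utilization above which owning becomes cheaper than leasing:
crossover = 30_000 / (26_000 * leased)
```

With these example numbers, owned capacity at 10% utilization costs roughly $11.50 per productive hour against a $4 lease rate, and ownership only pays off above roughly 29% utilization – exactly the "hardware sits idle" problem described above.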

3. Security

Security is critical in AI projects, especially when outsourcing GPU or processing components. Sensitive data must be protected to safeguard customer privacy. While public cloud models can be convenient, they may not offer the same level of security as private or hybrid cloud solutions, where servers are dedicated solely to a business. Businesses should evaluate the sensitivity of their data and select a cloud environment that aligns with the security and control requirements of their AI workloads.

AI security in the cloud is a critical concern as organizations increasingly leverage the power of artificial intelligence (AI) to process and analyze vast amounts of data in cloud-based environments. The first key aspect of AI security in the cloud involves protecting the AI models and data. Encryption and access controls are vital to ensure that sensitive AI models and training data are safeguarded from unauthorized access or breaches. Additionally, regular audits and monitoring are essential to detect any unusual activities or vulnerabilities that could compromise AI systems in the cloud.

4. Performance

Certain AI tasks require specific hardware to run most effectively. In some scenarios, GPUs are essential, while some projects require specialized AI chips or TPUs (Tensor Processing Units). These chips are specifically designed to deliver the best performance when processing machine learning workloads. It is extremely important for companies to understand the specific technical needs of each project when choosing the perfect architecture for running an AI model, as there are many variations of hardware that can be used for these platforms.

Understanding the memory requirements of the AI model being trained is also extremely important. Some models will not fit on a basic graphics card, while others will require huge amounts of onboard RAM to be processed at all. NVIDIA's latest cards, such as the H100 NVL, have a whopping 188GB of HBM3 memory, allowing very large models to be trained. Cloud providers often have access to advanced hardware and infrastructure that can significantly improve the performance of AI algorithms and reduce training time.
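A common rule-of-thumb sizing calculation makes the point concrete. As a rough sketch (weights-only footprint; real requirements also include activations and batch size), memory scales with parameter count times bytes per parameter:

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough weights-only memory footprint of a model.

    fp16/bf16 weights take 2 bytes per parameter; training needs
    several times more for gradients and optimizer state.
    """
    return n_params * bytes_per_param / 1e9

# A 70-billion-parameter model in fp16 needs ~140 GB just for weights --
# within an H100 NVL's 188 GB, but far beyond any consumer card.
weights_gb = model_memory_gb(70e9)

# Naive Adam training (fp16 weights + fp16 grads + fp32 master weights
# and two fp32 optimizer states) roughly multiplies that by 8:
training_gb = model_memory_gb(70e9, 2 + 2 + 4 + 4 + 4)
```

This is why the same model that fits comfortably for inference may need a multi-GPU cluster to train, and why understanding the workload before choosing hardware matters so much.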

Steps to bridge the disconnect

To bridge the gap between cloud readiness and AI integration, businesses can start by understanding their key requirements and clarifying their desired goals for AI. This allows the creation of a comprehensive brief, which is an essential first step.

Next, evaluate existing cloud capabilities against those goals and requirements to identify any gaps in performance, scalability, or data handling – all necessary for the effective use of AI applications. Furthermore, establishing data management, security, and compliance policies ensures that quality data is readily available for AI initiatives.

Companies should also consider which cloud infrastructure best suits the unique needs of each AI project. For example, if security and regulatory compliance are priorities, hybrid or private cloud models – with infrastructure dedicated to a business rather than shared across businesses – may be a better fit than public cloud options.

Finally, incorporating regular performance evaluations and iterative infrastructure adjustments will help maintain alignment with evolving AI capabilities, ensuring a strong foundation that adapts as AI technology advances.

Working with a Managed Service Provider

These steps can seem overwhelming to tackle alone, which is why some businesses opt to work with a Managed Service Provider (MSP) on their AI integration. Currently, 65% of UK businesses work with MSPs as they offer a holistic approach to AI optimization by supporting infrastructure design, compliance, and ongoing optimization. MSPs also help companies with their security posture through continuous monitoring to protect cloud environments from threats and vulnerabilities.

Additionally, MSPs can help bridge the skills gap, which remains a common barrier to successful AI adoption. In fact, 46% of businesses use MSPs to address the ongoing skills shortage. MSPs can help businesses achieve their AI goals cost-effectively by providing the most efficient infrastructure and hardware backed by their expertise and service. Collaborating with cloud infrastructure management experts also reduces the risk of misconfigurations as well as unnecessary costs, ensuring that businesses have an optimized and secure foundation for AI.

Cloud readiness and AI go hand-in-hand

As AI continues to transform our lives and modern businesses, AI integration will be essential for companies aiming to stay competitive. By tailoring cloud infrastructure to AI-specific requirements and leveraging the expert knowledge of MSPs, organizations can overcome the most pressing hurdles (financial, technical, and talent-related) to make the most of AI's potential. With a strategic approach and the right support, businesses can lay a solid foundation that not only meets current demand but also adapts as AI technology evolves.

We've featured the best cloud storage.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Companies today operate in a world defined by information overload. The figures are mindboggling. In 2010, the global datasphere totaled 2 trillion gigabytes (2 zettabytes). By 2020, it had expanded to 64 zettabytes, and by 2026, the International Data Corporation (IDC) expects that it will reach 221 zettabytes.

With approximately 252,000 new websites being created every day, it is becoming increasingly difficult for knowledge workers to locate the right resources and information needed to make informed decisions.

Solving the information overload problem

To solve this problem, many knowledge workers have begun leveraging generative AI platforms as a new information-finding resource. It’s easy to see why – when provided with a simple query, AI can quickly provide answers, removing the need for knowledge workers to endlessly scour search engine results to no avail. And so it’s no surprise that the launch of OpenAI’s new search platform, ChatGPT Search, was highly anticipated, touted by many as a competitor that could truly take on Google as the primary resource for knowledge workers.

However, while the potential is there, significant concerns about the use of generative AI as an information gathering tool remain. Yes, these new tools are fast, scalable and cheap. Yet, in a very human way, they can also lie. In their eagerness to respond, we’ve seen many AI platforms hallucinating information which to date has caused several notable, high-profile blunders.

In multiple incidents, lawyers using ChatGPT have been found to cite non-existent legal cases – something that can result in significant impacts, from case dismissals and fines to a broader erosion of trust in the legal system. Elsewhere, meanwhile, an NYC chatbot was found to be providing incorrect and illegal information and advice to business owners.

As a result, skepticism over the use of AI rightly remains. In fact, in a survey of 1,000 business decision makers, we found that over three quarters (78%) of knowledge workers say popular generative AI models like ChatGPT are eroding people’s trust in AI.

While these are powerful tools for consumers, they’re simply not built to drive effective decision making in the business world, as these high-profile blunders have shown.

Instead, many business leaders continue to put their faith in trusted search engines, the second most trusted information-gathering method behind official/third-party reports. In fact, almost three quarters of decision makers (72%) never or rarely go past the first page of a search engine when seeking information, showing just how big an influence search engines have on decision making.

Four steps to improving trust in AI as an information gathering tool

Such is the degree of trust in Google that the Department of Justice is considering breaking up the tech giant’s monopoly as an antitrust remedy, driving debate over where people can and should source information. However, if AI is to rival Google, we need to build trust in it, finding ways to eliminate key issues such as hallucinations.

I personally see a future in which generative AI will augment our knowledge, advise on potential choices, interrogate our thoughts to expose weaknesses in our thinking, and even make decisions autonomously. But for it to do any of these things, we first need to trust it wholly – from the content that trains it, to the references it uses and analyses it applies.

Can we bridge the gap that currently exists, and turn AI into a viable tool for supporting effective, trusted decision making? To even begin to do so, it is critical that several steps are taken:

1 – Craft effective inputs to guide AI responses The first step is to ensure we are guiding AI in the right way. By providing clear context and specific instructions, using examples to demonstrate desired output formats and implementing constraints to limit unwanted responses, we can reduce the scope for ambiguity and misinterpretation and boost output relevance and accuracy.
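As a minimal sketch of what this first step can look like in practice (the function and field names here are illustrative, not from any particular framework), a prompt can be assembled mechanically from context, few-shot examples, and explicit constraints:

```python
def build_prompt(context: str, instruction: str,
                 examples: list[tuple[str, str]],
                 constraints: list[str]) -> str:
    """Assemble a structured prompt: clear context, a specific task,
    few-shot examples of the desired output, and explicit constraints."""
    parts = [f"Context: {context}", f"Task: {instruction}"]
    for question, answer in examples:
        parts.append(f"Example input: {question}\nExample output: {answer}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    return "\n\n".join(parts)
```

The same prompt skeleton can then be reused across queries, so the model always receives context and constraints in a consistent, unambiguous format.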

2 – Retrieve relevant information from external knowledge bases Second, it’s also important to leverage relevant information from external sources to guide more effective outputs. By integrating up-to-date, curated information sources into the input process, ideally through efficient retrieval mechanisms, we can both increase factual accuracy and benefit from verifiable sources for generated content.
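A toy version of that retrieval step might look like the following (a deliberately simple word-overlap ranker standing in for a real embedding-based retriever; the names are illustrative):

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank curated documents by word overlap with the query and
    return the top-k, to be prepended to the model's input."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

In a production system the scoring would use embeddings and a vector index, but the principle is the same: ground the model's answer in verifiable, up-to-date sources rather than its parametric memory alone.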

3 – Guide AI to break down complex problems with reasoning processes Third, it’s possible to assist AI in solving complex problems with the right processes. By prompting the AI to show its work or explain its reasoning, encouraging intermediate steps in problem solving, and implementing self-correction mechanisms, logical consistency will be improved.
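Concretely, this can be as simple as a wrapper that rewrites each question to demand intermediate steps and a self-check (again a sketch; the wording of the wrapper is illustrative):

```python
def reasoning_prompt(question: str) -> str:
    """Wrap a question so the model shows its intermediate steps and
    re-checks them before committing to a final answer."""
    return (
        f"Question: {question}\n"
        "First, break the problem into numbered steps and solve each one.\n"
        "Then, re-check every step for errors and state any corrections.\n"
        "Finally, give the result on a line starting with 'Answer:'."
    )
```

Forcing the answer onto a labelled final line also makes the output easy to parse and audit downstream.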

4 – Implement self-awareness and self-evaluation capabilities We can also develop mechanisms for the AI to assess its own confidence levels and recognize where knowledge gaps exist. Doing so can help encourage the AI to provide caveats or qualifications with its outputs, serving to enhance transparency into AI certainty and limitations.
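A simple gating mechanism illustrates the idea (the threshold and wording are illustrative; real systems would calibrate the confidence score itself):

```python
def answer_with_caveat(answer: str, confidence: float,
                       threshold: float = 0.7) -> str:
    """Attach an explicit caveat when self-reported confidence falls
    below a threshold, instead of asserting the answer flatly."""
    if confidence >= threshold:
        return answer
    return (f"{answer}\n\n(Low confidence: {confidence:.0%}. "
            "This may be outside the model's reliable knowledge; "
            "please verify against a primary source.)")
```

Even this crude gate changes the user experience: shaky answers arrive flagged, rather than delivered with the same fluency as well-grounded ones.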

If trust can be achieved, then the opportunity is massive

For AI to become an effective information gathering tool, it is vital that guardrails such as these are put in place to ensure that it can be trusted. To reiterate, decision makers are rightly wary of AI right now. Indeed, our survey shows that 80% have knowingly made a business decision based on information they were not sure about, and 88% have discovered inaccuracies in information used for a business decision after the fact.

However, if current issues can be addressed, and the trust gap that currently exists can be bridged, then the opportunity for AI to excel in supporting knowledge workers is significant.

We’re talking about a powerful tool that can quickly answer queries. If the right mechanisms can be put in place to ensure those answers are credible, logical and accurate, then users will be able to source exactly the information they need at speed. Critically, 95% of decision makers believe that better access to information will improve decision making. By taking the right steps to ensure that AI becomes a trustworthy information gathering asset, the decision making process can be vastly accelerated for knowledge workers.

We've featured the best AI website builder.


  • Oura Labs is adding its beta-tested Symptom Radar feature to the Oura App
  • The data is synced each morning and can help identify cold or flu symptoms ahead of time
  • The feature has been in development since 2020

Oura is doing some really interesting things in the world of health and fitness technology right now. Following a period in beta testing through the Oura Labs program, Oura is rolling out Symptom Radar to its best smart rings, the Oura Generation 3 and Oura Ring 4.

This feature activates when the app syncs with the data collected by the user's Ring each morning. It checks resting heart rate, HRV, temperature trends, and breathing rate to "determine if there are any deviations from your personal baseline", which can indicate the onset of sickness.

The app reports on whether a user is displaying signs of respiratory issues that could be caused by a cold or flu-like illness, reportedly with a high degree of accuracy. Working with the University of California, San Francisco's Osher Center for Integrative Health and its TemPredict initiative, Oura found its rings could reportedly spot pre-symptomatic signs of fever with 76% accuracy, and it has since "up-leveled" that accuracy with a new algorithm.

Oura can tell you you've got a cold before you know it

Oura Ring 4

(Image credit: Oura)

According to Oura, Symptom Radar can identify symptoms at three levels: 'No Signs', 'Minor Signs', and 'Major Signs'.

This is done by analyzing metrics like resting heart rate, temperature trends, breathing rate and HRV to spot deviations from a user's baseline metrics.
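As a rough sketch of the baseline-deviation idea (this is an illustration of the general approach, not Oura's actual algorithm, and the metric names and 2-sigma cutoff are assumptions):

```python
def symptom_level(readings: dict[str, float],
                  baseline: dict[str, tuple[float, float]]) -> str:
    """Classify overnight metrics as 'No Signs', 'Minor Signs', or
    'Major Signs' by counting metrics that deviate noticeably
    (here, by more than 2 standard deviations) from the user's
    personal baseline of (mean, standard deviation) pairs."""
    deviations = sum(
        abs(readings[metric] - mean) > 2 * sd
        for metric, (mean, sd) in baseline.items()
    )
    if deviations == 0:
        return "No Signs"
    return "Minor Signs" if deviations == 1 else "Major Signs"
```

The key point is that everything is relative to the individual: a resting heart rate that is normal for one person can be a flag for another, which is why the ring needs weeks of data to establish a baseline before flagging anything.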

If potential symptoms are detected, Oura will "encourage members to turn on Rest Mode and take proactive steps toward rest and recovery," a press release explains.

Oura says development began with the TemPredict initiative in 2020. This led to its new algorithm, which was then tested through the Oura Labs beta testing program. According to Oura, the ring maintains 99% temperature accuracy compared to lab standards.

The feature is rolling out to Gen 3 rings and Oura Ring 4 by Monday, December 9.

You might also like...


  • Meta commits to spending $10 billion on a new Louisiana data center for AI
  • At 2,250 acres, it will be the company’s biggest ever campus
  • Louisiana will see new job creation and local infrastructure investments

Social networking giant Meta has confirmed plans to invest $10 billion in a new data center designed to support its artificial intelligence systems, and it’ll be Meta’s largest data center to date.

With northeast Louisiana selected for its construction, it’s set to spread out over 2,250 acres on a site formerly known as Franklin Farm, extending as much as one mile from front to back at its largest cross section. The data center will also include a four-million-square-foot technology campus.

Work is expected to begin this month, with construction continuing until 2030. Construction alone is anticipated to generate up to 5,000 jobs, with 500 direct jobs and 1,000 indirect jobs set to be created upon completion.

Meta to build its biggest data center

In an announcement by the Office of the Governor of the State of Louisiana, it was revealed that Meta’s investment could mark the “largest private capital investment announcement in the state’s history.”

Governor Jeff Landry also confirmed that Meta would match its energy consumption with 100% clean and renewable energy, and that the company would invest $200 million in local infrastructure improvements including renewed road and water systems.

Landry commented: “Meta’s investment establishes the region as an anchor in Louisiana’s rapidly expanding tech sector, revitalizes one of our state’s beautiful rural areas, and creates opportunities for Louisiana workers to fill high-paying jobs of the future.”

Speaking about the company’s decision to select northeast Louisiana for its next data center, Meta Director of Data Center Strategy Kevin Janda said: “Richland Parish in Louisiana is an outstanding location for Meta to call home for a number of reasons. It provides great access to infrastructure, a reliable grid, a business-friendly climate, and wonderful community partners that have helped us move this project forward.”

Other state and regional benefits include Meta’s pledged $1 million annual contribution to Entergy’s “The Power to Care” low-income ratepayer support program (and a further $1 million annual contribution by Entergy Louisiana), and investments in training efforts to support the construction and operational workforces.

You might also like


  • 7 digital and 5 analog audio inputs, hi-res streaming and HDMI ARC
  • 100 to 160 watts per channel
  • $8,000 USD / $11,200 CAD / €9,990 EUR / £9,995 GBP

McIntosh, maker of reassuringly expensive audiophile equipment (and, as of November 19, owned by Bose along with fellow luxury audio brand Sonus Faber), has officially announced its new MSA5500 Streaming Integrated Amplifier for all your audio needs.

I really do mean all your audio needs: with integrated streaming (including lossless) as well as turntable and HDMI ARC connectivity, it's an impressive do-everything home entertainment device.

The new MSA5500 has seven digital and five analog audio inputs to connect all your audio components as well as the aforementioned HDMI for your TV. The turntable connection is a moving magnet phono input.

McIntosh MSA5500: key features and pricing

The spec-sheet here is impressive. There's a next-gen eight-channel 32-bit DAC that McIntosh says is "audiophile grade", and the Bluetooth 5.0 streaming supports AAC, aptX HD and aptX Adaptive. The amp is Roon Ready too.

The MSA5500 is designed to be compact, but there's plenty of power to drive your speakers: 100 Watts per channel into 8 Ohm speakers or 160 Watts per channel into 4 Ohm speakers. The amp also features a collection of proprietary capitalized features including Power Guard, Sentry Monitor, Monogrammed Heatsinks, Home Theater PassThru, gold-plated Solid Cinch speaker binding posts, Headphone Crossfeed Director, and Power Control.
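Those two power ratings follow from basic electronics (P = V²/R), and a quick sketch shows what they imply about the amp's limits:

```python
def amp_demand(power_w: float, impedance_ohms: float) -> tuple[float, float]:
    """RMS voltage and current an amplifier must deliver to produce
    a given power into a given speaker impedance (P = V^2 / R)."""
    voltage = (power_w * impedance_ohms) ** 0.5
    return voltage, voltage / impedance_ohms

v8, i8 = amp_demand(100, 8)   # into 8-ohm speakers
v4, i4 = amp_demand(160, 4)   # into 4-ohm speakers
```

Halving the impedance at a fixed output voltage would double the power (100 W into 8 ohms would become 200 W into 4), so a 160 W rating into 4 ohms suggests the design is limited by how much current it can deliver, not by voltage – a common and perfectly respectable trade-off in integrated amps.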

The MSA5500 is designed to be your do-everything amp, but it's also intended to last a long time – so if you decide to upgrade to an even more powerful amp in the future, you can bypass the internal amp and use the MSA5500 as a streaming source and pre-amp.

The MSA5500 will be available starting in December through authorized McIntosh dealers with an MSRP of $8,000 USD / $11,200 CAD / €9,990 EUR / £9,995 (so around AU$12,430, give or take).

You might also like


  • Microsoft found out about a new FTC investigation the same way as the rest of us
  • The company is still waiting on formal legal action from the Commission
  • Redmond is asking the FTC to investigate itself

Microsoft has expressed its frustration about a recently revealed FTC investigation because it was never actually informed about the probe – the tech giant only found out about the FTC’s plans after reading the news.

Consequently, Redmond’s Deputy General Counsel Rima Alaily has asked the FTC Inspector General to “investigate whether FTC management improperly leaked confidential information about a potential antitrust investigation last week in violation of the agency’s ethics rules and rules of practice.”

The letter, shared publicly on Alaily’s LinkedIn, argues that the company only found out about the probe via Bloomberg’s coverage.

Microsoft asks FTC to investigate… itself?

The letter alleges that the FTC has “opened an antitrust investigation” spanning “cloud computing and software licensing bundles to cybersecurity offerings and artificial intelligence products,” however it’s still unclear where this information came from.

Alaily argues: “Ironically, almost a week after telling the press about an information demand issued to Microsoft, we still cannot even obtain from the FTC a copy of this document.”

Apart from the most recent leak of information, Microsoft slates the FTC for what the Commission itself calls a “steadily increasing” number of unauthorized disclosures, something that members of the US Senate and the US House of Representatives have also noted.

Clearly unhappy with how it learned about the FTC’s information demand “like the rest of the world” – via the Bloomberg story – Microsoft claims that it still has not received any formal legal process.

The letter concludes: “While this leak is an unfortunate development for Microsoft, it is more problematic for the integrity of the FTC’s processes.”

The Deputy General Counsel has also urged the FTC to share the findings of its investigation publicly and to hold itself accountable for any leakage.

TechRadar Pro has asked the FTC for comment, but we hadn’t received a response at the time of writing.

You might also like