Steve Thomas - IT Consultant


It's a new calendar month, and you know what that means: more movies on Paramount Plus. And this month's list of new additions to the Paramount Plus catalog is a doozy, with some absolute classics joining the line-up.

With so much to choose from, it's hard to pick just three favorites. But whether you're looking for a big old weepie, a beautifully acted take on a very modern obsession, or just a story about a man with a really big part (and we don't necessarily mean an acting part), Paramount Plus has you covered.

Her

Score: 95%
Rating: R
Run time: 1h 59m
Director: Spike Jonze

This is even more fun if you imagine it as a prequel to the Joker movie, because the star here is Joaquin Phoenix. But this is a very different role. Phoenix plays Theodore Twombly, a quiet, sensitive man who discovers who he thinks is 'Miss Right', voiced by Scarlett Johansson. There's just one problem: she's Samantha, an AI operating system, essentially Siri with a soul.

If you liked Michel Gondry’s Eternal Sunshine Of The Spotless Mind I think you'll love this. Empire magazine says it's "a sweet, smart, silly, serious film for our times, only set in the future," while RogerEbert.com said that it was "one of the most engaging and genuinely provocative movies you're likely to see this year".

45 Years

Score: 97%
Rating: R
Run time: 1h 33m
Director: Andrew Haigh

Charlotte Rampling is magnificent alongside Tom Courtenay in this incredibly poignant tale of lost love and missed opportunities. Rampling is Kate, a married woman whose life is thrown into upheaval when her husband's long-lost ex is finally discovered in sad circumstances. The revelation puts incredible strain on their relationship, and it's a definite box-of-tissues weepie thanks to the towering performances by both leads.

According to MovieFreak it's "a drama of profound majesty sure to be marveled at for many years to come," while the Associated Press was enchanted: "How many great movies could be written across the enigmatic, profound face of Charlotte Rampling? Hundreds? Thousands? At any rate, Andrew Haigh's 45 Years is one of them."

Boogie Nights

Score: 94%
Rating: R
Run time: 2h 32m
Director: Paul Thomas Anderson

It's 1977, and in the San Fernando Valley, Eddie (Mark Wahlberg) and his impressive attributes are discovered by porn producer Jack Horner (Burt Reynolds), who turns him into porn superstar Dirk Diggler. According to the Chicago Tribune the story is told as "a beautifully made survey of '70s excess, filtered through the trashy world of the burgeoning porno film industry in southern California".

The film was frequently compared to Quentin Tarantino's work, but as Vice suggests "the Tarantino comparison is ultimately less about technique than a shared joyful electricity of the filmmaking, the sense of an artist clearly high on the sheer act of making a movie." Entertainment Weekly was one of many publications that felt it really hit the spot. "Boogie Nights, an epic tale of porn, pleasure, and excess, offers a purer hit of exhilaration than any movie this year," it ejaculated.


  • Beta code hints that Gemini will get NotebookLM access
  • NotebookLM will allow Gemini to create AI podcasts from PDFs or videos
  • Linked to your mobile, AI Podcasts could help you learn about any subject around you

Delving deep into the code for the latest beta version of the Google Gemini app, it looks like Google’s NotebookLM AI podcast creation software might be coming to Google Gemini on your phone.

Android Authority has found the following lines of code in a beta version of Gemini:

<string name="assistant_zero_state_suggestions_create_podcast_prompt_query">Generate audio overview</string>
<string name="assistant_zero_state_suggestions_create_podcast_snippet_highlight">Generate audio</string>
<string name="assistant_zero_state_suggestions_create_podcast_snippet_simplified">overview</string>

As you can see, both “create_podcast” and “Generate audio overview” are visible, indicating that Gemini will have the ability to generate a podcast. Moreover, the NotebookLM section that creates the podcast is called an audio overview. Taken together, these two things would seem to indicate a role for Google’s NotebookLM in a future version of Gemini.

How would it work?

Of all the weird and wonderful uses of AI to arrive in 2024, Google’s NotebookLM remains one of the most captivating. NotebookLM is a set of AI tools that help you learn any subject. You feed in your source material as a text file, PDF, or video, and it helps you organize that material. One of the ways it does this is through audio overviews.

An audio overview is essentially an audio file that takes the form of a podcast show between two hosts who are discussing whatever subject you’ve fed it via PDFs, web pages, or a YouTube video. Listening to two people discuss a subject is a great way to help you learn about it.

What makes NotebookLM great is how realistic the podcast sounds. It’s very hard to believe you’re not listening to two real people discussing the subject at hand.

If Gemini gives you the ability to create audio podcasts from data sources you feed it, then it’s going to be a great way to help you learn about new subjects.

You can imagine the situation where you upload a PDF to Gemini and then ask it, “Hey, can you make me a podcast about this PDF?” Combine that with Google Lens, and you could get Gemini to generate podcasts about things you are looking at. Just imagine taking a trip around a famous building like the Vatican and having Gemini produce a podcast that acts as a guided tour.

The potential for NotebookLM to integrate with other apps or be useful in new situations is almost unlimited, and we’d expect to see Google coming up with new and interesting ways to use it in the very near future.



  • A hacker with the alias "Nam3L3ss" started leaking data from seven companies
  • The companies include Nokia, Bank of America, and others
  • The data came from the MOVEit breach that happened more than a year ago

Hackers are still leaking sensitive information stolen via the MOVEit flaw, more than a year after it was first disclosed, experts have warned.

A threat actor with the alias “Nam3L3ss” recently started leaking sensitive data from seven major companies to BreachForums: Xerox (42,735 records), Koch (237,487), Nokia (94,253), Bank of America (288,297), Bridgewater (2,141), Morgan Stanley (32,861), and JLL (62,349), The Register reports.

The publication added that security researchers analyzed the data dump and confirmed its authenticity; the leaked information includes people’s full names, phone numbers, email addresses, work addresses, employee badge numbers, job titles, and usernames.


MOVEit files keep leaking

This is the type of information cybercriminals like most (apart from passwords and banking data, obviously), since it allows them to run phishing, identity theft, and similar attacks that can lead to ransomware, wire fraud, and more.

"This data is a goldmine for social engineering," Zack Ganot, chief strategy officer for Atlas Privacy said. "Knowing exactly what employee sits on which team, who they report to, what their badge number is, what building they work in, their organizational email and phone number – this is some wild stuff for an attacker looking to exploit an org."

MOVEit is a managed file transfer (MFT) tool used by large companies to securely share sensitive files. In late May 2023, a critical flaw was discovered in the tool, and it was successfully exploited by the Russian ransomware group Cl0p, which used it to exfiltrate sensitive data from hundreds of organizations running MOVEit.

Among the victims were numerous high-profile organizations across various sectors, including US government entities (Department of Energy, Office of Personnel Management), educational institutions (Johns Hopkins University), private enterprises (Shell, British Airways, Ernst & Young), and many others. In total over 62 million individuals were directly affected, with the true number likely higher.


  • Kaspersky found a new campaign, using malicious JavaScript to deploy RATs
  • The RATs are used to deploy two infostealers
  • Among the victims are people and businesses in Russia

Hackers are targeting people and businesses in Russia with malicious JavaScript, in order to install backdoors on their devices. This is according to a new report from cybersecurity researchers Kaspersky, who named the campaign “Horns&Hooves”.

As per the researchers, Horns&Hooves started in March last year, and has since infected roughly 1,000 endpoints.

The campaign starts with a phishing email, in which the attackers impersonate individuals and businesses, and send emails that mimic requests and bids from potential customers, or partners.

Actively developed campaign

The emails come with various attachments, among which is the JavaScript payload. This payload delivers two Remote Access Trojans (RATs): NetSupport RAT and BurnsRAT. In turn, these RATs are used to deploy the final payload: either Rhadamanthys or Meduza.

These two are known infostealers. Rhadamanthys has been offered on the dark web as a service since late 2022, enabling crooks to steal a vast range of information from the target device, from system details and passwords to browsing data. Rhadamanthys has specialized tools for stealing cryptocurrency credentials, with support for over 30 different wallets.

Meduza, on the other hand, is part of the growing threat landscape for personal and business cybersecurity. Like Rhadamanthys, it steals user credentials and other sensitive information, including login credentials for various services and applications. However, Meduza operates with a more focused scope, aiming to evade detection through various obfuscation and anti-analysis techniques​.

Horns&Hooves is an actively developed campaign, the researchers say, stressing that the code has been revamped and upgraded numerous times. While attribution proved difficult, there is reason to believe that TA569 is behind the attacks. This group, according to The Hacker News, is also known as Mustard Tempest or Gold Prelude, and is the one running the SocGholish malware.

The same publication also stated that TA569 was seen acting as an initial access broker for affiliates deploying the WastedLocker ransomware strain.

Via The Hacker News


It’s getting harder for organizations to identify the extent of the damage incurred from a cyberattack – after the initial shock wave of panic, anyway. You don’t want it to be difficult to trace the origins of an attack when breaches are as rampant as they are today. Data breaches are more of an eventuality than a possibility.

Ask CISOs how long it takes them to identify the blast radius of a breach, and the average response you’ll get is, at best, ‘hours.’ But ‘hours’ isn’t fast enough today. A single hour is all it takes for an attacker to pivot across infrastructure to access highly sensitive resources.

If the repeated Internet Archive breaches taught us anything, it’s how damaging exposure of the wrong information can be. Hackers used exposed access tokens from previous incidents to penetrate the organization’s Zendesk implementation. These API keys, left static since the original breach, provided hackers with easy access to over 800,000 support tickets. To add insult to injury, the hackers started replying to old support tickets criticizing the Internet Archive for failing to rotate these keys.

Unfortunately, the frequency of these incidents is a symptom of how complex IT infrastructure has become. Finding out who breached your data, where, and how is often headache-inducing. This largely stems from how fragmented identity silos have become, and the pile of identities needing management just keeps growing. Access relationships between resources are fragmented too. This fragmentation of access and security models makes organizations vulnerable to human error.

What would fix this? A new cybersecurity paradigm – one without static credentials, eliminating the attack surface targeted by threat actors. Companies can further harden their security by shifting their access model from role-based access control to attribute-based access control.

The complexity of identity management

Microsoft’s recent report identified over 600 million identity attacks in its 2024 fiscal year alone. If you’re wondering why that number is so high, it’s because humans make it easy. We leave credentials like passwords, browser cookies, and API keys lying around in the most obvious places. Further, long-lived, stale privileges allow a bad actor to pivot from their initial breach to other destinations on a network.

This makes it only a matter of time before a user inadvertently reveals too much information or prior credentials. Hackers are ready to pounce on these mistakes. We saw this happen with the initial Internet Archive breach, where an exposed GitLab configuration file contained an authentication token that enabled hackers to download the Internet Archive’s source code, which included additional credentials.

It also doesn’t help that access is often managed in completely different ways across Kubernetes clusters, cloud APIs, IoT devices, databases, etc. The silos emerging from this approach obstruct the ability to revoke access to compromised data, or to figure out who had access to what data in the first place.

If we want to begin to thwart cyberattacks, then step one to reducing the attack surface and blast radius has to be to remove all static credentials like passwords, as well as standing privileges. Our industry needs to shift to a mindset of securing identities cryptographically based on physical-world attributes that cannot be stolen (like biometric authentication). Additionally, access should only ever be enforced based on ephemeral privileges that are granted only for the period of time that work needs to be completed. Above all, companies shouldn’t treat identity management, policy governance, and access control as distinct endeavors. They are all interconnected.
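The ephemeral-privilege idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's API: a credential is minted with a fixed time-to-live and is simply refused once the work window closes.

```python
import secrets
import time

# Illustrative sketch of ephemeral credentials: a token is valid only for
# the period the work needs, instead of living forever like a static key.
# All function and field names here are invented for the example.

def issue_credential(ttl_seconds: int) -> dict:
    """Mint a short-lived credential that expires after ttl_seconds."""
    return {
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(credential: dict) -> bool:
    """A credential is honored only before its expiry time."""
    return time.time() < credential["expires_at"]

cred = issue_credential(ttl_seconds=900)  # valid for a 15-minute task
print(is_valid(cred))  # True while the work window is open
```

Once the TTL lapses, nothing needs to be revoked or rotated; the credential is dead weight to any attacker who later finds it.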

Not everyone needs access, and they don’t need it anywhere, anytime

Traditionally, a lot of emphasis has been placed on assigning permissions to users based on their role within an organization – role-based access control (RBAC). For cybersecurity models to modernize, however, there’s more that companies can do to harden access controls, and one way is to ensure that resource access only ever takes place in an appropriate context.

Attribute-based access control (ABAC) is how we get there, effectively setting very granular requirements for when someone can access a resource.

Imagine you have a database table housing sensitive data. Yes, you can grant access to employees with a certain job title – “Senior IT manager” – but there are other factors you should weigh for whether or not someone should gain access:

Where is the employee? Are they in the office? Or are they in Hawaii?

What device are they on? Are they using a work laptop, a phone, a tablet, or something else?

What time is it? Do they really need access to a resource when it’s in production?

The goal of this mindset is to give organizations the freedom to say things like, “all senior programmers trying to access database table X have to be in Milwaukee between 1pm and 3pm.” You’ve now effectively shut down the ability for anyone to access this database if they don’t fulfill these select requirements. No more access for the random guy drinking a slurpee in Hawaii.

Everyone should be able to govern on attributes when granting access to users, as opposed to granting access to anyone inside ‘the network.’ The mindset should be ‘locked by default’. That’s imperative to reducing the attack surface.
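The Milwaukee example above can be sketched as a simple policy check. The attribute names and the policy itself are hypothetical; the point is that every contextual attribute must match, not merely the role.

```python
from datetime import time as clock

# Illustrative ABAC sketch: a request is denied unless role, location,
# device, and time-of-day all satisfy the policy. "Locked by default."
POLICY = {
    "role": "senior programmer",
    "location": "Milwaukee",
    "device": "work laptop",
    "window": (clock(13, 0), clock(15, 0)),  # 1pm to 3pm
}

def can_access(request: dict) -> bool:
    start, end = POLICY["window"]
    return (
        request.get("role") == POLICY["role"]
        and request.get("location") == POLICY["location"]
        and request.get("device") == POLICY["device"]
        and start <= request["time"] <= end
    )

# Right person, right place, right device, right time: allowed.
print(can_access({
    "role": "senior programmer",
    "location": "Milwaukee",
    "device": "work laptop",
    "time": clock(14, 30),
}))  # True

# Same role, but requesting from Hawaii: denied.
print(can_access({
    "role": "senior programmer",
    "location": "Hawaii",
    "device": "work laptop",
    "time": clock(14, 30),
}))  # False
```

A real deployment would pull these attributes from trusted signals (device posture, network location, directory data) rather than from the request itself, but the shape of the decision is the same.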


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

By now, many enterprises are aware that integrating AI and generative AI into their work processes can streamline operations, improve efficiencies, and save them time and money. Some are on their way to meeting this reality; according to Deloitte’s recent generative AI report, 18-36% of organizations say they are already achieving the benefits they expect from the use of generative AI to a “large” or “very large” extent, depending on the type of benefit being pursued.

However, despite the clear advantages of leveraging this ground-breaking technology, integrating fast-evolving AI into business operations also presents several challenges. As AI continues to develop, so do the obstacles facing business leaders adopting these technologies. These challenges must be overcome for organizations to fully harness the benefits of AI and generative AI, but business leaders can rest assured that there are key strategies they can implement to enable successful integration.

Don’t do AI for AI’s sake

A common pitfall for many AI projects is the absence of a coherent strategy and defined objectives. An AI project will fail to deliver, even with heavy investment, if businesses don’t take the time to align it with their company’s goals and define exactly how it will add value – and how much value – relative to the cost of implementation. After all, there’s no point rolling out a project with the goal of adding five million pounds to your top line, if it’s going to cost you ten million to get there.

Whether you aim to deploy in your own data center or in the cloud, AI projects can be expensive, especially when you factor in both the infrastructure needed for deployment and the services needed to make it all work. Organizations therefore need to be very clear as to why they’re doing what they’re doing and what the return on investment will be. Business leaders should resist the urge to jump on the AI bandwagon and instead pursue thoughtfully conceived projects that align with the overarching goals of their organization.

Part of this means exercising caution against “AI washing” – the exaggerated promotion of overhyped AI solutions – and concentrating on pragmatic applications that deliver genuine value. These may well be smaller, more niche use cases, as opposed to massive process overhauls. For example, a construction company that counts health and safety as a key business priority might install AI-enabled cameras onsite to monitor workers’ protective equipment throughout the day. If someone isn’t wearing the right gear, the AI flags it to a supervisor, who can step in to provide it, ensuring optimum health and safety levels at all times. In this way, the company is leveraging AI to deliver tangible, measurable business results that are right for it.

To aid them in finding these targeted use cases, organizations should look for partners who can help them to analyze their business from the outside in and identify the areas where AI can really make a difference for them.

Well-ordered data is the backbone of successful AI

Once they have defined their AI strategy, organizations then need to consider how they can successfully implement it. AI is extremely data-heavy, particularly when it comes to some of the latest generative AI use cases being explored. Businesses must therefore be able to locate all their data assets, consolidate and clean their data, and streamline their repositories, to render them suitable for AI applications. This demands a comprehensive understanding of both their data sources and storage services.

To create an AI chatbot, for example, a company needs to train it on a plethora of disparate data sources, from user manuals to previous customer call conversations. Only then can it be programmed, using that existing data, to respond accurately to common questions.

Get the right skills onboard and pay attention to regulations

In order to successfully execute AI projects, enterprises must find or recruit the necessary skills. With AI expertise currently in extremely high demand, those with the relevant skillset can be both hard to find and expensive, so time for this should be factored in.

They must also stay abreast of evolving AI regulations to ensure compliance. For example, the European Union’s landmark AI Act recently came into effect, regulating the development, use, and application of AI for developers and deployers alike. This significant step highlights the importance placed on the safe and ethical development of AI technologies within Europe – a sentiment that we are also seeing come into force across the globe.

With these regulations, there are numerous elements to consider. If organizations are feeding data into AI models, they must, for example, ensure that they’ve obtained the right consents to use it, and that it is anonymized as required. There are also restrictions on data leaving the premises of whoever gathered it – to move it to the cloud, for example.

Back to basics

If the right strategy, skills and data sources are not in place, AI projects are likely to fail. However – somewhat reassuringly – this is not a new challenge; these are the same obstacles that have contributed to the failure of IT projects since such projects began. Yes, the possibilities that AI presents are different, but the fundamental elements that enterprises must think through when implementing these projects remain the same. It’s a fact that many business and IT leaders should take solace in when embarking upon their AI journeys.

If enterprises can do their homework – with support from the right partners – to ensure that the AI initiatives they look to integrate will contribute tangible value to their business, and then take the necessary steps for a rewarding implementation, they will set themselves up for success. Successful AI integration is there for the taking – organizations first just need to take the time to really get it right.




  • GitHub says its AI-generated code is more readable, reliable and maintainable
  • The test focused on a highly repetitive task – AI’s ultimate role
  • Only 243 developers took part in the study

Software developer Dan Cîmpianu has criticized the quality of AI-generated code in a blog post targeted at GitHub’s claims about its Copilot AI tool.

More specifically, the Romanian developer slated the statistical accuracy and experimental design used by GitHub in a recent study, where it claimed that its Copilot-assisted code was “significantly more functional, readable, reliable, maintainable, and concise.”

However, the study focused on writing API endpoints for a web server, or Create, Read, Update and Delete actions (CRUDs), which Cîmpianu described as “one of the most boring, repetitive, uninspired, and cognitively unchallenged aspects of development.”

Is GitHub’s AI code actually that good?

The study compared GitHub’s OpenAI-backed AI-generated code with that of over 200 experienced developers, and found the AI code to perform better across multiple metrics.

However, Cîmpianu has criticized GitHub for using percentages to denote differences without actually providing the baseline metrics for comparison, which could artificially make the percentage values look higher than they are.
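A quick worked example shows why a percentage without its baseline can mislead. The numbers below are invented for illustration and are not from GitHub's study.

```python
# Hypothetical figures: a large-sounding relative improvement can
# correspond to a tiny absolute difference if the baseline is small.

baseline_errors = 2   # invented: errors per 100 lines, human-written code
assisted_errors = 1   # invented: errors per 100 lines, AI-assisted code

relative_improvement = (baseline_errors - assisted_errors) / baseline_errors
absolute_difference = baseline_errors - assisted_errors

print(f"{relative_improvement:.0%} fewer errors")   # "50% fewer errors"
print(f"absolute difference: {absolute_difference} error per 100 lines")
```

"50% fewer errors" sounds dramatic, but without knowing the baseline was 2, a reader cannot tell it means a single error per 100 lines, which is exactly the objection Cîmpianu raises.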

GitHub’s study also defines errors as, “inconsistent naming, unclear identifiers, excessive line length, excessive whitespace, missing documentation, repeated code, excessive branching or loop depth, insufficient separation of functionality, and variable complexity,” meaning that functional bugs produced by the code were not counted in its error statistics.

Another criticism of the study is that, despite GitHub describing itself as a “home to 1 billion developers,” it uses a sample size of only 243 developers.

Cîmpianu concluded: “This does not seem to be even attempting to [be] aimed towards developers, but rather has the perfume of marketing, catered to the C-suites with buying power.”

Moreover, the developer also highlighted the skill required to write strong code, stating that AI should be seen as a supplement and an aid rather than a substitute for continued training.


  • LogoFAIL, image parsing vulnerabilities on Linux and Windows, are being actively abused
  • Researchers say crooks are using it to install Bootkitty, the first-ever Linux UEFI bootkit
  • Bootkitty is still in early development and can only target some Ubuntu distributions

LogoFAIL, a string of vulnerabilities that allow threat actors to install malware at boot level, is now actively being abused in the wild. This is according to a new report from cybersecurity researchers Binarly.

Discovered roughly a year ago, LogoFAIL is a group of vulnerabilities that allow malicious actors to replace the logo image displayed on Windows and Linux devices during the boot process.

The replaced images can contain malicious code that the device will run, and since the code is installed on boot, before the OS or any antivirus programs, most cybersecurity programs cannot detect or remove it.

Purely theoretical

In fact, even reinstalling the operating system or replacing the hard drive will not help. Malware installed this way is generally called a UEFI bootkit, since it targets the Unified Extensible Firmware Interface (UEFI), which is responsible for initializing hardware and launching the operating system.

When it was first discovered, LogoFAIL was deemed purely theoretical, as no active exploits or exploit code had been seen in the wild. However, Binarly now says that things have changed, and that it has observed LogoFAIL being used to deploy Bootkitty.

Bootkitty was first observed, and reported, late last week. It is the first malware of its kind to target Linux devices. Spotted by researchers from ESET, the malware was described as being at an early stage of development.

Bootkitty relies on a self-signed certificate, which means it won’t run on systems with Secure Boot enabled; as a result, it can only target some Ubuntu distributions.

Furthermore, its use of hardcoded byte patterns, and the fact that the best patterns for covering multiple kernel or GRUB versions were not used, means the bootkit cannot be widely distributed. Finally, Bootkitty comes with many unused functions and lacks kernel-version checks, which often results in system crashes.

In any case, the finding marks an important moment in the development and destructive potential of UEFI bootkits.

Via Ars Technica
